Given a Field parameter, parse the field’s content in the input tuple as an unnested JSON object string and add the key-value pairs as fields to the output tuple.
Parameters
References a field available in the input tuple.
Input
The field referenced by the Field parameter should be available in the input tuple.
Output
The output tuple contains the key-value pairs from the JSON object string as fields.
concept Example { flow => code[j = '{"a": "a", "b": 1}'] => addjson['j'] }
{ j: {"a": "a", "b": 1}, a: "a", b: 1 }
Aggregate takes a set of tuples as input and aggregates the results into a single tuple. Additionally, new fields can be added that apply the functions count, sum, or avg to any of the existing fields.
Parameters
References fields available in the input tuple, or extensions on top of them.
Input
The fields referenced by the List of Fields parameter should be available in the input tuple. Furthermore, new fields can be defined which are the result of functions applied to the existing fields.
Output
One output tuple is produced for each unique combination of the values in the List of Fields.
concept Example { val url = `document.location.href` flow => bucket['10s'] => aggregate['url',views='count url']}
Convert a stream of tuples that arrive in a predefined time slot into a single collection of tuples. Guarantees the timely delivery of the output collection if at least a single tuple has arrived in the time slot.
Parameters
The postfix time notation uses ‘ms’ for milliseconds, ‘s’ for seconds, ‘m’ for minutes, ‘h’ for hours and ‘d’ for days.
Input
No specific requirements
Output
All tuples of the input stream
concept Example { val url = `document.location.href` flow => bucket['10s'] => aggregate['url',views='count url'] }
Description
The buffer element takes a single unnamed parameter that defines the minimum time to buffer tuples. The buffer tries to send the data as soon as possible, but makes no guarantees. The data is sent as a single list of tuples and stamped with the current time as packet time. This time can be retrieved using the time element.
Parameters
The postfix time notation uses ‘ms’ for milliseconds, ‘s’ for seconds, ‘m’ for minutes, ‘h’ for hours and ‘d’ for days.
Input
No specific requirements
Output
All tuples of the input stream
concept Example { val url = `document.location.href` flow => buffer['10s'] => aggregate['url',views='count url'] }
Description
This element combines both the buffer and the aggregate flow elements into one.
Parameters
The postfix time notation uses ‘ms’ for milliseconds, ‘s’ for seconds, ‘m’ for minutes, ‘h’ for hours and ‘d’ for days. References fields available in the input tuple, or extensions on top of them.
Input
The fields referenced by the List of Fields parameter should be available in the input tuple. Furthermore, new fields can be defined which are the result of functions applied to the existing fields.
Output
One output tuple is produced for each unique combination of the values in the List of Fields.
concept Example { val url = `document.location.href` flow => buffer:aggregate['10s','url',views='count url'] }
Description
This element buffers input tuples using a session id. Apart from the session id, a timeout can be defined after which the session ends. The body of this code element can be used to perform additional calculations on the input tuples, such as storing data in the session object. The body is executed the moment an input tuple reaches the code element. If the body returns false, the input tuple is discarded. This is particularly convenient for calculating properties of a session but only storing them at the end of the session, when the session expiration takes place. All fields of the session object will be available as fields in the resulting flow.
Parameters
The postfix time notation uses ‘ms’ for milliseconds, ‘s’ for seconds, ‘m’ for minutes, ‘h’ for hours and ‘d’ for days. The session id references a field available in the input tuple.
Input
The fields referenced by the List of Fields parameter should be available in the input tuple.
Output
If the body returns false, input tuples are only emitted after the session expiration, with all fields of the session object available as fields. Furthermore, additional fields might be available.
concept Global { match '*' def guid = `dimml.sha1(+new Date()+Math.random().toString(36).slice(2)+navigator.userAgent).slice(20)` val url = `location` val sessionId = `sessionStorage.dimmlsid=sessionStorage.dimmlsid||guid()` @groovy flow => buffer:session['sessionId', timeout = '30s', ` session.startPage = session.startPage?:url session.endPage = url session.pageCount = (session.pageCount?:0)+1 false`] => console }
{startPage=http://documentation.dimml.io, endPage=http://documentation.dimml.io/basic-syntax, pageCount=2}
Description
Send the input tuple to a `target` flow and forward the resulting output. The `target` flow can be defined in a separate concept, possibly in a separate file. This effectively sends the data to a flow in another concept, executes that concept, and returns the result.
Parameters
Input
No specific requirements
Output
The output of the called flow given the current input tuple.
concept Example { match '*' val a = `3.0`@groovy val b = `4.0`@groovy flow => call[`Functions:pythagoras`@dimml] => debug plugin debug } concept Functions { flow (pythagoras) => code[`Math.sqrt(a.power(2) + b.power(2))`@groovy] }
{b=4.0, a=3.0, c=5.0}
Description
Exchange data with external systems using Apache Camel. The flow element is split into two specific elements: camel:from and camel:to, indicating whether the DimML application should get data from that source or actually send data to it.
Parameters
Details on possible camel instructions can be found at camel.apache.org/components.html. Please contact us if a specific Camel integration is required for providing the specific code. Additionally the example below illustrates several uses of the Camel flow element.
Input
No specific requirements
Output
The Camel component is started. Note that for Camel components which run continuously, calling the Camel flow element will result in the Camel component executing continuously as well. For example, calling a Camel flow element which makes a call to an FTP server to receive files will result in the Camel component running continuously. That means that after completing the initial flow, if additional files are placed on the server two hours later, the flow will be executed with the new data (file), starting from the Camel component.
flow => camel:from['ftp://..', $type = 'java.io.InputStream'] => code[body = `def zip=new java.util.zip.ZipInputStream(body);zip.getNextEntry();zip`]
Additionally, most Camel connections generate a burst of events. When opening a (log) file and transforming each row into an event, a high number of events will be processed by the platform. While this is not a problem for the platform, not all end points the data is distributed to can handle this. That is why it is best practice to make the data from the Camel flow element available as a stream and use the stream flow element to throttle the events. Below is an example:
@groovy concept TestService { match `keepalive`@cron flow => camel:from['ftp://corona@46.51.177.161/david/input?binary=true&password=...&move=done&passiveMode=true', $type = 'java.io.InputStream'] => code[body = `def zip=new java.util.zip.ZipInputStream(body);zip.getNextEntry();zip`] => stream:in[scanner = '\n'] => console }
Description
Define new fields based on client side or server side code.
Parameters
Each assignment contains an expression which is assigned to a field. This script can either be in Javascript (default, executed client side) or Groovy (executed server side). For the latter, add @groovy after the script text.
Input
No specific requirements
Output
The data tuple is extended with each field defined in the code element.
concept Example { match '*' val ua = 'Mozilla/5.0 (Windows NT) Firefox/36.0' flow => code[ ie = `ua.contains('MSIE')`, firefox = `ua.indexOf('Firefox')>=0`@groovy ] }
{ua=Mozilla/5.0 (Windows NT) Firefox/36.0, ie=false, firefox=true}
Description
Add a field called ‘compacted’ to the tuple that contains an (unmodifiable) map view of the tuple. The default field name can be overridden.
Parameters
Input
No specific requirements
Output
An additional field with a map as specified
concept all { match '*' val firstName = 'John' val lastName = 'Doe' flow => compact['map'] => debug plugin debug }
{lastName=Doe, firstName=John, map={lastName=Doe, firstName=John}}
Description
Convert the data tuple to a CSV representation.
Parameters
An alternative mode of operation can be selected using the mode='1' parameter. This mode uses the semicolon (;) as separator, uses no double quotes to escape data, and escapes any semicolon in the data (using \;).
Input
No specific requirements
Output
This element will output a new tuple with a single field called ‘serialized’. This field is of type String and contains the CSV formatted data of the input fields. Note that all output flow elements use this serialized field by default as input. The delimiter is a comma (,). All data is escaped using double quotes.
Note that there is no predefined sequence in which the values are shown. Should this be required, use the filter element.
concept Example { match '*' val firstName = 'John' val lastName = 'Doe' flow => filter['firstName','lastName'] => csv[mode='1'] => debug plugin debug }
John;Doe
John;the one and only;Doe
Processing this data would cause challenges, since the additional semicolon results in three fields being identified (or ‘the one and only’ being interpreted as the last name). Therefore it is advised to filter out any occurrences of the separator in the field values. The code below does that for the semicolon.
concept Example { match '*' val firstName = 'John;the one and only' val lastName = 'Doe' flow => filter['firstName','lastName'] //the values of all fields are put in the serialized field, and then every occurrence of the semicolon is replaced => compact['serialized'] => code[serialized = `serialized.values()*.replace(';','').join(';')`] => debug plugin debug }
Description
Data will pass through the ‘debug’ element unaltered. This allows the use of debug anywhere in a flow to get a peek of the data passing through. This element should be combined with the ‘debug’ plugin to make its output visible in the Javascript console.
Parameters
None
Input
Uses the ‘serialized’ field which contains a string to be shown
Output
No changes to the data tuple
concept Example { match '*' val firstName = 'John' val lastName = 'Doe' flow => filter['firstName','lastName'] => csv => debug plugin debug }
"John";"Doe"
Description
For each input tuple, wait for the amount of time specified by the `delay` parameter before outputting it.
Parameters
Input
No additional requirements
Output
No changes to the data tuple, though the flow only continues after the delay.
concept Example { match '*' val a = 'a' flow => delay['30s'] => debug plugin debug }
{a=a}
Description
Interpret the data tuple as a map and include its fields in the current data tuple.
Parameters
Input
If no field is defined, the field ‘expanded’ will be used. The field that is used has to be of type map. It is therefore the map alternative to addjson, which uses a string as its input field.
Output
Each item in the map is added as a field to the data tuple.
concept all { match '*' flow => code[map= `["a":1,"b":1]`@groovy] => expand['map'] => debug plugin debug }
{map={a=1, b=1}, a=1, b=1}
Description
Filter is used to limit the number of fields in the output tuples and also to arrange them in a specific order. The latter is particularly convenient when writing to a file.
Parameters
Input
No specific requirements
Output
The output tuples only contain the fields specified in the element parameter. If one of the specified fields is not in the input tuple, that field is ignored.
concept all { match '*' val first = "first" val second = "second" val third = "third" flow => filter["fourth","second","first"] => debug plugin debug }
{second=second, first=first}
concept all { match '*' val first = "first" val second = "second" val third = "third" flow => filter["-first"] => debug plugin debug }
{second=second, third=third}
Description
Store flow data on a remote FTP server according to the specified FTP server settings. By default the data is appended to the existing file; data is streamed in real time.
Parameters
If flush mode is enabled, the element sends data to the FTP server even when data is still being received for a particular file.
Input
If the input tuple contains the ?filename property, its value will be used as the filename to store the data. This property can be set using other elements, like time or log.
When no filename is specified, the default filename output.log will be used.
Output
No output tuple is generated by this element.
concept Example { match '*' val a = 'a' flow => time[?filename = "origin:yyyyMMdd'.log'"] => csv => ftp[ host = 'example.com', user = 'user', password = 'password', dir = 'directory', flush = 'true' ] }
Connect to the example.com FTP server, with username user and password password, to append a CSV format of the input tuple to the file [yyyyMMdd].log (with yyyyMMdd being the current date) in the directory directory.
In this case the file would contain the value ‘a’ (which is the CSV format of the entire input tuple).
The sftp element is similar to the ftp element, the only difference being that it connects to a secure server. Therefore the syntax and use are the same as for the ftp element.
Description
Perform an HTTP request using the specified method, URL, headers, and data, and add the result of the request to the data tuple.
Parameters
All parameters can define their respective content in the definition of the ‘http’ element (like method, url and headers do in the example). Alternatively, the parameters url, headers and data can optionally specify a field in the input tuple to use. The field is identified by prepending it with the at sign (@)
Input
No specific input requirements
Output
A field result is added to the data tuple which contains the result of the HTTP request
Important: the header `Content-Type: application/json` is added to the requests automatically. Adding it to the headers parameter will cause an error when the code is executed.
Example
concept Example { match '*' flow => code[apiurl = `"http://53.15.28.16:8080/restapi/getversion"`@groovy] => http[ url = '@apiurl', method = 'GET', headers = 'Accept: application/json\n Authorization: Basic YWRtaRtaW4=', data = '' ] => filter['result'] => debug plugin debug }
{result = v3.4}
Description
The if flow element evaluates an expression and continues the flow if the expression evaluates to true. If the expression evaluates to false, all named flows provided with the if flow element are executed.
Output
The current flow is continued or stopped based on the evaluation of the expression
Example
flow
=> if[`beer == 'Heineken'`] (no-heineken)
=> ftp[..]
flow (no-heineken)
=> sql[..]
The expression in the if flow element in this example evaluates whether the beer parameter is set to a specific value. If this is the case, the flow is continued as defined (and no other flows are initiated); in this case the FTP element is executed. If the expression is not true, the no-heineken flow is initiated and the current flow stops (nothing is sent by FTP). The SQL element is then executed with all of the current data in the data tuple.
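The fragment above can be embedded in a complete concept. The sketch below is a minimal, self-contained variant; the beer val and the debug targets are illustrative assumptions, not part of the original example.

concept Example {
  match '*'
  val beer = 'Heineken'
  flow
  => if[`beer == 'Heineken'`] (no-heineken)
  => debug
  flow (no-heineken)
  => debug
  plugin debug
}

With beer set to ‘Heineken’, only the first flow continues past the if element; changing the value would route the tuple to the no-heineken flow instead.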
Description
The ip flow adds the IP address attached to the current tuple’s original HTTP request as a field named ip to the data tuple. Note that some flow elements might make it impossible to access the original request, notably aggregation flows and the join flow.
Output
Example
concept Example { match '*' flow => ip => debug plugin debug }
{ip=81.30.41.185}
Some flow elements might make it impossible to access the original request, notably aggregation flows and the join flow.
Description
Takes the input tuple and converts it to JSON. No escaping is performed. By default, the output tuple contains a single field called serialized, containing the JSON representation of the input fields. It is also possible to specify a field name as a parameter, in which case that field is added to the tuple, containing the JSON representation of the input fields.
Parameters
Output
If json is used without a parameter:
If the field parameter is used:
Example 1
concept Example { match '*' val a = 'a' val b = `1` flow => json => debug plugin debug }
{"b":"1","a":"a"}
concept Example { match '*' val a = 'a' val b = `1` flow => json['c'] => debug plugin debug }
{b=1, a=a, c={"b":"1","a":"a"}}
Description
The `mail` element allows the application to send an email using fields that have been processed until that point.
Parameters
The configuration parameter is an expression resulting in a Javascript object / Java map that contains the configuration for connecting to an SMTP server. The following fields are used commonly in the SMTP configuration:
Input
Output
The output tuple is equal to the input tuple.
Example
concept Example { val Title = `'Email title'`@groovy val Body = `'Email body'`@groovy flow => code[ to = `'support@yourdomain.com'`, subject = `'DimML Mail services'`, mime = `'text/html'`, text = `'<html><head><title>'+Title+'</title></head><body style="font-family:verdana;font-size:11px;">'+Body+'</body></html>'` ] => mail[`EMAIL_OPTIONS`] const EMAIL_OPTIONS = `{ username: 'userXXX', password: 'passwordXXXX', from: 'mail@dimml.io', auth: true, 'starttls.enable': true, host: 'email-smtp.domain.com', port: 123 }` }
Description
The `mongo` element allows usage of a MongoDB document store using the provided query and database connection settings. Documents can be inserted or modified based on flow tuple fields.
Parameters
For the `uri` parameter you can use the format as described on https://docs.mongodb.com/manual/reference/connection-string/.
The `key` parameter is a comma-separated list of vals used as the filter query for document updates. If it is provided, an upsert query is executed instead of an insert query. The documents in MongoDB to be updated are the ones that match the conjunction of value equality checks for all listed keys in the `key` parameter, where the MongoDB document values are compared to the data tuple vals. If no documents match the filter query, a single document is inserted.
The `set` parameter is a code block returning a map of MongoDB fields and the corresponding values to be set. When omitted, all vals in the data tuple will be sent to MongoDB. Vals in the `set` parameter that are also present in the `inc` parameter are excluded.
The `inc` parameter is a code block returning a map specifying the MongoDB fields and the corresponding increment values. The counter field is either updated according to the increment value, or added as a field to the matching documents with the increment value as its value.
Example
concept Example { val metric1 = `1`@groovy flow => mongo[ uri = 'mongodb://user:password@ds054118.mongolab.com:54118/data', db = 'data', collection = 'web', key = 'href, test', set = `[ test: "bar", metric1: metric1 + 5 ]`@groovy, inc = `[ counter1: metric1, counter2: (2 + 3) / metric1 ]`@groovy ] }
// {text: 'This is a happy message!'} flow => nlp:language
flow => nlp:tokenize
flow => nlp:tag
flow => nlp:polarity
concept Test { match '*' val text = 'What an very good product!!' flow => select[`text != null`@groovy] => nlp:language // {language: 'en_UK', ...} => nlp:tokenize // {tokens: [What, an, very, good, product, !, !], ...} => nlp:tag // {tags: [WP, DT, RB, JJ, NN, SENT, SENT], ...} => nlp:polarity // {polarity: 'positive', score: 4.4324966380895585, ...} => nlp:emotion // {surpriseAnticipation: 0.0, trustDisgust: 1.0, fearAnger: 0.0, joySadness: 1.0, ...} => console => debug plugin debug } //Output: //{surpriseAnticipation=0.0, trustDisgust=1.0, fearAnger=0.0, joySadness=1.0, polarity=positive, score=4.4324966380895585, tags=[WP, DT, RB, JJ, NN, SENT, SENT], tokens=[What, an, very, good, product, !, !], language=en_UK, text=What an very good product!!}
=> session['sessionid'] => pattern[previous = 'Global', current = 'Global']
In the example an output tuple will be produced when two Global concept accesses occur and the last access is the current request. Since both parameters in the example are ‘named’ (namely ‘previous’ and ‘current’), these fields will be added to the output tuple. They contain a reference to the data collected for these concepts when they reached the pattern element in the past. A more elaborate example:
concept Global { val url = `location` val sessionid = `dimml.cookies.get('session')` flow => session['sessionid'] => pattern[previous = 'Global', current = 'Global'] => select[`previous.url.contains('/orderform.html')`@groovy] => ... }
This will collect the values ‘url’ and ‘sessionid’ on the client and start flow processing. First a ‘session’ field is added containing a server-side session based on the ‘sessionid’ field. Then pattern matching is executed to detect the occurrence of two ‘Global’ concept accesses. Then only those matches are selected where the ‘previous’ access was for a page that contained ‘/orderform.html’ as part of the URL.
Description
Evaluates an expression for every incoming tuple. The input tuple is dropped when the expression evaluates to false. In the expression, all fields in the input tuple can be accessed as local variables.
Parameters
Output
The output tuple is equal to the input tuple if the expression evaluates to true. If the expression evaluates to false, the input tuple is dropped and there is no output.
Example 1
concept Example { match '*' val a = 'a' flow => select[`a == 'a'`@groovy] => debug plugin debug }
{a=a}
concept Example { match '*' val a = 'a' flow => select[`a == 'b'`@groovy] => debug plugin debug }
Output: Empty
=> session['sessionid']
A server-side session is a concurrent map that is made available to subsequent flow elements. Every distinct value of the field specified as a parameter will result in a distinct session. Sessions are guaranteed to remain alive on the server for at least 15 minutes of inactivity. The default name of the session object is ‘session’. This can be overwritten by providing a named parameter:
=> session[sessmap = 'sessionid']
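As a minimal sketch, the session map can be used to keep server-side state across requests. The sketch below assumes a ‘session’ cookie is available client side; the cookie name and the views counter are illustrative assumptions.

concept Example {
  match '*'
  val sessionid = `dimml.cookies.get('session')`
  flow
  => session['sessionid']
  => code[views = `session.views = (session.views?:0)+1`@groovy]
  => debug
  plugin debug
}

Every request carrying the same sessionid value sees the same session map, so the views field increases with each page view in the session.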
Description
Use an iterable field in the input tuple, to generate multiple similar output tuples. Every output tuple has the same content as the input tuple, but with one specific value of the iterable field.
Parameters
Input
Output
For every item of the iterable field value an output tuple is generated with the same fields as the input tuple, except for the value of field, which contains a single item from the original field iterable.
Example
concept Example { match '*' val a = 'a' val i = `[1, 2, 3]`@groovy flow => split['i'] => debug plugin debug }
Output:
{a=a, i=3} {a=a, i=1} {a=a, i=2}
The output tuples are in arbitrary order.
Description
The sql element allows usage of a relational database using the provided query and database connection settings. The query can be parameterized by referring to flow tuple fields using the colon notation.
Parameters
The statement is the SQL query, which can be parameterized using colon notation to include values from the data tuple. So the field field from the data tuple is available as :field in the SQL query statement.
The configuration parameter is an expression resulting in a Javascript object / Java map that contains the configuration for connecting to a datasource. The following fields are used commonly in the datasource configuration:
Any field available in configuring the target datasource can be specified using this configuration mechanism. The exact fields depend on the type of database.
Parameter cache applies only to selection queries. It enables caching of results returned by a selection query. When the same query is executed again, results will be returned from the cache. A cache hit is identified by comparing all tuple fields that are used to parameterize the query. The syntax of this parameter is [size]:[time]. The size determines the maximum size of the cache. When there is no more room in the cache the least recently used item is evicted. The time parameter specifies a maximum time for items to remain in the cache. The postfix can range from ms for milliseconds to h for hours. The default is 1m.
Parameter limit applies only to selection queries. It specifies the maximum amount of records to retrieve. By default this limit is set to 1, which is a special mode of operation in that it will add retrieved fields to the current tuple. When a higher limit is set, the data will be stored as a list of maps, where each map contains the data of one row. By default this list is stored in the field named result. This can be overwritten using the fieldName parameter.
Parameter fieldName applies only to selection queries. It overwrites the field name used to store the result of a multi-record selection query as described in the limit parameter explanation.
The batch parameter gives control over how queries are batched. By default all calls to the external datasource are batched. This will improve throughput at the cost of latency. By waiting a small amount of time the sql flow will ensure that most calls to the database benefit from this mechanism. The batch size can be specified using this parameter; the default value is 50.
Sending insert or update queries in batches is usually the best thing to do. In selection queries this might be problematic, especially when the query is part of a flow that produces synchronous output. Furthermore, data that passed through a batched selection query will be bundled together and treated as one set. It will have lost access to the original context that produced the data. When using a selection query in a synchronous flow always set batch to 1, which effectively disables batching for that query. Tuples passing through the sql flow will then retain their original context, allowing the flow to be used in combination with synchronous flows as used by the output plugin.
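The selection parameters described above can be combined as in the sketch below. The table and column names are illustrative assumptions; batch = '1' keeps the query usable in a synchronous flow, and cache = '1000:5m' caches up to 1000 results for at most five minutes.

concept Example {
  match '*'
  const dataSource = `{ type: 'mysql', port: 3306, serverName: 'localhost', databaseName: 'test', user: 'test', password: 'test' }`
  val a = 'a'
  flow
  => sql["SELECT `b` FROM `test` WHERE `a` = :a", `dataSource`, cache = '1000:5m', limit = '10', fieldName = 'rows', batch = '1']
  => debug
  plugin debug
}

Because limit is higher than 1, the retrieved rows are stored as a list of maps in the ‘rows’ field rather than merged into the current tuple.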
Output
For insert and update queries:
For selection queries with one result row:
For selection queries with multiple result rows:
Example
concept Example { match '*' const dataSource = `{ type: 'mysql', port: 3306, serverName: 'localhost', databaseName: 'test', user: 'test', password: 'test' }` val a = 'a' flow => sql["INSERT INTO `test` (`a`) VALUES (:a)", `dataSource`] => debug plugin debug }
{a=a, sqlCount=1}
Remarks
Description
The store flow element is a more robust implementation of the FTP flow element. It stores data in a file on an FTP or SFTP server. The biggest difference is that the store flow element stores the file on an intermediate server. This allows for better robustness, since the file is resent several times if the connection is unavailable. Additionally, the file is transferred from one location/server at the end of the time interval, which requires far fewer (parallel) connections. The only disadvantage is that data is stored in an AWS S3 environment, while the FTP element streams the data directly to the end point.
Parameters
scheme://username:password@host:port/path/to/storage?param1=value1&param2=value2
The currently supported schemes are ‘ftp’ and ‘sftp’. They both support encrypted credentials by supplying the credentials as a parameter (called ‘credentials’). Depending on the contents of the credentials you can omit one or more of the following: username, password, host. The ‘ftp’ scheme supports tunneling FTP over SSL (called FTPS). To activate it, add ‘secure’ to the list of parameters. Certificate validation can optionally be disabled by specifying ‘no-validate’:
ftp://user2:...@46.51.177.161/david?secure&no-validate
The ‘sftp’ scheme supports private key authentication. To use it, enter the URI encoded private key as a parameter named ‘key’.
All schemes support storing the data in a zip file. To use it, add zip as a query parameter.
Output
At the specified time interval, the file with all the data is made available at the specified location.
Example
flow => compact => store[ uri = 'ftp://user2:...@46.51.177.161/david', window = '1m', separator = ',\n ', header = '[\n ', footer = '\n]', data = `JsonOutput.toJson(compacted)`@groovy ] //In this example the 'compact' flow is used to make the tuple available as a map. //The store flow is configured to create a JSON array containing all tuples as JSON objects.
The above syntax uses a static storage key. The URI and other parameters do not change during the lifetime of the flow. Alternatively, you can employ a dynamic storage key. When the first parameter to the store flow is an unnamed code block, it is interpreted to be a map / javascript object that contains all the parameters required to store the data. The example below would store the data from different sections of a website in different files (assuming ‘url’ contains the current URL and the first path element identifies a section). You could also use this mechanism to create files per user session / visitor ID.
flow => store[`[ uri: 'ftp://user2:894zhmkl%40!@46.51.177.161/david', window: '1m', separator: ',\n', pattern: "${url==null?'root':(uri(url).getPath()?.substring(1)?.takeWhile{it!='/'}?:'root')}_%s.log" ]`@groovy]
Description
The stream flow element will make the input tuples available as an HTTP streaming resource. This can be used, for instance, as part of a web page containing a stream of DimML generated data. The sandbox environment is also designed to display all data made available for the current sandbox environment. The URL uses the path component of the DimML URI preceded by http://baltar-dev.dimml.io:8080/. That means that, since all sandbox DimML files are located in one folder, the URI to capture a stream from the file example.dimml for sandbox user 1234 will be http://baltar-dev.dimml.io:8080/sandbox/1234/example.dimml.
Note that it is also possible to add a name and an IP address as parameters. This can be used to call a specific stream from a DimML file and make streams only available from defined IP addresses.
Parameters
Output
An HTTP streaming resource containing all data tuples
Example
concept Global { match `0/5 * * * * ?`@cron flow => code[url=`'hello'`@groovy] => stream plugin stream }
This DimML application can be copy-pasted directly into the sandbox environment. There it will show the text ‘hello’ every 5 seconds in the output stream.
Description
When a tuple reaches a flow element there are three timestamps that might be of interest: the original creation time of a tuple, the last modified time of a tuple, and a timestamp for the entire set. These three times are referred to as ‘origin’, ‘modified’, and ‘packet’ time respectively. With the time flow element it is possible to add fields to the tuple with values based on this time information.
Parameters
Every field has the format field = ‘type:format’, where
Output
The input tuple plus the list of fields with corresponding time values, as defined by the flow element parameters.
Example
concept Example { match '*' flow => time[ dateStamp = 'origin:yyyy-MM-dd', timeStamp = 'modified:HH:mm:00' ] => debug plugin debug }
Output:
{dateStamp=2015-04-02, timeStamp=15:17:00}
The special ?filename property can also be set with the time flow element. This property is used by file-based flow elements to set a time based file name.
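As a sketch, the ?filename property set by the time element can feed a file-based element such as ftp; the server settings below are placeholders.

flow
  => time[?filename = "origin:yyyyMMdd'.log'"]
  => csv
  => ftp[host = 'example.com', user = 'user', password = 'password', dir = 'logs']

With this configuration each tuple is appended to a file named after its origin date, for example 20150402.log.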
Description
The twitter producer flow emits data whenever it is available in Twitter. It should be used in combination with a keepalive service, to keep the flow active and ensure that the `twitter` flow element keeps emitting data tuples.
Parameters
The configuration parameter is an expression resulting in a Javascript object / Java map that contains the configuration for connecting to the Twitter streaming data API. The following fields are used commonly in the Twitter configuration:
Output
Example
concept Example { match `keepalive`@cron flow => twitter[`TWITTER_OPTIONS`] => debug const TWITTER_OPTIONS = `{ consumerKey: '...', consumerSecret: '...', token: '...', secret: '...', terms: ['#beer'], lang: ['nl','en'] }` }
Data tuples containing JSON formatted `message` values, for every message placed on Twitter that matches the specified terms and languages.
The `twitter` flow element keeps emitting data tuples for each new tweet retrieved from the Twitter API. Normally the flow would be deactivated by the system after a certain period of time. To prevent this, the keepalive mechanism from the example can be used to keep the flow active.
Description
The useragent flow element processes the user agent header into a list of properties by processing the header value server side. Note that the request header can be used to capture the user agent string. Additionally, for web pages, the Javascript function navigator.userAgent can be used to capture the user agent string as well.
Parameters
Output
The following fields are added to the output tuple:
More details on the processing can be found here
Example
@groovy concept Test { match '*' request[ua = `headers['User-Agent']`] flow => useragent[parsed = 'ua'] => console }
{parsed={name=Chrome, device=PERSONAL_COMPUTER, family=CHROME, os=WINDOWS, osVersion=10.0, type=BROWSER, version=54.0.2840.99}, ua=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36}