bp add `stream` `inputStream` [`condition`] – adds a breakpoint on a stream, before it begins processing an input
record from another stream (inputStream). Optionally, use a SPLASH expression to specify a condition
to trigger the breakpoint. The breakpoint is only triggered when the
condition evaluated on the input record is true. The expression may refer to either or both of these predefined variables:
- currow – the current input record.
- oldrow – the previous value of the record with this key, that is being updated or deleted.
The condition may refer to the fields in the records, as in "currow.field".
The stream's local and global variables may be used as well. This command prints the ID of the newly created breakpoint.
bp add `stream` any – adds a breakpoint on a stream, before it starts processing an input
record from any stream. You cannot specify a condition. This command prints the ID of the newly created breakpoint.
bp add `stream` out [`condition`] – adds a breakpoint on a stream, after it has processed an input record
and produced some (possibly empty) output. Optionally, use a SPLASH expression to specify a condition
to trigger the breakpoint. The breakpoint is only triggered when the
condition evaluated on the output record is true. The expression may refer to one predefined variable, currow, which is the current output record. Since
one input record may produce multiple output records, the condition
is evaluated for each of them sequentially. If there is no output
produced, the condition still evaluates once with currow set to NULL.
The condition may refer to the fields in the records, as in "currow.field". This command prints the ID of the newly created breakpoint.
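For illustration, the three forms of bp add might be entered as follows; the stream name priceStream, the input stream feed1, and the field price are hypothetical:

```
bp add priceStream feed1 `currow.price > 100.`
bp add priceStream any
bp add priceStream out `currow.price > 100.`
```

Each command prints the ID of the breakpoint it creates.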
ex `kind` [`stream` [`object`]] – examines data in the Event Stream Processor. ex takes the
name of the kind of data, of the stream to which it belongs, and of
the particular object. For some kinds of data, the stream and object
arguments may not be applicable. The data is printed in XML
format, with the element name for most data kinds set to "row". If
the data represents a transaction, it is enclosed in a <trans> element.
If the data represents an update pair, it is enclosed in a
<pair> element. The exact fields depend on the data being examined. When you examine the input data kinds (input queue, current input
transactions and row, input history), the data may be a mix of
rows of different types, produced by different streams. The name of the XML element is set to the
name of the stream that produced it (for base streams, this is
the name of the base stream itself).
Kinds of data currently supported are:
- `pause` – state of the user streams when paused. The fields are:
- name – name of the stream.
- loc – location where the stream is paused.
- onbp – if on a triggered breakpoint, the ID of that breakpoint. Otherwise, onbp is set to 0. If multiple breakpoints are triggered simultaneously, onbp contains the ID of one of them.
- throttle – the input queue throttle value.
- history – maximum size of the kept history.
- postSeq – count of transactions posted to the input queue.
- inSeq – count of transactions ever read from the input queue.
- outSeq – count of transactions processed to the output (including
the empty transactions that get discarded, and the expiry
transactions).
- stepSeq – the count of steps (as defined by the step command) made
in trace mode. This includes both single-stepping and
running. Use the changes in this count to determine
which streams have changed state.
- `pauseall` – same as pause but includes the metadata streams as well.
- `breakpoints` – information about all currently registered breakpoints. The
fields are:
- id – ID of the breakpoint. Does not change throughout the breakpoint's
life.
- stream – name of the stream on which the breakpoint is defined.
- origin – contains the name of the input stream for a breakpoint on a
particular input stream. Use "*" for a breakpoint on input from
any stream, and "" (empty) for a breakpoint on output.
- expr – conditional expression.
- enabledEvery – the value n that causes the breakpoint to trigger on every nth matching record.
See bp every.
- leftToTrigger – the number of matches that are currently left for the breakpoint until
triggering.
- onit – set to 1 if the breakpoint is currently triggered, otherwise set to 0.
- `var` `var-name` – contents of a global variable (one defined in the global DECLARE block). The fields depend on the type of variable. The indexes
in the arrays are shown as ESP_Index. The keys in the
dictionaries are shown as ESP_Key_<field-name>. The values
of records are shown with fields as in the record definition.
The simple variables are represented with the ESP_Value
field. For structured values, this command may return multiple
rows. If a variable is NULL, nothing is returned. For an array,
only the elements with non-NULL values are shown.
- `listVar` – the list of all global variable names.
- name – name of the variable.
- type – type of the variable.
- `store` `stream` – contents of a stream's store. The fields are as in the stream's
row definition.
- `outTrans` `stream` – the current output transaction as it is being built. The fields are
the same as in the stream's row definition.
- `outRow` `stream` – output produced from processing the previous input row.
May contain multiple or no rows. The fields are the same as in the
stream's row definition.
- `badRows` `stream` – when the Event Stream Processor is paused on a bad
rows exception, contains these bad rows. The fields are the same as in
the row definition of the stream that produced the data (or, for a
base stream, of the current stream).
- `badRowsReason` `stream` – for each bad row reported by badRows, this data contains an error message
explaining why it is bad. The message is in the reason
field.
- `outHist` `stream` – the output transactions from the stream's history. The empty
transactions are returned as records with all fields containing
NULL. There is one-to-one match between the transactions returned
by ex `outHist` and ex `inHist`. The fields are the same as in the
stream's row definition.
- `lastOutTrans` `stream` – the newest output transaction in the stream's history. Similar to "ex `outHistLatest` `stream` `0`", but
if the history is empty, returns a success with no rows,
while outHistLatest returns an error. The fields are the same as in the
stream's row definition.
- `outHistEarliest` `stream` `index` – selects an individual output transaction from the stream's history.
The index is a number, 0 selecting the earliest
transaction saved in the history, and increasing index numbers indicate later
transactions. If there is no transaction with such an index, returns
the "No such object" error. The fields are the same as in the
stream's row definition.
- `outHistLatest` `stream` `index` – selects an individual output transaction from the stream's history.
The index is a number, 0 selecting the latest transaction
saved in the history, and increasing index numbers indicate earlier
transactions. If there is no transaction with such an index, returns
the "No such object" error. The fields are the same as in the
stream's row definition.
- `var` `stream` `var-name` – contents of a stream's local variable. You can examine only the local variables
(those defined in the stream's local DECLARE block).
Local variables include the variables of array, dictionary,
and eventCache. The variables defined inside the SPLASH
blocks exist only when the appropriate methods run, and cannot
be examined. The fields depend on the type of variable. The indexes
in the arrays and eventCaches are shown as ESP_Index.
The keys in the dictionaries and eventCaches are shown as
ESP_Key_<field-name>. The values of records are shown with fields as in the record definition. The simple variables are
represented with the ESP_Value field. For structured values,
var may return multiple rows. If a variable is NULL,
nothing is returned. For an array, only the elements with non-NULL
values are shown. You cannot access global variables this way; instead, use the empty stream name to access them.
- `listVar` `stream` – the list of all variable names defined on this stream. Applicable
only to the streams that are allowed to have the local DECLARE block. Does not include the global variables.
- name – name of the variable.
- type – type of the variable.
- `queue` `stream` – an input data kind. Contents of the stream's input queue. The
fields are the same as in the row definition of the stream that produced
the data (or, for a base stream, of the current stream).
- `inTrans` `stream` – an input data kind. The current input transaction that is being
processed. The fields are the same as in the row definition of the stream
that produced the data (or, for a base stream, of the current
stream).
- `inRow` `stream` – an input data kind. The current input row that is being processed.
The fields are the same as in the row definition of the stream that
produced the data (or, for a base stream, of the current stream).
- `queueHead` `stream` `index` – an input data kind. Select an individual transaction from the
stream's input queue. The index is a number, 0 selecting
the transaction at the head of the queue, increasing index numbers indicating
following transactions. If there is no transaction with such an index,
returns the "No such object" error. The fields are the same as in the row
definition of the stream that produced the data (or, for a base
stream, of the current stream).
- `queueTail` `stream` `index` – an input data kind. Select an individual transaction from the
stream's input queue. The index is a number, 0 selecting
the last transaction at the tail of the queue, increasing index
numbers indicating previous transactions. If there is no transaction with
such an index, returns the "No such object" error. The fields are
the same as in the row definition of the stream that produced the data (or, for a base stream, of the current stream).
- `inHist` `stream` – an input data kind. The input transaction from the stream's history.
There is a one-to-one match between the transactions returned
by ex `outHist` and ex `inHist`. The fields are as in the
row definition of the stream that produced the data (or, for a
base stream, of the current stream).
- `lastInTrans` `stream` – an input data kind. The newest input transaction in the stream's
history. Similar to "`inHistLatest` `stream`
`0`", but if the history is empty, returns a success with
no rows, while `inHistLatest` returns an error. The fields are the same as
in the row definition of the stream that produced the data (or,
for a base stream, of the current stream).
- `inHistEarliest` `stream` `index` – an input data kind. Select an individual input transaction from
the stream's history. The index is a number, 0 selecting
the earliest transaction saved in the history, increasing index
numbers indicating later transactions. If there is no transaction with such an
index, returns the "No such object" error. The fields are as in
the row definition of the stream that produced the data (or, for a
base stream, of the current stream).
- `inHistLatest` `stream` `index` – an input data kind. Select an individual input transaction from
the stream's history. The index is a number, 0 selecting
the latest transaction saved in the history, increasing index
numbers indicating earlier transactions. If there is no transaction with such
an index, returns the "No such object" error. The fields are the same as in
the row definition of the stream that produced the data (or, for a
base stream, of the current stream).
- `hist` `stream` – a mixed representation of history, including both input and
output data. Each input transaction is followed by its matching
output transaction. The rows in the input transactions are marked with
the XML tag of their origin stream name, while the rows in the output
transaction are marked with the XML tag "row".
- `lastTrans` `stream` – a mixed input-and-output data kind. See
`hist`. The newest input and output transactions in
the stream's history.
- `histEarliest` `stream` `index` – a mixed input-and-output data kind. See
`hist`. Select an individual transaction pair from the
stream's history. The index is a number, 0 selecting the
earliest transaction saved in the history, increasing index numbers indicating
later transactions. If there is no transaction with such an index,
returns the "No such object" error.
- `histLatest` `stream` `index` – a mixed input-and-output data kind. See `hist`. Select an individual transaction pair from the
stream's history. The index is a number, 0 selecting the
latest transaction saved in the history, increasing index numbers indicating
earlier transactions. If there is no transaction with such an index,
returns the "No such object" error.
- `aggrGroup` `aggregationStream` – the internal state of an aggregation stream, from its group index.
Works only for aggregation streams that are not optimized
to use additive aggregations. The value fields have
names from the input stream row definition. The key fields
have the same name as in this stream's row definition but with
ESP_Key_ prefixed to them. The index of the record in the aggregation bucket
is in the ESP_Index field.
- `states` `patternStream` [`patternNum`] – states of the automatons in a pattern stream. Initially, a pattern stream has one automaton per defined pattern. As data is received
and matched by patterns, a new automaton is cloned for
each sequence of events that may match the pattern. As
complete patterns are found, or the sequences of events are
found to not match the patterns, automatons are destroyed.
If the optional parameter patternNum is present, states shows only the
automatons for that pattern. The fields are:
- pnum – number, starting with 0, of the pattern being parsed by this automaton.
- instance – instance number of the automaton. As new automatons are
cloned, each receives a unique instance number. The
instance number is never repeated (unless the Event Stream Processor is restarted). The pair (pnum, instance)
identifies an automaton during its
execution.
- state – a number identifying the current state of the automaton. Automatons are linear; they have no loops in their logic, and
may visit a state only once. If the state has not changed since
last examination, it indicates that the automaton has not matched
any new data. The
numbers used for states are not sequential.
- timed – if set to 1, an expiration timer is attached to
this automaton. If the timer expires, its pattern match is considered
failed and the automaton is destroyed. If set to 0,
the automaton does not expire. The untimed
automaton is the very initial automaton of a pattern, used to
clone all others.
- time_left – the time, in seconds, until expiration for a timed automaton.
For an untimed automaton, this is always 0. By default, when the Event Stream Processor is paused, the platform logical
clock stops. However, if the clock is set to not stop on pause,
the timers do not stop. A value of 0 or negative means that
the automaton expires when the Event Stream Processor execution resumes.
- `bindings` `patternStream` [`patternNum`] – the pattern variable
bindings that are caused by the data parsed so far, for each automaton in patternStream. If the optional parameter patternNum is present, shows only the
data for that pattern. The fields are:
- pnum – number, starting from 0, of the pattern being parsed by this automaton.
- instance – instance number of the automaton. As new automatons are
cloned, each receives a unique instance number. The
instance number is never repeated, unless the Event Stream Processor is restarted. The pair (pnum, instance)
identifies an automaton during its
execution.
- var – name of the bound variable. Besides these variables,
bound events and constants are listed as well. The constants
appear with unique compiler-generated names.
- value – value of the bound variable, in string format. The values of
bound events are reported here as
NULL. Examine the events data kind to see the contents of
the event rows.
- `events` `patternStream` [`patternNum`] – the events that have
been parsed by the automaton so far, for each automaton in a pattern stream. This data kind returns a
mix of records of different types. Record types are named
after the input streams that they flow from.
If the optional parameter patternNum is present, shows only the
data for that pattern. The fields are:
- ESP_Pnum – number, starting from 0, of the pattern being parsed by this automaton.
- ESP_Instance – instance number of the automaton. As new automatons are
cloned, each receives a unique instance number. The
instance number is not repeated, unless the Event Stream Processor is restarted. The pair (pnum, instance)
identifies an automaton during its
execution.
- ESP_Var – name of the event variable.
- as in input stream – the rest of the fields keep the names as in the row type of
the input stream that they are from.
- `expect` `patternStream` [`patternNum`] – the expected records
that would advance the automaton to the next state, for each automaton in a pattern stream. This data
kind returns a mix of records of different types. Record
types are named after the input streams that they flow from.
If the optional parameter patternNum is present, shows only the
data for that pattern. The fields are:
- ESP_Pnum – number, starting from 0, of the pattern being parsed by this automaton.
- ESP_Instance – instance number of the automaton. As new automatons are
cloned, each receives a unique instance number. The
instance number is not repeated, unless the Event Stream Processor is restarted. The pair (pnum, instance)
identifies an automaton during its
execution.
- ESP_Var – name of the event variable. If preceded by a "!", receiving
such a record causes a pattern mismatch. Otherwise, it advances the automaton to the next state.
- as in input stream – the rest of the fields keep the names as in the row type of
the input stream that they are from. Only the fields that
are bound to values are shown; the rest are shown as NULL.
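As a sketch of an esp_client session using a few of these data kinds (the stream name stream1 and the variable name counter are hypothetical):

```
ex pause
ex breakpoints
ex var stream1 counter
ex queue stream1
ex outHistLatest stream1 0
```

The first two commands take no stream or object argument; the others narrow the examination to one stream, one variable, or one transaction in the history.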
- exf `kind` [`stream` [`object`]] `filter` – similar to ex, but specifies a filter SPLASH expression to be evaluated
on the Event Stream Processor. Only records
for which the filter evaluates to a true (non-zero, non-NULL)
value are returned. With filters, any transaction and update
pair boundaries are lost; each record is returned by itself.
The filter may refer to predefined variables with names
matching the XML tags of the rows when printed. For most
data kinds, the variable row contains the current
record to be filtered. For the input data kinds multiple variables
are defined, each is named after an input stream of the target stream.
In this case when evaluating a record, the variable matching the
stream of its origin contains the record, and all other variables
are set to NULL. The condition may refer to the fields
in the records as usual, for example "row.field".
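For example, a hypothetical filtered examination of a stream's store (the stream name stream1 and the field price are assumptions):

```
exf store stream1 `row.price > 100.`
```

Here row holds each store record in turn, and only the records for which the expression is true are printed.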
- eval `stream` `block` – evaluates a SPLASH statement (not expression) on a stream, to
change the contents of the stream's variables (those defined in the
stream's local DECLARE block or the global DECLARE block). The variables
that are defined inside the SPLASH blocks of a stream exist only when
the appropriate methods run, and cannot be modified. To modify the
global variables, evaluate in the context of any computational
stream.
The SPLASH statement must be either a simple
statement terminated by ";" or a block enclosed in braces "{}".
Multiple statements must always be enclosed in a block. If you use braces to quote the block argument, the outside
braces do not count as the block delimiters (they are just esp_client
quotes).
Correct Examples:
`a := 1;`
{a := 1;}
`{ typeof(input) r := [ a=9; |
b= 's1'; c=1.; d=intDate(0);];
keyCache(s0, r); insertCache(s0, r); }`
{{ typeof(input) r := [ a=9; |
b= 's1'; c=1.; d=intDate(0);];
keyCache(s0, r); insertCache(s0, r); }}
Incorrect Examples:
`a := 1`
{a := 1}
`typeof(input) r := [ a=9; |
b= 's1'; c=1.; d=intDate(0);];
keyCache(s0, r); insertCache(s0, r);`
{ typeof(input) r := [ a=9; |
b= 's1'; c=1.; d=intDate(0);];
keyCache(s0, r); insertCache(s0, r); }
All the usual SPLASH syntax applies, including that for defining the temporary
variables in the block. All of the stream's variables and global
variables are visible and may be read or changed in the statement.
No streams or stream iterators are visible in the statement.
Back quotes and curly braces cannot be used to enter multiline statements. In the previous examples, the line splits
represent the wrapping of a long line on the terminal. In
many cases, the multiline quoting format is more convenient:
eval {stream} <<!
{ typeof(input) r := [ a=9; |
b= 's1'; c=1.; d=intDate(0);];
keyCache(s0, r); insertCache(s0, r); }
!
Operations on eventCaches require special preparation. Normally, the key of an eventCache is determined by the current input record. During eval there is no input record, so the key is not set
and operations on eventCaches have no effect. For these
operations to work, you must manually set the key using the operator
keyCache(ec-variable, record) before performing any
aggregation operations on the eventCache.
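For example, a hypothetical eval call that increments a global variable counter from the context of a computational stream stream1:

```
eval stream1 `counter := counter + 1;`
```

A single simple statement terminated by ";" needs no enclosing braces.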