Performance Tuning

There are several attributes you can use to fine-tune performance in an input adapter:

flushInterval
Specify an interval of time in microseconds (for example, 5000 microseconds = 5 milliseconds) to wait while accumulating data. At the end of this interval, any accumulated events are sent to Event Stream Processor. Sending events less often allows more events to be placed into each message, which saves communications overhead. Use a nonzero flushInterval to make event accumulation time-based.

maxRecordsPerBlock
Specify the maximum number of accumulated events that the adapter sends to Event Stream Processor at a time. When the number of accumulated events exceeds this value, the envelope or transaction is broken into fragments that are no larger than the specified value. For example, if you expect accumulated event counts of more than 1024 (which would immediately fill the Event Stream Processor Gateway's inbound queue), set maxRecordsPerBlock to a value such as 500 to prevent the inbound queue from filling.

pendingLimit
Specify the number of events that must accumulate before they are sent to Event Stream Processor. Set this parameter to zero to publish each event immediately when it happens, providing the lowest latency at the expense of high network overhead (a TCP/IP packet for each update). If you set this parameter to a larger value, the adapter waits until that number of events has accumulated, packs them efficiently into TCP/IP packets, and sends them to Event Stream Processor. This saves communication work but increases latency on both the adapter and Event Stream Processor.
sendAsTransactions

This parameter controls whether events are sent as an envelope or a transaction. You can specify this parameter on a per-stream basis.

Set this parameter to true for Event Stream Processor to treat a group of updates as a single transaction. Transactions typically provide application-level workload savings, since Event Stream Processor collapses multiple updates to the same value within a transaction into a single update. If a transaction contains a delete, additional savings are achieved, since updates prior to the delete can be discarded.

If you set this parameter to false and you are not in low-latency mode (pendingLimit and flushInterval both set to zero), use maxRecordsPerBlock to control the size of the envelope. You still gain the communications overhead savings mentioned above, but not the transactional savings. This is the preferred configuration for applications that require every event to be sent separately, such as a market data compliance application.

As a general rule, for quote-based applications, where only the most recent update matters, transactions are the most efficient choice. For trades, however, where every event must be processed separately to compute a total volume, use envelopes instead.
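
For illustration, the fragment below is a minimal sketch of how these settings might appear in an adapter configuration file. Only the maxRecordsPerBlock and sendAsTransactions attributes and the itemList element come from this documentation; the enclosing subscriptions element and the stream attribute are assumed placeholder names, so check your adapter configuration schema for the exact placement.

   <!-- Sketch only: the enclosing element and the "stream" attribute are assumed names. -->
   <subscriptions maxRecordsPerBlock="500">
       <!-- Quote stream: repeated updates to the same value collapse into one transaction. -->
       <itemList stream="quotesStream" sendAsTransactions="true">
           <!-- item elements listing RICs go here -->
       </itemList>
       <!-- Trade stream: every event is delivered, packed into envelopes. -->
       <itemList stream="tradesStream" sendAsTransactions="false">
           <!-- item elements listing RICs go here -->
       </itemList>
   </subscriptions>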

When you use both flushInterval and pendingLimit, no event waits longer than the flushInterval before being sent, and whenever at least pendingLimit events have arrived, they are sent immediately. The adapter waits for the flushInterval and, if any events have accumulated, sends them. If pendingLimit or more events accumulate while the adapter is sending the earlier events, the new events are sent immediately, without waiting for the flushInterval. If fewer than pendingLimit events accumulate while the adapter is sending events, it waits for the flushInterval to elapse.
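
As a concrete illustration of this interplay, consider the hypothetical combination below: accumulated events are flushed at least every 5 milliseconds, and a batch is sent sooner whenever 200 or more events have accumulated. The enclosing element is again an assumed placeholder; the attributes themselves are as documented above.

   <!-- flushInterval="5000": flush accumulated events at least every 5000 microseconds (5 ms). -->
   <!-- pendingLimit="200": send immediately once 200 events have accumulated. -->
   <subscriptions flushInterval="5000" pendingLimit="200">
       <!-- itemList elements go here -->
   </subscriptions>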

You can also use the rfaQueue attribute at the itemLists, itemList, or item element level. When specified, the rfaQueue attribute causes the items covered by that element to be subscribed from Reuters on the named queue. Each rfaQueue is processed by its own thread within the Reuters adapter. Spreading requests across multiple threads can reduce latency and improve overall adapter throughput, at the cost of greater CPU usage.

Because all images and updates for a given RIC come from Reuters on the same queue, arrival order is preserved for each individual RIC. If you do not specify an rfaQueue for any of the elements, a single default queue (named "defaultQueue") is used for all RICs.
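
The sketch below shows one way rfaQueue might be applied at the itemList level. The itemLists, itemList, item, rfaQueue, and defaultQueue names are those used in this documentation; the stream and name attributes, the queue names, and the example RICs are illustrative placeholders.

   <itemLists>
       <!-- Each named rfaQueue is serviced by its own thread inside the adapter. -->
       <!-- If no element specified an rfaQueue, all RICs would share "defaultQueue". -->
       <itemList stream="fxStream" rfaQueue="fxQueue">
           <item name="EUR=" />
       </itemList>
       <itemList stream="equityStream" rfaQueue="equityQueue">
           <item name="IBM.N" />
       </itemList>
   </itemLists>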