Data Flow in Event Stream Processor

The throughput of an Event Stream Processor project is limited by the throughput of the slowest component in the project.

Each stream in ESP has an internal queue that holds up to 1024 messages. This queue size is hard-coded and cannot be modified. The internal queue buffers the data feeding a stream when that stream cannot keep up with the incoming data.
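As a rough sketch of this behavior, a bounded blocking queue acts the same way. The Python below is a conceptual illustration only, not ESP's actual implementation:

    import queue

    # Conceptual stand-in for a stream's internal queue; the 1024-message
    # limit mirrors ESP's hard-coded queue size.
    internal_queue = queue.Queue(maxsize=1024)

    def publish(message):
        # When the downstream stream has not drained its queue, put()
        # blocks here: the publisher cannot push more messages until
        # space frees up.
        internal_queue.put(message, block=True)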

Consider an example where data flows from an input adapter, through streams A, B, and C, and then through an output adapter. If the destination of the output adapter cannot handle the volume or frequency of the messages the adapter sends, the internal queue for the stream feeding the output destination fills up, and stream C cannot publish additional messages to it. As a result, the internal queue for stream C also fills up, and stream B can no longer publish to it.

This continues up the chain until the input adapter can no longer publish messages to stream A. If, in the same example, the input adapter is slower than the other streams, messages continue to be published from stream to stream, but throughput is constrained by the speed of the input adapter.
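The following Python sketch simulates this backpressure chain under assumed values: deliberately tiny queues and an artificially slow destination so the effect appears quickly (ESP's real queues hold 1024 messages):

    import queue
    import threading
    import time

    QUEUE_SIZE = 4  # tiny queues so backpressure shows up quickly; ESP uses 1024

    def stream(inbox, outbox):
        # Each stream drains its own queue and publishes to the next
        # queue in the chain; put() blocks whenever that queue is full.
        while True:
            outbox.put(inbox.get())

    def slow_destination(inbox):
        # Stands in for an output destination that cannot keep up.
        while True:
            inbox.get()
            time.sleep(0.1)

    # Queues feeding streams A, B, C and the output destination.
    queues = [queue.Queue(maxsize=QUEUE_SIZE) for _ in range(4)]
    for i in range(3):  # streams A, B, C
        threading.Thread(target=stream, args=(queues[i], queues[i + 1]),
                         daemon=True).start()
    threading.Thread(target=slow_destination, args=(queues[3],),
                     daemon=True).start()

    # The input adapter: once every queue in the chain fills up, this
    # put() blocks too, so ingestion runs at the destination's pace.
    for n in range(100):
        queues[0].put(n)
        print(f"published message {n}")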

Note that if your output destination is a database, you can batch the data for faster inserts and updates. Set the batch size for a database adapter in the service.xml file for the database. For information on configuring the service.xml file, see the Configuration and Administration Guide.
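The speedup comes from sending many rows per database round trip instead of one at a time. The exact parameter name and format in service.xml are covered in the Configuration and Administration Guide; the Python sketch below, which uses a hypothetical events table, only illustrates the batching technique itself:

    import sqlite3  # any DB-API driver exposes the same executemany() call

    BATCH_SIZE = 100  # illustrative; in ESP the batch size is set in service.xml

    conn = sqlite3.connect("example.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, payload TEXT)")

    batch = []

    def on_message(row):
        # Accumulate rows and flush one multi-row insert per batch,
        # trading fewer round trips against holding rows in memory.
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            flush()

    def flush():
        # Rows still sitting in `batch` have not reached the database yet;
        # this in-memory window is the data-loss risk noted below.
        conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", batch)
        conn.commit()
        batch.clear()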

Batching data carries some risk of data loss because the database adapters run on an in-memory system: rows waiting in an unflushed batch are lost if the system fails before they reach the database. To minimize the risk of data loss, set the batch size to 1.