Set sizing parameters for a log store in a CREATE LOG STORE statement in the project’s CCL file.
The CREATE LOG STORE parameters described here control the size and behavior of the log store.
The maximum file size (the maxfilesize parameter) is the largest size, in bytes, that the log store file is allowed to reach. See Sizing a Log Store for instructions on calculating this value.
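For example, a minimal log store definition might set only the backing file and the maximum file size. This is a sketch: the store name, file name, and 8 GB value are illustrative, not recommendations.

CREATE LOG STORE machineStore
PROPERTIES
    filename = 'machineStore',          // base name of the store's backing file on disk
    maxfilesize = 8589934592;           // 8 GB, calculated as described in Sizing a Log Store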
Unlike memory stores, log stores do not extend automatically. Sizing log stores correctly is important. A store that is too small requires more frequent cleaning cycles, which severely degrades performance. In the worst case, the log store can overflow and cause processing to stop. A store that is too large also causes performance issues due to the larger memory and disk footprint; however, these issues are not as severe as those caused by log stores that are too small.
As the amount of data in the store grows, if the free space falls below 10 percent (excluding the reserve), Event Stream Processor starts reporting "log store is nearing capacity" in the server log. If data is deleted from the store in bursts (for example, if data is collected during the day and data older than a week is discarded at the end of the day), these messages may appear intermittently even after the old data has been flushed. As the cleaning cycle rolls over the deleted data, the messages disappear.
Unless your log store is very small, these warnings appear before the store runs out of space. If you see them, stop Event Stream Processor at a convenient time and increase the store size. Otherwise, Event Stream Processor aborts when the free space in the store falls below the reserve size.
If a log store is sized incorrectly, the entire reserve may be used up, causing the store to become wedged. A wedged store cannot be resized, and its content cannot be preserved: delete the store files and restart Event Stream Processor with a clean store. If you make a backup of the store files before deleting them, SAP Technical Support may be able to extract the content. To resize a log store, change the store size in the project; the store is resized on restart. You can increase the store size, but never decrease it. When you restart a project after resizing the store, it is likely to produce server log messages about free space being below the reserve until the cleaning cycle assimilates the newly added free space.
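For example, to enlarge the store defined earlier from 8 GB to 12 GB, raise maxfilesize in the project's CCL and restart the project; the figures are illustrative:

CREATE LOG STORE machineStore
PROPERTIES
    filename = 'machineStore',
    maxfilesize = 12884901888;          // raised from 8 GB to 12 GB; applied on restart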
Each record in a stream assigned to a log store carries an overhead, in bytes, of:

96 + 32 * ceiling(log2(number_of_records_in_the_stream))

For a small stream (for example, fewer than 1000 records), the overhead for each record is:

96 + 32 * ceiling(log2(1000)) = 96 + 32 * 10 = 416
In many cases, the record itself is smaller than its 416-byte overhead. Because the effect is logarithmic, large streams are not badly affected: a stream with a million records has a base-2 logarithm of 20 and incurs an overhead of 96 + 32 * 20 = 736 bytes per record. The overhead still affects performance, because the extra data written to the store increases the frequency of store cleaning.
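As a rough illustration of how this overhead feeds into the size calculation (the record count and average record size here are assumptions): a stream of 1,000,000 records averaging 300 bytes each occupies about

1,000,000 * (300 + 736) = 1,036,000,000 bytes (roughly 1 GB)

of live data in the store, before adding the free-space headroom that the cleaning cycle needs.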
If your project includes any windows configured for guaranteed delivery (GD), consider adjusting the value of ckcount to improve performance and latency.
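As a sketch, such a store might set ckcount explicitly; the value 4096 is purely illustrative, and the right value depends on your own measurements of GD performance and latency:

CREATE LOG STORE gdStore
PROPERTIES
    filename = 'gdStore',
    ckcount = 4096,                     // illustrative; tune against measured GD latency
    maxfilesize = 8589934592;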
The sweepamount parameter determines how much of the log file is “swept through” during each cleaning pass. It must be between 5 and 20 percent of the maxfilesize value. A good lower bound for the sweep size is half the size of the write cache on your storage array; this usually suggests a sweep size of 512 to 1024 megabytes. Smaller sweep sizes minimize latency spikes at the expense of a higher average latency; larger values give a lower average latency, with higher spikes when space is reclaimed.
If the value of the sweepamount parameter is too small, the system performs excessive cleaning, and in some cases the cleaning pass cannot free enough space in the log store.
The size of the sweep is also limited by the amount of free space left in the reserve at the start of the cleaning cycle. If the reserve is set lower than the sweep amount and the sweep does not encounter much dead data, the sweep stops when the relocated live data fills the reserve; the newly cleaned area then becomes the reserve for the next cycle. Unless other factors override, SAP recommends keeping the sweep and reserve sizes close to each other. Note that reservepct is specified as a percentage, while sweepamount is specified in megabytes.
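For example, in an 8 GB store, a sweep of 800 megabytes sits close to a 10 percent reserve (about 819 megabytes), in line with the recommendation above; all values are illustrative:

CREATE LOG STORE machineStore
PROPERTIES
    filename = 'machineStore',
    sweepamount = 800,                  // megabytes swept per cleaning pass
    reservepct = 10,                    // 10% of 8 GB is about 819 MB, close to the sweep size
    maxfilesize = 8589934592;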
If your project includes any windows configured for guaranteed delivery (GD), consider adjusting the value of sweepamount to improve performance and latency.
Ensure that the total size of all log store files does not exceed the machine's available RAM. If it does, the machine takes longer to process the data: monitoring tools show low CPU utilization for each stream, and standard UNIX commands such as vmstat show high disk usage due to system paging.
To store data locally using log stores, SAP recommends a high-speed storage device, such as a redundant array of independent disks (RAID) or a storage area network (SAN), preferably with a large dynamic RAM cache. For moderately low throughput, you can place the backing files for log stores on single disk drives, whether SAS, SCSI, IDE, or SATA.
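Putting these parameters together, a complete definition might look like the following sketch. Every value is an assumption to be replaced with the results of your own sizing calculations:

CREATE LOG STORE machineStore
PROPERTIES
    filename = 'machineStore',
    sync = false,                       // asynchronous writes; trades durability for throughput
    sweepamount = 800,
    reservepct = 10,
    ckcount = 4096,
    maxfilesize = 8589934592;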