Zero data loss protects a project's data in the event of a client failure, a server
failure, or both. Achieving zero data loss requires judicious use of log stores when you
set up the project, as well as configuration of project options and of clients that use
guaranteed delivery (GD).
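The client side of guaranteed delivery can be pictured as a subscriber that tracks the last sequence number it committed and skips anything at or below it. The sketch below is a hypothetical illustration of that pattern; the class and method names (`GdSubscriber`, `on_event`, `gd_commit`) are assumptions for exposition, not an actual Event Stream Processor API.

```python
# Illustrative GD subscriber loop. All names here are hypothetical;
# they model the commit/deduplicate pattern, not a real ESP SDK.

class GdSubscriber:
    def __init__(self, commit_interval=100):
        self.commit_interval = commit_interval  # events between GD commits
        self.last_committed_seq = 0             # highest sequence committed to the server
        self.pending = 0                        # events processed since the last commit

    def on_event(self, seq, event):
        if seq <= self.last_committed_seq:
            return False              # duplicate replayed after a restart; skip it
        self.process(event)           # application-specific handling
        self.pending += 1
        if self.pending >= self.commit_interval:
            self.gd_commit(seq)       # tell the server it may discard rows up to seq
        return True

    def gd_commit(self, seq):
        self.last_committed_seq = seq
        self.pending = 0

    def process(self, event):
        pass                          # placeholder for real work
```

A smaller `commit_interval` narrows the window of events the server can replay as duplicates after a restart, at the cost of more frequent commit round trips.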
With zero data loss:
- Event Stream Processor recovers windows protected by one or
more log stores to a consistent state as of the most recent checkpoint. (Any
uncheckpointed data is lost and must be sent again by the publisher.)
- Clients can be confident they will not miss any events.
- Clients can minimize the number of duplicate events they receive by controlling
how frequently they issue GD commits.
- Publishers can ensure that the data they publish is fully processed by the
server and thereby reduce transmission of duplicates when the server
restarts.
- You can configure how frequently the server issues automatic checkpoints, and
thus control how much uncheckpointed data is liable to be lost on a server
failure.
- At the expense of performance, you can minimize (but not fully eliminate) the
production of duplicate rows on server or subscriber restart by tuning how
frequently the server checkpoints data and how frequently GD subscribers issue
GD commits.
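The last point above reduces to a simple bound: after a restart, the rows replayed to a GD subscriber are those processed since its last GD commit. The model below is an illustrative assumption (a uniform event stream with a crash at an arbitrary point), not a measured Event Stream Processor behavior, but it shows how the commit interval caps the duplicate count.

```python
# Minimal model of duplicates seen by a GD subscriber after a restart.
# Assumption: the server replays every row after the last GD commit.

def duplicates_after_restart(events_processed, commit_interval):
    """Rows replayed on restart = rows processed since the last GD commit."""
    return events_processed % commit_interval

# Committing every 10 events bounds replay to at most 9 duplicate rows;
# committing every 100 events allows up to 99, but issues far fewer commits.
```

The same trade-off applies on the server side: more frequent checkpoints shrink the uncheckpointed window a publisher must resend, at a comparable cost in throughput.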