Understand the fundamentals of replication-based synchronization (RBS) so you can improve performance for this synchronization model.
Replication-based synchronization has two distinct operation and data transfer phases where only differences are transferred: upload (where only updates from the mobile
client to the server are integrated with data in the mobile middleware cache and pushed to the EIS) and download (where EIS delta changes are determined for, and supplied to, the specific client).
These phases are conducted as discrete transactions to protect the integrity of synchronization. Replication-based
synchronizations are carried out within a single data transfer session, allowing for exchanges of large payloads. The goal of tuning RBS performance is therefore to ensure that these large-volume transfers are handled effectively, without creating bottlenecks in the runtime.
Considerations that affect performance can be separated into synchronization phases and architecture components.
Replication-based synchronization use cases center on three primary phases that move data between server
and client, each of which needs to be considered when choosing performance setting values. Each phase involves mobility environment components that affect performance: MBO development and data design, EIS data models and runtime servers, and Unwired Server runtimes. These phases are:
- Initial synchronization – where data is moved from the back-end enterprise system, through the mobile
middleware, and finally onto the device storage. This phase represents the point where a new device is put into
service or data is reinitialized, and therefore represents the largest movement of data of all the scenarios. In these
performance test scenarios, the device-side file may be purged to represent a fresh synchronization, or preserved
to represent specific cache refreshes on the server.
For this phase, the most important performance considerations are the data and how it is partitioned (EIS design) and loaded by the MBO (MBO development) using load parameters and synchronization filter parameters. Synchronization parameters
determine what data is relevant to a device; they are used as filters within the server-side cache. Load parameters
determine what data to load from the EIS into the Unwired Platform cache.
- Incremental synchronization – involves create, update, and delete (CUD) operations on the device where some
data is already populated on the device and changes are made to that data which then need to be reconciled with
the middleware and the back-end enterprise system. When create and update operations occur, changes may
be pushed through the middleware cache to the back end, reads may occur to properly represent the system of
record (for example, when the back end reformats data), or both. This scenario represents incremental delta
changes to and from the system of record.
As in the initial synchronization phase, the EIS accounts for the bulk of the device synchronization response
time: a slow EIS consumes resources in the Unwired Server, with the potential to further impede devices that are
competing for resources in connection pools.
Additionally, the complexity of the mobile model, measured by the number of relationships between MBOs, has a significant
impact on create, update, and delete operation performance. Shared partitions among users or complex locking
scenarios involving EIS operations can become a major performance concern during device update operations. Cache
and EIS updates are accomplished within the scope of a single transaction, so other users in the same partition are
locked out during an update of that partition. Consider denormalizing the model if complex relationships cause
performance concerns.
- Data change notification (DCN) – changes to the back-end data are pushed to the mobile middleware and then
reconciled with the device data. DCN is typically observed alongside additional changes to the device data,
so that changes from both the device and the back end simultaneously affect the mobile middleware cache.
DCN efficiently updates the Unwired Server because it does not require the Unwired Server to poll the EIS or to
refresh the cache based on a schedule. EIS DCN application to the cache is independent of the client synchronizations.
If DCN data is located in a shared partition, multiple devices benefit from the single EIS update to the cache. There are several ways to materially improve DCN performance:
- Use a load balancer between the EIS
and the Unwired Server – DCNs can be applied efficiently across an Unwired Platform cluster, as each node in the
cluster helps to parse incoming payloads.
- Combine multiple updates into a single batch.
- Run DCNs from a multithreaded source to parallelize updates. Note that there are diminishing returns beyond three to four clients, in large part
due to the nature of the model.
Different models exhibit different performance characteristics when applying updates,
so proper analysis of application behavior is important.
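To make the parameter roles described under initial synchronization concrete, the following sketch contrasts load parameters (which determine what the EIS loads into the middleware cache) with synchronization parameters (which filter the cached rows relevant to a specific device). The function names, the in-memory cache, and the region/owner partitioning scheme are illustrative assumptions, not Unwired Platform APIs.

```python
# Hypothetical sketch: load parameters vs. synchronization parameters.
# CACHE, load_from_eis, and download_for_device are illustrative names only.

# Server-side middleware cache, keyed by partition (here, by region).
CACHE = {}

def load_from_eis(region):
    """Load parameter role: determines WHAT the EIS loads into the cache."""
    # Stand-in for an EIS query; only the requested partition is loaded.
    eis_rows = [
        {"id": 1, "region": "EMEA", "owner": "alice"},
        {"id": 2, "region": "APAC", "owner": "bob"},
        {"id": 3, "region": "EMEA", "owner": "carol"},
    ]
    CACHE[region] = [r for r in eis_rows if r["region"] == region]

def download_for_device(region, owner):
    """Synchronization parameter role: filters cached rows for one device."""
    if region not in CACHE:
        load_from_eis(region)  # a cache miss triggers an EIS load
    return [r for r in CACHE[region] if r["owner"] == owner]
```

Because the cache partition is shared, a second device in the same region is served from the already-loaded partition without another EIS round trip, which is why partition design dominates initial-synchronization cost.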
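The DCN batching and multithreading suggestions above can be sketched as follows. The endpoint URL, JSON payload shape, batch size, and worker count are all illustrative assumptions rather than the documented DCN request format; the worker count of four reflects the diminishing returns noted above.

```python
# Hypothetical sketch: batching EIS changes into DCN requests and submitting
# them from a small thread pool. URL and payload shape are assumptions.
import json
import queue
import threading
import urllib.request

DCN_URL = "http://unwired-server.example.com/dcn"  # assumed endpoint
BATCH_SIZE = 50     # combine many updates into a single request
WORKER_COUNT = 4    # diminishing returns beyond three to four clients

def batch(changes, size):
    """Group individual EIS row changes into fixed-size batches."""
    for i in range(0, len(changes), size):
        yield changes[i:i + size]

def post_batch(payload):
    """Send one batched DCN payload: one HTTP round trip per batch."""
    req = urllib.request.Request(
        DCN_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def worker(batches):
    """Drain the shared batch queue until it is empty."""
    while True:
        try:
            payload = batches.get_nowait()
        except queue.Empty:
            return
        post_batch(payload)

def submit(changes):
    """Batch all pending changes, then post them from parallel workers."""
    batches = queue.Queue()
    for b in batch(changes, BATCH_SIZE):
        batches.put({"operations": b})
    threads = [threading.Thread(target=worker, args=(batches,))
               for _ in range(WORKER_COUNT)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The batching step trades per-request overhead for larger payloads, while the thread pool keeps multiple DCN requests in flight; both levers should be measured against the specific model, since models differ in how efficiently updates apply.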