Reference Data Loading

The strategy for reference data loading is to cache and share it.

The more stable the data, the more effective the cache. Once the data is cached, Unwired Server can support a large number of users without placing additional load on the EIS. The challenge with reference data is its size: finding the most efficient method for loading and updating the data.

Load via Pull

Configure Unwired Platform components to pull data into the CDB from the EIS:


Partition data, if possible, within an MBO or object hierarchy.

Load partitions on demand to spread the load, increase parallelism, and reduce latency.

Use a separate cache group for each independent object hierarchy so that each can load separately.

Use a scheduled policy only if the EIS updates data on a schedule; otherwise, stay with an on-demand cache group policy.

Use a large cache interval to amortize the load cost.

Use the "Immediately update the cache" cache policy if the reference MBOs use update or create operations.

Consider DCN for large data volumes if cache partitioning or separation into multiple cache groups is not applicable or effective; DCN avoids heavy refresh costs.

Use DCN if high data freshness is required.

Targeted change notification (TCN), previously called server-initiated synchronization (SIS), is challenging: it depends on cache interval settings and, with an on-demand cache group policy, requires user activity to trigger a refresh. Change detection is impossible until the cache is refreshed.

Do not use a zero cache interval.
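To make the partition-on-demand guidance concrete, here is a minimal Python sketch (not Unwired Platform code; all names are illustrative) of a cache that loads one partition from the backend on first access and refreshes it only after a large cache interval expires, so other partitions place no load on the EIS:

```python
import time

CACHE_INTERVAL = 3600  # seconds; a large interval amortizes the EIS load cost

def load_partition_from_eis(region):
    """Stand-in for an expensive EIS query scoped to a single partition."""
    return {"region": region, "rows": [f"{region}-item-{i}" for i in range(3)]}

class PartitionedCache:
    """Loads each partition on demand and serves it until the interval expires."""
    def __init__(self, loader, interval=CACHE_INTERVAL):
        self.loader = loader
        self.interval = interval
        self._store = {}  # partition key -> (loaded_at, data)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or time.time() - entry[0] > self.interval:
            # Only this partition is (re)loaded; others stay untouched.
            self._store[key] = (time.time(), self.loader(key))
        return self._store[key][1]

cache = PartitionedCache(load_partition_from_eis)
emea = cache.get("EMEA")        # first access triggers the EIS load
emea_again = cache.get("EMEA")  # within the interval: served from cache
```

The same idea scales across independent object hierarchies by giving each its own cache group, so partitions load in parallel and a refresh of one never blocks the others.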

Load via Push

Configure Unwired Platform components and the EIS so that the EIS can push data changes to Unwired Server:


Use parallel DCN streams for initial loading.

Use Unwired Server clustering to scale up data loading.

Use a single queue in the EIS for each individual MBO or object graph to avoid data inconsistency during updates. You can relax this policy for initial loading, as each instance or graph is sent only once.

Use a notification MBO to indicate that loading is complete.

Adjust the change-detection interval to satisfy any data freshness requirements.
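As an illustration of what the EIS sends, the sketch below builds a DCN-style upsert payload. The JSON field names (`pkg`, `messages`, `mbo`, `op`, `cols`) and the servlet parameters in the comment are modeled on typical SUP 2.x DCN usage and are assumptions here; verify them against the documentation for your server version before use:

```python
import json

def build_dcn_request(package, mbo, rows):
    """Build a DCN-style upsert payload.

    Field names are illustrative, based on common SUP 2.x examples;
    check your server version's DCN documentation for the exact format.
    """
    messages = [
        {"id": str(i), "mbo": mbo, "op": ":upsert", "cols": row}
        for i, row in enumerate(rows, start=1)
    ]
    return {"pkg": package, "messages": messages}

payload = build_dcn_request(
    "refdata:1.0",               # hypothetical package name
    "Product",                   # hypothetical MBO name
    [{"id": "42", "name": "Widget"}],
)
# The encoded payload would then be POSTed to the DCN servlet on
# Unwired Server (URL and parameter names depend on your deployment).
body = json.dumps(payload)
```

For initial loading, several such streams can run in parallel; for updates, keep one queue per MBO or object graph as described above.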

Parallel Push

DCN requests are potentially processed in parallel by multiple threads or, in a clustered deployment, by multiple Unwired Server nodes. To avoid race conditions, serialize requests for a particular MBO or MBO graph: send a request only after the previous one completes. The EIS initiating the push must guarantee this ordering. Unwired Server does not return a completion notification to the EIS until the DCN request is fully processed.
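The required ordering can be sketched as follows (a simulation, not Unwired Platform code): each MBO graph gets its own sender that issues requests strictly one at a time, while different graphs push in parallel. Returning from `send_dcn` stands in for Unwired Server's completion notification:

```python
import threading
from collections import defaultdict

completed = []            # global log of finished requests
log_lock = threading.Lock()

def send_dcn(graph, seq):
    """Stand-in for one DCN request; Unwired Server replies only when the
    request is fully processed, so returning here means 'done'."""
    with log_lock:
        completed.append((graph, seq))

def push_graph(graph, requests):
    # Within one MBO graph, send strictly one request at a time:
    # the next request starts only after the previous one completes.
    for seq in requests:
        send_dcn(graph, seq)

# Different graphs may push concurrently; only intra-graph order matters.
threads = [
    threading.Thread(target=push_graph, args=(g, [1, 2, 3]))
    for g in ("Customer", "Product")
]
for t in threads:
    t.start()
for t in threads:
    t.join()

per_graph = defaultdict(list)
for graph, seq in completed:
    per_graph[graph].append(seq)
# Each graph's requests complete in order, however the threads interleave.
```

In practice the per-graph queue lives in the EIS, and "completion" is the synchronous DCN response from Unwired Server.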

Hybrid Load – Initial Pull and DCN Update



Ensure that the end of the initial load and the start of DCN updates are coordinated in the EIS, so that no updates are missed.

Use parallel loading via multiple cache groups and partitions. Once the DCN update is enabled, there is always a single partition.
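One way to sketch the coordination point (illustrative only; the class and method names are not Unwired Platform APIs) is to buffer changes that arrive while the initial pull is still running and replay them once the pull completes, so no update falls into the gap between the two phases:

```python
class HybridLoader:
    """Sketch of hybrid-load coordination: DCN changes that arrive during
    the initial pull are buffered, then replayed after the pull finishes."""

    def __init__(self):
        self.cache = {}
        self.initial_load_done = False
        self.pending = []  # DCN updates buffered during the pull

    def on_change(self, key, value):
        if not self.initial_load_done:
            self.pending.append((key, value))  # hold until the pull ends
        else:
            self.cache[key] = value            # normal DCN update path

    def finish_initial_pull(self, snapshot):
        self.cache.update(snapshot)
        # Replay buffered changes after the snapshot, assuming changes
        # received during the pull are newer than the snapshot values.
        for key, value in self.pending:
            self.cache[key] = value
        self.pending.clear()
        self.initial_load_done = True

loader = HybridLoader()
loader.on_change("p1", "updated-during-load")  # arrives mid-pull
loader.finish_initial_pull({"p1": "snapshot", "p2": "snapshot"})
loader.on_change("p2", "updated-after-load")   # normal DCN path
```

This mirrors the guidance above: the EIS must know when the initial load ends before it switches to steady-state DCN updates.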