During compilation, HVAR rearranges the data to be replicated by clustering it by table and by operation type (insert, update, or delete), and then compiling the operations into net row operations.
HVAR distinguishes data rows by the primary key defined in the replication definition. If there is no replication definition, all columns except text and image columns are treated as the primary key.
For the combinations of operations found in normal replication environments, and given a table and row with identical primary keys, HVAR follows these compilation rules:
An insert followed by a delete results in no operation.
A delete followed by an insert results in no reduction.
An update followed by a delete results in a delete.
An insert followed by an update results in an insert where the two operations are reduced to a final single operation that contains the results of the first operation, overwritten by any differences in the second operation.
An update followed by another update results in an update where the two operations are reduced to a final single operation that contains the results of the first operation, overwritten by any differences in the second operation.
Other combinations of operations result in invalid compilation states.
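The pairwise rules above can be sketched as a small simulation. This is a minimal, illustrative Python model, not Replication Server's internal implementation; the function and table names are hypothetical.

```python
# Maps (earlier_op, later_op) -> net operation. None means the pair cancels
# to no operation; a missing key means the pair cannot be reduced
# (e.g. a delete followed by an insert).
REDUCTION_RULES = {
    ("insert", "delete"): None,        # insert then delete -> no operation
    ("update", "delete"): "delete",    # update then delete -> delete
    ("insert", "update"): "insert",    # insert then update -> insert
    ("update", "update"): "update",    # update then update -> update
}

def compile_ops(ops):
    """Reduce a log-ordered list of operations on one primary-key value."""
    net = []
    for op in ops:
        if net and (net[-1], op) in REDUCTION_RULES:
            reduced = REDUCTION_RULES[(net.pop(), op)]
            if reduced is not None:
                net.append(reduced)
        else:
            net.append(op)  # no applicable rule: keep both operations
    return net

# Six logged operations on the same row, as in the first example below
log = ["insert", "update", "delete", "insert", "delete", "insert"]
print(compile_ops(log))   # ['insert'] -- a single net insert remains
```

Running the sketch on a log-ordered sequence shows how successive pairs collapse until only the net operation per row is left.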
This is an example of log-order, row-by-row changes. In this example, T is a table created earlier by the command: create table T (k int, c int)
1. insert T values (1, 10)
2. update T set c = 11 where k = 1
3. delete T where k = 1
4. insert T values (1, 12)
5. delete T where k = 1
6. insert T values (1, 13)

HVAR compiles these six operations into a single net-change operation:

insert T values (1, 13)
In another example of log-order, row-by-row changes:
1. update T set c = 14 where k = 1
2. update T set c = 15 where k = 1
3. update T set c = 16 where k = 1
With HVAR, the updates in 1 and 2 can be reduced to the single update in 2. The updates in 2 and 3 can then be reduced to the single update in 3, which is the net row change for k = 1.
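The folding of two operations into one, where the second operation's values overwrite the first's, can be sketched as follows. This is an illustrative Python model with hypothetical names, not HVAR's actual internal representation.

```python
def merge_row_changes(first, second):
    """Combine two column-value maps for the same primary key: start from
    the first operation's values, then overwrite with the second's."""
    merged = dict(first)
    merged.update(second)   # later values win, per the compilation rules
    return merged

# The three successive updates to the row with k = 1 from the example above
u1 = {"k": 1, "c": 14}
u2 = {"k": 1, "c": 15}
u3 = {"k": 1, "c": 16}
net = merge_row_changes(merge_row_changes(u1, u2), u3)
print(net)   # {'k': 1, 'c': 16} -- the net row change for k = 1
```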
Replication Server uses an insert, a delete, and an update table in an in-memory net-change database to store the net row changes that it applies to the replicate database. Net row changes are sorted by replicate table and by type of operation (insert, update, or delete), and are then ready for the bulk interface. HVAR loads insert operations into the replicate table directly. Because Adaptive Server does not support bulk update and delete operations, HVAR loads update and delete operations into temporary worktables that it creates inside the replicate database, and then performs join-update or join-delete operations with the replicate tables to achieve the final result. The worktables are created and dropped dynamically.
In Example 2, compilation results in a single net row change: update T set c = 16 where k = 1.
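The apply phase described above can be sketched with an in-memory model. Here the replicate table is a dict keyed by primary key, and the join-update and join-delete against worktables are simulated by key lookups; all names are illustrative, not Replication Server APIs.

```python
def apply_net_changes(replicate_table, inserts, updates, deletes):
    """Apply net row changes grouped by operation type.
    replicate_table: {pk: row_dict}; each change is a row dict whose
    'k' column is the primary key."""
    # Inserts are loaded into the replicate table directly.
    for row in inserts:
        replicate_table[row["k"]] = dict(row)
    # Updates act like a join-update: match worktable rows to replicate
    # rows on the primary key and overwrite the matching columns.
    for row in updates:
        if row["k"] in replicate_table:
            replicate_table[row["k"]].update(row)
    # Deletes act like a join-delete on the primary key.
    for row in deletes:
        replicate_table.pop(row["k"], None)
    return replicate_table

table = {1: {"k": 1, "c": 10}, 2: {"k": 2, "c": 20}}
apply_net_changes(table,
                  inserts=[{"k": 3, "c": 30}],
                  updates=[{"k": 1, "c": 16}],
                  deletes=[{"k": 2}])
print(table)   # {1: {'k': 1, 'c': 16}, 3: {'k': 3, 'c': 30}}
```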
As HVAR compiles and combines a larger number of transactions into a group, bulk operation processing improves, and therefore replication throughput and performance also improve. You can control the amount of data that HVAR groups together for bulk apply by adjusting HVAR group sizes with configuration parameters.
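The effect of a size cap on transaction grouping can be sketched as follows. The threshold and function names here are illustrative stand-ins for HVAR's configuration parameters, which this section does not name.

```python
def group_transactions(txns, max_group_rows):
    """Accumulate transactions into groups of at most max_group_rows rows,
    flushing a group whenever the next transaction would exceed the cap."""
    groups, current, count = [], [], 0
    for txn in txns:
        if current and count + len(txn) > max_group_rows:
            groups.append(current)
            current, count = [], 0
        current.append(txn)
        count += len(txn)
    if current:
        groups.append(current)
    return groups

# Five transactions of three rows each, capped at seven rows per group:
# a larger cap yields fewer, bigger groups for the bulk interface.
txns = [["row"] * 3 for _ in range(5)]
print([len(g) for g in group_transactions(txns, 7)])   # [2, 2, 1]
```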
Although HVAR does not apply row changes in the same order in which the changes are logged, there is no data loss.