HVAR Compilation and Bulk Apply

During compilation, HVAR rearranges the data to be replicated: it clusters the data by table and by operation type (insert, update, or delete), and then compiles the operations into net row operations.

HVAR distinguishes different data rows by the primary key defined in a replication definition. If there is no replication definition, all columns except for text and image columns are regarded as primary keys.

If a replicate table contains multiple unique keys, the primary key in the table replication definition must contain all columns that are named in the unique indexes. Otherwise, replication may produce duplicate-key errors.

For the combinations of operations found in normal replication environments, and for operations on the same table and on rows with identical primary keys, HVAR follows a fixed set of compilation rules, which the examples below illustrate. Other combinations of operations result in invalid compilation states.

Example 1

This is an example of log-order, row-by-row changes. In this example, T is a table created earlier by the command: create table T(k int, c int)

1. insert T values (1, 10)
2. update T set c = 11 where k = 1
3. delete T where k = 1
4. insert T values (1, 12)
5. delete T where k = 1
6. insert T values (1, 13)
With HVAR, the insert in 1 and the update in 2 can be combined into insert T values (1, 11). The combined insert and the delete in 3 cancel each other and can be removed. Likewise, the insert in 4 and the delete in 5 cancel each other and can be removed. The final compiled HVAR operation is the last insert in 6:
insert T values (1, 13)

Example 2

In another example of log-order, row-by-row changes:

1. update T set c = 14 where k = 1
2. update T set c = 15 where k = 1
3. update T set c = 16 where k = 1

With HVAR, the updates in 1 and 2 can be reduced to the update in 2. The updates in 2 and 3 can then be reduced to the single update in 3, which is the net row change for k = 1.
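The per-row reduction shown in Examples 1 and 2 can be sketched in Python. This is a hypothetical model of the rules the examples demonstrate (insert followed by update becomes an insert; insert followed by delete cancels; consecutive updates keep only the latest), not Replication Server code:

```python
# Hypothetical sketch of HVAR-style net-change compilation.
# Rules modeled from the examples: insert+update -> insert,
# insert+delete -> cancel, update+update -> later update.

def compile_net_changes(ops):
    """ops: list of (kind, key, value) in log order.
    Returns a dict: primary key -> list of net operations."""
    net = {}  # key -> pending net operations for that row
    for kind, key, value in ops:
        pending = net.setdefault(key, [])
        last = pending[-1] if pending else None
        if kind == "insert":
            pending.append(("insert", value))
        elif kind == "update":
            if last and last[0] == "insert":
                pending[-1] = ("insert", value)   # insert+update -> insert
            elif last and last[0] == "update":
                pending[-1] = ("update", value)   # update+update -> update
            else:
                pending.append(("update", value))
        elif kind == "delete":
            if last and last[0] == "insert":
                pending.pop()                     # insert+delete cancel out
            else:
                pending.append(("delete", None))
    return net

# Example 1: six log-order changes on the row with k = 1
ops1 = [("insert", 1, 10), ("update", 1, 11), ("delete", 1, None),
        ("insert", 1, 12), ("delete", 1, None), ("insert", 1, 13)]
print(compile_net_changes(ops1))   # {1: [('insert', 13)]}

# Example 2: three consecutive updates on the row with k = 1
ops2 = [("update", 1, 14), ("update", 1, 15), ("update", 1, 16)]
print(compile_net_changes(ops2))   # {1: [('update', 16)]}
```

Running the model against both examples yields the same net row operations the text derives by hand.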

Replication Server uses insert, delete, and update tables in an in-memory net-change database to store the net row changes that it applies to the replicate database. Net row changes are sorted by replicate table and by type of operation (insert, update, or delete), and are then ready for the bulk interface. HVAR loads insert operations into the replicate table directly. Because Adaptive Server does not support bulk update and delete, HVAR loads update and delete operations into temporary worktables that it creates inside the replicate database, and then performs join-update or join-delete operations with the replicate tables to achieve the final result. The worktables are created and dropped dynamically.

In Example 2, where compilation results in update T set c = 16 where k = 1:

  1. HVAR creates the #rs_uT(k int, c int) worktable.

  2. HVAR performs an insert into the worktable with this statement:
    insert into #rs_uT(k, c) location 'idemo.db' {select * from rs_uT}
  3. HVAR performs the join-update:
    update T set T.c=#rs_uT.c from T,#rs_uT where T.k=#rs_uT.k
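The join-update, and its join-delete counterpart for delete operations, can be illustrated with Python's sqlite3 standing in for Adaptive Server. This is a hypothetical sketch: the table and worktable names are borrowed from the example, the rs_dT delete worktable is an assumed analogue, and the T-SQL update ... from syntax is rewritten with subqueries for SQLite:

```python
# Hypothetical illustration of the join-update and join-delete steps,
# using sqlite3 in place of Adaptive Server.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table T (k int primary key, c int)")
cur.execute("insert into T values (1, 10), (2, 20)")

# Worktable holding the compiled net update for k = 1 (Example 2).
cur.execute("create temp table rs_uT (k int, c int)")
cur.execute("insert into rs_uT values (1, 16)")

# Join-update: apply the net change to the matching replicate rows.
cur.execute("""update T set c = (select c from rs_uT where rs_uT.k = T.k)
               where k in (select k from rs_uT)""")

# Assumed delete worktable holding a compiled net delete for k = 2.
cur.execute("create temp table rs_dT (k int)")
cur.execute("insert into rs_dT values (2)")

# Join-delete: remove the replicate rows matched by the worktable.
cur.execute("delete from T where k in (select k from rs_dT)")

print(cur.execute("select * from T order by k").fetchall())  # [(1, 16)]
```

After both bulk steps, only the updated row for k = 1 remains, matching the net result of applying the changes row by row.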

As HVAR compiles and combines a larger number of transactions into a group, bulk operation processing improves; therefore, replication throughput and performance also improve. You can control the amount of data that HVAR groups together for bulk apply by adjusting HVAR sizes with configuration parameters.

Although HVAR does not apply row changes in the same order in which the changes are logged, there is no data loss, because:

  • For different data rows, the order in which the row changes are applied does not affect the result.

  • For the same row, applying the delete before the insert after compilation maintains consistency.
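The second point can be demonstrated with a small hypothetical sketch: when a row's compiled net changes include both a delete and an insert (for example, the log sequence delete T where k = 1 followed by insert T values (1, 13)), applying the delete first lets the insert succeed, whereas inserting first would violate the primary key:

```python
# Hypothetical sketch: delete-before-insert ordering for the same row.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table T (k int primary key, c int)")
cur.execute("insert into T values (1, 12)")   # replicate state before apply

# Net changes for k = 1 after compilation: one delete, then one insert.
cur.execute("delete from T where k = 1")      # delete applied first
cur.execute("insert into T values (1, 13)")   # insert now succeeds

print(cur.execute("select * from T").fetchall())  # [(1, 13)]
```

Reversing the order would make the insert fail with a duplicate-key error, since the old row for k = 1 would still exist.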