HVAR compilation and bulk apply

During compilation, HVAR rearranges the data to be replicated by clustering it by table and by insert, update, and delete operation, and then compiling the operations into net row operations.

HVAR distinguishes different data rows by the primary key defined in a replication definition. If there is no replication definition, all columns except for text and image columns are regarded as primary keys.

For the combinations of operations found in normal replication environments, HVAR compiles successive operations on a given table and rows with identical primary keys into a single net operation; for example, an insert followed by an update compiles to an insert, an insert followed by a delete cancels out, and successive updates compile to the last update.

Other combinations of operations result in invalid compilation states.

Example 1 This is an example of log-order, row-by-row changes:

0. create table T (k int, c int)
1. insert T values (1, 10)
2. update T set c = 11 where k = 1
3. delete T where k = 1
4. insert T values (1, 12)
5. delete T where k = 1
6. insert T values (1, 13)

With HVAR, the insert in 1 and the update in 2 can be converted to insert T values (1, 11). The converted insert and the delete in 3 cancel each other and can be removed. Likewise, the insert in 4 and the delete in 5 cancel each other and can be removed. The final compiled HVAR operation is the last insert, in 6:

insert T values (1, 13)

Example 2 In another example of log-order, row-by-row changes:

1. update T set c = 14 where k = 1
2. update T set c = 15 where k = 1
3. update T set c = 16 where k = 1

With HVAR, the updates in 1 and 2 can be reduced to the update in 2. The updates in 2 and 3 can then be reduced to the single update in 3, which is the net row change for k = 1.
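The compilation in Examples 1 and 2 can be sketched in Python. This is a hypothetical illustration, not SAP code: the rules for insert+update, insert+delete, and update+update follow the two examples directly, while the update+delete and delete+insert rules are assumptions about the remaining valid combinations.

```python
def compile_ops(ops):
    """Reduce log-ordered (op, key, value) changes to net row changes."""
    net = {}  # primary key -> ("insert" | "update" | "delete", value)
    for op, key, val in ops:
        prev = net.get(key)
        if prev is None:
            net[key] = (op, val)
        elif prev[0] == "insert" and op == "update":
            net[key] = ("insert", val)    # insert + update -> insert
        elif prev[0] == "insert" and op == "delete":
            del net[key]                  # insert + delete cancel out
        elif prev[0] == "update" and op == "update":
            net[key] = ("update", val)    # keep only the latest update
        elif prev[0] == "update" and op == "delete":
            net[key] = ("delete", None)   # assumed rule
        elif prev[0] == "delete" and op == "insert":
            net[key] = ("update", val)    # assumed rule
        else:
            raise ValueError("invalid compilation state")
    return net

# Example 1: six log-ordered changes compile to a single insert.
ops1 = [
    ("insert", 1, 10), ("update", 1, 11), ("delete", 1, None),
    ("insert", 1, 12), ("delete", 1, None), ("insert", 1, 13),
]
print(compile_ops(ops1))   # {1: ('insert', 13)}
```

Running the same function on Example 2's three updates yields the single net update with c = 16.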

Replication Server uses insert, delete, and update tables in an in-memory net change database to store the net row changes that it applies to the replicate database. Net row changes are sorted by replicate table and by type of operation (insert, update, or delete), and are then ready for the bulk interface. HVAR loads insert operations into the replicate table directly. Because Adaptive Server does not support bulk update and delete, HVAR loads update and delete operations into temporary work tables that it creates inside the replicate database, and then performs join-update or join-delete operations with the replicate tables to achieve the final result. The work tables are created and dropped dynamically.
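The bucketing step can be sketched as follows. This is a hypothetical illustration, not SAP code: net row changes are grouped by replicate table and operation type so that each bucket can be sent through one bulk operation.

```python
from collections import defaultdict

def group_for_bulk(net_changes):
    """net_changes: iterable of (table, op, key, value) net rows."""
    buckets = defaultdict(list)   # (table, op) -> list of rows
    for table, op, key, val in net_changes:
        buckets[(table, op)].append((key, val))
    return dict(buckets)

changes = [
    ("T", "insert", 1, 13), ("T", "update", 2, 16),
    ("T", "insert", 3, 20), ("S", "delete", 9, None),
]
buckets = group_for_bulk(changes)
# The insert bucket goes straight to T via the bulk interface; update
# and delete buckets would go to work tables for join-update/join-delete.
print(buckets[("T", "insert")])   # [(1, 13), (3, 20)]
```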

Taking Example 2 where compilation results in update T set c = 16 where k = 1:

  1. HVAR creates the #rs_uT(k int, c int) work table.

  2. HVAR performs an insert into the work table with this statement:

    insert into #rs_uT(k, c) location 'idemo.db' {select * from rs_uT}
    
  3. HVAR performs the join-update:

    update T set T.c=#rs_uT.c from T,#rs_uT where T.k=#rs_uT.k
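
The statements in these steps can be sketched as generated SQL. This is a hypothetical illustration, not SAP code: the work-table naming (#rs_uT, #rs_dT) and join-update form follow the example above, while the join-delete form is an assumed counterpart built on the same pattern.

```python
def work_table_sql(table, key_cols, data_cols, op):
    """Build the join-update or join-delete statement for a work table."""
    # Work-table name follows the #rs_uT naming seen in the example;
    # #rs_d<table> for deletes is an assumption.
    wt = f"#rs_{'u' if op == 'update' else 'd'}{table}"
    on = " and ".join(f"{table}.{k}={wt}.{k}" for k in key_cols)
    if op == "update":
        sets = ", ".join(f"{table}.{c}={wt}.{c}" for c in data_cols)
        return f"update {table} set {sets} from {table},{wt} where {on}"
    return f"delete {table} from {table},{wt} where {on}"

print(work_table_sql("T", ["k"], ["c"], "update"))
# update T set T.c=#rs_uT.c from T,#rs_uT where T.k=#rs_uT.k
```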
    

As HVAR compiles and combines a larger number of transactions into a group, bulk operation processing improves; therefore, replication throughput and performance also improve. You can control the amount of data that HVAR groups together for bulk apply by adjusting HVAR sizes with the parameters in "Configuring HVAR".

Although HVAR does not apply row changes in the same order in which the changes are logged, there is no data loss, because the compiled net row changes leave each row in the same final state as applying the logged changes in order.

See “HVAR processing and limitations” for limitations to HVAR processing.