Replacing data with new rows

If you use incremental transfer and then change the key values of some rows, the next transfer into the table treats the rows with changed keys as new data, but replaces the data in rows whose keys are unchanged.
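The replace-versus-insert rule can be sketched as an upsert keyed on the primary-key column. This is an illustrative Python simulation, not ASE internals, and the table contents are abbreviated:

```python
# Illustrative simulation (not ASE internals): transfer into a table keyed on
# the primary-key column behaves like a dict upsert -- an incoming key that
# already exists replaces that row's data; an unmatched key is inserted.

def transfer_in(table, incoming):
    """Apply incoming rows to table; both map primary key -> row data."""
    for key, data in incoming.items():
        table[key] = data     # replace on key match, insert otherwise
    return table

# Abbreviated table after renumbering: row 3 became row 13.
table = {1: "AAA", 2: "BBBBBB", 13: "CCCCCCCCC"}
# Previously exported data still carries the old key 3.
incoming = {1: "AAA", 2: "BBBBBB", 3: "CCCCCCCCC"}

transfer_in(table, incoming)
print(sorted(table))  # [1, 2, 3, 13] -- key 3 inserted as new, key 13 retained
```

The steps that follow demonstrate this behavior against the real transfer_example table.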

  1. transfer_example uses f1 as its primary-key column; Adaptive Server examines this column to determine whether an incoming row is new data or replaces an existing row.

    For example, change the keys of the rows with keys 3, 5, and 7 by adding 10 to each f1 value:

    update transfer_example
    set f1 = f1 + 10
    where f1 in (3,5,7)
    
    (3 rows affected)
    

    transfer_example now includes rows with keys 13, 15, and 17, which transfer considers new rows. When you transfer the previously exported data back into transfer_example, transfer reinserts rows with keys 3, 5, and 7, and retains the rows with keys 13, 15, and 17.

    transfer table transfer_example
    from 'transfer_example-data.ase'
    for ase
    
    (10 rows affected)
    
  2. Verify that, for f2 and f3, the data in row 3 matches row 13, row 5 matches row 15, and row 7 matches row 17:

    select * from transfer_example
    order by f1
    
    f1      f2                             f3
    ------- ------------------------------ ----------------------------
    1       AAA                             Jul 17 2009  4:40:14.465789PM 
    2       BBBBBB                          Jul 17 2009  4:40:14.488003PM 
    3       CCCCCCCCC                       Jul 17 2009  4:40:14.511749PM 
    4       DDDDDDDDDDDD                    Jul 17 2009  4:40:14.536653PM 
    5       EEEEEEEEEEEEEEE                 Jul 17 2009  4:40:14.559480PM 
    6       FFFFFFFFFFFFFFFFFF              Jul 17 2009  4:40:14.583400PM 
    7       GGGGGGGGGGGGGGGGGGGGG           Jul 17 2009  4:40:14.607196PM 
    8       HHHHHHHHHHHHHHHHHHHHHHHH        Jul 17 2009  4:40:14.632152PM 
    9       IIIIIIIIIIIIIIIIIIIIIIIIIII     Jul 17 2009  4:40:14.655184PM 
    10      JJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ  Jul 17 2009  4:40:14.678938PM 
    13      CCCCCCCCC                       Jul 17 2009  4:40:14.511749PM 
    15      EEEEEEEEEEEEEEE                 Jul 17 2009  4:40:14.559480PM 
    17      GGGGGGGGGGGGGGGGGGGGG           Jul 17 2009  4:40:14.607196PM 
    
    (13 rows affected)
    
  3. Transfer transfer_example out again: all 13 rows are transferred. Adaptive Server views the rows with keys 3, 5, and 7 as new because they replaced existing rows when you transferred them in (this example uses a tracking ID of 101):

    transfer table transfer_example
    to 'transfer_example-data-01.ase'
    for ase
    with tracking_id = 101
    
    (13 rows affected)
    
  4. Modify some rows to show that incremental transfer sends only the rows that have changed since the previous transfer (this update affects the 3 rows with keys greater than 10):

    update transfer_example
    set f3 = current_bigdatetime()
    where f1 > 10
    
    (3 rows affected)
    
  5. Transfer the table out again to verify that only the 3 changed rows are transferred. You need not specify for ase; Adaptive Server uses the value set in the previous transfer as the default.

    transfer table transfer_example
    to 'transfer_example-data-02.ase'
    with tracking_id = 102
    
    (3 rows affected)
    
  6. View the transfer information using the tracking_id from step 3:

    select id, sequence_id, end_code, ts_floor, ts_ceiling, row_count
    from spt_TableTransfer
    where id = object_id('transfer_example')
    and tracking_id = 101
    
    id          sequence_id end_code    ts_floor    ts_ceiling  row_count
    ----------- ----------- ----------- ----------- ----------- ---------
    592002109   3           0           5309        5716        13
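The ts_floor and ts_ceiling values above record the timestamp window each transfer covers; the next incremental transfer picks up only rows modified after the previous ceiling. A sketch of that mechanism, as an illustrative Python simulation (not ASE internals; the row counts mirror steps 3 through 5):

```python
# Illustrative simulation (not ASE internals): an incremental transfer sends
# only rows stamped after the previous transfer's ceiling, then records the
# new ceiling -- the role ts_floor/ts_ceiling play in spt_TableTransfer.

def incremental_transfer(rows, prev_ceiling):
    """Return (rows modified after prev_ceiling, new ceiling)."""
    sent = [r for r in rows if r["ts"] > prev_ceiling]
    new_ceiling = max(r["ts"] for r in rows)
    return sent, new_ceiling

rows = [{"f1": k, "ts": 100} for k in range(1, 14)]   # 13 rows, as in step 3
_, ceiling = incremental_transfer(rows, 0)            # first transfer sends all 13

for r in rows:                                        # step 4: update rows with f1 > 10
    if r["f1"] > 10:
        r["ts"] = 200

sent, _ = incremental_transfer(rows, ceiling)
print(len(sent))  # 3 -- only the modified rows go out, as in step 5
```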