Initiates an incremental table transfer.
transfer table [[db.]owner.]table [to | from] destination_file
	[for {ase | bcp | iq | csv}]
	[with {column_separator=string}, {column_order=option},
		{encryption=option}, {row_separator=string}, {resend=id},
		{progress=sss}, {tracking_id=nnn}, {sync=true | false},
		{fixed_length=true | false}, {null_byte=true | false}]
any valid table within Adaptive Server. transfer table requires sa_role or ownership of the table. Table owners can grant transfer table privileges for tables they own.
indicates the direction of the transfer. Not all parameters that are valid for transfer table...to can be used with transfer table...from. The parameters available with from are:
column_order=option (does not apply to a load using for ase; reserved for future use)
column_separator=string (does not apply to a load using for ase; reserved for future use)
encryption={true | false} (does not apply to a load using for ase; reserved for future use)
progress=nnn
row_separator=string (does not apply to a load using for ase; reserved for future use)
any file or path name that is valid for the operating system, and that Adaptive Server can access. If the file is a relative file path, Adaptive Server provides its absolute path. If Adaptive Server cannot open destination_file, it issues an error message.
Adaptive Server removes a destination file if all these conditions apply:
The file is a regular file, not a named pipe, symbolic link, or other special file.
Adaptive Server opens the file as part of this transfer.
The transfer fails, or sends no rows.
Names one of the destination data formats. If you omit the for clause, the default value of the first transfer of a specified table is for ase. Subsequent transfers default to the format used by the previous successful transfer, unless the command specifies format with resend = id, in which case the default is the specified prior transfer format.
ase – a format for importing data into Adaptive Server. This output format is a licensed feature, available to RAP customers and to customers with licenses for in-memory databases. No data transformations are applied. This file format includes a header that describes the table, including the source computer’s byte order, character set, and default sort order. This is the default value for tables that have had no successful prior transfers.
bcp – a format for importing data as bcp binary-formatted data. Rows are output as binary data that can be loaded using bcp. No data transformations are applied. As part of the transfer, Adaptive Server creates a format file, which bcp uses to describe the data, in the same directory as the output file.
You cannot use for bcp to transfer to a named pipe.
If the output file is any file type other than a named pipe, Adaptive Server uses this naming convention for the format file:
{table_name},{database_id},{object_id}.fmt
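For example, if mytable resides in a database with ID 4 and has object ID 592002109 (the IDs shown here are illustrative), the format file is named:
mytable,4,592002109.fmt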
iq – writes data in a form suitable for loading into Sybase IQ using IQ’s load table command. Adaptive Server writes data to the file in a binary format, and applies any necessary data transformations to convert its datatypes to versions that are compatible with IQ. Unless you include the with fixed_length='true' or with null_byte='true' modifiers, for iq writes data in a default format:
Default format – nullable data is followed by a “null byte,” a one-byte indicator containing a:
0 if the column is not null
1 if the column is null
Non-nullable data does not include this null byte (see IQ’s documentation for load table). Variable-length strings are preceded by one or two bytes indicating the length of the string, where the number of prefix bytes is determined by the column’s maximum length: one byte for strings up to 255 bytes, or two bytes for strings 256 bytes or longer (Adaptive Server supports strings up to about 16000 bytes). Except for strings, every column is transmitted as a fixed width, padded if necessary to expand it to this fixed-width size.
Use these modifiers to determine the data’s format:
with fixed_length='true' – all columns, including strings, are padded to their full width. Strings are padded with blanks; other columns are padded with <NUL> or 0x00. No column has a length indicator.
with null_byte='true' – all columns must have a null byte, whether or not the column is nullable. This qualifier forces for iq to use the fixed_length='true' modifier, regardless of what the command specifies.
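For example, a transfer that writes fully padded, fixed-width output for IQ might look like the following (the path is illustrative; quoting of the option value follows the with clause examples elsewhere in this section):
transfer table mytable to '/path/to/file' for iq with fixed_length='true'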
csv – a format of character-coded values. Rows are output as character-coded data. Columns are separated by a designated column separator, and rows are terminated by a designated row terminator. Separators and terminators are user-defined.
Provides options that modify the command’s operation.
declares a string written between output columns in csv format, replacing the default. The string written for subsequent transfers defaults to the previously specified column_separator.
declares the order in which column data is written to the output. These options are:
id – order by column ID, as given in syscolumns. This is the only acceptable column order when the transfer is for bcp, and is the default value for those transfers.
name – order by column name as given in syscolumns, using the Adaptive Server current character set and sort order.
name_utf8 – order by column name as given in syscolumns, translating the column name to UTF8 characters before sorting.
offset – order by column offset within the data row. This is the only acceptable column order when the transfer is for ase, and is the default value for those transfers.
Adaptive Server issues an error message if you use a column_order that does not match the for clause format. The column orders are:
for ase – use the offset column order
for bcp – use the id column order
Specifies how the command handles encrypted columns. Options are:
true – decrypt columns before transmission. This is the default value. The user must have permission to decrypt any encrypted columns.
false – transmit columns encrypted exactly as they appear in the data row.
To recover the data, the receiver must know the encryption key and algorithm used to encrypt it. If Adaptive Server writes encrypted data to a file, it writes the data as it was encrypted when it was first stored in the table. Transferring the data does not alter it. To recover that data, the receiver must know the key that encrypted the data, and any special features of the encryption algorithm (for example, whether it used an initialization vector).
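For example, a command that transmits encrypted columns exactly as they are stored might look like the following (the path is illustrative):
transfer table mytable to '/path/to/file' for csv with encryption='false'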
indicates that the transfer should generate progress messages every sss seconds during its operation. The default is to omit progress messages.
Declares a string to be written at the end of each output row in csv format, replacing the default. This option has no effect unless the transfer is for csv. As with column_separator, the default for all csv transfers after the first is the value used by the most recent successful transfer. The default row separator is platform-dependent: a line feed (Ctrl+J) on Linux and UNIX, or a carriage return and line feed (Ctrl+M Ctrl+J) on Windows.
identifies a history entry from a table whose sequence ID column supplies the starting timestamp for this data transfer. This option resends previously sent data. resend=id is ignored unless the table named in the command is marked for incremental transfer. If the specified sequence ID does not exist for this table, Adaptive Server resends the entire table.
Adaptive Server selects the indicated entry’s starting timestamp as the starting timestamp for this transfer, and the indicated entry’s destination type (ase, bcp, and so on) as the transfer’s default destination type.
Negative values for id retrieve history entries for previously completed successful transfers of the specified table: -1 names the most recently completed successful transfer, -2 the next most recent, and so forth. The transfer history table stores entries for both successful and failed transfers.
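For example, a command that resends the data captured by the most recent successful transfer of the table might look like the following (the path is illustrative):
transfer table mytable to '/path/to/file' for ase with resend=-1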
specifies an optional integer identifier that helps track a given transfer. Use the spt_TableTransfer.tracking_id column to determine the value for nnn and use the value in queries. This example returns the ending status and sequence ID of tracking ID number 123, along with the complete path to the output data file (returns NULL if these values do not exist):
select end_code, sequence_id, pathname from spt_TableTransfer where id = object_id('mytable') and tracking_id = 123
Adaptive Server does not control the tracking_id or require that it be unique.
This tracking ID is not the sequence ID used for resend=id.
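For example, a transfer tagged with tracking ID 123, which can later be located with the spt_TableTransfer query shown above, might look like the following (the path is illustrative):
transfer table mytable to '/path/to/file' for ase with tracking_id=123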
determines how the transfer interacts with transactions. Options are:
true – the transfer is synchronized so that rows from the table included in the transfer are captured as a group. transfer waits until all transactions affecting this table end before it starts. New transactions that affect this table cannot modify the table while transfer waits to start. Instead, they wait until transfer begins. While transfer is in progress, transactions cannot modify rows in this table until they are inspected by transfer.
false – the transfer is not synchronized. transfer sends rows that are within the selected timestamp range, regardless of whether it can send other rows from the table. This is the default behavior.
sync affects only the tables you are transferring. transfer does not consider cross-table constraints.
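For example, a synchronized transfer might look like the following (the path is illustrative; quoting of the option value follows the with clause examples elsewhere in this section):
transfer table mytable to '/path/to/file' for ase with sync='true'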
determines whether transfer...for iq transfers all columns as fixed-length fields in the output file. Typically, Adaptive Server transfers varying-length strings with a 1- or 2-byte prefix length. Setting fixed_length to true causes Adaptive Server to pad strings with blanks until they reach the column's maximum width. You can use this parameter only with for iq. Setting fixed_length to:
true – Adaptive Server pads strings to their full width instead of using a prefix length.
false – Adaptive Server sends strings using the default behavior, with a prefix length.
determines whether transfer...for iq appends a byte to the end of each transmitted column, indicating whether the column is null. Typically, Adaptive Server provides this byte only for nullable columns. Options are:
true – Adaptive Server includes a null byte at the end of every column, whether or not the column is nullable (0 if the column is not null, 1 if it is null). true forces for iq to use the fixed_length='true' modifier, regardless of what you specified with the transfer command.
false – Adaptive Server provides a null byte only for nullable columns.
Regardless of whether it is set to true or false, null_byte applies only to transfers that include the for iq clause.
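For example, a command that forces a null byte on every column, and therefore fixed-length output, might look like the following (the path is illustrative):
transfer table mytable to '/path/to/file' for iq with null_byte='true'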
Grants permission for user “john” to transfer table mytable:
grant transfer table on mytable to john
Transfers mytable to an output file, formatted for loading into Sybase IQ. If this example did not include name_utf8, the column order would default to column ID order:
transfer table mytable to '/path/to/file' for iq with column_order = 'name_utf8'
Transfers mytable, formatted for the Adaptive Server file format, which uses a column output order of offset. This example requests resend from a history entry that does not exist; therefore, the entire table is transferred:
transfer table mytable to '/path/to/file3/' for ase with resend=10
The example also establishes the defaults for later transfers of this table: after the transfer, the default receiver is ase, the column order is offset, and the column and row separators are null.
transfer table sends only committed data that has changed since the previous transfer.
The string argument to column_separator and row_separator in the with clause may be up to 64 bytes long, and may contain formatting instructions:
“\b” indicates a backspace, <BS> (Ctrl+H).
“\n” indicates a newline, <LF> (Ctrl+J).
“\r” indicates a carriage return, <CR> (Ctrl+M).
“\t” indicates a <TAB> (Ctrl+I).
“\\” indicates a backslash.
Any “\” in the string that is not part of one of these sequences is an actual backslash, and appears as such in the string.
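For example, a csv transfer that writes a tab between columns and a carriage return/line feed pair at the end of each row might look like the following (the path is illustrative):
transfer table mytable to '/path/to/file.csv' for csv with column_separator='\t', row_separator='\r\n'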
transfer table...from does not fire triggers during updates or inserts.
When transfer table encounters an error (such as a duplicate key), Adaptive Server displays only an error number, making it difficult to determine the cause of the error. For example:
Msg 2633, Level 20, State 1
Server 'SYB155', Line 1
TRANSFER TABLE failed to insert a row to table 'my_tab'. The indicated error was 2601.
Msg 16025, Level 16, State 1
Server 'SYB155', Line 1
TRANSFER TABLE my_tab: command failed with status 2633.
To retrieve the error message, manually query master..sysmessages. For example, if 2601 is your error number, enter:
select * from master..sysmessages where error = 2601
See the Troubleshooting Guide for more information about error 2601.
You can use transfer table for tables that are not marked for incremental transfer, with these restrictions:
Not all rows are always transferred. If a user updates the table while the transfer is in progress, the updated rows may not be transferred.
The transfer is not incremental; you can transfer only the entire table, and subsequent transfers have no record of this transfer.
No history entry is written to spt_TableTransfer. The transfer appears in monTableTransfer for the duration of the transfer, but once the transfer is complete the record vanishes.
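To observe such a transfer while it is running, you can query the monitoring table directly; a minimal query (shown here without a filter, which you can narrow to the table's object ID) is:
select * from master..monTableTransfer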
Permission to transfer a table does not automatically grant permission to decrypt data within that table. To decrypt any encrypted columns, you must gain specific permission from the table owner.
The following describes permission checks for transfer table that differ based on your granular permissions settings.
Setting | Description
---|---
Granular permissions enabled | With granular permissions enabled, you must be the table owner or a user with transfer table permission on the table.
Granular permissions disabled | With granular permissions disabled, you must be the table owner, a user with transfer table permission, or a user with sa_role.
Values in event and extrainfo columns of sysaudits are:
Event | Audit option | Command or access audited | Information in extrainfo
---|---|---|---
136 | transfer table | transfer table |