When you load data, you can adjust several factors to improve load performance:
Use the LOAD TABLE command whenever you have access to raw data in ASCII or binary format, especially for loads of more than a hundred rows. The LOAD TABLE command is the fastest insertion method.
When loading from a flat file, use binary data if you have a choice between binary and character data. Binary input can improve performance by eliminating conversion costs and reducing I/O.
Set LOAD TABLE command options appropriately, as described in “Bulk loading data using the LOAD TABLE statement.” In particular, if you have sufficient memory to do so, or if no other users are active during the load, increase the BLOCK FACTOR.
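For example, a LOAD TABLE statement along the following lines loads a comma-delimited flat file with an increased block factor; the table name, column list, file path, and option values are illustrative assumptions, not required settings:

LOAD TABLE sales_fact
    ( store_id, sale_date, amount )
FROM '/rawdata/sales_fact.dat'   -- flat file, ideally on a drive separate from the database file
FORMAT ascii                     -- use FORMAT binary when the raw data is binary, to avoid conversion costs
DELIMITED BY ','
ESCAPES OFF
QUOTES OFF
BLOCK FACTOR 20000               -- larger block factor; raise it when memory allows or no other users are active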
If you are logging ignored integrity constraint violations, Sybase strongly recommends setting the LOAD TABLE IGNORE CONSTRAINT option limit to a nonzero value; logging an excessive number of violations slows the load.
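For example, a sketch of this kind of load (the names, paths, and limit are assumptions; check the LOAD TABLE reference for the exact IGNORE CONSTRAINT syntax in your release):

LOAD TABLE sales_fact
FROM '/rawdata/sales_fact.dat'
ESCAPES OFF
QUOTES OFF
IGNORE CONSTRAINT ALL 1000                  -- nonzero limit caps how many violations are ignored and logged
MESSAGE LOG '/rawdata/logs/sales_msg.log'
ROW LOG '/rawdata/logs/sales_row.log'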
Place data files on a separate physical disk drive from the database file, to avoid excessive disk head movement during the load.
Increase the size of the database cache. Providing enough memory for the load is a key performance factor. Use the -iqmc and -iqtc command-line options to increase the main and temporary cache sizes; see "Server command-line switches" in Chapter 1 of the Utility Guide for details. For these options to take effect, you must restart the server.
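For example, a server start command along these lines raises both caches. The utility name varies by release (start_asiq or start_iq), and the server name, database path, and sizes in megabytes are assumptions:

start_iq -n iq_demo -iqmc 2000 -iqtc 1000 /iqdata/iq_demo.db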
Adjust the amount of heap memory used by load operations by using the SET OPTION command to change the LOAD_MEMORY_MB option. When LOAD_MEMORY_MB is set to the default (0), Sybase IQ uses the amount of heap memory that gives the best performance. If your system runs out of virtual memory, specify a value less than 2000 and decrease the value until the load works. For insertions into wide tables, you may need to set LOAD_MEMORY_MB to a low value (100-200 MB). If you set the value too low, it may be physically impossible to load the data. Note that this option also affects INSERT, UPDATE, SYNCHRONIZE, and DELETE operations.
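For example, a minimal sketch; the value shown is an assumption to tune for your system:

SET OPTION "PUBLIC".LOAD_MEMORY_MB = 200;   -- lower this further if the load still runs out of virtual memory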
If you are loading very wide varchar data, reduce the value of the LOAD_MEMORY_MB database option and the BLOCK FACTOR option in the LOAD TABLE command. As with all performance tuning, adjusting these values may require experimentation.
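For example, a sketch for a load of very wide varchar rows (the table, file, and values are assumptions; experiment to find settings that work on your system):

SET OPTION "PUBLIC".LOAD_MEMORY_MB = 100;

LOAD TABLE wide_documents
    ( doc_id, doc_text )
FROM '/rawdata/wide_documents.dat'
DELIMITED BY '|'
ESCAPES OFF
QUOTES OFF
BLOCK FACTOR 1000                -- reduced block factor for very wide rows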
Adjust the degree of buffer partitioning for your database or server, to avoid lock contention. Buffer partitioning based on the number of CPUs is enabled by default, and can be adjusted by setting the -iqpartition server command line option or the Cache_Partitions database option. See “Managing lock contention” for more information.
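For example, a sketch that sets the database option explicitly; the value of 8 is an assumption, and the default of 0 lets the server choose the partitioning based on the number of CPUs:

SET OPTION "PUBLIC".Cache_Partitions = 8;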
Ensure that only one user at a time updates the database. While users can insert data into different tables at the same time, concurrent updates can slow performance.
Schedule major updates for low usage times. Although many users can query a table while it is being updated, query users require CPU cycles, disk space, and memory. You will want these resources available to make your inserts go faster.
If you are using the INSERT statement, run DBISQL or the client application on the same machine as the server if possible. Loading data over the network adds extra communication overhead. Running the load on the server machine might mean loading new data during off hours.
If you are using INSERT...LOCATION to load large amounts of text or bulk data across a network from a remote Adaptive Server Enterprise database, use the PACKETSIZE parameter of the LOCATION clause to increase the TDS packet size. This change may significantly improve load performance. For details on the syntax of the INSERT statement, see Chapter 1, “SQL Statements,” in Reference: Statements and Options.
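For example, a sketch along these lines (the server, database, and table names and the packet size are assumptions) pulls rows from a remote Adaptive Server Enterprise database using a larger TDS packet size:

INSERT INTO customers
LOCATION 'ase_prod.sales' PACKETSIZE 2048
{ SELECT cust_id, cust_name, region FROM dbo.customers };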