When you install Adaptive Server, it has a single default data cache with a 2K memory pool, one cache partition, and a single spinlock.
To improve performance, you can add data caches and bind databases or database objects to them:
To reduce contention on the default data cache spinlock, divide the cache into n partitions, where n is 1, 2, 4, 8, 16, 32, or 64. If the spinlock contention is x with a single cache partition, contention is expected to drop to approximately x/n, where n is the number of partitions.
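As a sketch, partitioning is done with the `sp_cacheconfig` system procedure; the partition count shown here is illustrative:

```sql
-- Split the default data cache into 4 partitions to reduce
-- spinlock contention (depending on the server version, the
-- change may not take effect until the server is restarted)
sp_cacheconfig 'default data cache', 'cache_partition=4'
```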
When a particular cache partition spinlock is hot, consider splitting the default cache into named caches.
If there is still contention, consider splitting the named cache into named cache partitions.
You can configure 4K, 8K, and 16K buffer pools (multiples of the 2K logical page size) in both user-defined data caches and the default data cache, allowing Adaptive Server to perform large I/O. In addition, caches that are sized to completely hold tables or indexes can use relaxed LRU cache policy to reduce overhead.
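Large-I/O pools are created with `sp_poolconfig`; the pool size below is an assumed example:

```sql
-- Carve a 10MB 16K buffer pool out of the default data cache,
-- so Adaptive Server can read 8 contiguous 2K pages per I/O
sp_poolconfig 'default data cache', '10M', '16K'
```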
You can also split the default data cache or a named cache into partitions to reduce spinlock contention.
Configuring the data cache can improve performance in the following ways:
You can configure named data caches large enough to hold critical tables and indexes.
This keeps other server activity from contending for cache space and speeds up queries using these tables, since the needed pages are always found in cache.
You can configure these caches to use relaxed LRU replacement policy, which reduces the cache overhead.
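A minimal sketch of both steps, assuming an illustrative cache name and size:

```sql
-- Create a 100MB named cache for critical lookup tables and
-- set relaxed LRU replacement policy to reduce cache overhead
sp_cacheconfig 'lookup_cache', '100M', 'mixed', 'relaxed'
```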
You can bind a “hot” table—a table in high demand by user applications—to one cache and the indexes on the table to other caches to increase concurrency.
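Bindings are created with `sp_bindcache`; the cache, database, table, and index names here are hypothetical:

```sql
-- Bind a hot table to one cache and one of its indexes to another,
-- so table and index access do not contend for the same spinlock
sp_bindcache 'table_cache', 'salesdb', 'dbo.orders'
sp_bindcache 'index_cache', 'salesdb', 'dbo.orders', 'orders_ix'
```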
You can create a named data cache large enough to hold the “hot pages” of a table where a high percentage of the queries reference only a portion of the table.
For example, if a table contains data for a year, but 75% of the queries reference data from the most recent month (about 8% of the table), configuring a cache of about 10% of the table size provides room to keep the most frequently used pages in cache and leaves some space for the less frequently used pages.
You can assign tables or databases used in decision support systems (DSS) to specific caches with large I/O configured.
This keeps DSS applications from contending for cache space with online transaction processing (OLTP) applications. DSS applications typically access large numbers of sequential pages, and OLTP applications typically access relatively few random pages.
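Putting these pieces together, a DSS cache might be configured as follows (names and sizes are assumptions for illustration):

```sql
-- Create a cache for DSS tables, give it a large-I/O pool,
-- and bind a sequentially scanned history table to it
sp_cacheconfig 'dss_cache', '200M', 'mixed'
sp_poolconfig 'dss_cache', '100M', '16K'
sp_bindcache 'dss_cache', 'histdb', 'dbo.sales_history'
```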
You can bind tempdb to its own cache to keep it from contending with other user processes.
Proper sizing of the tempdb cache can keep most tempdb activity in memory for many applications. If this cache is large enough, tempdb activity can avoid performing I/O.
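Assuming a cache named `tempdb_cache` has already been created with `sp_cacheconfig`, the binding itself is a single call:

```sql
-- Dedicate a named cache to tempdb so temporary-table activity
-- does not compete with user databases for the default cache
sp_bindcache 'tempdb_cache', 'tempdb'
```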
Text pages can be bound to named caches to improve performance when accessing text and image data.
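Text and image pages are bound with the `text only` option of `sp_bindcache`; the cache name is illustrative, and `au_pix` is the image table from the `pubs2` sample database:

```sql
-- Bind only the text/image pages of a table to a named cache
sp_bindcache 'text_cache', 'pubs2', 'au_pix', 'text only'
```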
You can bind a database’s log to a cache, again reducing contention for cache space and access to the cache.
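A sketch of a dedicated log cache, with assumed names and sizes (binding `syslogs` may require the database to be in single-user mode, depending on the server version):

```sql
-- Create a log-only cache and bind the database's log to it
sp_cacheconfig 'log_cache', '20M', 'logonly'
sp_bindcache 'log_cache', 'salesdb', 'syslogs'
```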
When changes are made to a cache by a user process, a spinlock denies all other processes access to the cache.
Although spinlocks are held for extremely brief durations, they can slow performance in multiprocessor systems with high transaction rates. When you configure multiple caches, each cache is controlled by a separate spinlock, increasing concurrency on systems with multiple CPUs.
Within a single cache, adding cache partitions creates multiple spinlocks to further reduce contention. Spinlock contention is not an issue on single-engine servers.
Most of these possible uses for named data caches have the greatest impact on multiprocessor systems with high transaction rates or with frequent DSS queries and multiple users. Some of them can increase performance on single-CPU systems when they lead to improved memory utilization and reduced I/O.