Cache layout

Relaxed-durability databases, and the objects in them that are bound to any named cache, use the same layout as regular caches.

For a regular cache, the layout is dictated by the access pattern of the objects bound to it. Under the strict LRU cache replacement policy, Adaptive Server selects the least recently used (LRU) buffer and page pair for replacement. When pages for objects bound to the named cache (identified by the database, table, and index ID) are requested, Adaptive Server loads them from disk into the cache, where they remain until Adaptive Server selects them for replacement. Where in the cache a page is read depends on the availability of free buffers, or on which buffers are replaceable under the LRU or most recently used (MRU) strategy. In this configuration, both the buffer Adaptive Server uses to hold a page in memory and that buffer's location in the cache can vary.
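The strict LRU replacement described above can be sketched as a toy model. This is an illustrative sketch, not Adaptive Server's actual implementation; the class and parameter names (`PageCache`, `capacity`, `load_from_disk`) are invented for the example.

```python
from collections import OrderedDict

class PageCache:
    """Toy model of a named cache with strict LRU replacement.

    Pages are keyed by an identifier such as (database ID, object ID,
    page number). On a miss, the least recently used buffer/page pair
    is evicted to make room for the incoming page.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()  # key -> page contents, LRU first

    def read_page(self, key, load_from_disk):
        if key in self.buffers:
            # Cache hit: mark this buffer as most recently used.
            self.buffers.move_to_end(key)
            return self.buffers[key]
        if len(self.buffers) >= self.capacity:
            # Cache full: replace the LRU buffer/page pair.
            self.buffers.popitem(last=False)
        self.buffers[key] = load_from_disk(key)
        return self.buffers[key]
```

Because buffers are recycled on demand, the same page may land in a different buffer each time it is reloaded, matching the point above that a page's buffer and location can vary in a regular cache.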

Databases that use in-memory storage caches are bound to the cache, and all the objects and their indexes in such a database use the same cache. The entire database is hosted in the cache. All the pages in the database, both allocated and unallocated, are hashed, so page searches in the cache always find the required page. Pages are laid out sequentially (as shown in Figure 4-1), and because all the pages reside in the cache, the position of the buffer and page pair does not change.

Figure 4-1: Pages arranged sequentially in an in-memory database

[Image: a series of buffers pointing to pages in a sequentially laid-out array]
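The contrast with a regular cache can be sketched in the same toy style: every page is loaded sequentially up front and hashed, so a lookup always succeeds and the buffer/page pairing is fixed. The names (`InMemoryStorageCache`, `page_map`) are invented for this hypothetical sketch.

```python
class InMemoryStorageCache:
    """Toy model of an in-memory storage cache.

    All pages, allocated or not, are loaded sequentially at start-up
    and hashed, so a page search never misses and the buffer holding
    a given page never changes afterwards.
    """

    def __init__(self, pages):
        # Buffers laid out sequentially, one per database page.
        self.buffers = list(pages)
        # Every page is hashed to its fixed buffer slot.
        self.page_map = {pageno: pageno for pageno in range(len(pages))}

    def read_page(self, pageno):
        # No miss path and no replacement: the hash probe always
        # succeeds and returns the same buffer every time.
        return self.buffers[self.page_map[pageno]]
```

Unlike the regular cache, there is no eviction logic at all: with the whole database resident, replacement never occurs.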

In-memory storage caches support only the default pool (which uses the server page size for any logical read or write on a page). In-memory storage caches do not need large buffer pools, which are used to perform large I/O from disk. Asynchronous prefetch, a strategy for improving runtime I/O performance and reducing cache misses during page searches, is not supported for in-memory storage caches.

Because they do not store any data on disk, in-memory storage caches:

Cache partitions are supported for in-memory storage caches. They reduce contention on a single spinlock by distributing subsets of an in-memory database's pages across partitions, with each subset controlled by a separate spinlock.
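The partitioning idea can be sketched as follows. This is a hedged illustration, not Adaptive Server's implementation: `threading.Lock` stands in for a spinlock, and the names (`PartitionedCache`, `npartitions`) are invented for the example.

```python
import threading

class PartitionedCache:
    """Toy model of cache partitions.

    Pages are distributed across partitions by hashing the page
    number, and each partition is guarded by its own lock (standing
    in for a spinlock). Concurrent lookups that land on different
    partitions therefore never contend on the same lock.
    """

    def __init__(self, npartitions):
        self.locks = [threading.Lock() for _ in range(npartitions)]
        self.partitions = [{} for _ in range(npartitions)]

    def _slot(self, pageno):
        # Hash the page number to pick its partition.
        return hash(pageno) % len(self.partitions)

    def put(self, pageno, page):
        i = self._slot(pageno)
        with self.locks[i]:  # only this partition's lock is taken
            self.partitions[i][pageno] = page

    def get(self, pageno):
        i = self._slot(pageno)
        with self.locks[i]:
            return self.partitions[i].get(pageno)
```

With a single partition this degenerates to one lock protecting every page; adding partitions spreads the pages, and hence the lock traffic, across independent locks.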