Target architecture

The SMP product is intended for symmetric multiprocessing machines with shared memory.

Adaptive Server consists of one or more cooperating processes that are scheduled onto physical CPUs by the operating system.

Adaptive Server uses multiple cooperative processes to leverage parallel hardware when running in process mode, and uses multiple threads from the same process when running in threaded mode. Each process in the process-mode kernel is an Adaptive Server engine. The threaded-mode kernel uses some threads as engines, and has additional nonengine threads.
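You select the mode with the kernel mode configuration parameter; the setting is static, so the change takes effect at the next server restart. A minimal sketch (sp_configure syntax as in recent releases; verify the parameter name against your version):

    -- switch the kernel to threaded mode (effective after restart)
    sp_configure "kernel mode", 0, "threaded"

    -- or switch back to process mode
    sp_configure "kernel mode", 0, "process"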

Process mode uses multithreaded processes. However, because Adaptive Server performs most of its work in the main thread of each process, consider these processes to be single-threaded when researching and tuning CPU resources.

Adaptive Server uses engines as its processing units to execute SQL queries. In process mode, the engine is the main thread of each process; in threaded mode, engines are threads from one or more engine thread pools. Engines communicate with one another through shared memory. Both process mode and threaded mode use shared memory, even in single-engine or uniprocessor environments.
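Because engines take a different form in each mode, the engine count is also configured differently. A brief sketch, assuming the default engine pool name syb_default_pool used in recent releases:

    -- threaded mode: engines are threads in engine pools
    alter thread pool syb_default_pool with thread count = 4

    -- process mode: engines are separate processes
    sp_configure "number of engines at startup", 4
    sp_configure "max online engines", 4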

The operating system schedules Adaptive Server threads onto CPU resources. Adaptive Server does not distinguish between physical processors, cores, or subcore threads.

When configured for threaded mode, Adaptive Server executes the engines within threads of a single operating system process. Adaptive Server acquires the threads that support engines from engine thread pools.
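In threaded mode you can also create additional engine thread pools alongside the default pool. A minimal sketch (the pool name order_pool is illustrative):

    -- create a pool of two engine threads
    create thread pool order_pool with thread count = 2

    -- resize the pool later if needed
    alter thread pool order_pool with thread count = 3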

Adaptive Server includes other nonengine threads that are used for particular tasks, but are not considered to be engines.

Figure 5-1: Threaded-mode architecture

Image of the architecture of an SMP environment. At the top level are the clients, CPUs, and disks. The next level is the operating system. The next level is the engines. Beneath this is the shared executable, consisting of the program memory, and at the bottom level is the shared memory.

Figure 5-2: Process-mode architecture

Image of the architecture of an SMP environment. At the top level are the clients, CPUs, and disks. The next level is the operating system. The next level is the engines. Beneath this is the shared executable, consisting of the program memory, and at the bottom level is the shared memory.

The operating system schedules Adaptive Server threads (engine and non-engine) onto physical CPU resources, which can be processors, cores, or subcore threads. The operating system—not Adaptive Server—assigns threads to CPU resources: Adaptive Server performance depends on receiving CPU time from the operating system.

Adaptive Server engines perform all database functions, including updates and logging. Adaptive Server—not the operating system—dynamically schedules client tasks onto available engines. A task is an execution environment within Adaptive Server, typically corresponding to a client connection or an internal service.

“Affinity” describes a binding in which certain Adaptive Server tasks run only on a certain engine (task affinity), certain engines handle network I/O for a certain task (network I/O affinity), or certain engines run only on a certain CPU (engine affinity).

In process mode, a connection to Adaptive Server has network I/O affinity with the engine that accepted the connection. This engine must do all the network I/O for that connection. Network I/O affinity does not exist in threaded mode because any engine can perform the I/O for any connection, which typically reduces context switching and improves performance.

You can use the logical process manager to establish task affinity so that a task, or set of tasks, runs only on a specific engine or specific set of engines. In threaded mode, use thread pools to accomplish task affinity. In process mode, use engine groups.

Thread pools and engine groups have different behaviors. Engines in a thread pool execute only tasks assigned to that thread pool. The scheduler search space (the area in which the scheduler searches for runnable tasks) is limited to engines in that thread pool. Engines in an engine group may run any task, as long as the task is not restricted to engines in a different group. That is, including a task in an engine group restricts where the task may run, but does not reserve the engines in the group for that task.
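The following sketch illustrates both approaches through the logical process manager (the group, class, and application names are hypothetical, and argument details may vary by release):

    -- Threaded mode: bind an application to the order_pool thread pool created earlier
    sp_addexeclass 'order_class', 'HIGH', 0, 'order_pool'
    sp_bindexeclass 'order_app', 'AP', NULL, 'order_class'

    -- Process mode: bind the same application to an engine group instead
    sp_addengine 2, 'order_group'
    sp_addengine 3, 'order_group'
    sp_addexeclass 'order_class', 'HIGH', 0, 'order_group'
    sp_bindexeclass 'order_app', 'AP', NULL, 'order_class'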

Configure task affinity using the Adaptive Server logical process manager (see Chapter 4, “Distributing Engine Resources,” in the Performance and Tuning Series: Basics). Configure engine or CPU affinity with dbcc tune or equivalent operating system commands (see the Reference Manual: Commands and your operating system documentation).
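On platforms that support it, engine-to-CPU affinity can be set with dbcc tune. A brief example (the starting CPU number is illustrative; this setting does not persist across restarts and must be reissued after each one):

    -- bind engine 0 to CPU 2, engine 1 to CPU 3, and so on
    dbcc tune(cpuaffinity, 2, "on")

    -- remove the CPU binding
    dbcc tune(cpuaffinity, -1, "off")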