Configuring the I/O controller

Adaptive Server includes I/O controllers and an I/O controller manager.

The I/O controller issues, tracks, polls, and completes I/Os. Each Adaptive Server I/O type—disk, network, Client-Library, and, for the Cluster Edition, CIPC—has its own I/O controller.

Adaptive Server can include multiple instances of disk or network controllers (multiple CIPC or Client-Library controllers are not allowed). Each task represents one controller. For example, configuring three network tasks means network I/O uses three controllers.

Each controller task is allocated a dedicated operating system thread. Additional tasks mean more CPU resources are dedicated to polling and completing I/O.

A single I/O controller per system is usually sufficient. However, systems with very high I/O rates or low single-thread performance may need additional controllers; otherwise, engines can become starved for I/O and throughput decreases.

Use the sp_sysmon “Kernel Utilization” section to determine whether additional I/O tasks are necessary. Consider additional I/O tasks if the report shows the existing I/O threads running near full utilization.
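As an illustration, a run such as the following produces the relevant report (the five-minute interval is arbitrary, and the `kernel` section argument is assumed to be supported on your release):

```sql
-- Sample server activity for five minutes and report the kernel section,
-- which includes "Kernel Utilization" and per-thread CPU figures.
sp_sysmon "00:05:00", kernel
go
```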

The mode for which you configure Adaptive Server determines how it handles I/O. In threaded mode—the default—Adaptive Server uses threaded polling for I/O; in process mode, Adaptive Server uses a polling scheduler for I/O.
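For reference, the kernel mode is itself set with sp_configure; a minimal sketch (the change takes effect only after a server restart):

```sql
-- Display the current kernel mode ('threaded' by default).
sp_configure "kernel mode"
go

-- Switch to process mode; the 0 is a placeholder required by
-- sp_configure's syntax for character-valued parameters.
sp_configure "kernel mode", 0, process
go
```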

In process mode, Adaptive Server assigns each engine its own network, disk, and Open Client controller. When the scheduler polls for I/O, it searches only the engine’s local controllers (except for CIPC, for which all engines share a single controller).

One benefit of process-mode polling is that scaling the number of engines also scales the amount of CPU available to manage I/O (that is, more engines = more CPU). However, you can also configure more engines than the I/O load requires, devoting more CPU time to polling than is necessary. Another performance implication is that the engine on which an I/O starts must also finish it: if a task running on engine 2 issues a disk I/O, engine 2 must complete that I/O, even if other engines are idle. Consequently, engines may remain idle while there is still work to perform, and I/Os may incur additional latency if the responsible engine is running a CPU-bound task.

When configured for threaded polling, the controller manager assigns each controller a task, and this task is placed into syb_system_pool. Because syb_system_pool is a dedicated pool, it creates a thread to service each task. This thread runs the polling and completion routine exclusively for the I/O controller. Because this thread is dedicated to performing this task, the task can block waiting for I/O completions, reducing the number of system calls and empty polls.
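To confirm which threads are servicing these controller tasks, you can list the server’s thread pools; this sketch assumes the sp_helpthread procedure available in threaded-mode releases:

```sql
-- List all thread pools; syb_system_pool hosts the dedicated
-- threads that run the I/O controller polling tasks.
sp_helpthread
go
```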

You can create multiple threads to service each I/O controller, allowing you to avoid single-thread saturation problems during which a single thread cannot keep up with a high rate of I/Os.

Process-mode polling can introduce latency: when an I/O completes at the operating system level, the engine may not detect the completion immediately because it is busy running another task. Threaded-mode polling eliminates this latency because the dedicated I/O thread processes the completion immediately; any remaining latency is a function of the device and is not affected by the CPU load that query execution places on the system.

In threaded mode, the query processor and user tasks need not context switch for I/O polling when they go through the scheduler. Threaded polling reduces the amount of time spent polling as a percentage of total CPU time for all threads, making Adaptive Server more efficient in CPU consumption.

Use sp_configure with number of disk tasks and number of network tasks to set the number of tasks dedicated to handling I/O and the thread polling method the tasks use.
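For example, to dedicate two controller tasks each to disk and network I/O (the values shown are illustrative, and a restart may be required depending on whether the parameter is static on your release):

```sql
-- Two disk I/O controllers, each backed by a thread in syb_system_pool.
sp_configure "number of disk tasks", 2
go

-- Two network I/O controllers.
sp_configure "number of network tasks", 2
go
```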

See Chapter 5, “Setting Configuration Parameters,” in System Administration Guide, Volume 1.

By default, each I/O task uses a thread from syb_system_pool, allowing the task to block during the I/O polling, reducing overhead from busy polling. During periods of low I/O load, these threads consume little physical CPU time. The CPU time for the I/O thread increases as the I/O load increases, but the amount of load increase depends on the processor performance and the I/O implementation.