Adaptive Server creates a new client task for every new connection. It fulfills a client request as outlined in the following steps:
The client program establishes a network socket connection to Adaptive Server.
Adaptive Server assigns a task from the pool of tasks, which are allocated at start-up time. The task is identified by the Adaptive Server process identifier, or spid, which is tracked in the sysprocesses system table.
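For example, you can see the spid assigned to each active task by querying sysprocesses directly, or by running sp_who; the column list below is just an illustrative subset.

    -- List active tasks and the spid assigned to each (illustrative column subset)
    select spid, status, hostname, program_name, cmd
    from master..sysprocesses
    -- sp_who reports similar per-task information in a formatted form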
Adaptive Server transfers the context of the client request, including information such as permissions and the current database, to the task.
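A small sketch of that per-task context as seen from the client side, using built-in functions that report the current task's spid, login, and current database:

    -- Each task carries its own context: process id, login, and current database
    select @@spid       as task_spid,
           suser_name() as login_name,
           db_name()    as current_database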
Adaptive Server parses, optimizes, and compiles the request.
If parallel query execution is enabled, Adaptive Server allocates subtasks to help perform the query in parallel. The subtasks are called worker processes, which are discussed in Performance & Tuning: Optimizer.
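Parallel execution takes place only if worker processes have been configured; a minimal sketch with illustrative values (the numbers here are examples, not recommendations):

    -- Size the server-wide pool of worker processes
    exec sp_configure "number of worker processes", 20
    -- Cap the degree of parallelism for any single query
    exec sp_configure "max parallel degree", 4
    go
    -- Optionally lower the limit for the current session only
    set parallel_degree 4
    go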
Adaptive Server executes the task. If the query is executed in parallel, the task merges the results of the subtasks.
The task returns the results to the client, using Tabular Data Stream (TDS) packets.
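The size of those packets is governed by the network packet size configuration parameters; the values below are illustrative, and a client can request a larger packet size at connect time (for example, with the isql -A option).

    -- Default and maximum TDS packet sizes, in bytes (illustrative values)
    exec sp_configure "default network packet size", 2048
    exec sp_configure "max network packet size", 4096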
For each new user connection, Adaptive Server allocates a private data storage area, a dedicated stack, and other internal data structures.
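How many of these per-connection structures exist, and how large each task's stack is, are controlled by configuration parameters; a sketch with illustrative values, assuming a login with sa_role:

    -- Maximum number of concurrent user connections the server will accept
    exec sp_configure "number of user connections", 100
    -- Run with no value to report the current per-task stack size
    exec sp_configure "stack size"
    -- sp_monitorconfig shows how many user connections are actually in use
    exec sp_monitorconfig "number of user connections"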
Adaptive Server uses the stack to keep track of each client task's state during processing, and it uses synchronization mechanisms such as queueing, locking, semaphores, and spinlocks to ensure that only one task at a time has access to any common, modifiable data structures. These mechanisms are necessary because Adaptive Server processes multiple queries concurrently; without them, two or more queries accessing the same data could compromise data integrity.
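Contention on these synchronization mechanisms can be observed with sp_sysmon, whose report includes lock and spinlock activity; the one-minute sample interval below is just an example, and running it typically requires sa_role.

    -- Collect one minute of server activity, including lock and spinlock statistics
    exec sp_sysmon "00:01:00"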
These data structures require minimal memory and minimal system resources for context-switching overhead. Some of them are connection-oriented and contain static information about the client.
Other data structures are command-oriented. For example, when a client sends a command to Adaptive Server, the executable query plan is stored in an internal data structure.
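You can see a readable form of the compiled plan from the client by turning on the showplan option before sending the command; a minimal example:

    -- Display the query plan that Adaptive Server builds for each command
    set showplan on
    go
    select count(*) from sysobjects
    go
    set showplan off
    go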