By default, Adaptive Server uses a single network listener, which can run on any engine. When Adaptive Server receives a connection request, the engine that accepts the connection becomes the network engine for that connection, and when the corresponding task performs network I/O, it must migrate to this engine.
If many client connections arrive at the same time, Adaptive Server may be unable to reschedule the listener to another engine between connection requests (unless the listener yields because it exhausts its timeslice), and it accepts all of the connections on the same engine. Because each of the corresponding tasks must migrate to this engine to perform network I/O, the engine becomes a bottleneck. Use the Network I/O Management section of the sp_sysmon report to determine how Adaptive Server distributes network I/O. This example shows Adaptive Server distributing network I/O disproportionately: Engine 2 handles more than 88 percent of the network I/O, while other engines handle as little as 0 percent:
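To produce a report like the one below, run sp_sysmon for a sample interval and restrict the output to the network section with the netio keyword; the one-minute interval shown is only an example:

```sql
-- Sample server activity for one minute and report only the
-- "Network I/O Management" section (the netio keyword limits
-- sp_sysmon output to that section).
sp_sysmon "00:01:00", netio
go
```

Longer intervals smooth out short bursts; use an interval that covers a representative period of client connection activity.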
Network I/O Management
----------------------

  Total Network I/O Requests      7301.5       1.4    438092       n/a

  Network I/Os Delayed               0.0       0.0         0     0.0 %

  Total TDS Packets Received  per sec   per xact     count  % of total
  -------------------------  --------  ---------  --------  ----------
    Engine 0                    308.8       0.1      18528     7.7 %
    Engine 1                    163.6       0.0       9818     4.1 %
    Engine 2                   3558.8       0.7     213527    88.3 %
    Engine 3                      0.0       0.0          2     0.0 %
    Engine 4                      0.0       0.0          0     0.0 %
    Engine 5                      0.0       0.0          0     0.0 %
  -------------------------  --------  ---------  --------
  Total TDS Packets Rec'd      4031.3       0.8     241875

  Avg Bytes Rec'd per Packet     n/a       n/a        136       n/a

 -----------------------------------------------------------------------------

  Total TDS Packets Sent      per sec   per xact     count  % of total
  -------------------------  --------  ---------  --------  ----------
    Engine 0                    308.8       0.1      18529     7.7 %
    Engine 1                    163.6       0.0       9818     4.1 %
    Engine 2                   3558.9       0.7     213531    88.3 %
    Engine 3                      0.0       0.0          2     0.0 %
    Engine 4                      0.0       0.0          0     0.0 %
    Engine 5                      0.0       0.0          0     0.0 %
  -------------------------  --------  ---------  --------
  Total TDS Packets Sent       4031.3       0.8     241880
To resolve unbalanced network I/O usage, use multiple network listeners and bind them to different engines (typically, one listener per engine). To determine how many clients to bind to each network listener, divide the client connections so that each listener accepts approximately the same number of connections. For example, if there are 6 network listeners and 60 clients, connect each group of 10 clients to one listener.
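One way to create additional listeners and tie each one to a specific engine is the sp_listener system procedure; the host name and port numbers below are placeholders:

```sql
-- Start one additional listener per engine. The last parameter is the
-- engine on which the listener runs; host name and ports are examples.
sp_listener "start", "tcp:myhost:5001", "1"
go
sp_listener "start", "tcp:myhost:5002", "2"
go
```

Clients are then divided among the listeners by having each group connect to a different port (for example, through separate entries in the interfaces file), so that each listener accepts roughly the same number of connections.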
After the listener load is balanced across the engines, the sp_sysmon output looks similar to:
Network I/O Management
----------------------

  Total Network I/O Requests      8666.5       1.3    519991       n/a

  Network I/Os Delayed               0.0       0.0         0     0.0 %

  Total TDS Packets Received  per sec   per xact     count  % of total
  -------------------------  --------  ---------  --------  ----------
    Engine 0                    893.4       0.1      53602    17.8 %
    Engine 1                    924.5       0.1      55468    18.5 %
    Engine 2                    701.9       0.1      42113    14.0 %
    Engine 3                    906.0       0.1      54358    18.1 %
    Engine 4                    896.1       0.1      53763    17.9 %
    Engine 5                    683.8       0.1      41028    13.7 %
  -------------------------  --------  ---------  --------
  Total TDS Packets Rec'd      5005.5       0.8     300332

  Avg Bytes Rec'd per Packet     n/a       n/a        136       n/a

 -----------------------------------------------------------------------------

  Total TDS Packets Sent      per sec   per xact     count  % of total
  -------------------------  --------  ---------  --------  ----------
    Engine 0                    893.3       0.1      53595    17.8 %
    Engine 1                    924.5       0.1      55467    18.5 %
    Engine 2                    701.9       0.1      42113    14.0 %
    Engine 3                    905.9       0.1      54355    18.1 %
    Engine 4                    896.1       0.1      53763    17.9 %
    Engine 5                    683.8       0.1      41026    13.7 %
  -------------------------  --------  ---------  --------
  Total TDS Packets Sent       5005.5       0.8     300319
Unbalanced network listeners are not specific to in-memory databases; they can also occur with disk-resident databases. However, because they perform no disk I/O, in-memory databases typically achieve greater throughput than disk-resident databases. The higher throughput and the absence of disk I/O latency mean that an in-memory database performs more work in the same amount of time, including more network I/O, which can increase the severity of a bottleneck caused by unbalanced network listener loads.