Distributed query processing can benefit from an optional high-speed interconnect. Use scalable interconnect technology to connect multiplex nodes, and consider these guidelines:
- A high-speed network interconnect providing a local network that connects all multiplex nodes. Use an interconnect bandwidth of 1 Gb/s or higher, or the highest-bandwidth, lowest-latency interconnect available.
- A public network that carries both multiplex interconnect traffic and client traffic.
- A private network that carries multiplex interconnect traffic only, excluding external client traffic. Currently, multiplex interconnects support only the TCP/IP standard. Together, the public and private networks improve security, fault tolerance, and performance.
- A switch that enables high-speed communication between nodes.
- Network cards that reside on different fabrics, so that the multiplex survives a network failure. Physically separate the public and private networks.
- Private interconnect fabrics that contain only links to machines participating in the multiplex. The private interconnect for all multiplex nodes should connect to the same switch, and that switch should connect to no other public switches or routers.
- Redundant network interface cards, added to the private or public networks if desired. The private and public connection information can each specify multiple IP addresses, so redundant cards are supported.
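The separation described above can be sanity-checked in software. The sketch below is a minimal illustration, not part of any product tooling: the node names, subnets, and addresses are hypothetical examples, and it only verifies that every node's private (interconnect) address sits on the private subnet and that no public address leaks onto it.

```python
import ipaddress

# Hypothetical subnets: the private interconnect and the public network
# live on physically separate fabrics, so their address ranges must not mix.
PRIVATE_SUBNET = ipaddress.ip_network("192.168.10.0/24")  # interconnect only
PUBLIC_SUBNET = ipaddress.ip_network("10.0.0.0/16")       # client traffic

# Example multiplex layout: each node has one public and one private address.
NODES = {
    "coordinator": {"public": "10.0.1.10", "private": "192.168.10.10"},
    "writer1":     {"public": "10.0.1.11", "private": "192.168.10.11"},
    "reader1":     {"public": "10.0.1.12", "private": "192.168.10.12"},
}

def check_separation(nodes):
    """Return a list of violations of public/private network separation."""
    errors = []
    for name, addrs in nodes.items():
        if ipaddress.ip_address(addrs["private"]) not in PRIVATE_SUBNET:
            errors.append(f"{name}: private address is off the interconnect subnet")
        if ipaddress.ip_address(addrs["public"]) in PRIVATE_SUBNET:
            errors.append(f"{name}: public address is on the interconnect subnet")
    return errors

print(check_separation(NODES))  # an empty list when the layout is consistent
```

A check like this catches the most common misconfiguration, a node whose interconnect traffic would route over the public network; actual reachability and latency still have to be verified on the wire.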