Cluster Edition System Requirements for Using Private Interconnect Technology

The Cluster Edition supports only the UDP network protocol for private interconnects; do not use the TCP network protocol.
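Because internode traffic travels over UDP, a quick way to sanity-check a private interconnect is a UDP round trip between the nodes' interconnect addresses. The sketch below runs entirely on one host; the address and port are placeholders, not values from the Cluster Edition itself, and in practice you would bind the receiving socket on one node and send from another.

```python
import socket

# Placeholder endpoint; substitute each node's private-interconnect
# address and an unused port in a real two-node check.
NODE_ADDR, PORT = "127.0.0.1", 15100

# Stand-in for the peer node's interconnect endpoint.
peer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer.bind((NODE_ADDR, PORT))
peer.settimeout(5)

# Send a probe datagram over UDP and confirm it arrives.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.sendto(b"ping", (NODE_ADDR, PORT))

data, addr = peer.recvfrom(1024)
print(data.decode())
peer.close()
probe.close()
```

If the probe datagram is not received, check firewall rules on the private network before suspecting the cluster configuration, since UDP is frequently filtered by default.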

A private interconnect is a physical connection that allows internode communication and is an essential component of a shared-disk cluster installation. A private interconnect can be as simple as an Ethernet crossover cable, or it can be a more complex solution. When you configure more than two nodes, you must use a switch that enables high-speed communication between the nodes in the cluster.

To handle the traffic created by contention, use a scalable interconnect technology to connect the nodes; the amount of traffic is directly proportional to the number of interinstance updates and transfers. SAP recommends that you implement the highest-bandwidth, lowest-latency interconnect available.

SAP recommends that Linux environments use an interconnect bandwidth of at least 1Gb (Gigabit) Ethernet.

The Cluster Edition supports the current standards for interconnects. SAP recommends that you research the available interconnects to find the one that works best for your site.

The Cluster Edition supports InfiniBand in IPoIB (Internet Protocol over InfiniBand) mode, in which the server uses a standard IP interface to communicate with the InfiniBand interconnect. This is the simplest mode to configure.
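Because IP over InfiniBand presents the adapter as an ordinary IP interface, it appears alongside Ethernet interfaces in the host's interface list. The sketch below simply enumerates interfaces; the `ib0`-style naming mentioned in the comment is a common Linux driver convention, not something guaranteed by the Cluster Edition.

```python
import socket

# List the host's network interfaces. With IP over InfiniBand configured,
# the InfiniBand adapter appears here as a standard IP interface
# (commonly named ib0, ib1, ... on Linux, though names are driver-dependent).
for index, name in socket.if_nameindex():
    print(index, name)
```

Seeing the InfiniBand interface in this list confirms it can be addressed like any other IP interface, which is what makes IPoIB mode simple to configure.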