Planning the Installation

Learn about planning for the installation procedure.

Note: In version 15.5 Cluster Edition, there is a change to the license quantity used by per-CPU and per-chip license types. Adaptive Server now checks out the same number of licenses as there are cores on the machine (or chips, depending on the license type), regardless of any configuration settings. This is not a change in licensing terms, but a correction of earlier Adaptive Server versions, in which the license quantity requested was reduced if Adaptive Server was licensed per CPU or CPU chip and the max online engines configuration parameter was set to fewer than the number of CPUs on the machine.

Note: See the Users Guide for hardware requirements for using an InfiniBand interconnect on a production system. Sybase does not support file system devices when running on multiple nodes.
Note: If you intend to run the cluster under Symantec Storage Foundation for Sybase Cluster Edition, refer to Chapter 11, “Using the Cluster Edition with the Veritas Cluster Server,” in the Clusters Users Guide for more information.
Database devices in the Cluster Edition must support SCSI-3 persistent group reservations (SCSI PGRs). The Cluster Edition uses SCSI PGRs to guarantee data consistency during cluster membership changes. Sybase cannot guarantee data consistency on disk subsystems that do not support SCSI PGRs; such a configuration is supported only for test and development environments that can tolerate the possibility of data corruption.
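You can spot-check PGR support on a candidate device before committing to it. The following is a minimal sketch, assuming a Linux host with the sg3_utils package installed (which provides the sg_persist utility); /dev/sdc is a placeholder for one of your shared database devices. It only confirms that the device answers a PERSISTENT RESERVE IN (REPORT CAPABILITIES) request; consult your storage vendor for a definitive statement of SCSI-3 PGR support.

    # Minimal sketch: ask a device to report its SCSI-3 persistent reservation
    # capabilities. Assumes a Linux host with the sg3_utils package installed
    # (provides sg_persist); /dev/sdc is a placeholder for a shared device.
    import subprocess
    import sys

    DEVICE = "/dev/sdc"  # hypothetical shared raw device; replace with your own

    def reports_pgr_capabilities(device: str) -> bool:
        """Return True if the device answers a PERSISTENT RESERVE IN
        (REPORT CAPABILITIES) request, which SCSI-3 PGR support requires."""
        try:
            result = subprocess.run(
                ["sg_persist", "--in", "--report-capabilities", device],
                capture_output=True, text=True, check=False,
            )
        except FileNotFoundError:
            print("sg_persist not found; install sg3_utils to run this check.")
            return False
        print(result.stdout.strip())
        return result.returncode == 0

    if __name__ == "__main__":
        sys.exit(0 if reports_pgr_capabilities(DEVICE) else 1)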
  1. Create a $HOME directory on the node on which you will run the installer.
  2. Ensure that all nodes are running the same operating system version. The number of processors and the amount of memory can vary from node to node, but the operating system version cannot (see the node-consistency sketch after this list).
  3. Ensure that the quorum resides on its own device.
  4. Create the local system temporary databases on a shared device using the Adaptive Server plug-in or sybcluster. Do this for each instance during the initial startup of the cluster and later on whenever you add an instance to the cluster. You can create or drop a local system temporary database from any instance, but you can access it only from the owning instance.
  5. Ensure that all database devices, including quorum devices, are located on raw partitions. Do not use the Network File System (NFS).
    Warning!   Do not use file system devices for clusters – the Cluster Edition is not designed to run on a file system; mounting a nonclustered file system on multiple nodes immediately causes corruption, leading to a total loss of the cluster and all of its databases. For this reason, Sybase does not support file system devices when running on multiple nodes.
  6. Ensure that the raw partitions are accessible from each node using the same access path (see the node-consistency sketch after this list). Sybase recommends storage area network (SAN) connected devices.
    Note: Unlike local system temporary databases, local user temporary databases do not require shared storage and can use local file systems created as private devices.
    For test environments, you can use a single node to run multiple instances of the Cluster Edition in a cluster configuration. In this case, you must use a local file system (not NFS) or SAN storage for the database devices.
  7. Ensure that all hardware nodes use Network Time Protocol (NTP) or a similar mechanism to keep their clocks synchronized (see the clock-skew sketch after this list).
  8. If you are using a shared installation, ensure that all Adaptive Server Enterprise software and configuration files (including the $SYBASE directory and the interfaces file) are installed on a Network File System (NFS) or a clustered file system (CFS or GFS) that is accessible from each node in the cluster using the same access path (see the node-consistency sketch after this list). Supported clustered file system versions are listed in the next section. If you are using a private installation, each node must have its own installation on a cluster file system.
  9. Ensure that you have a high-speed network interconnection (for example, a gigabit Ethernet) providing a local network connecting all hardware nodes participating in the cluster.
  10. Sybase recommends that each node in the cluster have two physically separate network interfaces:
    • A primary network – for cluster interconnect traffic.
    • A secondary network – a redundant path for cluster interconnect traffic, used if the primary network fails.
    The primary and secondary networks should be physically separated from each other; this separation is needed for security, fault-tolerance, and performance reasons. For fault-tolerance, the two network cards should be on different fabrics so that the cluster survives a network failure (see the interconnect reachability sketch after this list).
  11. Private interconnect fabrics should not contain links to any machines not participating in the cluster (that is, all cluster nodes should have their primary interconnect connected to the same switch, and that switch should not be connected to any other switches or routers).
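Several of the planning steps above (2, 6, and 8) reduce to the same question: does every node see the same operating system version and the same access paths? The node-consistency sketch below is one way to spot-check this from a single host; the node names, device path, and $SYBASE path are hypothetical placeholders, and passwordless ssh from the checking host to every node is assumed.

    # Minimal sketch: confirm that every cluster node reports the same OS release
    # and sees the shared paths under identical names. The node names, device
    # path, and $SYBASE path are placeholders; passwordless ssh is assumed.
    import subprocess

    NODES = ["node1", "node2", "node3", "node4"]          # hypothetical node names
    PATHS = ["/dev/raw/raw1", "/sybase_shared/ASE-15_0"]  # hypothetical shared paths

    def run_on(node: str, command: str) -> str:
        """Run a command on a node over ssh and return its stdout."""
        result = subprocess.run(["ssh", node, command],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    releases = {node: run_on(node, "uname -sr") for node in NODES}
    if len(set(releases.values())) != 1:
        print("Operating system versions differ across nodes:", releases)

    for path in PATHS:
        for node in NODES:
            # 'test -e' exits non-zero if the path is not visible on that node
            if subprocess.run(["ssh", node, f"test -e {path}"]).returncode != 0:
                print(f"{path} is not accessible on {node} under the same path")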
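Step 7 can be spot-checked in a similar way. The clock-skew sketch below compares each node's clock to the local clock over ssh; because date +%s has one-second resolution and the ssh round trip adds noise, it catches only gross drift, and you should rely on your NTP tooling for authoritative offsets. The node names are placeholders.

    # Minimal sketch: rough clock-skew check across nodes. 'date +%s' has
    # one-second resolution, so this flags only gross drift; use your NTP
    # tooling for authoritative offsets. Node names are hypothetical and
    # passwordless ssh is assumed.
    import subprocess
    import time

    NODES = ["node1", "node2", "node3", "node4"]  # hypothetical node names
    TOLERANCE_SECONDS = 2.0

    for node in NODES:
        before = time.time()
        result = subprocess.run(["ssh", node, "date +%s"],
                                capture_output=True, text=True, check=True)
        round_trip = time.time() - before
        remote_seconds = int(result.stdout.strip())
        # Credit half the ssh round trip to network delay when estimating skew.
        skew = remote_seconds - (before + round_trip / 2)
        if abs(skew) > TOLERANCE_SECONDS:
            print(f"{node}: clock skew of roughly {skew:+.1f}s exceeds tolerance")
        else:
            print(f"{node}: clock within tolerance")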
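For the interconnect requirements in steps 9 through 11, a quick reachability check of each node's primary and secondary interconnect addresses can catch cabling or switch-isolation mistakes early. The interconnect reachability sketch below assumes Linux ping options and uses placeholder addresses for the private networks; it verifies only that each address answers, not bandwidth or isolation.

    # Minimal sketch: verify that every node's primary and secondary interconnect
    # addresses answer a single ping. Addresses are placeholders for your private
    # networks; '-W' is the Linux ping timeout option. This checks reachability
    # only, not bandwidth or fabric isolation.
    import subprocess

    INTERCONNECTS = {  # hypothetical nodes and private-network addresses
        "node1": {"primary": "192.168.10.1", "secondary": "192.168.20.1"},
        "node2": {"primary": "192.168.10.2", "secondary": "192.168.20.2"},
    }

    for node, networks in INTERCONNECTS.items():
        for name, address in networks.items():
            reachable = subprocess.run(["ping", "-c", "1", "-W", "2", address],
                                       capture_output=True).returncode == 0
            status = "reachable" if reachable else "NOT reachable"
            print(f"{node} {name} interconnect ({address}): {status}")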