Create the Cluster Input File

Before configuring the cluster, create a cluster input file that specifies the name of the cluster, the number of instances, the path to the directory containing the interfaces file, the log files, the quorum disk devices, and other required configuration information. You can choose any name for the cluster input file (for example, mycluster.inp).

When you configure the cluster, Adaptive Server reads the information from the cluster input file and stores it securely in the quorum device. Adaptive Server subsequently retrieves the cluster configuration information from the quorum device.

See the Reconfiguring the Cluster topic for information about changing the configuration after the cluster has been initialized.

Note: You can configure only one cluster with each cluster input file.

The cluster input file is distinct from the server configuration file, which stores Adaptive Server configuration values associated with sp_configure.

This is the syntax for the cluster input file:
# all input files must begin with a comment
[cluster]
name = cluster name
max instances = number
master device = path to the master device
configuration file = common path to all server configuration files
primary protocol = udp | tcp | other
secondary protocol = udp | tcp | other
installation mode = shared | private
configuration file = Adaptive Server configuration file name
interfaces path = interfaces file path 
traceflags = trace flag number, trace flag number, . . . 
additional run parameters = any additional run parameters

[management nodes] 
hostname = node_name
hostname = node_name
hostname = node_name
hostname = node_name

[instance]
id = instance ID
name = instance name
node = name of node on which this instance runs
primary address = primary interconnect address
primary port start = port number
secondary address = secondary interconnect address
secondary port start = port number
errorlog = file name
interfaces path = interfaces file path
config file = path to server configuration file for this instance
traceflags = trace flag number, trace flag number, . . .
additional run parameters = any additional run parameters

[instance]
id = instance ID
name = instance name
node = name of node on which this instance runs
primary address = primary interconnect address
primary port start = port number
secondary address = secondary interconnect address
secondary port start = port number
errorlog = file name
interfaces path = interfaces file path
config file = path to server configuration file for this instance
traceflags = trace flag number, trace flag number, . . .
additional run parameters = any additional run parameters
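
Because the hostname key repeats once for each node and the [instance] section repeats once for each instance, the cluster input file is not a strict INI file. The following Python sketch is not a utility shipped with Adaptive Server; it only illustrates how the sections described above fit together, assuming the layout shown in the syntax listing (parse_cluster_input, check, and mycluster.inp are illustrative names, not part of any Sybase tool):

# Sketch: read a cluster input file into its sections. Illustration only;
# this is not shipped with Adaptive Server.

def parse_cluster_input(path):
    """Return (cluster, hostnames, instances) from a cluster input file."""
    cluster = {}        # key/value pairs from the [cluster] section
    hostnames = []      # one entry per "hostname =" line in [management nodes]
    instances = []      # one dictionary per [instance] section, in file order
    section = None
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):      # skip blank lines and comments
                continue
            if line.startswith("[") and line.endswith("]"):
                section = line[1:-1].lower()
                if section == "instance":             # each [instance] opens a new block
                    instances.append({})
                continue
            key, _, value = line.partition("=")
            key, value = key.strip().lower(), value.strip()
            if section == "cluster":
                cluster[key] = value
            elif section == "management nodes":
                hostnames.append(value)
            elif section == "instance":
                instances[-1][key] = value
    return cluster, hostnames, instances

def check(path):
    """Print a few basic consistency checks for a cluster input file."""
    cluster, hostnames, instances = parse_cluster_input(path)
    for key in ("name", "max instances", "master device"):
        if key not in cluster:
            print("missing [cluster] key:", key)
    if len(instances) > int(cluster.get("max instances", "0")):
        print("more [instance] sections than 'max instances' allows")
    for inst in instances:
        if inst.get("node") not in hostnames:
            print("instance", inst.get("name"),
                  "runs on a node not listed under [management nodes]")

if __name__ == "__main__":
    check("mycluster.inp")      # for example, the file shown below
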

For example, this cluster input file defines a cluster named “mycluster” with two instances: “ase1” on node “blade1” and “ase2” on node “blade2”. The addresses on the private interconnects are 192.169.0.1 and 192.169.0.2, the name of the server configuration file is mycluster.config, and the maximum number of instances is 2. “ase1” has a starting port range of 15015, and “ase2” has a starting port range of 16015:
#input for a 2 node / 2 instance cluster
[cluster]
name = mycluster
max instances = 2
master device = /opt/sybase/rawdevices/mycluster.master
config file = /opt/sybase/ASE-15_0/mycluster.config
interfaces path = /opt/sybase
primary protocol = udp
secondary protocol = udp

[management nodes]
hostname = blade1.sybase.com
hostname = blade2.sybase.com


[instance]
id = 1
name = ase1
node = blade1.sybase.com
primary address = 192.169.0.1
primary port start = 15015
secondary address = 192.169.1.1
secondary port start = 15015
errorlog = /opt/sybase/ASE-15_0/install/ase1.log
additional run parameters = -M/opt/sybase/ASE-15_0

[instance]
id = 2
name = ase2
node = blade2.sybase.com
primary address = 192.169.0.2
primary port start = 16015
secondary address = 192.169.1.2
secondary port start = 16015
errorlog = /opt/sybase/ASE-15_0/install/ase2.log
additional run parameters = -M/opt/sybase/ASE-15_0
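
If the example above is saved as mycluster.inp, the illustrative parser sketched after the syntax listing reads it back as follows; parse_cluster_input is the hypothetical helper from that sketch, and the commented output simply restates the values in the file:

# Assumes the parse_cluster_input() sketch shown earlier in this topic.
cluster, hostnames, instances = parse_cluster_input("mycluster.inp")
print(cluster["name"], cluster["max instances"])   # mycluster 2
print(hostnames)               # ['blade1.sybase.com', 'blade2.sybase.com']
for inst in instances:
    print(inst["id"], inst["name"], inst["primary address"], inst["primary port start"])
# 1 ase1 192.169.0.1 15015
# 2 ase2 192.169.0.2 16015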

For an example of a cluster input file in which all instances are located on a single node, see the Clusters Users Guide.