The Cluster Input File

Before configuring the cluster, create a cluster input file that specifies the cluster name, the number of instances in the cluster, the paths to the directories containing the interfaces file, the log files, and the quorum disk devices, and other required configuration information. Choose any name for the cluster input file (for example, mycluster.inp).

When you configure the cluster, Adaptive Server reads the information from the cluster input file and stores it securely in the quorum device. Adaptive Server subsequently retrieves the cluster configuration information from the quorum device.

See Reconfiguring the Cluster for information about changing configuration information after the cluster has been initialized.

Note: Each cluster input file configures a single cluster.

The cluster input file is distinct from the server configuration file, which stores Adaptive Server configuration values associated with sp_configure.

The syntax for the cluster input file is:
# all input files must begin with a comment
[cluster]
name = cluster_name
max instances = number
master device = path_to_the_master_device
config file = Adaptive_Server_configuration_file_name
primary protocol = udp | tcp | other
secondary protocol = udp | tcp | other
installation mode = shared | private
interfaces path = interfaces_file_path 
traceflags = trace_flag_number, trace_flag_number, . . . 
additional run parameters = any_additional_run_parameters

[management nodes] 
hostname = node_name
hostname = node_name
hostname = node_name
hostname = node_name

[instance]
id = instance_ID
name = instance_name
node = name_of_node_on_which_this_instance_runs
primary address = primary_interconnect_address
primary port start = port_number
secondary address = secondary_interconnect_address
secondary port start = port_number
errorlog = file_name
interfaces path = interfaces_file_path
config file = path_to_server_configuration_file_for_this_instance
traceflags = trace_flag_number, trace_flag_number, . . .
additional run parameters = any_additional_run_parameters

[instance]
id = instance_ID
name = instance_name
node = name_of_node_on_which_this_instance_runs
primary address = primary_interconnect_address
primary port start = port_number
secondary address = secondary_interconnect_address
secondary port start = port_number
errorlog = file_name
interfaces path = interfaces_file_path
config file = path_to_server_configuration_file_for_this_instance
traceflags = trace_flag_number, trace_flag_number, . . .
additional run parameters = any_additional_run_parameters
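Because the [instance] section and the hostname key can repeat, the file is not strictly INI, and standard INI parsers may reject or collapse it. As a rough illustration of the layout only, here is a minimal Python sketch that reads this format into an ordered list of sections; the tolerant parsing rules (hash comments, repeatable sections, repeatable keys) are assumptions based on the examples in this document, not the product's actual parser:

```python
# Minimal sketch of a reader for the cluster input file layout shown
# above. Assumption: "#" starts a comment, "[name]" opens a section,
# and "key = value" lines belong to the most recent section. Sections
# (e.g. [instance]) and keys (e.g. hostname) may repeat, so the result
# is a list of (section_name, [(key, value), ...]) pairs, not a dict.

def parse_cluster_input(text):
    sections = []   # list of (section_name, entries)
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):      # skip blanks/comments
            continue
        if line.startswith("[") and line.endswith("]"):
            current = (line[1:-1], [])            # new section
            sections.append(current)
        elif "=" in line and current is not None:
            key, _, value = line.partition("=")   # split on first "="
            current[1].append((key.strip(), value.strip()))
    return sections
```

A dictionary keyed by section name would silently drop the second [instance] block, which is why the sketch keeps sections as an ordered list.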
In the following example, the cluster input file defines a cluster named "mycluster" with two instances: "ase1" on node "blade1" and "ase2" on node "blade2". The addresses on the primary interconnect are 192.169.0.1 and 192.169.0.2, the common server configuration file is /opt/sybase/ASE-15_0/mycluster.config, and the maximum number of instances is 2. "ase1" has a starting port number of 15015, and "ase2" has a starting port number of 16015:
#input for a 2 node / 2 instance cluster
[cluster]
name = mycluster
max instances = 2
master device = /opt/sybase/rawdevices/mycluster.master
config file = /opt/sybase/ASE-15_0/mycluster.config
interfaces path = /opt/sybase
primary protocol = udp
secondary protocol = udp

[management nodes]
hostname = blade1.sybase.com
hostname = blade2.sybase.com


[instance]
id = 1
name = ase1
node = blade1.sybase.com
primary address = 192.169.0.1
primary port start = 15015
secondary address = 192.169.1.1
secondary port start = 15015
errorlog = /opt/sybase/ASE-15_0/install/ase1.log
additional run parameters = -M/opt/sybase/ASE-15_0

[instance]
id = 2
name = ase2
node = blade2.sybase.com
primary address = 192.169.0.2
primary port start = 16015
secondary address = 192.169.1.2
secondary port start = 16015
errorlog = /opt/sybase/ASE-15_0/install/ase2.log
additional run parameters = -M/opt/sybase/ASE-15_0
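A file like the one above can be sanity-checked before it is used: for instance, the number of [instance] sections should not exceed max instances, and each instance id should be unique. The Python sketch below performs these two checks; both rules are illustrative assumptions drawn from the structure of this example, not requirements quoted from the product documentation:

```python
# Hedged consistency check for a cluster input file (assumed rules:
# instance count must not exceed "max instances"; instance ids must be
# unique). Returns a list of human-readable problem descriptions.

def check_cluster_input(text):
    max_instances = None
    ids = []
    section = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]
        elif "=" in line:
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip()
            if section == "cluster" and key == "max instances":
                max_instances = int(value)
            elif section == "instance" and key == "id":
                ids.append(value)
    problems = []
    if max_instances is not None and len(ids) > max_instances:
        problems.append("more [instance] sections than max instances")
    if len(ids) != len(set(ids)):
        problems.append("duplicate instance ids")
    return problems
```

Running this against the example above would report no problems, since it declares max instances = 2 and two instances with distinct ids.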

For an example of a cluster input file where all instances are located on a single node, see the Clusters Users Guide.