Before configuring the cluster, create a cluster input file that specifies the name of the cluster, the number of instances in the cluster, the paths to the directories containing the interfaces file, log files, and quorum disk devices, and other required configuration information. Choose any name for the cluster input file (for example, mycluster.inp).
When you configure the cluster, Adaptive Server reads the information from the cluster input file and stores it securely in the quorum device. Adaptive Server subsequently retrieves the cluster configuration information from the quorum device.
See Reconfiguring the Cluster for information about changing the cluster configuration after the cluster has been initialized.
The cluster input file is distinct from the server configuration file, which stores Adaptive Server configuration values associated with sp_configure. The syntax for the cluster input file is:
# all input files must begin with a comment

[cluster]
name = cluster_name
max instances = number
master device = path_to_the_master_device
configuration file = common_path_to_all_server_configuration_files
primary protocol = udp | tcp | other
secondary protocol = udp | tcp | other
installation mode = shared | private
configuration file = Adaptive_Server_configuration_file_name
interfaces path = interfaces_file_path
traceflags = trace_flag_number, trace_flag_number, . . .
additional run parameters = any_additional_run_parameters

[management nodes]
hostname = node_name
hostname = node_name
hostname = node_name
hostname = node_name

[instance]
id = instance_ID
name = instance_name
node = name_of_node_on_which_this_instance_runs
primary address = primary_interconnect_address
primary port start = port_number
secondary address = secondary_interconnect_address
secondary port start = port_number
errorlog = file_name
interfaces path = interfaces_file_path
config file = path_to_server_configuration_file_for_this_instance
traceflags = trace_flag_number, trace_flag_number, . . .
additional run parameters = any_additional_run_parameters

[instance]
id = instance_ID
name = instance_name
node = name_of_node_on_which_this_instance_runs
primary address = primary_interconnect_address
primary port start = port_number
secondary address = secondary_interconnect_address
secondary port start = port_number
errorlog = file_name
interfaces path = interfaces_file_path
configuration file = path_to_server_configuration_file_for_this_instance
traceflags = trace_flag_number, trace_flag_number, . . .
additional run parameters = any_additional_run_parameters
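The field descriptions that follow explain the individual entries. Note that although the format is INI-like, repeated keys (hostname) and repeated [instance] sections are legal, so strict off-the-shelf INI parsers are a poor fit. As a rough illustration of the structure only, here is a minimal sketch of a reader that a provisioning script might use. It is not a Sybase utility, and it assumes nothing beyond the conventions visible in the template: a leading # begins a comment, section names appear in brackets, and settings are key = value lines.

def parse_cluster_input(path):
    """Return a list of (section_name, [(key, value), ...]) tuples."""
    sections = []
    with open(path) as f:
        for raw in f:
            line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
            if not line:
                continue                          # skip blank lines
            if line.startswith("[") and line.endswith("]"):
                sections.append((line[1:-1], []))  # new section, e.g. [instance]
            elif "=" in line and sections:
                key, value = (p.strip() for p in line.split("=", 1))
                sections[-1][1].append((key, value))  # repeated keys are kept
    return sections

# Example: summarize a file named mycluster.inp in the current directory.
for name, entries in parse_cluster_input("mycluster.inp"):
    print(f"[{name}]: {len(entries)} entries")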
configuration file = common_path_to_all_server_configuration_files – in a private installation where all server configuration files share the same path name, this is the common path.
hostname = node_name – the name of the node, which must match the name returned by the hostname command when run on that node. Include one hostname field for each node that must be registered, and list each node only once in the [management nodes] section. (A sketch for verifying this follows these descriptions.)
config file = path_to_server_configuration_file_for_this_instance – in a private installation where the path names to the individual server configuration files are not the same, this is the path to that instance's server configuration file.
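Because each hostname field must match what the node itself reports, it is worth verifying before deploying the input file. A small sketch of such a check, assuming Python is available on the node and using the example node names from later in this section; on most systems socket.gethostname() returns the same name as the hostname command:

import socket

# Node names expected in the [management nodes] section (example values).
expected = {"blade1.sybase.com", "blade2.sybase.com"}

# Run this on each node before deploying the cluster input file.
name = socket.gethostname()
if name in expected:
    print(f"ok: {name} matches a registered management node")
else:
    print(f"mismatch: this node reports {name!r}; "
          f"update the input file or the node's hostname")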
The formula for finding the socket port range is:
start_port_number + (max_instances * 5) - 1
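For example, in the two-instance cluster shown below, max instances is 2 and ase1's primary port start is 15015, so the range ends at 15015 + (2 * 5) - 1 = 15024; ports 15015 through 15024 must therefore be available on ase1's primary interconnect. The same arithmetic as a helper function (illustrative only, not part of the product):

def port_range_end(start_port, max_instances):
    # The formula reserves five consecutive ports per instance,
    # beginning at start_port, so the last required port is:
    return start_port + (max_instances * 5) - 1

print(port_range_end(15015, 2))  # prints 15024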
#input for a 2 node / 2 instance cluster

[cluster]
name = mycluster
max instances = 2
master device = /opt/sybase/rawdevices/mycluster.master
config file = /opt/sybase/ASE-15_0/mycluster.config
interfaces path = /opt/sybase
primary protocol = udp
secondary protocol = udp

[management nodes]
hostname = blade1.sybase.com
hostname = blade2.sybase.com

[instance]
id = 1
name = ase1
node = blade1.sybase.com
primary address = 192.169.0.1
primary port start = 15015
secondary address = 192.169.1.1
secondary port start = 15015
errorlog = /opt/sybase/ASE-15_0/install/ase1.log
additional run parameters = -M/opt/sybase/ASE-15_0

[instance]
id = 2
name = ase2
node = blade2.sybase.com
primary address = 192.169.0.2
primary port start = 16015
secondary address = 192.169.1.2
secondary port start = 16015
errorlog = /opt/sybase/ASE-15_0/install/ase2.log
additional run parameters = -M/opt/sybase/ASE-15_0
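As a sanity check on files like this one, the port formula above can be applied to each instance to confirm that port ranges on a shared interconnect address do not collide. A minimal sketch, with the example's primary-interconnect values transcribed by hand (a hypothetical helper, not a Sybase tool):

MAX_INSTANCES = 2          # "max instances" from the [cluster] section

instances = [
    # (name, primary address, primary port start), from the file above
    ("ase1", "192.169.0.1", 15015),
    ("ase2", "192.169.0.2", 16015),
]

ranges = {}
for name, addr, start in instances:
    end = start + (MAX_INSTANCES * 5) - 1   # port-range formula from above
    print(f"{name}: ports {start}-{end} must be free on {addr}")
    for other, (o_addr, o_start, o_end) in ranges.items():
        # Two ranges only conflict when they share an address and intersect.
        if addr == o_addr and start <= o_end and o_start <= end:
            print(f"  warning: range overlaps {other}")
    ranges[name] = (addr, start, end)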
For an example of a cluster input file in which all instances are located on a single node, see the Clusters Users Guide.