In Adaptive Server Cluster Edition version 15.5, you can configure your cluster as a "shared" or "private"
installation. See Chapter 1, "An Overview of the Cluster Edition," in the Clusters Users Guide.
You must manually upgrade a symmetric multiprocessing (SMP) version of Adaptive Server to a private installation of the Cluster Edition: first upgrade your Adaptive Server to a Cluster Edition shared installation, then switch to a private installation using the steps below. Because private installations were introduced in version 15.0.3, cluster instances created with earlier versions of Adaptive Server Cluster Edition automatically continue as shared installations. See the installation guide for your platform for instructions on upgrading your SMP Adaptive Server to a shared-disk cluster.
Note:
When deciding on the installation location for Adaptive Server Cluster Edition 15.5, choose the location where you will install the private installation for this node. This location need not be accessible from other nodes participating in the cluster.
Changing from shared installation mode to private installation mode
-
Make sure each participating node in the cluster has its own $SYBASE environment variable. Typically, the private installation is performed on a local file system, as there is no longer a need for other nodes participating in the cluster to have access to this installation.
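For example (a sketch; the /opt/sybase_priv path is an assumption, not a required location), each node might point $SYBASE at its own node-local private installation:
% setenv SYBASE /opt/sybase_priv
(C shell; in the Bourne or Korn shell, enter "export SYBASE=/opt/sybase_priv" instead)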
-
Install the Cluster Edition on each node participating in the cluster. You can set up one of the nodes to use the existing installation if it satisfies your needs; otherwise, discard the existing installation at the end of this process. You may need to discard it if, for example, it resides on an NFS file system shared by the nodes and you want to install on a local file system instead. See the installation guide for your platform for instructions on installing the Cluster Edition on every node.
-
On each node, shut down the Cluster and UAF Agent.
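One possible sequence is sketched below; it assumes the uafadmin login and node names used in the examples in this section, and that your release provides an uafshutdown.sh script alongside the uafstartup.sh script shown later:
sybcluster -U uafadmin -P -C mycluster -F "nuno1,nuno2"
> shutdown cluster
> quit
Then, on each node:
% $SYBASE/UAF-2_5/bin/uafshutdown.sh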
-
On one of the nodes in the cluster, set up your environment by sourcing SYBASE.csh or SYBASE.sh, depending on the shell you are using.
If the SYBASE installation location differs between the shared installation and the private installation, set up the environment from the shared installation area.
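For example (the /shared_sybase directory stands in for the shared installation's $SYBASE location and is an assumption):
% cd /shared_sybase
% source SYBASE.csh
(C shell; in the Bourne or Korn shell, enter ". SYBASE.sh" instead)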
-
Extract the current cluster quorum configuration from the quorum device. For example:
% $SYBASE/$SYBASE_ASE/bin/qrmutil \
    --extract-config=mycluster_shared.cfg \
    --quorum-dev=/dev/raw/raw50m41
Executing command 'extract cluster configuration', argument 'mycluster_shared.cfg'...
Extracted input file 'mycluster_shared.cfg'
Command 'extract cluster configuration', argument 'mycluster_shared.cfg' succeeded.
qrmutil execution completed.
-
Create a new cluster configuration file and update the required information:
-
Make a copy of the extracted configuration file, then edit the new file to change the required configuration values. For example:
% cp mycluster_shared.cfg mycluster_private.cfg
-
Edit the new configuration file. In the [cluster] section:
Change:
installation mode = shared
To:
installation mode = private
-
In the [instance] section:
- Move the config file and interfaces path entries from the [cluster] section to the [instance] section
- If the SYBASE installation location has changed from shared to private, adjust the paths in the error log, config file, and interfaces path locations.
- If you have more than one instance in the configuration file, perform these actions for each instance. For example:
% cat mycluster_private.cfg
# All input files must begin with a comment
[cluster]
name = mycluster
max instances = 4
primary protocol = udp
secondary protocol = udp
master device = /dev/raw/raw1g2
traceflags =
additional run parameters =
installation mode = private
membership mode =
[management nodes]
hostname = nuno1
hostname = nuno2
[instance]
name = mycluster_instance1
id = 1
node = nuno1
primary address = nuno1
primary port start = 15100
secondary address = nuno1
secondary port start = 15181
errorlog = /mysybase1/mycluster_inst1.log
config file = /mysybase1/mycluster.cfg
interfaces path = /mysybase1
traceflags =
additional run parameters =
[instance]
name = mycluster_instance2
id = 2
node = nuno2
primary address = nuno2
primary port start = 15100
secondary address = nuno2
secondary port start = 15181
errorlog = /mysybase2/mycluster_inst2.log
config file = /mysybase2/mycluster.cfg
interfaces path = /mysybase2
traceflags =
additional run parameters =
-
Load the updated cluster configuration file into the cluster quorum device. For example:
% $SYBASE/$SYBASE_ASE/bin/qrmutil \
    --quorum-dev=/dev/raw/raw50m41 \
    --cluster-input=mycluster_private.cfg
Loaded a new quorum configuration.
qrmutil execution completed.
-
If you have:
-
More than one node in the cluster, or have changed the SYBASE installation location – copy the Adaptive Server configuration file (typically named servername.cfg) and the interfaces file from the original shared installation cluster into the config file and interfaces path locations for each instance in the private installation cluster. You can find these locations in the [instance] sections of the updated cluster configuration file.
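For example, using the interfaces path and config file locations from the sample configuration file above (a sketch; /shared_sybase stands in for the original shared installation directory that holds the existing files and is an assumption):
% cp /shared_sybase/mycluster.cfg /mysybase1/mycluster.cfg
% cp /shared_sybase/interfaces /mysybase1/interfaces
% cp /shared_sybase/mycluster.cfg /mysybase2/mycluster.cfg
% cp /shared_sybase/interfaces /mysybase2/interfaces
If /mysybase1 and /mysybase2 are node-local file systems, run each copy on the node that owns the target path.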
-
Only one node/instance in the cluster and are not changing the SYBASE installation directory – update the UAF agent configuration information. The agent plug-in XML file is located at $SYBASE/UAF-2_5/nodes/[machine_name]/plugins/[cluster_name]/agent-plugin.xml.
In it, replace:
<set-property property="ase.installation.mode" value="shared" />
With:
<set-property property="ase.installation.mode" value="private" />
-
Restart the UAF agent on each node in the cluster using the private installation directories. From the $SYBASE directory, enter:
% UAF-2_5/bin/uafstartup.sh
-
If you have more than one node in the cluster or have changed the SYBASE installation location, deploy UAF Agent plug-in for each node:
-
Start sybcluster. For example, enter:
sybcluster -U uafadmin -P -C mycluster -F "blade1,blade2,blade3"
-
Deploy the plug-in on each node individually. For example, enter:
deploy plugin agent "blade1"
deploy plugin agent "blade2"
deploy plugin agent "blade3"
See “The sybcluster Utility” in the Clusters Users Guide for complete syntax and usage information for sybcluster and the Adaptive Server plug-in.
-
You have now upgraded your shared installation to a private installation. You can start the cluster using the start cluster command, or start individual instances using the start instance <instance name> command.
When you issue either command, sybcluster may display output like the following, including an error message stating that the cluster ID on the quorum device does not match the master device:
INFO - Starting the cluster mycluster instance mycluster_instance1 using the operating system command:
/mysybase1/ASE-15_0/bin/dataserver --quorum_dev=/dev/raw/raw50m41 --instance_name=mycluster_instance1
INFO - 01:00:00000:00000:2009/06/07 23:09:35.46 kernel Quorum UUID: 00000000-0000-0000-0000-000000000000
INFO - 01:00:00000:00000:2009/06/07 23:09:35.46 kernel Master UUID: 91f058aa-bc57-408d-854d-4c240883a6c9
INFO - 01:00:00000:00000:2009/06/07 23:09:35.46 kernel Unique cluster id on quorum device does not match master device. You may be using the wrong master device. If this is the correct master, pass 'create-cluster-id' on the command line to pair the devices.
When this occurs, reissue the same command, but add --create-cluster-id as suggested in the message to pair the devices, and start the node manually. For example, issue:
/mysybase1/ASE-15_0/bin/dataserver --quorum_dev=/dev/raw/raw50m41 --instance_name=mycluster_instance1 --create-cluster-id
The command should now run without an error message.
-
To add new nodes to this cluster, use either the Sybase Central Adaptive Server plug-in or the sybcluster utility. See the Clusters Users Guide.