To upgrade an Adaptive Server in a high availability configuration, you must temporarily break the companionship between the primary and secondary companions and disable monitoring of the Adaptive Server packages. You can then shut down or restart either companion during the upgrade without the MC/ServiceGuard cluster triggering unexpected failovers.
Unless you specify a different node with the -n node_name parameter, the MC/ServiceGuard commands for starting packages act on the node from which you issue them. Before you issue these commands, consult the MC/ServiceGuard documentation to verify that the packages start on the correct node.
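As a sketch, the difference looks like this. The package and node names are placeholders, and the wrapper prints each command rather than running it, since cmrunpkg works only inside a configured MC/ServiceGuard cluster:

```shell
# Placeholder names: SYB_PRIM_PKG is the package, node2 a cluster node.
# DRY_RUN=1 (the default here) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run cmrunpkg -v SYB_PRIM_PKG            # starts the package on the local node
run cmrunpkg -n node2 -v SYB_PRIM_PKG   # starts it on node2 instead
```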
Do not add, delete, or modify any databases, objects, users, or logins during the upgrade process. Making these changes after the companionship is dropped and before it is reestablished may cause the upgrade to fail, or may destabilize the cluster by introducing inconsistencies between the servers.
Stopping the monitoring service and dropping companionship
Drop the companionship. From the secondary companion, issue:
sp_companion primary_server_name, "drop"
From the primary companion, issue:
sp_companion secondary_server_name, "drop"
Ensure that both nodes are in single-server mode by issuing, on each node:
sp_companion
If the companions are in single-server mode, they return:
Server 'server_name' is not cluster configured.
Server 'server_name' is currently in 'Single server' mode.
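A minimal sketch of this drop-and-verify sequence, assuming placeholder server names (SYB_PRIM, SYB_SEC) and the sa login. The hypothetical helper below only prints the isql batch; the comments show where each batch would actually be run:

```shell
# Hypothetical helper: print the isql batch that drops companionship
# with a given partner and then checks the resulting mode.
drop_batch() {
  printf 'sp_companion %s, "drop"\ngo\nsp_companion\ngo\n' "$1"
}

# On the secondary companion:  drop_batch SYB_PRIM | isql -S SYB_SEC -U sa
# On the primary companion:    drop_batch SYB_SEC  | isql -S SYB_PRIM -U sa
drop_batch SYB_PRIM
```

isql prompts for the password when -P is not supplied.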
Stop the monitoring service for the Adaptive Server packages on all nodes in the cluster. As root, issue:
cmhaltserv -v primary_package_name
On each node, disable high availability:
sp_configure 'enable HA', 0
Restart Adaptive Server for this change to take effect.
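One way to sequence this step is sketched below, with placeholder names (SYB_PRIM_PKG, SYB_PRIM, and its RUN_SYB_PRIM file). The wrapper prints the commands instead of executing them:

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# As root, stop monitoring of the Adaptive Server package:
run cmhaltserv -v SYB_PRIM_PKG

# Disable HA and shut the server down in one isql batch:
printf "sp_configure 'enable HA', 0\ngo\nshutdown\ngo\n" > disable_ha.sql
run isql -S SYB_PRIM -U sa -i disable_ha.sql

# Restart the server from its RUN file so the change takes effect:
run startserver -f "$SYBASE/$SYBASE_ASE/install/RUN_SYB_PRIM"
```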
Follow the instructions in the installation guide to upgrade each server.
On all nodes, reenable high availability:
sp_configure 'enable HA', 1
Restart Adaptive Server for this change to take effect.
On the upgraded servers, reinstall the installmaster and installhasvss scripts. See “Reinstalling installmaster” and “Rerunning installhasvss”. Whenever you reinstall installmaster, you must also reinstall installhasvss.
Ensure that permissions are set correctly for the sybha binary and the sybhauser file.
As root, issue these commands from $SYBASE/$SYBASE_ASE/bin:
chown root sybha
chmod 4550 sybha
As root, perform these tasks from $SYBASE/$SYBASE_ASE/install:
Ensure that the sybase user is included in the sybhauser file.
Issue:
chown root sybhauser
chmod 600 sybhauser
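The following self-contained illustration applies the documented modes to stand-in files in a temporary directory (the real commands run as root against sybha in $SYBASE/$SYBASE_ASE/bin and sybhauser in $SYBASE/$SYBASE_ASE/install; the chown root step is omitted here because it requires root):

```shell
tmp=$(mktemp -d)
touch "$tmp/sybha" "$tmp/sybhauser"

chmod 4550 "$tmp/sybha"      # setuid, read/execute for owner and group only
chmod 600  "$tmp/sybhauser"  # read/write for owner only

ls -l "$tmp/sybha" "$tmp/sybhauser"
# sybha should show mode -r-sr-x---, sybhauser -rw-------
```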
Ensure that changes are properly reflected in the package properties and in any files related to high availability in the new installation (for example, PRIM_SYBASE, PRIM_RUNSCRIPT, PRIM_CONSOLE_LOG, and so on) in the /etc/cmcluster/package_name/package_name.sh script.
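One quick way to review those variables is to grep the package script. The sketch below uses a stand-in copy with hypothetical paths; in practice, point PKG_SH at /etc/cmcluster/package_name/package_name.sh:

```shell
# Stand-in package script for illustration only; the variable values
# here are invented placeholders.
PKG_SH=./package_name.sh
cat > "$PKG_SH" <<'EOF'
PRIM_SYBASE=/opt/sybase
PRIM_RUNSCRIPT=/opt/sybase/install/RUN_SYB_PRIM
PRIM_CONSOLE_LOG=/opt/sybase/install/SYB_PRIM.log
EOF

# List the HA-related variables so you can confirm each value points
# at the new installation:
grep -E '^PRIM_(SYBASE|RUNSCRIPT|CONSOLE_LOG)=' "$PKG_SH"
```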
Reestablishing companionship and resuming monitoring
On each node, manually restart Adaptive Server.
As root, from the primary node, restart the monitoring service:
cmmodpkg -e primary_package_name
Verify that you have performed the prerequisite steps for establishing companionship described in “Configuring companion servers for failover”.
Reestablish companionship between the servers.
For asymmetric configurations, issue these commands on the secondary server; for symmetric configurations, issue these commands on both companions:
dbcc traceon (2209)
sp_companion primary_server_name, configure, NULL, user_name, password
If user databases exist on the secondary server, you may see one or more warning messages, which you can safely ignore:
Msg 18739, Level 16, State 1: Server 'svr2', Procedure 'sp_hacmpcfgvrfy', Line 102: Database 'svr2_db1': a user database exists. Drop this database and retry the configuration again.
Turn off the trace flag:
dbcc traceoff(2209)
As root, take the packages offline:
cmhaltpkg "primary_package_name"
cmhaltpkg "secondary_package_name"
Restart the packages on their appropriate nodes. As root on the primary node, issue:
cmrunpkg -v "primary_package_name"
As root on the secondary node, enter:
cmrunpkg -v "secondary_package_name"
Run sp_companion to verify that the system is properly configured for failover. To verify that failover and failback work for the companion servers, relocate the primary package to the secondary node.
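Relocating a package amounts to halting it and restarting it on the other node with the -n option described earlier. A dry-run sketch with placeholder package and node names, printing the commands rather than executing them:

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# As root: halt the primary package, then start it on the secondary
# node so the secondary companion takes over the primary's workload.
run cmhaltpkg SYB_PRIM_PKG
run cmrunpkg -n node2 -v SYB_PRIM_PKG

# After verifying failover, reverse the steps to verify failback:
run cmhaltpkg SYB_PRIM_PKG
run cmrunpkg -n node1 -v SYB_PRIM_PKG
```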