To upgrade an Adaptive Server in a high availability configuration, you must temporarily break the companionship between the primary and secondary companion, and disable monitoring of the Adaptive Server resource groups. You can then shut down or restart either Adaptive Server independently during the upgrade process without triggering unexpected failovers by the HACMP cluster.
You cannot add, delete, or modify any databases, objects, users, or logins during the upgrade process. Making these changes after the companionship is dropped and before it is reestablished may cause the upgrade to fail or destabilize the cluster by causing inconsistencies between servers.
Stopping the monitoring service and dropping companionship
As root, issue these commands to take the resource groups offline:
dbcc traceoff(2209)
clRGmove -g secondary_resource_group -d -s false
clRGmove -g secondary_resource_group -d -s true
clRGmove -g group_name -d -s false
clRGmove -g group_name -d -s true
You may also use SMIT (see your SMIT user documentation).
On all nodes in the cluster, stop the monitoring service. As root, issue:
ps -ef | grep "RUNHA_server_name.sh monitor"
kill -9 pid
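If you prefer a single command, this sketch finds the monitor process ID and kills it in one pass (it assumes the PID appears in the second column of ps -ef output, as it does on AIX):
ps -ef | grep "RUNHA_server_name.sh monitor" | grep -v grep | awk '{print $2}' | xargs kill -9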
After killing the monitoring process, you can bring the companion server down as many times as necessary and it will not fail over.
From the secondary companion, issue:
sp_companion primary_server_name, "drop"
(For symmetric configuration) Drop the secondary’s companionship from the primary companion:
sp_companion secondary_server_name, "drop"
Ensure that both nodes are in single-server mode by issuing, on each node:
sp_companion
If the companions are in single-server mode, they return:
Server 'server_name' is not cluster configured.
Server 'server_name' is currently in 'Single server' mode.
On each node, disable high availability:
sp_configure 'enable HA', 0
Restart Adaptive Server for this change to take effect.
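For example, this is a sketch of a manual restart, assuming you connect as the sa login and use the standard RUN_server_name file in $SYBASE/$SYBASE_ASE/install (substitute your own server name and password):
isql -Usa -Psa_password -Sserver_name
1> shutdown
2> go
cd $SYBASE/$SYBASE_ASE/install
startserver -f RUN_server_name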
Follow the instructions in the installation guide to upgrade each server.
On all nodes, reenable high availability:
sp_configure 'enable HA', 1
Restart Adaptive Server for the change to take effect.
On the upgraded servers, reinstall the scripts (installmaster, installhasvss, installsecurity, and so on). See “Reinstalling installmaster” and “Rerunning installhasvss”. When you reinstall installmaster, you must reinstall installhasvss.
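For example, a sketch of rerunning two of the scripts with isql, assuming the default script location of $SYBASE/$SYBASE_ASE/scripts and the sa login:
isql -Usa -Psa_password -Sserver_name -i $SYBASE/$SYBASE_ASE/scripts/installmaster
isql -Usa -Psa_password -Sserver_name -i $SYBASE/$SYBASE_ASE/scripts/installhasvss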
Ensure that permissions are set correctly for the sybha binary and the sybhauser file.
As root, issue these commands from $SYBASE/$SYBASE_ASE/bin:
chown root sybha
chgrp sybhagrp sybha
chmod 4550 sybha
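You can confirm the result with ls; the binary should be owned by root, belong to the sybhagrp group, and have mode 4550 (displayed as -r-sr-x---):
ls -l sybha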
As root, perform these tasks from $SYBASE/$SYBASE_ASE/install:
Ensure that the sybase user is included in the sybhauser file.
Issue:
chown root sybhauser
chmod 600 sybhauser
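A quick check (a sketch, assuming the Adaptive Server account is named sybase) confirms both the entry and the permissions; the file should be owned by root with mode 600 (-rw-------):
grep sybase sybhauser
ls -l sybhauser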
Verify:
Changes are properly reflected in the resources, resource group properties, and any files related to high availability in the new installation; in particular, check the variables in the /usr/sbin/cluster/event/RUNHA_server_name.sh script (for example, PRIM_SYBASE_HOME, PRIM_RUNSCRIPT, PRIM_CONSOLE_LOG, and so on), as shown in the example after this list.
You have performed all actions required for establishing companionship, as described in “Preparing Adaptive Server to work with high availability” and “Configuring the IBM AIX subsystem for Sybase Failover”, and the system maintains these changes after the upgrade is complete.
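For example, you can list the high availability variables in the monitoring script with grep (a sketch; your installation may have additional variables to review):
grep -E "PRIM_SYBASE_HOME|PRIM_RUNSCRIPT|PRIM_CONSOLE_LOG" /usr/sbin/cluster/event/RUNHA_server_name.sh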
Reestablishing companionship and resuming package monitoring
On each node, manually restart Adaptive Server.
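You can use the startserver command shown earlier to restart each server. To confirm that each server is up and accepts connections, a sketch of a quick test (substitute your own server name and password):
isql -Usa -Psa_password -Sserver_name
1> select @@servername
2> go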
As root, restore the monitoring service for the cluster by issuing this command, which automatically executes the RUNHA_server_name.sh monitoring script:
/usr/sbin/cluster/etc/rc.cluster -boot '-N' '-b' '-i'
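To confirm that monitoring has resumed, check for the monitor process on each node, using the same process name pattern as before:
ps -ef | grep "RUNHA_server_name.sh monitor" | grep -v grep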
Verify you have performed the prerequisite steps for establishing companionship described in “Configuring companion servers for failover”.
Reestablish companionship between the servers. On the secondary server, issue:
dbcc traceon(2209)
sp_companion primary_server_name, configure
For symmetric configurations, issue this command on both companions; a sketch of the primary-side command appears after the warning note below.
If the secondary server includes user databases, you may see one or more warning messages, which you can safely ignore:
Msg 18739, Level 16, State 1:
Server 'server_name', Procedure 'sp_hacmpcfgvrfy', Line 102:
Database 'database_name': a user database exists. Drop this database and retry the configuration again.
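For example, in a symmetric configuration, after issuing the configure command on the secondary companion as shown above, a sketch of the corresponding command on the primary companion (the trace flag mirrors the step above; the server name is a placeholder) is:
dbcc traceon(2209)
sp_companion secondary_server_name, configure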
Restart the resource groups on their appropriate nodes. As root, on the primary node, enter:
clRGmove -g group_name -u -s false
As root, on the secondary node, enter:
clRGmove -g group_name -u -s true
Run sp_companion to verify that the system is properly configured for failover. Verify failover and failback.
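For example, rerun sp_companion on each companion; once companionship is reestablished, it should no longer report 'Single server' mode (the exact mode string depends on whether the configuration is asymmetric or symmetric):
sp_companion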