Upgrading Adaptive Server

To upgrade an Adaptive Server in a high availability configuration, you must temporarily break the companionship between the primary and secondary companion and disable monitoring of the Adaptive Server packages. Once the companionship is dropped and monitoring is disabled, you can shut down or restart either Adaptive Server companion independently during the upgrade without triggering unexpected failovers in the MC/ServiceGuard cluster.

Note: You cannot add, delete, or modify any databases, objects, users, or logins during the upgrade process. Making such changes after the companionship is dropped and before it is reestablished may cause the upgrade to fail, or may destabilize the cluster by introducing inconsistencies between the servers.

Note: Unless you specify a different node with the -n node_name parameter, the MC/ServiceGuard commands for starting packages assume you are issuing the command for the node on which the command is performed. Before you issue these commands, consult the MC/ServiceGuard documentation to verify that the packages are started on the correct node.
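
For example, to start a package on a specific node rather than the local one, you might first check the cluster state and then name the node explicitly; the node name node2 below is hypothetical:

    cmviewcl -v                                # check which node each package runs on
    cmrunpkg -n node2 -v primary_package_name  # start the package on node2 explicitly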

Steps: Stopping the monitoring service and dropping companionship

  1. Drop the companionship. From the secondary companion, issue:

    sp_companion primary_server_name, "drop"
    
  2. From the primary companion, issue:

    sp_companion secondary_server_name, "drop"
    
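    As an illustration, assuming hypothetical companion servers MONEY1 (primary) and PERSONNEL1 (secondary), the two drop commands might be issued through isql as follows:

    isql -Usa -Psa_password -SPERSONNEL1
    1> sp_companion MONEY1, "drop"
    2> go

    isql -Usa -Psa_password -SMONEY1
    1> sp_companion PERSONNEL1, "drop"
    2> go
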
  3. Ensure that both nodes are in single-server mode by issuing, on each node:

    sp_companion
    

    If the companions are in single-server mode, they return:

    Server 'server_name' is not cluster configured.
    Server 'server_name' is currently in 'Single server' mode.
    
  4. Stop the monitoring service for the Adaptive Server packages on all nodes in the cluster. As root, issue:

    cmhaltserv -v primary_package_name
    
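
    For example, with a hypothetical package named MONEY1_pkg:

    cmhaltserv -v MONEY1_pkg
    cmviewcl -v        # confirm that the monitoring service has halted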

Steps: Upgrading Adaptive Server

  1. On each node, disable high availability:

    sp_configure 'enable HA', 0
    

    Restart Adaptive Server for this change to take effect.
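
    A sketch of this step, assuming a hypothetical server MONEY1 started from the run file RUN_MONEY1:

    isql -Usa -Psa_password -SMONEY1
    1> sp_configure 'enable HA', 0
    2> go
    1> shutdown
    2> go

    $SYBASE/$SYBASE_ASE/install/startserver -f RUN_MONEY1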

  2. Follow the instructions in the installation guide to upgrade each server.

  3. On all nodes, reenable high availability:

    sp_configure 'enable HA', 1
    

    Restart Adaptive Server for this change to take effect.

  4. On the upgraded servers, reinstall the installmaster and installhasvss scripts. See “Reinstalling installmaster” and “Rerunning installhasvss”. When you reinstall installmaster, you must also reinstall installhasvss.
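
    For example, both scripts can be run through isql; the paths below assume the standard $SYBASE/$SYBASE_ASE/scripts location and a hypothetical server MONEY1:

    isql -Usa -Psa_password -SMONEY1 -i $SYBASE/$SYBASE_ASE/scripts/installmaster -o installmaster.out
    isql -Usa -Psa_password -SMONEY1 -i $SYBASE/$SYBASE_ASE/scripts/installhasvss -o installhasvss.out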

  5. Ensure that permissions are set correctly for the sybha binary and the sybhauser file.

    As root, issue these commands from $SYBASE/$SYBASE_ASE/bin:

    chown root sybha
    chmod 4550 sybha
    

    As root, perform these tasks from $SYBASE/$SYBASE_ASE/install:

    1. Ensure that the sybase user is included in the sybhauser file.

    2. Issue:

      chown root sybhauser
      chmod 600 sybhauser
      
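    To confirm that the permissions took effect, list both files; with the modes above, the output should resemble the following (the owning group, sizes, and dates are illustrative):

    ls -l $SYBASE/$SYBASE_ASE/bin/sybha
    -r-sr-x---  1 root  sybase  ...  sybha
    ls -l $SYBASE/$SYBASE_ASE/install/sybhauser
    -rw-------  1 root  sybase  ...  sybhauser
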
  6. Verify that your changes are properly reflected in the package properties and in any files related to high availability in the new installation (for example, PRIM_SYBASE, PRIM_RUNSCRIPT, PRIM_CONSOLE_LOG, and so on) in the /etc/cmcluster/package_name/package_name.sh script.
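
    For illustration only, the relevant entries in package_name.sh might look like this after the upgrade (the release directory, server name, and paths are hypothetical):

    PRIM_SYBASE=/opt/sybase                                     # upgraded $SYBASE release directory
    PRIM_RUNSCRIPT=${PRIM_SYBASE}/ASE-12_5/install/RUN_MONEY1   # run file for the upgraded server
    PRIM_CONSOLE_LOG=${PRIM_SYBASE}/ASE-12_5/install/MONEY1.log # console log in the new installation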

Steps: Reestablishing companionship and resuming monitoring

  1. On each node, manually restart Adaptive Server.
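
    For example, assuming a hypothetical server MONEY1 with run file RUN_MONEY1:

    $SYBASE/$SYBASE_ASE/install/startserver -f $SYBASE/$SYBASE_ASE/install/RUN_MONEY1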

  2. As root, from the primary node, restart the monitoring service:

    cmmodpkg -e primary_package_name
    
  3. Verify that you have performed the prerequisite steps for establishing companionship described in “Configuring companion servers for failover”.

  4. Reestablish companionship between the servers.

    For asymmetric configurations, issue these commands on the secondary server; for symmetric configurations, issue these commands on both companions:

    dbcc traceon(2209)
    sp_companion primary_server_name, configure, NULL, user_name, password
    

    If user databases exist on the secondary server, you may see one or more warning messages, which you can safely ignore:

    Msg 18739, Level 16, State 1:
    Server 'svr2', Procedure 'sp_hacmpcfgvrfy', Line 102:
    Database 'svr2_db1': a user database exists. Drop this
    database and retry the configuration again.
    
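    As a concrete sketch, in an asymmetric configuration with hypothetical companions MONEY1 (primary) and PERSONNEL1 (secondary), the commands above would be issued on PERSONNEL1:

    isql -Usa -Psa_password -SPERSONNEL1
    1> dbcc traceon(2209)
    2> go
    1> sp_companion MONEY1, configure, NULL, sa, sa_password
    2> go
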
  5. Turn off trace flag 2209 on each companion where you enabled it:

    dbcc traceoff(2209)

    Then, as root, take the packages offline:

    cmhaltpkg "primary_package_name"
    cmhaltpkg "secondary_package_name"
    
  6. Restart the packages on their appropriate nodes. As root on the primary node, issue:

    cmrunpkg -v "primary_package_name"
    

    As root on the secondary node, enter:

    cmrunpkg -v "secondary_package_name"
    
  7. Run sp_companion to verify that the system is properly configured for failover. To verify that failover and failback work for the companion servers, relocate the primary package to the secondary node and back, as sketched below.
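
    For example, a minimal relocation test might look like this, assuming hypothetical package and node names (as root):

    cmhaltpkg primary_package_name             # take the primary package offline
    cmrunpkg -n node2 primary_package_name     # start it on the secondary node (failover)
    cmhaltpkg primary_package_name             # halt it again
    cmrunpkg -n node1 primary_package_name     # return it to the primary node (failback)
    cmmodpkg -e primary_package_name           # reenable automatic package switching

    After each move, run sp_companion on the surviving companion to confirm the failover and failback state.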