Upgrading Adaptive Server

To upgrade an Adaptive Server in a high availability configuration, you must temporarily break the companionship between the primary and secondary companion, and disable monitoring of the Adaptive Server resource groups. You can then shut down or restart either Adaptive Server independently during the upgrade process without triggering unexpected failovers by the HACMP cluster.

Note: You cannot add, delete, or modify any databases, objects, users, or logins during the upgrade process. Making these changes after the companionship is dropped and before it is reestablished may cause the upgrade to fail or destabilize the cluster by causing inconsistencies between servers.

Steps: Stopping the monitoring service and dropping companionship

  1. From isql, turn off trace flag 2209:

    dbcc traceoff(2209)

    Then, as root, issue these commands to take the resource groups offline:

    clRGmove -g secondary_resource_group -d -s false
    clRGmove -g secondary_resource_group -d -s true
    clRGmove -g group_name -d -s false
    clRGmove -g group_name -d -s true
    

    You may also use SMIT (see your SMIT user documentation).

  2. On all nodes in the cluster, stop the monitoring service. As root, find the process ID (pid) of the monitoring script and kill it:

    ps -ef | grep "RUNHA_server_name.sh monitor"
    kill -9 pid
    

    After killing the monitoring process, you can bring the companion server down as many times as necessary and it will not fail over.
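
    If you prefer a single command, the following is a minimal sketch that finds and kills the monitor process in one pass; it assumes the monitoring script on this node is named RUNHA_server_name.sh (substitute your server name) and uses awk to extract the PID from the ps listing:

    # Sketch only: grep -v grep drops the grep command itself from the
    # listing; awk prints the second column (the PID), which xargs passes
    # to kill -9.
    ps -ef | grep "RUNHA_server_name.sh monitor" | grep -v grep |
        awk '{print $2}' | xargs kill -9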

  3. From the secondary companion, issue:

    sp_companion primary_server_name, "drop"
    
  4. (For symmetric configuration) Drop the secondary’s companionship from the primary companion:

    sp_companion secondary_server_name,"drop"
    
  5. Ensure that both nodes are in single-server mode by issuing, on each node:

    sp_companion
    

    If the companions are in single-server mode, they return:

    Server 'server_name' is not cluster configured.
    Server 'server_name' is currently in 'Single server' mode.
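
    As an illustration only, the drop and verification above might look like this from isql; the server names PRIM_ASE and SEC_ASE and the sa login are assumptions for the sketch:

    # From the secondary companion (SEC_ASE), drop the primary's companionship
    # and confirm single-server mode:
    isql -Usa -SSEC_ASE
    1> sp_companion PRIM_ASE, "drop"
    2> go
    1> sp_companion
    2> go

    # Repeat the sp_companion check from the primary companion (PRIM_ASE) to
    # confirm that both servers report 'Single server' mode.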
    

Steps: Upgrading Adaptive Server

  1. On each node, disable high availability:

    sp_configure 'enable HA', 0
    

    Restart Adaptive Server for this change to take effect.
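
    As a sketch only (the server name server_name, the sa login, and the default RUN_server_name file under $SYBASE/$SYBASE_ASE/install are assumptions), the change and restart might look like:

    isql -Usa -Sserver_name
    1> sp_configure 'enable HA', 0
    2> go
    1> shutdown
    2> go

    # Restart so the change takes effect; the same pattern, with a value of 1,
    # applies when you reenable high availability in step 3.
    cd $SYBASE/$SYBASE_ASE/install
    startserver -f RUN_server_name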

  2. Follow the instructions in the installation guide to upgrade each server.

  3. On all nodes, reenable high availability:

    sp_configure 'enable HA', 1
    

    Restart Adaptive Server for the change to take effect.

  4. On the upgraded servers, reinstall the scripts (installmaster, installhasvss, installsecurity, and so on). See “Reinstalling installmaster” and “Rerunning installhasvss”. When you reinstall installmaster, you must also reinstall installhasvss.
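
    A minimal sketch of rerunning the scripts, assuming they are in the default $SYBASE/$SYBASE_ASE/scripts directory and that you connect as sa (you are prompted for the password):

    # Rerun installmaster first, then installhasvss, capturing the output:
    isql -Usa -Sserver_name -i $SYBASE/$SYBASE_ASE/scripts/installmaster -o installmaster.out
    isql -Usa -Sserver_name -i $SYBASE/$SYBASE_ASE/scripts/installhasvss -o installhasvss.out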

  5. Ensure that permissions are set correctly for the sybha binary and the sybhauser file.

    As root, issue these commands from $SYBASE/$SYBASE_ASE/bin:

    chown root sybha
    chgrp sybhagrp sybha
    chmod 4550 sybha
    

    As root, perform these tasks from $SYBASE/$SYBASE_ASE/install:

    1. Ensure that the sybase user is included in the sybhauser file.

    2. Issue:

      chown root sybhauser
      chmod 600 sybhauser
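
    To spot-check the result (a sketch, assuming the default directory layout), compare the listings against the ownership and modes set above:

    ls -l $SYBASE/$SYBASE_ASE/bin/sybha
    # expect: -r-sr-x--- ... root sybhagrp ... sybha
    ls -l $SYBASE/$SYBASE_ASE/install/sybhauser
    # expect: -rw------- ... root ... sybhauser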
      
  6. Verify:

Steps: Reestablishing companionship and resuming resource group monitoring

  1. On each node, manually restart Adaptive Server.

  2. As root, restore the monitoring service for the cluster by issuing this command, which automatically executes the RUNHA_server_name.sh monitoring script:

    /usr/sbin/cluster/etc/rc.cluster -boot '-N' '-b' '-i'
    
  3. Verify you have performed the prerequisite steps for establishing companionship described in “Configuring companion servers for failover”.

  4. Reestablish companionship between the servers. On the secondary server, issue:

    dbcc traceon (2209)
    sp_companion primary_server_name,configure
    

    Note: For symmetric configurations, issue this command on both companions.
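
    As an illustration (the server names PRIM_ASE and SEC_ASE and the sa login are assumptions for this sketch), reestablishing the companionship from the secondary companion might look like:

    isql -Usa -SSEC_ASE
    1> dbcc traceon(2209)
    2> go
    1> sp_companion PRIM_ASE, configure
    2> go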

    If the secondary server includes user databases, you may see one or more warning messages, which you can safely ignore:

    Msg 18739, Level 16, State 1:
    Server 'server_name', Procedure 'sp_hacmpcfgvrfy', Line 102:
    Database 'database_name': a user database exists. Drop this
    database and retry the configuration again.
    
  5. Restart the resource groups on their appropriate nodes. As root, on the primary node, enter:

    clRGmove -g group_name -u -s false
    

    As root, on the secondary node, enter:

    clRGmove -g group_name -u -s true
    
  6. Run sp_companion to verify that the system is properly configured for failover. Verify failover and failback.
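
    A final check might look like the following sketch; the server names PRIM_ASE and SEC_ASE and the sa login are assumptions, and the mode that sp_companion reports depends on whether your configuration is asymmetric or symmetric:

    # Run sp_companion from each companion and confirm that it no longer
    # reports 'Single server' mode.
    isql -Usa -SPRIM_ASE
    1> sp_companion
    2> go

    isql -Usa -SSEC_ASE
    1> sp_companion
    2> go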