Configuring the Sun Cluster subsystem for Sybase failover

See the Sun Cluster high availability subsystem manuals for information about installing the high availability subsystem.

This section assumes that the high availability subsystem is already installed.

Note: The $SYBASE/$SYBASE_ASE/install directories for each companion must include RUNSERVER files after you install Adaptive Server on the local disks.

inst_ha_scripts sets up the environment for Sybase failover to run with the Sun Cluster high availability subsystem. inst_ha_scripts is located in $SYBASE/$SYBASE_ASE/install. Before you run this script, you must edit it so that:

After you have modified inst_ha_scripts for your site, run it as root to:

  1. Copy the following scripts to /opt/SUNWcluster/ha/sybase:

  2. Copy the following scripts to /opt/SUNWcluster/bin:

  3. Change the permissions for the files listed in steps 1 and 2 so that the owner and group are bin and the permissions are set to 755. For example, to change the permissions for sybase_svc_stop, move to /opt/SUNWcluster/bin and issue:

    chmod 755 sybase_svc_stop
    chown bin sybase_svc_stop
    chgrp bin sybase_svc_stop
    
  4. Copy the following scripts to /etc/opt/SUNWscsyb:

  5. Change the permissions for all of these files so that the owner is root, the group is sys, and the permissions are set to 444. For example, to change the permissions for hasybase_support, move to /etc/opt/SUNWscsyb and issue:

    chmod 444 hasybase_support
    chown root hasybase_support 
    chgrp sys hasybase_support
    

    Note: This ends the tasks that inst_ha_scripts performs. You must perform the remaining steps in this section manually.

  6. Create a file named sybtab in the /var/opt/sybase directory on both nodes. The file must be identical on both nodes. Edit sybtab to contain:

    Use the following syntax for each entry:

    server_name:$SYBASE path
    

    where server_name is the name of the Adaptive Server or Backup Server.

    For example, the sybtab file for MONEY1 and PERSONNEL1 would look similar to:

    MONEY1:/SYBASE12_5
    MONEY1_back:/SYBASE12_5
    PERSONNEL1:/SYBASE12_5
    PERSONNEL1_back:/SYBASE12_5 
    SYBASE_ASE:ASE-12_5
    SYBASE_OCS:OCS-12_5
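    Because the fault monitor uses sybtab to locate each server's release directory, it can be worth checking that an entry resolves as you expect. The following sketch is an illustration of the file format, not part of the Sybase installation; it inlines the example contents shown above, whereas on a real node you would read /var/opt/sybase/sybtab directly:

```shell
# Sketch: look up the release directory registered for a server in a
# sybtab-format file.  Each entry is server_name:path, colon-separated.
# The contents below copy the MONEY1/PERSONNEL1 example; on a cluster
# node, replace the inline text with /var/opt/sybase/sybtab.
sybtab=$(cat <<'EOF'
MONEY1:/SYBASE12_5
MONEY1_back:/SYBASE12_5
PERSONNEL1:/SYBASE12_5
PERSONNEL1_back:/SYBASE12_5
SYBASE_ASE:ASE-12_5
SYBASE_OCS:OCS-12_5
EOF
)

# Print the release directory for MONEY1 (matches only the exact name,
# so MONEY1_back is not returned).
printf '%s\n' "$sybtab" | awk -F: '$1 == "MONEY1" { print $2 }'
```

    Run on the example file, this prints /SYBASE12_5, the $SYBASE path that the fault monitor will use for MONEY1.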
    
  7. Run the following command to make sure the logical hosts are running on both nodes:

    haget -f mastered
    

    haget returns the name of the logical host it is mastering. For example, if this command is run on FIN1, it returns:

    loghost-MONEY1
    
  8. If you have installed the $SYBASE directory on a multihost disk, create the setup files for the fault monitor. Copy the following directories (with their subdirectories) and files from $SYBASE to /var/opt/sybase:

    The ctlib.loc file appears in /var/opt/sybase and in /var/opt/sybase/locales/us_english/iso_1.

  9. Register the Sybase service using the hareg command. Run hareg on only one node of the cluster. As root, enter:

    hareg -s -r sybase -h 
        loghost-primary_companion,loghost-secondary_companion
    

    where loghost-primary_companion and loghost-secondary_companion are the two logical hosts defined on the primary and secondary nodes. For example, to register the Sybase service for primary companion MONEY1 and secondary companion PERSONNEL1:

    hareg -s -r sybase -h loghost-MONEY1,loghost-PERSONNEL1
    

    For more information about creating logical hosts and the hareg command, see the Sun documentation.

  10. Check the status of the Sybase service. As root, issue:

    hareg
    

    hareg should return:

    sybase  off
    

    If the output shows that the Sybase service is off, then, still as root, activate it:

    hareg -y sybase
    

    hareg returns:

    sybase  on
    
  11. Register the primary and secondary companions with the logical hosts by issuing the hasybase command on either node of the cluster:

    hasybase insert server_name loghost_name 60 10 120 300 srvlogin/srvpasswd /$SYBASE/$SYBASE_ASE/install/RUNSERVER_file_name 
    

    where:

  12. Issue the hasybase command to start the primary and secondary companions and invoke the monitors for both companion servers:

    hasybase start companion_name
    

    where companion_name is the name of the companion you want to start monitoring. For example, to begin monitoring MONEY1:

    hasybase start MONEY1
    

    Note: hasybase starts the companions automatically if they are not running when the command is issued.

When two Adaptive Servers are configured as asymmetric companions, you must start the monitor for the primary companion and set it to on, and stop the monitor for the secondary companion and set it to off. The secondary companion must be started with its RUNSERVER file; otherwise, failover from the primary to the secondary does not succeed when something goes wrong on the primary. For example, to configure MONEY1 and PERSONNEL1 as asymmetric companions with MONEY1 as the primary companion:

  1. On MONEY1, start monitoring MONEY1 (if MONEY1 is not running, it is started):

    hasybase start MONEY1
    
  2. On PERSONNEL1, start PERSONNEL1 with its RUNSERVER file:

    $SYBASE/$SYBASE_ASE/install/RUN_PERSONNEL1 &
    

When two Adaptive Servers are configured as symmetric companions, the monitors for both companions must be started; otherwise, failover does not succeed. For example, to configure MONEY1 and PERSONNEL1 as symmetric companions:

  1. On MONEY1, start monitoring MONEY1 (if MONEY1 is not running, it is started):

    hasybase start MONEY1
    
  2. On PERSONNEL1, start monitoring PERSONNEL1 (if PERSONNEL1 is not running, it is started):

    hasybase start PERSONNEL1
    

For more information about configuring Adaptive Server for failover, see “Configure companion servers for failover”.