Upgrading from One Version of Adaptive Server Cluster Edition to Another

Use this manual method if you are upgrading from an earlier version of Adaptive Server Cluster Edition to version 15.7 Cluster Edition. Start the Cluster Edition with only one instance until the upgrade is complete.

Prerequisites

If you are upgrading to Adaptive Server Cluster Edition 15.7 SP100 from Adaptive Server versions through 15.x, perform the preparatory tasks before upgrading.

Preparatory tasks are not required if you are upgrading from Adaptive Server Cluster Edition version 15.x or higher.

Task
  1. Back up all old databases.
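    For example, a minimal sketch of dumping one database from isql; the dump file path is only a placeholder, and a Backup Server must be running:
    1> dump database master to "/backups/master.dmp"
    2> go
    Repeat for every database on the old server, including sybsystemprocs and all user databases.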
  2. Verify that the old installation is present and that you created a cluster with it:
    1. Start the earlier version of Adaptive Server:
      Move to the old $SYBASE directory:
      cd $SYBASE
    2. Run the source command on the SYBASE script file:
      • Bourne shell – source SYBASE.sh
      • C shell – source SYBASE.csh
    3. Execute the runserver file:
      $SYBASE/$SYBASE_ASE/install/RUN_server_name

      You can also use sybcluster to bring up an earlier version of an Adaptive Server cluster.

      The command line-based sybcluster utility allows you to create and manage a cluster. The utility uses the SCC Agent Framework to "plug in" to the Sybase Control Center remote command and control agent on each node in the cluster. The SCC agent processes the sybcluster commands that let you manage the cluster. See the Clusters Users Guide for detailed information about sybcluster, and Sybase Control Center for Adaptive Server for information about the SCC Agent Framework.

      If you are upgrading from:
      • Adaptive Server version 15.7 ESD #1 or later and you chose to install and configure the SCC remote command and control agent – SCC starts automatically when you start sybcluster.

        If you did not configure SCC, start it manually from $SYBASE/SCC-3_2/bin/scc.sh.

      • Versions of Adaptive Server earlier than 15.7 ESD #1 (such as 15.7 GA or 15.5) – start SCC manually from $SYBASE_UA/bin/uafstartup.sh.
      To start sybcluster, enter:
      sybcluster -U uafadmin -P password -C testcluster -F "ibmpoc01-p3:8888"
      > start cluster
    4. In another window, change to the new $SYBASE directory and source SYBASE.sh (Bourne shell) or SYBASE.csh (C shell).
  3. If you are upgrading from Adaptive Server version 12.5.4, run the reserved word check on the old Adaptive Server:
    1. Install the Cluster Edition version of installupgrade:
      isql -Usa -Ppassword -Sserver_name 
       -i$SYBASE/$SYBASE_ASE/scripts/installupgrade
    2. Install the Cluster Edition version of usage.sql:
      isql -Usa -Ppassword -Sserver_name 
       -i$SYBASE/$SYBASE_ASE/upgrade/usage.sql
    3. Log in to the old Adaptive Server and execute sp_checkreswords on all databases:
      1> use sybsystemprocs
      2> go
      1> sp_checkreswords
      2> go
    4. Correct any errors the reserved word check reveals.
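      For example, if the report lists a table whose name is now a reserved word, you can rename it with sp_rename before upgrading; the database and object names here are only placeholders:
      1> use user_database
      2> go
      1> sp_rename "old_table_name", "new_table_name"
      2> go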
  4. If your "sa" password is set to NULL, create a new password, as Adaptive Server 15.7 ESD #2 requires a password for the "sa" login.
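    For example, a sketch of setting a new password with sp_password while logged in as "sa" with a NULL password; the new password shown is only a placeholder:
    1> sp_password null, "new_sa_password"
    2> go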
  5. Shut down the old Adaptive Server using isql.
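    For example, connect with isql and issue shutdown (use shutdown cluster instead if the old server is itself a Cluster Edition cluster):
    isql -Usa -Ppassword -Sserver_name
    1> shutdown
    2> go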
  6. (Required only if you are upgrading from a nonclustered server) Create the cluster input file. For example, mycluster.inp:
    #all input files must begin with a comment
    
    [cluster]
    name = mycluster
    max instances = 2
    master device = /dev/raw/raw101
    config file = /sybase/server_name.cfg
    interfaces path = /sybase/
    traceflags =
    primary protocol = udp
    secondary protocol = udp
    
    [management nodes]
    hostname = blade1
    hostname = blade2
    
    [instance]
    id = 1
    name = server_name
    node = blade1
    primary address = blade1
    primary port start = 38456
    secondary address = blade1
    secondary port start = 38466
    errorlog = /sybase/install/server_name.log
    config file = /sybase/server_name.cfg
    interfaces path = /sybase/
    traceflags =
    additional run parameters =
    
    [instance]
    id = 2
    name = server_name_ns2
    node = blade2
    primary address = blade2
    primary port start = 38556
    secondary address = blade2
    secondary port start = 38566
    errorlog = /sybase/install/server_name_ns2.log
    config file = /sybase/server_name.cfg
    interfaces path = /sybase/
    traceflags =
    additional run parameters =
    For an example of what this input file must contain, see The Cluster Input File.
    Note: The first instance’s server_name should be the name of the old server from which you are upgrading.
  7. (Required only if you are upgrading from a nonclustered server) Add an entry to the interfaces file for each of the instances in your cluster input file (described in the previous step). See Configuring the Interfaces File.
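    For example, entries for the two instances in the input file above might look like this sketch; the port numbers are only placeholders and must be unused on each node, and in the actual file the master and query lines are indented (traditionally with a tab):
    server_name
        master tcp ether blade1 19786
        query tcp ether blade1 19786

    server_name_ns2
        master tcp ether blade2 19786
        query tcp ether blade2 19786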
  8. Create the quorum device and start the new instance with the old master device.
    $SYBASE/$SYBASE_ASE/bin/dataserver \
    --instance=server_name \
    --cluster-input=mycluster.inp \
    --quorum-dev=/dev/raw/raw102 \
    --buildquorum \
    -M$SYBASE
    Note: The server_name you indicate with the --instance parameter must be the name of the server from which you are upgrading, and the interfaces file must contain an entry for this instance. Any additional options, such as -M, must be present in the RUN_server file, because dataserver does not read them from the quorum device. For complete dataserver documentation, see the Clusters Users Guide.

    If you are upgrading from a 15.0.1 or 15.0.3 Cluster Edition to a Cluster Edition server version 15.5 or later, use the original quorum device and cluster input file, and specify --buildquorum=force to rebuild the quorum and override the existing one. Determine the raw device used for the quorum device; for the Cluster Edition, use a raw device on shared disks, not a file-system device.
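    In that case, the invocation from this step becomes (assuming the original input file and quorum device shown earlier):
    $SYBASE/$SYBASE_ASE/bin/dataserver \
    --instance=server_name \
    --cluster-input=mycluster.inp \
    --quorum-dev=/dev/raw/raw102 \
    --buildquorum=force \
    -M$SYBASE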

  9. (Skip this step if you are upgrading from a 15.0.1, 15.0.3, or 15.5 Cluster Edition to a 15.7 ESD #2 Cluster Edition server) Log in to the instance. Create the local system temporary database devices and local system temporary databases for each of the instances in your cluster. The syntax is:
    create system temporary database database_name
      for instance instance_name on device_name = size
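    For example, a sketch that creates a 100MB local system temporary database for the second instance; the database and device names are only placeholders, and the device must already exist:
    1> create system temporary database mycluster_tdb_2
    2> for instance server_name_ns2 on tempdbdev1 = 100
    3> go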
  10. Shut down the instance by logging in to it with isql and issuing:
    shutdown instance_name
  11. Restart the cluster.
    $SYBASE/$SYBASE_ASE/bin/dataserver \
    --instance=server_name \
    --quorum-dev=/dev/raw/raw102 \
    -M$SYBASE
  12. Log in to the Cluster Edition and execute sp_checkreswords on all databases. For example, log in to the instance and execute:
    1> use sybsystemprocs
    2> go
    1> sp_checkreswords
    2> go
  13. Correct any errors from the reserved word check.
  14. If you are upgrading from Adaptive Server Cluster Edition version 15.5 or earlier, create a RUN_server file with the quorum device, and run that file:
    1. Add this argument to the RUN_server file: --quorum-dev=<path to the quorum device>
    2. Remove these options, as the information is now stored in the quorum device.
      • -c
      • -i
      • -e

    If you are upgrading from Adaptive Server Cluster Edition version 15.7 or later, you should already have a RUN_server file. Run the file.
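    After these changes, the RUN_server file might look similar to this sketch; the quorum device path matches the one used earlier in this procedure and is only illustrative:
    #!/bin/sh
    # RUN_server_name -- starts instance server_name from the quorum device
    $SYBASE/$SYBASE_ASE/bin/dataserver \
    --instance=server_name \
    --quorum-dev=/dev/raw/raw102 \
    -M$SYBASE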

  15. Start each instance in the cluster:
    cd $SYBASE/$SYBASE_ASE/install
    startserver -fRUN_server_name
  16. Install the system procedures:
    isql -Usa -Ppassword -Sserver_name
     -i$SYBASE/$SYBASE_ASE/scripts/installmaster
  17. If Adaptive Server includes auditing, run installsecurity:
    isql -Usa -Ppassword -Sserver_name
     -i$SYBASE/$SYBASE_ASE/scripts/installsecurity
  18. Run installcommit:
    isql -Usa -Ppassword -Sserver_name
     -i$SYBASE/$SYBASE_ASE/scripts/installcommit
Related tasks
Setting Up Local System and Temporary Databases
Creating Runserver Files