Upgrading from One Version of Adaptive Server Cluster Edition to Another

Use this manual method if you are upgrading from an earlier version of Adaptive Server Cluster Edition to version 15.7 Cluster Edition. Start the Cluster Edition with only one instance until the upgrade is complete.

  1. If you are upgrading to Adaptive Server Cluster Edition 15.7 ESD #2 from Adaptive Server versions 12.5.4 through 15.x, perform the preupgrade tasks.

    Preupgrade tasks are not required if you are upgrading from Adaptive Server version 15.x or later.

  2. Back up all old databases.
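    For example, each database can be dumped from isql through the Backup Server; the dump device paths shown here are hypothetical placeholders:

    ```sql
    1> dump database master to "/backups/master.dmp"
    2> go
    1> dump database sybsystemprocs to "/backups/sybsystemprocs.dmp"
    2> go
    ```

    Repeat for each user database on the old server.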
  3. Verify that you have the old installation, then install the new server into its own installation directory:
    1. Start the earlier version of Adaptive Server:
      Move to the old $SYBASE directory:
      cd $SYBASE
    2. Run the source command on the SYBASE script file:
      • Bourne shell – source SYBASE.sh
      • C shell – source SYBASE.csh
    3. Execute the runserver file:
      $SYBASE/$SYBASE_ASE/install/RUN_server_name
      You can also use sybcluster to bring up an earlier version of an Adaptive Server cluster. For example:
      1. Enter $SYBASE_UA/bin/uafstartup.sh
      2. Start sybcluster:
        sybcluster -U uafadmin -P password -C testcluster -F "ibmpoc01-p3:8888"
        > start cluster
    4. In another window, change to the new $SYBASE directory and source SYBASE.sh (Bourne shell) or SYBASE.csh (C shell).
  4. Run the reserved word check on the old Adaptive Server:
    1. Install the Cluster Edition version of installupgrade:
      isql -Usa -Ppassword -Sserver_name 
       -i$SYBASE/$SYBASE_ASE/scripts/installupgrade
    2. Install the Cluster Edition version of usage.sql:
      isql -Usa -Ppassword -Sserver_name 
       -i$SYBASE/$SYBASE_ASE/upgrade/usage.sql
    3. Log in to the old Adaptive Server and execute sp_checkreswords on all databases:
      1> use sybsystemprocs
      2> go
      1> sp_checkreswords
      2> go
    4. Correct any errors the reserved word check reveals.
  5. If your "sa" password is set to NULL, create a new password, as Adaptive Server 15.7 ESD #2 requires a password for the "sa" login.
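    For example, from isql you can set a new password with sp_password; the first argument is the caller's current password (here NULL), and the new password shown is a hypothetical placeholder:

    ```sql
    1> sp_password null, "newPassword"
    2> go
    ```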
  6. Shut down the old Adaptive Server using isql.
  7. Copy the old Adaptive Server mycluster.cfg configuration file from the old $SYBASE directory to the new $SYBASE directory.
  8. (Required only if you are upgrading from a nonclustered server) Create the cluster input file; for example, mycluster.inp:
    #all input files must begin with a comment
    
    [cluster]
    name = mycluster
    max instances = 2
    master device = /dev/raw/raw101
    config file = /sybase/server_name.cfg
    interfaces path = /sybase/
    traceflags =
    primary protocol = udp
    secondary protocol = udp
    
    [management nodes]
    hostname = blade1
    hostname = blade2
    
    [instance]
    id = 1
    name = server_name
    node = blade1
    primary address = blade1
    primary port start = 38456
    secondary address = blade1
    secondary port start = 38466
    errorlog = /sybase/install/server_name.log
    config file = /sybase/server_name.cfg
    interfaces path = /sybase/
    traceflags =
    additional run parameters =
    
    [instance]
    id = 2
    name = server_name_ns2
    node = blade2
    primary address = blade2
    primary port start = 38556
    secondary address = blade2
    secondary port start = 38566
    errorlog = /sybase/install/server_name_ns2.log
    config file = /sybase/server_name.cfg
    interfaces path = /sybase/
    traceflags =
    additional run parameters =
    For an example of what this input file must contain, see The Cluster Input File.
    Note: The first instance’s server_name should be the name of the old server from which you are upgrading.
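    As a sketch, an abbreviated version of this input file can be generated and sanity-checked with a short shell script; the host names, master device path, and server names are the placeholder values from the example above and must be adjusted for your site:

    ```shell
    #!/bin/sh
    # Sketch: write an abbreviated cluster input file and sanity-check it.
    # Host names, the master device path, and server names are placeholders
    # taken from the example above; adjust them for your site.
    cat > mycluster.inp <<'EOF'
    #all input files must begin with a comment
    [cluster]
    name = mycluster
    max instances = 2
    master device = /dev/raw/raw101
    [management nodes]
    hostname = blade1
    hostname = blade2
    [instance]
    id = 1
    name = server_name
    node = blade1
    [instance]
    id = 2
    name = server_name_ns2
    node = blade2
    EOF
    # The input file must begin with a comment line.
    head -n 1 mycluster.inp | grep -q '^#' || { echo 'missing leading comment'; exit 1; }
    # The number of [instance] sections must not exceed "max instances".
    instances=$(grep -c '^\[instance\]' mycluster.inp)
    maxinst=$(sed -n 's/^max instances *= *//p' mycluster.inp)
    [ "$instances" -le "$maxinst" ] || { echo 'too many instances'; exit 1; }
    echo "instances=$instances max=$maxinst"
    ```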
  9. (Required only if you are upgrading from a nonclustered server) Add an entry to the interfaces file for each of the instances in your cluster input file (described in the previous step). See Configuring the Interfaces File.
  10. Create the quorum device and start the new instance with the old master device.
    $SYBASE/$SYBASE_ASE/bin/dataserver \
    --instance=server_name \
    --cluster-input=mycluster.inp \
    --quorum-dev=/dev/raw/raw102 \
    --buildquorum \
    -M$SYBASE
    Note: The server_name you indicate with the --instance parameter must be the name of the server from which you are upgrading, and the interfaces file must contain an entry for this instance. Any additional options, such as -M, must be present in the runserver file, because the dataserver does not read them from the quorum device. For complete dataserver documentation, see the Clusters Users Guide.

    If you are upgrading from a 15.0.1 or 15.0.3 Cluster Edition to a Cluster Edition server version 15.5 or later, use the original quorum device and cluster input file, and specify --buildquorum=force to rebuild the quorum device and override the existing one. Determine the raw device used for the quorum device; for this version of the Cluster Edition, use a raw device on shared disks, not a file-system device.

  11. (Skip this step if you are upgrading from a 15.0.1, 15.0.3, or 15.5 Cluster Edition to a 15.7 ESD #2 Cluster Edition server) Log in to the instance. Create the local system temporary database devices and local system temporary databases for each of the instances in your cluster. The syntax is:
    create system temporary database database_name
        for instance instance_name on device_name = size
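    For example, for the two instances in the sample cluster input file, assuming a shared device named tempdev (a hypothetical name) and 100MB databases:

    ```sql
    1> create system temporary database mycluster_tdb_1
    2> for instance server_name on tempdev = 100
    3> go
    1> create system temporary database mycluster_tdb_2
    2> for instance server_name_ns2 on tempdev = 100
    3> go
    ```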
  12. Shut down the instance. Log in to the instance with isql and issue:
    shutdown instance_name
  13. Restart the cluster.
    $SYBASE/$SYBASE_ASE/bin/dataserver \
    --instance=server_name \
    --quorum-dev=/dev/raw/raw102 \
    -M$SYBASE
  14. Log in to the Cluster Edition and execute sp_checkreswords on all databases. For example, log in to the instance and execute:
    1> use sybsystemprocs
    2> go
    1> sp_checkreswords
    2> go
  15. Correct any errors from the reserved word check.
  16. Copy the old run_server file to the new directory and modify it so that it points to the binaries in the correct $SYBASE directories:
    1. Add this argument to the run_server file: --quorum-dev=<path to the quorum device>
    2. Remove these options, as the information is now stored in the quorum device.
      • -c
      • -i
      • -e
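    As a sketch, both edits can be applied with GNU sed; the old installation path (/old_sybase), the runserver file name, and the quorum device below are hypothetical placeholders:

    ```shell
    #!/bin/sh
    # Sketch: adapt an old runserver file for the Cluster Edition with GNU sed.
    # The old installation path (/old_sybase), the file name, and the quorum
    # device are hypothetical placeholders.
    cat > RUN_server_name <<'EOF'
    #!/bin/sh
    /old_sybase/ASE-15_0/bin/dataserver \
    -sserver_name \
    -c/old_sybase/server_name.cfg \
    -i/old_sybase \
    -e/old_sybase/install/server_name.log \
    -M/old_sybase
    EOF
    # Point at the new binaries, drop -c, -i, and -e (that information is now
    # stored in the quorum device), and add --quorum-dev.
    sed -i \
        -e 's|/old_sybase/ASE-15_0/bin/dataserver|$SYBASE/$SYBASE_ASE/bin/dataserver|' \
        -e '/^-c/d' -e '/^-i/d' -e '/^-e/d' \
        -e 's|^-M.*|--quorum-dev=/dev/raw/raw102 \\\n-M$SYBASE|' \
        RUN_server_name
    cat RUN_server_name
    ```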
  17. Start each instance in the cluster:
    cd $SYBASE/$SYBASE_ASE/install
    startserver -fRUN_server_name
  18. Install the system procedures:
    isql -Usa -Ppassword -Sserver_name
     -i$SYBASE/$SYBASE_ASE/scripts/installmaster
  19. If Adaptive Server includes auditing, run installsecurity:
    isql -Usa -Ppassword -Sserver_name
     -i$SYBASE/$SYBASE_ASE/scripts/installsecurity
  20. Run installcommit:
    isql -Usa -Ppassword -Sserver_name
     -i$SYBASE/$SYBASE_ASE/scripts/installcommit
Related tasks
Setting Up Local System and Temporary Databases
Creating Runserver Files