Start the Cluster Edition with only one instance until
the upgrade is complete.
Follow these steps to manually upgrade your old server.
- Back up all old databases.
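How you dump databases varies by site; as a minimal sketch (the database names and the /backups path are hypothetical), you can generate the dump statements and pipe them into isql:

```shell
#!/bin/sh
# Sketch: emit a dump-database script for the old server before upgrading.
# The database list and the /backups path are examples; substitute your own.
gen_dumps() {
  for db in master model sybsystemprocs userdb1; do
    printf '1> dump database %s to "/backups/%s.dmp"\n2> go\n' "$db" "$db"
  done
}
gen_dumps
# Apply against the old server (not run here):
#   gen_dumps | isql -Usa -Ppassword -Sserver_name
```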
- Start the earlier version of Adaptive Server:
  - Change to the old $SYBASE directory:
    cd $SYBASE
  - Source SYBASE.sh (Bourne shell) or SYBASE.csh (C shell):
    source SYBASE.csh
  - Execute the runserver file:
    $SYBASE/$SYBASE_ASE/install/RUN_server_name
- In another window, change to the new $SYBASE directory.
- Source SYBASE.sh (Bourne shell) or SYBASE.csh (C shell) in the new $SYBASE directory:
    source SYBASE.csh
- Run the pre-upgrade test on the old server using the preupgrade utility, located in $SYBASE/$SYBASE_ASE/upgrade, where $SYBASE and $SYBASE_ASE are the values for the Cluster Edition. Do not change the default network packet size from 512 to 2048 until after the upgrade is complete.
  Note: If the default network packet size is set to 2048 during pre-upgrade, you cannot log in to finish the pre-upgrade on a 12.5.x server, because there is no way to tell preupgrade to use 2048 bytes as the packet size.
- Execute the following:
  $SYBASE/$SYBASE_ASE/upgrade/preupgrade -Sserver_name -Uusername -Ppassword -I $OLD_SYBASE/interfaces
  Where:
  - $SYBASE_ASE – is the Cluster Edition version of Adaptive Server.
  - If the -U option is omitted, the -P option specifies the system administrator's password.
- Correct all errors from the output of the pre-upgrade test. Rerun preupgrade until it succeeds without errors.
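What needs correcting depends entirely on your preupgrade output. As a hedged illustration, one common class of fix is raising a server configuration parameter with sp_configure; the parameter name and value below are placeholders, not a recommendation:

```shell
#!/bin/sh
# Hypothetical example only: preupgrade may report configuration values that
# are too low. Take the actual parameter and value from your preupgrade output.
fix_sql='1> sp_configure "number of open objects", 5000
2> go
'
printf '%s' "$fix_sql"
# Apply with:  printf '%s' "$fix_sql" | isql -Usa -Ppassword -Sserver_name
```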
- Restart the old Adaptive Server, if required.
- Run the reserved word check on the old Adaptive Server:
  - Install the Cluster Edition version of installupgrade:
    isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/scripts/installupgrade
  - Install the Cluster Edition version of usage.sql:
    isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/upgrade/usage.sql
- Log in to the old Adaptive Server and execute sp_checkreswords on all databases:
  1> use sybsystemprocs
  2> go
  1> sp_checkreswords
  2> go
- Correct any errors the reserved word check reveals.
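As a hypothetical example, if sp_checkreswords reports a column whose name became a reserved word, sp_rename can rename it; the database, table, and column names below are invented:

```shell
#!/bin/sh
# Invented example: rename a column called "lock" that sp_checkreswords
# flagged as a reserved word. Substitute the identifiers from your own report.
rename_sql='1> use userdb1
2> go
1> sp_rename "mytable.lock", "lock_col", "column"
2> go
'
printf '%s' "$rename_sql"
# Apply with:  printf '%s' "$rename_sql" | isql -Usa -Ppassword -Sserver_name
```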
- Shut down the old Adaptive Server.
- Copy the old Adaptive Server configuration file mycluster.cfg from the old $SYBASE directory to the new $SYBASE directory.
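A minimal sketch of the copy, assuming OLD_SYBASE and NEW_SYBASE are set to the old and new $SYBASE directories (the default paths below are placeholders):

```shell
#!/bin/sh
# Sketch with placeholder paths: copy the old server's configuration file
# into the new release directory. Set OLD_SYBASE and NEW_SYBASE to your
# actual old and new $SYBASE locations.
OLD_SYBASE=${OLD_SYBASE:-/opt/sybase_old}
NEW_SYBASE=${NEW_SYBASE:-/opt/sybase_ce}
if [ -f "$OLD_SYBASE/mycluster.cfg" ]; then
  cp "$OLD_SYBASE/mycluster.cfg" "$NEW_SYBASE/mycluster.cfg"
fi
```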
- If you are upgrading from a 15.0.1 Cluster Edition or 15.0.3 Cluster Edition server to a 15.5 Cluster Edition server, skip this step. Complete this step if you are upgrading from a nonclustered server. Create the cluster input file, for example mycluster.inp:
#all input files must begin with a comment
[cluster]
name = mycluster
max instances = 2
master device = /dev/raw/raw101
config file = /sybase/server_name.cfg
interfaces path = /sybase/
traceflags =
primary protocol = udp
secondary protocol = udp
[management nodes]
hostname = blade1
hostname = blade2
[instance]
id = 1
name = server_name
node = blade1
primary address = blade1
primary port start = 38456
secondary address = blade1
secondary port start = 38466
errorlog = /sybase/install/server_name.log
config file = /sybase/server_name.cfg
interfaces path = /sybase/
traceflags =
additional run parameters =
[instance]
id = 2
name = server_name_ns2
node = blade2
primary address = blade2
primary port start = 38556
secondary address = blade2
secondary port start = 38566
errorlog = /sybase/install/server_name_ns2.log
config file = /sybase/server_name.cfg
interfaces path = /sybase/
traceflags =
additional run parameters =
  For an example of what this input file must contain, see the Creating a Cluster Input File topic.
  Note: The first instance's server_name should be the name of the old server from which you are upgrading.
- If you are upgrading from a 15.0.1 Cluster Edition or 15.0.3 Cluster Edition server to a 15.5 Cluster Edition server, skip this step. Complete this step if you are upgrading from a nonclustered server. Add an entry to the interfaces file for each of the instances in your cluster input file (described in the previous step). See the Configuring the Interfaces File topic for more information.
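For the two instances in the sample input file above, the added interfaces entries might look like the following; the host names and starting ports are taken from the sample, while the network type ether is an assumption, so adjust both to your platform:

```
server_name
    master tcp ether blade1 38456
    query tcp ether blade1 38456

server_name_ns2
    master tcp ether blade2 38556
    query tcp ether blade2 38556
```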
- Complete this step if you are upgrading from a nonclustered server. If you are upgrading from a 15.0.1 Cluster Edition or 15.0.3 Cluster Edition server to a 15.5 Cluster Edition server, use the original quorum device and cluster input file, and specify --buildquorum=force to rebuild the quorum and override the existing one. Determine the raw device to use for the quorum device. For this version of the Cluster Edition, use a raw device on shared disks. Do not use a file-system device.
- Create the quorum device and start the new instance with the old master device:
  $SYBASE/$SYBASE_ASE/bin/dataserver \
  --instance=server_name \
  --cluster-input=mycluster.inp \
  --quorum-dev=/dev/raw/raw102 \
  --buildquorum \
  -M$SYBASE
  Note: The server_name you indicate with the --instance parameter must be the name of the server from which you are upgrading, and the interfaces file must contain an entry for this instance. Any additional options, such as -M, must be present in the RUN file, because dataserver does not read them from the quorum. For complete dataserver documentation, see the Clusters Users Guide.
- Run the upgrade utility, where instance_name is the first instance in your cluster, which has the same name as the server from which you are upgrading:
  $SYBASE/$SYBASE_ASE/upgrade/upgrade -S instance_name -Ppassword
- If you are upgrading from a 15.0.1 Cluster Edition or 15.0.3 Cluster Edition server to a 15.5 Cluster Edition server, skip this step. Log in to the instance. Create the local system temporary database devices and local system temporary databases for each of the instances in your cluster. The syntax is:
  create system temporary database database_name
  for instance instance_name on device_name = size
  See the Setting Up Local System Temporary Databases topic for more information.
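Using the instance names from the sample input file, the commands might look like this; the database names, the device name tempdbdev, and the size of 100 are examples only:

```
1> create system temporary database mycluster_tdb_1
2> for instance server_name on tempdbdev = 100
3> go
1> create system temporary database mycluster_tdb_2
2> for instance server_name_ns2 on tempdbdev = 100
3> go
```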
- Shut down the instance. Log in to the instance with isql and issue:
  shutdown instance_name
- Restart the cluster:
  $SYBASE/$SYBASE_ASE/bin/dataserver \
  --instance=server_name \
  --quorum-dev=/dev/raw/raw102 \
  -M$SYBASE
- Log in to the Cluster Edition and execute sp_checkreswords on all databases. For example, log in to the instance and execute:
  1> use sybsystemprocs
  2> go
  1> sp_checkreswords
  2> go
- Correct any errors from the reserved word check.
- Copy the old runserver file to the new directory and modify it. You must edit it to point to the binaries in the correct $SYBASE directories:
  - Add this argument to the runserver file:
    --quorum-dev=<path to the quorum device>
  - Remove any options whose information is now stored in the quorum device.
  See the Creating the Runserver Files topic for more information.
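After these edits, the RUN file for the first instance might look like this sketch, where the quorum device path follows the earlier example and the paths are placeholders for your installation:

```
#!/bin/sh
#
# RUN file sketch for the first instance (hypothetical layout).
$SYBASE/$SYBASE_ASE/bin/dataserver \
--instance=server_name \
--quorum-dev=/dev/raw/raw102 \
-M$SYBASE
```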
- Start each instance in the cluster:
  cd $SYBASE/$SYBASE_ASE/install
  startserver -fRUN_server_name
- Install the system procedures:
  isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/scripts/installmaster
- If Adaptive Server includes auditing, run installsecurity:
  isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/scripts/installsecurity
- Run installcommit:
  isql -Usa -Ppassword -Sserver_name -i$SYBASE/$SYBASE_ASE/scripts/installcommit