Installing Job Scheduler

All instances in the cluster share a single Job Scheduler. Set up Job Scheduler so that, if the instance on which it is running fails, Job Scheduler can fail over to another node.

  1. Create a device called sybmgmtdev with a size of at least 90MB on a shared raw device that is accessible to all instances in the cluster.
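    For example, a minimal disk init sketch for creating the device; the physical path /dev/raw/raw10 is a placeholder for your site's shared raw device:
    disk init
        name = 'sybmgmtdev',
        physname = '/dev/raw/raw10',
        size = '90M'
    go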
  2. Run the installjsdb script:
    isql -Usa -Psa_password -Sservername -i $SYBASE/$SYBASE_ASE/scripts/installjsdb
    Note: You must have the directory containing the isql executable ($SYBASE/$SYBASE_OCS/bin) in your path.
    The installjsdb script looks for the sybmgmtdb database. If it exists, the script creates Job Scheduler tables and stored procedures. If it does not exist, the script looks for a sybmgmtdev device on which to create the sybmgmtdb database, tables, and stored procedures.
    Note: If the installjsdb script finds neither a sybmgmtdev device nor a sybmgmtdb database, it creates a sybmgmtdb database on the master device. SAP strongly recommends that you remove the sybmgmtdb database from the master device to make recovery easier in the case of a disk failure.
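    To confirm which device the database was created on, sp_helpdb lists the device fragments for sybmgmtdb:
    sp_helpdb sybmgmtdb
    go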
  3. Create a directory services entry for the JSAGENT in the interfaces file using dscp, dsedit, or a text editor as appropriate. SAP suggests that you name the entry “clustername_JSAGENT”.
    To enable high availability failover, the JSAGENT entry must contain master and query rows for each node in the cluster. For example, to add a JSAGENT entry for the cluster “mycluster” with two nodes, the syntax might be:
    mycluster_JSAGENT
        master tcp /dev/tcp node_name1 17780
        query tcp /dev/tcp node_name1 17780
        master tcp /dev/tcp node_name2 16780
        query tcp /dev/tcp node_name2 16780
    The host name must match the name returned by the uname -n command executed at the UNIX prompt. For example, on host “myxml1,” uname -n returns the value “myxml1.sybase.com,” and on host “myxml2,” uname -n returns the value “myxml2.sybase.com.” The correct entry for JSAGENT is:
    mycluster_JSAGENT
        master tcp /dev/tcp myxml1.sybase.com 17780
        query tcp /dev/tcp myxml1.sybase.com 17780
        master tcp /dev/tcp myxml2.sybase.com 16780
        query tcp /dev/tcp myxml2.sybase.com 16780
    The host names in the JSAGENT entry must be identical to the host names used in the instance entries. For example, if instance 1 is registered with “asekernel1.sybase.com” and instance 2 with “asekernel2”:
    INSTANCE_1
        master tcp /dev/tcp asekernel1.sybase.com 17700
        query tcp /dev/tcp asekernel1.sybase.com 17700
    INSTANCE_2
        master tcp /dev/tcp asekernel2 16700
        query tcp /dev/tcp asekernel2 16700
    The correct JSAGENT entry is:
    mycluster_JSAGENT
        master tcp /dev/tcp asekernel1.sybase.com 17780
        query tcp /dev/tcp asekernel1.sybase.com 17780
        master tcp /dev/tcp asekernel2 16780
        query tcp /dev/tcp asekernel2 16780
    Note: You must specify a port that is not currently in use.
    See Directory Services in the System Administration Guide.
  4. Using sp_addserver, create an entry in the sysservers table for the cluster. For example:
    sp_addserver SYB_JSAGENT, null, mycluster_JSAGENT
    See the Reference Manual: Commands for more information about sp_addserver.
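    To verify the entry, sp_helpserver reports the server name and network name recorded in sysservers:
    sp_helpserver SYB_JSAGENT
    go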
  5. Enable Job Scheduler:
    sp_configure "enable job scheduler", 1
  6. To start Job Scheduler, you can either restart the server or execute:
    use sybmgmtdb
    go
    sp_js_wakeup "start_js", 1
    go
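    As a rough check that Job Scheduler tasks have started, you can query master..sysprocesses; the exact program_name values can vary by version, so treat this pattern as an assumption:
    select spid, program_name
    from master..sysprocesses
    where program_name like "JS%"
    go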
  7. To determine the instance on which Job Scheduler is running, query the global variable @@jsinstanceid:
    select @@jsinstanceid
    go