All instances in the cluster share a single Job Scheduler. Set up Job Scheduler so that, if the instance on which it is running fails, it can fail over to another node.
Create the Job Scheduler tables by running the installjsdb script:

isql -Usa -Psa_password -Sservername -i $SYBASE/$SYBASE_ASE/scripts/installjsdb
Create a directory services entry for JSAGENT in the interfaces file, with a master and query row for each node. For example:

mycluster_JSAGENT
	master tcp /dev/tcp node_name1 17780
	query tcp /dev/tcp node_name1 17780
	master tcp /dev/tcp node_name2 16780
	query tcp /dev/tcp node_name2 16780

The host name in each row must match the name returned by the uname -n command executed at the UNIX prompt on that node. For example, on host "myxml1," uname -n returns the value "myxml1.sybase.com," and on host "myxml2," uname -n returns the value "myxml2.sybase.com." The correct entry for JSAGENT is:
mycluster_JSAGENT
	master tcp /dev/tcp myxml1.sybase.com 17780
	query tcp /dev/tcp myxml1.sybase.com 17780
	master tcp /dev/tcp myxml2.sybase.com 16780
	query tcp /dev/tcp myxml2.sybase.com 16780
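The host-name rule above can be checked mechanically before starting the cluster. A minimal sketch in POSIX shell; the function name and the interfaces-file layout it assumes (tab-indented master/query rows with the host in the fourth field) are illustrative, not part of the product:

```shell
# Check that a given host name appears in the host field of the
# master/query rows of an interfaces file (illustrative layout).
check_jsagent_hosts() {
  interfaces_file=$1
  expected_host=$2
  if grep -E '^[[:space:]]+(master|query)[[:space:]]' "$interfaces_file" \
      | awk '{print $4}' | grep -qx "$expected_host"; then
    echo "OK: $expected_host listed in $interfaces_file"
  else
    echo "MISSING: $expected_host not listed in $interfaces_file"
  fi
}

# On each node, compare against what uname -n actually reports:
# check_jsagent_hosts "$SYBASE/interfaces" "$(uname -n)"
```

Running this on every node catches a short host name (myxml1) silently entered where uname -n reports the fully qualified one (myxml1.sybase.com).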
For example, if the interfaces file entries for the instances are:

INSTANCE_1
	master tcp /dev/tcp asekernel1.sybase.com 17700
	query tcp /dev/tcp asekernel1.sybase.com 17700
INSTANCE_2
	master tcp /dev/tcp asekernel2 16700
	query tcp /dev/tcp asekernel2 16700

the corresponding JSAGENT entry is:
mycluster_JSAGENT
	master tcp /dev/tcp asekernel1.sybase.com 17780
	query tcp /dev/tcp asekernel1.sybase.com 17780
	master tcp /dev/tcp asekernel2 16780
	query tcp /dev/tcp asekernel2 16780
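Because each node contributes an identical master/query pair, the JSAGENT entry can be generated rather than typed by hand. A hedged sketch in POSIX shell; the function name and the host:port argument format are assumptions for illustration, and each port you pass must be unused on its node:

```shell
# Emit a JSAGENT interfaces entry, given an entry name and one
# host:port pair per node (illustrative helper, not a product tool).
make_jsagent_entry() {
  name=$1; shift
  printf '%s\n' "$name"
  for pair in "$@"; do
    host=${pair%:*}      # everything before the last colon
    port=${pair##*:}     # everything after the last colon
    printf '\tmaster tcp /dev/tcp %s %s\n' "$host" "$port"
    printf '\tquery tcp /dev/tcp %s %s\n' "$host" "$port"
  done
}

# make_jsagent_entry mycluster_JSAGENT asekernel1.sybase.com:17780 asekernel2:16780
```

Generating the block avoids the most common mistake in hand-edited interfaces files: a master row and its query row disagreeing on host or port.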
Create a sysservers entry named SYB_JSAGENT that points to the JSAGENT directory services entry:

sp_addserver SYB_JSAGENT, null, mycluster_JSAGENT
go
Enable Job Scheduler:

sp_configure "enable job scheduler", 1
go
To start Job Scheduler without restarting the server, enter:

use sybmgmtdb
go
sp_js_wakeup "start_js", 1
go
To determine the instance on which Job Scheduler is running, enter:

select @@jsinstanceid
go
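When scripting this check, the bare instance id can be pulled out of the isql output. A sketch assuming typical isql column formatting (a dashed header, the value, then a row-count line); the login, password, and server names in the commented example are placeholders:

```shell
# Extract the first all-numeric line (the @@jsinstanceid value)
# from isql output read on stdin.
get_js_instance_id() {
  grep -E '^[[:space:]]*[0-9]+[[:space:]]*$' | head -n 1 | tr -d '[:space:]'
}

# Example (placeholder credentials and server name):
# isql -Usa -Psa_password -Smycluster <<'EOF' | get_js_instance_id
# select @@jsinstanceid
# go
# EOF
```

A monitoring script can compare the returned id against the expected instance and raise an alert when Job Scheduler has failed over.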