Best practices for multiple node configuration.
How Many Manager and Controller Nodes?

- If you are not concerned about failure recovery or load sharing, a single-node cluster may be enough.
- When you add nodes to the cluster, you can add them on the same host machine as the first node or on different hosts. In a production environment, Sybase recommends that you install additional nodes on different hosts, with no more than one manager and one controller per host. This allows you to take advantage of the load sharing and failure recovery features offered by the clustering architecture.
- When you add the first few nodes to a cluster, Sybase recommends that you maintain a one-to-one ratio of managers to controllers. Once you have four manager nodes, the benefit of adding more diminishes. In a medium-sized or large cluster, there are typically more controller nodes than manager nodes; add more controllers as your portfolio of projects grows.
- If you plan to use the failover feature for failure recovery, Sybase recommends that you configure at least three managers and three controllers in your cluster.
 
Configuring Multiple Nodes in a Cluster

- Set the same cache name and password for all manager nodes in a cluster to access the cache.
- Specify unique names for all nodes in their cluster configuration files.
- Define no more than one manager per host in each cluster.
- Set a common base directory for projects so that all project log store files are saved to a common location.
- Set a common persistence directory across all managers in a cluster. If one manager node in a cluster is enabled for persistence, all managers must be enabled for persistence.
- Reference common security files. All nodes must have the same security configuration, and all nodes require a keystore regardless of authentication mode. All manager nodes in a cluster share common configuration files, including keystore, LDAP, Kerberos, and RSA files. These files, which are located by default in ESP_HOME/security, must reside in a shared location that all nodes in the cluster can access.
- Put input files and output file destinations in a shared location if the project needs to be able to fail over to a controller on another machine. If the project does not need to fail over, set controller affinities to limit which nodes the project can run on, and store input files and output file destinations on those nodes only.
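The rules above can be summarized in a sketch of a single node's cluster configuration. Note that the element and attribute names here are illustrative assumptions, not the exact ESP configuration schema; consult the configuration reference for your ESP version for the real syntax. The paths and credentials are hypothetical.

```xml
<!-- Illustrative sketch only: element names are assumptions, not the exact ESP schema. -->
<Node name="manager1">                          <!-- unique name for each node in the cluster -->
  <Cache name="mycluster" password="secret"/>   <!-- same cache name/password on every manager -->
  <Manager enabled="true"/>                     <!-- at most one manager per host, per cluster -->
  <Controller enabled="true">
    <BaseDirectory>/shared/esp/projects</BaseDirectory>  <!-- common base directory for project log stores -->
  </Controller>
  <Persistence directory="/shared/esp/persistence"/>     <!-- same path across all managers -->
  <Security directory="/shared/esp/security"/>           <!-- shared keystore, LDAP, Kerberos, RSA files -->
</Node>
```

Whatever the actual syntax, the key points carry over: only the node name varies from node to node, while the cache credentials, base directory, persistence directory, and security file location are identical across the cluster and point at storage every node can reach.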