Configuring a Cluster

Configure clusters to enhance performance by dividing processing work among a number of servers.

If you did not configure your cluster during installation, or if you want to create and configure a new cluster, follow these steps for every cluster node.

For each node in a cluster, you configure four basic sections of the configuration file—Controller, Manager, RPC, and Cache:
[...]
  <Controller enabled="true">
  </Controller>
  <Manager enabled="true" />
  <Rpc>
    <Host>dino</Host>
    <Port ssl="true">19011</Port>
    <AdminHost>dino</AdminHost>
    <AdminPort ssl="true">19111</AdminPort>
  </Rpc>
  <Cache>
    <Host>dino</Host>
    <Port>19001</Port>
    <Name>test-name-1</Name>
    <Password>test-password-1</Password>
    <Managers enabled="true">
      <Manager>dino:19001</Manager>
      <Manager>astro:19002</Manager>
      <Manager>scooby:19003</Manager>
    </Managers>
    <Persistence enabled="true">
      <Directory>${ESP_STORAGE}</Directory>
    </Persistence>
  </Cache>
[...]
Configuration varies based on whether you enable the node as a controller, a manager, or both. The node defined in the example above is enabled as both a manager and a controller. (This example and the others shown in this task come from the cluster configuration file for a UNIX-based installation of Event Stream Processor. In Windows, a node cannot be both manager and controller unless it is the only node in the cluster.)

In configuration files for manager nodes, the Cache section defines the cluster by identifying the managers that belong to the cluster’s shared cache.

  1. Open the configuration file from ${ESP_HOME}/cluster/nodes/<node-name>/<node-name>.xml on UNIX installations, or from %ESP_HOME%\cluster\nodes\<node-name>\<node-name>.xml on Windows installations.
  2. Provide a unique name for the node within the cluster.
    <Name>node1</Name>
    Note: Node names are case-insensitive.
  3. (Optional) Configure macro name and type elements.
    A macro is a configuration file shortcut for centralizing a repeated configuration or for acquiring properties from the environment.
    Permitted macro type entries are:
    • envar – the value is derived from the environment variable defined by the macro value.
    • sysproperty – the value is derived from the Java system property defined by the macro value.
    • value – the value specified is used.
    • prompt – when the cluster starts, the user is prompted for the value.
    <Macros>
      <Macro name="ESP_HOME" type="envar">ESP_HOME</Macro>
    </Macros>
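    For illustration only, here is a sketch showing one macro of each type. Apart from ESP_HOME, the macro names, system property, path, and prompt text are placeholders, not values from your installation:
    <Macros>
      <!-- envar: value comes from the ESP_HOME environment variable -->
      <Macro name="ESP_HOME" type="envar">ESP_HOME</Macro>
      <!-- sysproperty: value comes from the Java system property user.home (placeholder) -->
      <Macro name="USER_HOME" type="sysproperty">user.home</Macro>
      <!-- value: the literal text is used as the macro value (placeholder path) -->
      <Macro name="ESP_STORAGE" type="value">/opt/esp/storage</Macro>
      <!-- prompt: the user is prompted for the value when the cluster starts;
           the element content is assumed here to be the prompt text -->
      <Macro name="DATA_DIR" type="prompt">Enter the data directory:</Macro>
    </Macros>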
  4. (Optional) Configure system properties.
    System property values can reference macros; through macro expansion, these references are replaced with their literal values.
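    As a minimal sketch only, the following assumes a SystemProperties section that uses the same Property syntax shown in the ApplicationType example later in this topic; the section name and the property shown are illustrative assumptions, so check the installed <node-name>.xml for the exact element names:
    <SystemProperties>
      <!-- Hypothetical property; the ${ESP_HOME} macro reference is expanded to its literal value -->
      <Property name="example.storage.dir">${ESP_HOME}/storage</Property>
    </SystemProperties>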
  5. (Optional) Enable the controller:
    <Controller enabled="true">
  6. (Optional) In a UNIX installation, if you plan to use a Java Runtime Environment (JRE) other than the one provided with Event Stream Processor, perform this step for controller-enabled nodes.

    In the Controller section of <node-name>.xml, in the ApplicationTypes elements for both project and ha_project, locate the line that sets the ld-preload property. Set ld-preload to point to the jsig library file provided with your runtime environment. For example:

    <Property name="ld-preload">${ESP_HOME}/lib/libjsig.so</Property>

    The following example shows the project application type, with definitions for the base directory, host name, service configuration file, and security directory that the application uses.

    Note: In this example, StandardStreamLog enables stdstream.log, which logs all output written to stdout and stderr. This includes SySAM licensing information for Event Stream Processor, as well as messages from third-party applications that write to stdout and stderr. See Project Logging for more information about stdstream.log files.
    <ApplicationTypes>
      <ApplicationType name="project" enabled="true">
        <Class>com.sybase.esp.cluster.plugins.apptypes.Project</Class> 
        <StandardStreamLog enabled="true" /> 
        <Properties>
          <Property name="esp-home">${ESP_HOME}</Property> 
          <Property name="hostname">${ESP_HOSTNAME}</Property> 
          <Property name="ld-preload">${ESP_HOME}/lib/libjsig.so</Property>
          <Property name="services-file">${ESP_HOME}/bin/service.xml</Property> 
          <Property name="base-directory">${ESP_HOME}/cluster/projects/test-name-1</Property> 
          <Property name="ssl-key-file">${ESP_HOME}/cluster/keys/test-name-1</Property> 
        </Properties>
      </ApplicationType>
    </ApplicationTypes>
    
  7. (Optional) Enable the manager:
    <Manager enabled="true" />
  8. (Optional) Define the host node for the RPC port, which is used for all external communication with the node. Clients such as controllers, projects, SDKs, and the cluster admin tool all connect to the manager through the RPC port.
    If your machine has multiple NICs and you do not want to use the machine’s default interface (localhost), create a Host element in the Rpc section of <node-name>.xml and enter the name or IP address of an alternate interface. For example:
    <Host>126.55.44.33</Host>
    Note: If the machine is set to use a proxy server or is behind a firewall, you may be unable to start a project when ESP is configured to use a network card other than the default. In such cases, set the no_proxy environment variable to the names and IP addresses of every system that will communicate with ESP, plus localhost and 127.0.0.1. Use fully qualified domain names. For example, on a Linux machine named archer that does not communicate with any other system:
    no_proxy='localhost, 127.0.0.1, archer.meadow.com, 123.45.67.89'

    On a Windows machine, do not put quotes around the value you specify for no_proxy. So, on a Windows machine named fletcher that communicates with a system named archer:

    no_proxy=localhost, 127.0.0.1, archer.meadow.com, 123.45.67.89, fletcher.meadow.com, 123.45.67.88

    Also verify that there are no inconsistent entries in the /etc/hosts file. This configuration applies to any ESP client system as well as any ESP-to-ESP communication.

    If using the no_proxy environment variable causes communication issues such as HTTP Error 503: Service unavailable, you can disable the proxy by unsetting the http_proxy environment variable.

  9. Provide the RPC port value.
    <Port ssl="true">19011</Port>
  10. (Optional) Provide a separate Admin host name and Admin port value. Doing so allows you to distinguish between administrative and non-administrative users, and to restrict network access to administrative actions, which may be advantageous when you have firewalls in place.
    <AdminHost>dino</AdminHost>
    <AdminPort ssl="true">19111</AdminPort>
  11. For manager-enabled nodes, provide the cache port value.
    The port specified in the Port element of the Cache section is used by other manager nodes for communication related to the cluster’s shared cache.
    <Port>19001</Port>
  12. (Optional) To define the host node for the cache, modify the Host element in the Cache section of the file.
    The Host element uses the default name localhost. To allow cluster clients from other machines to connect, change the value of Host to the name of the machine on which the cluster node is running. For example:
    <Host>dino</Host>
    If your machine has multiple NICs and you do not want to use the machine’s default interface (localhost), enter a name or IP address in the Host element in the Cache section to specify the network interface you want cluster clients to use. For example:
    <Host>125.66.44.33</Host>

    If you specify a Host value in the Cache section of the file, it must be the same host (that is, the same name or IP address) that you give for this manager node in the Managers element.
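    For example, based on the configuration at the top of this topic, a node whose Cache Host is dino must also appear as dino in the Managers element:
    <Cache>
      <Host>dino</Host>
      <Port>19001</Port>
      [...]
      <Managers enabled="true">
        <Manager>dino:19001</Manager>
        <Manager>astro:19002</Manager>
        <Manager>scooby:19003</Manager>
      </Managers>
    </Cache>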

    Note: The proxy and firewall considerations described in step 8 for the RPC host apply here as well. If the machine uses a proxy server or sits behind a firewall, set the no_proxy environment variable (and, if necessary, unset http_proxy) as described in that step.

  13. In the Cache section, define a unique name and password for the cluster. To join the cache—and thus to join the cluster—all nodes must use the same name and password when they start.
      <Name>test-name-1</Name> 
      <Password>test-password-1</Password> 
    When the Password element has no attributes, as shown above, ESP uses the password contained in the element to start the node. You do not supply a password when you execute the start command. To prompt for a password when the node starts, use these attributes with the Password element:
    prompt
    • true – ESP prompts for the password when you try to start the node.
    • false – ESP does not prompt for a password; the hide, verify, and query attributes are ignored.
    hide
    • true – ESP does not display the password as you type it.
    • false – ESP displays the password.
    verify
    • true – ESP prompts for the password twice.
    • false – ESP prompts for the password once.
    query
    • If a query value is present, ESP uses it to prompt for the cluster password when you attempt to start the node.
    • If no query value is present, ESP uses default wording for the password prompt.
    The Password element with prompting enabled looks like this:
      <Password prompt="true" hide="true" verify="false" query="Cluster password:">test-password-1</Password> 
  14. (Optional; not recommended for production environments) To enable multicast delivery on manager-enabled nodes, set the Multicast enabled value to true and enter Group and Port values.
    <Multicast enabled="true">
      <Group>224.2.2.7</Group>
      <Port>54323</Port>
    </Multicast>
    All nodes in the cluster must have the same multicast status.
  15. (Optional; recommended for production environments) If multicast is not enabled, enable the Managers element and enter the host name and cache port of each manager node in the cluster.
    <Multicast enabled="false">
     [...] 
    </Multicast>
    <Managers enabled="true">
      <Manager>localhost:19001</Manager> 
    </Managers>
    
  16. For manager nodes, enable or disable cluster persistence.
    By default, persistence is enabled. To disable it, set <Persistence enabled="false">. Cluster persistence lets the node save projects and workspaces when it shuts down. When persistence is disabled, you lose all your projects when the last manager node in the cluster shuts down; therefore, in a production system, SAP recommends that you leave persistence enabled.
    Note: All nodes within a cluster must point to the same persistence directory.
    <Persistence enabled="true">
      <Directory>${ESP_STORAGE}</Directory>
    </Persistence>
  17. Move ESP files and directories that require a shared drive to a shared location so that other nodes in the cluster can access them. Set the path of the ESP_SHARED macro to this location.
    Note: See File and Directory Infrastructure for sharing requirements.
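    For example, a sketch assuming you define ESP_SHARED as a value macro; the path shown is a placeholder for your shared location:
    <Macros>
      <!-- Placeholder path: point this at the shared drive that all cluster nodes can access -->
      <Macro name="ESP_SHARED" type="value">/shared/esp</Macro>
    </Macros>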
  18. Repeat these steps for each node in the cluster.
Next
Configure security for each node, including authentication, access control, and SSL connections.
Related concepts
Cluster Persistence, Caching, and Multicast
Related tasks
Enabling and Disabling SSL
Related reference
Cluster Administrative Tool
File and Directory Infrastructure