Known Issues for Event Stream Processor Server

Learn about known issues and apply workarounds for the Server.

Server Issues
CR# 671971

By default, the RSA login uses the "SHA1withRSA" signature algorithm and the "MD5" digest algorithm. If you change the signature or digest algorithm in the cluster configuration, make the same changes to the SIGN_ALGORITHM and DIGEST_ALGORITHM environment variables.

Valid values for the SIGN_ALGORITHM environment variable are "SHA1withRSA" (default), "MD5withRSA", and "SHA1withDSA" (Java only). Valid values for the DIGEST_ALGORITHM environment variable are "MD5" (default) and "SHA1".
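
These values match standard Java Cryptography Architecture (JCA) algorithm names, so before changing the cluster configuration you can confirm that a name is available on your JVM. A minimal Java sketch using only standard JDK classes (the algorithm strings are the documented values above; the class name is illustrative):

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.Signature;

public class AlgorithmNameCheck {
    public static void main(String[] args) {
        // Candidate values for SIGN_ALGORITHM and DIGEST_ALGORITHM.
        String[] signAlgorithms = { "SHA1withRSA", "MD5withRSA", "SHA1withDSA" };
        String[] digestAlgorithms = { "MD5", "SHA1" };

        for (String name : signAlgorithms) {
            try {
                Signature.getInstance(name); // throws if the JVM lacks the algorithm
                System.out.println("signature algorithm available: " + name);
            } catch (NoSuchAlgorithmException e) {
                System.out.println("signature algorithm NOT available: " + name);
            }
        }
        for (String name : digestAlgorithms) {
            try {
                MessageDigest.getInstance(name);
                System.out.println("digest algorithm available: " + name);
            } catch (NoSuchAlgorithmException e) {
                System.out.println("digest algorithm NOT available: " + name);
            }
        }
    }
}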

CR# 674280

Avoid using retention with input windows that use a log store. The compiler does not flag this combination as an error, but the retention policy on such windows may not work as expected after recovery.

CR# 674786

Suppose a join window uses a log store and has two source windows: one that uses a log store, and another that uses a memory store but is derived from a window that uses a log store. If the ESP Server crashes, the data in the join window is recovered up to the point of the crash, but records uploaded to the ESP Server afterward are not joined. When all of the source windows use log stores, the same uploaded records are joined correctly.

CR# 675321

If you are using an external function that returns a string, do not assign that string to the arena. The Server releases the string's memory when it finishes processing the record, so an unrecoverable error occurs when the Server later tries to release that memory again.

CR# 715362

To reduce memory-consumption growth when you allocate a vector or a dictionary with 'new' and then pass it to the getData() function, first use isnull to check whether the variable is already allocated, as follows:

// Allocate only on first use; later events reuse the same allocation.
if (isnull(vector_var))
    vector_var = new vector(...)
...
// Pass the reused variable to getdata() instead of a fresh allocation.
getdata(vector_var, ...)
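
The same guard-before-allocate idea, sketched in Java for comparison (the class and method names are illustrative only, not part of any ESP API): allocate the container once, then reuse it on every event rather than constructing a new one each time.

import java.util.ArrayList;
import java.util.List;

public class ReusableBufferExample {
    // Reused across events; allocated on first use only.
    private List<String> buffer;

    // Hypothetical per-event callback, mirroring the isnull guard above.
    public void onEvent(String value) {
        if (buffer == null) {                 // same role as isnull(vector_var)
            buffer = new ArrayList<String>(); // same role as 'new vector(...)'
        }
        buffer.clear();                       // reuse the existing allocation
        buffer.add(value);
        // ... hand 'buffer' to the consumer, as getdata(vector_var, ...) does above
    }
}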

CR# 725950, 731446

If one or more projects in a cluster repeatedly shut down unexpectedly, you might need to increase the heartbeat timeout value the cluster uses to ensure that projects are running. The default value is 7500 milliseconds (7.5 seconds).

Workaround:
  1. To confirm that the heartbeat timeout is the problem, check the project log (ESP_HOME/cluster/projects/<cluster-name>/<workspace-name>.<project-name>.<instance-number>/esp_server.log or, in a Studio cluster, <user's-home-dir>/SybaseESP/5.1/workspace/<workspace-name>.<project-name>.<instance-number>/esp_server.log). Look for a 720011 message that includes a last contact delta value; a sketch for scanning the log appears after these steps. The message looks similar to this:
     2013-02-22 01:20:55.036 | 12611 | container | [SP-2-720011] (5741.829)
          sp(12589) Manager.heartbeatApplication() asked to stop. Last contact delta=7568   

    The delta value is the time, in milliseconds, that elapsed since the project and the cluster last contacted each other. If the delta value is close to or larger than the heartbeat timeout value, try increasing the heartbeat timeout.

  2. In an editor, open the node configuration file, ESP_HOME/cluster/nodes/<node-name>/<node-name>.xml.
  3. Replace the Manager element with this code:
    <Manager enabled="true">
      <!-- The ApplicationHeartbeatTimeout node is optional -->
      <!-- The first Manager in the cluster determines the value cluster-wide -->
      <!-- The value is in milliseconds -->
      <ApplicationHeartbeatTimeout>7500</ApplicationHeartbeatTimeout>
    </Manager>
    
    Note: This is the top-level Manager element, not a Manager element in the Cache | Managers section.
  4. Replace the default value of ApplicationHeartbeatTimeout, 7500, with a value larger than the last contact delta found in your log. For example, to increase the timeout to 15 seconds, enter 15000.
  5. Copy the new Manager section into the <node-name>.xml file for every manager node in the cluster; a sketch that scripts this edit appears after these steps.
  6. Restart the cluster, shutting down controller-only nodes first, then manager nodes, and starting all the manager nodes before the controller-only nodes. See the Configuration and Administration Guide for detailed instructions.
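
To automate the log check in step 1, you can scan esp_server.log for the delta values carried by 720011 messages and report the largest one. A small Java sketch, assuming the message format shown above (the log path and the current timeout are passed as arguments; the class name is illustrative):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeartbeatDeltaScan {
    // Matches "Last contact delta=7568" as it appears in [SP-2-720011] messages.
    private static final Pattern DELTA = Pattern.compile("Last contact delta=(\\d+)");

    public static void main(String[] args) throws IOException {
        Path log = Paths.get(args[0]);            // path to esp_server.log
        long timeoutMs = Long.parseLong(args[1]); // current ApplicationHeartbeatTimeout
        long maxDelta = 0;

        for (String line : Files.readAllLines(log, StandardCharsets.UTF_8)) {
            // Match the delta text directly, in case the 720011 message
            // wraps across lines as in the example above.
            Matcher m = DELTA.matcher(line);
            if (m.find()) {
                maxDelta = Math.max(maxDelta, Long.parseLong(m.group(1)));
            }
        }

        System.out.println("largest last contact delta: " + maxDelta + " ms");
        if (maxDelta >= timeoutMs) {
            // Pick a value comfortably above the worst observed delta.
            System.out.println("consider a timeout of about " + (2 * maxDelta) + " ms");
        }
    }
}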
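
Steps 4 and 5 can also be scripted. The sketch below rewrites ApplicationHeartbeatTimeout in one node configuration file using the JDK's DOM API; run it once per manager node. It assumes the element layout shown in step 3 (the file path and new value are arguments; the class name is illustrative):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class BumpHeartbeatTimeout {
    public static void main(String[] args) throws Exception {
        File nodeXml = new File(args[0]); // ESP_HOME/cluster/nodes/<node-name>/<node-name>.xml
        String newValue = args[1];        // for example, "15000"

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(nodeXml);

        // Assumes the top-level Manager element already contains
        // ApplicationHeartbeatTimeout, as added in step 3; if the element
        // is absent, nothing is changed.
        NodeList nodes = doc.getElementsByTagName("ApplicationHeartbeatTimeout");
        for (int i = 0; i < nodes.getLength(); i++) {
            nodes.item(i).setTextContent(newValue);
        }

        // Write the updated document back in place.
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(nodeXml));
    }
}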