SAP HANA Output Adapter

The SAP HANA Output adapter loads information rapidly from Event Stream Processor into a SAP HANA database.

The SAP HANA Output adapter uses multiple parallel ODBC connections to load information into the SAP HANA server. This adapter does not support guaranteed delivery, and on UNIX platforms, only 64-bit ODBC drivers and driver managers are supported.

Prerequisites

Before you run the adapter:
  • Install revision 1.00.37 or higher of SAP HANA.
  • Install an ODBC driver manager and the SAP HANA ODBC client.
  • If you are running on UNIX, use unixODBC 2.3.0 or higher:
    • If you are using version 2.3.0, add “Threading=0” in the odbcinst.ini file to ensure optimal adapter performance (see the setup sketch after this list).
    • If you are using version 2.3.1, create a symbolic link under <2.3.1 installation folder>/lib as follows:
      ln -s libodbc.so.2.0.0 libodbc.so.1

      This link is required because ESP links against libodbc.so.1, which unixODBC 2.3.1 has renamed to libodbc.so.2. With the link in place, ESP uses the 2.3.1 library (libodbc.so.2).

  • In a Linux environment, specify the SAP HANA client libraries in your LD_LIBRARY_PATH.
  • To connect to SAP HANA through SSL on Linux, use OpenSSL 0.9.8 and ensure the symbolic links /usr/lib64/libssl.so and /usr/lib64/libcrypto.so link to the corresponding libraries of OpenSSL 0.9.8.
  • To connect to SAP HANA through SSL on Solaris, use OpenSSL 0.9.8 and:
    • Ensure the symbolic links /usr/sfw/lib/64/libssl.so and /usr/sfw/lib/64/libcrypto.so link to the corresponding libraries of OpenSSL 0.9.8.
    • Remove or rename the original libraries /usr/sfw/lib/64/libssl.so.0.9.7 and /usr/sfw/lib/64/libcrypto.so.0.9.7.
    • Before starting the cluster node, set the environment variable LD_LIBRARY_PATH_64=$ESP_HOME/lib:/usr/sfw/lib/64/:[unixODBC installation]/lib:$LD_LIBRARY_PATH_64.
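
The following sketch consolidates the UNIX setup steps above. The unixODBC and SAP HANA client installation paths shown here are illustrative; substitute the paths used on your system.

# unixODBC 2.3.0 only: disable ODBC-level thread locking for better adapter
# throughput by adding Threading=0 to the HANA driver entry in odbcinst.ini,
# for example (the driver section name is illustrative):
#   [HDBODBC]
#   Threading=0

# unixODBC 2.3.1 only: ESP links against libodbc.so.1, so create the symbolic link
cd <2.3.1 installation folder>/lib
ln -s libodbc.so.2.0.0 libodbc.so.1

# Linux: make the SAP HANA client libraries visible to the adapter
# (/usr/sap/hdbclient is an assumed client location)
export LD_LIBRARY_PATH=/usr/sap/hdbclient:$LD_LIBRARY_PATH
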
Here is an example of a service entry for the SAP HANA Output adapter to connect to the SAP HANA database using an ODBC connection:
<Service Name="HANA" Type="DB">
    <Parameter Name="DriverLibrary">esp_db_odbc_lib</Parameter>
    <Parameter Name="DSN">HANA</Parameter>
    <Parameter Name="User"><UserName></Parameter>
    <Parameter Name="Password"><Password></Parameter>
</Service>
Adapter Properties

DB Service Name

Property ID: service

Type: string

(Required) The name of the service entry that represents the SAP HANA database into which information will be loaded. Service entries are specified in the Event Stream Processor services.xml file. You must specify an ODBC service for the database service. See the Administrators Guide for more information. No default value.
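
As a minimal sketch, a CCL attachment that references the HANA service entry shown above might look like the following. The adapter name, window name, and target table name are illustrative, and the adapter type identifier (hana_out here) should be checked against your ESP version:

ATTACH OUTPUT ADAPTER HanaOutput TYPE hana_out TO OutputWindow
PROPERTIES service='HANA', table='TARGET_TABLE';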

Target Database Table Name

Property ID: table

Type: tables

(Required for adapter operation; optional only if you are using schema discovery) A string value representing the name of the SAP HANA database table into which the adapter loads data. No default value.

Field Mapping

Property ID: permutation

Type: permutation

(Advanced) Mapping between the Event Stream Processor schema columns and SAP HANA table columns. If no permutation is specified, the Event Stream Processor schema columns must match the SAP HANA table schema exactly (same column names, order, and count). Table and column names are case-sensitive; to preserve the original casing, enclose names in quotation marks. For example:
create table Table1 (Col1 integer)
create table "Table1" ("Col1" integer)
create two separate tables, which can be accessed using these parameter values:
...properties table='TABLE1', permutation='ESPCol1=COL1';
...properties table='Table1', permutation='ESPCol1=Col1';

If the Data Warehouse Mode property is OFF, and you want to specify a permutation, include a mapping for at least one column of the primary key of the ESP window attached to the adapter. Without mapping at least one primary key column, the adapter fails to start.

The format for this property is: <esp_columnname>=<database_columnname>:<esp_columnname>=<database_columnname>

Use a colon to separate mappings. No default value.
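
For example, a mapping of two ESP columns to differently named database columns (all column names here are illustrative) looks like this:

...properties table='SALES', permutation='EspTradeId=TRADE_ID:EspPrice=PRICE';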

Bulk Batch Size

Property ID: bulkBatchSize

Type: integer

(Advanced) Number of rows that can be inserted in a database table partition before a commit occurs.

Bulk Insert Array Size

Property ID: bulkInsertArraySize

Type: integer

(Advanced) Number of rows simultaneously inserted into a database table partition. This option must be a divisor of the bulkBatchSize property.
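
For example, with the illustrative values below, rows are inserted into each partition in arrays of 1,000 and committed after 10,000 rows; 1,000 evenly divides 10,000, as required:

...properties bulkBatchSize=10000, bulkInsertArraySize=1000;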

Idle Buffer Write Delay in Msec

Property ID: idleBufferWriteDelayMSec

Type: integer

(Advanced) Specifies the number of milliseconds that a database table partition may sit idle with uncommitted data available for insert. Default value is 1000.

Buffer Age Limit in Msec

Property ID: bufferAgeLimitMSec

Type: integer

(Advanced) Forces the loading of any data that has been in existence longer than the time limit. Specify a value in milliseconds between 1 and 65535. Default value is 10000.

Data Warehouse Mode

Property ID: dataWarehouseMode

Type: choice

(Advanced) Specifies the type of data warehousing mode the adapter uses. Valid values are:
    • ON – updates are converted to inserts, and deletes are ignored.
    • INSERTONLY – only inserts are processed, and updates and deletes are ignored.
    • OFF – (default) all inserts, updates and deletes are processed as such.

If you specify a field mapping (permutation), map at least one column of the primary key of the ESP window attached to the adapter. Without a mapping for at least one primary key column, the adapter fails to start.
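
For example, to process only inserts and silently skip updates and deletes:

...properties dataWarehouseMode='INSERTONLY';
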
Timestamp Column Name

Property ID: timestampColumnName

Type: string

(Advanced) If a column name is provided, the time at which the record is added to the bulk array is stored in that column of the database record. The column name for this property cannot be used in the mapping between Event Stream Processor schema columns and SAP HANA table columns (permutation).

If this property is empty, no timestamp is stored. The inserted value is in the UTC time zone, and the target column must be a timestamp column.
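
For example, to record the load time in a timestamp column of the target table (LOAD_TS is an illustrative column name):

...properties timestampColumnName='LOAD_TS';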

Only Base Content

Property ID: onlyBase

Type: boolean

(Advanced) If true, the adapter processes only the base data of the window or stream to which it is attached. No further message flow is processed. Default value is false.

Include Base Content

Property ID: outputBase

Type: boolean

(Optional) Determines whether the base data of the window or stream to which this adapter is connected is output. If set to true, the adapter processes the base data of the window or stream to which it is attached. Default value is false.

PropertySet

Property ID: propertyset

Type: string

(Advanced) Specifies the name of the property set from the project configuration file. If you specify the same properties in the project configuration file and the ATTACH ADAPTER statement, the values in the property set override the values defined in the ATTACH ADAPTER statement. No default value.

Timezone For Statistics

Property ID: timezoneForStats

Type: string

(Advanced) Controls the time zone in which midnight statistics for the adapter roll over. For example, if the adapter is left at its default value of GMT, midnight statistics reset at midnight GMT.

This setting does not affect any other dates processed or output by this adapter. Default value is GMT.

Number of Threads

Property ID: threadCount

Type: integer

(Advanced) The number of threads processing incoming data to be loaded into the database. If specified and the count is not in the N < x < 5N range, where N is the number of table partitions, a warning message is displayed. If a value of 0 is set or the property is not specified, N threads are created.
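
For example, for a target table with 4 partitions, any thread count from 5 through 19 falls within the 4 < x < 20 range and avoids the warning; the value below is illustrative:

...properties threadCount=8;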

Custom Statistics

The SAP HANA Output adapter maintains statistics to show the status of the adapter and to track its loading activities. Enable the time granularity option in the project configuration (ccr) file to get these custom statistics reported by the _ESP_Adapter_Statistics metadata stream:

Rows:
  • Received in the last hour
  • Received since midnight
  • Loaded in the last hour
  • Loaded since midnight
  • Skipped* in the last hour
  • Skipped* since midnight

Bytes:
  • Received since midnight
  • Loaded since midnight
  • Skipped since midnight

Average rows:
  • Per second received over the last minute
  • Per second received over the last hour

* A row is skipped when its opcode does not match the adapter's current warehousing mode. For example, if the data warehousing mode is INSERTONLY and the adapter receives a delete or an update, the row is skipped.

You can also obtain additional latency statistics through the _ESP_Connectors metadata stream. See the Administrators Guide for more information.