Open Client supports a wide range of numeric types.
Integer types include CS_TINYINT, a 1-byte integer; CS_SMALLINT, a 2-byte integer; and CS_INT, a 4-byte integer:
typedef unsigned char CS_TINYINT;
typedef short CS_SMALLINT;
typedef long CS_INT;
CS_REAL corresponds to the Adaptive Server datatype real. It is implemented as a C-language float type:
typedef float CS_REAL;
CS_FLOAT corresponds to the Adaptive Server datatype float. It is implemented as a C-language double type:
typedef double CS_FLOAT;
CS_NUMERIC and CS_DECIMAL correspond to the Adaptive Server datatypes numeric and decimal. These types provide platform-independent support for numbers with precision and scale.
WARNING! For CS_DECIMAL and CS_NUMERIC output parameters in Client-Library and ESQL/C programs, the precision and scale must be defined before calling ct_param. This is required because output parameters have no values associated with them at definition time, and therefore have an invalid precision and scale. You must set these fields explicitly before calling ct_param, for example:
CS_NUMERIC numeric_var;
numeric_var.precision = 18;
numeric_var.scale = 0;
...
ct_param(...);
Failure to initialize the values will result in an invalid precision or scale message.
The Adaptive Server datatypes numeric and decimal are equivalent, and CS_DECIMAL is defined as CS_NUMERIC:
typedef struct _cs_numeric
{
CS_BYTE precision;
CS_BYTE scale;
CS_BYTE array[CS_MAX_NUMLEN];
} CS_NUMERIC;
typedef CS_NUMERIC CS_DECIMAL;
where:
precision is the maximum number of decimal digits represented. Legal values for precision range from 1 to 77; the default is 18. CS_MIN_PREC, CS_MAX_PREC, and CS_DEF_PREC define the minimum, maximum, and default precision values, respectively.
scale is the maximum number of digits to the right of the decimal point. Legal values for scale range from 0 to 77; the default is 0. CS_MIN_SCALE, CS_MAX_SCALE, and CS_DEF_SCALE define the minimum, maximum, and default scale values, respectively.
scale must be less than or equal to precision.
CS_DECIMAL types use the same default values for precision and scale as CS_NUMERIC types.