CS_REAL corresponds to the Adaptive Server datatype real. It is implemented as a C-language float type:
typedef float CS_REAL;
When converting bigint or ubigint values to the 6-digit precision real datatype, note the following maximum and minimum values:
-9223370000000000000.0 < bigint < 9223370000000000000.0
0 < ubigint < 18446700000000000000.0
Values outside of these ranges cause overflow errors.
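For illustration, a conversion guard along these lines keeps a 64-bit value inside the safe range before narrowing it to CS_REAL. This is a minimal sketch, not part of the Open Client API: it assumes the bigint value is held in a C long long, and the helper name is hypothetical.

#include <stdio.h>

typedef float CS_REAL;

/*
** Hypothetical guard: reject bigint values outside the safe
** range documented above before narrowing to CS_REAL.
** Returns 0 on success, -1 on overflow.
*/
int bigint_to_real(long long value, CS_REAL *out)
{
    if (value <= -9223370000000000000LL ||
        value >= 9223370000000000000LL)
    {
        return -1;    /* outside the documented safe range */
    }
    *out = (CS_REAL)value;
    return 0;
}

int main(void)
{
    CS_REAL r;
    if (bigint_to_real(123456789LL, &r) == 0)
        printf("converted: %f\n", r);
    return 0;
}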
CS_FLOAT corresponds to the Adaptive Server datatype float. It is implemented as a C-language double type:
typedef double CS_FLOAT;
When converting bigint or ubigint values to the 15-digit precision float datatype, note the following maximum and minimum values:
-9223372036854770000.0 < bigint < 9223372036854770000.0
0 < ubigint < 18446744073709500000.0
Values outside of these ranges cause overflow errors.
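The 15-digit limit can be observed directly: a C double cannot hold all 19 digits of a bigint near the boundary, so the low-order digits are rounded. The following sketch (assuming the bigint value is held in a long long) shows the effect:

#include <stdio.h>

typedef double CS_FLOAT;

int main(void)
{
    /* A bigint near the documented safe limit for CS_FLOAT. */
    long long big = 9223372036854770000LL;
    CS_FLOAT  f   = (CS_FLOAT)big;

    /*
    ** A double carries roughly 15 significant decimal digits,
    ** so the low-order digits are rounded to the nearest
    ** representable value.
    */
    printf("bigint: %lld\n", big);
    printf("float : %.0f\n", f);
    return 0;
}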
CS_NUMERIC and CS_DECIMAL correspond to the Adaptive Server datatypes numeric and decimal. These types provide platform-independent support for numbers with precision and scale.
WARNING! For CS_DECIMAL and CS_NUMERIC output parameters in Client-Library and ESQL/C programs, the precision and scale must be defined before making a call to ct_param. This is required because output parameters have no values associated with them at definition time, and therefore carry an invalid precision and scale. You must initialize both fields before calling ct_param; for example:
CS_NUMERIC numeric_var;
...
numeric_var.precision = 18;
numeric_var.scale = 2;
...
ct_param(...);
Failure to initialize these fields results in an invalid precision or scale error message.
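A fuller sketch of this pattern follows. It assumes an RPC command handle from an earlier ct_command(CS_RPC_CMD, ...) call and omits error handling; the function name and the precision and scale values are illustrative, not prescribed by Client-Library.

#include <string.h>
#include <ctpublic.h>

/*
** Illustrative only: register a CS_NUMERIC output parameter.
** "cmd" is assumed to be a valid CS_COMMAND for an RPC command.
*/
CS_RETCODE add_numeric_output(CS_COMMAND *cmd)
{
    CS_DATAFMT datafmt;
    CS_NUMERIC numeric_var;

    /*
    ** Set precision and scale in the variable itself; an output
    ** parameter has no value yet, so these fields would
    ** otherwise be left invalid.
    */
    memset(&numeric_var, 0, sizeof(numeric_var));
    numeric_var.precision = 18;
    numeric_var.scale = 2;

    memset(&datafmt, 0, sizeof(datafmt));
    datafmt.datatype  = CS_NUMERIC_TYPE;
    datafmt.maxlength = sizeof(numeric_var);
    datafmt.precision = 18;
    datafmt.scale     = 2;
    datafmt.status    = CS_RETURN;   /* output parameter */
    datafmt.namelen   = 0;

    return ct_param(cmd, &datafmt, (CS_VOID *)&numeric_var,
                    sizeof(numeric_var), 0);
}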
The Adaptive Server datatypes numeric and decimal are equivalent, and CS_DECIMAL is defined as CS_NUMERIC:
typedef struct _cs_numeric
{
CS_BYTE precision;
CS_BYTE scale;
CS_BYTE array[CS_MAX_NUMLEN];
} CS_NUMERIC;
typedef CS_NUMERIC CS_DECIMAL;
where:
precision is the maximum number of decimal digits the value can carry; internally, the value is stored as the corresponding number of base-256 digits. For example, four digits of decimal precision (0-9999) can be represented by two base-256 digits. At the current time, legal values for precision are from 1 to 77. The default precision is 18. CS_MIN_PREC, CS_MAX_PREC, and CS_DEF_PREC define the minimum, maximum, and default precision values, respectively.
array is a base-256 representation of the numeric value. The byte at index 0 holds the sign, where 0 (a byte value of 00000000) represents a positive number, and 1 (a byte value of 00000001) represents a negative number. The remaining bytes, at indices 1 through n, hold the magnitude as base-256 digits in big-endian order, with the byte at index 1 being the most significant byte.
The number of bytes of array actually used depends on the precision of the numeric value; the precision determines how many base-256 digits, and therefore how many bytes of array, are required (see the sketch at the end of this section).
scale is the maximum number of digits to the right of the decimal point. At the current time, legal values for scale are from 0 to 77. The default scale is 0. CS_MIN_SCALE, CS_MAX_SCALE, and CS_DEF_SCALE define the minimum, maximum, and default scale values, respectively.
scale must be less than or equal to precision.
CS_DECIMAL types use the same default values for precision and scale as CS_NUMERIC types.
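To make the array layout concrete, the following sketch hand-encodes the value 9999 (precision 4, scale 0) into the sign-plus-base-256 representation described above. The encoding helper is hypothetical and the struct is restated locally so the sketch is self-contained; real programs normally let a routine such as cs_convert populate CS_NUMERIC values.

#include <stdio.h>
#include <string.h>

#define CS_MAX_NUMLEN 33    /* as defined in cspublic.h */

typedef unsigned char CS_BYTE;

typedef struct _cs_numeric
{
    CS_BYTE precision;
    CS_BYTE scale;
    CS_BYTE array[CS_MAX_NUMLEN];
} CS_NUMERIC;

/*
** Hypothetical helper (not part of Open Client): encode an
** unsigned value into the sign + base-256 layout described
** above. array[0] is the sign (0 = positive); array[1] is the
** most significant base-256 digit.
*/
static void encode_numeric(CS_NUMERIC *n, unsigned long long value,
                           CS_BYTE precision, CS_BYTE scale,
                           int magnitude_bytes)
{
    int i;

    memset(n, 0, sizeof(*n));
    n->precision = precision;
    n->scale     = scale;
    n->array[0]  = 0;                  /* positive sign byte */

    /* Fill base-256 digits from least to most significant. */
    for (i = magnitude_bytes; i >= 1; i--)
    {
        n->array[i] = (CS_BYTE)(value & 0xFF);
        value >>= 8;
    }
}

int main(void)
{
    CS_NUMERIC n;

    /*
    ** 9999 (precision 4) needs two base-256 digits:
    ** 9999 = 39 * 256 + 15, so the bytes are 0x27 and 0x0F.
    */
    encode_numeric(&n, 9999ULL, 4, 0, 2);
    printf("sign=%u msb=0x%02X lsb=0x%02X\n",
           n.array[0], n.array[1], n.array[2]);
    return 0;
}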