CS_REAL corresponds to the Adaptive Server Enterprise datatype real. It is implemented as a platform-dependent C-language float type:
typedef float CS_REAL;
When converting bigint or ubigint datatypes to the 6-digit precision real datatype, note the following maximum and minimum values:
-9223370000000000000.0 < bigint < 9223370000000000000.0
0 < ubigint < 18446700000000000000.0
Values outside of these ranges cause overflow errors.
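The following fragment is a minimal sketch of how such an overflow can be detected with cs_convert(), assuming an Open Client version that provides CS_BIGINT and an allocated CS-Library context; the version constant, the sample value, and the handling of the failure are illustrative only:

#include <stdio.h>
#include <string.h>
#include <cspublic.h>

int main(void)
{
    CS_CONTEXT  *ctx = NULL;
    CS_DATAFMT  srcfmt, destfmt;
    CS_BIGINT   src = 9223372036854775807LL;   /* exceeds the real range shown above */
    CS_REAL     dest;
    CS_INT      outlen;

    (void) cs_ctx_alloc(CS_VERSION_150, &ctx);  /* version level is an assumption */

    memset(&srcfmt, 0, sizeof(srcfmt));
    srcfmt.datatype  = CS_BIGINT_TYPE;
    srcfmt.maxlength = sizeof(src);

    memset(&destfmt, 0, sizeof(destfmt));
    destfmt.datatype  = CS_REAL_TYPE;
    destfmt.maxlength = sizeof(dest);

    /* A failed conversion (for example, overflow) is reported through the return code. */
    if (cs_convert(ctx, &srcfmt, &src, &destfmt, &dest, &outlen) != CS_SUCCEED)
        printf("bigint value overflows CS_REAL\n");
    else
        printf("CS_REAL value: %f\n", (double) dest);

    (void) cs_ctx_drop(ctx);
    return 0;
}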
CS_FLOAT corresponds to the Adaptive Server Enterprise datatype float. It is implemented as a platform-dependent, C-language double type:
typedef double CS_FLOAT;
When converting bigint or ubigint datatypes to the 15-digit precision float datatype, note the following maximum and minimum values:
-9223372036854770000.0 < bigint < 9223372036854770000.0
0 < ubigint < 18446744073709500000.0
Values outside of these ranges cause overflow errors.
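The wider limits for float follow from C floating-point precision: a float keeps roughly 6-7 significant decimal digits, while a double keeps roughly 15-16. This standalone sketch (plain C, no Open Client headers; the sample value is arbitrary) shows the same 64-bit integer rounded by each type:

#include <stdio.h>

int main(void)
{
    long long big = 1234567890123456789LL;    /* arbitrary bigint-sized value */

    float  as_real  = (float) big;     /* rounded to ~6-7 significant digits   */
    double as_float = (double) big;    /* rounded to ~15-16 significant digits */

    printf("float  (CS_REAL)  : %.1f\n", as_real);
    printf("double (CS_FLOAT) : %.1f\n", as_float);
    return 0;
}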
CS_NUMERIC and CS_DECIMAL correspond to the Adaptive Server Enterprise datatypes numeric and decimal. These types provide platform-independent support for numbers with precision and scale.
The Adaptive Server Enterprise datatypes numeric and decimal are equivalent, and CS_DECIMAL is defined as CS_NUMERIC:
typedef struct _cs_numeric
{
    CS_BYTE    precision;
    CS_BYTE    scale;
    CS_BYTE    array[CS_MAX_NUMLEN];
} CS_NUMERIC;

typedef CS_NUMERIC CS_DECIMAL;
where:
precision is the precision of the numeric value. Legal values for precision are from CS_MIN_PREC to CS_MAX_PREC. The default precision is CS_DEF_PREC. CS_MIN_PREC, CS_MAX_PREC, and CS_DEF_PREC define the minimum, maximum, and default precision values, respectively.
scale is the scale of the numeric value. Legal values for scale are from CS_MIN_SCALE to CS_MAX_SCALE. The default scale is CS_DEF_SCALE. CS_MIN_SCALE, CS_MAX_SCALE, and CS_DEF_SCALE define the minimum, maximum, and default scale values, respectively.
scale must be less than or equal to precision.
CS_DECIMAL types use the same default values for precision and scale as CS_NUMERIC types.
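As a minimal sketch of populating a CS_NUMERIC, the following converts a character string with cs_convert(), setting precision and scale in the destination CS_DATAFMT; the precision, scale, and version constant chosen here are illustrative assumptions:

#include <stdio.h>
#include <string.h>
#include <cspublic.h>

int main(void)
{
    CS_CONTEXT  *ctx = NULL;
    CS_DATAFMT  srcfmt, destfmt;
    CS_CHAR     src[] = "12345.678";
    CS_NUMERIC  dest;
    CS_INT      outlen;

    (void) cs_ctx_alloc(CS_VERSION_150, &ctx);  /* version level is an assumption */

    memset(&srcfmt, 0, sizeof(srcfmt));
    srcfmt.datatype  = CS_CHAR_TYPE;
    srcfmt.maxlength = (CS_INT) strlen(src);

    memset(&destfmt, 0, sizeof(destfmt));
    destfmt.datatype  = CS_NUMERIC_TYPE;
    destfmt.maxlength = sizeof(dest);
    destfmt.precision = 8;    /* total significant digits (illustrative)  */
    destfmt.scale     = 3;    /* digits to the right of the decimal point */

    if (cs_convert(ctx, &srcfmt, src, &destfmt, &dest, &outlen) == CS_SUCCEED)
        printf("precision = %d, scale = %d\n", (int) dest.precision, (int) dest.scale);

    (void) cs_ctx_drop(ctx);
    return 0;
}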