Miscellaneous Error Messages
Code | 120000 |
Severity | Info |
Message | {0}({1})::checkpoint() -- time: {2}, wrote log checkpoint at {3} for {4} indices. |
Description | Information about the location of important data structures in the LogStore file. |
Code | 120001 |
Severity | Info |
Message | {0}({1})::checkpoint() -- time: {2}, wrote log header (tail={3}). |
Description | Information about the location of important data structures in the LogStore file. |
Code | 120002 |
Severity | Debug |
Message | LogStoreIndex()::reload() -- reloading index at offset {0} of store '{1}'. |
Description | When the Sybase Event Stream Processor is restarted with an existing LogStore file, it reads the contents of this file and uses them to populate the initial state of the streams.
Code | 120003 |
Severity | Debug |
Message | LogStoreIndex()::reload() -- recreating index '{0}'. |
Description | The index reported in the previous message is used to populate the named stream. |
Code | 120004 |
Severity | Info |
Message | {0}({1})::reload() -- recovered empty log. |
Description | An existing LogStore file has been found, but it is empty. |
Code | 120005 |
Severity | Info |
Message | {0}({1})::reload() -- reloading {2} indices from last checkpoint. |
Description | When the Sybase Event Stream Processor is restarted with an existing LogStore file, it reads the contents of this file and uses them to populate the initial state of the streams.
Code | 120006 |
Severity | Info |
Message | {0}({1})::reload() -- attempting roll-forward recovery from offset {2} |
Description | After the last checkpointed state is read from the LogStore and used to populate the streams, the source stream records that have not yet been checkpointed but are remembered in the store are played back to fully restore the state of the model.
Code | 120007 |
Severity | Info |
Message | {0}({1})::reload() -- {2}( addr={3}, size={4}) . |
Description | Stages of the data reload. |
Code | 120008 |
Severity | Info |
Message | {0}({1})::reload() -- skipping record (client={2}, addr={3}, length={4}). |
Description | The non-checkpointed records from the derived stores are not played back because they are not consistent. The playback of the source stream records regenerates the state of the derived streams in a consistent way.
Code | 120009 |
Severity | Info |
Message | {0}({1})::readHeader() -- reading log header. |
Description | Stages of the data reload. |
Code | 120010 |
Severity | Critical |
Message | {0}({1})::compact() freeSpace:{2} kB ({3}%) after full scrubbing is still under {4}% of required reserve, liveSize: {5} kB of totalSize: {6} kB, increase the store size and restart, stopping. |
Description | The LogStore needs a padding of free space to run. If the free space drops below the value of the reservePct attribute (20% by default), not enough service space is left for cleaning, and the store cannot continue running. Increase the size of the store and restart the model. Monitor the space used and resize the store preemptively to avoid such failures. The liveSize includes an estimate of the indexing overhead.
Code | 120011 |
Severity | Info |
Message | {0}({1})::compact() START, tail: {2}, cleanerTail: {3}, freeSpace:{4} kB ({5}%). |
Description | When the LogStore runs out of free space, it tries to restore it by compacting the data and removing the records that are no longer used.
Code | 120012 |
Severity | Info |
Message | {0}({1})::compact() END, time: {2}, tail: {3}, cleanerTail: {4}, freeSpace:{5} kB ({6}%), liveDataCopied: {7} kB, full liveSize: {8} kB. |
Description | Information about the results of a compaction run. The liveSize includes an estimate of the indexing overhead. See the Administration Guide for further explanation.
Code | 120013 |
Severity | Critical |
Message | {0}({1})::compact() compactification cause log tail to enter redzone. |
Description | Obsolete. |
Code | 120014 |
Severity | Critical |
Message | {0}({1})::readHeader() -- header checksum invalid. |
Description | The LogStore file has an incorrect header checksum. Either it has been corrupted or it is the wrong file. The Sybase Event Stream Processor cannot start. Try using a different file name or removing the corrupted file.
Code | 120015 |
Severity | Info |
Message | {0}({1})::checkpoint() -- logfile not dirty, skipping checkpoint. |
Description | The Sybase Event Stream Processor checkpoints all the stores at the same time. If a store has seen no changes since the last checkpoint, it does not need to be checkpointed again.
Code | 120016 |
Severity | Critical |
Message | {0}({1})::readHeader() -- incompatible version {2}{3} in the log file, current {4}{5} |
Description | An attempt was made to open a LogStore file in a newer format, created by a newer version of the Sybase Event Stream Processor. The reverse may also happen: a much older file is opened by a new version of the Sybase Event Stream Processor that no longer knows how to upgrade it. When the Sybase Event Stream Processor opens an older supported LogStore file, it upgrades the format to the new version, after which older versions of the Sybase Event Stream Processor can no longer open the file. The workaround is to dump the data with a compatible version of the Sybase Event Stream Processor and reload it from scratch.
Code | 120018 |
Severity | Debug |
Message | {0}({1})::reload() -- successful restart. |
Description | The last state of the store has been successfully reloaded. |
Code | 120019 |
Severity | Info |
Message | {0}({1})::initialize() -- initializing, fullSize: {2} mb. |
Description | Information about stages of the LogStore initialization. |
Code | 120020 |
Severity | Critical |
Message | {0}({1})::initialize() -- the log-structured store size is limited to 2GB on 32-bit Sybase Event Stream Processors. |
Description | On 32-bit machines, the sum of the LogStore sizes is limited to 2GB. In practice the limit is even smaller, because the OS limits the memory usage.
Code | 120021 |
Severity | Critical |
Message | {0}({1})::initialize() -- '{2}' is not a directory. |
Description | The "file" attribute of the LogStore must actually refer to a directory where the store files are created. A name of non-existing directory is OK, as long as its parent directory exists. But if there is already a plain file with the same name, a directory can not be created in its place. |
Code | 120022 |
Severity | Info |
Message | {0}({1})::initialize() -- creating directory '{2}'. |
Description | The "file" attribute has referred to a non-existing directory. It is automatically created. |
Code | 120023 |
Severity | Critical |
Message | {0}({1})::initialize() -- could not create directory '{2}': {3} |
Description | The "file" attribute has referred to a non-existing directory. The Sybase Event Stream Processor tried to create it and failed. Check that the parent directory exists and that permissions are correct. |
Code | 120026 |
Severity | Critical |
Message | {0}({1})::initialize() -- Missing name of file for log store; exiting. |
Description | The Log Store must have the attribute "file" set. Check the model. |
Code | 120029 |
Severity | Info |
Message | {0}({1})::~LogStore( ) -- syncing log. |
Description | Information about stages of closing the LogStore. |
Code | 120030 |
Severity | Debug |
Message | {0}({1})::~LogStore( ) -- done. |
Description | Information about stages of closing the LogStore. |
Code | 120035 |
Severity | Debug |
Message | LogStore({0})::getIndex() -- request for index '{1}'. |
Description | Information about stages of the LogStore initialization. |
Code | 120036 |
Severity | Info |
Message | {0}({1})::getIndex( {2} ) -- creating index. |
Description | A stream has been found that previously had no matching index in the LogStore file. The index is created. |
Code | 120037 |
Severity | Critical |
Message | {0}({1})::checkSignature( {2} ) -- could not find stream in the ccx file. |
Description | The Log Store file is probably being used with a mismatched project ccx file. Make sure that they match. If any LogStore stream has been changed in the model, the old LogStore file can no longer be used with this model. Start with a fresh file.
Code | 120038 |
Severity | Critical |
Message | {0}({1})::checkSignature( {2} ) -- bad signature, ccx has changed. |
Description | The Log Store file is probably being used with a mismatched project ccx file. Make sure that they match. If any LogStore stream has been changed in the model, the old LogStore file can no longer be used with this model. Start with a fresh file.
Code | 120039 |
Severity | Debug |
Message | {0}({1})::checkSignature( {2} ) -- good signature, configuration unchanged. |
Description | Information about stages of the LogStore initialization. |
Code | 120040 |
Severity | Critical |
Message | {0}({1})::() -- cannot open file '{2}': {3} |
Description | One of the LogStore files cannot be opened or created. Check the permissions in the filesystem.
Code | 120041 |
Severity | Critical |
Message | {0}({1})::reload() -- fullsize of {2}, exceeds the 32 bit limit (2047), aborting'. |
Description | On 32-bit machines, the sum of the LogStore sizes is limited to 2GB. In practice the limit is even smaller, because the OS limits the memory usage.
Code | 120042 |
Severity | Debug |
Message | {0}({1})::reload() -- Found BEGINTRANS during roll-forward recovery. |
Description | Information about stages of the LogStore initialization. |
Code | 120043 |
Severity | Debug |
Message | {0}({1})::reload() -- Found ENDTRANS during roll-forward recovery. |
Description | Information about stages of the LogStore initialization. |
Code | 120044 |
Severity | Debug |
Message | {0}({1})::reload() -- Found a NORMAL record during roll-forward recovery. |
Description | Information about stages of the LogStore initialization. |
Code | 120045 |
Severity | Debug |
Message | {0}({1})::reload() -- Found BEGINCOMPACT record during roll-forward recovery. |
Description | Information about stages of the LogStore initialization. |
Code | 120046 |
Severity | Critical |
Message | {0}({1})::reload() -- Found an unknown record type {2} during roll-forward recovery, fatal error. |
Description | An unknown record type found in the LogStore file means that the file is probably corrupted. The model cannot be started with such a file. |
Code | 120047 |
Severity | Critical |
Message | {0}({1})::dumpDebugData() -- last logstore write region [{2},{3}), overlaps cleaner protected region: [{4},{5}). The store is wedged, stopping. |
Description | The specified Log Store size was too small and the store ran out of space. Monitor the Log Store usage and increase the size in advance to prevent this fatal error. The store reserve is intended to guard against this situation, but in exceptional circumstances it may still happen. Once the log store reaches this condition, it can no longer be resized. You must start with a new clean store (and increase its size too).
Code | 120048 |
Severity | Critical |
Message | {0}({1})::getIndex( {2} ) -- failed, may have no more than {3} indexes per store. |
Description | A single LogStore may contain only a limited number of streams. Even though the limit is higher than would normally be seen, very large models may hit it. In such a case, split the LogStore in two. Also, if the store is used continuously with changing models, where old streams are deleted and new streams created, the store still remembers the discarded streams. In this case, either start with a fresh file or use the online backup to move the active streams to a new file. If the streams are deleted during a Dynamic Service Modification, no data is left behind.
Code | 120049 |
Severity | Info |
Message | {0}({1})::backup -- successful, used {2} bytes. |
Description | The backup request has succeeded. |
Code | 120050 |
Severity | Critical |
Message | {0}({1})::backup -- index '{2}' has no stream associated to it. |
Description | The backup has found data from a discarded stream in the store (for example, if the model has changed and no longer contains the stream). This data is skipped during backup. Previous versions of the Sybase Event Stream Processor refused to do the backup if such data was found.
Code | 120051 |
Severity | Critical |
Message | {0}({1})::backup -- failed to copy a record in index '{2}'. |
Description | The backup procedure failed to copy the data, which means the whole backup has failed. Possibly the process has run out of memory, as the backup temporarily needs to double its memory size. |
Code | 120052 |
Severity | Critical |
Message | {0}({1})::getIndex( {2} ) -- failed, out of UINT32_MAX ids; do a backup and restart from backed-up stores to reset the ids. |
Description | In addition to the limited number of active stream indexes in the LogStore, each stream gets assigned a unique id. These ids are never reused, even if the stream is properly disposed of using a dynamic modification. The limit on the number of ids is very large, but frequent modifications on a very large model may still exhaust it. Use the online backup to reset the ids in the backed-up data.
Code | 120053 |
Severity | Info |
Message | {0}({1})::PrepareMod() -- changing SWEEPAMOUNT to {2} bytes. |
Description | The sweep amount is specified in the attribute "sweepamount", as a percentage of the file size. It is the amount of data that gets processed during one pass of LogStore cleaning.
Code | 120054 |
Severity | Warning |
Message | {0}({1})::reload() -- during roll-forward recovery, encountered a bad record, error: {2} |
Description | The Log Store file was probably corrupted. |
Code | 120055 |
Severity | Error |
Message | {0}({1})::compact() -- nearing capacity, at {2}% free, {3}% live data; performance is being degraded. |
Description | The volume of data is nearing the size of the Log Store. Cleaning is performed more often to conserve space. Stop the server and increase the store size in the model as soon as practical, before the store overflows. This message may also appear after resizing, while the Sybase Event Stream Processor relocates the data to assimilate the new free space.
Code | 120056 |
Severity | Warning |
Message | {0}({1})::readHeader() -- upgraded log version from {2}{3} to {4}{5} |
Description | When the Sybase Event Stream Processor opens an existing LogStore of an older version, it transparently upgrades the format of the store to the newer version. The LogStore version numbers are not directly related to the Sybase Event Stream Processor version numbers. An upgraded LogStore can no longer be opened by the old version of the Sybase Event Stream Processor. To preserve compatibility with the older versions, either make a backup copy of the LogStore in advance, or dump the contents of the store into XML with the new Sybase Event Stream Processor version and load the data back into the old Sybase Event Stream Processor version.
Code | 120057 |
Severity | Warning |
Message | {0}({1})::initialize() -- specified sweepamount({2} kb) exceeds 20% of maxfilesize, reducing to 20% of maxfilesize. |
Description | The sweep amount is specified in the attribute "sweepamount", as a percentage of the file size. It is the amount of data that gets processed during one pass of LogStore cleaning.
Code | 120058 |
Severity | Warning |
Message | {0}({1})::initialize() -- specified sweepamount({2} kb) below 5% of maxfilesize, increasing to 5% of maxfilesize. |
Description | The sweep amount is specified in the attribute "sweepamount", as a percentage of the file size. It is the amount of data that gets processed during one pass of LogStore cleaning.
Code | 120059 |
Severity | Info |
Message | {0}({1})::initialize() -- using sweepAmount = {2} kb |
Description | The sweep amount is specified in the attribute "sweepamount", as a percentage of the file size. It is the amount of data that gets processed during one pass of LogStore cleaning.
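Pulling messages 120053 through 120059 together, here is a hypothetical sketch of setting the sweep amount explicitly. All names and values are invented; per messages 120057 and 120058, the effective value is clamped between 5% and 20% of maxfilesize.

    // Hypothetical tuning of the cleaning pass size.
    CREATE LOG STORE myLogStore
    PROPERTIES
        filename = 'mystore_dir',
        maxfilesize = 1024,   // invented store size
        sweepamount = 100;    // data processed per cleaning pass, clamped to 5-20% of maxfilesize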
Code | 120060 |
Severity | Notice |
Message | {0}({1})::initialize() -- setting metadata checkpointing at {2} records |
Description | The checkpoint count, specified in the attribute "ckcount", controls how often the intermediate Log Store index cache is checkpointed to the disk (not to be confused with the full store checkpoints). The value is the count of modified index nodes collected in the cache, roughly equal to the number of records modified since the last such checkpoint. Setting it to 0 is equivalent to the old Log Store logic that flushed this cache after every transaction. Higher values improve the efficiency of space use in the store and reduce the amount of cleaning. However, very large values have diminished returns and increase the memory use. The highest theoretically supported value is 2G, but values over 100000 are probably impractical.
Code | 120061 |
Severity | Notice |
Message | {0}({1})::initialize() -- setting the reserve size at {2}% |
Description | The Log Store needs a certain amount of its capacity reserved to maintain the efficiency of its operations. The attribute "reservePct" specifies the reserve capacity as a percentage of the full store size. It can be set to a value between 10 and 40, and is 20 percent by default. If, after all possible cleaning, the unused space in the log store falls below this amount, the Sybase Event Stream Processor aborts. If that happens, increase the store size and restart; the Log Store will grow automatically.
Code | 120062 |
Severity | Warning |
Message | {0}({1})::initialize() -- the reserve size specified too low at {2}%, changed to {3}% |
Description | The Log Store needs a certain amount of its capacity reserved to maintain the efficiency of its operations. The attribute "reservePct" specifies the reserve capacity as a percentage of the full store size. It can be set to a value between 10 and 40, and is 20 percent by default. If, after all possible cleaning, the unused space in the log store falls below this amount, the Sybase Event Stream Processor aborts. If that happens, increase the store size and restart; the Log Store will grow automatically.
Code | 120063 |
Severity | Warning |
Message | {0}({1})::initialize() -- the reserve size specified too high at {2}%, changed to {3}% |
Description | The Log Store needs a certain amount of its capacity reserved to maintain the efficiency of its operations. The attribute "reservePct" specifies the reserve capacity as a percentage of the full store size. It can be set to a value between 10 and 40, and is 20 percent by default. If, after all possible cleaning, the unused space in the log store falls below this amount, the Sybase Event Stream Processor aborts. If that happens, increase the store size and restart; the Log Store will grow automatically.
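A hypothetical sketch combining the two tuning attributes from entries 120060 through 120063. All names and values are invented; as described above, the reserve must stay between 10 and 40 percent (20 by default), and very large ckcount values bring diminished returns.

    // Hypothetical checkpoint-cache and reserve tuning.
    CREATE LOG STORE myLogStore
    PROPERTIES
        filename = 'mystore_dir',
        maxfilesize = 1024,  // invented store size
        ckcount = 10000,     // modified index nodes collected before the index cache is flushed
        reservepct = 25;     // reserve capacity; valid range 10-40, default 20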
Code | 120064 |
Severity | Critical |
Message | {0}({1})::initialize() -- the reserve of {2}% would be required for the reliable work of this store, higher than the maximum of {3}%; increase the size of the store. |
Description | Small stores are particularly sensitive to the size of the reserve. An automatic check ensures that the reserve is sufficient, and increases it if it is found too low. If the automatic calculation requires a reserve of over 40 percent, the model won't start. Increase the size of the store and restart.
Code | 120065 |
Severity | Warning |
Message | {0}({1})::initialize() -- the reserve of {2}% is too small for the reliable work of this store, increased to {3}% |
Description | Small stores are particularly sensitive to the size of the reserve. An automatic check ensures that the reserve is sufficient, and increases it if it is found too low.
Code | 120066 |
Severity | Critical |
Message | {0}({1})::compact() -- cleaning made no progress, the store is wedged, stopping. |
Description | The specified Log Store size was too small and the store ran out of space. Monitor the Log Store usage and increase the size in advance to prevent this fatal error. The store reserve is intended to guard against this situation, but in exceptional circumstances it may still happen. Once the log store reaches this condition, it can no longer be resized. You must start with a new clean store (and increase its size too).
Code | 120067 |
Severity | Warning |
Message | {0}({1})::compact() -- free space dipped into reserve by {2}%, scrubbing in attempt to reclaim more space. |
Description | When the free space becomes very low, the Log Store logic attempts to reclaim more free space by repeating the compaction. If it scrubs through the whole store without finding more space, the Sybase Event Stream Processor will stop. Increase the size of the store as soon as possible. This message may also appear after resizing, while the Sybase Event Stream Processor is relocating the data to assimilate the new free space. |
Code | 120068 |
Severity | Warning |
Message | {0}({1})::initialize() -- the index '{2}' has no matching stream in the ccx file, discarded. |
Description | If the model was edited to remove some streams, on the next restart the data of these streams will be automatically discarded from the Log Store. |
Code | 120069 |
Severity | Warning |
Message | {0}({1})::~LogStore( ) -- at exit liveSize: {2} kB, {3}% of fullsize. |
Description | At exit time, the log store reports its usage statistics.
Code | 121000 |
Severity | Debug |
Message | LogIndexNodeCache({0})::add( k={1}, l={2}, r={3} ) -- addr={4} |
Description | The debugging information about the internals of a LogStore index. |
Code | 121001 |
Severity | Debug |
Message | LogIndexNodeCache({0})::commit() -- cache unchanged, returning root == {1} |
Description | The debugging information about the internals of a LogStore index. |
Code | 121002 |
Severity | Debug |
Message | LogIndexNodeCache({0})::commit() -- writing {1} of {2} nodes. |
Description | The debugging information about the internals of a LogStore index. |
Code | 121003 |
Severity | Debug |
Message | LogStoreIndex({0})::LogStoreIndex() -- created and attached to store '{1}'. |
Description | The debugging information about the internals of a LogStore index. |
Code | 121004 |
Severity | Debug |
Message | LogStoreIndex({0})::LogStoreIndex() -- reload ({1} records, {2} bytes, root={3}, seq={4}). |
Description | The debugging information about the internals of a LogStore index. |
Code | 121005 |
Severity | Debug |
Message | LogStoreIndex({0})::~LogStoreIndex() -- disposing of LogStoreIndex, index storage: {1} Bytes, record storage: {2} Bytes, liveSize: {3} kB |
Description | The debugging information about the internals of a LogStore index. |
Code | 121018 |
Severity | Debug |
Message | LogStoreIndex({0})::removeRoot( {1} ) -- complex remove, rotating {2} |
Description | The debugging information about the internals of a LogStore index. |
Code | 121019 |
Severity | Debug |
Message | LogStoreIndex({0})::putDelete( ) -- failed, empty tree. |
Description | The debugging information about the internals of a LogStore index. |
Code | 121024 |
Severity | Debug |
Message | LogStoreIndex({0})::get( {1} ) -- v. {2}, {3} |
Description | The debugging information about the internals of a LogStore index. |
Code | 121025 |
Severity | Debug |
Message | LogStoreIndex({0})::get( {1} ) -- key match '{2}'. |
Description | The debugging information about the internals of a LogStore index. |
Code | 121026 |
Severity | Debug |
Message | LogStoreIndex({0})::get( {1} ) -- no matching record found. |
Description | The debugging information about the internals of a LogStore index. |
Code | 121027 |
Severity | Debug |
Message | LogStoreIndex({0})::checkpoint() -- checkpointing ({1} records, {2} bytes). |
Description | The debugging information about the internals of a LogStore index. |
Code | 121030 |
Severity | Debug |
Message | LogStoreIndex({0})::beginTransaction() |
Description | The debugging information about the internals of a LogStore index. |
Code | 121031 |
Severity | Debug |
Message | LogStoreIndex({0})::commitTransaction() |
Description | The debugging information about the internals of a LogStore index. |
Code | 121032 |
Severity | Debug |
Message | LogStoreIndex({0})::rollbackTransaction() |
Description | The debugging information about the internals of a LogStore index. |
Code | 121033 |
Severity | Debug |
Message | LogStoreAccessor() -- attached to index '{0}' with root '{1}'. |
Description | The debugging information about the internals of a LogStore index. |
Code | 121034 |
Severity | Debug |
Message | LogStoreAccessor() -- destroying accessor attached to index '{0}' |
Description | The debugging information about the internals of a LogStore index. |
Code | 121035 |
Severity | Debug |
Message | LogStoreAccessor({0})::hasNext() -- {1} |
Description | The debugging information about the internals of a LogStore index. |
Code | 121036 |
Severity | Debug |
Message | LogStoreAccessor({0})::getNext() -- no more records. |
Description | The debugging information about the internals of a LogStore index. |
Code | 121037 |
Severity | Debug |
Message | LogStoreAccessor({0})::getNext() -- addr={1}, node[k={2}, l={3}, r={4}] |
Description | The debugging information about the internals of a LogStore index. |
Code | 121038 |
Severity | Debug |
Message | LogStoreAccessor({0})::get( {1} ) -- record md5 '{2}'. |
Description | The debugging information about the internals of a LogStore index. |
Code | 121039 |
Severity | Debug |
Message | LogStoreAccessor({0})::get( {1} ) -- no matching record found. |
Description | The debugging information about the internals of a LogStore index. |
Code | 122000 |
Severity | Debug |
Message | {0}({1})::initialize() -- initializing. |
Description | The debugging information about the internals of a memory store. |
Code | 122001 |
Severity | Debug |
Message | {0}({1})::~{2}() -- disposing. |
Description | The debugging information about the internals of a memory store. |
Code | 122002 |
Severity | Info |
Message | {0}({1})::getIndex() -- setting up index '{2}'. |
Description | Information about stages of the memory store initialization. |
Code | 122004 |
Severity | Debug |
Message | MemoryStoreIndex({0})::MemoryStoreIndex() -- attached to store '{1}'. |
Description | The debugging information about the internals of a memory store. |
Code | 122005 |
Severity | Debug |
Message | MemoryStoreIndex({0})::~MemoryStoreIndex() -- disposing of MemoryStoreIndex attached to store '{1}'. |
Description | The debugging information about the internals of a memory store. |
Code | 122006 |
Severity | Debug |
Message | MemoryStoreIndex({0}): dumping. |
Description | The debugging information about the internals of a memory store. |
Code | 122007 |
Severity | Debug |
Message | MemoryStoreIndex({0})::dump -- key md5 '{1}', record '0x{2}'. |
Description | The debugging information about the internals of a memory store. |
Code | 122010 |
Severity | Debug |
Message | MemoryStoreIndex({0})::getAccessor() -- making new accessor. |
Description | The debugging information about the internals of a memory store. |
Code | 122011 |
Severity | Critical |
Message | StoreIndex()::commitTransaction() -- got unexpected operation {0} |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 122012 |
Severity | Debug |
Message | MemoryStoreList({0})::MemoryStoreList() -- attached to store '{1}'. |
Description | The debugging information about the internals of a memory store. |
Code | 122013 |
Severity | Debug |
Message | MemoryStoreList({0})::~MemoryStoreList() -- disposing of MemoryStoreList attached to store '{1}'. |
Description | The debugging information about the internals of a memory store. |
Code | 122014 |
Severity | Debug |
Message | MemoryStoreList({0}): dumping. |
Description | The debugging information about the internals of a memory store. |
Code | 122015 |
Severity | Debug |
Message | MemoryStoreList({0})::dump -- key md5 '{1}', record '0x{2}'. |
Description | The debugging information about the internals of a memory store. |
Code | 122016 |
Severity | Critical |
Message | MemoryStoreList({0})::putUpdate() -- bad call |
Description | Some self-testing assertion in the Sybase Event Stream Processor has failed, which should never happen. Contact Sybase support if this message appears. |
Code | 122017 |
Severity | Critical |
Message | MemoryStoreList({0})::putDelete() -- bad call |
Description | Some self-testing assertion in the Sybase Event Stream Processor has failed, which should never happen. Contact Sybase support if this message appears. |
Code | 122018 |
Severity | Debug |
Message | MemoryStoreList({0})::getAccessor() -- making new accessor. |
Description | The debugging information about the internals of a memory store. |
Code | 122019 |
Severity | Critical |
Message | MemoryStoreListAccessor({0})::get() -- calling get. |
Description | The debugging information about the internals of a memory store. |
Code | 123000 |
Severity | Critical |
Message | StoreIndex({0})::collapseTransaction() -- got unexpected operation {1} |
Description | This error may happen if the SPLASH code in a FlexStream sets an invalid value for the operation code. Check your SPLASH code. |
Code | 123001 |
Severity | Warning |
Message | StoreIndex({0})::put() bad insert, tid={1}. |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123002 |
Severity | Warning |
Message | StoreIndex({0})::put() bad update, tid={1} |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123003 |
Severity | Warning |
Message | StoreIndex({0})::put() bad upsert, tid={1} |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123004 |
Severity | Warning |
Message | StoreIndex({0})::put() bad delete, tid={1} |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123005 |
Severity | Critical |
Message | StoreIndex({0})::put() -- got unexpected operation {1} |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123006 |
Severity | Warning |
Message | StoreIndex({0})::put() -- roll back transaction of size {1} |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123007 |
Severity | Warning |
Message | Bad insert writing to store. |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123008 |
Severity | Warning |
Message | Bad update writing to store. |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123009 |
Severity | Warning |
Message | Bad upsert writing to store. |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123010 |
Severity | Warning |
Message | Bad delete.writing to store. |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123011 |
Severity | Warning |
Message | StoreIndex({0})::collapse() Error collapsing transaction: op={1}, oldop={2}, tid={3} |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123012 |
Severity | Warning |
Message | Bad collapse. |
Description | An invalid data sequence was encountered. Check the input data and the model for correctness. If enabled, more detail about the offending records has been written to the bad records file. To enable the bad records file, use the option -B. |
Code | 123013 |
Severity | Info |
Message | {0}({1})::initialize() indexSizeHint set to {2} |
Description | Information about the index size hint set in the model. The hint lets the stream index preallocate space and requires less resizing as the model runs. This improves the speed and latency.
Code | 124001 |
Severity | Info |
Message | {0}({1})::Memory usage: {2} bytes in aggregation index. |
Description | Statistics about memory usage in the aggregation index. Each AggregateStream keeps a copy or reference to all input data in its aggregation index. |
Code | 124002 |
Severity | Critical |
Message | {0}({1})::Bad expression '{2}' encountered: {3} |
Description | A syntax error in the aggregation expression. Check and correct the model. |
Code | 124004 |
Severity | Critical |
Message | {0}({1})::init() number of Group clauses does not match number of key columns. |
Description | The aggregation creates a new primary index on the data. The grouping guarantees the uniqueness of this index, so the number of group clauses must match the number of key columns. Check and correct the model.
Code | 124007 |
Severity | Warning |
Message | {0}({1})::init() optimizing for additive case. |
Description | An informational message indicating that the additive optimization will be used on the data. The additive optimization is possible if the aggregation expressions can be recalculated by looking strictly at the one added, updated, or deleted record, without iterating through all the records in the group.
Code | 124008 |
Severity | Warning |
Message | {0}({1})::init() CANNOT optimize, aggrgation is not additive. |
Description | An informational message indicating that the additive optimization will not be used on the data. The additive optimization is possible if the aggregation expressions can be recalculated by looking strictly at the one added, updated, or deleted record, without iterating through all the records in the group.
Code | 124010 |
Severity | Critical |
Message | AggregateStream({0}): Encountered fatal error in update during delete phase. |
Description | An illegal calculation has happened in the model. Check the model logic and received data. |
Code | 124011 |
Severity | Warning |
Message | AggregateStream({0}): Discarding UPDATE---not valid for AggregateStream. |
Description | Some self-testing assertion in the Sybase Event Stream Processor has failed, which should never happen. Contact Sybase support if this message appears. |
Code | 124012 |
Severity | Warning |
Message | AggregateStream({0}): Discarding UPSERT---not valid for AggregateStream. |
Description | Some self-testing assertion in the Sybase Event Stream Processor has failed, which should never happen. Contact Sybase support if this message appears. |
Code | 124013 |
Severity | Critical |
Message | {0}({1})::init() The rule for Group {2} has an aggregation operation (group-by clauses must not use aggregation). |
Description | Since the grouping is done before aggregation, the grouping conditions may not use the aggregation operations. Check and correct the model. |
Code | 124014 |
Severity | Critical |
Message | {0}({1})::init() The rule for GroupFilter {2} has an aggregation operation (group-by clauses must not use aggregation). |
Description | Since the grouping is done before aggregation, the grouping conditions may not use the aggregation operations. Check and correct the model. |
Code | 124015 |
Severity | Critical |
Message | {0}({1})::init() The rule for GroupOrder {2} has an aggregation operation (group-by clauses must not use aggregation). |
Description | Since the grouping is done before aggregation, the grouping conditions may not use the aggregation operations. Check and correct the model. |
Code | 124017 |
Severity | Critical |
Message | {0}({1})::error in compilation of GroupFilter {2}: {3} |
Description | A syntax error in the expression. Check and correct the model. |
Code | 124018 |
Severity | Critical |
Message | {0}({1})::error in compilation of GroupOrder {2}: {3} |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 124019 |
Severity | Warning |
Message | AggregateStream({0}): Discarding UPDATE; record no longer present (tid={1}). |
Description | This error may happen if an InputWindow is defined on this stream. For correct usage of InputWindows, the input data must be insert-only. Check and correct the model or the input data. |
Code | 124020 |
Severity | Warning |
Message | AggregateStream discarding UPDATE; record no longer present. |
Description | This error may happen if an InputWindow is defined on this stream. For correct usage of InputWindows, the input data must be insert-only. Check and correct the model or the input data. |
Code | 124021 |
Severity | Critical |
Message | {0}({1})::init() no key columns specified; need at least one key column. |
Description | Each stream must have at least one key column defined. Check and correct the model. |
Code | 125003 |
Severity | Warning |
Message | SourceStream({0}): Discarding UPDATE---not valid for SourceStream with insertOnly (tid={1}). |
Description | A stream defined as insert-only may not receive updates or deletes. Check and correct the input data.
Code | 125004 |
Severity | Warning |
Message | SourceStream({0}): Discarding UPSERT---not valid for SourceStream with insertOnly (tid={1}). |
Description | A stream defined as insert-only may not receive updates or deletes. Check and correct the input data.
Code | 125005 |
Severity | Warning |
Message | SourceStream({0}): Discarding DELETE---not valid for SourceStream with insertOnly (tid={1}). |
Description | A stream defined as insert-only may not receive updates or deletes. Check and correct the input data.
Code | 125007 |
Severity | Warning |
Message | SourceStream discarding UPDATE---not valid for SourceStream with insertOnly . |
Description | A stream defined as insert-only may not receive updates or deletes. Check and correct the input data.
Code | 125008 |
Severity | Warning |
Message | SourceStream discarding UPSERT---not valid for SourceStream with insertOnly . |
Description | A stream defined as insert-only may not receive updates or deletes. Check and correct the input data.
Code | 125009 |
Severity | Warning |
Message | SourceStream discarding DELETE---not valid for SourceStream with insertOnly . |
Description | A stream defined as insert-only may not receive updates or deletes. Check and correct the input data.
Code | 126000 |
Severity | Critical |
Message | SourceStream({0})::initInputs() error in reading file {1} |
Description | Obsolete. |
Code | 126001 |
Severity | Info |
Message | {0}({1})::run() -- starting event queue Stream. |
Description | Information about the internal stages of initialization. |
Code | 126002 |
Severity | Warning |
Message | SourceStream({0}): Discarding INSERT; record has null key (tid={1}). |
Description | The key fields may not be NULL. Check and correct the input data. |
Code | 126003 |
Severity | Warning |
Message | SourceStream({0}): Discarding UPDATE; record has null key (tid={1}). |
Description | The key fields may not be NULL. Check and correct the input data. |
Code | 126004 |
Severity | Warning |
Message | SourceStream({0}): Discarding UPSERT; record has null key (tid={1}). |
Description | The key fields may not be NULL. Check and correct the input data. |
Code | 126005 |
Severity | Warning |
Message | SourceStream({0}): Discarding DELETE; record has null key (tid={1}). |
Description | The key fields may not be NULL. Check and correct the input data. |
Code | 126006 |
Severity | Warning |
Message | SourceStream discarding INSERT; record has null key. |
Description | The key fields may not be NULL. Check and correct the input data. |
Code | 126007 |
Severity | Warning |
Message | SourceStream discarding UPDATE; record has null key. |
Description | The key fields may not be NULL. Check and correct the input data. |
Code | 126008 |
Severity | Warning |
Message | SourceStream discarding UPSERT; record has null key. |
Description | The key fields may not be NULL. Check and correct the input data. |
Code | 126009 |
Severity | Warning |
Message | SourceStream discarding DELETE; record has null key. |
Description | The key fields may not be NULL. Check and correct the input data. |
Code | 126010 |
Severity | Critical |
Message | {0}({1}): The autogen column '{2}' does not have type 'int64' in the row definition. |
Description | The auto-generated sequence numbers are always 64-bit. Check and correct the model. |
Code | 126011 |
Severity | Info |
Message | {0}({1}): The autogen column will start at {2} |
Description | Informational message. The logic checks the existing data in the stream and chooses a larger value for the next sequence number.
Code | 127001 |
Severity | Critical |
Message | {0}({1})::error in compilation of ColumnExpressions: {2} |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 127002 |
Severity | Critical |
Message | {0}({1})::ColumnExpression {2} has bad expression '{3}': {4}. |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 127003 |
Severity | Critical |
Message | {0}({1})::must have exactly one input stream |
Description | The ComputeStream processes the data from one stream. Check and correct the model. |
Code | 127004 |
Severity | Critical |
Message | {0}({1})::the number of keys in the input stream be <= the number of keys of the output |
Description | The ComputeStream cannot change the key of the data. It may add new fields to the key but no fields may be removed. Check and correct the model. |
Code | 127005 |
Severity | Critical |
Message | {0}({1})::all keys of the input must be copied into the keys of the output. |
Description | The ComputeStream cannot change the key of the data. It may add new fields to the key but no fields may be removed. Check and correct the model. |
Code | 127006 |
Severity | Critical |
Message | {0}({1}) ColumnExpression for a key column does not refer to a valid input table. |
Description | The ComputeStream cannot change the key of the data. It may add new fields to the key but no fields may be removed. Check and correct the model. |
Code | 127007 |
Severity | Critical |
Message | {0}({1}) ColumnExpression for a key column does not refer to a valid column in the input table. |
Description | The ComputeStream cannot change the key of the data. It may add new fields to the key but no fields may be removed. Check and correct the model. |
Code | 127008 |
Severity | Critical |
Message | {0}({1}) ColumnExpression for a key column refers to the same column as another key column rule. |
Description | The ComputeStream cannot change the key of the data. It may add new fields to the key but no fields may be removed. Check and correct the model. |
Code | 127009 |
Severity | Critical |
Message | {0}({1})::ColumnExpression {2} has aggregate operation, which is not valid in ComputeStream. |
Description | The aggregate operations may be used only in the AggregateStreams. Check and correct the model. |
Code | 127010 |
Severity | Warning |
Message | ComputeStream({0}): Discarding UPSERT---not valid for ComputeStream. |
Description | Some self-testing assertion in the Sybase Event Stream Processor has failed, which should never happen. Contact Sybase support if this message appears. |
Code | 127015 |
Severity | Critical |
Message | {0}({1}): {2} syntax error in the type '{3}'. |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 127017 |
Severity | Critical |
Message | {0}({1}): {2} type '{3}' can not be resolved: {4} |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 127019 |
Severity | Critical |
Message | {0}({1}): ColumnExpression {2} is not of base type. |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 128000 |
Severity | Critical |
Message | {0}({1})::FilterStream requires exactly one input stream |
Description | The FilterStream processes the data from one stream. Check and correct the model. |
Code | 128004 |
Severity | Critical |
Message | {0}({1})::error in compilation of FilterExpression: {2} |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 128005 |
Severity | Critical |
Message | {0}({1})::Bad expression '{2}' encountered: {3} |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 128006 |
Severity | Warning |
Message | FilterStream({0}): Discarding UPSERT---not valid for FilterStream. |
Description | Some self-testing assertion in the Sybase Event Stream Processor has failed, which should never happen. Contact Sybase support if this message appears. |
Code | 129017 |
Severity | Critical |
Message | {0}({1})::init() found no Join definitions |
Description | A JoinStream must have a join condition defined. Check and correct the model. |
Code | 129018 |
Severity | Warning |
Message | JoinStream({0}): Discarding UPSERT---not valid for JoinStream |
Description | Some self-testing assertion in the Sybase Event Stream Processor has failed, which should never happen. Contact Sybase support if this message appears. |
Code | 129019 |
Severity | Critical |
Message | {0}({1}): Input stream {2} to Join expression not found |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 129020 |
Severity | Critical |
Message | {0}({1}): Input field {2} in Join expression not found in stream {3} |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 129021 |
Severity | Critical |
Message | {0}({1}): Types of the fields {2}, {3} in Join expression do not match |
Description | The join expression matches together two columns from different streams. These columns must have the same type. |
Code | 129024 |
Severity | Critical |
Message | {0}({1}): Expression for a key column does not refer to a valid column in the input table |
Description | A syntax error in the expression. Check and correct the argument. |
Code | 129025 |
Severity | Critical |
Message | {0}({1}): Expression for a key refers to a non-key of an input table |
Description | The key of the resulting join must follow certain rules. It may include key columns from more than one original stream. No column may be used more than once. All the key columns from at least one original stream must be used, to guarantee uniqueness. Each individual field of the result key may not refer to more than one field of an original stream's key, also to guarantee uniqueness. Check and correct the model.
Code | 129026 |
Severity | Critical |
Message | {0}({1}): Expression for a key refers to an input key column already seen |
Description | The key of the resulting join must follow certain rules. It may include key columns from more than one original stream. No column may be used more than once. All the key columns from at least one original stream must be used, to guarantee uniqueness. Each individual field of the result key may not refer to more than one field of an original stream's key, also to guarantee uniqueness. Check and correct the model.
Code | 129027 |
Severity | Critical |
Message | {0}({1}): At least one input table must have all keys copied into key rules |
Description | The key of the resulting join must follow certain rules. It may include key columns from more than one original stream. No column may be used more than once. All the key columns from at least one original stream must be used, to guarantee uniqueness. Each individual field of the result key may not refer to more than one field of an original stream's key, also to guarantee uniqueness. Check and correct the model.
Code | 129028 |
Severity | Critical |
Message | {0}({1}): Expression for key refers to the same input table more than once |
Description | The key of the resulting join must follow certain rules. It may include key columns from more than one original stream. No column may be used more than once. All the key columns from at least one original stream must be used, to guarantee uniqueness. Each individual field of the result key may not refer to more than one field of an original stream's key, also to guarantee uniqueness. Check and correct the model.
Code | 129029 |
Severity | Critical |
Message | {0}({1}): Could not determine a join strategy; try reordering the input streams. |
Description | When joining more than two streams, determining the order of joining may become complicated. Try to change the order of streams in the join, or split one join into multiple sequential joins, with fewer streams joined in each of them. |