Creates new tables and optional integrity constraints; defines computed columns and table-, row-, index-, and partition-level compression; defines encrypted columns, decrypt defaults, and partition properties; and removes residual data from deletions.
create table [[database.[owner].]table_name (column_name datatype
		[default {constant_expression | user | null}]
		[{identity | null | not null}]
		[in row [(length)] | off row]
		[[constraint constraint_name]
			{{unique | primary key}
				[clustered | nonclustered] [asc | desc]
				[with {fillfactor = pct,
					max_rows_per_page = num_rows,
					reservepagegap = num_pages}]
				[on segment_name]
			| references [[database.]owner.]ref_table
				[(ref_column)] [match full]
			| check (search_condition)}]
		[encrypt [with [database.[owner].]key_name]
			[decrypt_default constant_expression | null]]
		[compressed = compression_level | not compressed]
	[[constraint [[database.[owner].]constraint_name]
		{{unique | primary key}
			[clustered | nonclustered]
			(column_name [asc | desc]
				[{, column_name [asc | desc]}...])
			[with {fillfactor = pct,
				max_rows_per_page = num_rows,
				reservepagegap = num_pages}]
			[on segment_name]
		| foreign key (column_name [{, column_name}...])
			references [[database.]owner.]ref_table
				[(ref_column [{, ref_column}...])]
			[match full]
		| check (search_condition)}]...
	[{, {next_column | next_constraint}}...])
	[lock {datarows | datapages | allpages}]
	[with {max_rows_per_page = num_rows,
		exp_row_size = num_bytes,
		reservepagegap = num_pages,
		identity_gap = value,
		transfer table [on | off],
		dml_logging = {full | minimal},
		deferred_allocation | immediate_allocation,
		compression = {none | page | row},
		lob_compression = {off | compression_level},
		index_compression = {NONE | PAGE},
		"erase residual data" {on | off}}]
	[on segment_name]
	[partition_clause]
	[[external table] at pathname]
	[for load]

compression_clause::=
	with compression = {none | page | row}
partition_clause::=
	partition by range (column_name[, column_name]...)
		([partition_name] values <= ({constant | MAX}
			[, {constant | MAX}]...)
			[compression_clause] [on segment_name]
		[, [partition_name] values <= ({constant | MAX}
			[, {constant | MAX}]...)
			[compression_clause] [on segment_name]]...)

	| partition by hash (column_name[, column_name]...)
		{(partition_name [compression_clause] [on segment_name]
			[, partition_name [compression_clause]
				[on segment_name]]...)
		| number_of_partitions
			[on (segment_name[, segment_name]...)]}

	| partition by list (column_name)
		([partition_name] values (constant[, constant]...)
			[compression_clause] [on segment_name]
		[, [partition_name] values (constant[, constant]...)
			[compression_clause] [on segment_name]]...)

	| partition by roundrobin
		{(partition_name [on segment_name]
			[, partition_name [compression_clause]
				[on segment_name]]...)
		| number_of_partitions
			[on (segment_name[, segment_name]...)]}
create table [[database.[owner].]table_name
	(column_name {compute | as} computed_column_expression
		[materialized [not compressed] | not materialized])
create table [database.[owner].]table_name . . .
	| {unique | primary key} using clustered
		(column_name [asc | desc]
			[{, column_name [asc | desc]}...])
		= (hash_factor [{, hash_factor}...])
		with max num_hash_values key
You cannot use a variable for the table name. The table name must be unique within the database for its owner. If you have set quoted_identifier on, you can use a delimited identifier for the table name. Otherwise, it must conform to the rules for identifiers. For more information about valid table names, see Identifiers in Reference Manual: Building Blocks.
You can create a temporary table by preceding the table name with either a pound sign (#) or “tempdb..”. See Tables Beginning with # (Temporary Tables) in Reference Manual: Building Blocks.
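As a sketch of both forms (the table and column names below are illustrative), a session-scoped temporary table uses the # prefix, while a shareable temporary table is created directly in tempdb:

```sql
-- Session-scoped temporary table; dropped automatically when the session ends.
create table #titles_scratch (
    title_id char(6) not null,
    price    money   null
)

-- Shareable temporary table in tempdb; persists until explicitly dropped
-- or until the server restarts.
create table tempdb..titles_shared (
    title_id char(6) not null,
    price    money   null
)
```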
You can create a table in a different database, as long as you are listed in the sysusers table and have create table permission for that database. For example, you can use either of the following to create a table called newtable in the database otherdb:
create table otherdb..newtable
create table otherdb.yourname.newtable
datatype (n)
datatype (p,s)
See System and User-Defined Datatypes in Reference Manual: Building Blocks for more information.
If Java is enabled in the database, datatype can be the name of a Java class, either a system class or a user-defined class, that has been installed in the database. See Java in Adaptive Server Enterprise for more information.
You can reference global variables in the default section of create table statements that do not reference database objects. You cannot, however, use global variables in the check section of create table.
An exact numeric datatype with a scale of 0, or
Any of the integer datatypes, including signed or unsigned bigint, int, smallint, or tinyint.
IDENTITY columns are not updatable and do not allow nulls.
IDENTITY columns are used to store sequential numbers—such as invoice numbers or employee numbers—that are generated automatically by the SAP ASE server. The value of the IDENTITY column uniquely identifies each row in a table.
null specifies that the SAP ASE server assigns a null value if a user does not provide a value.
not null specifies that a user must provide a non-null value if no default exists.
Columns with a bit datatype must always be not null.
If you do not specify null or not null, the SAP ASE server uses not null by default. However, you can switch this default using sp_dboption to make the default compatible with the SQL standards.
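For example, the database-wide default can be switched so that columns are nullable unless declared otherwise (the database name below is illustrative):

```sql
-- Make unqualified columns default to null instead of not null,
-- matching the SQL standard behavior.
exec sp_dboption mydb, "allow nulls by default", true
```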
When you do not specify length, the SAP ASE server uses the database-wide setting for in-row length.
The default value is off row. See Java in Adaptive Server Enterprise.
constraint_name is the name of the constraint. It must conform to the rules for identifiers and be unique in the database. If you do not specify the name for a referential or check constraint, the SAP ASE server generates a name in the form tabname_colname_objectid where:
tabname – is the first 10 characters of the table name
colname – is the first 5 characters of the column name
objectid – is the object ID number for the constraint
If you do not specify the name for a unique or primary key constraint, the SAP ASE server generates a name in the format tabname_colname_tabindid, where tabindid is a string concatenation of the table ID and index ID.
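As a sketch (table and column names are illustrative), leaving a check constraint unnamed causes the server to generate a name, which you can then inspect with sp_helpconstraint:

```sql
create table discounts (
    discounttype varchar(40) not null,
    discount     float       not null
        check (discount between 0 and 100)  -- unnamed; the server generates
                                            -- a name such as discounts_disco_objectid
)

-- List the constraints on the table, including generated names:
exec sp_helpconstraint discounts
```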
The default for fillfactor is 0; this is used when you do not include with fillfactor in the create index statement (unless the value has been changed with sp_configure). When specifying a fillfactor, use a value between 1 and 100.
A fillfactor of 0 creates clustered indexes with completely full pages and nonclustered indexes with completely full leaf pages. It leaves a comfortable amount of space within the index B-tree in both the clustered and nonclustered indexes. There is seldom a reason to change the fillfactor.
If the fillfactor is set to 100, the SAP ASE server creates both clustered and nonclustered indexes with each page 100 percent full. A fillfactor of 100 makes sense only for read-only tables—tables to which no data is ever added.
fillfactor values smaller than 100 (except 0, which is a special case) cause the SAP ASE server to create new indexes with pages that are not completely full. A fillfactor of 10 might be a reasonable choice if you are creating an index on a table that eventually holds a great deal more data, but small fillfactor values cause each index (or index and data) to take more storage space.
If CIS is enabled, you cannot use fillfactor for remote servers.
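For example, the index underlying a unique constraint can be created partially full by specifying a fillfactor (the value 75 here is illustrative):

```sql
-- Leave room for future inserts: each index page starts about 75 percent full.
create table sales_fill (
    stor_id char(4)     not null,
    ord_num varchar(20) not null,
    unique clustered (stor_id, ord_num)
        with fillfactor = 75
)
```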
decrypt_default allows the system security officer (SSO) to specify a value to be returned to users who do not have decrypt permission on the encrypted column. Decrypt default values are substituted for text, image, or unitext columns retrieved through the select statement.
If you do not specify a value for max_rows_per_page, the SAP ASE server uses a value of 0 when creating the table. Values for tables and clustered indexes are between 0 and 256. The maximum number of rows per page for nonclustered indexes depends on the size of the index key; the SAP ASE server returns an error message if the specified value is too high.
A max_rows_per_page of 0 creates clustered indexes with full data pages and nonclustered indexes with full leaf pages. It leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes.
Using low values for max_rows_per_page reduces lock contention on frequently accessed data. However, using low values also causes the SAP ASE server to create new indexes with pages that are not completely full, uses more storage space, and may cause more page splits.
If CIS is enabled, and you create a proxy table, then max_rows_per_page is ignored. Proxy tables do not contain any data. If max_rows_per_page is used to create a table, and later a proxy table is created to reference that table, then the max_rows_per_page limits apply when you insert or delete through the proxy table.
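As a sketch, limiting rows per page can reduce lock contention on a frequently updated table (the value 4 and the names below are illustrative):

```sql
-- At most 4 rows per data page for the clustered index,
-- trading extra storage space for reduced page-level lock contention.
create table hot_accounts (
    acct_id int   not null,
    balance money not null,
    unique clustered (acct_id)
        with max_rows_per_page = 4
)
```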
full – the SAP ASE server logs all transactions
minimal – the SAP ASE server does not log row or page changes
If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.
To use this constraint, you must have references permission on the referenced table. The specified columns in the referenced table must be constrained by a unique index (created by either a unique constraint or a create index statement). If no columns are specified, there must be a primary key constraint on the appropriate columns in the referenced table. Also, the datatypes of the referencing table columns must match the datatype of the referenced table columns.
Null – the referential integrity condition is true.
Non-null values – if there is a referenced row where each corresponding column is equal in the referenced table, then the referential integrity condition is true.
If neither of these conditions applies, the referential integrity condition is false. This occurs when:
All values are non-null and not equal, or
Some of the values in the referencing columns of a referencing row are non-null values, while others are null.
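A referential constraint using match full might look like the following sketch (the table and column names are illustrative, borrowing the sales table from the examples later in this section):

```sql
create table order_items (
    stor_id char(4)     null,
    ord_num varchar(20) null,
    qty     smallint    not null,
    -- With match full, a row satisfies the constraint when both referencing
    -- columns are null, or when both are non-null and match a row in sales.
    -- A mix of null and non-null referencing values fails the constraint.
    foreign key (stor_id, ord_num)
        references sales (stor_id, ord_num) match full
)
```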
Although you can reference global variables in the default section of create table statements, you cannot use them in the check section.
A list of constant expressions introduced with in
A set of conditions introduced with like, which may contain wildcard characters
Column and table check constraints can reference any columns in the table.
An expression can include arithmetic operators and functions. The search_condition cannot contain subqueries, aggregate functions, host variables, or parameters.
The table creator must have select permission on the key. If you do not supply key_name, the SAP ASE server looks for a default key in the database.
keyname identifies a key created using create encryption key. The creator of the table must have select permission on keyname. If keyname is not supplied, the SAP ASE server looks for a default key created using the as default clause on create encryption key or alter encryption key.
See Encrypting Data in the Encrypted Columns Users Guide for a list of supported datatypes.
value is the identity gap amount.
full – the SAP ASE server logs all transactions
minimal – the SAP ASE server does not log row or page changes
none – the data in this table or partition is not compressed. For partitions, none indicates that data in this partition remains uncompressed even if the table compression is altered to row or page compression.
row – compresses one or more data items in an individual row. The SAP ASE server stores data in a row compressed form only if the compressed form saves space compared to an uncompressed form. Set row compression at the partition or table level.
page – when the page fills, existing data rows that are row-compressed are then compressed using page-level compression to create page-level dictionary, index, and character-encoding entries. Set page compression at the partition or table level.
The SAP ASE server compresses data at the page level only after it has compressed data at the row level, so setting the compression-level to page implies both page and row compression.
0 – the row is not compressed.
1 through 9 – the SAP ASE server uses ZLib compression. Generally, the higher the compression number, the more the SAP ASE server compresses the LOB data, and the greater the ratio between uncompressed and compressed data (that is, the greater the space savings, in bytes, of the compressed data compared to the uncompressed data).
However, the amount of compression depends on the LOB content, and the higher the compression level, the more CPU-intensive the process. That is, level 9 provides the highest compression ratio but also the heaviest CPU usage.
100 – the SAP ASE server uses FastLZ compression. This level provides the compression ratio with the least CPU usage, and is generally used for shorter data.
101 – the SAP ASE server uses FastLZ compression. A value of 101 uses slightly more CPU than a value of 100, but achieves a better compression ratio than a value of 100.
The compression algorithm ignores rows that do not use LOB data.
NONE – indexes on the specified table are not compressed. Indexes that are specifically created with index_compression = PAGE are compressed.
PAGE – all indexes on the specified table are compressed. Indexes that are specifically created with index_compression = NONE are not compressed.
When used for partitions, specifies the segment on which to place the partition.
If partition_name is omitted, the SAP ASE server creates a name in the form table_name_partition_id. The SAP ASE server truncates partition names that exceed the allowed maximum length.
server_name (required) – is the name of the server that contains the remote object.
dbname (optional) – is the name of the database managed by the remote server that contains this object.
owner (optional) – is the name of the remote server user that owns the remote object.
object (required) – is the name of the remote table or view.
aux1.aux2 (optional) – is a string of characters that is passed to the remote server during a create table or create index command. This string is used only if the server is class db2. aux1 is the DB2 database in which to place the table, and aux2 is the DB2 tablespace in which to place the table.
create table foo ( a int , b int default @@spid )
create table titles ( title_id tid not null , title varchar (80) not null , type char (12) not null , pub_id char (4) null , price money null , advance money null , total_sales int null , notes varchar (200) null , pubdate datetime not null , contract bit not null )
create table mytable ( col1 int , col2 int , col3 char(50) ) partition by roundrobin 3 for load
Load the data into mytable, using bcp in.
Unpartition mytable.
The table is now available for any user activities.
create table "compute" ( "max" int , "min" int , "total score" int )
create table sales ( stor_id char (4) not null , ord_num varchar (20) not null , date datetime not null , unique clustered (stor_id, ord_num) )
create table salesdetail ( stor_id char (4) not null , ord_num varchar (20) not null , title_id tid not null references titles (title_id) , qty smallint default 0 not null , discount float not null, constraint salesdet_constr foreign key (stor_id, ord_num) references sales (stor_id, ord_num) )
create rule pub_idrule as @pub_id in ("1389", "0736", "0877", "1622", "1756") or @pub_id like "99[0-9][0-9]"
create table publishers ( pub_id char (4) not null check (pub_id in ("1389", "0736", "0877", "1622", "1756") or pub_id like "99[0-9][0-9]") , pub_name varchar (40) null , city varchar (20) null , state char (2) null )
create table sales_daily ( stor_id char (4) not null , ord_num numeric (10,0) identity , ord_amt money null )
create table new_titles ( title_id tid , title varchar (80) not null , type char (12) , pub_id char (4) null , price money null , advance money null , total_sales int null , notes varchar (200) null , pubdate datetime , contract bit ) lock datapages with exp_row_size = 200
create table new_publishers ( pub_id char (4) not null , pub_name varchar (40) null , city varchar (20) null , state char (2) null ) lock datarows with reservepagegap = 16
create table big_sales ( storid char(4) not null , ord_num varchar(20) not null , order_date datetime not null ) with dml_logging = minimal
create table im_not_here_yet ( col_1 int, col_2 varchar(20) ) with deferred_allocation
create table mytable ( f1 int , f2 bigint not null , f3 varchar (255) null ) lock datarows with transfer table on
create table genre ( mystery varchar(50) not null , novel varchar(50) not null , psych varchar(50) not null , history varchar(50) not null , art varchar(50) not null , science varchar(50) not null , children varchar(50) not null , cooking varchar(50) not null , gardening varchar(50) not null , poetry varchar(50) not null ) with compression = row
create table sales ( store_id int not null , order_num int not null , date datetime not null ) partition by range (date) ( Y2008 values <= ('12/31/2008') with compression = page on seg1, Y2009 values <= ('12/31/2009') on seg2, Y2010 values <= ('12/31/2010') on seg3)
create table email ( user_name char (10) , mailtxt text , photo image , reply_mails text) with lob_compression = 5
create table sales_south ( stor_id char (4) not null , ord_num varchar (20) not null , date datetime not null , unique clustered (stor_id asc, ord_num desc) )
create table t1 ( a int , b char (10) ) at "SERVER_A.db1.joe.t1"
create table employees ( name varchar (30) , home_addr Address , mailing_addr Address2Line )
create table mytable ( IdNum numeric (12,0) identity ) with identity_gap = 10
create table my_publishers ( pub_id char (4) not null , pub_name varchar (40) null , city varchar (20) null , state char (2) null ) partition by list (state) ( west values ('CA', 'OR', 'WA') on seg1 , east values ('NY', 'MA') on seg2 )
create table fictionsales ( store_id int not null , order_num int not null , date datetime not null ) partition by range (date) ( q1 values <= ("3/31/2005") on seg1 , q2 values <= ("6/30/2005") on seg2 , q3 values <= ("9/30/2005") on seg3 , q4 values <= ("12/31/2005") on seg4 )
create table currentpublishers ( pub_id char (4) not null , pub_name varchar (40) null , city varchar (20) null , state char (2) null ) partition by roundrobin 3 on (seg1)
create table mysalesdetail ( store_id char (4) not null , ord_num varchar (20) not null , title_id tid not null , qty smallint not null , discount float not null ) partition by hash (ord_num) ( p1 on seg1 , p2 on seg2 , p3 on seg3 )
create table mytitles ( title_id tid not null , title varchar (80) not null , type char (12) not null , pub_id char (4) null , price money null , advance money null , total_sales int null , notes varchar (200) null , pubdate datetime not null , sum_sales compute price * total_sales materialized )
create table employee_table ( ssn char(15) null encrypt , name char(50) , deptid int )
create table customer ( ccard char(16) unique encrypt with cc_key decrypt_default 'XXXXXXXXXXXXXXXX', name char(30) )
The ccard column has a unique constraint and uses cc_key for encryption. Because of the decrypt_default specifier, the SAP ASE server returns the value ‘XXXXXXXXXXXXXXXX’ instead of the actual data when a user without decrypt permission selects the ccard column.
create table new_titles ( title_id tid not null , title varchar (80) not null , type char (12) null , price money null , pubdate datetime not null , description text in row (300) , notes text in row , reviews text off row )
create table orders ( id int , age int , primary key using clustered (id,age) = (10,1) with max 1000 key ) on order_seg
The order_seg segment starts on page ID 51200.
The ID for the first data object allocation map (OAM) page is 51201.
The maximum rows per page is 168.
The row size is 10.
The root index page of the overflow clustered region is 51217.
create table orders ( id int default NULL , age int , primary key using clustered (id,age) = (10,1) with max 100 key , name varchar(30) ) on order_seg
The order_seg segment starts on page ID 51200.
The ID for the first data OAM page is 51201.
The maximum rows per page is 42.
The row size is 45.
The root index page of the overflow clustered region is 51217.
Scenario 1
The option to erase residual data is turned on for table t1 because it is set at the database level, so that both the drop table and truncate table commands for t1 result in the cleanup of all residual data from its pages.
create database db1
go
sp_dboption db1, "erase residual data", true
go
use db1
go
create table t1 (col int)
go
insert t1 values ...
go
create table t2 (col1 int, col2 char(10)) with "erase residual data" off
go
truncate table t1
go
drop table t1
go
truncate table t2
go
drop table t2
go
Scenario 2
create database db1
go
use db1
go
create table t1 (col int)
go
sp_dboption db1, "erase residual data", true
go
create table t2 (col1 int, col2 char(10))
go
create table t3 (col1 int, col2 char(10)) with "erase residual data" off
go
truncate table t1
go
truncate table t2
go
truncate table t3
go
Scenario 3
create database db1
go
use db1
go
create table t1 (col int)
go
create table t2 (col1 int, col2 char(10))
go
create table t3 (col1 int, col2 char(10)) with "erase residual data" off
go
set erase_residual_data on
go
truncate table t1
go
truncate table t2
go
truncate table t3
go
create table order_line ( ol_o_id int, ol_d_id tinyint, ol_w_id smallint, ol_number tinyint, ol_i_id int, ol_supply_w_id smallint, ol_delivery_d datetime, ol_quantity smallint, ol_amount float, ol_dist_info char(24) ) lock datapages with index_compression = page
By default, indexes created on this table are compressed. However, if an index has an index row length that is too short to benefit from compression, a warning is raised, indicating that the index will not be compressed.
create table creates a table and optional integrity constraints. The table is created in the currently open database unless you specify a different database in the create table statement. You can create a table or index in another database, if you are listed in the sysusers table and have create table permission in the database.
Space is allocated to tables and indexes in increments of one extent, or eight pages, at a time. Each time an extent is filled, another extent is allocated. To see the amount of space allocated and used by a table, use sp_spaceused.
The maximum length for in-row Java columns is determined by the maximum size of a variable-length column for the table’s schema, locking style, and page size.
create table performs error checking for check constraints before it creates the table.
When using create table from CIS with a column defined as char (n) NULL, CIS creates the column as varchar (n) on the remote server.
Use the asc and desc keywords after index column names to specify the sort order for the index. Creating indexes so that columns are in the same order specified in the order by clause of queries eliminates the sorting step during query processing.
If an application inserts short rows into a data-only-locked table and updates them later so that their length increases, use exp_row_size to reduce the number of times that rows in data-only-locked tables are forwarded to new locations.
The location information provided by the at keyword is the same information that is provided by sp_addobjectdef. The information is stored in the sysattributes table.
When you set this option on a table, the operations for the table (drop table, delete row, alter table, drop index) that result in residual data automatically clean up deallocated space.
See also sp_addmessage, sp_addsegment, sp_addtype, sp_bindmsg, sp_chgattribute, sp_commonkey, sp_depends, sp_foreignkey, sp_help, sp_helpjoins, sp_helpsegment, sp_primarykey, sp_rename, sp_spaceused in Reference Manual: Procedures.
ANSI SQL – Compliance level: Entry-level compliant. The following are Transact-SQL extensions:
Use of a database name to qualify a table or column name
IDENTITY columns
The not null column default
The asc and desc options
The reservepagegap option
The lock clause
The on segment_name clause
See System and User-Defined Datatypes in Reference Manual: Building Blocks for datatype compliance information.
Any user can create temporary tables and new tables with logging disabled.
The following describes permission checks for create table that differ based on your granular permissions settings.
Setting | Description |
---|---|
Enabled | With granular permissions enabled, you must have the create table privilege to create a table. To create a table for another user, you must have the create any table privilege. |
Disabled | With granular permissions disabled, you must be the database owner, a user with sa_role, or a user with the create table privilege to create a table. To create a table for another user, you must have sa_role. |
Values in event and extrainfo columns of sysaudits are:
Information | Values |
---|---|
Event | 10 |
Audit option | create |
Command or access audited | create table |
Information in extrainfo | |