create table

Description

Creates new tables and optional integrity constraints. Additional syntax blocks below define partitions, computed columns, and virtually hashed tables.

Syntax

create table [database.[owner].]table_name (column_name datatype
	[default {constant_expression  | user | null}]
	[{identity | null | not null}]
		[ in row [(length)] | off row ]
		[[constraint constraint_name]
		{{unique | primary key}
		[clustered | nonclustered] [asc | desc]
		[with {fillfactor = pct,
				max_rows_per_page = num_rows,
				reservepagegap = num_pages,
				dml_logging = {full | minimal},
				[deferred_allocation | immediate_allocation]}]
		[on segment_name]
		| references [[database.]owner.]ref_table
			[(ref_column)]
			[match full]
			| check (search_condition)}]}
	[encrypt [with [database.[owner].]key_name]
		[decrypt_default constant_expression | null]]
	[not compressed]
	[compressed = compression_level | not compressed]
	[[constraint [database.[owner].]key_name]
		{unique | primary key} 
			[clustered | nonclustered]
			 (column_name [asc | desc]
				[{, column_name [asc | desc]}...])
			[with {fillfactor = pct,
				max_rows_per_page = num_rows,
				reservepagegap = num_pages}]
			[on segment_name]
		| foreign key (column_name [{,column_name}...])
			references [[database.]owner.]ref_table
				[(ref_column [{, ref_column}...])]
				[match full]
		| check (search_condition) ...}
	[{, {next_column | next_constraint}}...])
	[lock {datarows | datapages | allpages}]
	[with {max_rows_per_page = num_rows,
			exp_row_size = num_bytes,
			reservepagegap = num_pages,
			identity_gap = value,
			transfer table [on | off],
			dml_logging = {full | minimal},
			compression = {none | page | row},
			lob_compression = off | compression_level}]
	[on segment_name]
	[partition_clause]
	[[external table] at pathname]
	[for load]
	compression_clause::=
		with compression = {none | page | row}

Use this syntax for partitions:

partition_clause::=	partition by range (column_name[, column_name]...) 
		 ([partition_name] values <= ({constant | MAX} 
			[, {constant | MAX}] ...) 
				[compression_clause] [on segment_name] 
			[, [partition_name] values <= ({constant | MAX} 
				[, {constant | MAX}] ...)
				[compression_clause] [on segment_name]]...)

	| partition by hash (column_name[, column_name]...)
		{ (partition_name 
				[compression_clause] [on segment_name] 
			[, partition_name 
				[compression_clause] [on segment_name]]...) 
		| number_of_partitions 
			[on (segment_name[, segment_name] ...)]} 

	| partition by list (column_name) 
		 ([partition_name] values (constant[, constant] ...) 
				[compression_clause] [on segment_name] 
			[, [partition_name] values (constant[, constant] ...) 
				[compression_clause] [on segment_name]] ...)

	| partition by roundrobin 
		{ (partition_name [on segment_name] 
			[, partition_name 
				[compression_clause] [on segment_name]]...) 
		| number_of_partitions 
			[on (segment_name[, segment_name]...)]}

Use this syntax for computed columns

create table [database.[owner].]table_name
	 (column_name {compute | as}
		computed_column_expression
		[materialized [not compressed] | not materialized])

Use this syntax to create a virtually hashed table

create table [database.[owner].]table_name 
. . .
	| {unique | primary key}
	using clustered
	(column_name [asc | desc] [{, column_name [asc | desc]}...])=
	(hash_factor [{, hash_factor}...])
		with max num_hash_values key

Parameters

table_name

is the explicit name of the new table. Specify the database name if the table is in another database, and specify the owner’s name if more than one table of that name exists in the database. The default value for owner is the current user, and the default value for database is the current database.

You cannot use a variable for the table name. The table name must be unique within the database and to the owner. If you have set quoted_identifier on, you can use a delimited identifier for the table name. Otherwise, it must conform to the rules for identifiers. For more information about valid table names, see “Identifiers” on page 359 in Chapter 4, “Expressions, Identifiers, and Wildcard Characters,” of Reference Manual: Building Blocks.

You can create a temporary table by preceding the table name with either a pound sign (#) or “tempdb..”. See “Tables beginning with # (temporary tables)” on page 362 in Chapter 4, “Expressions, Identifiers, and Wildcard Characters,” of Reference Manual: Building Blocks.
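
For illustration, either of the following forms creates a temporary table (the table and column names here are placeholders, not objects used elsewhere in this entry):

create table #titles_tmp (title_id varchar (6) not null)

create table tempdb..titles_tmp (title_id varchar (6) not null)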

You can create a table in a different database, as long as you are listed in the sysusers table and have create table permission for that database. For example, you can use either of the following to create a table called newtable in the database otherdb:

create table otherdb..newtable
create table otherdb.yourname.newtable

column_name

is the name of the column in the table. It must be unique in the table. If you have set quoted_identifier on, you can use a delimited identifier for the column. Otherwise, it must conform to the rules for identifiers. For more information about valid column names, see Chapter 4, “Expressions, Identifiers, and Wildcard Characters,” of Reference Manual: Building Blocks.

datatype

is the datatype of the column. System or user-defined datatypes are acceptable. Certain datatypes expect a length, n, in parentheses:

 datatype (n)

Others expect a precision, p, and scale, s:

 datatype (p,s)

See Chapter 1, “System and User-Defined Datatypes” in Reference Manual: Building Blocks for more information.
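
As an illustrative sketch (table and column names are placeholders), a length is supplied for character types and a precision and scale for numeric types:

create table price_list (
      item_id    char (6)           not null
    , label      varchar (40)       not null
    , price      numeric (10, 2)    null
)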

If Java is enabled in the database, datatype can be the name of a Java class, either a system class or a user-defined class, that has been installed in the database. See Java in Adaptive Server Enterprise for more information.

default

specifies a default value for a column. If you specify a default and the user does not provide a value for the column when inserting data, Adaptive Server inserts the default value. The default can be a constant expression, user (to insert the name of the user performing the insert), or null (to insert the null value). Adaptive Server generates a name for the default in the form tabname_colname_objid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objid is the object ID number for the default. Defaults declared for columns with the IDENTITY property have no effect on column values.

You can reference global variables in the default section of create table statements that do not reference database objects. You cannot, however, use global variables in the check section of create table.

constant_expression

is a constant expression to use as a default value for the column. It cannot include global variables, the name of any columns, or other database objects, but can include built-in functions that do not reference database objects. This default value must be compatible with the datatype of the column, or Adaptive Server generates a datatype conversion error when attempting to insert the default.

user | null

specifies that Adaptive Server should insert the user name or the null value as the default if the user does not supply a value. For user, the datatype of the column must be either char (30) or varchar (30). For null, the column must allow null values.
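
A minimal sketch, using placeholder names, that combines both kinds of default (note that the column taking default user is varchar (30), and the column taking default null allows nulls):

create table audit_notes (
      entered_by    varchar (30)     default user    not null
    , note          varchar (100)    default null    null
)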

identity

indicates that the column has the IDENTITY property. Each table in a database can have one IDENTITY column with a datatype of:

  • Exact numeric with a scale of 0, or

  • Any of the integer datatypes, including signed or unsigned bigint, int, smallint, or tinyint.

IDENTITY columns are not updatable and do not allow nulls.

IDENTITY columns are used to store sequential numbers—such as invoice numbers or employee numbers—that are generated automatically by Adaptive Server. The value of the IDENTITY column uniquely identifies each row in a table.

null | not null

specifies Adaptive Server behavior during data insertion if no default exists.

null specifies that Adaptive Server assigns a null value if a user does not provide a value.

not null specifies that a user must provide a non-null value if no default exists.

Columns with a bit datatype must always be not null.

If you do not specify null or not null, Adaptive Server uses not null by default. However, you can switch this default using sp_dboption to make the default compatible with the SQL standards.
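
For example, assuming the allow nulls by default database option, a database owner might switch the default for a single database with a call like this (the database name is illustrative):

sp_dboption pubs2, "allow nulls by default", true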

in row

instructs Adaptive Server to store data in the LOB column as in-row whenever there is enough space in the data page. The LOB column’s data is stored either fully as in-row or fully off-row.

length

(optional) specifies the maximum size at which LOB column data can be stored as in-row. Anything larger than this value is stored off-row, while anything equal to or less than length is stored as in-row, as long as there is enough space on the page.

When you do not specify length, Adaptive Server uses the database-wide setting for in-row length.

off row

(optional) provides default behavior for storing LOB columns off-row. Adaptive Server assumes this behavior for your new table unless you specify in row. If you do not specify the off row clause and you set the database-wide in-row length, create table creates the LOB column as an in-row LOB column.

off row | in row

specifies whether a Java-SQL column is stored separate from the row (off row) or in storage allocated directly in the row (in row).

The default value is off row. See Java in Adaptive Server Enterprise.

size_in_bytes

specifies the maximum size of the in-row column. An object stored in-row can occupy up to approximately 16K bytes, depending on the page size of the database server and other variables. The default value is 255 bytes.

constraint constraint_name

introduces the name of an integrity constraint.

constraint_name is the name of the constraint. It must conform to the rules for identifiers and be unique in the database. If you do not specify the name for a referential or check constraint, Adaptive Server generates a name in the form tabname_colname_objectid where:

  • tabname – is the first 10 characters of the table name

  • colname – is the first 5 characters of the column name

  • objectid – is the object ID number for the constraint

If you do not specify the name for a unique or primary key constraint, Adaptive Server generates a name in the format tabname_colname_tabindid, where tabindid is a string concatenation of the table ID and index ID.

unique

constrains the values in the indicated column or columns so that no two rows have the same value. This constraint creates a unique index that can be dropped only if the constraint is dropped using alter table.

primary key

constrains the values in the indicated column or columns so that no two rows have the same value, and so that the value cannot be NULL. This constraint creates a unique index that can be dropped only if the constraint is dropped using alter table.

clustered | nonclustered

specifies that the index created by a unique or primary key constraint is a clustered or nonclustered index. clustered is the default for primary key constraints; nonclustered is the default for unique constraints. There can be only one clustered index per table. See create index for more information.

asc | desc

specifies whether the index created for a constraint is to be created in ascending or descending order for each column. The default is ascending order.

fillfactor

specifies how full Adaptive Server makes each page when it creates a new index on existing data. The fillfactor percentage is relevant only when the index is created. As the data changes, the pages are not maintained at any particular level of fullness.

The default for fillfactor is 0; this is used when you do not include with fillfactor in the create index statement (unless the value has been changed with sp_configure). When specifying a fillfactor, use a value between 1 and 100.

A fillfactor of 0 creates clustered indexes with completely full pages and nonclustered indexes with completely full leaf pages. It leaves a comfortable amount of space within the index B-tree in both the clustered and nonclustered indexes. There is seldom a reason to change the fillfactor.

If the fillfactor is set to 100, Adaptive Server creates both clustered and nonclustered indexes with each page 100 percent full. A fillfactor of 100 makes sense only for read-only tables—tables to which no data is ever added.

fillfactor values smaller than 100 (except 0, which is a special case) cause Adaptive Server to create new indexes with pages that are not completely full. A fillfactor of 10 might be a reasonable choice if you are creating an index on a table that will eventually hold a great deal more data, but small fillfactor values cause each index (or index and data) to take more storage space.

If CIS is enabled, you cannot use fillfactor for remote servers.

WARNING!  Creating a clustered index with a fillfactor affects the amount of storage space your data occupies, since Adaptive Server redistributes the data as it creates the clustered index.
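
As an illustrative sketch (the table name and fillfactor value are arbitrary), a fillfactor can be supplied in the with clause of a unique or primary key constraint:

create table sales_archive (
      stor_id    char (4)        not null
    , ord_num    varchar (20)    not null
    , unique nonclustered (stor_id, ord_num)
          with fillfactor = 80
)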

decrypt_default

allows the SSO to specify a value to be returned to users who do not have decrypt permission on the encrypted column. Decrypt default values are substituted for text, image, or unitext columns retrieved through a select statement.

max_rows_per_page

limits the number of rows on data pages and the leaf-level pages of indexes. Unlike fillfactor, the max_rows_per_page value is maintained when data is inserted or deleted.

If you do not specify a value for max_rows_per_page, Adaptive Server uses a value of 0 when creating the table. Values for tables and clustered indexes are between 0 and 256. The maximum number of rows per page for nonclustered indexes depends on the size of the index key; Adaptive Server returns an error message if the specified value is too high.

A max_rows_per_page of 0 creates clustered indexes with full data pages and nonclustered indexes with full leaf pages. It leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes.

Using low values for max_rows_per_page reduces lock contention on frequently accessed data. However, using low values also causes Adaptive Server to create new indexes with pages that are not completely full, uses more storage space, and may cause more page splits.

If CIS is enabled, and you create a proxy table, then max_rows_per_page is ignored. Proxy tables do not contain any data. If max_rows_per_page is used to create a table, and later a proxy table is created to reference that table, then the max_rows_per_page limits apply when you insert or delete through the proxy table.
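
A minimal sketch (the table name and value are arbitrary) that limits each data page to four rows:

create table accounts (
      acct_id    int      not null
    , balance    money    not null
)
with max_rows_per_page = 4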

reservepagegap = num_pages

specifies the ratio of filled pages to empty pages that are to be left during extent I/O allocation operations. For each specified num_pages, an empty page is left for future expansion of the table. Valid values are 0 – 255. The default value is 0.

dml_logging = {full | minimal}

determines the amount of logging for insert, update, and delete operations, and for some forms of bulk inserts. One of:

  • full – Adaptive Server logs all transactions

  • minimal – Adaptive Server does not log row or page changes

deferred_allocation

defers a table or index’s creation until the table is needed. Deferred tables are created at the first insert.

immediate_allocation

explicitly creates a table when you have enabled sp_dboption 'deferred table allocation'.
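
For example, a sketch (table and column names are placeholders) that forces immediate creation while deferred table allocation is enabled in the database:

create table im_here_now (
      col_1    int
    , col_2    varchar (20)
)
with immediate_allocation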

on segment_name

when used with the constraint option, specifies that the index is to be created on the named segment. Before the on segment_name option can be used, the device must be initialized with disk init, and the segment must be added to the database with sp_addsegment. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.

If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.

references

specifies a column list for a referential integrity constraint. You can specify only one column value for a column constraint. When you include this constraint on a table that references another table, any data inserted into the referencing columns must already exist in the referenced table.

To use this constraint, you must have references permission on the referenced table. The specified columns in the referenced table must be constrained by a unique index (created by either a unique constraint or a create index statement). If no columns are specified, there must be a primary key constraint on the appropriate columns in the referenced table. Also, the datatypes of the referencing table columns must match the datatype of the referenced table columns.

ref_table

is the name of the table that contains the referenced columns. You can reference tables in another database. Constraints can reference as many as 192 user tables and internally generated worktables.

ref_column

is the name of the column or columns in the referenced table.

match full

specifies that if all values in the referencing columns of a referencing row are:

  • Null – the referential integrity condition is true.

  • Non-null values – if there is a referenced row where each corresponding column is equal in the referenced table, then the referential integrity condition is true.

Otherwise, the referential integrity condition is false when:

  • All values are non-null and not equal, or

  • Some of the values in the referencing columns of a referencing row are non-null values, while others are null.
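
As an illustrative sketch (assuming the pubs2 sales table shown in the examples below), match full on a multicolumn foreign key whose columns allow nulls:

create table pending_orders (
      stor_id    char (4)        null
    , ord_num    varchar (20)    null
    , foreign key (stor_id, ord_num)
          references sales (stor_id, ord_num)
          match full
)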

check (search_condition)

specifies the check constraint on the column values and a search_condition constraint that Adaptive Server enforces for all the rows in the table. You can specify check constraints as table or column constraints; create table allows multiple check constraints in a column definition.

Although you can reference global variables in the default section of create table statements, you cannot use them in the check section.

The constraints can include:

  • A list of constant expressions introduced with in

  • A set of conditions introduced with like, which may contain wildcard characters

Column and table check constraints can reference any columns in the table.

An expression can include arithmetic operators and functions. The search_condition cannot contain subqueries, aggregate functions, host variables, or parameters.

encrypt [with key_name]

creates an encrypted column. Specify the database name if the key is in another database. Specify the owner’s name if key_name is not unique to the database. The default value for owner is the current user, and the default value for database is the current database.

The table creator must have select permission on the key. If you do not supply key_name, Adaptive Server looks for a default key in the database.

key_name identifies a key created using create encryption key. The creator of the table must have select permission on key_name. If key_name is not supplied, Adaptive Server looks for a default key created using the as default clause on create encryption key or alter encryption key.

See “Encrypting Data,” in the Encrypted Columns Users Guide for a list of supported datatypes.

decrypt_default constant_expression

specifies that this column returns a default value for users who do not have decrypt permissions, and constant_expression is the constant value Adaptive Server returns on select statements instead of the decrypted value. The value can be NULL on nullable columns only. If the decrypt default value cannot be converted to the column’s datatype, Adaptive Server catches the conversion error only when it executes the query.

compression = compression_level | not compressed

indicates if the data in the row is compressed and to what level.

foreign key

specifies that the listed columns are foreign keys in this table whose target keys are the columns listed in the following references clause. The foreign-key syntax is permitted only for table-level constraints, not for column-level constraints.

next_column | next_constraint

indicates that you can include additional column definitions or table constraints (separated by commas) using the same syntax described for a column definition or table constraint definition.

lock datarows | datapages | allpages

specifies the locking scheme to be used for the table. The default is the server-wide setting for the configuration parameter lock scheme.

exp_row_size = num_bytes

specifies the expected row size; applies only to datarows and datapages locking schemes, and only to tables with variable-length rows. Valid values are 0, 1, and any value between the minimum and maximum row length for the table. The default value is 0, which means a server-wide setting is applied.

identity_gap value

specifies the identity gap for the table. This value overrides the system identity gap setting for this table only.

value is the identity gap amount. For more information about setting the identity gap, see “IDENTITY columns”.

transfer table [on | off]

marks the table for incremental transfer. The default value of this parameter is off.

dml_logging

determines the amount of logging for insert, update and delete operations, and for some forms of bulk inserts. One of

  • full – Adaptive Server logs all transactions

  • minimal – Adaptive Sever does not log row or page changes

compression

indicates the level of compression at the table or the partition level. Specifying the compression level for the partition overrides the compression level for the table. Adaptive Server compresses individual columns only in partitions that are configured for compression.

  • none – the data in this table or partition is not compressed. For partitions, none indicates that data in this partition remains uncompressed even if the table compression is altered to row or page compression.

  • row – compresses one or more data items in an individual row. Adaptive Server stores data in a row compressed form only if the compressed form saves space compared to an uncompressed form. Set row compression at the partition or table level.

  • page – when the page fills, existing data rows that are row-compressed are then compressed using page-level compression to create page-level dictionary, index, and character-encoding entries. Set page compression at the partition or table level.

    Adaptive Server compresses data at the page level only after it has compressed data at the row level, so setting the compression-level to page implies both page and row compression.

lob_compression = off | compression_level

determines the LOB compression level for the table. The table has no LOB compression if you specify off.

compression_level

is the table's LOB compression level. The compression levels are:

  • 0 – the row is not compressed.

  • 1 through 9 – Adaptive Server uses ZLib compression. Generally, the higher the compression number, the more Adaptive Server compresses the LOB data, and the greater the ratio between compressed and uncompressed data (that is, the greater the space savings, in bytes, for the compressed data versus the size of the uncompressed data).

    However, the amount of compression depends on the LOB content, and the higher the compression level, the more CPU-intensive the process. That is, level 9 provides the highest compression ratio but also the heaviest CPU usage.

  • 100 – Adaptive Server uses FastLZ compression. This level has the lowest CPU usage and is generally used for shorter data.

  • 101 – Adaptive Server uses FastLZ compression. A value of 101 uses slightly more CPU than a value of 100, but achieves a better compression ratio.

The compression algorithm ignores rows that do not use LOB data.

on segment_name

specifies the name of the segment on which to place the table. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.

When used for partitions, specifies the segment on which to place the partition.

external table

specifies that the object is a remote table or view. external table is the default, so specifying this is optional.

for load

creates a table available only to bcp in and alter table unpartition operations. You can use row_count() on a table you create using for load.

partition by range

specifies records are to be partitioned according to specified ranges of values in the partitioning column or columns.

column_name

when used in the partition_clause, specifies a partition key column.

partition_name

specifies the name of a new partition on which table records are stored. Partition names must be unique within the set of partitions on a table or index. Partition names can be delimited identifiers if set quoted_identifier is on. Otherwise, they must be valid identifiers.

If partition_name is omitted, Adaptive Server creates a name in the form table_name_partition_id. Adaptive Server truncates partition names that exceed the allowed maximum length.

on segment_name

when used in the partition_clause, specifies the segment on which the partition is to be placed. Before the on segment_name option can be used, the device must be initialized with disk init, and the segment must be added to the database using the sp_addsegment system procedure. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.

values <= constant | MAX

specifies the inclusive upper bound of values for a named partition. Specifying a constant value for the highest partition bound imposes an implicit integrity constraint on the table. The keyword MAX specifies the maximum value in a given datatype.
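
For example, a sketch (segment names seg1 and seg2 are assumed to exist) in which the last range partition uses MAX as its upper bound:

create table sales_history (
      order_num    int         not null
    , date         datetime    not null
)
partition by range (date) (
      older values <= ('12/31/2009') on seg1
    , newer values <= (MAX) on seg2
)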

partition by hash

specifies records are to be partitioned by a system-supplied hash function. The function computes the hash value of the partition keys that specify the partition to which records are assigned.

partition by list

specifies records are to be partitioned according to literal values specified in the named column. Only one column can partition a list-partitioned table. You can specify up to 250 distinct list values for each partition.

partition by round-robin

specifies records are to be partitioned in a sequential manner. A round-robin-partitioned table has no partitioning key. Neither the user nor the optimizer knows the partition of a particular record.

at pathname

specifies the location of the remote object. Using the at pathname clause results in the creation of a proxy table.

pathname takes the form server_name.dbname.owner.object;aux1.aux2, where server_name is the name of the server that contains the remote object, dbname and owner optionally identify the remote object's database and owner, object is the name of the remote table or view, and aux1.aux2 is an optional string that is passed to the remote server.

{compute | as}

reserved keywords that you can use interchangeably to indicate that a column is a computed column.

computed_column_expression

is any valid T-SQL expression that does not contain columns from other tables, local variables, aggregate functions, or subqueries. It can be one or a combination of column name, constant, function, global variable, or case expression, connected by one or more operators. You cannot cross-reference between computed columns except when virtual computed columns reference materialized computed columns.

materialized | not materialized

specifies whether or not the computed column is materialized and physically stored in the table. If neither keyword is specified, a computed column by default is not materialized, and thus not physically stored in the table.

using clustered

indicates you are creating a virtually hashed table. The list of columns is treated as the key columns for this table.

column_name [asc | desc]

because rows are placed according to the hash function, you cannot use asc or desc for the hash region. If you provide an order for the key columns of a virtually hashed table, it is used only in the overflow clustered region.

hash_factor

is required for the hash function of a virtually hashed table; a hash factor is required for every key column. These factors are used with the key values to generate the hash value for a particular row.

with max num_hash_values key

the maximum number of hash values that you can use. Defines the upper bound on the output of this hash function.

Examples

Example 1

Creates the foo table, using the @@spid global variable as the default for column b:

create table foo (
      a      int
    , b      int      default @@spid
)

Example 2

Creates the titles table:

create table titles (
      title_id       tid             not null
    , title          varchar (80)    not null
    , type           char (12)       not null
    , pub_id         char (4)        null
    , price          money           null
    , advance        money           null
    , total_sales    int             null
    , notes          varchar (200)   null
    , pubdate        datetime        not null
    , contract       bit             not null
)

Example 3

Creates a table mytable using for load:

create table mytable (
      col1    int
    , col2    int
    , col3    char (50)
)
partition by roundrobin 3 for load

The new table is unavailable for any user activity until it is unpartitioned.

  1. Load the data into mytable, using bcp in.

  2. Unpartition mytable.

The table is now available for user activity.

Example 4

Creates the compute table. The table name and the column names, max and min, are enclosed in double quotes because they are reserved words. The total score column name is enclosed in double quotes because it contains an embedded blank. Before creating this table, you must set quoted_identifier on.

create table "compute" (
      "max"            int
    , "min"            int
    , "total score"    int
)

Example 5

Creates the sales table and a clustered index in one step with a unique constraint. (In the pubs2 database installation script, there are separate create table and create index statements):

create table sales (
      stor_id          char (4)      not null
    , ord_num          varchar (20)  not null
    , date             datetime      not null
    , unique clustered (stor_id, ord_num)
)

Example 6

Creates the salesdetail table with two referential integrity constraints and one default value. There is a table-level, referential integrity constraint named salesdet_constr and a column-level, referential integrity constraint on the title_id column without a specified name. Both constraints specify columns that have unique indexes in the referenced tables (titles and sales). The default clause with the qty column specifies 0 as its default value.

create table salesdetail (
      stor_id   char (4)                  not null
    , ord_num   varchar (20)              not null
    , title_id  tid                       not null
           references titles (title_id)
    , qty       smallint default 0        not null
    , discount  float                     not null,

constraint salesdet_constr
    foreign key (stor_id, ord_num)
    references sales (stor_id, ord_num)
)

Example 7

Creates the table publishers with a check constraint on the pub_id column. This column-level constraint can be used in place of the pub_idrule included in the pubs2 database.

create rule pub_idrule
as @pub_id in ("1389", "0736", "0877", "1622", "1756")
or @pub_id like "99[0-9][0-9]"
create table publishers (
    pub_id char (4) not null
    check (pub_id in ("1389", "0736", "0877", "1622", 
        "1756")
        or pub_id like "99[0-9][0-9]")
    , pub_name     varchar (40)      null
    , city         varchar (20)      null
    , state        char (2)          null
)

Example 8

Specifies the ord_num column as the IDENTITY column for the sales_daily table. The first time you insert a row into the table, Adaptive Server assigns a value of 1 to the IDENTITY column. On each subsequent insert, the value of the column increments by 1.

create table sales_daily (
      stor_id         char (4)         not null
    , ord_num         numeric (10,0)   identity
    , ord_amt         money            null
)

Example 9

Specifies the datapages locking scheme for the new_titles table and an expected row size of 200:

create table new_titles (
      title_id      tid
    , title          varchar (80)    not null
    , type           char (12)
    , pub_id         char (4)        null
    , price          money           null
    , advance        money           null
    , total_sales    int             null
    , notes          varchar (200)   null
    , pubdate        datetime
    , contract       bit
)
lock datapages
with exp_row_size = 200

Example 10

Specifies the datarows locking scheme and sets a reservepagegap value of 16 so that extent I/O operations leave 1 blank page for each 15 filled pages:

create table new_publishers (
      pub_id      char (4)       not null
    , pub_name    varchar (40)   null
    , city        varchar (20)   null
    , state       char (2)       null
)
lock datarows
with reservepagegap = 16

Example 11

Creates a table named big_sales with minimal logging:

create table big_sales (
      storid        char(4)        not null
    , ord_num       varchar(20)    not null
    , order_date    datetime       not null
)
with dml_logging = minimal

Example 12

Creates a deferred table named im_not_here_yet:

create table im_not_here_yet (
col_1 int,
col_2 varchar(20)
)
with deferred_allocation

Example 13

Creates a table named mytable that uses the datarows locking scheme and permits incremental transfer:

create table mytable (
      f1 int
    , f2 bigint not null
    , f3 varchar (255) null
)
lock datarows
with transfer table on

Example 14

Creates a table named genre with row-level compression:

create table genre (
      mystery      varchar(50)    not null
    , novel        varchar(50)    not null
    , psych        varchar(50)    not null
    , history      varchar(50)    not null
    , art          varchar(50)    not null
    , science      varchar(50)    not null
    , children     varchar(50)    not null
    , cooking      varchar(50)    not null
    , gardening    varchar(50)    not null
    , poetry       varchar(50)    not null
)
with compression = row

Example 15

Creates a table named sales on segments seg1, seg2, and seg3, with compression on seg1:

create table sales (
      store_id     int    not null
    , order_num    int    not null
    , date         datetime    not null
)
partition by range (date)
    ( Y2008 values <= ('12/31/2008') 
      with compression = page on seg1,
      Y2009 values <= ('12/31/2009') on seg2,
      Y2010 values <= ('12/31/2010') on seg3)

Example 16

Creates the email table, which uses a LOB compression level of 5:

create table email (
      user_name      char (10)
    , mailtxt        text
    , photo          image
    , reply_mails    text)
    with lob_compression = 5

Example 17

Creates a constraint supported by a unique clustered index; the index order is ascending for stor_id and descending for ord_num:

create table sales_south (
      stor_id     char (4)        not null
    , ord_num     varchar (20)    not null
    , date        datetime        not null
    , unique clustered (stor_id asc, ord_num desc)
)

Example 18

Creates a table named t1 at the remote server SERVER_A and creates a proxy table named t1 that is mapped to the remote table:

create table t1 (
      a    int
    , b    char (10)
)
at "SERVER_A.db1.joe.t1"

Example 19

Creates a table named employees. name is of type varchar, home_addr is a Java-SQL column of type Address, and mailing_addr is a Java-SQL column of type Address2Line. Both Address and Address2Line are Java classes installed in the database:

create table employees (
      name            varchar (30)
    , home_addr       Address
    , mailing_addr    Address2Line
)

Example 20

Creates a table named mytable with an identity column. The identity gap is set to 10, which means ID numbers are allocated in memory in blocks of ten. If the server fails or is shut down with no wait, the maximum gap between the last ID number assigned to a row and the next ID number assigned to a row is 10 numbers:

create table mytable (
      IdNum    numeric (12,0)    identity
)
with identity_gap = 10

Example 21

Creates a table my_publishers, which is partitioned by list according to values in the state column. See the Transact-SQL Users Guide for more information about creating table partitions.

create table my_publishers (
      pub_id      char (4)        not null
    , pub_name    varchar (40)    null
    , city        varchar (20)    null
    , state       char (2)        null
)
partition by list (state) (
      west values ('CA', 'OR', 'WA') on seg1
    , east values ('NY', 'MA') on seg2
)

Example 22

Creates the table fictionsales, which is partitioned by range according to values in the date column:

create table fictionsales (
      store_id     int         not null
    , order_num    int         not null
    , date         datetime    not null
)
partition by range (date) (
      q1 values <= ("3/31/2005") on seg1
    , q2 values <= ("6/30/2005") on seg2
    , q3 values <= ("9/30/2005") on seg3
    , q4 values <= ("12/31/2005") on seg4
)

Example 23

Creates the table currentpublishers, which is partitioned by round-robin:

create table currentpublishers (
      pub_id      char (4)        not null
    , pub_name    varchar (40)    null
    , city        varchar (20)    null
    , state       char (2)        null
)
partition by roundrobin 3 on (seg1)

Example 24

Creates the table mysalesdetail, which is partitioned by hash according to values in the ord_num column:

create table mysalesdetail (
      store_id    char (4)        not null
    , ord_num     varchar (20)    not null
    , title_id    tid             not null
    , qty         smallint        not null
    , discount    float           not null
)
partition by hash (ord_num) (
      p1 on seg1
    , p2 on seg2
    , p3 on seg3
)

Example 25

Creates a table called mytitles with one materialized computed column:

create table mytitles (
      title_id    tid    not null
    , title    varchar (80)     not null
    , type    char (12)         not null
    , pub_id    char (4)        null
    , price    money            null
    , advance    money          null
    , total_sales    int        null
    , notes    varchar (200)    null
    , pubdate    datetime       not null
    , sum_sales    compute price * total_sales 
          materialized
)

Example 26

Creates an employee table with a nullable encrypted column. Adaptive Server uses the database default encryption key to encrypt the ssn data:

create table employee_table (
      ssn       char(15)    null encrypt
    , name      char(50)
    , deptid    int
)

Example 27

Creates a customer table with an encrypted column for credit card data:

create table customer (
      ccard    char(16)    unique encrypt with cc_key
                           decrypt_default 'XXXXXXXXXXXXXXXX'
    , name     char(30)
)

The ccard column has a unique constraint and uses cc_key for encryption. Because of the decrypt_default specifier, Adaptive Server returns the value ‘XXXXXXXXXXXXXXXX’ instead of the actual data when a user without decrypt permission selects the ccard column.

Example 28

Creates a table that specifies description as an in-row LOB column 300 bytes long, notes as an in-row LOB column without a specified length (inheriting the size of the off-row storage), and the reviews column as stored off-row regardless of condition:

create table new_titles (
      title_id       tid             not null
    , title          varchar (80)    not null
    , type           char (12)       null
    , price          money           null
    , pubdate        datetime        not null
    , description    text            in row (300)
    , notes          text            in row
    , reviews        text            off row
)

Example 29

Creates a virtually hashed table named orders in the pubs2 database, on the order_seg segment:

create table orders (
      id    int
    , age    int
    , primary key using clustered (id,age) = (10,1) with max 1000 key
)
on order_seg

The layout for the data is:

Figure 1-1: The data layout for the example

Example 30

Creates another virtually hashed table named orders in the pubs2 database, on the order_seg segment; here the id column has a default of NULL, and the name column is not part of the hash key:

create table orders (
      id    int default NULL
    , age    int
    , primary key using
          clustered (id,age) = (10,1) with max 100 key
    , name    varchar(30)
)
on order_seg

The layout for the data is:

Figure 1-2: The data layout for the example

Usage


Restrictions


Creating compressed tables


Restrictions for compression


Compressing tables that use large object (LOB) data

Compressing data in tables that use LOB columns includes these restrictions. You cannot:


Column definitions


Temporary tables


Using indexes


Renaming a table or its columns


Defining integrity constraints


Unique and primary key constraints


Referential integrity constraints


Using cross-database referential integrity constraints


check constraints


IDENTITY columns


Specifying a locking scheme

To specify the locking scheme for a table, use the keyword lock and one of the following locking schemes:

  • allpages locking, which locks data pages and index pages

  • datapages locking, which locks only data pages

  • datarows locking, which locks only data rows

If you do not specify a locking scheme, the default locking scheme for the server is used. The server-wide default is set with the configuration parameter lock scheme.

You can use alter table to change the locking scheme for a table.


Space management properties


Using reservepagegap


Getting information about tables


Creating tables with partitions


Creating tables with computed columns


Creating tables with encrypted columns


Limitations when creating virtually hashed tables


Creating tables for in-memory and relaxed durability databases


Determining values for hash_factor

You can keep the hash factor for the first key as 1. The hash factor for each of the remaining key columns must be greater than the maximum value of the previous key allowed in the hash region multiplied by the previous key's hash factor.

Adaptive Server allows tables with hash factors greater than 1 for the first key column to have fewer rows on a page. For example, if a table has a hash factor of 5 for the first key column, after every row in a page, space for the next four rows is kept empty. To support this, Adaptive Server requires five times the amount of table space.

If the value of a key column is greater than or equal to the hash factor of the next key column, the current row is inserted in the overflow clustered region to avoid collisions in the hash region.

For example, t is a virtually hashed table with key columns id and age, and corresponding hash factors of (10,1). Because the hash value for rows (5, 5) and (2, 35) is 55, this may result in a hash collision.

However, because the value 35 is greater than or equal to 10 (the hash factor for the next key column, id), Adaptive Server stores the second row in the overflow clustered region, avoiding collisions in the hash region.
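
A sketch of such a table definition for t (the max value and segment name are assumptions added for illustration):

create table t (
      id     int
    , age    int
    , primary key using clustered (id, age) = (10, 1)
          with max 1000 key
)
on order_seg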

In another example, u is a virtually hashed table with a primary index on (id1, id2, id3), hash factors of (125, 25, 5), and a max hash_value of 200.
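
A corresponding sketch (the column datatypes are assumptions, and segment placement is omitted):

create table u (
      id1    int
    , id2    int
    , id3    int
    , primary key using clustered (id1, id2, id3) = (125, 25, 5)
          with max 200 key
)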


Restrictions for shared-disk clusters


Java-SQL columns

Standards

ANSI SQL – Compliance level: Entry-level compliant.

Transact-SQL extensions include:

Permissions

Any user can create temporary tables and new tables with logging disabled.

The following describes permission checks for create table that differ based on your granular permissions settings.

Granular permissions enabled

With granular permissions enabled, you must have the create table privilege. You must have create any table to run create table for other users.

Granular permissions disabled

With granular permissions disabled, you must be the database owner, a user with sa_role, or a user with create table privilege.

Auditing

Values in event and extrainfo columns of sysaudits are:

Event: 10

Audit option: create

Command or access audited: create table

Information in extrainfo:

  • Roles – current active roles

  • Keywords or options – NULL

  • Previous value – NULL

  • Current value – NULL

  • Other information – NULL

  • Proxy information – original login name, if set proxy is in effect

  • If the with transfer table [on | off] option is:

    • on – Adaptive Server prints WITH TRANSFER TABLE ON in extrainfo in the audit record.

    • off – Adaptive Server prints WITH TRANSFER TABLE OFF.

See also

Commands alter table, create existing table, create index, create rule, create schema, create view, drop index, drop rule, drop table

System procedures sp_addmessage, sp_addsegment, sp_addtype, sp_bindmsg, sp_chgattribute, sp_commonkey, sp_depends, sp_foreignkey, sp_help, sp_helpjoins, sp_helpsegment, sp_primarykey, sp_rename, sp_spaceused