create table

Syntax

create table [database.[owner].]table_name (column_name datatype
	[default {constant_expression  | user | null}]
	[{identity | null | not null}]
		[off row | in row [(size_in_bytes)]]
	[[constraint constraint_name]
		{{unique | primary key}
		[clustered | nonclustered] [asc | desc]
		[with {fillfactor = pct,
				max_rows_per_page = num_rows,
				reservepagegap = num_pages}]
		[dml_logging = {full | minimal}]
		[on segment_name]
		| references [[database.]owner.]ref_table
			[(ref_column)]
			[match full]
			| check (search_condition)}]}
	[[encrypt [with key_name] 
		[decrypt_default constant_expression | null]]
	[[constraint [database.[owner].]key_name]
		{unique | primary key} 
			[clustered | nonclustered]
			 (column_name [asc | desc]
				[{, column_name [asc | desc]}...])
			[with {fillfactor = pct,
				max_rows_per_page = num_rows,
				reservepagegap = num_pages}]
			[on segment_name]
		| foreign key (column_name [{,column_name}...])
			references [[database.]owner.]ref_table
				[(ref_column [{, ref_column}...])]
				[match full]
		| check (search_condition) ...}
	[{, {next_column | next_constraint}}...])
	[lock {datarows | datapages | allpages}]
	[with {max_rows_per_page = num_rows, 
			exp_row_size = num_bytes, 
			reservepagegap = num_pages,
			identity_gap = value,
			transfer table [on | off]}]
	[on segment_name]
	[partition_clause]
	[[external table] at pathname]
	[for load]

Syntax 2: Use this syntax for partitions

partition_clause::=	partition by range (column_name[, column_name]...) 
		 ([partition_name] values <= ({constant | MAX} 
			[, {constant | MAX}] ...) [on segment_name] 
			[, [partition_name] values <= ({constant | MAX} 
				[, {constant | MAX}] ...) [on segment_name]]...)

	| partition by hash (column_name[, column_name]...)
		{ (partition_name [on segment_name] 
			[, partition_name [on segment_name]]...) 
		| number_of_partitions 
			[on (segment_name[, segment_name] ...)]} 

	| partition by list (column_name) 
		 ([partition_name] values (constant[, constant] ...) 
			[on segment_name] 
			[, [partition_name] values (constant[, constant] ...) 
				[on segment_name]] ...)

	| partition by roundrobin 
		{ (partition_name [on segment_name] 
			[, partition_name [on segment_name]]...) 
		| number_of_partitions 
			[on (segment_name[, segment_name]...)]}

Syntax 3: Use this syntax for computed columns

create table [database.[owner].]table_name
	 (column_name {compute | as} 
		computed_column_expression [materialized | not materialized])

Syntax 4: Use this syntax to create a virtually hashed table

create table [database.[owner].]table_name 
. . .
		| {unique | primary key}
		using clustered
		(column_name [asc | desc] [{, column_name [asc | desc]}...])=
		(hash_factor [{, hash_factor}...])
			with max num_hash_values key

Parameters

table_name

is the explicit name of the new table. Specify the database name if the table is in another database, and specify the owner’s name if more than one table of that name exists in the database. The default value for owner is the current user, and the default value for database is the current database.

You cannot use a variable for the table name. The table name must be unique within the database and to the owner. If you have set quoted_identifier on, you can use a delimited identifier for the table name. Otherwise, it must conform to the rules for identifiers. For more information about valid table names, see “Identifiers” on page 369 in Chapter 4, “Expressions, Identifiers, and Wildcard Characters,” of Reference Manual: Building Blocks.

You can create a temporary table by preceding the table name with either a pound sign (#) or “tempdb..”. For more information, see “Tables beginning with # (temporary tables)” on page 372 in Chapter 4, “Expressions, Identifiers, and Wildcard Characters,” of Reference Manual: Building Blocks.
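For example, either of the following creates a temporary table (the table and column names here are illustrative):

```sql
-- Session-scoped temporary table, dropped automatically when the session ends:
create table #scratch (id int, note varchar (40))

-- Shareable temporary table in tempdb, visible to other sessions until dropped:
create table tempdb..shared_scratch (id int, note varchar (40))
```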

You can create a table in a different database, as long as you are listed in the sysusers table and have create table permission for that database. For example, you can use either of the following to create a table called newtable in the database otherdb:

create table otherdb..newtable
create table otherdb.yourname.newtable
column_name

is the name of the column in the table. It must be unique in the table. If you have set quoted_identifier on, you can use a delimited identifier for the column. Otherwise, it must conform to the rules for identifiers. For more information about valid column names, see Chapter 4, “Expressions, Identifiers, and Wildcard Characters,” of Reference Manual: Building Blocks.

datatype

is the datatype of the column. System or user-defined datatypes are acceptable. Certain datatypes expect a length, n, in parentheses:

 datatype (n)

Others expect a precision, p, and scale, s:

 datatype (p,s)

See Chapter 1, “System and User-Defined Datatypes” in Reference Manual: Building Blocks for more information.

If Java is enabled in the database, datatype can be the name of a Java class, either a system class or a user-defined class, that has been installed in the database. See Java in Adaptive Server Enterprise for more information.

default

specifies a default value for a column. If you specify a default and the user does not provide a value for the column when inserting data, Adaptive Server inserts the default value. The default can be a constant expression; the built-in user, which inserts the name of the user performing the insert; or null, which inserts the null value. Adaptive Server generates a name for the default in the form tabname_colname_objid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objid is the object ID number for the default. Defaults declared for columns with the IDENTITY property have no effect on column values.

You can reference global variables in the default section of create table statements that do not reference database objects. You cannot, however, use global variables in the check section of create table.

constant_expression

is a constant expression to use as a default value for the column. It cannot include global variables, the name of any columns, or other database objects, but can include built-in functions that do not reference database objects. This default value must be compatible with the datatype of the column, or Adaptive Server generates a datatype conversion error when attempting to insert the default.
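For example, a built-in function that does not reference database objects, such as getdate, can serve as a default (the table and column names are illustrative):

```sql
create table orders_log
    (order_id int                        not null,
     entered  datetime default getdate() not null)
```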

user | null

specifies that Adaptive Server should insert the user name or the null value as the default if the user does not supply a value. For user, the datatype of the column must be either char (30) or varchar (30). For null, the column must allow null values.
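For example (illustrative names):

```sql
create table audit_log
    (action  varchar (40) not null,
     -- user default: column must be char(30) or varchar(30)
     who     varchar (30) default user not null,
     -- null default: column must allow null values
     comment varchar (200) default null null)
```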

encrypt [with key_name]

creates an encrypted column. Specify the database name if the key is in another database. Specify the owner’s name if key_name is not unique to the database. The default value for owner is the current user, and the default value for database is the current database.

The table creator must have select permission on the key. If you do not supply key_name, Adaptive Server looks for a default key in the database.

See Chapter 3, “Encrypted Data,” in User Guide for Encrypted Columns.

key_name

identifies a key created using create encryption key. The creator of the table must have select permission on key_name. If key_name is not supplied, Adaptive Server looks for a default key created using the as default clause on create encryption key or alter encryption key.

decrypt_default constant_expression

specifies that this column returns a default value for users who do not have decrypt permissions, and constant_expression is the constant value Adaptive Server returns on select statements instead of the decrypted value. The value can be NULL on nullable columns only. If the decrypt default value cannot be converted to the column’s data type, Adaptive Server catches the conversion error only when it executes the query.
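A minimal sketch, assuming an encryption key named ssn_key has already been created with create encryption key:

```sql
create table customers
    (cust_id int,
     ssn     char (11) encrypt with ssn_key
             decrypt_default 'N/A')
```

Users without decrypt permission who select the ssn column see 'N/A' in place of the decrypted value.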

identity

indicates that the column has the IDENTITY property. Each table in a database can have one IDENTITY column with a datatype of:

  • exact numeric and a scale of 0; or

  • Any of the integer datatypes, including signed or unsigned bigint, int, smallint, or tinyint.

IDENTITY columns are not updatable and do not allow nulls.

IDENTITY columns are used to store sequential numbers—such as invoice numbers or employee numbers—that are generated automatically by Adaptive Server. The value of the IDENTITY column uniquely identifies each row in a table.

null | not null

specifies Adaptive Server behavior during data insertion if no default exists.

null specifies that Adaptive Server assigns a null value if a user does not provide a value.

not null specifies that a user must provide a non-null value if no default exists.

A column of the bit datatype must always be not null.

If you do not specify null or not null, Adaptive Server uses not null by default. However, you can switch this default using sp_dboption to make the default compatible with the SQL standards.

off row | in row

specifies whether a Java-SQL column is stored separate from the row (off row) or in storage allocated directly in the row (in row).

The default value is off row. For more information, see Java in Adaptive Server Enterprise.

size_in_bytes

specifies the maximum size of the in-row column. An object stored in-row can occupy up to approximately 16K bytes, depending on the page size of the database server and other variables. The default value is 255 bytes.
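A minimal sketch, assuming a Java class named Address has been installed in the database:

```sql
create table employees2
    (name      varchar (30),
     home_addr Address in row (400))  -- store the object in-row, up to 400 bytes
```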

constraint

introduces the name of an integrity constraint.

constraint_name

is the name of the constraint. It must conform to the rules for identifiers and be unique in the database. If you do not specify the name for a referential or check constraint, Adaptive Server generates a name in the form tabname_colname_objectid where:

  • tabname – is the first 10 characters of the table name

  • colname – is the first 5 characters of the column name

  • objectid – is the object ID number for the constraint

If you do not specify the name for a unique or primary key constraint, Adaptive Server generates a name in the format tabname_colname_tabindid, where tabindid is a string concatenation of the table ID and index ID.

unique

constrains the values in the indicated column or columns so that no two rows have the same value. This constraint creates a unique index that can be dropped only if the constraint is dropped using alter table.

primary key

constrains the values in the indicated column or columns so that no two rows have the same value, and so that the value cannot be NULL. This constraint creates a unique index that can be dropped only if the constraint is dropped using alter table.

clustered | nonclustered

specifies that the index created by a unique or primary key constraint is a clustered or nonclustered index. clustered is the default for primary key constraints; nonclustered is the default for unique constraints. There can be only one clustered index per table. See create index for more information.

asc | desc

specifies whether the index created for a constraint is to be created in ascending or descending order for each column. The default is ascending order.

fillfactor

specifies how full Adaptive Server makes each page when it creates a new index on existing data. The fillfactor percentage is relevant only when the index is created. As the data changes, the pages are not maintained at any particular level of fullness.

The default for fillfactor is 0; this is used when you do not include with fillfactor in the create index statement (unless the value has been changed with sp_configure). When specifying a fillfactor, use a value between 1 and 100.

A fillfactor of 0 creates clustered indexes with completely full pages and nonclustered indexes with completely full leaf pages. It leaves a comfortable amount of space within the index B-tree in both the clustered and nonclustered indexes. There is seldom a reason to change the fillfactor.

If the fillfactor is set to 100, Adaptive Server creates both clustered and nonclustered indexes with each page 100 percent full. A fillfactor of 100 makes sense only for read-only tables—tables to which no data is ever added.

fillfactor values smaller than 100 (except 0, which is a special case) cause Adaptive Server to create new indexes with pages that are not completely full. A fillfactor of 10 might be a reasonable choice if you are creating an index on a table that will eventually hold a great deal more data, but small fillfactor values cause each index (or index and data) to take more storage space.

If Component Integration Services is enabled, you cannot use fillfactor for remote servers.

WARNING!  Creating a clustered index with a fillfactor affects the amount of storage space your data occupies, since Adaptive Server redistributes the data as it creates the clustered index.

max_rows_per_page

limits the number of rows on data pages and the leaf-level pages of indexes. Unlike fillfactor, the max_rows_per_page value is maintained when data is inserted or deleted.

If you do not specify a value for max_rows_per_page, Adaptive Server uses a value of 0 when creating the table. Values for tables and clustered indexes are between 0 and 256. The maximum number of rows per page for nonclustered indexes depends on the size of the index key; Adaptive Server returns an error message if the specified value is too high.

A max_rows_per_page of 0 creates clustered indexes with full data pages and nonclustered indexes with full leaf pages. It leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes.

Using low values for max_rows_per_page reduces lock contention on frequently accessed data. However, using low values also causes Adaptive Server to create new indexes with pages that are not completely full, uses more storage space, and may cause more page splits.

If Component Integration Services is enabled, and you create a proxy table, then max_rows_per_page is ignored. Proxy tables do not contain any data. If max_rows_per_page is used to create a table, and later a proxy table is created to reference that table, then the max_rows_per_page limits apply when you insert or delete through the proxy table.
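For example, to reduce lock contention on a small, frequently updated table (illustrative names):

```sql
create table hot_counters
    (counter_id int not null,
     value      int not null)
with max_rows_per_page = 4
```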

on segment_name

when used with the constraint option, specifies that the index is to be created on the named segment. Before the on segment_name option can be used, the device must be initialized with disk init, and the segment must be added to the database with sp_addsegment. See your System Administrator or use sp_helpsegment for a list of the segment names available in your database.

If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.

references

specifies a column list for a referential integrity constraint. You can specify only one column value for a column constraint. When you include this constraint, any data inserted into the referencing table must already exist in the referenced table.

To use this constraint, you must have references permission on the referenced table. The specified columns in the referenced table must be constrained by a unique index (created by either a unique constraint or a create index statement). If no columns are specified, there must be a primary key constraint on the appropriate columns in the referenced table. Also, the datatypes of the referencing table columns must match the datatype of the referenced table columns.

foreign key

specifies that the listed columns are foreign keys in this table whose target keys are the columns listed in the following references clause. The foreign-key syntax is permitted only for table-level constraints, not for column-level constraints.

ref_table

is the name of the table that contains the referenced columns. You can reference tables in another database. Constraints can reference as many as 192 user tables and internally generated worktables.

ref_column

is the name of the column or columns in the referenced table.

match full

specifies that if all values in the referencing columns of a referencing row are:

  • Null – the referential integrity condition is true.

  • Non-null values – if there is a referenced row where each corresponding column is equal in the referenced table, then the referential integrity condition is true.

Otherwise, the referential integrity condition is false; that is, when:

  • All values are non-null and not equal, or

  • Some of the values in the referencing columns of a referencing row are non-null values, while others are null.

check

specifies a search_condition constraint that Adaptive Server enforces for all the rows in the table. You can specify check constraints as table or column constraints; create table allows multiple check constraints in a column definition.

Although you can reference global variables in the default section of create table statements, you cannot use them in the check section.

search_condition

is the check constraint on the column values. These constraints can include:

  • A list of constant expressions introduced with in

  • A set of conditions introduced with like, which may contain wildcard characters

Column and table check constraints can reference any columns in the table.

An expression can include arithmetic operators and functions. The search_condition cannot contain subqueries, aggregate functions, host variables, or parameters.

next_column | next_constraint

indicates that you can include additional column definitions or table constraints (separated by commas) using the same syntax described for a column definition or table constraint definition.

lock datarows | datapages | allpages

specifies the locking scheme to be used for the table. The default is the server-wide setting for the configuration parameter lock scheme.

exp_row_size = num_bytes

specifies the expected row size; applies only to datarows and datapages locking schemes, and only to tables with variable-length rows. Valid values are 0, 1, and any value between the minimum and maximum row length for the table. The default value is 0, which means a server-wide setting is applied.

reservepagegap = num_pages

specifies the ratio of filled pages to empty pages that are to be left during extent I/O allocation operations. For each specified num_pages, an empty page is left for future expansion of the table. Valid values are 0 – 255. The default value is 0.

dml_logging

determines the amount of logging for insert, update, and delete operations, and for some forms of bulk inserts. One of:

  • full – Adaptive Server logs all transactions

  • minimal – Adaptive Server does not log row or page changes

transfer table [on | off]

marks the table for incremental transfer. The default value of this parameter is off.

with identity_gap

specifies the identity gap for the table. This value overrides the system identity gap setting for this table only.

value

is the identity gap amount. For more information about setting the identity gap, see “IDENTITY columns”.

on segment_name

specifies the name of the segment on which to place the table. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your System Administrator or use sp_helpsegment for a list of the segment names available in your database.

When used for partitions, specifies the segment on which to place the partition.

external table

specifies that the object is a remote table or view. external table is the default, so specifying this is optional.

partition by range

specifies records are to be partitioned according to specified ranges of values in the partitioning column or columns.

column_name

when used in the partition_clause, specifies a partition key column.

partition_name

specifies the name of a new partition on which table records are stored. Partition names must be unique within the set of partitions on a table or index. Partition names can be delimited identifiers if set quoted_identifier is on. Otherwise, they must be valid identifiers.

If partition_name is omitted, Adaptive Server creates a name in the form table_name_partition_id. Adaptive Server truncates partition names that exceed the allowed maximum length.

on segment_name

when used in the partition_clause, specifies the segment on which the partition is to be placed. Before the on segment_name option can be used, the device must be initialized with disk init, and the segment must be added to the database using the sp_addsegment system procedure. See your System Administrator or use sp_helpsegment for a list of the segment names available in your database.

values <= constant | MAX

specifies the inclusive upper bound of values for a named partition. Specifying a constant value for the highest partition bound imposes an implicit integrity constraint on the table. The keyword MAX specifies the maximum value in a given datatype.
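For example, assuming segments seg1 and seg2 exist in the database (all names here are illustrative):

```sql
create table readings
    (reading_id  int not null,
     temperature int not null)
partition by range (temperature)
    (low  values <= (50)  on seg1,
     high values <= (MAX) on seg2)
```

Rows with a temperature of 50 or less go to partition low; all remaining rows go to high.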

partition by hash

specifies records are to be partitioned by a system-supplied hash function. The function computes a hash value over the partition key columns, which determines the partition to which each record is assigned.

partition by list

specifies records are to be partitioned according to literal values specified in the named column. Only one column can partition a list-partitioned table. You can specify up to 250 distinct list values for each partition.

partition by round-robin

specifies records are to be partitioned in a sequential manner. A round-robin partitioned table has no partitioning key. Neither the user nor the optimizer knows the partition of a particular record.

at pathname

specifies the location of the remote object. Using the at pathname clause results in the creation of a proxy table.

pathname takes the form server_name.dbname.owner.object;aux1.aux2.

{compute | as}

reserved keywords that you can use interchangeably to indicate that a column is a computed column.

computed_column_expression

is any valid T-SQL expression that does not contain columns from other tables, local variables, aggregate functions, or subqueries. It can be one or a combination of column names, constants, functions, global variables, or case expressions, connected by one or more operators. You cannot cross-reference between computed columns except when virtual computed columns reference materialized computed columns.

materialized | not materialized

specifies whether or not the computed column is materialized and physically stored in the table. If neither keyword is specified, a computed column by default is not materialized, and thus not physically stored in the table.
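For example, a sketch (illustrative names) with one materialized and one virtual computed column:

```sql
create table order_lines
    (qty       int   not null,
     price     money not null,
     total     compute qty * price materialized,      -- physically stored
     total_chk as      qty * price not materialized)  -- evaluated on access
```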

using clustered

indicates you are creating a virtually-hashed table. The list of columns is treated as the key columns for this table.

column_name [asc | desc]

because rows are placed based on their hash function, you cannot use [asc | desc] for the hash region. If you provide an order for the key columns of virtually hashed tables, it is used only in the overflow clustered region.

hash_factor

is required by the hash function for virtually-hashed tables; a hash factor is required for every key column. These factors are combined with the key values to generate the hash value for a particular row.

with max num_hash_values key

specifies the maximum number of hash values that can be used, defining the upper bound on the output of this hash function.

Determining values for hash_factor

You can keep the hash factor for the first key column as 1. The hash factor for each remaining key column must be greater than the maximum value of the previous key column allowed in the hash region, multiplied by that column's hash factor.

Adaptive Server allows tables with hash factors greater than 1 for the first key column to have fewer rows on a page. For example, if a table has a hash factor of 5 for the first key column, after every row in a page, space for the next four rows is kept empty. To support this, Adaptive Server requires five times the amount of table space.

If the value of a key column is greater than or equal to the hash factor of the next key column, the current row is inserted in the overflow clustered region to avoid collisions in the hash region.

For example, t is a virtually-hashed table with key columns id and age, and corresponding hash factors of (10,1). Because the hash value for rows (5, 5) and (2, 35) is 55, this may result in a hash collision.

However, because the value 35 is greater than or equal to 10 (the hash factor for the next key column, id), Adaptive Server stores the second row in the overflow clustered region, avoiding collisions in the hash region.

In another example, if u is a virtually-hashed table with a primary index and hash factors of (id1, id2, id3) = (125, 25, 5) and a max hash_value of 200:

  • Row (1,1,1) has a hash value of 155 and is stored in the hash region.

  • Row (2,0,0) has a hash value of 250, which exceeds the maximum of 200, so it is stored in the overflow clustered region.

  • Row (0,0,6) has a hash value of 6 x 5 = 30, which is greater than or equal to 25 (the hash factor of the next key column), so it is stored in the overflow clustered region.

  • Row (0,7,0) has a hash value of 7 x 25 = 175, which is greater than or equal to 125, so it is stored in the overflow clustered region.

for load

creates a table that is available only to bcp in and alter table unpartition operations.

Examples

Example 1

Creates a table using for load, called TAB:

create table TAB (col1 int, col2 int, col3 char (50))
partition by roundrobin 3 for load

TAB is unavailable to any user activity until it is unpartitioned.

  1. Load the data into TAB, using BCP IN.

  2. Unpartition TAB.

TAB is now available for any user activities.

Example 2

Creates a virtually-hashed table named orders on the pubs2 database on the order_seg segment:

create table orders(
id int,
age int,
primary key using clustered (id,age) = (10,1) with max 1000 key)
on order_seg

The layout for the data is:

Figure 1-1: The data layout for the example

Example 3

Creates a virtually-hashed table named orders on the pubs2 database on the order_seg segment:

create table orders(
id int default NULL,
age int,
primary key using clustered (id,age) = (10,1) with max 100 key,
name varchar(30)
)
on order_seg

The layout for the data is:

Figure 1-2: The data layout for the example

Example 4

Creates the foo table using the @@spid global variable with the default parameter:

create table foo (a int , b int default @@spid)

Example 5

Creates the titles table:

create table titles
 (title_id tid not null,
title varchar (80) not null,
type char (12) not null,
pub_id char (4) null,
price money null,
advance money null,
total_sales int null,
notes varchar (200) null,
pubdate datetime not null,
contract bit not null)

Example 6

Creates the compute table. The table name and the column names, max and min, are enclosed in double quotes because they are reserved words. The total score column name is enclosed in double quotes because it contains an embedded blank. Before creating this table, you must set quoted_identifier on:

create table "compute"
  ("max" int, "min" int, "total score" int)

Example 7

Creates the sales table and a clustered index in one step with a unique constraint. (In the pubs2 database installation script, there are separate create table and create index statements):

create table sales
 (stor_id         char (4)      not null,
ord_num          varchar (20)  not null,
date             datetime     not null,
unique clustered (stor_id, ord_num))

Example 8

Creates the salesdetail table with two referential integrity constraints and one default value. There is a table-level, referential integrity constraint named salesdet_constr and a column-level, referential integrity constraint on the title_id column without a specified name. Both constraints specify columns that have unique indexes in the referenced tables (titles and sales). The default clause with the qty column specifies 0 as its default value:

create table salesdetail
(stor_id   char (4)                  not null,
 ord_num   varchar (20)              not null,
 title_id  tid                       not null
           references titles (title_id),
 qty       smallint default 0        not null,
 discount  float                     not null,

constraint salesdet_constr
    foreign key (stor_id, ord_num)
    references sales (stor_id, ord_num))

Example 9

Creates the table publishers with a check constraint on the pub_id column. This column-level constraint can be used in place of the pub_idrule included in the pubs2 database:

create rule pub_idrule
as @pub_id in ("1389", "0736", "0877", "1622", "1756")
or @pub_id like "99[0-9][0-9]"
create table publishers
 (pub_id char (4) not null
    check (pub_id in ("1389", "0736", "0877", "1622", 
        "1756")
    or pub_id like "99[0-9][0-9]"),
pub_name     varchar (40)  null,
city     varchar (20)      null,
state    char (2)          null)

Example 10

Specifies the ord_num column as the IDENTITY column for the sales_daily table. The first time you insert a row into the table, Adaptive Server assigns a value of 1 to the IDENTITY column. On each subsequent insert, the value of the column increments by 1:

create table sales_daily
(stor_id         char (4)         not null,
ord_num          numeric (10,0)   identity,
ord_amt          money            null)

Example 11

Specifies the datapages locking scheme for the new_titles table and an expected row size of 200:

create table new_titles (
      title_id    tid,
      title       varchar (80) not null,
      type        char (12) ,
      pub_id      char (4) null,
      price       money null,
      advance     money null,
      total_sales int null,
      notes       varchar (200) null,
      pubdate     datetime,
      contract    bit                            )
lock datapages
with exp_row_size = 200

Example 12

Specifies the datarows locking scheme and sets a reservepagegap value of 16 so that extent I/O operations leave 1 blank page for each 15 filled pages:

create table new_publishers (
pub_id          char (4) not null,
pub_name        varchar (40) null,
city            varchar (20) null,
state           char (2) null )
lock datarows
with reservepagegap = 16

Example 13

Creates a table named big_sales with minimal logging:

create table big_sales
(storid char(4) not null,
ord_num varchar(20) not null,
order_date datetime not null)
with dml_logging = minimal

Example 14

Creates a table called "mytable" that uses the datarows locking scheme and permits incremental transfer:

create table mytable (
	f1 int,
	f2 bigint not null,
	f3 varchar (255) null
)
lock datarows
with transfer table on

Example 15

Creates a constraint supported by a unique clustered index; the index order is ascending for stor_id and descending for ord_num:

create table sales_south
 (stor_id         char (4)      not null,
ord_num          varchar (20)  not null,
date             datetime     not null,
unique clustered (stor_id asc, ord_num desc))

Example 16

Creates a table named t1 at the remote server SERVER_A and creates a proxy table named t1 that is mapped to the remote table:

create table t1
 (a        int,
 b        char (10))
at "SERVER_A.db1.joe.t1"

Example 17

Creates a table named employees. name is of type varchar, home_addr is a Java-SQL column of type Address, and mailing_addr is a Java-SQL column of type Address2Line. Both Address and Address2Line are Java classes installed in the database:

create table employees
 (name varchar (30),
home_addr Address,
mailing_addr Address2Line)

Example 18

Creates a table named mytable with an identity column. The identity gap is set to 10, which means ID numbers are allocated in memory in blocks of ten. If the server fails or is shut down with no wait, the maximum gap between the last ID number assigned to a row and the next ID number assigned to a row is ten numbers:

create table mytable 
 (IdNum numeric (12,0) identity)
with identity_gap = 10

For more information about identity gaps, see “Managing Identity Gaps in Tables” in Chapter 7, “Creating Databases and Tables” in the Transact-SQL User’s Guide.
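The identity gap can be adjusted after table creation with sp_chgattribute; the new value below is illustrative:

```sql
-- change the block size for in-memory identity allocation to 20
exec sp_chgattribute mytable, "identity_gap", 20
```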

Example 19

Creates a table my_publishers, which is partitioned by list according to values in the state column. See the Transact-SQL User’s Guide for more information about creating table partitions.

create table my_publishers
 (pub_id char (4) not null,
pub_name varchar (40) null,
city varchar (20) null,
state char (2) null)
partition by list (state)
 (west values ('CA', 'OR', 'WA') on seg1,
east values ('NY', 'MA') on seg2)

Example 20

Creates the table fictionsales, which is partitioned by range according to values in the date column. See the Transact-SQL User’s Guide for more information about creating table partitions.

create table fictionsales
 (store_id int not null,
order_num int not null,
date datetime not null)
partition by range (date)
 (q1 values <= ("3/31/2005") on seg1,
q2 values <= ("6/30/2005") on seg2,
q3 values <= ("9/30/2005") on seg3,
q4 values <= ("12/31/2005") on seg4)

Example 21

Creates the table currentpublishers, which is partitioned by round-robin. See the Transact-SQL User’s Guide for more information about creating table partitions.

create table currentpublishers
 (pub_id char (4) not null,
pub_name varchar (40) null,
city varchar (20) null,
state char (2) null)
partition by roundrobin 3 on (seg1)

Example 22

Creates the table mysalesdetail, which is partitioned by hash according to values in the ord_num column. See the Transact-SQL User’s Guide for more information.

create table mysalesdetail
  (store_id char (4) not null,
  ord_num varchar (20) not null,
  title_id tid not null,
  qty smallint not null,
  discount float not null)
  partition by hash (ord_num)
  (p1 on seg1, p2 on seg2, p3 on seg3)

Example 23

Creates a table called mytitles with one materialized computed column:

create table mytitles
 (title_id tid not null,
title varchar (80) not null,
type char (12) not null,
pub_id char (4) null,
price money null,
advance money null,
total_sales int null,
notes varchar (200) null,
pubdate datetime not null,
sum_sales compute price * total_sales materialized)

Example 24

Creates an employee table with a nullable encrypted column. Adaptive Server uses the database default encryption key to encrypt the ssn data:

create table employee_table (ssn char(15) null 
   encrypt, name char(50), deptid int)

Example 25

To create a customer table with an encrypted column for credit card data, enter:

create table customer (ccard char(16) unique 
   encrypt with cc_key
   decrypt_default 'XXXXXXXXXXXXXXXX', name char(30))

The ccard column has a unique constraint and uses cc_key for encryption. Because of the decrypt_default specifier, Adaptive Server returns the value ‘XXXXXXXXXXXXXXXX’ instead of the actual data when a user without decrypt permission selects the ccard column.
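To let a trusted user or role see the plaintext card numbers instead of the decrypt default, the owner can grant decrypt permission on the column; the role name below is hypothetical:

```sql
-- hypothetical role; members will see actual card numbers
-- rather than 'XXXXXXXXXXXXXXXX' when selecting ccard
grant decrypt on customer(ccard) to card_admin_role
```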

Usage


Restrictions


Column definitions


Temporary tables


Using indexes


Renaming a table or its columns


Specifying ascending or descending ordering in indexes


Defining integrity constraints


Unique and primary key constraints


Referential integrity constraints


Using cross-database referential integrity constraints


check constraints


IDENTITY columns


Specifying a locking scheme


Space management properties


Using exp_row_size


Using reservepagegap


Using at


Java-SQL columns


Getting information about tables


Creating tables with partitions


Creating tables with computed columns


Creating tables with encrypted columns


Creating virtually-hashed tables

These are the limitations for virtually-hashed tables:


Creating tables for in-memory and relaxed durability databases


Restrictions for shared-disk clusters

Standards

ANSI SQL – Compliance level: Entry-level compliant.

The following are Transact-SQL extensions:

Permissions

create table permission defaults to the database owner, who can transfer it to other users. Any user can create temporary tables and new tables with logging disabled.
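For example, the database owner can transfer the permission to an individual user; the user name here is illustrative:

```sql
-- database owner grants create table permission to user mary
grant create table to mary
```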

Auditing

Values in event and extrainfo columns of sysaudits are:

Event: 10
Audit option: create
Command or access audited: create table
Information in extrainfo:
  • Roles – Current active roles
  • Keywords or options – NULL
  • Previous value – NULL
  • Current value – NULL
  • Other information – NULL
  • Proxy information – Original login name, if set proxy in effect

See also

Commands alter table, create existing table, create index, create rule, create schema, create view, drop index, drop rule, drop table

System procedures sp_addmessage, sp_addsegment, sp_addtype, sp_bindmsg, sp_chgattribute, sp_commonkey, sp_depends, sp_foreignkey, sp_help, sp_helpjoins, sp_helpsegment, sp_primarykey, sp_rename, sp_spaceused