Adds new columns to a table; drops or modifies existing columns; adds, changes, or drops constraints; changes properties of an existing table; enables or disables triggers on a table.
Supports adding, dropping, and modifying computed columns, and enables the materialized property, nullability, or definition of an existing computed column to be changed.
Partitions or repartitions a table with a specified partition strategy, or adds partitions to a table with existing partitions. Syntax for altering table partitions is listed separately. See the alter table syntax for partitions.
alter table [[database.][owner].]table_name
    { add column_name datatype
        [default {constant_expression | user | null}]
        {identity | null | not null}
        [off row | in row]
        [[constraint constraint_name]
            {{unique | primary key}
                [clustered | nonclustered] [asc | desc]
                [with {fillfactor = pct,
                        max_rows_per_page = num_rows,
                        reservepagegap = num_pages}]
                [on segment_name]
            | references [[database.]owner.]ref_table
                [(ref_column)] [match full]
            | check (search_condition)}]
        [encrypt [with [[database.][owner].]keyname]
            [decrypt_default {constant_expression | null}]]
        [, next_column]...
    | add [constraint constraint_name]
        {unique | primary key
            [clustered | nonclustered]
            (column_name [asc | desc]
                [, column_name [asc | desc]...])
            [with {fillfactor = pct,
                    max_rows_per_page = num_rows,
                    reservepagegap = num_pages}]
            [on segment_name]
        | foreign key (column_name [{, column_name}...])
            references [[database.]owner.]ref_table
                [(ref_column [{, ref_column}...])]
                [match full]
        | check (search_condition)}
    | set dml_logging = {full | minimal | default}
    | drop {column_name [, column_name]...
        | constraint constraint_name}
    | modify column_name
        [datatype [null | not null]]
        [encrypt [with keyname] [decrypt_default [value]]
        | decrypt]
        [, next_column]...
    | replace column_name
        default {constant_expression | user | null}
        | decrypt_default {constant_expression | null}
        | drop decrypt_default
    | lock {allpages | datarows | datapages}
    | with exp_row_size = num_bytes
        transfer table [on | off]
    | partition number_of_partitions
    | unpartition
    | partition_clause
    | add_partition_clause }
alter table syntax for partitions:
partition_clause ::=
    partition by range (column_name[, column_name]...)
        ([partition_name] values <= ({constant | MAX}
                [, {constant | MAX}]...) [on segment_name]
        [, [partition_name] values <= ({constant | MAX}
                [, {constant | MAX}]...) [on segment_name]]...)
  | partition by hash (column_name[, column_name]...)
        { (partition_name [on segment_name]
            [, partition_name [on segment_name]]...)
        | number_of_partitions
            [on (segment_name[, segment_name]...)]}
  | partition by list (column_name)
        ([partition_name] values (constant[, constant]...)
            [on segment_name]
        [, [partition_name] values (constant[, constant]...)
            [on segment_name]]...)
  | partition by roundrobin
        { (partition_name [on segment_name]
            [, partition_name [on segment_name]]...)
        | number_of_partitions
            [on (segment_name[, segment_name]...)]}

add_partition_clause ::=
    add partition
        { ([partition_name] values <= ({constant | MAX}
                [, {constant | MAX}]...) [on segment_name]
          [, [partition_name] values <= ({constant | MAX}
                [, {constant | MAX}]...) [on segment_name]]...)
        | ([partition_name] values (constant[, constant]...)
                [on segment_name]
          [, [partition_name] values (constant[, constant]...)
                [on segment_name]]...)}
alter table syntax for computed columns
alter table
    add column_name {compute | as} computed_column_expression...
        [materialized | not materialized]
  | drop column_name
  | modify column_name
        { null | not null
        | {materialized | not materialized} [null | not null]
        | {compute | as} computed_column_expression
            [materialized | not materialized] [null | not null]}
alter table syntax for dropping partitions
alter table table_name drop partition partition_name [, partition_name]...
table_name – is the name of the table to change. Specify the database name if the table is in another database, and specify the owner’s name if more than one table of that name exists in the database. The default value for owner is the current user, and the default value for database is the current database.
add – specifies the name of the column or constraint to add to the table. If Component Integration Services is enabled, you cannot use add for remote servers.
column_name – is the name of a column in that table. If Java is enabled in the database, the column can be a Java-SQL column.
datatype – is any system datatype except bit, or any user-defined datatype except those based on bit.
If Java is enabled in the database, datatype can be the name of a Java class installed in the database, either a system class or a user-defined class. See Java in Adaptive Server Enterprise for more information.
default – specifies a default value for a column. If you specify a default and the user does not provide a value for this column when inserting data, Adaptive Server inserts this value. The default can be a constant_expression, user (to insert the name of the user who is inserting the data), or null (to insert the null value).
Adaptive Server generates a name for the default in the form of tabname_colname_objid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objid is the object ID number for the default. Setting the default to null drops the default.
If Component Integration Services is enabled, you cannot use default for remote servers.
constant_expression – is a constant expression to use as a default value for a column. It cannot include global variables, the name of any columns, or other database objects, but can include built-in functions. This default value must be compatible with the datatype of the column.
user – specifies that Adaptive Server should insert the user name as the default if the user does not supply a value. The datatype of the column must be char(30), varchar(30), or a type that Adaptive Server implicitly converts to char; however, if the datatype is not char(30) or varchar(30), truncation may occur.
null | not null – specifies the Adaptive Server behavior during data insertion if no default exists.
null specifies that a column is added that allows nulls. Adaptive Server assigns a null value during inserts if a user does not provide a value.
The properties of a bit-type column must always be not null.
not null specifies that a column is added that does not allow nulls. Users must provide a non-null value during inserts if no default exists.
If you do not specify null or not null, Adaptive Server uses not null by default. However, you can switch this default using sp_dboption to make the default compatible with the SQL standards. If you specify (or imply) not null for the newly added column, a default clause is required. The default value is used for all existing rows of the newly added column, and applies to future inserts as well.
encrypt – specifies an encrypted column and the key used to encrypt it. You may use encrypt [with keyname] on columns with any system datatype other than text and image.
keyname – identifies a key created using create encryption key. The table owner must have select permission on keyname. If keyname is not supplied, Adaptive Server looks for a default key created using create encryption key or alter encryption key as default.
decrypt_default – specifies that this column returns a default value for users who do not have decrypt permission, and constant_expression is the value Adaptive Server returns on select statements instead of the decrypted value. The value can be NULL on nullable columns only. If the decrypt_default value cannot be converted to the column’s datatype, Adaptive Server catches the conversion error only when it executes the query.
decrypt – decrypts the encrypted column.
identity – indicates that the column has the IDENTITY property. Each table in a database can have one IDENTITY column with a datatype of:
An exact numeric datatype with a scale of 0, or
Any of the integer datatypes, including signed or unsigned bigint, int, smallint, or tinyint.
IDENTITY columns are not updatable and do not allow nulls.
IDENTITY columns store sequential numbers, such as invoice numbers or employee numbers that are automatically generated by Adaptive Server. The value of the IDENTITY column uniquely identifies each row in a table.
off row | in row – specifies whether the Java-SQL column is stored separately from the row, or in storage allocated directly in the row.
The storage for an in row column cannot exceed 16K bytes, depending on the page size of the database server and other variables. The default value is off row.
constraint – introduces the name of an integrity constraint. If Component Integration Services is enabled, you cannot use constraint for remote servers.
constraint_name – is the name of the constraint. It must conform to the rules for identifiers and be unique in the database. If you do not specify the name for a table-level constraint, Adaptive Server generates a name in the form of tabname_colname_objectid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objectid is the object ID number for the constraint. If you do not specify the name for a unique or primary key constraint, Adaptive Server generates a name in the format tabname_colname_tabindid, where tabindid is a string concatenation of the table ID and index ID.
Constraints do not apply to the data that already exists in the table at the time the constraint is added.
unique – constrains the values in the indicated column or columns so that no two rows can have the same non-null value. This constraint creates a unique index that can be dropped only if the constraint is dropped. You cannot use this option along with the null option described above.
primary key – constrains the values in the indicated column or columns so that no two rows can have the same value and so that the value cannot be NULL. This constraint creates a unique index that can be dropped only if the constraint is dropped.
clustered | nonclustered – specifies that the index created by a unique or primary key constraint is a clustered or nonclustered index. clustered is the default (unless a clustered index already exists for the table) for primary key constraints; nonclustered is the default for unique constraints. There can be only one clustered index per table. See create index for more information.
asc | desc – specifies whether the index is to be created in ascending (asc) or descending (desc) order. The default is ascending order.
fillfactor = pct – specifies how full to make each page when Adaptive Server creates a new index on existing data. “pct” stands for percentage. The fillfactor percentage is relevant only when the index is created. As data changes, pages are not maintained at any particular level of fullness.
WARNING! Creating a clustered index with a fillfactor affects the amount of storage space your data occupies, since Adaptive Server redistributes the data as it creates the clustered index.
The default for fillfactor is 0; this is used when you do not include with fillfactor in the create index statement (unless the value has been changed with sp_configure). When specifying a fillfactor, use a value between 1 and 100.
A fillfactor of 0 creates clustered indexes with completely full pages and nonclustered indexes with completely full leaf pages. It leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes. There is seldom a reason to change the fillfactor.
If the fillfactor is set to 100, Adaptive Server creates both clustered and nonclustered indexes with each page 100 percent full. A fillfactor of 100 makes sense only for read-only tables—tables to which no data will ever be added.
fillfactor values smaller than 100 (except 0, which is a special case) cause Adaptive Server to create new indexes with pages that are not completely full. A fillfactor of 10 might be a reasonable choice if you are creating an index on a table that will eventually hold a great deal more data, but small fillfactor values cause each index (or index and data) to take more storage space.
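For example, a sketch of leaving each index page about 20 percent empty when adding a unique constraint (the table, column, and constraint names here are illustrative only):

alter table orders add constraint ord_uniq unique (ord_num) with fillfactor = 80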
transfer table – alters a table’s eligibility for incremental transfer. The default value is to make no change, whether the table is marked for transfer or not. If the alter table command specifies set transfer table, and the selection of on or off differs from the current value, the table’s eligibility is changed.
max_rows_per_page = num_rows – limits the number of rows on data pages and the leaf-level pages of indexes. Unlike fillfactor, the max_rows_per_page value is maintained until it is changed with sp_chgattribute.
If you do not specify a value for max_rows_per_page, Adaptive Server uses a value of 0 when creating the index. When specifying max_rows_per_page for data pages, use a value between 0 and 256. The maximum number of rows per page for nonclustered indexes depends on the size of the index key; Adaptive Server returns an error message if the specified value is too high.
For indexes created by constraints, a max_rows_per_page setting of 0 creates clustered indexes with full pages and nonclustered indexes with full leaf pages. A setting of 0 leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes.
If max_rows_per_page is set to 1, Adaptive Server creates both clustered and nonclustered leaf index pages with one row per page at the leaf level. You can use this to reduce lock contention on frequently accessed data.
Low max_rows_per_page values cause Adaptive Server to create new indexes with pages that are not completely full, use more storage space, and may cause more page splits.
WARNING! Creating a clustered index with max_rows_per_page can affect the amount of storage space your data occupies, since Adaptive Server redistributes the data as it creates the clustered index.
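For example, a sketch of adding a primary key constraint whose index allows only one row per leaf page to reduce lock contention (the table, column, and constraint names are illustrative only):

alter table accounts add constraint acct_pk primary key (acct_num) with max_rows_per_page = 1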
reservepagegap = num_pages – specifies a ratio of filled pages to empty pages to be left during extent I/O allocation operations for the index created by the constraint. For each specified num_pages, an empty page is left for future expansion of the table. Valid values are 0 – 255. The default value, 0, leaves no empty pages.
on segment_name – specifies the segment on which the index exists or is to be placed. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your System Administrator or use sp_helpsegment for a list of the segment names available in your database.
If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.
When using on segment_name for partitions, it specifies the segment on which to place the partition.
references – specifies a column list for a referential integrity constraint. You can specify only one column value for a column constraint. By including this constraint with a table that references another table, any data inserted into the referencing table must already exist in the referenced table.
To use this constraint, you must have references permission on the referenced table. The specified columns in the referenced table must be constrained by a unique index (created by either a unique constraint or a create index statement). If no columns are specified, there must be a primary key constraint on the appropriate columns in the referenced table. Also, the datatypes of the referencing table columns must exactly match the datatype of the referenced table columns.
If Component Integration Services is enabled, you cannot use references for remote servers.
foreign key – specifies that the listed columns are foreign keys in this table whose matching primary keys are the columns listed in the references clause.
ref_table – is the name of the table that contains the referenced columns. You can reference tables in another database. Constraints can reference up to 192 user tables and internally generated worktables. Use the system procedure sp_helpconstraint to check a table’s referential constraints.
ref_column – is the name of the column or columns in the referenced table.
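For example, assuming the pubs2 sample tables salesdetail and titles, a sketch of adding a foreign-key constraint on title_id (the constraint name is illustrative only):

alter table salesdetail add constraint sd_titles_fk foreign key (title_id) references titles (title_id)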
match full – specifies that if all values in the referencing columns of a referencing row are:
Null – the referential integrity condition is true.
Non-null values – if there is a referenced row where each corresponding column is equal in the referenced table, then the referential integrity condition is true.
If neither of these cases applies, the referential integrity condition is false when:
All values are non-null and not equal, or
Some of the values in the referencing columns of a referencing row are non-null values, while others are null.
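For example, a sketch of a two-column foreign key declared with match full (the table and column names are illustrative only); under these rules a referencing row (NULL, NULL) satisfies the condition, but a partially null row such as (1, NULL) does not:

alter table orders add foreign key (cust_id, branch_id) references customers (cust_id, branch_id) match full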
check – specifies a search_condition constraint that Adaptive Server enforces for all the rows in the table. If Component Integration Services is enabled, you cannot use check for remote servers.
search_condition – is a Boolean expression that defines the check constraint on the column values. These constraints can include:
A list of constant expressions introduced with in
A set of conditions, which may contain wildcard characters, introduced with like
An expression can include arithmetic operations and Transact-SQL functions. The search_condition cannot contain subqueries, aggregate functions, parameters, or host variables.
next_column – includes additional column definitions (separated by commas) using the same syntax described for a column definition.
set dml_logging – determines the amount of logging for insert, update, and delete (DML) operations. One of:
full – Adaptive Server logs all transactions.
minimal – Adaptive Server does not log row or page changes.
default – logging is set to the table default.
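For example, a sketch of turning off DML logging for a staging table and later restoring the table default (the table name is illustrative only):

alter table stage_tab set dml_logging = minimal
alter table stage_tab set dml_logging = default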
drop – specifies the name of a column or constraint to drop from the table. If Component Integration Services is enabled, you cannot use drop for remote servers.
modify – specifies the name of the column whose datatype or nullability you are changing.
replace – specifies the column whose default value you want to change with the new value specified by a following default clause. If Component Integration Services is enabled, you cannot use replace for remote servers.
{enable | disable} trigger – enables or disables a trigger. For more information, see the System Administration Guide.
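For example, assuming the deltitle trigger defined on the titles table in the pubs2 sample database, a sketch of disabling and then re-enabling it:

alter table titles disable trigger deltitle
alter table titles enable trigger deltitle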
lock allpages | datarows | datapages – changes the locking scheme to be used for the table.
with exp_row_size = num_bytes – specifies the expected row size. You can apply this parameter only:
To datarows and datapages locking schemes.
To tables with variable-length rows.
When alter table performs a data copy, such as with alter table add or modify. You cannot use with exp_row_size=num_bytes with alter table lock change operations.
Valid values are 0, 1, and any value between the minimum and maximum row length for the table. The default value is 0, which means a server-wide setting is applied.
partition number_of_partitions – adds (number_of_partitions – 1) empty partitions to an unpartitioned table (a round-robin partitioned table with a single partition). Thus, the total number of partitions for the table becomes number_of_partitions. Even if Component Integration Services (CIS) is enabled, you cannot use partition for remote servers.
unpartition – changes a round-robin partitioned table without indexes to an unpartitioned table. Even if CIS is enabled, you cannot use unpartition for remote servers.
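For example, a sketch of slicing a round-robin table into four partitions and then returning it to a single partition (the table name is illustrative only):

alter table sales partition 4
alter table sales unpartition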
partition by range – specifies that records are to be partitioned according to values in the partitioning column or columns. Each partitioning column value is compared with sets of user-supplied upper and lower bounds to determine partition assignment.
column_name – when used in the partition_clause, specifies a partition key column. A partition key column cannot be an encrypted column.
partition_name – specifies the name of a new partition on which table records are to be stored. Partition names must be unique within the set of partitions on a table or index. Partition names can be delimited identifiers if set quoted_identifier is on. Otherwise, they must be valid identifiers.
If partition_name is omitted, Adaptive Server creates a name in the form table_name_partition_id. Adaptive Server truncates partition names that exceed the allowed maximum length.
values <= constant | MAX – specifies the inclusive upper bound of values for a named partition. Specifying a constant value for the highest partition bound imposes an implicit integrity constraint on the table. The keyword MAX specifies the maximum value in a given datatype.
on segment_name – when used in the partition_clause, specifies the segment on which the partition is to be placed. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your System Administrator or use sp_helpsegment for a list of the segment names available in your database.
partition by hash – specifies that records are to be partitioned by a system-supplied hash function. The function computes a hash value over the partition keys to determine the partition to which each record is assigned.
partition by list – specifies that records are to be partitioned according to literal values specified in the named column. The partition key contains only one column. You can list up to 250 constants as the partition values for each list partition.
partition by roundrobin – specifies that records are to be partitioned in a sequential manner. A round-robin partitioned table has no partitioning key. Neither the user nor the optimizer knows in which partition a particular record resides.
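For example, sketches of hash and list partitioning (the table, column, segment, and partition names are illustrative only):

alter table mysalesdetail partition by hash (ord_num) 4 on (seg1, seg2, seg3, seg4)
alter table customers partition by list (state) (p_west values ('CA', 'OR'), p_east values ('NY', 'MA'))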
add partition – applies only to range- or list-partitioned tables:
For range-partitioned tables – adds one or more partitions to the upper end of a range partitioned table.
For list-partitioned tables – adds one or more partitions with a new set of values.
adds or drops a computed column, following the same rules defined for the create table command and the alter table add clause.
computed_column_expression – is any valid T-SQL expression that does not contain columns from other tables, local variables, aggregate functions, or subqueries. It can be one or a combination of column name, constant, function, global variable, or case expression, connected by one or more operators. You cannot cross-reference between computed columns except when virtual computed columns reference materialized computed columns. An encrypted column cannot be referenced in a computed_column_expression.
materialized | not materialized – reserved keywords in the modify clause that specify whether the computed column is materialized, or physically stored in the table. By default, a computed column is not materialized (that is, not physically stored in the table). You can also use this piece of syntax to change the definitions of existing virtual computed columns; that is, to materialize them.
drop partition – drops one or more list or range partitions. You cannot use alter table to drop a hash or round-robin partition.
For each partition you drop, Adaptive Server:
Deletes all data on the partition.
Deletes the partition definition from the system catalog.
Drops all corresponding local index partitions that refer to this data partition.
Regenerates the partition condition object of the base table and each local index.
Deletes all statistics information on this partition.
Rebuilds all global indexes.
If you attempt to drop a partition from a table that is referenced by another table, and the partition to be dropped and the referencing table are not empty, the command fails because of possible violations with the foreign-key constraint, and Adaptive Server displays error message 13971.
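For example, a sketch of dropping a single list or range partition (the table and partition names are illustrative only):

alter table customers drop partition p_west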
Adds a column to a table. For each existing row in the table, Adaptive Server assigns a NULL column value:
alter table publishers add manager_name varchar (40) null
Adds an IDENTITY column to a table. For each existing row in the table, Adaptive Server assigns a unique, sequential column value. The IDENTITY column can be of type numeric or integer, with a scale of zero. The precision determines the maximum value (10^5 – 1, or 99,999) that can be inserted into the column:
alter table sales_daily add ord_num numeric (5,0) identity
Adds a primary key constraint to the authors table. If there is an existing primary key or unique constraint on the table, you must drop the existing constraint first (see next example):
alter table authors add constraint au_identification primary key (au_id, au_lname, au_fname)
Drops the au_identification constraint:
alter table authors drop constraint au_identification
Creates an index on authors; the index has a reservepagegap value of 16, leaving 1 empty page in the index for each 15 allocated pages:
alter table authors add constraint au_identification primary key (au_id, au_lname, au_fname) with reservepagegap = 16
Removes the default constraint on the phone column in the authors table. If the column allows NULL values, NULL is inserted if no column value is specified. If the column does not allow NULL values, an insert that does not specify a column value fails:
alter table authors replace phone default null
Modifies the emp table to encrypt the ssn column and specifies decrypt default:
alter table emp modify ssn encrypt with key1 decrypt_default '000-00-0000'
Decrypts credit card data that is no longer sensitive:
alter table stolen_ccards modify ccard decrypt
If ccard was encrypted by a key protected by a user-defined password, precede this command with the set encryption key command.
Adds an encrypted column to an existing table. Because keyname is omitted, Adaptive Server looks for the database default encryption key:
alter table sales_mgr add bonus money null encrypt
Sets the password for the ssn_key encryption key and encrypts the ssn column in the existing employee table:
set encryption passwd '4evermore' for key ssn_key
alter table employee modify ssn encrypt with ssn_key
If ssn in the above example were an existing encrypted column encrypted by ‘key1’, the alter table would cause Adaptive Server to decrypt ssn using ‘key1’ and re-encrypt ssn using ‘ssn_key’.
Adds a decrypt default to the salary column, which is already encrypted:
alter table employee replace salary decrypt_default $0.00
Removes the decrypt default for salary without removing the encryption property:
alter table employee replace salary drop decrypt_default
Changes an unpartitioned table to a range-partitioned table with three partitions, each of which is on a different segment:
alter table titles partition by range (total_sales)
(smallsales values <= (500) on seg1,
mediumsales values <= (5000) on seg2,
bigsales values <= (25000) on seg3)
Adds another range partition to the titles table:
alter table titles add partition (vbigsales values <= (40000) on seg4)
Changes the locking scheme for the titles table to datarows locking:
alter table titles lock datarows
Adds the not-null column author_type to the authors table with a default of primary_author:
alter table authors add author_type varchar (20) default "primary_author" not null
Drops the advance, notes, and contract columns from the titles table:
alter table titles drop advance, notes, contract
Modifies the city column of the authors table to be a varchar(30) with a default of NULL:
alter table authors modify city varchar (30) null
Modifies the stor_name column of the stores table to be NOT NULL. Its datatype, varchar(40), remains unchanged:
alter table stores modify stor_name not null
Modifies the type column of the titles table and changes the locking scheme of the titles table from allpages to datarows:
alter table titles modify type varchar (10) lock datarows
Modifies the notes column of the titles table from varchar(200) to varchar(150), changes the default value from NULL to NOT NULL, and specifies an exp_row_size of 40:
alter table titles modify notes varchar (150) not null with exp_row_size = 40
Adds the incremental transfer attribute to mytable:
alter table mytable set transfer table on
Removes the incremental transfer attribute from mytable:
alter table mytable set transfer table off
Adds, modifies, and drops a column, and then adds another column in one query. Alters the locking scheme and specifies the exp_row_size of the new column:
alter table titles
add author_type varchar (30) null
modify city varchar (30)
drop notes
add sec_advance money default 1000 not null
lock datarows
with exp_row_size = 40
Add a virtual computed column:
alter table authors add fullname compute au_fname + ' ' + au_lname
Changes a virtual computed column to a materialized computed column:
alter table authors modify fullname materialized
Before you add, modify, or drop columns on a table, run the sp_depends stored procedure to see if there are any stored procedures that depend on the table you are changing. If such stored procedures exist, Sybase recommends that you drop, then re-create the stored procedures as necessary after changing table schema.
If stored procedures using select * reference a table that has been altered, no new columns appear in the result set, even if you use the with recompile option. You must drop the procedure and re-create it to include these new columns. Otherwise, an insert...select statement such as insert into table1 select * from table2 in the procedure can produce wrong results after the tables have been altered and new columns added to them.
When the table owner uses alter table, Adaptive Server disables access rules during the execution of the command and enables them upon completion of the command. The access rules are disabled to avoid filtering of the table data during alter table.
If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.
alter table...transfer table involves a data copy (similar to adding or removing a column) and is a very expensive command in terms of performance.
alter table performs error checking for check constraints before it alters the table.
When using on segment_name for partitions, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your System Administrator or use sp_helpsegment for a list of the segment names available in your database.
WARNING! Do not alter the system tables.
You cannot add a column of datatype bit to an existing table unless you specify a default value; this default value must be 0 or 1.
The maximum number of columns in a table is:
1024 for fixed-length columns in both all-pages-locked (APL) and data-only-locked (DOL) tables
254 for variable-length columns in an APL table
1024 for variable-length columns in a DOL table
alter table raises an error if the number of variable-length columns in an APL table exceeds 254.
All compiled objects should be dropped and re-created after changing a table’s locking scheme.
The maximum length for in-row Java columns is determined by the maximum size of a variable-length column for the table’s schema, locking style, and page size.
When converting a table to a different locking scheme, the data in the source table cannot violate the limits of the target table. For example, if you attempt to convert a DOL table with more than 254 variable-length columns to an APL table, alter table fails because an APL table is restricted to having no more than 254 variable-length columns.
Columns with fixed-length data (for example char, binary, and so on) have the maximum sizes shown in Table 1-2:

Locking scheme | Page size | Maximum row length | Maximum column length
---|---|---|---
APL tables | 2K (2048 bytes) | 1962 | 1960 bytes
 | 4K (4096 bytes) | 4010 | 4008 bytes
 | 8K (8192 bytes) | 8106 | 8104 bytes
 | 16K (16384 bytes) | 16298 | 16296 bytes
DOL tables | 2K (2048 bytes) | 1964 | 1958 bytes
 | 4K (4096 bytes) | 4012 | 4006 bytes
 | 8K (8192 bytes) | 8108 | 8102 bytes
 | 16K (16384 bytes) | 16300 | 16294 bytes – if table does not include any variable-length columns
 | 16K (16384 bytes) | 16300 (subject to a max start offset of varlen = 8191) | 8191 – 6 – 2 = 8183 bytes – if table includes at least one variable-length column.*

* This size includes six bytes for the row overhead and two bytes for the row length field.
The maximum number of bytes of variable-length data per row depends on the locking scheme for the table. The following table describes the maximum column sizes for an APL table:

Page size | Maximum row length | Maximum column length
---|---|---
2K (2048 bytes) | 1960 | 1960
4K (4096 bytes) | 4008 | 4008
8K (8192 bytes) | 8104 | 8157
16K (16384 bytes) | 16296 | 16227
The following table shows the maximum sizes for variable-length columns in a DOL table:

| Page size | Maximum row length | Maximum column length |
|---|---|---|
| 2K (2048 bytes) | 1960 | 1958 |
| 4K (4096 bytes) | 4008 | 4006 |
| 8K (8192 bytes) | 8157 | 8102 |
| 16K (16384 bytes) | 16294 | 16294 |
You cannot use alter table to add a declarative or check constraint and then insert data into the table in the same batch or procedure. Either separate the alter and insert statements into two different batches or procedures, or use execute to perform the actions separately.
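For example, using a hypothetical table t1 (the table and column names here are illustrative only), the go terminator places the alter and insert statements in separate batches:

```sql
-- batch 1: add the check constraint
alter table t1 add constraint t1_qty_ck check (qty >= 0)
go
-- batch 2: insert only after the constraint exists
insert t1 (id, qty) values (1, 5)
go
```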
You cannot use a variable in an alter table statement that includes a default. For example, the following is not allowed:

```sql
declare @a int
select @a = 2
alter table t2 add c3 int default @a
```

Doing so results in error message 154: “Variable is not allowed in default.”
SQL user-defined functions are not currently supported with create proxy table, create table at remote server, or alter table.
The execution of SQL functions requires the syntax username.functionname().
When used to add or modify an encrypted column, alter table may take a significant amount of time if the table contains a large number of rows.
Modifying a column for encryption can cause the row size of the table to increase.
You cannot use alter table to encrypt or decrypt a column belonging to a clustered or placement index. To encrypt or decrypt such a column:
Drop the index.
Alter the column.
Re-create the index.
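As a sketch of this sequence (assuming a hypothetical table customers with a clustered index cust_ix on column ssn and an existing encryption key ssn_key; all names are illustrative):

```sql
-- 1. drop the clustered index
drop index customers.cust_ix
go
-- 2. encrypt the column
alter table customers modify ssn encrypt with ssn_key
go
-- 3. re-create the index
create clustered index cust_ix on customers (ssn)
go
```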
You cannot use alter table to encrypt or decrypt a column if the table has a trigger defined. To modify the column:
Drop the trigger.
Alter the column.
Re-create the trigger.
Modifying the datatype of an encrypted column that belongs to a clustered or placement index puts the index out of order, and alter table displays an error. To modify the datatype:
Drop the index.
Modify the datatype.
Re-create the index.
You can encrypt these data types:
int, smallint, tinyint
unsigned int, unsigned smallint, unsigned tinyint
bigint, unsigned bigint
decimal and numeric
float4 and float8
money, smallmoney
date, time, smalldatetime, datetime, bigdatetime
char and varchar
unichar, univarchar
binary and varbinary
bit
The underlying data type of encrypted data on disk is varbinary. Null values are not encrypted.
alter table reports an error if you:
Change a computed column to an encrypted column or change an encrypted column to a computed column
Enable a column for encryption where the column is referenced in an expression used by a computed column
Change a computed column to reference an encrypted column.
Encrypt a column that is a member of a functional index
Specify an encrypted column as a partition key
Encrypt a column that is already used as a partition key
Referential integrity between encrypted columns is supported when the columns are encrypted with the same key. For details, refer to Encrypted Columns Guide, chapter “Encrypting Data.”
For information about a table and its columns, use sp_help.
To rename a table, execute sp_rename (do not rename the system tables).
For information about integrity constraints (unique, primary key, references, and check) or the default clause, see create table in this chapter.
Use the asc and desc keywords after index column names to specify the sort order for the index. Creating indexes so that columns are in the same order specified in the order by clause of queries eliminates the sorting step during query processing. For more information, see Chapter 13, “Indexing for Performance” in the Performance and Tuning Guide: Basics.
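For example (using a hypothetical orders table), an index created through a unique constraint can mix sort orders so it matches a query’s order by clause:

```sql
-- index keys sorted to match: order by ord_date desc, ord_id asc
alter table orders add constraint ord_uq
    unique nonclustered (ord_date desc, ord_id asc)
```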
When you create a cross-database constraint, Adaptive Server stores the following information in the sysreferences table of each database:

| Information stored in sysreferences | Columns with information about the referenced table | Columns with information about the referencing table |
|---|---|---|
| Key column IDs | refkey1 through refkey16 | fokey1 through fokey16 |
| Table ID | reftabid | tableid |
| Database ID | pmrydbid | frgndbid |
| Database name | pmrydbname | frgndbname |
When you drop a referencing table or its database, Adaptive Server removes the foreign-key information from the referenced database.
Because the referencing table depends on information from the referenced table, Adaptive Server does not allow you to:
Drop the referenced table,
Drop the external database that contains the referenced table, or
Rename either database with sp_renamedb.
You must first remove the cross-database constraint with alter table.
Each time you add or remove a cross-database constraint, or drop a table that contains a cross-database constraint, dump both of the affected databases.
WARNING! Loading earlier dumps of these databases could cause database corruption.
The sysreferences system table stores the name and the ID number of the external database. Adaptive Server cannot guarantee referential integrity if you use load database to change the database name or to load it onto a different server.
WARNING! Before dumping a database to load it with a different name or move it to another Adaptive Server, use alter table to drop all external referential integrity constraints.
You can create column defaults in two ways: by declaring the default as a column constraint in the create table or alter table statement, or by creating the default using the create default statement and binding it to a column using sp_bindefault.
You cannot replace a user-defined default bound to the column with sp_bindefault. Unbind the default with sp_unbindefault first.
If you declare a default column value with create table or alter table, you cannot bind a default to that column with sp_bindefault. Drop the default by altering it to NULL, then bind the user-defined default. Changing the default to NULL unbinds the default and deletes it from the sysobjects table.
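A sketch of this sequence, assuming a hypothetical table t1 with a declared default on column c1 and a user-defined default named my_dflt (both names illustrative):

```sql
-- change the declared default to null, which unbinds and deletes it
alter table t1 replace c1 default null
go
-- now bind the user-defined default
sp_bindefault my_dflt, "t1.c1"
go
```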
The space management properties fillfactor, max_rows_per_page, and reservepagegap in the alter table statement apply to indexes that are created for primary key or unique constraints. The space management properties affect the data pages of the table if the constraint creates a clustered index on an allpages-locked table.
Use sp_chgattribute to change max_rows_per_page or reservepagegap for a table or an index, to change the exp_row_size value for a table, or to store fillfactor values.
Space management properties for indexes are applied:
When indexes are re-created as a result of an alter table command that changes the locking scheme for a table from allpages locking to data-only locking or vice versa. See “Changing locking schemes” for more information.
When indexes are automatically rebuilt as part of a reorg rebuild command.
To see the space management properties currently in effect for a table, use sp_help. To see the space management properties currently in effect for an index, use sp_helpindex.
The space management properties fillfactor, max_rows_per_page, and reservepagegap help manage space usage for tables and indexes in the following ways:
fillfactor leaves extra space on pages when indexes are created, but the fillfactor is not maintained over time. It applies to all locking schemes.
max_rows_per_page limits the number of rows on a data or index page. Its main use is to improve concurrency in allpages-locked tables.
reservepagegap specifies the ratio of empty pages to full pages to apply for commands that perform extent allocation. It applies to all locking schemes.
You can store space management properties for tables and indexes so that they are applied during alter table and reorg rebuild commands.
The following table shows the valid combinations of space management properties and locking schemes. If an alter table command changes the table so that the combination is not compatible, the values stored in system tables remain there, but are not applied during operations on the table. If the locking scheme for a table changes so that the properties become valid, they are then used.

| Parameter | allpages | datapages | datarows |
|---|---|---|---|
| max_rows_per_page | Yes | No | No |
| reservepagegap | Yes | Yes | Yes |
| fillfactor | Yes | Yes | Yes |
| exp_row_size | No | Yes | Yes |
The following table shows the default values and the effects of using the default values for the space management properties.

| Parameter | Default | Effect of using the default |
|---|---|---|
| max_rows_per_page | 0 | Fits as many rows as possible on the page, up to a maximum of 256 |
| reservepagegap | 0 | Leaves no gaps |
| fillfactor | 0 | Fully packs leaf pages |
If a table has max_rows_per_page set, and the table is converted from allpages locking to data-only locking, the value is converted to an exp_row_size value before the alter table...lock command copies the table to its new location. The exp_row_size is enforced during the copy. The following table shows how the values are converted.
| If max_rows_per_page is set to | Set exp_row_size to |
|---|---|
| 0 | Percentage value set by default exp_row_size percent |
| 255 | 1, that is, fully packed pages |
| 1–254 | The smaller of: |
Commands that use large amounts of space allocate new space by allocating an extent rather than allocating single pages. The reservepagegap keyword causes these commands to leave empty pages so that future page allocations take place close to the page that is being split or to the page from which a row is being forwarded.
The reservepagegap value for a table is stored in sysindexes, and is applied when the locking scheme for a table is changed from allpages locking to data-only locking or vice versa. To change the stored value, use sp_chgattribute before running alter table.
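For example (using a hypothetical titles table; the value 16 is illustrative), you might store a new reservepagegap value and then change the locking scheme so that it is applied:

```sql
-- store the property first...
sp_chgattribute titles, "reservepagegap", 16
go
-- ...then it is applied when the locking scheme changes
alter table titles lock datapages
go
```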
reservepagegap specified with the clustered keyword on an allpages-locked table overwrites any value previously specified with create table or alter table.
You can partition an unpartitioned table or repartition an already partitioned table using the partition by clause. This task requires a data copy; all data rows are redistributed according to the specified partition criteria. The task may run in parallel if Adaptive Server is configured for parallel processing. You must set the select into/bulkcopy/pllsort option to true. If the table has indexes, you must drop them before you can change:
An unpartitioned table into a semantic-partitioned table.
The partitioning type.
The partitioning key – You need not drop indexes to change other attributes of the partitions, such as number of partitions, partition bounds, or partition location; the indexes are built automatically. See create table for more information on partition key and bound restrictions.
You can use the add partition clause to add empty partitions to list- or range-partitioned tables, but not to hash or round-robin partitioned tables.
For range-partitioned tables, you can add new partitions only to the high end of the partition conditions. If the last existing partition has the maximum boundary (values <= (MAX)), you cannot add new partitions.
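For example, assuming a hypothetical sales table range-partitioned with an upper bound of 300 in its last partition (names and bounds are illustrative), a partition can be added at the high end:

```sql
-- new partition must have a higher bound than all existing ones
alter table sales add partition
    (p4 values <= (400))
```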
The partition number_of_partition and unpartition clauses are provided for compatibility with versions of Adaptive Server earlier than 15.0. You can use the partition number_of_partition clause only on unpartitioned tables, to add (number_of_partition - 1) empty round-robin partitions; existing data is placed on the first partition, and subsequent data is distributed among all partitions. If the table has a global clustered index, Adaptive Server places subsequent data rows in the first partition. To redistribute the data, drop and re-create the index.
These commands do not require data movement. However, because Adaptive Server performs a number of internal steps, the commands, especially when executed on large tables, do not occur instantly. To avoid data corruption, do not interrupt the operation while you partition or unpartition the table.
You can use the unpartition clause only on round-robin-partitioned tables without indexes.
You cannot partition system tables.
You cannot partition remote proxy tables.
You cannot issue the partition-related alter table commands within a user-defined transaction.
You cannot change a table’s partitioning properties using the partition by clause if there are active open cursors on the table.
After using the partition by clause, you must perform a full database dump before you can use dump transaction.
You cannot drop a column that is part of a partitioning key.
Alter key columns with care. In some cases, modifying the datatype of a key column might redistribute data among partitions. See the Transact-SQL User’s Guide for details.
Changing a table’s partitioning properties increments the schema count, which causes existing stored procedures that access this table to recompile the next time they are executed.
When you add a new computed column without specifying nullability and the materialization property, the default option is nullable and not materialized.
When you add a new materialized computed column, the computed_column_expression is evaluated for each existing row in the table, and the result is stored in the table.
You cannot add new computed columns and add or modify their base columns at the same time.
You can modify the entire definition of an existing computed column. This is a quick way to drop the computed column and add a new one with the same name. Such a column behaves like a new computed column: its defaults are not materialized and nullable, if you do not specify these options.
You can modify the materialization property of an existing computed column without changing its other properties, such as the expression that defines it or its nullability.
When you modify a not-null, materialized computed column into a virtual column, you must specify “null” in the modify clause.
When you modify a computed column that is not materialized, to materialize it, the computed_column_expression is evaluated for each existing row in the table, and the result is stored in the table.
If you modify existing columns that are index keys, the index is rebuilt.
You cannot modify a materialized computed column into a virtual column if it has been used as an index key; you must first drop the index.
You cannot modify a regular column to become a computed column, or a computed column to become a regular column.
You cannot modify or drop the base column referenced by a computed column.
You cannot drop a computed column if it is used as an index key.
When adding a numeric or integer IDENTITY column to a table, make sure the column precision is large enough to accommodate the number of existing rows. If the number of rows exceeds 10^precision - 1, Adaptive Server prints an error message and does not add the column.
When adding an IDENTITY column to a table, Adaptive Server:
Locks the table until all the IDENTITY column values have been generated. If a table contains a large number of rows, this process may be time-consuming.
Assigns each existing row a unique, sequential IDENTITY column value, beginning with the value 1.
Logs each insert operation into the table. Use dump transaction to clear the database’s transaction log before adding an IDENTITY column to a table with a large number of rows.
Each time you insert a row into the table, Adaptive Server generates an IDENTITY column value that is one higher than the last value. This value takes precedence over any defaults declared for the column in the alter table statement or bound to it with sp_bindefault.
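For example (using a hypothetical orders table), adding an IDENTITY column whose precision covers the existing row count:

```sql
-- precision 10 accommodates up to 10^10 - 1 existing rows
alter table orders
    add row_id numeric(10,0) identity
```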
The add, drop, modify, and lock sub-clauses are useful for changing an existing table’s schema. A single statement can contain any number of these sub-clauses, in any order, as long as the same column name is not referenced more than once in the statement.
If stored procedures using select * reference a table that has been altered, no new columns appear in the result set, even if you use the with recompile option. You must drop the procedure and re-create it to include these new columns.
To ensure that triggers fire properly, you must drop and re-create all triggers on an altered table after you perform an add, drop, modify, or lock operation.
Adaptive Server issues an error message if you use alter table to add a not null column without specifying a default.
You cannot drop all the columns in a table. Also, you cannot drop the last remaining column from a table (for example, if you drop four columns from a five-column table, you cannot then drop the remaining column). To remove a table from the database, use drop table.
Data copy is required:
To drop a column
To add a NOT NULL column
For most alter table ... modify commands
Use set noexec on and showplan on options to determine if a data copy is required for a particular alter table command.
You can specify a change in the locking scheme for the modified table with other alter table commands (add, drop, or modify) when the other alter table command requires a data copy.
If alter table performs a data copy, select into/bulkcopy/pllsort must be turned on in the database that includes the table whose schema you are changing.
The modified table retains the existing space management properties (max_rows_per_page, fillfactor, and so on) and indexes of the table.
alter table that requires a data copy does not fire any triggers.
You can use alter table to change the schema of remote proxy tables created and maintained by Component Integration Services (CIS). For information about CIS, see the Component Integration Services User’s Guide.
You cannot perform a data copy and add a table level or referential integrity constraint in the same statement.
You cannot perform a data copy and create a clustered index in the same statement.
If you add a not null column, you must also specify a default clause. This rule has one exception: if you add a user-defined type column, and the type has a default bound to it, you need not specify a default clause.
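For example (using a hypothetical titles table; names and values are illustrative):

```sql
-- the default supplies a value for every existing row
alter table titles
    add pub_country varchar(30) default "unknown" not null
```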
You can always add, drop, or modify a column in all-pages locked tables. However, there are restrictions for adding, dropping, or modifying a column in a data-only locked table, which are described in the following table:
| Type of index | All-pages locked, partitioned table | All-pages locked, unpartitioned table | Data-only locked, partitioned table | Data-only locked, unpartitioned table |
|---|---|---|---|---|
| Clustered | Yes | Yes | No | Yes |
| Nonclustered | Yes | Yes | Yes | Yes |
If you need to add, drop, or modify a column in a data-only-locked table partitioned with a clustered index, you can:
Drop the clustered index.
Alter the data-only locked table.
Re-create the clustered index.
You cannot add a NOT NULL Java object as a column. All Java columns have a default value of NULL, and are stored as either varbinary strings or as image datatypes.
You cannot modify a partitioned table that contains a Java column if the modification requires a data copy. Instead, first unpartition the table, execute alter table, then repartition the table.
You cannot drop the key column from an index or a referential integrity constraint. To drop a key column, first drop the index or referential integrity constraint, then drop the key column. See the Transact-SQL User’s Guide for more information.
You can drop columns that have defaults or rules bound to them. Any column-specific defaults are also dropped when you drop the column. You cannot drop columns that have check constraints or referential constraints bound to them. Instead, first drop the check constraint or referential constraint, then drop the column. Use sp_helpconstraint to identify any constraints on a table, and use sp_depends to identify any column-level dependencies.
You cannot drop a column from a system table. Also, you cannot drop columns from user tables that are created and used by Sybase-provided tools and stored procedures.
You can generally modify the datatype of an existing column to any other datatype if the table is empty. If the table is not empty, you can modify the datatype to any datatype that is explicitly convertible to the original datatype.
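For example (using a hypothetical authors table), widening a fixed-length column to a compatible variable-length type:

```sql
alter table authors modify postalcode varchar(10) null
```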
You can:
Add a new IDENTITY column.
Drop an existing IDENTITY column.
Modify the size of an existing IDENTITY column.
See the Transact-SQL User’s Guide for more information.
Altering the schema of a table increments the schema count, causing existing stored procedures that access this table to be renormalized the next time they are executed. Changes in datatype-dependent stored procedures or views may fail with datatype normalization type errors. You must update these dependent objects so they refer to the modified schema of the table.
You cannot run alter table from inside a transaction.
Altering a table’s schema can invalidate backups that you made using bcp. These backups may use a table schema that is no longer compatible with the table’s current schema.
You can add NOT NULL columns with check constraints; however, Adaptive Server does not validate the constraint against existing data.
You cannot change the locking scheme of a table using the alter table...add, drop, or modify commands if the table has a clustered index and the operation requires a data copy. Instead, you can:
Drop the clustered index.
Alter the table’s schema.
Re-create the clustered index.
You cannot alter a table’s schema if there are any active open cursors on the table.
You can only add text or image columns that accept null values.
To add a text or image column so it contains only non-null values, first add a column that only accepts null values and then update it to the non-null values.
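A sketch of this two-step approach, assuming a hypothetical titles table (the column name and backfill value are illustrative):

```sql
-- step 1: add the column as nullable
alter table titles add blurb text null
go
-- step 2: backfill with non-null values
update titles set blurb = "none"
go
```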
You can modify a column from text datatype only to the following datatypes:
[n]char
[n]varchar
unichar
univarchar
nchar
nvarchar
You can modify a column from image datatype only to binary or varbinary.
You cannot add a new text or image column and then drop an existing text or image column in the same statement.
You cannot modify a column to either text or image datatype.
The following restrictions apply when you use alter table to modify unitext columns:
You can add a new unitext column that accepts NULL values.
You can modify a column from unitext only to the following datatypes:
[n]char
[n]varchar
unichar
univarchar
binary
varbinary
You cannot modify a column to the unitext datatype.
You cannot add a unitext column and drop an existing unitext column in the same statement.
alter table supports changing from any locking scheme to any other locking scheme. You can change:
From allpages to datapages or vice versa
From allpages to datarows or vice versa
From datapages to datarows or vice versa
Before you change from allpages locking to a data-only locking scheme, or vice versa, use sp_dboption to set the database option select into/bulkcopy/pllsort to true, then run checkpoint in the database if any of the tables are partitioned and the sorts for the indexes require a parallel sort.
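A sketch of the full sequence, assuming a hypothetical database pubs2 and table titles:

```sql
use master
go
-- enable the required database option
sp_dboption pubs2, "select into/bulkcopy/pllsort", true
go
use pubs2
go
checkpoint
go
-- change the locking scheme (copies data pages, rebuilds indexes)
alter table titles lock datarows
go
```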
After changing the locking scheme from allpages-locking to data-only locking or vice versa, the use of the dump transaction command to back up the transaction log is prohibited; you must first perform a full database dump.
When you use alter table...lock to change the locking scheme for a table from allpages locking to data-only locking or vice versa, Adaptive Server makes a copy of the table’s data pages. There must be enough room on the segment where the table resides for a complete copy of the data pages. There must be space on the segment where the indexes reside to rebuild the indexes.
Clustered indexes for data-only-locked tables have a leaf level above the data pages. If you are altering a table with a clustered index from allpages-locking to data-only-locking, the resulting clustered index requires more space. The additional space required depends on the size of the index keys.
Use sp_spaceused to determine how much space is currently occupied by the table, and use sp_helpsegment to see the space available to store the table.
When you change the locking scheme for a table from allpages locking to datapages locking or vice versa, the space management properties are applied to the tables, as the data rows are copied, and to the indexes, as they are re-created. When you change from one data-only locking scheme to another, the data pages are not copied, and the space management properties are not applied.
If a table is partitioned, changing the locking scheme performs a partition-to-partition copy of the rows. It does not balance the data on the partitions during the copy.
When you change the locking scheme for a table, the alter table...lock command acquires an exclusive lock on the table until the command completes.
When you use alter table...lock to change from datapages locking to datarows locking, the command does not copy data pages or rebuild indexes. It only updates system tables.
Changing the locking scheme while other users are active on the system may have the following effects on user activity:
Query plans in the procedure cache that access the table are recompiled the next time they are run.
Active multistatement procedures that use the table are recompiled before continuing with the next step.
Ad hoc batch transactions that use the table are terminated.
WARNING! Changing the locking scheme for a table while a bulk copy operation is active can cause table corruption. Bulk copy operates by first obtaining information about the table and does not hold a lock between the time it reads the table information and the time it starts sending rows, leaving a small window of time for an alter table...lock command to start.
If Java is enabled in the database, you can add Java-SQL columns to a table. For more information, see Java in Adaptive Server Enterprise.
The declared class (datatype) of the new Java-SQL column must implement either the Serializable or Externalizable interface.
When you add a Java-SQL column to a table, the Java-SQL column cannot be specified:
As a foreign key
In a references clause
As having the UNIQUE property
As the primary key
If in row is specified, the value stored cannot exceed 16K bytes, depending on the page size of the data server.
If off row is specified, then the column cannot be:
Referenced in a check constraint
Referenced in a select that specifies distinct
Specified in a comparison operator, in a predicate, or in a group by clause
A referential integrity constraint cannot reference a column on a local temporary database except from a table on the same local temporary database. alter table fails when it attempts to create a reference to a column on a local temporary database from a table in another database.
You cannot encrypt a column with an encryption key stored in a local temporary database unless the column’s table resides on the same local temporary database. alter table fails if it attempts to encrypt a column with an encryption key on the local temporary database and the table is in another database.
ANSI SQL – Compliance level: Transact-SQL extension.
See Chapter 1, “System and User-Defined Datatypes” in Reference Manual: Building Blocks for datatype compliance information.
alter table permission defaults to the table owner; it cannot be transferred except to the database owner, who can impersonate the table owner by running the setuser command. A system administrator can also alter user tables.
Values in event and extrainfo columns of sysaudits are:

| Event | Audit option | Command or access audited | Information in extrainfo |
|---|---|---|---|
| 3 | alter | alter table | |
Commands create index, create table, dbcc, drop database, dump transaction, insert, setuser
System procedures sp_chgattribute, sp_help, sp_helpartition, sp_rename