Adds new columns to a table; drops or modifies existing columns; adds, changes, or drops constraints; changes properties of an existing table; enables or disables triggers on a table; and changes the compression level of a table.
Supports adding, dropping, and modifying computed columns, and enables the materialized property, nullability, or definition of an existing computed column to be changed.
Partitions and repartitions a table with specified partition strategy, adds partitions to a table with existing partitions, and splits or merges existing partitions.
alter table [[database.][owner].]table_name
    {add column_name datatype
        [default {constant_expression | user | null}]
        {identity | null | not null [not materialized]}
        [off row | in row]
        [[constraint constraint_name]
            {{unique | primary key}
                [clustered | nonclustered] [asc | desc]
                [with {fillfactor = pct,
                        max_rows_per_page = num_rows,
                        reservepagegap = num_pages,
                        immediate_allocation}]
                [on segment_name]
            | references [[database.]owner.]ref_table
                [(ref_column)] [match full]
            | check (search_condition)]
        [encrypt [with [[database.][owner].]keyname]
            [decrypt_default {constant_expression | null}]]
        [compressed = compression_level | not compressed]
        [, next_column]...
    | add [constraint constraint_name]
        {unique | primary key} [clustered | nonclustered]
        (column_name [asc | desc] [, column_name [asc | desc]...])
        [with {fillfactor = pct,
                max_rows_per_page = num_rows,
                reservepagegap = num_pages}]
        [on segment_name]
    | foreign key (column_name [{, column_name}...])
        references [[database.]owner.]ref_table
        [(ref_column [{, ref_column}...])] [match full]
    | add lob-colname {text | image | unitext} [null]
        [in row [(length)]]
    | check (search_condition)}
    | set dml_logging = {full | minimal | default}
        [, compression = {none | page | row}]
        [, lob_compression = off | compression_level]
    | drop {column_name [, column_name]...
        | constraint constraint_name}
    | modify column_name [datatype [null | not null]]
        [encrypt [with keyname] [decrypt_default [value]]
            | decrypt]
        [[not] compressed]
        [compressed = compression_level | not compressed]
    | modify lob-column [in row (length)]
        [, next_column]...
    | replace column_name
        {default {constant_expression | user | null}
        | decrypt_default {constant_expression | null}
        | drop decrypt_default}
    | lock {allpages | datarows | datapages}
    | with exp_row_size=num_bytes
    | transfer table [on | off]
    | no datacopy
    | partition number_of_partitions
    | unpartition
    | partition_clause
    | add_partition_clause
alter table syntax for partitions:
partition_clause::=
    partition by range (column_name[, column_name]...)
        ([partition_name] values <= ({constant | MAX}
                [, {constant | MAX}]...)
            [on segment_name] [compression_clause] [on segment_name]
        [, [partition_name] values <= ({constant | MAX}
                [, {constant | MAX}]...) [on segment_name]]...)
    | partition by hash (column_name[, column_name]...)
        {(partition_name [on segment_name]
            [, partition_name [on segment_name]]...)
            [compression_clause] [on segment_name]
        | number_of_partitions
            [on (segment_name[, segment_name]...)]}
    | partition by list (column_name)
        ([partition_name] values (constant[, constant]...)
            [on segment_name] [compression_clause] [on segment_name]
        [, [partition_name] values (constant[, constant]...)
            [on segment_name]]...)
    | partition by roundrobin
        {(partition_name [on segment_name]
            [, partition_name [on segment_name]]...)
            [compression_clause] [on segment_name]
        | number_of_partitions
            [on (segment_name[, segment_name]...)]}

add_partition_clause::=
    add partition
        {([partition_name] values <= ({constant | MAX}
                [, {constant | MAX}]...)
            [on segment_name] [compression_clause] [on segment_name]
        [, [partition_name] values <= ({constant | MAX}
                [, {constant | MAX}]...) [on segment_name]]...)
        | modify partition {partition_name [, partition_name...]}
            set compression [= {default | none | row | page}]
        | ([partition_name] values (constant[, constant]...)
            [on segment_name]
        [, [partition_name] values (constant[, constant]...)
            [on segment_name]]...)}
alter table syntax for computed columns:
alter table table_name
    add column_name {compute | as} computed_column_expression...
        [materialized | not materialized]
    drop column_name
    modify column_name
        {null | not null
        | {materialized | not materialized} [null | not null]
        | {compute | as} computed_column_expression
            [materialized | not materialized] [null | not null]}
alter table syntax for dropping, splitting, merging, and moving partitions:
alter table table_name
    drop partition partition_name [, partition_name]...
    split partition partition_name
    merge partition {partition_name [{, partition_name}...]}
        into destination_partition_name [on segment_name]
    move partition partition_name [{, partition_name}...]
        to destination_segment_name
is the name of the table to change. Specify the database name if the table is in another database, and specify the owner’s name if more than one table of that name exists in the database. The default value for owner is the current user, and the default value for database is the current database.
specifies the name of the column or constraint to add to the table. If CIS is enabled, you cannot use add for remote servers.
is the name of a column in that table. If Java is enabled in the database, the column can be a Java-SQL column.
is any system datatype except bit, or any user-defined datatype except those based on bit.
If Java is enabled in the database, datatype can be the name of a Java class installed in the database, either a system class or a user-defined class. See Java in Adaptive Server Enterprise.
specifies a default value for a column. If you specify a default and the user does not provide a value for this column when inserting data, Adaptive Server inserts this value. The default can be a constant_expression, user (to insert the name of the user who is inserting the data), or null (to insert the null value).
Adaptive Server generates a name for the default in the form of tabname_colname_objid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objid is the object ID number for the default. Setting the default to null drops the default.
If CIS is enabled, you cannot use default for remote servers.
is a constant expression to use as a default value for a column. It cannot include global variables, the name of any columns, or other database objects, but can include built-in functions. This default value must be compatible with the datatype of the column.
specifies that Adaptive Server should insert the user name as the default if the user does not supply a value. The datatype of the column must be char(30), varchar(30), or a type that Adaptive Server implicitly converts to char; however, if the datatype is not char(30) or varchar(30), truncation may occur.
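For illustration, a minimal sketch that records the inserting user's name (the table and column names here are hypothetical, not from the sample databases):

-- "orders" and "entered_by" are hypothetical names used only for this sketch
alter table orders add entered_by varchar(30) default user null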
specifies the Adaptive Server behavior during data insertion if no default exists.
null specifies that a column is added that allows nulls. Adaptive Server assigns a null value during inserts if a user does not provide a value.
The properties of a bit-type column must always be not null.
not null specifies that a column is added that does not allow nulls. Users must provide a non-null value during inserts if no default exists.
If you do not specify null or not null, Adaptive Server uses not null by default. However, you can switch this default using sp_dboption to make the default compatible with the SQL standards. If you specify (or imply) not null for the newly added column, a default clause is required. The default value is used for all existing rows of the newly added column, and applies to future inserts as well.
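As a hedged sketch of this rule, the following adds a not-null column to a hypothetical orders table; the default both fills the column for all existing rows and applies to future inserts:

-- "orders" and "status" are hypothetical names
alter table orders add status char(1) default 'A' not null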
indicates whether you are creating a materialized or nonmaterialized column.
specifies an encrypted column and the key used to encrypt it.
keyname identifies a key created using create encryption key. The table owner must have select permission on keyname. If keyname is not supplied, the server looks for a default key created using create encryption key or alter encryption key.
See “Encrypting Data” in the User Guide for Encrypted Columns for a list of supported datatypes.
specifies that this column returns a default value for users who do not have decrypt permission, and constant_expression is the value Adaptive Server returns on select statements instead of the decrypted value. The value can be NULL on nullable columns only. If the decrypt_default value cannot be converted to the column’s datatype, Adaptive Server catches the conversion error only when the query executes.
decrypts the encrypted column.
indicates if the data in the row is compressed and to what level.
Level of compression. The compression levels are:
0 – the row is not compressed.
1 through 9 – Adaptive Server uses ZLib compression. Generally, the higher the compression number, the more Adaptive Server compresses the LOB data, and the greater the ratio between compressed and uncompressed data (that is, the greater the amount of space saved, in bytes, for the compressed data versus the size of the uncompressed data).
However, the amount of compression depends on the LOB content, and the higher the compression level, the more CPU-intensive the process. That is, level 9 provides the highest compression ratio but also the heaviest CPU usage.
100 – Adaptive Server uses FastLZ compression. This level provides the compression ratio with the least CPU usage; it is generally used for shorter data.
101 – Adaptive Server uses FastLZ compression. A value of 101 uses slightly more CPU than a value of 100, but achieves a better compression ratio.
The compression algorithm ignores rows that do not use LOB data.
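For example, a minimal sketch that applies ZLib level 9 compression to the LOB columns of a hypothetical table (assuming the table contains text, image, or unitext columns):

-- "docs" is a hypothetical table containing LOB columns
alter table docs set lob_compression = 9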
indicates that the column has the IDENTITY property. Each table in a database can have one IDENTITY column with a datatype of:
Exact numeric with a scale of 0, or
Any of the integer datatypes, including signed or unsigned bigint, int, smallint, or tinyint.
IDENTITY columns are not updatable and do not allow nulls.
IDENTITY columns store sequential numbers, such as invoice numbers or employee numbers that are automatically generated by Adaptive Server. The value of the IDENTITY column uniquely identifies each row in a table.
specifies whether the Java-SQL column is stored separately from the row, or in storage allocated directly in the row.
The storage for an in row column cannot exceed 16K bytes, depending on the page size of the database server and other variables. The default value is off row.
introduces the name of an integrity constraint. If CIS is enabled, you cannot use constraint for remote servers.
is the name of the constraint, which must conform to the rules for identifiers and be unique in the database. If you do not specify the name for a table-level constraint, Adaptive Server generates a name in the form of tabname_colname_objectid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objectid is the object ID number for the constraint. If you do not specify the name for a unique or primary key constraint, Adaptive Server generates a name in the format tabname_colname_tabindid, where tabindid is a string concatenation of the table ID and index ID.
Constraints do not apply to the data that already exists in the table at the time the constraint is added.
constrains the values in the indicated column or columns so that no two rows can have the same non-null value. This constraint creates a unique index that can be dropped only if the constraint is dropped. You cannot use this option with the null option.
constrains the values in the indicated column or columns so that no two rows can have the same value and so that the value cannot be NULL. This constraint creates a unique index that can be dropped only if the constraint is dropped.
specifies that the index created by a unique or primary key constraint is a clustered or nonclustered index. clustered is the default (unless a clustered index already exists for the table) for primary key constraints; nonclustered is the default for unique constraints. There can be only one clustered index per table. See create index for more information.
specifies whether the index is to be created in ascending (asc) or descending (desc) order. The default is ascending order.
specifies how full to make each page when Adaptive Server creates a new index on existing data. “pct” stands for percentage. The fillfactor percentage is relevant only when the index is created. As data changes, pages are not maintained at any particular level of fullness.
WARNING! Creating a clustered index with a fillfactor affects the amount of storage space your data occupies, since Adaptive Server redistributes the data as it creates the clustered index.
The default for fillfactor is 0; this is used when you do not include with fillfactor in the create index statement (unless the value has been changed with sp_configure). When specifying a fillfactor, use a value between 1 and 100.
A fillfactor of 0 creates clustered indexes with completely full pages and nonclustered indexes with completely full leaf pages. It leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes. There is seldom a reason to change the fillfactor.
If the fillfactor is set to 100, Adaptive Server creates both clustered and nonclustered indexes, with each page 100 percent full. A fillfactor of 100 makes sense only for read-only tables—tables to which no data will ever be added.
fillfactor values smaller than 100 (except 0, which is a special case) cause Adaptive Server to create new indexes with pages that are not completely full. A fillfactor of 10 might be a reasonable choice if you are creating an index on a table that will eventually hold a great deal more data, but small fillfactor values cause each index (or index and data) to take more storage space.
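To illustrate the fillfactor discussion above, a sketch that creates a constraint index with pages left 75 percent full (the table, constraint, and column names are hypothetical, and 75 is an illustrative value):

-- hypothetical names; 75 is an illustrative fillfactor value
alter table orders add constraint ord_uniq unique nonclustered (ord_num) with fillfactor = 75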
alters a table’s eligibility for incremental transfer. The default value is to make no change, whether the table is marked for transfer or not. If the alter table command specifies set transfer table, and the selection of on or off differs from the current value, the table’s eligibility is changed.
limits the number of rows on data pages and the leaf-level pages of indexes. Unlike fillfactor, the max_rows_per_page value is maintained until it is changed with sp_chgattribute.
If you do not specify a value for max_rows_per_page, Adaptive Server uses a value of 0 when creating the index. When specifying max_rows_per_page for data pages, use a value from 0 to 256. The maximum number of rows per page for nonclustered indexes depends on the size of the index key; Adaptive Server returns an error message if the specified value is too high.
For indexes created by constraints, a max_rows_per_page setting of 0 creates clustered indexes with full pages, and nonclustered indexes with full leaf pages. A setting of 0 leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes.
If max_rows_per_page is set to 1, Adaptive Server creates both clustered and nonclustered leaf index pages with one row per page at the leaf level. You can use this to reduce lock contention on frequently accessed data.
Low max_rows_per_page values cause Adaptive Server to create new indexes with pages that are not completely full, use more storage space, and may cause more page splits.
WARNING! Creating a clustered index with max_rows_per_page can affect the amount of storage space your data occupies, since Adaptive Server redistributes the data as it creates the clustered index.
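For instance, a hedged sketch that limits a constraint index to four rows per page to ease lock contention (all names are hypothetical):

-- hypothetical names; 4 is an illustrative value
alter table accounts add constraint acct_uniq unique nonclustered (acct_num) with max_rows_per_page = 4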
specifies a ratio of filled pages to empty pages to be left during extent I/O allocation operations for the index created by the constraint. For each specified num_pages, an empty page is left for future expansion of the table. Valid values are 0 – 255. The default value, 0, leaves no empty pages.
Explicitly creates the table when you have enabled sp_dboption 'deferred table allocation'.
specifies the segment on which the index exists or is to be placed. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.
If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.
For partitions, on segment_name specifies the segment on which to place the partition.
specifies a column list for a referential integrity constraint. You can specify only one column value for a column constraint. By including this constraint with a table that references another table, any data inserted into the referencing table must already exist in the referenced table.
To use this constraint, you must have references permission on the referenced table. The specified columns in the referenced table must be constrained by a unique index (created by either a unique constraint or a create index statement). If no columns are specified, there must be a primary key constraint on the appropriate columns in the referenced table. Also, the datatypes of the referencing table columns must exactly match the datatype of the referenced table columns.
If CIS is enabled, you cannot use references for remote servers.
specifies that the listed columns are foreign keys in this table for which the matching primary keys are the columns listed in the references clause.
is the name of the table that contains the referenced columns. You can reference tables in another database. Constraints can reference as many as 192 user tables and internally generated worktables. Use sp_helpconstraint to check a table’s referential constraints.
is the name of the column or columns in the referenced table.
specifies that if all values in the referencing columns of a referencing row are:
Null – the referential integrity condition is true.
Non-null values – if there is a referenced row where each corresponding column is equal in the referenced table, then the referential integrity condition is true.
If neither of these conditions holds, the referential integrity condition is false when:
All values are non-null and not equal, or
Some of the values in the referencing columns of a referencing row are non-null values, while others are null.
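To illustrate the match full clause described above, a sketch based on the pubs2 sample schema, where salesdetail rows reference the composite key of sales (the constraint name is hypothetical):

-- assumes the pubs2 sample database; "sd_fk" is a hypothetical constraint name
alter table salesdetail add constraint sd_fk foreign key (stor_id, ord_num) references sales (stor_id, ord_num) match full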
specifies a search_condition constraint that Adaptive Server enforces for all the rows in the table. If CIS is enabled, you cannot use check for remote servers.
is a Boolean expression that defines the check constraint on the column values. These constraints can include:
A list of constant expressions introduced with in
A set of conditions, which may contain wildcard characters, introduced with like
An expression can include arithmetic operations and Transact-SQL functions. The search_condition cannot contain subqueries, aggregate functions, parameters, or host variables.
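For example, a sketch of a check constraint patterned on the pubs2 sample database (the constraint name is hypothetical, and the rule mirrors the pub_id format used there):

-- "pub_id_chk" is a hypothetical constraint name
alter table publishers add constraint pub_id_chk check (pub_id in ('1389', '0736', '0877') or pub_id like '99[0-9][0-9]')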
includes additional column definitions (separated by commas) using the same syntax described for a column definition.
determines the amount of logging for insert, update, and delete (DML) operations. One of:
full – Adaptive Server logs all transactions
minimal – Adaptive Server does not log row or page changes
default – logging is set to the table default
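For instance, a minimal sketch that disables row and page logging on a hypothetical staging table:

-- "stage_tab" is a hypothetical table name
alter table stage_tab set dml_logging = minimal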
adds the LOB column with the specified datatype.
specifies the maximum length for the LOB column to remain in-row. If you do not specify length, Adaptive Server applies the database-wide setting in effect for in-row length.
If you do not use in row (length), and the database-wide setting is not in effect, the LOB column is added with off-row storage of the data.
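A sketch combining these clauses: the new text column is stored in-row for values up to 300 bytes (the column name and length are hypothetical; mymsgs is the table used in the examples below):

-- "note" and the length 300 are illustrative
alter table mymsgs add note text null in row (300)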
changes only the property of the LOB column to in-row, up to the specified length. When you run this command, no data moves.
You can also use this option to increase the length of an in-row LOB column.
You cannot use this option to decrease the length of a LOB column, nor can you specify 0 as the length. Depending on the amount of space available on the page, the off-row LOB data is moved in-row up to the specified in-row length during updates that occur after this modification.
indicates the level of compression to be applied to the table or partition. The new compression level applies to newly inserted or updated data:
default – resets the compression level for the specified partitions to the compression level of the table.
none – the data in this table or partition is not compressed. For partitions, none indicates that data in this partition remains uncompressed even if the table compression is altered to row or page compression.
page – when the page fills, existing data rows that are row-compressed are then compressed using page-level compression to create page-level dictionary, index, and character-encoding entries. Set page compression at the partition or table level.
Adaptive Server compresses data at the page level only after it has compressed data at the row level, so setting the compression to page implies both page and row compression.
row – compresses one or more data items in an individual row. Adaptive Server stores data in a row compressed form only if the compressed form saves space compared to an uncompressed form. Set row compression at the partition or table level.
changes the compression level for a table that uses LOB datatypes.
specifies the name of a column or constraint to drop from the table. If CIS is enabled, you cannot use drop for remote servers.
specifies the name of the column for which you are changing the datatype or nullability.
indicates if the modified column is compressed.
specifies the column for which to replace the default value with the new value specified by a following default clause. If CIS is enabled, you cannot use replace for remote servers.
enables or disables a trigger. See the System Administration Guide for information about triggers.
changes the locking scheme to be used for the table.
specifies the expected row size. You can apply this parameter only:
To datarows and datapages locking schemes.
To tables with variable-length rows.
When alter table performs a data copy, such as with alter table add or modify. You cannot use with exp_row_size=num_bytes with alter table lock change operations.
Valid values are 0, 1, and any value between the minimum and maximum row length for the table. The default value is 0, which means a server-wide setting is applied.
indicates that alter table drops columns without performing a data copy, preventing the alter table...drop command from blocking other commands running on the table while the alter table operation occurs.
adds (number_of_partitions –1) empty partitions to an unpartitioned table (round-robin-partitioned table with a single partition). Thus, the total number of partitions for the table becomes number_of_partitions. If Component Integration Services (CIS) is enabled, you cannot use partition for remote servers.
changes a round-robin-partitioned table without indexes, to an unpartitioned table. If CIS is enabled, you cannot use unpartition for remote servers.
specifies that records are to be partitioned according to values in the partitioning column or columns. Each partitioning column value is compared with sets of user-supplied upper and lower bounds to determine partition assignment.
when used in the partition_clause, specifies a partition key column. A partition key column cannot be an encrypted column.
specifies the name of a new partition on which table records are to be stored. Partition names must be unique within the set of partitions on a table or index. Partition names can be delimited identifiers if set quoted_identifier is on. Otherwise, they must be valid identifiers.
If partition_name is omitted, Adaptive Server creates a name in the form table_name_partition_id. Adaptive Server truncates partition names that exceed the allowed maximum length.
specifies the inclusive upper bound of values for a named partition. Specifying a constant value for the highest partition bound imposes an implicit integrity constraint on the table. The keyword MAX specifies the maximum value in a given datatype.
when used in the partition_clause, specifies the segment on which the partition is to be placed. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.
specifies that records are to be partitioned by a system-supplied hash function. The function computes the hash value of the partition keys that specify the partition to which records are assigned.
specifies that records are to be partitioned according to literal values specified in the named column. The partition key contains only one column. You can list as many as 250 constants as the partition values for each list partition.
specifies that records are to be partitioned in a sequential manner. A round-robin-partitioned table has no partitioning key. Neither the user nor the optimizer knows in which partition a particular record resides.
applies only to range- or list-partitioned tables:
For range-partitioned tables – adds one or more partitions to the upper end of a range partitioned table.
For list-partitioned tables – adds one or more partitions with a new set of values.
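For a list-partitioned table, a hedged sketch that adds a partition covering a new set of values (all names and values are hypothetical):

-- hypothetical list-partitioned table, partition name, and values
alter table customers add partition (region4 values ('OH', 'IN'))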
specifies the partitions for which you are modifying the compression level.
adds, drops, or modifies a computed column. Follow the same rules defined for the create table command and for the alter table add clause.
is any valid Transact-SQL expression that does not contain columns from other tables, local variables, aggregate functions, or subqueries. It can be one or a combination of column name, constant, function, global variable, or case expression, connected by one or more operators. You cannot cross-reference between computed columns, except when virtual computed columns reference materialized computed columns. You cannot reference an encrypted column in a computed_column_expression.
specifies whether a computed column is materialized. These reserved keywords in the modify clause specify whether the computed column is materialized, or physically stored in the table. By default, a computed column is not materialized (that is, not physically stored in the table). You can also use this parameter to change the definitions of existing virtual computed columns; that is, to materialize them.
drops one or more list or range partitions. You cannot use alter table to drop a hash or round-robin partition.
For each partition you drop, Adaptive Server:
Deletes all data on the partition
Deletes the partition definition from the system catalog
Drops all corresponding local index partitions that refer to this data partition
Regenerates the partition condition object of the base table and each local index
Deletes all statistics information on this partition
Rebuilds all global indexes
If you attempt to drop a partition from a table that is referenced by another table, and the partition to be dropped and the referencing table are not empty, the command fails because of possible violations with the foreign-key constraint, and Adaptive Server displays error message 13971.
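For example, a minimal sketch that drops a single partition (the table and partition names are hypothetical, and the table is assumed to be list- or range-partitioned):

-- assumes "sales_hist" is range- or list-partitioned with a partition named "q1"
alter table sales_hist drop partition q1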
redistributes partition data to two or more partitions.
indicates conditions that specify how to split the source partition data. Typically, conditions are a numerical range or a data range. The partition conditions should cover all, and only, the data in the source partition.
The destination partitions may be on the same segment as the source partition, or on a new segment. If you do not specify destination partition segments, Adaptive Server creates the new partitions on the segment on which the source partition resides.
combine the data from two or more merge-compatible partitions into a single partition.
a new or existing partition. If destination_partition_name is an existing partition, it cannot be any of the source partitions you are merging. If you do not specify a destination partition name, Adaptive Server picks a system-generated name.
moves a partition (and its index) to a specified segment.
a new or existing segment to which you are moving the partition. You cannot specify “default” as the destination_segment_name.
Adds a column to a table. For each existing row in the table, Adaptive Server assigns a NULL column value:
alter table publishers add manager_name varchar (40) null
Adds an IDENTITY column to a table. For each existing row in the table, Adaptive Server assigns a unique, sequential column value. The IDENTITY column can be of type numeric or integer, and have a scale of zero. The precision determines the maximum value (10^5 – 1, or 99,999) that can be inserted into the column:
alter table sales_daily add ord_num numeric (5,0) identity
Adds a primary key constraint to the authors table. If there is an existing primary key or unique constraint on the table, you must drop the existing constraint first (see next example):
alter table authors add constraint au_identification primary key (au_id, au_lname, au_fname)
Drops the au_identification constraint:
alter table titles drop constraint au_identification
Creates an index on authors; the index has a reservepagegap value of 16, leaving 1 empty page in the index for each 15 allocated pages:
alter table authors add constraint au_identification primary key (au_id, au_lname, au_fname) with reservepagegap = 16
Removes the default constraint on the phone column in the authors table. If the column allows NULL values, NULL is inserted if no column value is specified. If the column does not allow NULL values, an insert that does not specify a column value fails:
alter table authors replace phone default null
Modifies the emp table to encrypt the ssn column and specifies decrypt default:
alter table emp modify ssn encrypt with key1 decrypt_default '000-00-0000'
Decrypts credit card data that is no longer sensitive:
alter table stolen_ccards modify ccard decrypt
If ccard was encrypted by a key protected by a user-defined password, precede this command with the set encryption passwd command.
Adds an encrypted column to an existing table. Because keyname is omitted, Adaptive Server looks for the database default encryption key:
alter table sales_mgr add bonus money null encrypt
Sets the password for the ssn_key encryption key and encrypts the ssn column in the existing employee table:
set encryption passwd '4evermore' for key ssn_key
alter table employee modify ssn encrypt with ssn_key
If ssn in this example is an existing column encrypted with “key1”, the alter table command causes Adaptive Server to decrypt ssn using “key1” and reencrypt it using “ssn_key”.
Adds a decrypt default to the salary column, which is already encrypted:
alter table employee replace salary decrypt_default $0.00
Removes the decrypt default for salary without removing the encryption property:
alter table employee replace salary drop decrypt_default
Changes an unpartitioned table to a range-partitioned table with three partitions, each of which is on a different segment:
alter table titles partition by range (total_sales)
    (smallsales values <= (500) on seg1,
    mediumsales values <= (5000) on seg2,
    bigsales values <= (25000) on seg3)
Adds another range partition to the titles table:
alter table titles add partition (vbigsales values <= (40000) on seg4)
Alters the titles table in the pubs2 database to use row-level compression:
alter table titles set compression = row
Changes the Y2009 partition of the sales table to use page-level compression:
alter table sales modify partition Y2009 set compression = page
Changes the locking scheme for the titles table to datarows locking:
alter table titles lock datarows
Adds the not-null column author_type to the authors table with a default of primary_author:
alter table authors add author_type varchar (20) default "primary_author" not null
Drops the advance, notes, and contract columns from the titles table:
alter table titles drop advance, notes, contract
Modifies the city column of the authors table to be a varchar(30) with a default of NULL:
alter table authors modify city varchar (30) null
Modifies the stor_name column of the stores table to be NOT NULL. Its datatype, varchar(40), remains unchanged:
alter table stores modify stor_name not null
Modifies the type column of the titles table and changes the locking scheme of the titles table from allpages to datarows:
alter table titles modify type varchar (10) lock datarows
Modifies the notes column of the titles table from varchar(200) to varchar(150), changes the default value from NULL to NOT NULL, and specifies an exp_row_size of 40:
alter table titles modify notes varchar (150) not null with exp_row_size = 40
Adds the incremental transfer attribute to mytable:
alter table mytable set transfer table on
Removes the incremental transfer attribute from mytable:
alter table mytable set transfer table off
Adds, modifies, and drops a column, and then adds another column in one query. Alters the locking scheme and specifies the exp_row_size of the new column:
alter table titles
    add author_type varchar (30) null
    modify city varchar (30)
    drop notes
    add sec_advance money default 1000 not null
    lock datarows
    with exp_row_size = 40
Modifies the description column of mymsgs table to support in-row LOB 400 bytes long:
alter table mymsgs modify description in row (400)
Adds a virtual computed column:
alter table authors add fullname compute au_fname + ' ' + au_lname
Changes a virtual computed column to a materialized computed column:
alter table authors modify fullname materialized
Splits partition P2 of the orders table into two partitions:
alter table orders split partition P2 into
    (P5 values <= (25000) on seg2,
    P6 values <= (50000) on seg3)
Merges partitions Q1 through Q4 of the sales table into a single partition:
alter table sales merge partition Q1, Q2, Q3, Q4 into Y2007
Moves partition P2 of the orders table to the seg4 segment:
alter table orders move partition P2 to seg4
Drops the total_sales column from the titles table without performing a data copy:
alter table titles drop total_sales with no datacopy
You cannot use alter table on a segment that includes a virtually hashed table, since a virtually hashed table must take only one exclusive segment, which cannot be shared by other tables or databases.
Before you add, modify, or drop columns on a table, run sp_depends to see if there are any stored procedures that depend on the table you are changing. If such stored procedures exist, drop, then re-create the stored procedures as necessary after changing table schema.
If stored procedures using select * reference an altered table, no new columns appear in the result set, even if you use the with recompile option. You must drop the procedure and re-create it to include these new columns. Otherwise, the wrong results may be caused by insert into table1 select * from table2 in the procedure when the tables have been altered and new columns have been added to the tables.
When the table owner uses alter table, Adaptive Server disables access rules during the execution of the command and enables them upon completion of the command. The access rules are disabled to avoid filtering of the table data during alter table.
If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.
alter table. . . transfer table involves data copy (similar to adding or removing a column): it is a very expensive command in terms of performance.
alter table performs error checking for check constraints before it alters the table.
When using on segment_name for partitions, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.
WARNING! Do not alter the system tables.
You cannot add a column of datatype bit to an existing table unless you specify a default value; this default value must be 0 or 1.
The maximum number of columns in a table is:
1024 for fixed-length columns in both all-pages-locked (APL) and data-only-locked (DOL) tables
254 for variable-length columns in an APL table
1024 for variable-length columns in a DOL table
alter table raises an error if the number of variable-length columns in an APL table exceeds 254.
Drop, then re-create compiled objects after changing a table’s locking scheme.
You cannot use the no datacopy parameter on:
Materialized or virtual computed columns
Encrypted columns
XML columns
Java columns
Columns using timestamp or bit datatypes
The maximum length for in-row Java columns is determined by the maximum size of a variable-length column for the table’s schema, locking style, and page size.
When converting a table to a different locking scheme, the data in the source table cannot violate the limits of the target table. For example, if you attempt to convert a DOL table with more than 254 variable-length columns to an APL table, alter table fails because an APL table is restricted to having no more than 254 variable-length columns.
Columns with fixed-length data (for example, char, binary, and so on) have the maximum sizes shown in Table 1-1:

| Locking scheme | Page size | Maximum row length | Maximum column length |
|---|---|---|---|
| APL tables | 2KB (2048 bytes) | 1962 | 1960 bytes |
| | 4KB (4096 bytes) | 4010 | 4008 bytes |
| | 8KB (8192 bytes) | 8106 | 8104 bytes |
| | 16KB (16384 bytes) | 16298 | 16296 bytes |
| DOL tables | 2KB (2048 bytes) | 1964 | 1958 bytes |
| | 4KB (4096 bytes) | 4012 | 4006 bytes |
| | 8KB (8192 bytes) | 8108 | 8102 bytes |
| | 16KB (16384 bytes) | 16300 | 16294 bytes – if the table does not include any variable-length columns |
| | 16KB (16384 bytes) | 16300 (subject to a max start offset of varlen = 8191) | 8191 – 6 – 2 = 8183 bytes – if the table includes at least one variable-length column* |

* This size includes 6 bytes for the row overhead and 2 bytes for the row-length field.
The maximum number of bytes of variable-length data per row depends on the locking scheme for the table. The following table describes the maximum size of columns for an APL table:

| Page size | Maximum row length | Maximum column length |
|---|---|---|
| 2KB (2048 bytes) | 1960 | 1960 |
| 4KB (4096 bytes) | 4008 | 4008 |
| 8KB (8192 bytes) | 8104 | 8157 |
| 16KB (16384 bytes) | 16296 | 16227 |
The following table describes the maximum size of columns for a DOL table:

| Page size | Maximum row length | Maximum column length |
|---|---|---|
| 2KB (2048 bytes) | 1960 | 1958 |
| 4KB (4096 bytes) | 4008 | 4006 |
| 8KB (8192 bytes) | 8157 | 8102 |
| 16KB (16384 bytes) | 16294 | 16294 |
You cannot use alter table to add a declarative or check constraint and then insert data into the table in the same batch or procedure. Either separate the alter and insert statements into two different batches or procedures, or use execute to perform the actions separately.
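A sketch of the batch-separation workaround, using isql’s go terminator so the alter and insert run as separate batches (the table, column names, and values are hypothetical):

alter table orders add check (qty > 0)
go
insert orders (ord_num, qty) values (1001, 5)
go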
You cannot use a variable in alter table statements that include defaults, as in the following example:
declare @a int
select @a = 2
alter table t2 add c3 int default @a
Doing so results in error message 154: “Variable is not allowed in default.”
SQL user-defined functions are not currently supported with create proxy table, create table at remote server, or alter table.
The execution of SQL functions requires the syntax username.functionname().
When used to add or modify an encrypted column, alter table may take a significant amount of time if the table contains a large number of rows.
Modifying a column for encryption may cause the row size of the table to increase.
You cannot use alter table to encrypt or decrypt a column:
If the column belongs to a clustered or placement index. To encrypt or decrypt such a column, drop the index, alter the column, and re-create the index.
If the table has a trigger defined. Drop the trigger before you modify the column. Afterwards, re-create the trigger.
If you modify the type of an encrypted column belonging to a clustered or placement index, the index is out of order, and alter table displays an error. Drop the index before modifying the type. Afterwards, re-create the index.
You can encrypt these datatypes:
int, smallint, tinyint
unsigned int, unsigned smallint, unsigned tinyint
bigint, unsigned bigint
decimal and numeric
float4 and float8
money, smallmoney
date, time, smalldatetime, datetime, bigdatetime
char and varchar
unichar, univarchar
binary and varbinary
bit
The underlying datatype of encrypted data on disk is varbinary. Null values are not encrypted.
alter table reports an error if you:
Change a computed column to an encrypted column, or change an encrypted column to a computed column
Enable a column for encryption where the column is referenced in an expression used by a computed column
Change a computed column to reference an encrypted column.
Encrypt a column that is a member of a functional index
Specify an encrypted column as a partition key
Enable a column for encryption that is already used as a partition key
Referential integrity between encrypted columns is supported when the columns are encrypted with the same key. For details, see “Encrypting Data” in the Encrypted Columns Users Guide.
Use set compression to change the compression level of the table for future data inserts or updates. set compression does not affect existing data rows and pages that are already compressed, but does require exclusive access to the table.
You cannot change a partition’s compression level in the same command in which you are altering a table’s compression level. You must perform these operations as independent commands.
You may use set compression with other set parameters.
Changing the table’s compression level affects only those partitions that do not already have an explicitly defined compression level. All partitions without an explicitly defined compression level implicitly inherit the table’s compression level. For example, if you modify a table’s compression level from uncompressed to row-level compression, all partitions that had a compression level of none do not change, but partitions for which their compression level was undefined are changed to row-level compressed.
Altering a table’s compression level does not change the compression level for existing columns. For example, if my_table and its columns are uncompressed, when you alter the compression level of my_table, its columns initially remain uncompressed. However, Adaptive Server compresses these columns individually when they fill with enough data to trigger the compression mechanism.
The default behavior for newly added columns depends on the table’s compression setting. For compressed tables, the column’s datatype determines its compression level. For uncompressed tables, new columns are uncompressed.
You may add compressed materialized computed columns to a table or compress them later.
When a command requires data movement, Adaptive Server compresses any uncompressed data rows in the source partitions if the target partition is compressed. When you include a compression clause, alter table includes these interactions between the parameters:
set – you:
Cannot combine set with add, drop, or modify clauses.
Cannot combine the modify partition set clause with other modify column_name parameters.
Cannot use the all keyword with modify partition and include partition names in the clause.
add:
You may add nullable and non-nullable compressed columns to existing tables. Adding non-nullable columns requires data movement.
Columns added to a compressed table use row compression if they are configured for an appropriate datatype.
Modifying a column’s datatype to one that is eligible for row compression in a compressed table does not change the column’s compression level.
If you do not specify not compressed for a new column, it inherits the table’s compression level. For example, if the table is row-level compressed, any column you add to the table uses row-level compression as well.
drop:
Dropping a compressed column causes data movement.
If the other columns in a table or partition are not compressed, or cannot be compressed, you must change the compression state to none before dropping the last compressed column.
modify:
You can modify a compressed column in an existing table to not compressed and vice versa.
You can change a column’s datatype simultaneously with its compression level. However, the target datatype must be eligible for row compression. If the target datatype is not eligible for compression, Adaptive Server ignores the request to compress it during the modify operation.
Adaptive Server issues an error if you attempt to modify the compression level of a column that you cannot create as a compressed column. For example, Adaptive Server issues an error message if you attempt to compress a virtual computed column or an in-row Java column.
Combining add, drop, and modify:
You can issue multiple add, drop, or modify parameters that include one or more compressed columns in a single alter table statement. The data movement for this alter table command is dictated by the data movement restrictions for the parameters.
If alter table requires data movement, the column’s compression level remains unchanged for the columns not affected by the add, drop, or modify parameters.
Repartitioning a table – if you do not specify the compression clause for new partitions, Adaptive Server sets the compression level for the new partitions as:
Uncompressed if the source table and all of its partitions are uncompressed.
The same compression level as the source table, if all of its partitions are compressed with the same compression level.
Uncompressed if:
The table or the individual partitions are compressed differently, or
The source table is not compressed, but some of its partitions are
Adaptive Server does not compress the new partitions because it may be difficult to uniquely map and migrate the compression attribute from the original compressed partitions to the new partitions. You must explicitly state the compression level as part of the alter table command to specify which target partitions must be compressed during the data copy.
add partition – newly added partitions inherit the table’s compression level if you do not specify a partition level in the compression clause.
drop partition – dropping a table’s only compressed partition does not change the table’s compression level if the table includes multiple partitions.
If a table has been defined for compression, it remains compressed after the partition is dropped, and Adaptive Server automatically configures future partitions for compression.
Changing the locking scheme – Adaptive Server requires data movement if you change the locking scheme of a table from allpages-locked to data-only-locked, or vice-versa. You cannot simultaneously change the compression level of the table or individual partitions when you change the locking scheme. Instead, you must run the set compression command to specify the compression level before you change the locking scheme.
Unpartitioning a table – if at least one source partition is compressed when you unpartition a table, the entire table is marked as compressed when you run alter table. Adaptive Server issues a warning message if the table was initially uncompressed.
Other commands – you cannot combine parameters that specify the default value of a column, enable or disable triggers, add or drop column-level or table-level constraints, and so on, with commands that specify column-level or partition-level compression or copy data.
If alter table does not include data movement, existing data is not affected, and Adaptive Server applies the table’s compression level to data once it is inserted. If alter table includes data movement, existing data is compressed or decompressed according to the table’s compression level.
These alter table events include data movement:
Adding non-null columns
Dropping columns
Modifying a column to increase its length (for example, from a smallint to an int, or from char(3) to varchar(45))
These alter table events do not include data movement:
Adding a null column
Adding a null text column (adding a non-null text column has restrictions)
Modifying a variable-length column to increase its length (for example, from varchar(5) to varchar(20) or from varchar(20) to varchar(40))
You cannot compress proxy tables or partitions or columns on proxy tables.
Changing the table’s large object (LOB) compression level affects only the LOB columns that do not have an explicitly defined compression level. Columns without an explicitly defined compression level implicitly inherit the table’s compression level.
The default behavior for newly added LOB columns for which you have not specified a LOB compression clause depends on the table’s LOB compression level. For LOB compressed tables, Adaptive Server uses the table’s LOB compression level for the columns. For LOB uncompressed tables, newly added LOB columns remain uncompressed.
Interactions between compression and other alter table parameters for tables with LOB data:
drop column – if the table includes no compressed LOB columns after dropping columns, the table uses the table-level LOB column compression level.
add column
You can add a nullable compressed LOB column, but you cannot add a non-nullable compressed LOB column.
For a table not set to LOB compression, by default, newly added LOB columns are not compressed. Newly added LOB columns with LOB compression subclauses can be compressed or not compressed as specified.
modify column
You can uncompress an existing compressed LOB column. Although newly inserted data is uncompressed, existing data remains compressed.
You can change the compression level of an existing LOB column. Although newly inserted data assumes the new compression level, existing data retains the original compression level.
You can change an uncompressed LOB column to compressed.
You cannot modify a regular column to a LOB column (compressed or uncompressed).
You can modify a compressed LOB column to:
Compressed text columns using nchar, nvarchar, unichar, and univarchar
Compressed image columns using varbinary and binary
Compressed unitext columns using nchar, nvarchar, unichar, univarchar, varbinary, and binary
Compressed off-row Java columns cannot be modified to regular columns.
Adaptive Server decompresses the LOB data, truncating the data if necessary to fit the regular column length, and converting it to the regular datatype. The maximum length of the regular column is governed by the Adaptive Server page size.
Combinations of add, drop, modify, and set lob_compression:
You can issue multiple add, drop, or modify subcommands in a single alter table command—or set lob_compression and set compression subclauses—that involves one or more compressed columns.
If you add a column to a LOB-compressed table and include set lob_compression = 0 in the command, the newly added column is not compressed.
If you add a column to a regular, uncompressed table, and include set lob_compression = compression_level in the command, the newly added column is compressed.
Existing LOB data is not affected by alter table commands; only future DMLs are affected by the changed LOB compression attributes. Use update and select into to compress or uncompress existing LOB data.
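One possible approach, sketched here under the assumption that an in-place update rewrites each LOB value with the column’s current compression attribute (mymsgs and description are from the earlier in-row LOB example):

-- rewrite existing LOB values so they pick up the current compression setting
update mymsgs set description = description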
For information about a table and its columns, use sp_help.
To rename a table, execute sp_rename (do not rename the system tables).
For information about integrity constraints (unique, primary key, references, and check) or the default clause, see create table in this chapter.
Use the asc and desc keywords after index column names to specify the sort order for the index. Creating indexes so that columns are in the same order specified in the order by clause of queries eliminates the sorting step during query processing. See “Indexing for Performance” in the Performance and Tuning Guide: Basics.
When you create a cross-database constraint, Adaptive Server stores the following information in the sysreferences table of each database:
| Information stored in sysreferences | Columns with information about the referenced table | Columns with information about the referencing table |
|---|---|---|
| Key column IDs | refkey1 through refkey16 | fokey1 through fokey16 |
| Table ID | reftabid | tableid |
| Database ID | pmrydbid | frgndbid |
| Database name | pmrydbname | frgndbname |
When you drop a referencing table or its database, Adaptive Server removes the foreign-key information from the referenced database.
Because the referencing table depends on information from the referenced table, Adaptive Server does not allow you to:
Drop the referenced table,
Drop the external database that contains the referenced table, or
Rename either database with sp_renamedb.
You must first use alter table to remove the cross-database constraint.
Each time you add or remove a cross-database constraint, or drop a table that contains a cross-database constraint, dump both the affected databases.
WARNING! Loading earlier dumps of these databases may cause database corruption.
The sysreferences system table stores the name and the ID number of the external database. Adaptive Server cannot guarantee referential integrity if you use load database to change the database name or to load it onto a different server.
WARNING! Before dumping a database to load it with a different name or move it to another Adaptive Server, use alter table to drop all external referential integrity constraints.
You can create column defaults either by declaring the default as a column constraint in the create table or alter table statement, or by creating the default using the create default statement and binding it to a column using sp_bindefault.
You cannot replace a user-defined default bound to the column with sp_bindefault. First use sp_unbindefault to unbind the default.
If you declare a default column value with create table or alter table, you cannot bind a default to that column with sp_bindefault. Drop the default by altering it to NULL, then bind the user-defined default. Changing the default to NULL unbinds the default and deletes it from the sysobjects table.
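A sketch of that sequence, assuming a user-defined default named phonedflt has already been created with create default (the default name is hypothetical):

alter table authors replace phone default null
go
sp_bindefault phonedflt, "authors.phone"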
The space management properties fillfactor, max_rows_per_page, and reservepagegap in the alter table statement apply to indexes that are created for primary key or unique constraints. The space management properties affect the data pages of the table if the constraint creates a clustered index on an allpages-locked table.
Use sp_chgattribute to change max_rows_per_page or reservepagegap for a table or an index, to change the exp_row_size value for a table, or to store fillfactor values.
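For example, a minimal sketch that stores a new expected row size for the titles table (the value 40 is illustrative):

sp_chgattribute "titles", "exp_row_size", 40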
Space management properties for indexes are applied when indexes are:
Re-created as a result of an alter table command that changes the locking scheme for a table from allpages locking to data-only locking or vice versa. See “Changing locking schemes”.
Automatically rebuilt as part of a reorg rebuild command.
To see the space management properties currently in effect for a table, use sp_help. To see the space management properties currently in effect for an index, use sp_helpindex.
The space management properties fillfactor, max_rows_per_page, and reservepagegap help manage space usage for tables and indexes in the following ways:
fillfactor leaves extra space on pages when indexes are created, but the fillfactor is not maintained over time. It applies to all locking schemes.
max_rows_per_page limits the number of rows on a data or index page. Its main use is to improve concurrency in allpages-locked tables.
reservepagegap specifies the ratio of empty pages to full pages to apply for commands that perform extent allocation. It applies to all locking schemes.
You can store space management properties for tables and indexes so that they are applied during alter table and reorg rebuild commands.
The following table shows the valid combinations of space management properties and locking schemes. If an alter table command changes the table so that the combination is not compatible, the values stored in the system tables remain there, but are not applied during operations on the table. If the locking scheme for a table changes so that the properties become valid, they are then used.
| Parameter | Allpages | Datapages | Datarows |
|---|---|---|---|
| max_rows_per_page | Yes | No | No |
| reservepagegap | Yes | Yes | Yes |
| fillfactor | Yes | Yes | Yes |
| exp_row_size | No | Yes | Yes |
The following table shows the default values and the effects of using the default values for the space management properties.
| Parameter | Default | Effect of using the default |
|---|---|---|
| max_rows_per_page | 0 | Fits as many rows as possible on the page, up to a maximum of 256 |
| reservepagegap | 0 | Leaves no gaps |
| fillfactor | 0 | Fully packs leaf pages |
If a table has max_rows_per_page set, and the table is converted from allpages locking to data-only locking, the value is converted to an exp_row_size value before the alter table...lock command copies the table to its new location. The exp_row_size is enforced during the copy. The following table shows how the values are converted.
| If max_rows_per_page is set to | Set exp_row_size to |
|---|---|
| 0 | Percentage value set by default exp_row_size percent |
| 255 | 1, that is, fully packed pages |
| 1 – 254 | The smaller of the maximum row size and the 2002/max_rows_per_page value |
Commands that use large amounts of space allocate new space by allocating an extent rather than allocating single pages. The reservepagegap keyword causes these commands to leave empty pages so that future page allocations take place close to the page that is being split, or to the page from which a row is being forwarded.
The reservepagegap value for a table is stored in sysindexes, and is applied when the locking scheme for a table is changed from allpages locking to data-only locking, or vice versa. To change the stored value, use sp_chgattribute before running alter table.
reservepagegap specified with the clustered keyword on an allpages-locked table overwrites any value previously specified with create table or alter table.
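As an illustration, a minimal sketch (hypothetical titles table) of storing a reservepagegap value so that it is applied when the locking scheme changes:

```sql
-- Store the gap ratio first; alter table...lock applies it
-- as the data pages are copied
sp_chgattribute "titles", "reservepagegap", 20
go
alter table titles lock datarows
```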
Use the partition by clause to partition an unpartitioned table or to repartition an already partitioned table (see the sketch after this list). This task requires a data copy; all data rows are redistributed according to the specified partition criteria. You can run the task in parallel if Adaptive Server is configured for parallel processing. You must set the select into/bulkcopy/pllsort database option to true. If the table has indexes, you must drop the indexes before you can change:
An unpartitioned table into a semantic-partitioned table.
The partitioning type.
The partitioning key – you need not drop indexes to change other attributes of the partitions, such as number of partitions, partition bounds, or partition location; the indexes are built automatically. See create table for more information on partition key and bound restrictions.
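For example, a hedged sketch of partitioning by range; the pubs2 database, sales table, and order_date column are hypothetical names:

```sql
use master
go
-- partition by performs a data copy, so this option must be on
sp_dboption pubs2, "select into/bulkcopy/pllsort", true
go
use pubs2
go
alter table sales
    partition by range (order_date)
        (q1 values <= ("3/31/2010"),
         q2 values <= ("6/30/2010"))
```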
You can use the add partition clause to add empty partitions to list- or range-partitioned tables, but not to hash or round-robin-partitioned tables.
For range-partitioned tables, you can add new partitions only to the high end of the partition conditions. If the last existing partition has the maximum boundary (values <= (MAX)), you cannot add new partitions.
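Continuing the sketch above, a new partition can be added at the high end because the highest existing bound is not MAX:

```sql
alter table sales
    add partition (q3 values <= ("9/30/2010"))
```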
The partition number_of_partitions and unpartition clauses are provided for compatibility with versions of Adaptive Server earlier than 15.0. You can use partition number_of_partitions only on unpartitioned tables, to add (number_of_partitions - 1) empty round-robin partitions; existing data is placed on the first partition, with subsequent data distributed among all partitions. If the table has a global clustered index, Adaptive Server places subsequent data rows in the first partition. To redistribute the data, drop and re-create the index.
These commands do not require data movement. However, because Adaptive Server performs a number of internal steps, the commands, especially when executed on large tables, do not occur instantly. To avoid data corruption, do not interrupt the operation while you partition or unpartition a table.
You can use the unpartition clause only on round-robin-partitioned tables without indexes.
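For instance, hedged sketches of both compatibility clauses, using a hypothetical sales_history table that has no indexes:

```sql
-- Creates three additional empty round-robin partitions (four total)
alter table sales_history partition 4
go
-- Legal only for round-robin-partitioned tables without indexes
alter table sales_history unpartition
```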
You cannot partition system tables.
You cannot partition remote proxy tables.
You cannot issue the partition-related alter table commands within a user-defined transaction.
You cannot change a table’s partitioning properties using the partition by clause if there are active open cursors on the table.
After using the partition by clause, you must perform a full database dump before you can use dump transaction.
You cannot drop a column that is part of a partitioning key.
Alter key columns with care. In some cases, modifying the datatype of a key column might redistribute data among partitions. See the Transact-SQL Users Guide.
Changing a table’s partitioning properties increments the schema count, which causes existing stored procedures that access this table to recompile the next time they are executed.
When you add a new computed column without specifying nullability and the materialization property, the default option is nullable and not materialized.
When you add a new materialized computed column, the computed_column_expression is evaluated for each existing row in the table, and the result is stored in the table.
You cannot add new computed columns and add or modify their base columns at the same time.
You can modify the entire definition of an existing computed column, which is a quick way to drop the computed column and add a new one with the same name. Such a column behaves like a new computed column: if you do not specify these options, it defaults to not materialized and nullable.
You can modify the materialization property of an existing computed column without changing its other properties, such as the expression that defines it or its nullability.
When you modify a not-null, materialized computed column into a virtual column, you must specify “null” in the modify clause.
When you modify a non-materialized computed column to materialize it, the computed_column_expression is evaluated for each existing row in the table, and the result is stored in the table.
If you modify existing columns that are index keys, the index is rebuilt.
You cannot modify a materialized computed column into a virtual column if it has been used as an index key; you must first drop the index.
You cannot modify a regular column to become a computed column, or a computed column to become a regular column.
You cannot modify or drop the base column referenced by a computed column.
You cannot drop a computed column if it is used as an index key.
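To make these rules concrete, a minimal sketch using a hypothetical orders table with quantity and unit_price columns:

```sql
-- Adds a computed column; with no explicit options it is
-- nullable and not materialized (virtual)
alter table orders
    add total_price compute quantity * unit_price
go
-- Materialize it: the expression is evaluated for every existing
-- row and the results are stored in the table
alter table orders
    modify total_price materialized
```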
When adding a numeric or integer IDENTITY column to a table, make sure the column precision is large enough to accommodate the number of existing rows. If the number of rows exceeds 10^precision - 1, Adaptive Server prints an error message and does not add the column.
When adding an IDENTITY column to a table, Adaptive Server:
Locks the table until all the IDENTITY column values have been generated. If a table contains a large number of rows, this process may be time-consuming.
Assigns each existing row a unique, sequential IDENTITY column value, beginning with 1.
Logs each insert operation into the table. Use dump transaction to clear the database’s transaction log before adding an IDENTITY column to a table with a large number of rows.
Each time you insert a row into the table, Adaptive Server generates an IDENTITY column value that is one higher than the last value. This value takes precedence over any defaults declared for the column in the alter table statement or bound to it with sp_bindefault.
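For example, a hedged sketch (the orders table and pubs2 database name are hypothetical):

```sql
-- Clear the log first; every generated IDENTITY value is logged
dump transaction pubs2 with truncate_only
go
-- Precision 10 accommodates up to 10^10 - 1 existing rows
alter table orders
    add order_id numeric(10,0) identity
```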
The add, drop, modify, and lock subclauses are useful for changing an existing table's schema. A single statement can contain any number of these subclauses, in any order, as long as the same column name is not referenced more than once in the statement.
If stored procedures using select * reference a table that has been altered, no new columns appear in the result set, even if you use the with recompile option. You must drop the procedure and re-create it to include these new columns.
To ensure that triggers fire properly, drop and re-create all triggers on an altered table after you perform an add, drop, modify, or lock operation.
Adaptive Server issues an error message if you use alter table to add a not null column without a default clause.
You cannot drop all the columns in a table; the last remaining column must stay (for example, if you drop four columns from a five-column table, you cannot then drop the remaining column). To remove a table from the database, use drop table.
A data copy is required:
To drop a column
To add a NOT NULL column
For most alter table ... modify commands
Use the set noexec on and set showplan on options to determine whether a data copy is required for a particular alter table command.
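A minimal sketch of that check (hypothetical orders table):

```sql
set showplan on
go
-- With noexec on, the command is compiled and its plan displayed,
-- but not executed
set noexec on
go
alter table orders modify status varchar(10) null
go
set noexec off
```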
You can specify a change in the locking scheme for the modified table with other alter table commands (add, drop, or modify) when the other alter table command requires a data copy.
If alter table performs a data copy, select into/bulkcopy/pllsort must be turned on in the database that includes the table whose schema you are changing.
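For example, assuming a hypothetical database named pubs2:

```sql
use master
go
sp_dboption pubs2, "select into/bulkcopy/pllsort", true
go
use pubs2
go
checkpoint
```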
The modified table retains the existing space management properties (max_rows_per_page, fillfactor, and so on) and indexes of the table.
alter table that requires a data copy does not fire any triggers.
You can use alter table to change the schema of remote proxy tables created and maintained by CIS. See the Component Integration Services Users Guide.
You cannot perform a data copy and add a table level or referential integrity constraint in the same statement.
You cannot perform a data copy and create a clustered index in the same statement.
If you add a not null column, you must also specify a default clause. This rule has one exception: if you add a user-defined type column, and the type has a default bound to it, you need not specify a default clause.
You can always add, drop, or modify a column in allpages-locked tables. However, there are restrictions for adding, dropping, or modifying a column in a data-only-locked table, which are described in the following table:
| Type of index | Allpages-locked, partitioned table | Allpages-locked, unpartitioned table | Data-only-locked, partitioned table | Data-only-locked, unpartitioned table |
|---|---|---|---|---|
| Clustered | Yes | Yes | No | Yes |
| Nonclustered | Yes | Yes | Yes | Yes |
If you need to add, drop, or modify a column in a data-only-locked table partitioned with a clustered index, you can (as sketched after this list):
Drop the clustered index.
Alter the data-only-locked table.
Re-create the clustered index.
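A sketch of that workaround; the sales table, sales_clustidx index, and store_id key column are hypothetical:

```sql
drop index sales.sales_clustidx
go
alter table sales add note varchar(100) null
go
create clustered index sales_clustidx on sales (store_id)
```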
You cannot add a NOT NULL Java object as a column; Java columns always have a default value of NULL, and are stored as either varbinary strings or as image datatypes.
You cannot modify a partitioned table that contains a Java column if the modification requires a data copy. Instead, first unpartition the table, execute alter table, then repartition the table.
You cannot drop the key column from an index or a referential integrity constraint. To drop a key column, first drop the index or referential integrity constraint, then drop the key column. See the Transact-SQL Users Guide.
You can drop columns that have defaults or rules bound to them. Any column-specific defaults are also dropped when you drop the column. You cannot drop columns that have check constraints or referential constraints bound to them. Instead, first drop the check constraint or referential constraint, then drop the column. Use sp_helpconstraint to identify any constraints on a table, and use sp_depends to identify any column-level dependencies.
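For example, a hedged sketch of dropping a column that has a check constraint; the orders table and orders_status_chk constraint name are hypothetical:

```sql
-- Identify the constraints defined on the table
sp_helpconstraint orders
go
-- Drop the check constraint, then the column itself
alter table orders drop constraint orders_status_chk
go
alter table orders drop status
```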
You cannot drop a column from a system table. Also, you cannot drop columns from user tables that are created and used by Sybase-provided tools and stored procedures.
You can generally modify the datatype of an existing column to any other datatype if the table is empty. If the table is not empty, you can modify the datatype only if the existing data is explicitly convertible to the new datatype.
You can do any of the following (sketched after this list):
Add a new IDENTITY column.
Drop an existing IDENTITY column.
Modify the size of an existing IDENTITY column.
See the Transact-SQL Users Guide for more information.
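For instance, minimal sketches of the first two operations, continuing the hypothetical orders table:

```sql
-- Add a new IDENTITY column
alter table orders add line_no numeric(8,0) identity
go
-- Drop an existing IDENTITY column
alter table orders drop order_id
```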
Altering the schema of a table increments the schema count, causing existing stored procedures that access this table to be renormalized the next time they are executed. Changes in datatype-dependent stored procedures or views may fail with datatype normalization errors. Update these dependent objects so they refer to the modified schema of the table.
You cannot run alter table from inside a transaction.
Altering a table’s schema can invalidate backups that you made using bcp. These backups may use a table schema that is no longer compatible with the table’s current schema.
You can add NOT NULL columns with check constraints; however, Adaptive Server does not validate the constraint against existing data.
You cannot change the locking scheme of a table using the alter table...add, drop, or modify commands if the table has a clustered index and the operation requires a data copy. Instead, you can:
Drop the clustered index.
Alter the table’s schema.
Re-create the clustered index.
You cannot alter a table’s schema if there are any active open cursors on the table.
You can only add text or image columns that accept null values.
To add a text or image column so that it contains only non-null values, first add a column that accepts null values, then update it to the non-null values.
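A minimal sketch, with a hypothetical titles table and blurb column:

```sql
alter table titles add blurb text null
go
update titles
    set blurb = "No description available"
```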
You can modify a column from text datatype only to the following datatypes:
[n]char
[n]varchar
unichar
univarchar
nchar
nvarchar
You can modify a column from image datatype only to binary or varbinary.
You cannot add a new text or image column and then drop an existing text or image column in the same statement.
You cannot modify a column to either text or image datatype.
The following restrictions apply when you use alter table to modify unitext columns:
You can add a new unitext column that accepts NULL values.
You can modify a column from unitext only to the following datatypes:
[n]char
[n]varchar
unichar
univarchar
binary
varbinary
You cannot modify a column to the unitext datatype.
You cannot add a unitext column and drop an existing unitext column in the same statement.
alter table supports changing from any locking scheme to any other locking scheme. You can change:
From allpages to datapages or vice versa
From allpages to datarows or vice versa
From datapages to datarows or vice versa
Before you change from allpages locking to a data-only locking scheme, or vice versa, use sp_dboption to set the database option select into/bulkcopy/pllsort to true, then run checkpoint in the database if any of the tables are partitioned and the sorts for the indexes require a parallel sort.
After changing the locking scheme from allpages-locking to data-only locking or vice versa, you cannot use the dump transaction command to back up the transaction log; you must first perform a full database dump.
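For example, assuming select into/bulkcopy/pllsort is already enabled and using a hypothetical dump device path:

```sql
alter table titles lock datarows
go
-- dump transaction is disallowed until after this full dump
dump database pubs2 to "/backups/pubs2_full.dmp"
```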
When you use alter table...lock to change the locking scheme for a table from allpages locking to data-only locking or vice versa, Adaptive Server makes a copy of the table’s data pages. There must be enough room on the segment where the table resides for a complete copy of the data pages. There must be space on the segment where the indexes reside to rebuild the indexes.
Clustered indexes for data-only-locked tables have a leaf level above the data pages. If you are altering a table with a clustered index from allpages-locking to data-only-locking, the resulting clustered index requires more space. The additional space required depends on the size of the index keys.
Use sp_spaceused to determine how much space is currently occupied by the table, and use sp_helpsegment to see the space available to store the table.
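For instance (hypothetical titles table stored on the default segment):

```sql
-- Space currently occupied by the table and its indexes
sp_spaceused titles
go
-- Space available on the segment
sp_helpsegment "default"
```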
When you change the locking scheme for a table from allpages locking to datapages locking or vice versa, the space management properties are applied to the tables, as the data rows are copied, and to the indexes, as they are re-created. When you change from one data-only locking scheme to another, the data pages are not copied, and the space management properties are not applied.
If a table is partitioned, changing the locking scheme performs a partition-to-partition copy of the rows. It does not balance the data on the partitions during the copy.
When you change the locking scheme for a table, the alter table...lock command acquires an exclusive lock on the table until the command completes.
When you use alter table...lock to change from datapages locking to datarows locking, the command does not copy data pages or rebuild indexes. It updates only system tables.
Changing the locking scheme while other users are active on the system may have the following effects on user activity:
Query plans in the procedure cache that access the table are recompiled the next time they are run.
Active multistatement procedures that use the table are recompiled before continuing with the next step.
Ad hoc batch transactions that use the table are terminated.
WARNING! Changing the locking scheme for a table while a bulk-copy operation is active can cause table corruption. Bulk copy operates by first obtaining information about the table and does not hold a lock between the time it reads the table information and the time it starts sending rows, leaving a small window of time for an alter table...lock command to start.
If Java is enabled in the database, you can add Java-SQL columns to a table. See Java in Adaptive Server Enterprise.
The declared class (datatype) of the new Java-SQL column must implement either the Serializable or Externalizable interface.
When you add a Java-SQL column to a table, the Java-SQL column cannot be specified:
As a foreign key
In a references clause
As having the UNIQUE property
As the primary key
If in row is specified, the value stored cannot exceed 16KB, depending on the page size of the data server.
If off row is specified, then the column cannot be:
Referenced in a check constraint
Referenced in a select that specifies distinct
Specified in a comparison operator, in a predicate, or in a group by clause
A referential integrity constraint cannot reference a column on a local temporary database except from a table on the same local temporary database. alter table fails when it attempts to create a reference to a column on a local temporary database from a table in another database.
You cannot encrypt a column with an encryption key stored in a local temporary database unless the column’s table resides on the same local temporary database. alter table fails if it attempts to encrypt a column with an encryption key on the local temporary database and the table is in another database.
ANSI SQL – Compliance level: Transact-SQL extension.
See Chapter 1, “System and User-Defined Datatypes” in Reference Manual: Building Blocks for datatype compliance information.
The permission checks for alter table differ based on your granular permissions settings.
Granular permissions enabled – With granular permissions enabled, you must be the table owner or a user with alter any table privilege. A user with setuser privilege can impersonate the table owner by executing the setuser command.
Granular permissions disabled – With granular permissions disabled, you must be the table owner or a user with sa_role. The database owner can impersonate the table owner by running the setuser command.
Values in event and extrainfo columns of sysaudits are:
| Event | Audit option | Command or access audited | Information in extrainfo |
|---|---|---|---|
| 3 | alter | alter table | |
Commands: create index, create table, dbcc, drop database, dump transaction, insert, setuser
System procedures: sp_chgattribute, sp_help, sp_helpartition, sp_rename