Makes changes to existing tables.
Adds new columns to a table; drops or modifies existing columns; adds, changes, or drops constraints; changes properties of an existing table; enables or disables triggers on a table; and changes the compression level of a table.
Supports adding, dropping, and modifying computed columns, and allows you to change the materialized property, nullability, or definition of an existing computed column.
Partitions and repartitions a table with specified partition strategy, adds partitions to a table with existing partitions, and splits or merges existing partitions.
alter table [[database.][owner].]table_name
    {add column_name datatype}
        [default {constant_expression | user | null}]
        {identity | null | not null [not materialized]}
        [off row | in row]
        [[constraint constraint_name]
            {{unique | primary key}
                [clustered | nonclustered] [asc | desc]
                [with {fillfactor = pct,
                    max_rows_per_page = num_rows,
                    reservepagegap = num_pages
                    immediate_allocation]
                [on segment_name]
            | references [[database.][owner].]ref_table
                [(ref_column)] [match full]
            | check (search_condition)]
        [encrypt [with [[database.][owner].]keyname]
            [decrypt_default {constant_expression | null}]]
        [compressed = compression_level | not compressed]
        [, next_column]...
    | add [constraint constraint_name]
        {unique | primary key}
            [clustered | nonclustered]
            (column_name [asc | desc]
                [, column_name [asc | desc]...])
            [with {fillfactor = pct,
                max_rows_per_page = num_rows,
                reservepagegap = num_pages}]
            [on segment_name]
        | foreign key (column_name [{, column_name}...])
            references [[database.][owner].]ref_table
                [(ref_column [{, ref_column}...])]
            [match full]
        | add lob-colname {text | image | unitext} [null]
            [in row [(length)]]
        | check (search_condition)}
    | set {[dml_logging = {full | minimal | default}]
        | [, compression = {NONE | PAGE | ROW | ALL}]
        | [, index_compression = {NONE | PAGE}]
        | [, "erase residual data" (on | off)]}
        [lob_compression = off | compression_level]
    | drop {column_name [, column_name]...
        | constraint constraint_name}
    | modify column_name [datatype [null | not null]]
        [[[encrypt [with keyname]
            [decrypt_default [value]]
        | decrypt]
        [[not] compressed]
        [compressed = compression_level | not compressed]
    | modify lob-column [in row (length)]
        [, next_column]...
    | replace column_name
        default {constant_expression | user | null}
        | decrypt_default {constant_expression | null}
        | drop decrypt_default
    | lock {allpages | datarows | datapages}}
    | with exp_row_size=num_bytes
    | transfer table [on | off]
    | no datacopy}
    | partition number_of_partitions
    | unpartition
    | partition_clause
    | add_partition_clause
partition_clause::=
    partition by range (column_name[, column_name]...)
        ([partition_name] values <= ({constant | MAX}
            [, {constant | MAX}]...) [on segment_name]
            [compression_clause] [on segment_name]
        [, [partition_name] values <= ({constant | MAX}
            [, {constant | MAX}]...) [on segment_name]]...)
    | partition by hash (column_name[, column_name]...)
        {(partition_name [on segment_name]
            [, partition_name [on segment_name]]...)
            [compression_clause] [on segment_name]
        | number_of_partitions
            [on (segment_name[, segment_name]...)]}
    | partition by list (column_name)
        ([partition_name] values (constant[, constant]...)
            [on segment_name]
            [compression_clause] [on segment_name]
        [, [partition_name] values (constant[, constant]...)
            [on segment_name]]...)
    | partition by roundrobin
        {(partition_name [on segment_name]
            [, partition_name [on segment_name]]...)
            [compression_clause] [on segment_name]
        | number_of_partitions
            [on (segment_name [, segment_name]...)]}

add_partition_clause::=
    add partition
        {([partition_name] values <= ({constant | MAX}
            [, {constant | MAX}]...) [on segment_name]
            [compression_clause] [on segment_name]
        [, [partition_name] values <= ({constant | MAX}
            [, {constant | MAX}]...) [on segment_name]]...)
    | modify partition {partition_name [, partition_name...]}
        set compression [= {default | none | row | page}]
        set index_compression [= {none | page}]
    | ([partition_name] values (constant[, constant]...)
        [on segment_name]
        [, [partition_name] values (constant[, constant]...)
            [on segment_name]]...)}
alter table
    add column_name {compute | as} computed_column_expression...
        [materialized | not materialized]
    drop column_name
    modify column_name
        {null | not null
        | {materialized | not materialized} [null | not null]
        | {compute | as} computed_column_expression
            [materialized | not materialized] [null | not null]}
alter table table_name
    drop partition partition_name [, partition_name]...
    split partition partition_name
    merge partition {partition_name [{, partition_name}...]}
        into destination_partition_name [on segment_name]
    move partition partition_name [{, partition_name}...]
        to destination_segment_name
If Java is enabled in the database, datatype can be the name of a Java class installed in the database, either a system class or a user-defined class. See Java in Adaptive Server Enterprise.
The SAP ASE server generates a name for the default in the form of tabname_colname_objid, where tabname is the first 10 characters of the table name, colname is the first 5 characters of the column name, and objid is the object ID number for the default. Setting the default to null drops the default.
If CIS is enabled, you cannot use default for remote servers.
null specifies that a column is added that allows nulls. The SAP ASE server assigns a null value during inserts if a user does not provide a value.
The properties of a bit-type column must always be not null.
not null specifies that a column is added that does not allow nulls. Users must provide a non-null value during inserts if no default exists.
If you do not specify null or not null, the SAP ASE server uses not null by default. However, you can switch this default using sp_dboption to make the default compatible with the SQL standards. If you specify (or imply) not null for the newly added column, a default clause is required. The default value is used for all existing rows of the newly added column, and applies to future inserts as well.
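For example, a not null column could be added together with a default in a single statement (the pub_country column here is purely illustrative):
alter table titles add pub_country varchar (20) default 'USA' not null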
keyname identifies a key created using create encryption key. The table owner must have select permission on keyname. If keyname is not supplied, the server looks for a default key created using create encryption key or alter encryption key.
See Encrypting Data, in Database Encryption for a list of supported datatypes.
0 – the row is not compressed.
1 through 9 – the SAP ASE server uses ZLib compression. Generally, the higher the compression number, the more the SAP ASE server compresses the LOB data, and the greater the ratio between compressed and uncompressed data (that is, the greater the space savings, in bytes, for the compressed data versus the uncompressed data).
However, the amount of compression depends on the LOB content, and the higher the compression level, the more CPU-intensive the process. That is, level 9 provides the highest compression ratio but also the heaviest CPU usage.
100 – the SAP ASE server uses FastLZ compression, which provides the compression ratio that uses the least CPU; it is generally used for shorter data.
101 – the SAP ASE server uses FastLZ compression. A value of 101 uses slightly more CPU than a value of 100, but provides a better compression ratio.
The compression algorithm ignores rows that do not use LOB data.
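As an illustrative sketch, LOB compression for a table could be set to ZLib level 5 (the choice of table and level here is arbitrary):
alter table mymsgs set lob_compression = 5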
Exact numeric with a scale of 0, or
Any of the integer datatypes, including signed or unsigned bigint, int, smallint, or tinyint.
IDENTITY columns are not updatable and do not allow nulls.
IDENTITY columns store sequential numbers, such as invoice numbers or employee numbers that are automatically generated by the SAP ASE server. The value of the IDENTITY column uniquely identifies each row in a table.
The storage for an in row column cannot exceed 16K bytes, depending on the page size of the database server and other variables. The default value is off row.
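For instance, a LOB column might be added with in-row storage for values up to 300 bytes (the summary column is illustrative):
alter table mymsgs add summary text null in row (300)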
Constraints do not apply to the data that already exists in the table at the time the constraint is added.
The default for fillfactor is 0; this is used when you do not include with fillfactor in the create index statement (unless the value has been changed with sp_configure). When specifying a fillfactor, use a value between 1 and 100.
A fillfactor of 0 creates clustered indexes with completely full pages and nonclustered indexes with completely full leaf pages. It leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes. There is seldom a reason to change the fillfactor.
If the fillfactor is set to 100, the SAP ASE server creates both clustered and nonclustered indexes, with each page 100 percent full. A fillfactor of 100 makes sense only for read-only tables—tables to which no data is ever added.
fillfactor values smaller than 100 (except 0, which is a special case) cause the SAP ASE server to create new indexes with pages that are not completely full. A fillfactor of 10 might be a reasonable choice if you are creating an index on a table that eventually holds a great deal more data, but small fillfactor values cause each index (or index and data) to take more storage space.
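As a hedged sketch, a unique constraint might be created with a specific fillfactor (the constraint name is illustrative):
alter table titles add constraint titles_unq unique (title_id) with fillfactor = 80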
If you do not specify a value for max_rows_per_page, the SAP ASE server uses a value of 0 when creating the index. When specifying max_rows_per_page for data pages, use a value between 0 and 256. The maximum number of rows per page for nonclustered indexes depends on the size of the index key; the SAP ASE server returns an error message if the specified value is too high.
For indexes created by constraints, a max_rows_per_page setting of 0 creates clustered indexes with full pages, and nonclustered indexes with full leaf pages. A setting of 0 leaves a comfortable amount of space within the index B-tree in both clustered and nonclustered indexes.
If max_rows_per_page is set to 1, the SAP ASE server creates both clustered and nonclustered leaf index pages with one row per page at the leaf level. You can use this to reduce lock contention on frequently accessed data.
Low max_rows_per_page values cause the SAP ASE server to create new indexes with pages that are not completely full, use more storage space, and may cause more page splits.
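For example, to create leaf pages holding one row each and reduce lock contention (the constraint name is illustrative):
alter table authors add constraint au_unq unique nonclustered (au_id) with max_rows_per_page = 1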
If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.
For partitions, on segment_name specifies the segment on which to place the partition.
To use this constraint, you must have references permission on the referenced table. The specified columns in the referenced table must be constrained by a unique index (created by either a unique constraint or a create index statement). If no columns are specified, there must be a primary key constraint on the appropriate columns in the referenced table. Also, the datatypes of the referencing table columns must exactly match the datatype of the referenced table columns.
If CIS is enabled, you cannot use references for remote servers.
All values in the referencing columns are null – the referential integrity condition is true.
All values are non-null – the referential integrity condition is true if there is a referenced row in which each corresponding column is equal.
If neither of these conditions holds, the referential integrity condition is false when:
All values are non-null and not equal, or
Some of the values in the referencing columns of a referencing row are non-null values, while others are null.
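As an illustrative sketch, assuming titles.title_id is constrained by a unique or primary key constraint, a referential constraint could be added as:
alter table salesdetail add constraint fk_title_id foreign key (title_id) references titles (title_id)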
A list of constant expressions introduced with in
A set of conditions, which may contain wildcard characters, introduced with like
An expression can include arithmetic operations and Transact-SQL functions. The search_condition cannot contain subqueries, aggregate functions, parameters, or host variables.
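For example, a check constraint on an existing column might look like this (the constraint name is illustrative):
alter table titles add constraint chk_price check (price >= 0)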
full – the SAP ASE server logs all transactions.
minimal – the SAP ASE server does not log row or page changes.
default – logging is set to the table default.
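A minimal sketch, using the hypothetical table mytable:
alter table mytable set dml_logging = minimal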
If you do not use in row (length), and the database-wide setting is not in effect, the LOB column is added with off-row storage of the data.
You can also use this option to increase the length of an in-row LOB column.
default – resets the compression level for the specified partitions to the compression level of the table.
none – the data in this table or partition is not compressed. For partitions, none indicates that data in this partition remains uncompressed even if the table compression is altered to row or page compression.
page – when the page fills, existing data rows that are row-compressed are then compressed using page-level compression to create page-level dictionary, index, and character-encoding entries. Set page compression at the partition or table level.
The SAP ASE server compresses data at the page level only after it has compressed data at the row level, so setting the compression to page implies both page and row compression.
row – compresses one or more data items in an individual row. The SAP ASE server stores data in a row compressed form only if the compressed form saves space compared to an uncompressed form. Set row compression at the partition or table level.
NONE – indexes on the specified table are not compressed.
PAGE – all indexes on the specified table are compressed.
To datarows and datapages locking schemes.
To tables with variable-length rows.
When alter table performs a data copy, such as with alter table add or modify. You cannot use with exp_row_size=num_bytes with alter table lock change operations.
Valid values are 0, 1, and any value between the minimum and maximum row length for the table. The default value is 0, which means a server-wide setting is applied.
If partition_name is omitted, the SAP ASE server creates a name in the form table_name_partition_id. The SAP ASE server truncates partition names that exceed the allowed maximum length.
For range-partitioned tables – adds one or more partitions to the upper end of a range partitioned table.
For list-partitioned tables – adds one or more partitions with a new set of values.
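For a list-partitioned table, an added partition supplies a new set of values; a hedged sketch (the table, partition, values, and segment names are illustrative):
alter table customers add partition (region_west values ('CA', 'OR', 'WA') on seg1)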
Deletes all data on the partition
Deletes the partition definition from the system catalog
Drops all corresponding local index partitions that refer to this data partition
Regenerates the partition condition object of the base table and each local index
Deletes all statistics information on this partition
Rebuilds all global indexes
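For example, a sketch of dropping partitions (the partition names are illustrative; the effects listed above apply to each dropped partition):
alter table sales drop partition Q1, Q2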
The destination partitions specified in the partition_condition_clause may be on the same segment as the source partition, or on a new segment. If you do not specify destination partition segments, the SAP ASE server creates the new partitions on the segment on which the source partition resides.
alter table publishers add manager_name varchar (40) null
alter table sales_daily add ord_num numeric (5,0) identity
alter table authors add constraint au_identification primary key (au_id, au_lname, au_fname)
alter table titles drop constraint au_identification
alter table authors add constraint au_identification primary key (au_id, au_lname, au_fname) with reservepagegap = 16
alter table authors replace phone default null
alter table emp modify ssn encrypt with key1 decrypt_default '000-00-0000'
alter table stolen_ccards modify ccard decrypt
If ccard was encrypted by a key protected by a user-defined password, precede this command with the set encryption passwd command.
alter table sales_mgr add bonus money null encrypt
set encryption passwd '4evermore' for key ssn_key
alter table employee modify ssn encrypt with ssn_key
If ssn in this example is an existing encrypted column encrypted by “key1”, this alter table command causes the SAP ASE server to decrypt ssn using “key1” and reencrypt it using “ssn_key”.
alter table employee replace salary decrypt_default $0.00
alter table employee replace salary drop decrypt_default
alter table titles partition by range (total_sales) (smallsales values <= (500) on seg1, mediumsales values <= (5000) on seg2, bigsales values <= (25000) on seg3)
alter table titles add partition (vbigsales values <= (40000) on seg4)
alter table titles set compression = row
alter table sales modify partition Y2009 set compression = page
alter table order_line set index_compression = NONE
alter table sales modify partition Y2009 set index_compression = PAGE
alter table titles lock datarows
alter table authors add author_type varchar (20) default "primary_author" not null
alter table titles drop advance, notes, contract
alter table authors modify city varchar (30) null
alter table stores modify stor_name not null
alter table titles modify type varchar (10) lock datarows
alter table titles modify notes varchar (150) not null with exp_row_size = 40
alter table mytable set transfer table on
alter table mytable set transfer table off
alter table titles
    add author_type varchar (30) null
    modify city varchar (30)
    drop notes
    add sec_advance money default 1000 not null
    lock datarows
    with exp_row_size = 40
alter table mymsgs modify description in row (400)
alter table authors add fullname compute au_fname + ' ' + au_lname
alter table authors modify fullname materialized
alter table orders split partition P2 into ( P5 values <= (25000) on seg2, P6 values <= (50000) on seg3)
alter table sales merge partition Q1, Q2, Q3, Q4 into Y2007
alter table orders move partition P2 to seg4
alter table titles drop total_sales with no datacopy
You cannot use alter table on a segment that includes a virtually hashed table, since a virtually hashed table must take only one exclusive segment, which cannot be shared by other tables or databases.
Before you add, modify, or drop columns on a table, run sp_depends to see if there are any stored procedures that depend on the table you are changing. If such stored procedures exist, drop, then re-create the stored procedures as necessary after changing table schema.
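For instance, to list the objects that depend on a table before altering it, you might run:
sp_depends titles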
If stored procedures using select * reference an altered table, no new columns appear in the result set, even if you use the with recompile option. You must drop and re-create the procedure to include the new columns; otherwise, a statement such as insert into table1 select * from table2 inside the procedure may produce incorrect results after the tables have been altered and new columns added.
When the table owner uses alter table, the SAP ASE server disables access rules during the execution of the command and enables them upon completion of the command. The access rules are disabled to avoid filtering of the table data during alter table.
If you specify clustered and use the on segment_name option, the entire table migrates to the segment you specify, since the leaf level of the index contains the actual data pages.
alter table...transfer table involves a data copy (similar to adding or removing a column) and is therefore a very expensive command in terms of performance.
alter table performs error checking for check constraints before it alters the table.
When using on segment_name for partitions, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.
See also sp_chgattribute, sp_help, sp_helpartition, sp_rename in Reference Manual: Procedures.
ANSI SQL – Compliance level: Transact-SQL extension.
See System and User-Defined Datatypes in Reference Manual: Building Blocks for datatype compliance information.
The permission checks for alter table differ based on your granular permissions settings.
| Setting | Description |
|---|---|
| Enabled | With granular permissions enabled, you must be the table owner or a user with alter any table privilege. A user with setuser privilege can impersonate the table owner by executing the setuser command. |
| Disabled | With granular permissions disabled, you must be the table owner or a user with sa_role. The database owner can impersonate the table owner by running the setuser command. |
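With granular permissions enabled, the alter any table privilege might be granted as follows (the user name is illustrative):
grant alter any table to user1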
Values in event and extrainfo columns of sysaudits are:
| Information | Values |
|---|---|
| Event | 3 |
| Audit option | alter |
| Command or access audited | alter table |
| Information in extrainfo | |