These are very brief descriptions and syntax for Adaptive Server commands. See Reference Manual: Commands for complete information.
Increases the amount of space allocated to a database. Using log off decreases the amount of space allocated to the log of a database, as well as to the modified pages section of an archive database.
alter database database_name [on {default | database_device} [= size] [, database_device [= size]]...] [log on {default | database_device} [= size] [, database_device [= size]]...] set { [durability = { no_recovery | at_shutdown | full}] [[,] dml_logging = {full | minimal} ] [[,] template = { database_name | NULL}] [, compression = {none | row | page}] [, lob_compression = {compression_level | off}] [,] inrow_lob_length = value [log off database_device [= size | [from logical_page_number] [to logical_page_number]] [, database_device [= size | [from logical_page_number] [to logical_page_number]] [with override] [for load] [for proxy_update]
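For example, a minimal sketch of both growing and shrinking a database (the database and device names below are illustrative, not from the source):

```sql
-- Extend the data segment of a database by 100MB on an existing device
alter database sales on datadev1 = '100M'

-- Free 50MB of log space from a log device (log off form)
alter database sales log off logdev1 = '50M'
```

A bare number for size is interpreted in megabytes; recent server versions also accept unit suffixes such as 'M' and 'G'.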
(Encrypted columns) Changes the current password, adds and drops a key copy, and regenerates an encryption key. Altering the master key:
alter encryption key [dual] master with <char_string> { add encryption {with passwd <char_string> for user <user_name> [for recovery] | for automatic_startup } | modify encryption { with passwd <char_string> [for recovery] | for automatic_startup } | drop encryption { for user <user_name> | for recovery | for automatic_startup } | regenerate key [ with passwd <char_string>] | recovery encryption with passwd <char_string> | modify owner <user_name> }
Altering the syb_extpasswdkey service key:
alter encryption key syb_extpasswdkey [ with { static key | master key}] { regenerate key [ with { static key | master key }] | modify encryption [ with { static key | master key }] }
Altering the column encryption key:
alter encryption key [[database.][owner].] keyname { [ as | not default ] [dual] master [ with { static key | master key} ] regenerate key [ with { static key | master key [no] dual_control} ] | [with passwd 'password' | system_encr_passwd | login_passwd | 'base_key_password'] modify encryption [ with {passwd {'password' | system_encr_passwd | login_passwd } | master key }] [[no] dual_control] for automatic startup add encryption [ with passwd 'password' | 'key_copy_password'] for user user_name [for [login_association | recovery | automatic_startup]] drop encryption for { user user_name | recovery [ for recovery ] | [ for automatic_startup ]} | [ with passwd 'password'] recover encryption with passwd 'password' | modify owner user_name }
Changes the attributes of a login account.
alter login login_name { [modify attribute_value_pair_list ] | [add auto activated roles role_name [, role_name_list ]] | [drop auto activated roles { ALL | role_name [, role_name_list ]}] | [drop attribute_name_list ] | [ with password caller_password modify password [immediately] new_loginName_password ] }
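A minimal sketch of a login changing its own password (login name and passwords are hypothetical):

```sql
-- The caller supplies the current password, then the new one
alter login jsmith with password oldSecret modify password newSecret
```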
Changes the attributes of a login profile.
alter login profile login_profile_name { [as [ not ] default ] | [modify attribute_value_pair_list ] | [add auto activated roles role_name [, role_name_list ]] | [drop auto activated roles { ALL | role_name [, role_name_list ]}] | [drop attribute_name_list] }
Transfers the ownership of database objects from one owner to another.
alter { object_type | all } [owner.]{object_name | * } modify owner { name_in_db | loginame only login_name } [ preserve permissions ]
Defines mutually exclusive relationships between roles; adds, drops, and changes passwords for roles; specifies the password expiration interval, the minimum password length, and the maximum number of failed logins allowed for a specified role. alter role also locks and unlocks roles.
alter role role1 {add | drop} exclusive {membership | activation} role2
alter role role_name [add passwd "password" | drop passwd] [lock | unlock]
alter role {role_name | "all overrides"} set {passwd expiration | min passwd length | max failed_logins} option_value
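For instance, locking a role and tightening its password policy (role name illustrative):

```sql
alter role ops_role lock
alter role ops_role set min passwd length 8
alter role ops_role set max failed_logins 5
```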
Adds new columns to a table; drops or modifies existing columns; adds, changes, or drops constraints; changes properties of an existing table; enables or disables triggers on a table; and changes the compression level of a table.
Supports adding, dropping, and modifying computed columns, and enables the materialized property, nullability, or definition of an existing computed column to be changed.
Partitions and repartitions a table with specified partition strategy, or adds partitions to a table with existing partitions.
alter table [[database.][owner].]table_name {add column_name datatype} [default {constant_expression | user | null}] {identity | null | not null [not materialized]} [off row | in row] [[constraint constraint_name] {{unique | primary key} [clustered | nonclustered] [asc | desc] [with {fillfactor = pct, max_rows_per_page = num_rows, reservepagegap = num_pages}] [on segment_name] | references [[database.]owner.]ref_table [(ref_column)] [match full] | check (search_condition)] [encrypt [with [database.[owner].] keyname] [decrypt_default {constant_expression | null}]] [compressed = compression_level | not compressed] [, next_column]... | add [constraint constraint_name] {unique | primary key} [clustered | nonclustered] (column_name [asc | desc] [, column_name [asc | desc]...]) [with {fillfactor = pct, max_rows_per_page = num_rows, reservepagegap = num_pages}] [on segment_name] | foreign key (column_name [{, column_name}...]) references [[database.]owner.]ref_table [(ref_column [{, ref_column}...])] [match full] | add lob-colname { text | image | unitext } [null] [ in row [ (length) ] ] | check (search_condition)} | set dml_logging = {full | minimal | default} | [, compression = {none | page | row}]} [lob_compression = off | compression_level] | drop {column_name [, column_name]... | constraint constraint_name} | modify column_name [datatype [null | not null]] [[[encrypt [with keyname] [decrypt_default [value]] | decrypt] [[not] compressed] [compressed = compression_level | not compressed] [, next_column]... | replace column_name default {constant_expression | user | null} | decrypt_default {constant_expression | null} | drop decrypt_default | lock {allpages | datarows | datapages} } | with exp_row_size=num_bytes transfer table [on | off]} | partition number_of_partitions | unpartition | partition_clause | add_partition_clause
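Two common uses, shown with hypothetical table and column names:

```sql
-- Add a nullable column
alter table orders add ship_note varchar(80) null

-- Switch the table to row-level locking
alter table orders lock datarows
```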
alter table syntax for partitions:
partition_clause::= partition by range (column_name[, column_name]...) ([partition_name] values <= ({constant | MAX} [, {constant | MAX}] ...) [on segment_name] [compression_clause] [on segment_name] [, [partition_name] values <= ({constant | MAX} [, {constant | MAX}] ...) [on segment_name]]...) | partition by hash (column_name[, column_name]...) { (partition_name [on segment_name] [, partition_name [on segment_name]]...) [compression_clause] [on segment_name] | number_of_partitions [on (segment_name[, segment_name] ...)]} | partition by list (column_name) ([partition_name] values (constant[, constant] ...) [on segment_name] [compression_clause] [on segment_name] [, [partition_name] values (constant[, constant] ...) [on segment_name]] ...) | partition by roundrobin { (partition_name [on segment_name] [, partition_name [on segment_name]]...) [compression_clause] [on segment_name] | number_of_partitions [on (segment_name [, segment_name]...)]} add_partition_clause::= add partition { ([partition_name] values <= ({constant | MAX} [, {constant | MAX}]...) [on segment_name] [compression_clause] [on segment_name] [, [partition_name ] values <= ({constant | MAX} [, {constant | MAX}] ...) [on segment_name]]...) | modify partition {partition_name [, partition_name . . .] } set compression [= {default | none | row | page}] | ([partition_name] values (constant[, constant] ...) [on segment_name] [, [partition_name] values (constant[, constant] ...) [on segment_name]] ...)}
alter table syntax for computed columns:
alter table add column_name {compute | as} computed_column_expression... [materialized | not materialized] drop column_name modify column_name {null | not null | {materialized | not materialized} [null | not null] | {compute | as} computed_column_expression [materialized | not materialized] [null | not null]}
alter table syntax for dropping partitions:
alter table table_name drop partition partition_name [, partition_name]...
Alters a thread pool.
alter thread pool pool_name with { pool name = "new_name", thread count = thread_count, [pool description = "description"]} [idle timeout = time_period]
Encloses a series of SQL statements so that control-of-flow language, such as if...else, can affect the performance of the whole group.
begin statement block end
Marks the starting point of a user-defined transaction.
begin tran[saction] [transaction_name]
Causes an exit from a while loop. break is often activated by an if test.
while logical_expression statement break statement continue
Writes all dirty pages (pages that have been updated since they were last written) to the database device.
checkpoint [all | [dbname[, dbname]...]]
Deactivates a cursor.
close cursor_name
Marks the ending point of a user-defined transaction.
commit [tran | transaction | work] [transaction_name]
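Together with begin transaction, commit brackets a unit of work (table name hypothetical):

```sql
begin transaction add_order
insert into orders (order_id) values (1001)
commit transaction add_order
```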
Generates summary values that appear as additional rows in the query results.
start_of_select_statement compute row_aggregate (column_name) [, row_aggregate (column_name)]... [by column_name [, column_name]...]
(Component Integration Services only) Connects to the specified server and disconnects the connected server.
Use this syntax with CIS to create a passthru to a different server.
connect to server_name disconnect [from ASE] [all] [connection_name]
Opens a new JDBC-level connection to Adaptive Server, and does not use CIS. You can specify the arguments in any order. If you do not include arguments, Adaptive Server prompts you for connection parameters.
connect [to ASE engine_name] [database database_name] [as connection_name] [user user_id] [identified by password]
Opens a new JDBC-level connection to Adaptive Server. This syntax does not use CIS:
connect using connect_string
Restarts the while loop. continue is often activated by an if test.
while boolean_expression statement break statement continue
Creates an archive database.
create archive database db_name [on db_device [= size] [, db_device [= size] ] ... ] with scratch_database = db_name
Creates a new database. Nonclustered environments:
create [inmemory] [temporary] database database_name [use database_name as template] [on {default | database_device} [= size] [, database_device [= size]]...] [log on database_device [= size] [, database_device [= size]]...] [with {dbid = number, default_location = "pathname", override}] | [[,]durability = { no_recovery | at_shutdown | full} ] [, compression = {none | row | page}] [, lob_compression = {compression_level | off}] [ [,] inrow_lob_length = value ]... [for {load | proxy_update}]
Cluster environments:
create [ [ global | system ] temporary ] database database_name [ for instance instance_name ] [on {default | database_device} [= size] [, database_device [= size]]...] [log on database_device [= size] [, database_device [= size]]...] [with {override | default_location = "pathname"}] [for {load | proxy_update}]
Specifies a value to insert in a column (or in all columns of a user-defined datatype) if no value is explicitly supplied at insert time.
create default [owner.]default_name as constant_expression
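A default is typically created and then bound to a column with the sp_bindefault system procedure (object names hypothetical):

```sql
create default country_dflt as 'USA'
exec sp_bindefault country_dflt, 'customers.country'
```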
Creates encryption keys. All the information related to keys and encryption is encapsulated by create encryption key, which allows you to specify the encryption algorithm and key size, the key’s default property, an optional user-specified password to encrypt the key, as well as the use of an initialization vector or padding during the encryption process.
Create the master key:
create encryption key [dual] master [for AES] with passwd char_literal
Create the server key:
create encryption key syb_extpasswdkey [ with { static key | master key }] create encryption key syb_syscommkey [ with { static key | master key }]
Create the column encryption key:
create encryption key [[database.][owner].]keyname [as default] [for algorithm] [with [{{passwd {char_literal | system_encr_passwd} | master key}] [key_length num_bits] [init_vector {null | random}] [pad {null | random}] [[no] dual_control]}]
(Component Integration Services only) Creates a proxy table, then retrieves and stores metadata from a remote table and places the data into the proxy table. Allows you to map the proxy table to a table, view, or procedure at a remote location.
create existing table table_name (column_list) [on segment_name] [[external {table | procedure | file | connection_type}] at pathname [column delimiter "string"]]
The preferred method of creating proxy tables is the create proxy_table command, which eliminates the need to define the column definitions.
Creates a user-defined function, which is a saved Transact-SQL routine that returns a specified value.
create function [ owner_name. ] function_name [ ( @parameter_name [as] parameter_datatype [ = default ] [ ,...n ] ) ] returns return_datatype [ with recompile] as [begin] function_body return scalar_expression [end]
Creates a user-defined function by adding a SQL wrapper to a Java static method. Can return a value defined by the method.
create function [owner.]sql_function_name ([ sql_parameter_name sql_datatype [(length)| (precision[, scale ])] [[, sql_parameter_name sql_datatype [(length)| (precision[, scale])]] ...]]) returns sql_datatype [(length)| (precision[, scale])] [modifies sql data] [returns null on null input | called on null input] [deterministic | not deterministic] [exportable] language java parameter style java external name 'java_method_name [([java_datatype[, java_datatype ...]])] '
Creates an index on one or more computed or noncomputed columns in a table. Creates partitioned indexes.
Allows computed columns, like ordinary columns, to be index keys, and creates function-based indexes. A function-based index has one or more expressions as its index key. The existing create index syntax can create indexes on computed columns, but function-based indexes require additional syntax.
create [unique] [clustered | nonclustered] index index_name on [[database.]owner.]table_name (column_expression [asc | desc] [, column_expression [asc | desc]]...) [with {fillfactor = pct, max_rows_per_page = num_rows, reservepagegap = num_pages, consumers = x, ignore_dup_key, sorted_data, [ignore_dup_row | allow_dup_row], statistics using num_steps values}] [on segment_name] [index_partition_clause]
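A representative index creation (table and column names hypothetical):

```sql
create unique clustered index orders_pk
on orders (order_id asc)
with fillfactor = 80
```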
Creates index partitions:
index_partition_clause::= [local index [partition_name [on segment_name] [, partition_name [on segment_name]...]]]
Creates function-based indexes:
create [unique | nonclustered] index index_name on [[database.] owner.] table_name (column_expression [asc | desc] [, column_expression [asc | desc]]...
Creates a login account; specifies a password, a login profile for the account, and user-supplied parameters to be assigned to the account.
create login login_name with [encrypted] password password [attribute_value_pair_list]
Creates a login profile with specified attributes.
create login profile login_profile_name [ as default ] [ with { attributes from login_name | attribute_value_pair_list } ]
Creates an abstract plan.
create plan query plan [into group_name] [and set @new_id]
Creates a stored procedure or an extended stored procedure (ESP) that can take one or more user-supplied parameters.
create procedure [owner.]procedure_name[;number] [[(@parameter_name datatype [(length) | (precision [, scale])] [= default][output] [, @parameter_name datatype [(length) | (precision [, scale])] [= default][output]]...)]] [with recompile] as {SQL_statements | external name dll_name}
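A small sketch of a procedure with a defaulted parameter (all names hypothetical):

```sql
create procedure show_orders @cust_id int = 0
as
    select order_id, total from orders where cust_id = @cust_id
```

Invoke it with: execute show_orders @cust_id = 5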
Creates a SQLJ stored procedure by adding a SQL wrapper to a Java static method. Can accept user-supplied parameters and return result sets and output parameters.
create procedure [owner.]sql_procedure_name ([[in | out | inout] sql_parameter_name sql_datatype [(length) | (precision[, scale])] [=default] ...]) [, [in | out | inout] sql_parameter_name sql_datatype [(length) | (precision[, scale])]] [=default] ...]) [modifies sql data] [dynamic result sets integer] [deterministic | not deterministic] language java parameter style java external name 'java_method_name [([java_datatype[, java_datatype ...]])]'
(Component Integration Services only) Creates a proxy table without specifying a column list. CIS derives the column list from the metadata it obtains from the remote table.
create proxy_table table_name [external [table | directory | file]] at pathname [column delimiter "string"]
Creates a user-defined role; specifies the password expiration interval, the minimum password length, and the maximum number of failed logins allowed for a specified role at creation. You can also associate a password with the role at the time that the role is created.
create role role_name [with passwd "password" [, {passwd expiration | min passwd length | max failed_logins} option_value]]
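For example, creating a role with a password and a 30-day password expiration (role name and password hypothetical):

```sql
create role auditor_role with passwd "s3cret", passwd expiration 30
```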
Specifies the domain of acceptable values for a particular column or for any column of a user-defined datatype, and creates access rules.
create [[and | or] access] rule [owner.]rule_name as condition_expression
Creates a new collection of tables, views, and permissions for a database user.
create schema authorization authorization_name create_object_statement [create_object_statement ...] [permission_statement ...]
Wraps the supplied SQL statement in a stored procedure with the specified name and parameters.
create service service-name [secure security_options ] [, userpath path] [, alias alias-name] type { xml | raw | soap } [[(@parameter_name datatype [(length ) | (precision [, scale ])] [= default][output] [, @parameter_name datatype [(length ) | (precision [, scale ])] [= default][output]]...[)]] as SQL_statements
security_options ::= (security_option_item [security_option_item])
Creates new tables and optional integrity constraints.
Defines computed columns.
Defines table, row, and partition compression levels.
Defines encrypted columns and decrypt defaults on encrypted columns.
create table [[database.[owner].]table_name (column_name datatype [default {constant_expression | user | null}] [{identity | null | not null}] [ in row [(length)] | off row ] [[constraint constraint_name] {{unique | primary key} [clustered | nonclustered] [asc | desc] [with {fillfactor = pct, max_rows_per_page = num_rows,} reservepagegap = num_pages] dml_logging = {full | minimal}) [on segment_name] | references [[database.]owner.]ref_table [(ref_column)] [match full] | check (search_condition)}]} [[encrypt [with [database.[owner].]key_name] [decrypt_default constant_expression | null]] [not compressed] [compressed = {compression_level | not compressed} [[constraint [[database.[owner].]key_name] {unique | primary key} [clustered | nonclustered] (column_name [asc | desc] [{, column_name [asc | desc]}...]) [with {fillfactor = pct max_rows_per_page = num_rows, reservepagegap = num_pages}] [on segment_name] | foreign key (column_name [{,column_name}...]) references [[database.]owner.]ref_table [(ref_column [{, ref_column}...])] [match full] | check (search_condition) ...} [{, {next_column | next_constraint}}...]) [lock {datarows | datapages | allpages}] [with {max_rows_per_page = num_rows, exp_row_size = num_bytes, reservepagegap = num_pages, identity_gap = value transfer table [on | off] dml_logging = {full | minimal} compression = {none | page | row}}] lob_compression = off | compression_level [on segment_name] [partition_clause] [[external table] at pathname] [for load] compression_clause::= with compression = {none | page | row}
Use this syntax for partitions:
partition_clause::= partition by range (column_name[, column_name]...) ([partition_name] values <= ({constant | MAX} [, {constant | MAX}] ...) [compression_clause] [on segment_name] [, [partition_name] values <= ({constant | MAX} [, {constant | MAX}] ...) [compression_clause] [on segment_name]]...) | partition by hash (column_name[, column_name]...) { (partition_name [compression_clause] [on segment_name] [, partition_name [compression_clause] [on segment_name]]...) | number_of_partitions [on (segment_name[, segment_name] ...)]} | partition by list (column_name) ([partition_name] values (constant[, constant] ...) [compression_clause] [on segment_name] [, [partition_name] values (constant[, constant] ...) [compression_clause] [on segment_name]] ...) | partition by roundrobin { (partition_name [on segment_name] [, partition_name [compression_clause] [on segment_name]]...) | number_of_partitions [on (segment_name[, segment_name]...)]}
Use this syntax for computed columns
create table [[database.[owner].]table_name (column_name {compute | as} computed_column_expression [[materialized [not compressed]] | [not materialized]])
Use this syntax to create a virtually hashed table
create table [database.[owner].]table_name . . . | {unique | primary key} using clustered (column_name [asc | desc] [{, column_name [asc | desc]}...])= (hash_factor [{, hash_factor}...]) with max num_hash_values key
Creates a user-defined thread pool.
create thread pool pool_name with thread count = count [, pool description = description ] [idle timeout = time_period]
Creates a trigger, which is a type of stored procedure that is often used for enforcing integrity constraints. A trigger executes automatically when a user attempts a specified data modification statement on a specified table.
create trigger [owner.]trigger_name on [owner.]table_name {for {insert, update, delete} | instead of {insert, update, delete}} [as [if update (column_name) [{and | or} update (column_name)]...] SQL_statements [if update (column_name) [{and | or} update (column_name)]... SQL_statements]...]
Creates a view, which is an alternative way to look at the data in one or more tables.
create view [owner.]view_name [(column_name[, column_name]...)] as select [distinct] select_statement [with check option]
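A sketch using with check option, so rows inserted or updated through the view must still satisfy its where clause (names hypothetical):

```sql
create view ca_customers
as select * from customers where state = 'CA'
with check option
```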
Database consistency checker (dbcc) checks the logical and physical consistency of a database and provides statistics, planning, and repair functionality.
Certain dbcc commands apply only to shared-disk clusters. See the separately listed dbcc syntax for clusters.
dbcc addtempdb (dbid |database_name)
dbcc checkalloc [(database_name[, fix | nofix])]
dbcc checkcatalog [(database_name[, fix])]
dbcc checkdb [(database_name[, skip_ncindex])]
dbcc checkindex ({table_name | table_id}, index_id [, bottom_up[, partition_name | partition_id]])
dbcc checkstorage [(database_name)]
dbcc checktable (table_name | table_id [, skip_ncindex | fix_spacebits | "check spacebits" | bottom_up | NULL [, partition_name | partition_id]])
dbcc checkverify (dbname[, tblname[, ignore_exclusions]])
dbcc complete_xact (xid, {["commit", "1pc"] | "rollback"})
dbcc dbrepair (database_name, dropdb)
dbcc engine ({offline, [enginenum] | "online"})
dbcc fix_text ({table_name | table_id})
dbcc forget_xact (xid)
dbcc indexalloc (table_name | table_id, index_id [, optimized | fast | NULL [, fix | nofix | NULL [, partition_name | partition_id]]])
dbcc monitor (increment, <group name>)
dbcc monitor (decrement, <group name>)
dbcc monitor (reset, <group name>)
dbcc pravailabletempdbs
dbcc rebuild_text (table_name | table_id | "all"[, column[, text_page [, data_partition_name | data_partition_id]]])
dbcc reindex ({table_name | table_id})
dbcc serverlimits
dbcc stackused
dbcc tablealloc (table_name | table_id [, full | optimized | fast | NULL [, fix | nofix | NULL [, data_partition_name | data_partition_id]]])
dbcc textalloc (table_name | table_id [, full | optimized | fast | NULL [, fix | nofix | NULL [, data_partition_name | data_partition_id]]])
dbcc {traceon | traceoff} (flag [, flag ...])
dbcc tune ({ascinserts, {0 | 1} , table_name | cleanup, {0 | 1} | cpuaffinity, start_cpu {, on| off} | des_greedyalloc, dbid, object_name, " {on | off}" | deviochar vdevno, "batch_size" | doneinproc {0 | 1}})
dbcc upgrade_object [ ( dbid | dbname [, '[database.[owner].]compiled_object_name' | 'check' | 'default' | 'procedure' | 'rule' | 'trigger' | 'view' [, 'force' ] ] ) ]
dbcc syntax for clusters only:
dbcc nodetraceon(trace_flag_number) dbcc nodetraceoff(trace_flag_number)
dbcc set_scope_in_cluster("cluster"|"instance"|"scope")
dbcc quorum
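Typical consistency checks, run against a hypothetical database and table:

```sql
dbcc checkdb(sales)
dbcc checkalloc(sales, nofix)
dbcc checktable(orders)
```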
Makes a cursor inaccessible and releases all memory resources committed to that cursor.
deallocate [cursor] cursor_name
Deletes a large object (LOB) stored in memory and invalidates its LOB locator.
deallocate locator locator_descriptor
Declares the name and type of local variables for a batch or procedure. Variable declaration:
declare @variable_name datatype [, @variable_name datatype]...
Variable assignment:
select @variable = {expression | select_statement} [, @variable = {expression | select_statement} ...] [from table_list] [where search_conditions] [group by group_by_list] [having search_conditions] [order by order_by_list] [compute function_list [by by_list]]
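Declaration and assignment together, against a hypothetical table:

```sql
declare @maxid int
select @maxid = max(order_id) from orders
```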
Defines a cursor, by associating a select statement with a cursor name. You can use declare cursor with an archive database.
declare cursor_name [semi_sensitive | insensitive] [scroll | no scroll] [release_locks_on_close] cursor for select_statement [for {read only | update [of column_name_list]}]
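The usual cursor life cycle (declare, open, fetch, close, deallocate), with hypothetical names:

```sql
declare c1 cursor for
    select order_id from orders
    for read only
open c1
fetch c1
close c1
deallocate cursor c1
```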
Removes rows from a table.
delete [top unsigned_integer] [from] [[database.]owner.]{view_name|table_name} [where search_conditions] [plan "abstract plan"]
delete [[database.]owner.]{table_name | view_name} [from [[database.]owner.]{view_name [readpast]| table_name [(index {index_name | table_name} [prefetch size][lru|mru])]} [readpast] [, [[database.]owner.]{view_name [readpast]| table_name [(index {index_name | table_name} [prefetch size][lru|mru])] [readpast]} ...] [where search_conditions]] [plan "abstract plan"]
delete [from] [[database.]owner.]{table_name|view_name} where current of cursor_name
Removes statistics from the sysstatistics system table.
delete [shared] statistics table_name [partition data_partition_name] [(column_name[, column_name] ...)]
Makes a physical device or file usable by Adaptive Server.
disk init name = "device_name", physname = { 'physical_name' | 'cache_name'}, skip_alloc = {true | false}, [vdevno = virtual_device_number,] size = number_of_blocks [, type = 'inmemory' ] [, vstart = virtual_address , cntrltype = controller_number] [, dsync = {true | false}] [, directio = {true | false}] [, instance = "instance_name"]
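A minimal sketch, assuming a 100MB file device (device name and path are hypothetical):

```sql
disk init
    name = "datadev1",
    physname = "/sybase/data/datadev1.dat",
    size = "100M",
    dsync = true
```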
Creates a software mirror that immediately takes over when the primary device fails.
disk mirror name = "device_name", mirror = "physicalname" [, writes = {serial | noserial}] [clear = {TRUE | FALSE}]
Rebuilds the master database’s sysusages and sysdatabases system tables from information contained in sysdevices.
disk refit
Rebuilds the master database’s sysdevices system table. Use disk reinit as part of the procedure to restore the master database.
disk reinit name = "device_name", physname = "physicalname" , [vdevno = virtual_device_number ,] size = number_of_blocks [, vstart = virtual_address , cntrltype = controller_number] [, dsync = {true | false}] [, directio = {true | false}] [, instance = "instance_name"]
Restarts disk mirroring after it is stopped by failure of a mirrored device, or temporarily disabled by the disk unmirror command.
disk remirror name = "device_name"
Dynamically increases the size of the device used by Adaptive Server.
disk resize name = "device_name", size = additional_space
Suspends disk mirroring initiated with the disk mirror command to allow hardware maintenance or the changing of a hardware device.
disk unmirror name = "device_name" [, side = {"primary" | secondary}] [, mode = {retain | remove}]
Removes one or more databases from Adaptive Server, including archive databases.
drop database database_name [, database_name] ...
Removes a user-defined default.
drop default [owner.]default_name [, [owner.]default_name] ...
Allows key owners to drop the named encryption key.
drop encryption key [[database.][owner].]keyname
Removes one or more user-defined functions from the current database.
drop function { [owner_name.] function_name } [, ...n]
Removes a SQLJ function.
drop func[tion] [owner.]function_name [, [owner.]function_name ] ...
Removes an index from a table in the current database.
drop index table_name.index_name [, table_name.index_name] ...
Drops a login account or list of accounts.
drop login login_name [, login_name_list] [ with override ]
Drops a login profile or list of login profiles.
drop login profile login_profile_name [, login_profile_name_list] [ with override ]
Removes a procedure.
drop proc[edure] [owner.]procedure_name [, [owner.]procedure_name] ...
Drops a user-defined role.
drop role role_name [with override]
Removes a user-defined rule.
drop rule [owner.]rule_name[, [owner.]rule_name] ...
Removes a user-defined Web service from the current database. Both the metadata and the corresponding stored procedure are removed.
drop service service-name
Drops a user-defined pool.
drop thread pool pool_name
Removes a table definition and all of its data, indexes, partition properties, triggers, encryption properties, and permissions from the database.
drop table [[database.]owner.]table_name [, [[database.]owner.]table_name] ...
Removes a trigger.
drop trigger [owner.]trigger_name [, [owner.]trigger_name] ...
Removes one or more views from the current database.
drop view [owner.]view_name [, [owner.]view_name] ...
Makes a backup copy of the entire database, including the transaction log, in a form that can be read in with load database. Dumps and loads are performed through Backup Server.
dump database database_name to [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name] [with shrink_log] [with verify [= header | full]] [stripe on [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name]] [[stripe on [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name]]...] [with { density = density_value, blocksize = number_bytes, capacity = number_kilobytes, compression = compress_level, dumpvolume = volume_name, file = file_name, [dismount | nodismount], [nounload | unload], passwd = password, retaindays = number_days, [noinit | init], notify = {client | operator_console} }]
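A minimal dump to a file, assuming the path is writable by Backup Server (database name and path are hypothetical):

```sql
dump database sales to "/backups/sales.dmp"
with notify = operator_console
```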
(Tivoli Storage Manager) Use this syntax for copying the database when the Tivoli Storage Manager provides backup services.
dump database database_name to "syb_tsm::object_name" [blocksize = number_bytes] [stripe on "[syb_tsm::]object_name" [blocksize = number_bytes]]... [with { blocksize = number_bytes, compression = compress_level, passwd = password, [noinit | init], notify = {client | operator_console}, verify[ = header | full] } ]
Makes a copy of a transaction log and removes the inactive portion. To make a routine log dump:
dump tran[saction] database_name to [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name] [stripe on [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name]] [[stripe on [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name]]...] [with { density = density_value, blocksize = number_bytes, capacity = number_kilobytes, compression = compress_level, dumpvolume = volume_name, file = file_name, [dismount | nodismount], [nounload | unload], retaindays = number_days, [noinit | init], notify = {client | operator_console}, standby_access}]
To truncate the log without making a backup copy:
dump tran[saction] database_name with truncate_only
To truncate a log that is filled to capacity (use only as a last resort, because the log contents are lost):
dump tran[saction] database_name with no_log
To back up the log after a database device fails:
dump tran[saction] database_name to [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name] [stripe on [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name]] [[stripe on [compress::[compression_level::]]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, capacity = number_kilobytes, dumpvolume = volume_name, file = file_name]]...] [with { density = density_value, blocksize = number_bytes, capacity = number_kilobytes, compression = compress_level dumpvolume = volume_name, file = file_name, [dismount | nodismount], [nounload | unload], retaindays = number_days, [noinit | init], no_truncate, notify = {client | operator_console}}]
To copy the transaction log when the Tivoli Storage Manager provides backup services:
dump transaction database_name to "syb_tsm::object_name" [blocksize = number_bytes] [stripe on "[syb_tsm::]object_name" [blocksize = number_bytes]]... [with { blocksize = number_bytes, compression = compress_level, passwd = password, [noinit | init], notify = {client | operator_console}, verify[ = header | full] } ]
Runs a procedure or dynamically executes Transact-SQL commands.
[exec[ute]] [@return_status =] [[[server .]database.]owner.]procedure_name[;number] [[@parameter_name =] value | [@parameter_name =] @variable [output] [, [@parameter_name =] value | [@parameter_name =] @variable [output]...]] [with recompile]
or
exec[ute] ("string" | char_variable [+ "string" | char_variable]...)
Returns a row or a set of rows from a cursor result set.
fetch [next | prior | first | last | absolute fetch_offset | relative fetch_offset] [from] cursor_name [into fetch_target_list]
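A minimal cursor round trip, assuming the pubs2 sample database:

```sql
declare @title varchar(80), @price money
declare title_crsr cursor
    for select title, price from titles where price > 10
open title_crsr
fetch title_crsr into @title, @price   -- next is the default fetch direction
close title_crsr
deallocate cursor title_crsr
```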
Branches to a user-defined label.
label:
goto label
Assigns permissions to individual users, groups of users, and roles. Assigns roles to users, or system- or user-defined roles.
Grants permission to access database objects:
grant {all [privileges] | permission_list} on {table_name [correlation_name] [(column_list)] | view_name [(column_list)] | stored_procedure_name | function_name | keyname} [where search_condition] [as pred_name] to {public | name_list | role_list} [with grant option]
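For instance, assuming pubs2-style object and user names:

```sql
grant select on titles to public
grant update on titles(price, advance) to mary with grant option
```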
Grants permission to use built-in functions:
grant select on [builtin] builtin to {name_list | role_list}
Grants permission to execute certain commands:
grant {all [privileges] | command_list} to {public | name_list | role_list}
Grants access on certain dbcc commands:
grant dbcc {dbcc_command [on {all | database}] [, dbcc_command [on {all | database}], ...]} to {user_list | role_list }
Grants permission to create encryption keys:
grant create encryption key to {user_list | role_list | group_list}
Grants decrypt permission on a table or a list of columns in a table:
grant decrypt on [ owner. ]tablename[(columnname [{,columname}])] to {user | group | role}
Grants the default permissions for specific system tables:
grant default permissions on system tables
Grants a role to a user or a role:
grant {role role_granted [, role_granted ...]} to grantee [, grantee...]
Switches your server user identity to any other server login and limits its use based on the roles of the target login:
grant set {proxy | tracing} to role_list [restrict role role_list | all | system]
Used in select statements to divide a table into groups and to return only groups that match conditions in the having clause. group by is typically used in conjunction with aggregates to specify how to group the unaggregated columns of a select query. having clauses are applied to these groups.
Start of select statement
[group by [all] aggregate_free_expression [, aggregate_free_expression]...]
[having search_conditions]
End of select statement
Imposes conditions on the execution of a SQL statement.
if logical_expression [plan "abstract plan"] statements
[else [if logical_expression] [plan "abstract plan"] statement]
Adds new rows to a table or view.
insert [into] [database.[owner.]]{table_name|view_name} [(column_list)] {values (expression [, expression]...) |select_statement [plan "abstract plan"]}
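A sketch of both insert forms, using hypothetical tables:

```sql
-- single row by value (newauthors is an assumed two-column table)
insert into newauthors values ("998-72-3567", "Ringer")
-- copy rows selected from another table
insert into newauthors
    select au_id, au_lname from authors where state = "CA"
```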
Kills a process.
kill spid [with statusonly]
Loads a backup copy of a user database, including its transaction log, that was created with dump database, and materializes an archive database that has been loaded with a database dump.
Makes a routine database load:
load database database_name from [compression=]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name] with verify only [= header | full] [stripe on [compression=]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name] [[stripe on [compression=]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name]]...] [with { density = density_value, blocksize = number_bytes, compression, dumpvolume = volume_name, file = file_name, [dismount | nodismount], [nounload | unload], passwd = password, notify = {client | operator_console}, [override]}]]
Returns header or file information without loading the backup:
load database database_name from [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name] [stripe on [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name] [[stripe on [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name]]...] [with { density = density_value, blocksize = number_bytes, compression, dumpvolume = volume_name, file = file_name, [dismount | nodismount], [nounload | unload], passwd = password, listonly [= full], headeronly, notify = {client | operator_console} }]]
Materializes an archive database:
load database database_name from dump_device [[stripe on stripe_device] ...] [with [norecovery,] [passwd = password]]
Loads a copy of the database when the Tivoli Storage Manager is licensed at your site:
load database database_name from syb_tsm::[[-S source_server_name][-D source_database_name]::]object_name [blocksize = number_bytes] [stripe on syb_tsm::[[-S source_server_name] [-D source_database_name]::]object_name [blocksize = number_bytes]] [[stripe on syb_tsm::[[-S source_server_name] [-D source_database_name]::]object_name [blocksize = number_bytes]]...] [with { blocksize = number_bytes, passwd = password, listonly [= full], headeronly, notify = {client | operator_console}, [[verifyonly | verify] [= header | full]] } ]
Loads a backup copy of the transaction log that was created with dump transaction.
Makes a routine log load:
load tran[saction] database_name from [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name] [stripe on [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name] [[stripe on [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name]]...] [with { density = density_value, blocksize = number_bytes, compression, dumpvolume = volume_name, file = file_name, [dismount | nodismount], [nounload | unload], notify = {client | operator_console} }]]
Returns header or file information without loading the backup log:
load tran[saction] database_name from [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name] [stripe on [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name] [[stripe on [compress::]stripe_device [at backup_server_name] [density = density_value, blocksize = number_bytes, dumpvolume = volume_name, file = file_name]]...] [with { density = density_value, blocksize = number_bytes, compression, dumpvolume = volume_name, file = file_name, [dismount | nodismount], [nounload | unload], listonly [= full], headeronly, notify = {client | operator_console} until_time = datetime}]]
Loads a transaction log into an archive database:
load tran[saction] database_name from dump_device [[stripe on stripe_device] ... ]
(Tivoli Storage Manager only) Loads a copy of the transaction log when the Tivoli Storage Manager is licensed at your site:
load transaction database_name from syb_tsm::[[-S source_server_name][-D source_database_name]::]object_name [blocksize = number_bytes] [stripe on syb_tsm::[[-S source_server_name] [-D source_database_name]::]object_name [blocksize = number_bytes]] [[stripe on syb_tsm::[[-S source_server_name] [-D source_database_name]::]object_name [blocksize = number_bytes]]...] [with { blocksize = number_bytes, passwd = password, listonly [= full], headeronly, notify = {client | operator_console}, until_time = datetime } ]
Explicitly locks a table within a transaction.
lock table table_name in {share | exclusive} mode [wait [numsecs] | nowait]
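lock table is valid only inside a transaction; a sketch with assumed names:

```sql
begin transaction
lock table titles in exclusive mode wait 5   -- give up after 5 seconds
update titles set price = price * 1.1
commit transaction
```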
Transfers rows from a source table into a target table:
Inserts rows that are in the source and have no matching key columns in the target.
Updates rows with key columns that already exist in the target with the values from the source row.
merge by {insert | update} into [[database.]owner.]identifier [as table_alias] using [[database.]owner.]identifier [as table_alias] | (select_query) as alias_name [column_list] on merge_search_condition [ when matched [and search_conditions ] then {update set {col_name = expression} | delete} ] [ when not matched [and search_conditions ] then insert [(column_list)] values (value_list) ]
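A hedged upsert sketch in the matched/not-matched form above (table and column names are assumptions):

```sql
merge into sales_summary as t
using daily_sales as s
on t.store_id = s.store_id
when matched then
    update set t.total = t.total + s.amount
when not matched then
    insert (store_id, total) values (s.store_id, s.amount)
```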
Attaches a database to a destination or secondary Adaptive Server.
mount database all | database_mapping[, database_mapping, ...] from "manifest_file" [using device_mapping [, device_mapping ...]] [with listonly] database_mapping: origdbname as newdbname | newdbname = origdbname | origdbname | newdbname device_mapping: logical_device_name as new_physical_name | new_physical_name = logical_device_name | original_physical_name | new_physical_name
Marks a database available for public use after a normal load sequence; if needed, upgrades a loaded database to the current version of Adaptive Server; brings a database online after loading a transaction log dumped with the for standby_access option. You can also use online database to bring an archive database online.
online database database_name [for standby_access]
Opens a cursor for processing.
open cursor_name
Returns query results in the specified columns in sorted order.
[Start of select statement] [order by {[table_name.| view_name.] column_name | select_list_number | expression} [asc | desc] [,{[table_name.| view_name.] column_name | select_list_number | expression} [asc | desc]]...]
[End of select statement]
Used by DB-Library in a two-phase commit application to see if a server is prepared to commit a transaction.
prepare tran[saction]
Prints a user-defined message on the user’s screen.
print {format_string | @local_variable | @@global_variable} [, arg_list]
Suspends and resumes updates to a specified list of databases.
quiesce database tag_name hold database_list [for external dump] [to manifest_file [with override]]
or:
quiesce database tag_name release
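A typical hold/release pair around an external device copy (the tag and database names are assumptions):

```sql
quiesce database qtag hold sales for external dump
-- ... copy the database devices with an external utility ...
quiesce database qtag release
```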
Prints a user-defined error message on the user’s screen and sets a system flag to record that an error condition has occurred.
raiserror error_number [{format_string | @local_variable}] [, arg_list] [with errordata restricted_select_list]
Reads text, unitext, and image values, starting from a specified offset and reading a specified number of bytes or characters.
readtext [[database.]owner.]table_name.column_name text_pointer offset size [holdlock | noholdlock] [readpast] [using {bytes | chars | characters}] [at isolation { [read uncommitted | 0] | [read committed | 1] | [repeatable read | 2]| [serializable | 3]}]
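For example, assuming the pubs2 sample database, to read five characters starting at offset 1 of a text column:

```sql
declare @ptr varbinary(16)
select @ptr = textptr(copy) from blurbs where au_id = "648-92-1872"
readtext blurbs.copy @ptr 1 5 using chars
```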
The reconfigure command currently has no effect; it is included to allow existing scripts to run without modification.
reconfigure
Removes one or more Java-SQL classes, packages, or JARs from a database, when Java classes are installed in the database.
remove java class class_name[, class_name]... | package package_name[, package_name]... | jar jar_name[, jar_name]...[retain classes]
Reclaims unused space on pages, removes row forwarding, or rewrites all rows in the table to new pages, depending on the option used.
reorg compact table_name [partition partition_name] [with {resume, time = no_of_minutes, compress}]
reorg forwarded_rows table_name [partition partition_name] [with {resume, time = no_of_minutes, compress}]
reorg rebuild table_name [index_name [partition index_partition_name]]
reorg reclaim_space table_name [index_name] [partition partition_name] [with {resume, time = no_of_minutes, compress}]
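Two representative invocations, with an assumed table name:

```sql
reorg rebuild titles                          -- rewrite all rows and rebuild indexes
reorg compact titles with resume, time = 10   -- resume a prior run, stop after 10 minutes
```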
Exits from a batch or procedure unconditionally and provides an optional return status. Statements following return are not executed.
return [integer_expression] [plan "abstract_plan"]
Revokes permission to access database objects:
revoke [grant option for] {all [privileges] | permission_list} on {table_name [(column_list)] | view_name [(column_list)] | stored_procedure_name | keyname} from {public | name_list | role_list} [cascade]
Revokes permission to select built-in functions:
revoke select on [builtin] builtin from {name_list | role_list}
Revokes permission to create database objects, execute set proxy, or execute set session authorization:
revoke {all [privileges] | command_list} from {public | name_list | role_list}
Revokes permission to run set proxy or set tracing:
revoke set {proxy | tracing} from {public | name_list | role_list}
Revokes a role from a user or another role:
revoke role {role_name [, role_list ...]} from {grantee [, grantee ...]}
Revokes access on some dbcc commands:
revoke dbcc {dbcc_command [on {all | database}] [, dbcc_command [on {all | database}], ...]} from {user_list | role_list}
Revokes permission from other users, groups, and roles to create encryption keys.
revoke create encryption key from user | role | group
Revokes row-filtering predicates.
revoke {all [privileges] | [all] permission_list} on table_name (column_list) [with { pred_name | {all |no} predicates}] from {public | name_list | role_list}
Revokes decrypt permission on a table or a list of columns in a table:
revoke decrypt on [owner.] tablename[(columnname [{,columname}])] from user | group | role
Revokes the default permissions from public:
revoke default permissions on system tables
Rolls back a user-defined transaction to the named savepoint in the transaction or to the beginning of the transaction.
rollback [tran | transaction | work] [transaction_name | savepoint_name]
Rolls back the work done in a trigger, including the data modification that caused the trigger to fire, and issues an optional raiserror statement.
rollback trigger [with raiserror_statement]
Sets a savepoint within a transaction.
save transaction savepoint_name
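Savepoints combine with rollback like this (table names are assumptions):

```sql
begin transaction
insert into orders values (1001, "pending")
save transaction before_detail
insert into order_detail values (1001, 1, 5)
rollback transaction before_detail   -- undoes only the detail insert
commit transaction                   -- the orders row is kept
```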
Retrieves rows from database objects.
select ::= select [all | distinct] [top unsigned_integer] select_list [into_clause] [from_clause] [where_clause] [group_by_clause] [having_clause] [order_by_clause] [compute_clause] [read_only_clause] [isolation_clause] [browse_clause] [plan_clause] [for_xml_clause]
select_list ::=
For details on select_list, see the “Parameters” section in Reference Manual: Commands.
into_clause ::= into [[database.] owner.] table_name [(colname encrypt [with [database.[owner].]keyname] [, colname encrypt_clause ...])] | [compressed = compression_level | not compressed] [in row [(length)] | off row ] [{[external table at] 'server_name.[database].[owner].object_name' | external directory at 'pathname' | external file at 'pathname' [column delimiter 'string']}] [on segment_name] dml_logging = (full | minimal) [partition_clause] [lock {datarows | datapages | allpages}] [with [, into_option[, into_option] ...]]] | into existing table table_name
partition_clause ::= partition by range (column_name[, column_name]...) ([partition_name] values <= ({constant | MAX} [, {constant | MAX}] ...) [on segment_name] [compression_clause] [on segment_name] [, [partition_name] values <= ({constant | MAX} [, {constant | MAX}] ...) [on segment_name]]...) [compression_clause] [on segment_name] | partition by hash (column_name[, column_name]...) { (partition_name [on segment_name] [compression_clause] [on segment_name] [, partition_name [on segment_name]]...) [compression_clause] [on segment_name] | number_of_partitions [on (segment_name[, segment_name] ...)]} | partition by list (column_name) ([partition_name] values (constant[, constant] ...) [compression_clause] [on segment_name] [, [partition_name] values (constant[, constant] ...) [compression_clause] [on segment_name] | partition by roundrobin { (partition_name [on segment_name] [, partition_name [on segment_name]]...) [compression_clause] [on segment_name] | number_of_partitions [on (segment_name [, segment_name]...)]}
into_option ::= max_rows_per_page = num_rows | exp_row_size = num_bytes | reservepagegap = num_pages | identity_gap = gap | compression = {none | page | row} | lob_compression = {off | compression_level}
from_clause ::= from table_reference [,table_reference]...
table_reference ::= table_view_name | ANSI_join
table_view_name ::= [[database.]owner.] {{table_name | view_name} [as] [correlation_name] [(index {index_name | table_name})] [parallel [degree_of_parallelism]] [prefetch size][lru | mru]} [holdlock | noholdlock] [readpast] [shared]
ANSI_join ::= table_reference join_type join table_reference join_conditions join_type ::= inner | left [outer] | right [outer] join_conditions ::= on search_conditions
compression_clause::= with compression = {none | page | row}
where_clause ::= where search_conditions [for update [of column_list]]
group_by_clause ::= group by [all] aggregate_free_expression [, aggregate_free_expression]...
having_clause ::= having search_conditions
order_by_clause ::= order by sort_clause [, sort_clause]...
sort_clause ::= {[[[database.]owner.]{table_name.|view_name.}]column_name | select_list_number | expression } [asc | desc]
compute_clause ::= compute row_aggregate (column_name) [, row_aggregate (column_name)]... [by column_name [, column_name]...]
read_only_clause ::= for {read only | update [of column_name_list]}
isolation_clause ::= at isolation {read uncommitted | 0} | {read committed | 1} | {repeatable read | 2} | {serializable | 3}
browse_clause ::= for browse
plan_clause ::= plan "abstract plan"
Sets Adaptive Server query-processing options for the duration of the user’s work session; sets some options inside a trigger or stored procedure.
set advanced_aggregation {on | off}
set @variable = expression [, @variable = expression...]
set ansinull {on | off}
set ansi_permissions {on | off}
set arithabort [arith_overflow | numeric_truncation] {on | off}
set arithignore [arith_overflow] {on | off}
set bulk array size number
set bulk batch size number
set builtin_date_strings number
set {chained, close on endtran, nocount, noexec, parseonly, self_recursion, showplan, sort_resources} {on | off}
set char_convert {off | on [with {error | no_error}] | charset [with {error | no_error}]}
set cis_rpc_handling {on | off}
set [clientname client_name | clienthostname host_name | clientapplname application_name]
set compression {on | off | default}
set cursor rows number for cursor_name
set {datefirst number, dateformat format, language language}
set delayed_commit {on | off | default}
set deferred_name_resolution { on | off }
set dml_logging {minimal | default}
set encryption passwd 'password_phrase' for {key | column} {keyname | column_name}
set export_options [on | off]
set fipsflagger {on | off}
set flushmessage {on | off}
set fmtonly {on | off}
set forceplan {on | off}
set identity_insert [database.[owner.]]table_name {on | off}
set identity_update table_name {on | off}
set index_union on | off
set literal_autoparam on | off
set lock {wait [numsecs] | nowait}
set metrics_capture on | off
set offsets {select, from, order, compute, table, procedure, statement, param, execute} {on | off}
set option show
set opttimeoutlimit
set parallel_degree number
set plan {dump | load} [group_name] {on | off}
set plan exists check {on | off}
set plan for show
set plan optgoal {allrows_mix | allrows_dss}
set plan opttimeoutlimit number
set plan replace {on | off}
set prefetch [on|off]
set print_minlogged_mode_override
set proc_output_params {on | off}
set proc_return_status {on | off}
set process_limit_action {abort | quiet | warning}
set proxy login_name
set quoted_identifier {on | off}
set repartition_degree number
set repthreshold number
set resource_granularity number
set role {"sa_role" | "sso_role" | "oper_role" | role_name [with passwd "password"]} {on | off}
set {rowcount number, textsize number}
set scan_parallel_degree number
set send_locator {on | off }
set session authorization login_name
set switch [serverwide] {on | off} trace_flag [, trace_flag ...] [with option [, option ...]]
set show_exec_info ["on" | "off"]
set show_sqltext {on | off}
set show_transformed_sql {on | off}
set statement_cache on | off
set statistics {io, subquerycache, time, plancost} {on | off}
set statistics simulate {on | off}
set strict_dtm_enforcement {on | off}
set string_rtruncation {on | off}
set system_view {instance | cluster | clear}
set textsize {number}
set tracefile [filename] [off] [for spid]
set transaction isolation level { [read uncommitted | 0] | [read committed | 1] | [repeatable read | 2] | [serializable | 3]}
set transactional_rpc {on | off}
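A few common session settings in combination:

```sql
set transaction isolation level 3   -- serializable for this session
set rowcount 100                    -- cap result sets at 100 rows
select * from titles                -- assumed pubs2 table
set rowcount 0                      -- remove the cap
```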
Allows a database owner to impersonate another user.
setuser ["user_name"]
Shuts down the Adaptive Server from which the command is issued, its local Backup Server, or a remote Backup Server.
shutdown [srvname] [with {wait [="hh:mm:ss"] | nowait}]
Syntax for clusters:
shutdown {cluster | [instance_name]} [with {wait | nowait}]
Initiates an incremental table transfer.
transfer table [[db.]owner.]table [to | from] destination_file [for {ase | bcp | iq | csv}] [with {column_separator = string}, {column_order = option}, {encryption = option}, {row_separator = string}, {resend = id}, {progress = sss}, {tracking_id = nnn}, {sync = true | false}, {fixed_length = true | false}, {null_byte = true | false}]
Truncates a LOB to a specified length.
truncate lob locator_descriptor [(result_length)]
Removes all rows from a table or partition.
truncate table [[database.]owner.]table_name [partition partition_name]
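For example, with assumed table and partition names:

```sql
truncate table sales_history            -- remove all rows, minimally logged
truncate table sales partition q1_ptn   -- clear a single partition
```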
Returns a single result set that combines the results of two or more queries. Duplicate rows are eliminated from the result set unless the all keyword is specified.
select [top unsigned_integer] select_list [into clause] [from clause] [where clause] [group by clause] [having clause] [union [all] select [top unsigned_integer] select_list [from clause] [where clause] [group by clause] [having clause]]... [order by clause] [compute clause]
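A minimal union, assuming the pubs2 sample database:

```sql
select city from authors
union
select city from stores
order by city
```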
Shuts down the database and drops it from the Adaptive Server.
unmount database dbname_list to manifest_file
Changes data in existing rows, either by adding data or by modifying existing data.
update [top unsigned_integer] [[database.]owner.]{table_name | view_name} set [[[database.]owner.]{table_name.|view_name.}] column_name1 = {expression1 | NULL | (select_statement)} | variable_name1 = {expression1 | NULL | (select_statement)} [, column_name2 = {expression2 | NULL | (select_statement)}]... | [, variable_name2 = {expression2 | NULL | (select_statement)}]... [from [[database.]owner.]{view_name [readpast]| table_name [(index {index_name | table_name} [prefetch size][lru|mru])]} [readpast] [,[[database.]owner.]{view_name [readpast]| table_name [(index {index_name | table_name} [prefetch size][lru|mru])]}] [readpast] ...] [where search_conditions] [plan "abstract plan"]
update [[database.]owner.]{table_name | view_name} set [[[database.]owner.]{table_name.|view_name.}] column_name1 = {expression1 | NULL | (select_statement)} | variable_name1 = {expression1 | NULL | (select_statement)} [, column_name2 = {expression2 | NULL | (select_statement)}]... | [, variable_name2 = {expression2 | NULL | (select_statement)}]... where current of cursor_name
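A searched update that joins a second table, assuming pubs2-style names:

```sql
update titles
set price = price * 1.05
from titles, publishers
where titles.pub_id = publishers.pub_id
  and publishers.state = "CA"
```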
Updates all statistics information for a given table. You can run update all statistics on a single data partition.
update all statistics table_name [partition data_partition_name]
Updates the statistics for all columns in an index.
update index statistics table_name [[partition data_partition_name] | [index_name [partition index_partition_name]]] [using step values] [with consumers = consumers] [, sampling=N percent]
Updates information about the distribution of key values in specified indexes, for all columns in an index, table, or partition, and resets the data change counters for global nonclustered indexes.
update statistics table_name [[partition data_partition_name] [(column_list)] | index_name [partition index_partition_name]] [using step values] [with consumers = consumers][, sampling=N percent]
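Representative calls, with an assumed table:

```sql
update statistics titles        -- leading columns of each index
update index statistics titles with sampling = 10 percent
```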
update table statistics updates statistics stored in the systabstats table, such as rowcount and cluster ratios. It does not affect column statistics stored in sysstatistics.
update table statistics table_name [partition data_partition_name] [index_name [partition index_partition_name]]
Specifies the database with which you want to work.
use database_name
Specifies a specific time, a time interval, or an event for the execution of a statement block, stored procedure, or transaction.
waitfor {delay time | time time | errorexit | processexit | mirrorexit}
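For example, to pause a batch for thirty seconds before running a procedure:

```sql
begin
    waitfor delay "00:00:30"
    execute sp_who
end
```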
Sets the search conditions in a select, insert, update, or delete statement.
where [not] expression comparison_operator expression
where {[not] expression comparison_operator expression} | {...}
where [not] expression [not] like "match_string" [escape "escape_character "]
where [not] expression is [not] null
where [not] expression [not] between expression and expression
where [not] expression [not] in ({value_list | subquery})
where [not] exists (subquery)
where [not] expression comparison_operator {any | all} (subquery)
where [not] column_name join_operator column_name
where [not] logical_expression
where [not] expression {and | or} [not] expression
where column_name is [not] null
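Several of the forms above combined in one query, assuming the pubs2 sample database:

```sql
select title from titles
where price between 5 and 20
  and type in ("business", "psychology")
  and title like "%computer%"
  and advance is not null
```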
Sets a condition for the repeated execution of a statement or statement block. The statements are executed repeatedly, as long as the specified condition is true.
while logical_expression [plan "abstract plan"] statement
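A minimal counting loop:

```sql
declare @i int
select @i = 1
while @i <= 5
begin
    select @i          -- emit the current counter value
    select @i = @i + 1
end
```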
Permits minimally logged, interactive updating of an existing text, unitext, or image column.
writetext [[database.]owner.]table_name.column_name text_pointer [readpast] [with log] data