Retrieves rows from database objects.
select ::=
    select [all | distinct] [top unsigned_integer] select_list
        [into_clause]
        [from_clause]
        [where_clause]
        [group_by_clause]
        [having_clause]
        [order_by_clause]
        [compute_clause]
        [read_only_clause]
        [isolation_clause]
        [browse_clause]
        [plan_clause]
        [for_xml_clause]
select_list ::=
For details on select_list, see the “Parameters” section.
into_clause ::=
    into [[database.]owner.]table_name
        [(colname encrypt [with [database.[owner].]keyname]
            [, colname encrypt_clause ...])]
        [compressed = compression_level | not compressed]
        [in row [(length)] | off row]
        [{ [external table at] 'server_name.[database].[owner].object_name'
         | external directory at 'pathname'
         | external file at 'pathname' [column delimiter 'string'] }]
        [on segment_name]
        [dml_logging = (full | minimal)]
        [partition_clause]
        [lock {datarows | datapages | allpages}]
        [with into_option [, into_option] ...]
    | into existing table table_name
partition_clause ::=
      partition by range (column_name [, column_name]...)
        ([partition_name] values <= ({constant | MAX} [, {constant | MAX}]...)
            [on segment_name] [compression_clause] [on segment_name]
         [, [partition_name] values <= ({constant | MAX} [, {constant | MAX}]...)
            [on segment_name] [compression_clause] [on segment_name]]...)
    | partition by hash (column_name [, column_name]...)
        { (partition_name [on segment_name] [compression_clause] [on segment_name]
           [, partition_name [on segment_name] [compression_clause] [on segment_name]]...)
        | number_of_partitions [on (segment_name [, segment_name]...)] }
    | partition by list (column_name)
        ([partition_name] values (constant [, constant]...)
            [compression_clause] [on segment_name]
         [, [partition_name] values (constant [, constant]...)
            [compression_clause] [on segment_name]]...)
    | partition by roundrobin
        { (partition_name [on segment_name]
           [, partition_name [on segment_name]]...)
           [compression_clause] [on segment_name]
        | number_of_partitions [on (segment_name [, segment_name]...)] }
into_option ::=
      max_rows_per_page = num_rows
    | exp_row_size = num_bytes
    | reservepagegap = num_pages
    | identity_gap = gap
    | compression = {none | page | row}
    | lob_compression = {off | compression_level}
from_clause ::= from table_reference [,table_reference]...
table_reference ::= table_view_name | ANSI_join
table_view_name ::=
    [[database.]owner.]{table_name | view_name}
        [[as] correlation_name]
        [(index {index_name | table_name})]
        [parallel [degree_of_parallelism]]
        [prefetch size] [lru | mru]
    [holdlock | noholdlock]
    [readpast]
    [shared]
ANSI_join ::= table_reference join_type join table_reference join_conditions
join_type ::= inner | left [outer] | right [outer]
join_conditions ::= on search_conditions
compression_clause::= with compression = {none | page | row}
where_clause ::= where search_conditions [for update [of column_list]]
group_by_clause ::= group by [all] aggregate_free_expression [, aggregate_free_expression]...
having_clause ::= having search_conditions
order_by_clause ::= order by sort_clause [, sort_clause]...
sort_clause ::= {[[[database.]owner.]{table_name.|view_name.}]column_name | select_list_number | expression } [asc | desc]
compute_clause ::= compute row_aggregate (column_name) [, row_aggregate (column_name)]... [by column_name [, column_name]...]
read_only_clause ::= for {read only | update [of column_name_list]}
isolation_clause ::= at isolation {read uncommitted | 0} | {read committed | 1} | {repeatable read | 2} | {serializable | 3}
browse_clause ::= for browse
plan_clause ::= plan "abstract plan"
See the XML Services book for syntax, examples, and usage information for the select...for_xml_clause.
includes all rows in the results. all is the default.
includes only unique rows in the results. distinct must be the first word in the select list. distinct is ignored in browse mode.
Null values are considered equal for the purposes of the keyword distinct: only one NULL is selected, no matter how many are encountered.
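For instance, a minimal sketch against the pubs2 sample database (table and column names assume pubs2 is installed):

```sql
-- Each state appears once in the output; any NULL values
-- collapse into a single NULL row.
select distinct state
from authors
```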
is used with select...into statements to limit the number of rows inserted in the target table. This is different from set rowcount, which is ignored during a select...into.
When used with delete, update, or in a view, you cannot specify ordering. If there is an implied order on the table from a clustered index, that order applies; otherwise, the results are unpredictable, as they can be in any order.
n is an unsigned 32-bit value between 0 and 2^32 - 1 (4GB - 1, or 4,294,967,295). A value of zero returns no rows.
When used with cursors, top n limits the overall size of the result set. Specifying set cursor rowcount limits the results of a single fetch.
When a view definition contains select top n and a query with a where clause uses it, the results may be inconsistent.
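A minimal sketch of top n, assuming the pubs2 titles table:

```sql
-- Returns at most two rows; combined with order by, these are
-- the two highest-priced titles.
select top 2 title, price
from titles
order by price desc
```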
consists of one or more of the following items:
“*”, representing all columns in create table order.
A list of column names in the order in which you want to see them. When selecting an existing IDENTITY column, you can substitute the syb_identity keyword, qualified by the table name, where necessary, for the actual column name.
A specification to add a new IDENTITY column to the result table:
column_name = identity (int | smallint | tinyint | precision)
If you specify int, smallint, or tinyint, the resulting column is an integer. If you specify precision, the result is a numeric datatype.
A replacement for the default column heading (the column name), in one of these forms:
column_heading = column_name
column_name column_heading
column_name as column_heading
The column heading can be enclosed in quotation marks for any of these forms. The heading must be enclosed in quotation marks if it is not a valid identifier (that is, if it is a reserved word, if it begins with a special character, or if it contains spaces or punctuation marks).
An expression (a column name, constant, function, or any combination of column names, constants, and functions connected by arithmetic or bitwise operators, or a subquery).
A built-in function or an aggregate.
Any combination of the items listed above.
The select_list can also assign values to variables, in the form:
@variable = expression [, @variable = expression ...]
You cannot combine variable assignment with any other select_list option.
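A sketch of variable assignment in a select_list, assuming the pubs2 titles table:

```sql
-- Assign an aggregate result to a variable, then reuse the
-- variable in a second query.
declare @maxprice money
select @maxprice = max(price) from titles
select title, price from titles where price = @maxprice
```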
except when used with existing table, creates a new table based on the columns specified in the select list and the rows chosen in the where clause. See “Using select into”.
Specifies encryption on colname in the target table. By default, Adaptive Server decrypts data selected from the source table. You must use the encrypt keyword to preserve the data encryption or to encrypt a column in the target database that was not encrypted in the source database.
indicates if the large object (LOB) data in the row is compressed and to what level.
indicates the compression level of the row:
0 – the row is not compressed.
1 through 9 – Adaptive Server uses ZLib compression. Generally, the higher the compression number, the more Adaptive Server compresses the LOB data, and the greater the ratio between compressed and uncompressed data (that is, the greater the space savings, in bytes, for the compressed data versus the size of the uncompressed data).
However, the amount of compression depends on the LOB content, and the higher the compression level, the more CPU-intensive the process. That is, level 9 provides the highest compression ratio but also the heaviest CPU usage.
100 – Adaptive Server uses FastLZ compression. This level has the lowest CPU cost and is generally used for shorter data.
101 – Adaptive Server uses FastLZ compression. A value of 101 uses slightly more CPU than a value of 100, but achieves a better compression ratio.
The compression algorithm ignores rows that do not use LOB data.
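A sketch of selecting into a table with LOB compression; blurbs_copy is a hypothetical target name, and blurbs (which contains a text column) is from the pubs2 sample database:

```sql
-- Level 100 selects FastLZ compression for the LOB data
-- in the target table.
select * into blurbs_copy
with lob_compression = 100
from blurbs
```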
is a list of columns separated by commas.
specifies the key used on the source data, or a different key.
sets or changes the in-row characteristics for the LOB columns in the target table. If you do not specify length, Adaptive Server uses the configured default in-row length.
By default, a LOB column in the target table inherits the storage property of the corresponding LOB column in the select list. If the target table’s LOB column is produced as a result of an expression, such as the convert(text, column) built-in function, the column then automatically uses off-row storage unless you change the setting by specifying in row [(length)].
changes the storage format of the column from in-row to off-row.
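A sketch of the in row option following the into_clause syntax above; blurbs_inrow is a hypothetical target name:

```sql
-- Store LOB values of up to 400 bytes in-row in the target
-- table; longer values are stored off-row.
select au_id, copy into blurbs_inrow
in row (400)
from blurbs
```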
indicates that the type of the external object is a table, directory, or file. If you do not indicate a file, directory, or table, select into assumes that you are using a table.
You cannot specify an external location when using any part of the partition_clause. Partitions can be created only on tables on the current server and database.
indicates that you are selecting into a table or view found on the remote server_name.
determines the amount of logging for insert, update, and delete operations, and for some forms of bulk inserts. One of:
full – Adaptive Server logs all transactions.
minimal – Adaptive Server does not log row or page changes.
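A sketch of dml_logging in a select into; sales_staging is a hypothetical target name, and the exact placement of the option follows the into_clause syntax above (it may vary by server version):

```sql
-- Subsequent inserts, updates, and deletes on sales_staging
-- are minimally logged.
select * into sales_staging
with dml_logging = minimal
from salesdetail
```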
indicates the full, operating system-specific path name of the external file or directory you are selecting into. All directories in pathname must be accessible to Adaptive Server.
indicates the delimiter that you are using to separate columns after converting the column’s data to string format. string can have as many as 16 characters. If you do not specify a delimiter, select into uses the tab character.
indicates that you are selecting data into a proxy table. You cannot use this select into with any other table type except proxy. The column list in the select list must match the type, length, and number in the proxy table.
specifies that the table is to be created on the named segment. Before the on segment_name option can be used, the device must be initialized with disk init, and the segment must be added to the database with sp_addsegment. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.
specifies records are to be partitioned according to values in the partitioning column or columns. Each partitioning column value is compared with sets of user-supplied upper and lower bounds to determine partition assignment.
when used in the partition_clause, specifies a partition key column.
specifies the name of a new partition on which table records are to be stored. Partition names must be unique within the set of partitions on a table or index. Partition names can be delimited identifiers if set quoted_identifier is on. Otherwise, they must be valid identifiers.
If partition_name is omitted, Adaptive Server creates a name in the form table_name_partition_id. Adaptive Server truncates partition names that exceed the allowed maximum length.
specifies the inclusive upper bound of values for a named partition. Specifying a constant value for the highest partition bound imposes an implicit integrity constraint on the table. The keyword MAX specifies the maximum value in a given datatype.
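A sketch of range partitioning in a select into, with hypothetical partition and target names (salesdetail is from pubs2):

```sql
-- Rows with qty <= 1000 go to partition p1; all remaining
-- rows (up to MAX) go to p2.
select ord_num, title_id, qty
into sales_by_qty
partition by range (qty)
    (p1 values <= (1000),
     p2 values <= (MAX))
from salesdetail
```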
when used in the partition_clause, specifies the name of the segment on which to place the partition. When using on segment_name, the logical device must already have been assigned to the database with create database or alter database, and the segment must have been created in the database with sp_addsegment. See your system administrator or use sp_helpsegment for a list of the segment names available in your database.
specifies records are to be partitioned by a system-supplied hash function. The function computes the hash value of the partition keys that specify the partition to which records are assigned.
specifies records are to be partitioned according to literal values specified in the named column. The partition key contains only one column. You can list up to 250 constants as the partition values for each list partition.
specifies records are to be partitioned in a sequential manner. A round-robin-partitioned table has no partitioning key. Neither the user nor the optimizer knows in which partition a particular record resides.
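A sketch of hash partitioning in a select into, with a hypothetical target name (titles is from pubs2):

```sql
-- Distributes rows across four hash partitions keyed on
-- title_id; round-robin partitioning would omit the key column.
select *
into titles_hashed
partition by hash (title_id) 4
from titles
```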
specifies the locking scheme to be used for a table created with a select into command. The default is the server-wide setting for the configuration parameter lock scheme.
limits the number of rows on data pages for a table created with select into. Unlike fillfactor, the max_rows_per_page value is maintained when data is inserted or deleted. max_rows_per_page is not supported on data-only-locked tables.
specifies the expected row size for a table created with the select into command. Valid only for datarows and datapages locking schemes and only for tables that have variable-length rows. Valid values are 0, 1, and any value greater than the minimum row length and less than the maximum row length for the table. The default value is 0, which means that a server-wide default is used.
specifies a ratio of filled pages to empty pages that is to be left as select into allocates extents to store data. This option is valid only for the select into command. For each specified num_pages, one empty page is left for future expansion of the table. Valid values are 0 – 255. The default value is 0.
specifies that the query should silently skip rows with exclusive locks, without waiting and without generating a message.
specifies the identity gap for the table. This value overrides the system identity gap setting for this table only.
indicates the level of compression to be applied to the table or partition. The new compression level applies to the newly inserted or updated data:
none – the data in this table or partition is not compressed. For partitions, none indicates that data in this partition remains uncompressed even if the table compression is altered to row or page compression.
row – compresses one or more data items in an individual row. Adaptive Server stores data in a row-compressed form only if the compressed form saves space compared to an uncompressed form. Set row compression at the partition or table level.
page – when the page fills, existing data rows that are row-compressed are then compressed using page-level compression to create page-level dictionary, index, and character-encoding entries. Set page compression at the partition or table level.
Adaptive Server compresses data at the page level only after it has compressed data at the row level, so setting the compression to page implies both page and row compression.
determines the compression level for the table.
is the identity gap amount.
If you are creating a table in a select into statement from a table that has a specific identity gap setting, the new table does not inherit the identity gap setting from the parent table. Instead, the new table uses the identity burning set factor setting. To give the new table a specific identity_gap setting, specify the identity gap in the select into statement. You can give the new table an identity gap that is the same as or different from the parent table.
indicates which tables and views to use in the select statement. The from clause is required except when the select list contains no column names (that is, it contains only constants and arithmetic expressions):
select 5 x, 2 y, "the product is", 5*2 Result

x       y                          Result
------- ------- ------------------ -----------
5       2       the product is     10
At most, a query can reference 50 tables and 14 worktables (such as those created by aggregate functions). The 50-table limit includes:
Tables (or views on tables) listed in the from clause
Each instance of multiple references to the same table (self-joins)
Tables referenced in subqueries
Tables being created with into
Base tables referenced by the views listed in the from clause
lists tables and views used in the select statement. Specify the database name if the table or view is in another database, and specify the owner’s name if more than one table or view of that name exists in the database. The default value for owner is the current user, and the default value for database is the current database.
If there is more than one table or view in the list, separate their names by commas. The order of the tables and views following the keyword from does not affect the results.
You can query tables in different databases in the same statement.
Table names and view names can be given correlation names (aliases), either for clarity or to distinguish the different roles that tables or views play in self-joins or subqueries. To assign a correlation name, give the table or view name, then a space, then the correlation name, like this:
select pub_name, title_id
from publishers pu, titles t
where t.pub_id = pu.pub_id
All other references to that table or view (for example, in a where clause) must use the correlation name. Correlation names cannot begin with a numeral.
specifies the index to use to access table_name. You cannot use this option when you select from a view, but you can use it as part of a select clause in a create view statement.
specifies a parallel partition or index scan, if Adaptive Server is configured to allow parallel processing.
specifies the number of worker processes that will scan the table or index in parallel. If set to 1, the query executes serially.
specifies the I/O size, in kilobytes, for tables bound to caches with large I/Os configured. You cannot use this option when you select from a view, but you can use it as part of a select clause in a create view statement. sp_helpcache shows the valid sizes for the cache an object is bound to or for the default cache. To configure the data cache size, use sp_cacheconfigure.
When using prefetch and designating the prefetch size (size), the minimum is the logical page size, and valid values are powers of two up to eight times the logical page size. prefetch size options in kilobytes are:

| Logical page size | Prefetch size options |
|---|---|
| 2 | 2, 4, 8, 16 |
| 4 | 4, 8, 16, 32 |
| 8 | 8, 16, 32, 64 |
| 16 | 16, 32, 64, 128 |
The prefetch size specified in the query is only a suggestion. To allow the size specification, configure the data cache at that size. If you do not configure the data cache to a specific size, the default prefetch size is used.
If CIS is enabled, you cannot use prefetch for remote servers.
specifies the buffer replacement strategy to use for the table. Use lru to force the optimizer to read the table into the cache on the MRU/LRU (most recently used/least recently used) chain. Use mru to discard the buffer from cache and replace it with the next buffer for the table. You cannot use this option when you select from a view, but you can use it as part of a select clause in a create view statement.
makes a shared lock on a specified table or view more restrictive by holding it until the transaction completes (instead of releasing the shared lock as soon as the required data page is no longer needed, whether or not the transaction has completed).
The holdlock option applies only to the table or view for which it is specified, and only for the duration of the transaction defined by the statement in which it is used. Setting the transaction isolation level 3 option of the set command implicitly applies a holdlock for each select statement within a transaction. The keyword holdlock is not permitted in a select statement that includes the for browse option. You cannot specify both a holdlock and a noholdlock option in a query.
If CIS is enabled, you cannot use holdlock for remote servers.
prevents the server from holding any locks acquired during the execution of this select statement, regardless of the transaction isolation level currently in effect. You cannot specify both a holdlock and a noholdlock option in a query.
instructs Adaptive Server to use a shared lock (instead of an update lock) on a specified table or view. This allows other clients to obtain an update lock on that table or view. You can use the shared keyword only with a select clause included as part of a declare cursor statement. For example:
declare shared_crsr cursor
for select title, title_id
from titles shared
where title_id like "BU%"
You can use the holdlock keyword in conjunction with shared after each table or view name, but holdlock must precede shared.
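A sketch combining the two keywords, with a hypothetical cursor name (titles is from pubs2); note that holdlock precedes shared:

```sql
declare upd_crsr cursor
for select title, price
from titles holdlock shared
where type = "business"
```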
an inner or outer join that uses the ANSI syntax. The from clause specifies the tables to be joined.
includes only the rows of the inner and outer tables that meet the conditions of the on clause. The result set of a query that includes an inner join does not include any null-supplied rows for the rows of the outer table that do not meet the conditions of the on clause.
includes all the rows from the outer table whether or not they meet the conditions of the on clause. If a row does not meet the conditions of the on clause, values from the inner table are stored in the joined table as null values. The where clause of an ANSI outer join restricts the rows that are included in the query result.
left joins retain all the rows of the table reference listed on the left of the join clause. The left table reference is referred to as the outer table or row-preserving table.
In the queries below, T1 is the outer table and T2 is the inner table:
T1 left join T2
T2 right join T1
right joins retain all the rows of the table reference on the right of the join clause (see example above).
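A sketch mirroring the left join form, assuming the pubs2 authors and publishers tables:

```sql
-- All rows of publishers (the right, row-preserving table) are
-- retained; unmatched rows get null values for author columns.
select pub_name, au_lname
from authors right join publishers
on authors.city = publishers.city
```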
used to set the conditions for the rows that are retrieved. A search condition can include column names, expressions, arithmetic operators, comparison operators, the keywords not, like, is null, and, or, between, in, exists, any, and all, subqueries, case expressions, or any combination of these items. See where clause for more information.
finds a value for each group. These values appear as new columns in the results, rather than as new rows.
When group by is used with standard SQL, each item in the select list must either have a fixed value in every row in the group or be used with aggregate functions, which produce a single value for each group. Transact-SQL has no such restrictions on the items in the select list. Also, Transact-SQL allows you to group by any expression (except by a column alias); with standard SQL, you can group by a column only.
You can use the aggregates listed in Table 1-27 with group by (expression is almost always a column name):
| Aggregate function | Result |
|---|---|
| sum ([all \| distinct] expression) | Total of the values in the numeric column. |
| avg ([all \| distinct] expression) | Average of the values in the numeric column. |
| count ([all \| distinct] expression) | Number of (distinct) non-null values in the column, returned as an integer. |
| count_big ([all \| distinct] expression) | Number of (distinct) non-null values in the column, returned as a bigint. |
| count (*) | Number of selected rows, returned as an integer. |
| count_big (*) | Number of selected rows, returned as a bigint. |
| max (expression) | Highest value in the column. |
| min (expression) | Lowest value in the column. |
See group by and having clauses for more information.
A table can be grouped by any combination of columns—that is, groups can be nested within each other. You cannot group by a column heading; you must use a column name, an expression, or a number representing the position of the item in the select list.
includes all groups in the results, even those that do not have any rows that meet the search conditions. See group by and having clauses for an example.
is an expression that includes no aggregates.
sets conditions for the group by clause, similar to the way that where sets conditions for the select clause. There is no limit on the number of conditions that can be included.
You can use a having clause without a group by clause.
If any columns in the select list do not have aggregate functions applied to them and are not included in the query’s group by clause (illegal in standard SQL), the meanings of having and where are somewhat different.
In this situation, a where clause restricts the rows that are included in the calculation of the aggregate, but does not restrict the rows returned by the query. Conversely, a having clause restricts the rows returned by the query, but does not affect the calculation of the aggregate. See group by and having clauses for examples.
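A sketch of a having clause without group by, assuming the pubs2 titles table; because total_sales carries no aggregate and there is no group by, this relies on the Transact-SQL extension described above:

```sql
-- The aggregate is computed over all qualifying rows; having
-- then restricts which rows are returned.
select title_id, total_sales
from titles
having total_sales > avg(total_sales)
```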
sorts the results by columns. In Transact-SQL, you can use order by for items that do not appear in the select list. You can sort by a column name, a column heading (or alias), an expression, or a number representing the position of the item in the select list (the select_list_number). If you sort by select list number, the columns to which the order by clause refers must be included in the select list, and the select list cannot be * (asterisk).
sorts results in ascending order (the default).
sorts results in descending order.
used with row aggregates (sum, avg, min, max, count, and count_big) to generate control break summary values. The summary values appear as additional rows in the query results, allowing you to see detail and summary rows with one statement.
You cannot use a select into clause with compute.
If you use compute by, you must also use an order by clause. The columns listed after compute by must be identical to or a subset of those listed after order by, and must be in the same left-to-right order, start with the same expression, and not skip any expressions.
For example, if the order by clause is order by a, b, c, the compute by clause can be any (or all) of these:

compute by a, b, c
compute by a, b
compute by a
The keyword compute can be used without by to generate grand totals, grand counts, and so on. order by is optional if you use compute without by. See compute clause for details and examples.
If CIS is enabled, compute is not forwarded to remote servers.
specifies that a cursor result set is read-only or updatable.
With Adaptive Server versions earlier than 15.7, you can use this option only within a stored procedure, and only when the procedure defines a query for a cursor. In this case, the select is the only statement allowed in the procedure. It defines the for read only or for update option (instead of the declare cursor statement). This method of declaring cursors provides the advantage of page-level locking while fetching rows.
Also, with Adaptive Server versions earlier than 15.7, if the select statement in the stored procedure is not used to define a cursor, Adaptive Server ignores the for read only | update option. See the Embedded SQL™ documentation for more information about using stored procedures to declare cursors.
With Adaptive Server 15.7 and later, if select for update is set, you can use the for update option at the language level outside of a stored procedure, but within a transaction. Such a select need not refer to a cursor. When you use select for update in datarows-locked tables, the selected rows are exclusively locked for the duration of the transaction.
For information about read-only or updatable cursors, see the Transact-SQL User’s Guide.
is the list of columns from a cursor result set defined as updatable with the for update option.
specifies the isolation level (0, 1, 2 or 3) of the query. If you omit this clause, the query uses the isolation level of the session in which it executes (isolation level 1 by default). The at isolation clause is valid only for single queries or within the declare cursor statement. Adaptive Server returns a syntax error if you use at isolation:
With a query using the into clause
Within a subquery
With a query in the create view statement
With a query in the insert statement
With a query using the for browse clause
If there is a union operator in the query, you must specify the at isolation clause after the last select. If you specify holdlock, noholdlock, or shared in a query that also specifies at isolation read uncommitted, Adaptive Server issues a warning and ignores the at isolation clause. For the other isolation levels, holdlock takes precedence over the at isolation clause. For more information about isolation levels, see the Transact-SQL User’s Guide.
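A sketch of at isolation with union, assuming pubs2 tables; the clause appears once, after the last select:

```sql
select pub_id from publishers
union
select pub_id from titles
at isolation read uncommitted
```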
If CIS is enabled, you cannot use at isolation for remote servers.
specifies isolation level 0 for the query.
specifies isolation level 1 for the query.
specifies transaction isolation level 2 for the query.
specifies isolation level 3 for the query.
must be attached to the end of a SQL statement sent to Adaptive Server in a DB-Library browse application. See the Open Client DB-Library Reference Manual for details.
specifies the abstract plan to use to optimize the query. It can be a full or partial plan, specified in the abstract plan language. See Chapter 30, “Creating and Using Abstract Plans,” in the Performance and Tuning Guide for more information.
Selects all rows and columns from the publishers table:
select * from publishers
pub_id pub_name                    city                 state
------ --------------------------- -------------------- -----
0736   New Age Books               Boston               MA
0877   Binnet & Hardley            Washington           DC
1389   Algodata Infosystems        Berkeley             CA
Selects all rows from specific columns of the publishers table:
select pub_id, pub_name, city, state from publishers
Selects all rows from specific columns of the publishers table, substituting one column name and adding a string to the output:
select "The publisher's name is", Publisher = pub_name, pub_id from publishers
                        Publisher                     pub_id
----------------------- ----------------------------- ------
The publisher's name is New Age Books                 0736
The publisher's name is Binnet & Hardley              0877
The publisher's name is Algodata Infosystems          1389
Selects all rows from specific columns of the titles table, substituting column names:
select type as Type, price as Price from titles
Specifies the locking scheme and the reserve page gap for select into:
select title_id, title, price
into bus_titles
lock datarows with reservepagegap = 10
from titles
where type = "business"
Encrypts the creditcard column when selecting into the #bigspenders table:
select creditcard, custid, sum(amount)
into #bigspenders
    (creditcard encrypt with cust.database.new_cc_key)
from daily_xacts
group by creditcard
having sum(amount) > $5000
Selects only the rows that are not exclusively locked. If any other user has an exclusive lock on a qualifying row, that row is not returned:
select title, price
from titles readpast
where type = "news"
and price between $20 and $30
Selects specific columns and rows, placing the results into the temporary table #advance_rpt:
select pub_id, total = sum (total_sales)
into #advance_rpt
from titles
where advance < $10000
and total_sales is not null
group by pub_id
having count (*) > 1
Selects the first three au_lname values from the authors table:
select top 3 au_lname from authors
Concatenates two columns and places the results into the temporary table #tempnames:
select "Author_name" = au_fname + " " + au_lname
into #tempnames
from authors
Selects specific columns and rows, returns the results ordered by type from highest to lowest, and calculates summary information:
select type, price, advance
from titles
order by type desc
compute avg (price), sum (advance) by type
compute sum (price), sum (advance)
Selects specific columns and rows, and calculates totals for the price and advance columns:
select type, price, advance
from titles
compute sum (price), sum (advance)
Creates the coffeetabletitles table, a copy of the titles table which includes only books priced over $20:
select * into coffeetabletitles from titles where price > $20
Creates the newtitles table, an empty copy of the titles table:
select * into newtitles from titles where 1 = 0
Gives an optimizer hint:
select title_id, title
from titles (index title_id_ind prefetch 16)
where title_id like "BU%"
Selects the IDENTITY column from the sales_east and sales_west tables by using the syb_identity keyword:
select sales_east.syb_identity, sales_west.syb_identity from sales_east, sales_west
Creates the newtitles table, a copy of the titles table with an IDENTITY column:
select *, row_id = identity (10) into newtitles from titles
Specifies a transaction isolation level for the query:
select pub_id, pub_name from publishers at isolation read uncommitted
Selects from titles using the repeatable read isolation level. No other user can change values in or delete the affected rows until the transaction completes:
begin tran
select type, avg (price) from titles group by type at isolation repeatable read
Gives an optimizer hint for the parallel degree for the query:
select ord_num from salesdetail (index salesdetail parallel 3)
Joins the titleauthor and the titles tables on their title_id columns. The result set only includes those rows that contain a price greater than 15:
select au_id, titles.title_id, title, price from titleauthor inner join titles on titleauthor.title_id = titles.title_id and price > 15
The result set contains all the authors from the authors table. The authors who do not live in the same city as their publishers produce null values in the pub_name column. Only the authors who live in the same city as their publishers, Cheryl Carson and Abraham Bennet, produce a non-null value in the pub_name column:
select au_fname, au_lname, pub_name from authors left join publishers on authors.city = publishers.city
Creates a new table (newtable) from an existing table (oldtable) with an identity gap, which you specify in the select into statement:
select identity into newtable with identity_gap = 20 from oldtable
For more information about identity gaps, see “Managing Identity Gaps in Tables” in Chapter 7, “Creating Databases and Tables” in the Transact-SQL Users Guide.
Creates a new table, bay_area_authors, with row-level compression, and populates it with information about authors who live in the San Francisco Bay Area:
select * into bay_area_authors with compression = row from authors where postalcode like '94%'
Creates a new table named titles_2 that compresses all columns except title and advance:
select * into titles_2 (title not compressed, advance not compressed) with compression = page from titles
Sets the select for update configuration parameter, performs a select for update, then performs an update within the same transaction:
sp_configure 'select for update', 1
Parameter Name     Default  Memory Used  Config Value  Run Value  Unit            Type
-----------------  -------  -----------  ------------  ---------  --------------  -------
select for update  0        0            1             1          not applicable  dynamic

(1 row affected)
Resulting configuration value and memory use have not changed from previous values: new configuration value 1, previous value 1.
(return status = 0)
begin tran
select c_int, c_bigdatetime from basetbl1 where c_int > 90 for update of c_bigint
c_int       c_bigdatetime
----------- -----------------------------
         91 Sep 14 2009 9:00:00.000000PM
         92 Sep 15 2009 9:00:00.000000PM
         93 Sep 16 2009 9:00:00.000000PM
         94 Sep 17 2009 9:00:00.000000PM
         95 Sep 18 2009 9:00:00.000000PM
         96 Sep 19 2009 9:00:00.000000PM
         97 Sep 20 2009 9:00:00.000000PM
         98 Sep 21 2009 9:00:00.000000PM
         99 Sep 22 2009 9:00:00.000000PM
        100 Sep 23 2009 9:00:00.000000PM

(10 rows affected)
update basetbl1 set c_bigint = 5000 where c_int > 90
(10 rows affected)

commit tran
go
Creates a new table sales_report from an existing table sales_detail. The new table is partitioned by range on the qty column.
select * into sales_report partition by range (qty) (smallorder values <= (500) on seg1, bigorder values <= (5000) on seg2) from sales_detail
Use this query to find statements that incur too many I/Os and are candidates for tuning:
select lio_avg, qtext from sysquerymetrics order by lio_avg
Selects rows from the titles table into the bus_titles table, using minimal logging for the operation:
select title_id, title, price into bus_titles with dml_logging = minimal from titles
The keywords in the select statement, as in all other statements, must be used in the order shown in the syntax statement.
The maximum number of expressions in a select statement is 4096.
The keyword all can be used after select for compatibility with other implementations of SQL. all is the default. Used in this context, all is the opposite of distinct. All retrieved rows are included in the results, whether or not some are duplicates.
Except in create table, create view, and select into statements, column headings may include any characters, including blanks and Adaptive Server keywords, if the column heading is enclosed in quotes. If the heading is not enclosed in quotes, it must conform to the rules for identifiers.
The character string indicated by like cannot be longer than 255 bytes.
You cannot use the select...for browse option on tables containing more than 255 columns.
Column headings in create table, create view, and select into statements, as well as table aliases, must conform to the rules for identifiers.
To insert data with select from a table that has null values in some fields into a table that does not allow null values, you must provide a substitute value for any NULL entries in the original table. For example, to insert data into an advances table that does not allow null values, this example substitutes “0” for the NULL fields:
insert advances select pub_id, isnull (advance, 0) from titles
Without the isnull function, this command would insert all the rows with non-null values into the advances table, and produce error messages for all rows where the advance column in the titles table contained NULL.
If you cannot make this kind of substitution for your data, you cannot insert data containing null values into the columns with the NOT NULL specification.
Two tables can be identically structured, and yet be different as to whether null values are permitted in some fields. Use sp_help to see the null types of the columns in your table.
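For instance, to inspect the null types of the columns in the advances table used above (the Nulls column of the output shows whether each column allows null values):

```sql
sp_help advances
```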
The default length of the text, unitext, or image data returned with a select statement is 32K. Use set textsize to change the value. The size for the current session is stored in the global variable @@textsize. Certain client software may issue a set textsize command on logging in to Adaptive Server.
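As an illustration of adjusting the limit for the current session (the 100000-byte value here is arbitrary):

```sql
set textsize 100000
select @@textsize
```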
Data from remote Adaptive Servers can be retrieved through the use of remote procedure calls. See create procedure and execute for more information.
A select statement used in a cursor definition (through declare cursor) must contain a from clause, but it cannot contain a compute, for browse, or into clause. If the select statement contains any of the following constructs, the cursor is considered read-only and not updatable:
distinct option
group by clause
Aggregate functions
union operator
If you declare a cursor inside a stored procedure with a select statement that contains an order by clause, that cursor is also considered read-only. Even if it is considered updatable, you cannot delete a row using a cursor that is defined by a select statement containing a join of two or more tables. See declare cursor for more information.
If a select statement that assigns a value to a variable returns more than one row, the last returned value is assigned to the variable. For example:
declare @x varchar (40)
select @x = pub_name from publishers
print @x
(3 rows affected)
Algodata Infosystems
Before you write queries using the ANSI inner and outer join syntax, read “Outer Joins” in Chapter 4, “Joins: Retrieving Data From Several Tables,” in the Transact-SQL User’s Guide.
select into is a two-step operation. The first step creates the new table, and the second step inserts the specified rows into the new table.
You can select into an existing CIS (Component Integration Services) table.
Because the rows inserted by select into operations are not logged, select into commands cannot be issued within user-defined transactions, even if the ddl in tran database option is set to true. Page allocations during select into operations are logged, so large select into operations may fill the transaction log.
If a select into statement fails after creating a new table, Adaptive Server does not automatically drop the table or deallocate its first data page. This means that any rows inserted on the first page before the error occurred remain on the page. Check the value of the @@error global variable after a select into statement to be sure that no error occurred. Use the drop table statement to remove the new table, then reissue the select into statement.
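A minimal sketch of that check, reusing the coffeetabletitles example from above:

```sql
select * into coffeetabletitles from titles where price > $20
-- @@error is reset by each statement, so test it immediately
if @@error != 0
begin
    -- remove the partially created table, then reissue the select into
    drop table coffeetabletitles
end
```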
The name of the new table must be unique in the database and must conform to the rules for identifiers. You can also select into temporary tables (see Examples 7, 8, and 11).
Any rules, constraints, or defaults associated with the base table are not carried over to the new table. Bind rules or defaults to the new table using sp_bindrule and sp_bindefault.
select into does not carry over the base table’s max_rows_per_page value, and it creates the new table with a max_rows_per_page value of 0. Use sp_chgattribute to set the max_rows_per_page value.
The select into/bulkcopy/pllsort option must be set to true (by executing sp_dboption) in order to select into a permanent table. You do not have to set the select into/bulkcopy/pllsort option to true in order to select into a temporary table, since the temporary database is never recovered.
After you have used select into in a database, you must perform a full database dump before you can use the dump transaction command. select into operations log only page allocations and not changes to data rows. Therefore, changes are not recoverable from transaction logs. In this situation, issuing the dump transaction statement produces an error message instructing you to use dump database instead.
By default, the select into/bulkcopy/pllsort option is set to false in newly created databases. To change the default situation, set this option to true in the model database.
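A sketch of enabling the option (the pubs2 database name is only an example); sp_dboption is run from the master database, and checkpoint makes the change take effect:

```sql
use master
go
sp_dboption pubs2, "select into/bulkcopy/pllsort", true
go
use pubs2
go
checkpoint
go
```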
select into can be used with an archive database.
select into runs more slowly while a dump database is taking place.
You can use select into to create a duplicate table with no data by having a false condition in the where clause (see Example 12).
You must provide a column heading for any column in the select list that contains an aggregate function or any expression. The use of any constant, arithmetic or character expression, built-in functions, or concatenation in the select list requires a column heading for the affected item. The column heading must be a valid identifier or must be enclosed in quotation marks (see Examples 7 and 8).
Datatypes and nullability are implicitly assigned to literal values when select into is used, such as:
select x = getdate () into mytable
This results in a non-nullable column, regardless of whether allow nulls by default is on or off, because the nullability is taken from the select expression rather than from the database default.
To make the target column nullable, either use an expression that can return null or use the convert syntax to specify the datatype and nullability explicitly.
For example, wrap getdate in a function that can result in a null, such as:
select x = nullif (getdate (), "1/1/1900") into mytable
Or, use the convert syntax:
select x = convert (datetime null, getdate ()) into mytable
You cannot use select into inside a user-defined transaction or in the same statement as a compute clause.
To select an IDENTITY column into a result table, include the column name (or the syb_identity keyword) in the select statement’s column_list. The new column observes the following rules:
If an IDENTITY column is selected more than once, it is defined as NOT NULL in the new table. It does not inherit the IDENTITY property.
If an IDENTITY column is selected as part of an expression, the resulting column does not inherit the IDENTITY property. It is created as NULL if any column in the expression allows nulls; otherwise, it is created as NOT NULL.
If the select statement contains a group by clause or aggregate function, the resulting column does not inherit the IDENTITY property. Columns that include an aggregate of the IDENTITY column are created NULL; others are NOT NULL.
An IDENTITY column that is selected into a table with a union or join does not retain the IDENTITY property. If the table contains the union of the IDENTITY column and a NULL column, the new column is defined as NULL. Otherwise, it is defined as NOT NULL.
You cannot use select into to create a new table with multiple IDENTITY columns. If the select statement includes both an existing IDENTITY column and a new IDENTITY specification of the form column_name = identity (precision), the statement fails.
If CIS is enabled, and if the into table resides on Adaptive Server, Adaptive Server uses bulk copy routines to copy the data into the new table. Before doing a select into with remote tables, set the select into/bulkcopy database option to true.
For information about the Embedded SQL command select into host_var_list, see the Open Client Embedded SQL Reference Manual.
Use the convert command to change the nullability of a target column into which you are selecting data. For example, the following selects data from the titles table into a target table named temp_titles, but converts the total_sales column from null to not null:
select title, convert (char (100) not null, total_sales) total_sales into #tempsales from titles
Destination tables you create with select into do not inherit any configuration from the source table. That is, if the table from which you pull data is configured for row-level compression, the table that results from the select into is not configured for compression unless you explicitly state the compression level.
You can indicate column-, partition-, and table-level compression for the destination table (the compression level of the partition overrides the compression level of the table).
When you compress the target table, Adaptive Server selects the compression level for columns (if they qualify), regardless of the compression level of the corresponding columns in the source table.
The select into command can include a list of columns that are not compressed on the target table.
You cannot encrypt compressed columns on a target table.
max_rows_per_page applies only to allpages-locked compressed tables; you cannot use it on data-only-locked tables.
You can use exp_row_size on allpages- and data-only-locked tables with variable-length columns. However, the table must be able to expand updates on the same page without causing page splits on allpages-locked tables or row forwarding on data-only-locked tables.
exp_row_size computes space limits identically for compressed and uncompressed tables. However, because compression results in a page with a lot of free space, the space set aside for exp_row_size may cause far fewer rows to fit on the page than would fit on this page if you did not set exp_row_size.
You cannot use exp_row_size on tables with fixed-length columns, even though Adaptive Server may convert some fixed-length columns to variable-length columns during the compression process.
fillfactor determines the space used on the data pages after compression. This is relevant only when you are creating clustered indexes, an operation that requires a sort of the data pages.
For Adaptive Server 15.7 and later, if the configuration parameter select for update is set to 1, the rows selected by select for update will be exclusively locked provided that the command is executed on a datarows-locked table within a transaction context, or in chained mode. If select for update is run within a cursor context, the cursor open and fetch statements must be within the context of a transaction.
Rows selected with select for update, within or outside of a cursor context, retain an exclusive lock until the transaction is complete.
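A hedged sketch of select for update through an updatable cursor, reusing the basetbl1 columns from the earlier example; the open, fetch, and update must run inside a transaction:

```sql
declare upd_curs cursor
for select c_int from basetbl1 where c_int > 90
for update of c_bigint
go
begin tran
open upd_curs
fetch upd_curs
-- the fetched row is exclusively locked until the transaction ends
update basetbl1 set c_bigint = 4000
where current of upd_curs
close upd_curs
commit tran
go
deallocate cursor upd_curs
go
```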
Limitations of select for update:
select for update is not valid in a sub-query block.
select for update is applicable only when the select returns rows directly from a base table and not from a work table. You cannot use select for update with queries that have aggregates or group by, computed, union, having, or distinct clauses.
More rows may qualify for the actual transaction update than were returned by select for update; these rows may appear in the updated set. Prevent such “phantom” rows by using isolation level 3.
During select processing, concurrent select for update tasks may attempt to lock the same set of rows in a different order, causing application deadlocks. However, once select for update completes, subsequent updates to the set of rows are not blocked and encounter no deadlocks.
All existing restrictions on updateable cursors apply to select for update both within and outside of the cursor context. The only difference is that with select for update, the order by clause is supported with an updateable cursor. The limitations and restrictions on updateable cursors apply to both language and execute cursors.
All tables referenced by select for update must be from the same database.
select for update is supported only on tables with datarows locked.
By default, LOB columns in the target table inherit the storage property of the corresponding LOB columns in the select list. If the target table’s LOB column is produced as a result of an expression, such as the convert(text, column) built-in function, then the column automatically uses off-row storage.
The lock option, used with select...into, allows you to specify the locking scheme for the table created by the command. If you do not specify a locking scheme, the default locking scheme, as set by the configuration parameter lock scheme, is applied.
When you use the lock option, you can also specify the space management properties max_rows_per_page, exp_row_size, and reservepagegap.
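For example, a select into that names both a locking scheme and space management properties might look like this (titles_copy is a hypothetical table name):

```sql
select title_id, price
into titles_copy
lock datarows
with exp_row_size = 200, reservepagegap = 16
from titles
```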
You can change the space management properties for a table created with select into, using sp_chgattribute.
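For instance, to set a max_rows_per_page value on the coffeetabletitles table created earlier (the value 4 is arbitrary):

```sql
sp_chgattribute "coffeetabletitles", "max_rows_per_page", 4
```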
The partitions_clause, when used with select...into, allows you to specify the partition properties of the table created by the command. (See create table for more information.) If you do not specify a partition type, Adaptive Server creates an unpartitioned table. If any row to be inserted does not satisfy the criteria for any partition in the target table, select...into fails.
The index, prefetch and lru | mru options specify the index, cache and I/O strategies for query execution. These options override the choices made by the Adaptive Server optimizer. Use them with caution, and always check the performance impact with set statistics io on. For more information about using these options, see the Performance and Tuning Guide.
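For example, to measure the cost of the hinted plan from the earlier optimizer-hint example, wrap the query in set statistics io:

```sql
set statistics io on
select title_id, title
from titles (index title_id_ind prefetch 16)
where title_id like "BU%"
set statistics io off
```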
If you use the encrypt clause without specifying a key name, Adaptive Server uses the database default key to encrypt the data in the target column.
If a column in the source table is encrypted and you do not specify the encrypt clause for the target column, Adaptive Server decrypts the data in the source table and inserts plain text data in the target column.
If you specify encryption for the target column with the same key used for the source column data, and if the key does not use an initialization vector or random padding, then Adaptive Server copies the data from the source column to the target column as cipher text, without intermediate decryption and reencryption operations.
If, however, you specify encryption for the target column using a different key from that used for the source column, or if the key uses an initialization vector or padding during encryption, Adaptive Server performs a decryption and encryption operation for each selected row of the encrypted column.
The parallel option reduces the number of worker threads that the Adaptive Server optimizer can use for parallel processing. The degree_of_parallelism cannot be greater than the configured max parallel degree. If you specify a value that is greater than the configured max parallel degree, the optimizer ignores the parallel option.
When multiple worker processes merge their results, the order of rows that Adaptive Server returns may vary from one execution to the next. To get rows from partitioned tables in a consistent order, use an order by clause, or override parallel query execution by using parallel 1 in the from clause of the query.
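For example, either form below returns rows from the salesdetail table in a stable order; parallel 1 forces a serial scan:

```sql
select ord_num from salesdetail order by ord_num
select ord_num from salesdetail (parallel 1)
```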
A from clause specifying parallel is ignored if any of the following conditions is true:
The select statement is used for an update or insert.
The from clause is used in the definition of a cursor.
parallel is used in the from clause within any inner query blocks of a subquery.
The select statement creates a view.
The table is the inner table of an outer join.
The query specifies min or max on the table and specifies an index.
An unpartitioned clustered index is specified or is the only parallel option.
The query specifies exists on the table.
The value for the configuration parameter max scan parallel degree is 1 and the query specifies an index.
A nonclustered index is covered. For information on index covering, see Chapter 9, “How Indexes Work” in the Performance and Tuning Guide.
The table is a system table or a virtual table.
The query is processed using the OR strategy. For an explanation of the OR strategy, see the Performance and Tuning Guide.
The query returns a large number of rows to the user.
The readpast option allows a select command to access the specified table without being blocked by incompatible locks held by other tasks. readpast queries can only be performed on data-only-locked tables.
If the readpast option is specified for an allpages-locked table, the readpast option is ignored. The command operates at the isolation level specified for the command or session. If the isolation level is 0, dirty reads are performed, and the command returns values from locked rows and does not block. If the isolation level is 1 or 3, the command blocks when pages with incompatible locks must be read.
The interactions of session-level isolation levels and readpast on a table in a select command are shown in Table 1-28.
| Session isolation level | Effects |
|---|---|
| 0, read uncommitted (dirty reads) | readpast is ignored, and rows containing uncommitted transactions are returned to the user. A warning message is printed. |
| 1, read committed | Rows or pages with incompatible locks are skipped; no locks are held on the rows or pages read. Using readpast may produce duplicates, and adding the distinct clause does not clear this problem. To resolve this, when using readpast, use a group by clause in addition to a distinct clause to avoid duplicates. |
| 2, repeatable read | Rows or pages with incompatible locks are skipped; shared locks are held on all rows or pages read by the statement until the transaction completes. |
| 3, serializable | readpast is ignored, and the command executes at level 3. The command blocks on any rows or pages with incompatible locks. |
select commands that specify readpast fail with an error message if they also include any of the following:
An at isolation clause, specifying 0 or read uncommitted
An at isolation clause, specifying 3 or serializable
The holdlock keyword on the same table
If at isolation 2 or at isolation repeatable read is specified in a select query that specifies readpast, shared locks are held on the readpast tables until the statement or transaction completes.
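For example, this query (adapted from the earlier readpast example) skips locked rows but holds shared locks on the rows it does read until the transaction completes:

```sql
begin tran
select title, price from titles readpast
where type = "news"
at isolation repeatable read
commit tran
```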
If a select command with the readpast option encounters a text column that has an incompatible lock on it, readpast locking retrieves the row, but returns the text column with a value of null. No distinction is made, in this case, between a text column containing a null value and a null value returned because the column is locked.
When the source text of a stored procedure or trigger is stored in the system table syscomments, a query using select * is stored with the column list referenced by the select * expanded.
For example, a select * from a table containing the columns col1 and col2 is stored as:
select <table>.col1, <table>.col2 from <table>
In Adaptive Server version 12.5.4, the expansion of the column list was enhanced to check whether identifiers (table names, column names, and so on) comply with the rules for identifiers.
For example, if a table includes the columns col1 and 2col, the second column name starts with a number, which can be included only by using brackets in the create table statement.
When performing a select * in a stored procedure or trigger from this table, the text in syscomments looks similar to:
select <table>.col1, <table>[2col] from <table>
For all identifiers used in the text that expands a select *, brackets are added when the identifier does not comply with the rules for identifiers.
Adding brackets to identifiers ensures that Adaptive Server can use the SQL text when performing an upgrade to a more recent release.
ANSI SQL – Compliance level: Entry-level compliant.
Transact-SQL extensions include:
select into to create a new table
lock clauses
compute clauses
Global and local variables
index clause, prefetch, parallel and lru | mru
holdlock, noholdlock, and shared keywords
“column_heading = column_name”
Qualified table and column names
select in a for browse clause
The use, within the select list, of columns that are not in the group by list and have no aggregate functions
at isolation repeatable read | 2 option
select permission defaults to the owner of the table or view, who can transfer it to other users.
Values in event and extrainfo columns of sysaudits are:

| Event | Audit option | Command or access audited | Information in extrainfo |
|---|---|---|---|
| 62 | select | select from a table | |
| 63 | select | select from a view | |
Commands: compute clause, create index, create trigger, delete, group by and having clauses, insert, order by clause, set, union operator, update, where clause
Functions: avg, count, isnull, max, min, sum
System procedures: sp_cachestrategy, sp_chgattribute, sp_dboption