o Allow nulls in arrays
o Allow arrays to be ORDER'ed
o Fix array handling in ECPG
BINARY DATA
o -Add non-large-object binary field (already exists -- bytea)
o -Make binary interface for TOAST columns (base64)
- o Improve vacuum of large objects (/contrib/vacuumlo)
+ o Improve vacuum of large objects, like /contrib/vacuumlo
o Add security checking for large objects
o Make file in/out interface for TOAST columns, similar to large object
interface (force out-of-line storage and no compression)
* Allow CREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops)
  which currently fails because an index can't store constant parameters
* Add FILLFACTOR to index creation
-* Order duplicate index entries by tid
+* Order duplicate index entries by tid for faster heap lookups
* -Re-enable partial indexes
* -Prevent pg_attribute from having duplicate oids for indexes (Tom)
* Allow inherited tables to inherit index, UNIQUE constraint, and primary
  key
* Use index to restrict rows returned by multi-key index when used with
  non-consecutive keys or OR clauses, so fewer heap accesses
* Allow SELECT * FROM tab WHERE int2col = 4 to use an int2col index; do
  the same for int8, float4, and numeric/decimal [optimizer]
-* Use indexes with CIDR '<<' (contains) operator
+* -Use indexes with CIDR '<<' (contains) operator
* Allow LIKE indexing optimization for non-ASCII locales
* Be smarter about insertion of already-ordered data into btree index
* -Gather more accurate dispersion statistics using indexes (Tom)
[ { UNION | INTERSECT | EXCEPT [ ALL ] } select ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
[ FOR UPDATE [ OF tablename [, ...] ] ]
- [ LIMIT { count | ALL } [ { OFFSET | , } start ]]
+ [ LIMIT [ start , ] { count | ALL } ]
+ [ OFFSET start ]
where from_item can be:
table_query UNION [ ALL ] table_query
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
- [ LIMIT { count | ALL } [ { OFFSET | , } start ]]
+ [ LIMIT [ start , ] { count | ALL } ]
+ [ OFFSET start ]
where
table_query INTERSECT [ ALL ] table_query
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
- [ LIMIT { count | ALL } [ { OFFSET | , } start ]]
+ [ LIMIT [ start , ] { count | ALL } ]
+ [ OFFSET start ]
where
table_query EXCEPT [ ALL ] table_query
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
- [ LIMIT { count | ALL } [ { OFFSET | , } start ]]
+ [ LIMIT [ start , ] { count | ALL } ]
+ [ OFFSET start ]
where
- LIMIT { count | ALL } [ { OFFSET | , } start ]
+ LIMIT [ start , ] { count | ALL }
OFFSET start
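
    For illustration (the table name here is hypothetical, not from the
    patch), the comma form puts the start value first, so under the
    revised syntax the two queries below are equivalent:

        -- skip the first 20 rows, then return at most 10
        SELECT * FROM mytable ORDER BY id LIMIT 10 OFFSET 20;

        -- same result using the comma form: start first, then count
        SELECT * FROM mytable ORDER BY id LIMIT 20, 10;
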
constrains the result rows into a unique order. Otherwise you will get
an unpredictable subset of the query's rows---you may be asking for
the tenth through twentieth rows, but tenth through twentieth in what
- ordering? You don't know what ordering, unless you specified ORDER BY.
+ ordering? You don't know what ordering unless you specify ORDER BY.
The query optimizer takes LIMIT into account when generating a query plan,
so you are very likely to get different plans (yielding different row
- orders) depending on what you give for LIMIT and OFFSET. Thus, using
+ orders) depending on what you use for LIMIT and OFFSET. Thus, using
different LIMIT/OFFSET values to select different subsets of a query
result will give inconsistent results unless
you enforce a predictable result ordering with ORDER BY. This is not
a bug; it is an inherent consequence of the fact that SQL does not
promise to deliver the results of a query in any particular order
unless ORDER BY is used to constrain the order.
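
A short sketch of the pitfall (table and column names are hypothetical):

    -- unpredictable paging: each query may be planned differently,
    -- so the two subsets may overlap or leave gaps
    SELECT * FROM employees LIMIT 10;
    SELECT * FROM employees LIMIT 10 OFFSET 10;

    -- predictable paging: rows 1-10, then rows 11-20, by emp_id
    SELECT * FROM employees ORDER BY emp_id LIMIT 10;
    SELECT * FROM employees ORDER BY emp_id LIMIT 10 OFFSET 10;
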
[ { UNION | INTERSECT | EXCEPT [ ALL ] } select ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
[ FOR UPDATE [ OF class_name [, ...] ] ]
- [ LIMIT { count | ALL } [ { OFFSET | , } start ]]
+ [ LIMIT [ start , ] { count | ALL } ]
+ [ OFFSET start ]
*
*
* IDENTIFICATION
- * $Header: /cvsroot/pgsql/src/backend/parser/gram.y,v 2.252 2001/09/20 14:20:27 petere Exp $
+ * $Header: /cvsroot/pgsql/src/backend/parser/gram.y,v 2.253 2001/09/23 03:39:01 momjian Exp $
*
* HISTORY
* AUTHOR DATE MAJOR EVENT
;
-select_limit: LIMIT select_limit_value ',' select_offset_value
- { $$ = makeList2($4, $2); }
+select_limit: LIMIT select_offset_value ',' select_limit_value
+ { $$ = makeList2($2, $4); }
| LIMIT select_limit_value OFFSET select_offset_value
{ $$ = makeList2($4, $2); }
| LIMIT select_limit_value
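
With the rule reordered as above, the first value in the comma form is
now taken as the offset and the second as the count, matching MySQL's
LIMIT offset,count. A hypothetical example, assuming the semantics shown
in the documentation hunks above:

    -- both queries skip 2 rows and return at most 5
    SELECT * FROM t ORDER BY k LIMIT 2, 5;
    SELECT * FROM t ORDER BY k LIMIT 5 OFFSET 2;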