The executor recursively steps through
the plan tree and
- retrieves tuples in the way represented by the plan.
+ retrieves rows in the way represented by the plan.
The executor makes use of the
storage system while scanning
relations, performs sorts and joins,
- evaluates qualifications and finally hands back the tuples derived.
+ evaluates qualifications and finally hands back the rows derived.
to the backend (server). The query is transmitted using plain text,
i.e. there is no parsing done in the frontend (client). The
server parses the query, creates an execution plan,
- executes the plan and returns the retrieved tuples to the client
+ executes the plan and returns the retrieved rows to the client
by transmitting them over the established connection.
The lexer is defined in the file
scan.l and is responsible
for recognizing identifiers,
- the SQL keywords etc. For
- every keyword or identifier that is found, a token
+ the SQL key words etc. For
+ every key word or identifier that is found, a token
is generated and handed to the parser.
call. This may be transformed to either a FuncExpr>
or Aggref> node depending on whether the referenced
name turns out to be an ordinary function or an aggregate function.
- Also, information about the actual datatypes of columns and expression
+ Also, information about the actual data types of columns and expression
results is added to the query tree.
- The first one worked using tuple level processing and was
+ The first one worked using row level processing and was
implemented deep in the executor . The rule system was
- called whenever an individual tuple had been accessed. This
+ called whenever an individual row had been accessed. This
implementation was removed in 1995 when the last official release
of the
Berkeley Postgres project was
transformed into
Postgres95 .
nested loop join : The right relation is scanned
- once for every tuple found in the left relation. This strategy
+ once for every row found in the left relation. This strategy
is easy to implement but can be very time consuming. (However,
- if the right relation can be scanned with an indexscan, this can
+ if the right relation can be scanned with an index scan, this can
be a good strategy. It is possible to use values from the current
- row of the left relation as keys for the indexscan of the right.)
+ row of the left relation as keys for the index scan of the right.)
hash join : the right relation is first scanned
and loaded into a hash table, using its join attributes as hash keys.
Next the left relation is scanned and the
- appropriate values of every tuple found are used as hash keys to
- locate the matching tuples in the table.
+ appropriate values of every row found are used as hash keys to
+ locate the matching rows in the table.
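As an illustrative sketch of how these strategies surface to the user, EXPLAIN shows which plan the planner picked (a and b are hypothetical tables; the actual choice depends on table sizes and statistics):
EXPLAIN SELECT * FROM a JOIN b ON a.id = b.id;
-- possible plan shape for a hash join (illustrative output):
--   Hash Join
--     ->  Seq Scan on a
--     ->  Hash
--           ->  Seq Scan on b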
The finished plan tree consists of sequential or index scans of
- the base relations, plus nestloop, merge, or hash join nodes as
+ the base relations, plus nested-loop, merge, or hash join nodes as
needed, plus any auxiliary steps needed, such as sort nodes or
aggregate-function calculation nodes. Most of these plan node
types have the additional ability to do selection>
The executor takes the plan handed back by the
planner/optimizer and recursively processes it to extract the required set
of rows. This is essentially a demand-pull pipeline mechanism.
- Each time a plan node is called, it must deliver one more tuple, or
- report that it is done delivering tuples.
+ Each time a plan node is called, it must deliver one more row, or
+ report that it is done delivering rows.
To provide a concrete example, assume that the top
node is a MergeJoin node.
- Before any merge can be done two tuples have to be fetched (one from
+ Before any merge can be done two rows have to be fetched (one from
each subplan). So the executor recursively calls itself to
process the subplans (it starts with the subplan attached to
lefttree ). The new top node (the top node of the left
subplan) is, let's say, a
Sort node and again recursion is needed to obtain
- an input tuple. The child node of the Sort might
+ an input row. The child node of the Sort might
be a SeqScan> node, representing actual reading of a table.
Execution of this node causes the executor to fetch a row from the
table and return it up to the calling node. The Sort
node will repeatedly call its child to obtain all the rows to be sorted.
When the input is exhausted (as indicated by the child node returning
- a NULL instead of a tuple), the Sort code performs
+ a NULL instead of a row), the Sort code performs
the sort, and finally is able to return its first output row, namely
the first one in sorted order. It keeps the remaining rows stored so
that it can deliver them in sorted order in response to later demands.
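A plan tree of this shape can be observed with EXPLAIN; a minimal sketch, assuming hypothetical tables a and b with no useful indexes:
EXPLAIN SELECT * FROM a JOIN b ON a.x = b.x;
-- possible plan shape (illustrative output):
--   Merge Join
--     ->  Sort
--           ->  Seq Scan on a
--     ->  Sort
--           ->  Seq Scan on b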
result row. But INSERT ... SELECT> may demand the full power
of the executor mechanism.) For UPDATE>, the planner arranges
that each computed row includes all the updated column values, plus
- the TID> (tuple ID, or location) of the original target row;
+ the TID> (tuple ID, or row ID) of the original target row;
the executor top level uses this information to create a new updated row
and mark the old row deleted. For DELETE>, the only column
that is actually returned by the plan is the TID, and the executor top
Arrays
When a single element is pushed on to the beginning of a one-dimensional
array, the result is an array with a lower bound subscript equal to
- the righthand operand's lower bound subscript, minus one. When a single
+ the right-hand operand's lower bound subscript, minus one. When a single
element is pushed on to the end of a one-dimensional array, the result is
- an array retaining the lower bound of the lefthand operand. For example:
+ an array retaining the lower bound of the left-hand operand. For example:
SELECT array_dims(1 || ARRAY[2,3]);
array_dims
------------
[0:2]
(1 row)
When two arrays with an equal number of dimensions are concatenated, the
- result retains the lower bound subscript of the lefthand operand's outer
- dimension. The result is an array comprising every element of the lefthand
- operand followed by every element of the righthand operand. For example:
+ result retains the lower bound subscript of the left-hand operand's outer
+ dimension. The result is an array comprising every element of the left-hand
+ operand followed by every element of the right-hand operand. For example:
SELECT array_dims(ARRAY[1,2] || ARRAY[3,4,5]);
array_dims
------------
[1:5]
(1 row)
int4
- Always -1 in storage, but when loaded into a tuple descriptor
+ Always -1 in storage, but when loaded into a row descriptor
in memory this may be updated to cache the offset of the attribute
- within the tuple.
+ within the row.
If true, this attribute is a set. In that case, what is really
- stored in the attribute is the OID of a tuple in the
+ stored in the attribute is the OID of a row in the
pg_proc catalog. The
- pg_proc tuple contains the query
+ pg_proc row contains the query
string that defines this set, i.e., the query to run to get
the set. So the atttypid (see
above) refers to the type returned by this query, but the
float4
- Number of tuples in the table.
+ Number of rows in the table.
This is only an estimate used by the planner.
It is updated by VACUUM ,
ANALYZE , and CREATE INDEX .
xid
- All tuples inserted or deleted by transaction IDs before this one
+ All rows inserted or deleted by transaction IDs before this one
have been marked as known committed or known aborted in this database.
This is used to determine when commit-log space can be recycled.
xid
- All tuples inserted by transaction IDs before this one have been
+ All rows inserted by transaction IDs before this one have been
relabeled with a permanent (frozen>) transaction ID in this
database. This is useful to check whether a database must be vacuumed
soon to avoid transaction ID wrap-around problems.
|
refobjid
oid
- any oid attribute
+ any OID column
The OID of the specific referenced object
|
indkey
int2vector
- pg_attribute.attnum
+ pg_attribute.attnum
This is an array of indnatts (up to
INDEX_MAX_KEYS ) values that indicate which
opcamid
oid
pg_am .oid
- Index access method opclass is for
+ Index access method operator class is for
|
tgtype
int2
- Bitmask identifying trigger conditions
+ Bit mask identifying trigger conditions
|
For types used in system tables, it is critical that the size
and alignment defined in pg_type
agree with the way that the compiler will lay out the column in
- a struct representing a table row.
+ a structure representing a table row.
typndims is the number of array dimensions
- for a domain that is an array (that is, typbasetype is an array type;
- the domain's typelem will match the base type's typelem).
+ for a domain that is an array (that is, typbasetype> is an array type;
+ the domain's typelem> will match the base type's typelem).
Zero for types other than array domains.
|
numeric [ (p, s) ]
decimal [ (p, s) ]
exact numeric with selectable precision
|
- Name
- Storage Size
- Description
- Range
+ Name
+ Storage Size
+ Description
+ Range
|
- smallint>
- 2 bytes
- small-range integer
- -32768 to +32767
+ smallint>
+ 2 bytes
+ small-range integer
+ -32768 to +32767
|
- integer>
- 4 bytes
- usual choice for integer
- -2147483648 to +2147483647
+ integer>
+ 4 bytes
+ usual choice for integer
+ -2147483648 to +2147483647
|
- bigint>
- 8 bytes
- large-range integer
- -9223372036854775808 to 9223372036854775807
+ bigint>
+ 8 bytes
+ large-range integer
+ -9223372036854775808 to 9223372036854775807
|
- decimal>
- variable
- user-specified precision, exact
- no limit
+ decimal>
+ variable
+ user-specified precision, exact
+ no limit
|
- numeric>
- variable
- user-specified precision, exact
- no limit
+ numeric>
+ variable
+ user-specified precision, exact
+ no limit
|
- real>
- 4 bytes
- variable-precision, inexact
- 6 decimal digits precision
+ real>
+ 4 bytes
+ variable-precision, inexact
+ 6 decimal digits precision
|
- double precision>
- 8 bytes
- variable-precision, inexact
- 15 decimal digits precision
+ double precision>
+ 8 bytes
+ variable-precision, inexact
+ 15 decimal digits precision
|
- serial>
- 4 bytes
- autoincrementing integer
- 1 to 2147483647
+ serial>
+ 4 bytes
+ autoincrementing integer
+ 1 to 2147483647
|
- bigserial
- 8 bytes
- large autoincrementing integer
- 1 to 9223372036854775807
+ bigserial
+ 8 bytes
+ large autoincrementing integer
+ 1 to 9223372036854775807
column should be assigned its default value. This can be done
either by excluding the column from the list of columns in
the INSERT statement, or through the use of
- the DEFAULT keyword.
+ the DEFAULT key word.
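For example, assuming a hypothetical table products whose column price has a default, both statements below assign that default:
INSERT INTO products (name) VALUES ('widget');
INSERT INTO products (name, price) VALUES ('widget', DEFAULT);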
|
- Name
- Storage Size
- Description
- Range
+ Name
+ Storage Size
+ Description
+ Range
|
- money
- 4 bytes
- currency amount
- -21474836.48 to +21474836.47
+ money
+ 4 bytes
+ currency amount
+ -21474836.48 to +21474836.47
|
- Name
- Description
+ Name
+ Description
|
- character varying(n>) , varchar(n>)
- variable-length with limit
+ character varying(n>) , varchar(n>)
+ variable-length with limit
|
- character(n>) , char(n>)
- fixed-length, blank padded
+ character(n>) , char(n>)
+ fixed-length, blank padded
|
- text
- variable unlimited length
+ text
+ variable unlimited length
|
- Name
- Storage Size
- Description
+ Name
+ Storage Size
+ Description
|
- "char"
- 1 byte
- single-character internal type
+ "char"
+ 1 byte
+ single-character internal type
|
- name
- 64 bytes
- internal type for object names
+ name
+ 64 bytes
+ internal type for object names
- The SQL standard defines a different binary
- string type, called BLOB or BINARY LARGE
- OBJECT. The input format is different compared to
- bytea, but the provided functions and operators are
- mostly the same.
+ The SQL standard defines a different binary
+ string type, called BLOB or BINARY LARGE
+ OBJECT. The input format is different compared to
+ bytea, but the provided functions and operators are
+ mostly the same.
When timestamp> values are stored as double precision floating-point
numbers (currently the default), the effective limit of precision
- may be less than 6. Timestamp values are stored as seconds
+ may be less than 6. timestamp values are stored as seconds
since 2000-01-01, and microsecond precision is achieved for dates within
a few years of 2000-01-01, but the precision degrades for dates further
- away. When timestamps are stored as eight-byte integers (a compile-time
+ away. When timestamp values are stored as eight-byte integers (a compile-time
option), microsecond precision is available over the full range of
values. However eight-byte integer timestamps have a reduced range of
dates from 4713 BC up to 294276 AD.
Date Input
- |
- Example
- Description
-
+ |
+ Example
+ Description
+
- |
- January 8, 1999
- unambiguous in any datestyle input mode
-
- |
- 1999-01-08
- ISO-8601, January 8 in any mode
- (recommended format)
-
- |
- 1/8/1999
- January 8 in MDY> mode;
- August 1 in DMY> mode
-
- |
- 1/18/1999
- January 18 in MDY> mode;
- rejected in other modes
-
- |
- 01/02/03
- January 2, 2003 in MDY> mode;
- February 1, 2003 in DMY> mode;
- February 3, 2001 in YMD> mode
-
-
- |
- 19990108
- ISO-8601; January 8, 1999 in any mode
-
- |
- 990108
- ISO-8601; January 8, 1999 in any mode
-
- |
- 1999.008
- year and day of year
-
- |
- J2451187
- Julian day
-
- |
- January 8, 99 BC
- year 99 before the Common Era
-
+ |
+ January 8, 1999
+ unambiguous in any datestyle input mode
+
+ |
+ 1999-01-08
+ ISO-8601, January 8 in any mode
+ (recommended format)
+
+ |
+ 1/8/1999
+ January 8 in MDY> mode;
+ August 1 in DMY> mode
+
+ |
+ 1/18/1999
+ January 18 in MDY> mode;
+ rejected in other modes
+
+ |
+ 01/02/03
+ January 2, 2003 in MDY> mode;
+ February 1, 2003 in DMY> mode;
+ February 3, 2001 in YMD> mode
+
+
+ |
+ 19990108
+ ISO-8601; January 8, 1999 in any mode
+
+ |
+ 990108
+ ISO-8601; January 8, 1999 in any mode
+
+ |
+ 1999.008
+ year and day of year
+
+ |
+ J2451187
+ Julian day
+
+ |
+ January 8, 99 BC
+ year 99 before the Common Era
+
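A short sketch of the field-order ambiguity shown above (output displayed in ISO format):
SET datestyle TO 'ISO, MDY';
SELECT date '1/8/1999';    -- 1999-01-08 (January 8)
SET datestyle TO 'ISO, DMY';
SELECT date '1/8/1999';    -- 1999-08-01 (August 1)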
Time Input
-
- |
- Example
- Description
-
-
-
- |
- 04:05:06.789
- ISO 8601
-
- |
- 04:05:06
- ISO 8601
-
- |
- 04:05
- ISO 8601
-
- |
- 040506
- ISO 8601
-
- |
- 04:05 AM
- same as 04:05; AM does not affect value
-
- |
- 04:05 PM
- same as 16:05; input hour must be <= 12
-
- |
- 04:05:06.789-8
- ISO 8601
-
- |
- 04:05:06-08:00
- ISO 8601
-
- |
- 04:05-08:00
- ISO 8601
-
- |
- 040506-08
- ISO 8601
-
- |
- 04:05:06 PST
- time zone specified by name
-
-
-
-
+
+ |
+ Example
+ Description
+
+
+
+ |
+ 04:05:06.789
+ ISO 8601
+
+ |
+ 04:05:06
+ ISO 8601
+
+ |
+ 04:05
+ ISO 8601
+
+ |
+ 040506
+ ISO 8601
+
+ |
+ 04:05 AM
+ same as 04:05; AM does not affect value
+
+ |
+ 04:05 PM
+ same as 16:05; input hour must be <= 12
+
+ |
+ 04:05:06.789-8
+ ISO 8601
+
+ |
+ 04:05:06-08:00
+ ISO 8601
+
+ |
+ 04:05-08:00
+ ISO 8601
+
+ |
+ 040506-08
+ ISO 8601
+
+ |
+ 04:05:06 PST
+ time zone specified by name
+
+
+
+
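For instance, taking two entries from the table above:
SELECT time '04:05 AM';    -- 04:05:00
SELECT time '04:05 PM';    -- 16:05:00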
Time Zone Input
-
- |
- Example
- Description
-
-
-
- |
- PST
- Pacific Standard Time
-
- |
- -8:00
- ISO-8601 offset for PST
-
- |
- -800
- ISO-8601 offset for PST
-
- |
- -8
- ISO-8601 offset for PST
-
- |
- zulu
- Military abbreviation for GMT
-
- |
- z
- Short form of zulu
-
-
+
+ |
+ Example
+ Description
+
+
+
+ |
+ PST
+ Pacific Standard Time
+
+ |
+ -8:00
+ ISO-8601 offset for PST
+
+ |
+ -800
+ ISO-8601 offset for PST
+
+ |
+ -8
+ ISO-8601 offset for PST
+
+ |
+ zulu
+ Military abbreviation for GMT
+
+ |
+ z
+ Short form of zulu
+
+
Special Date/Time Inputs
-
- |
- Input String
+
+ |
+ Input String
Valid Types
- Description
-
-
-
- |
- epoch
+ Description
+
+
+
+ |
+ epoch
date , timestamp
- 1970-01-01 00:00:00+00 (Unix system time zero)
-
- |
- infinity
+ 1970-01-01 00:00:00+00 (Unix system time zero)
+
+ |
+ infinity
timestamp
- later than all other time stamps
-
- |
- -infinity
+ later than all other time stamps
+
+ |
+ -infinity
timestamp
- earlier than all other time stamps
-
- |
- now
+ earlier than all other time stamps
+
+ |
+ now
date , time , timestamp
- current transaction's start time
-
- |
- today
+ current transaction's start time
+
+ |
+ today
date , timestamp
- midnight today
-
- |
- tomorrow
+ midnight today
+
+ |
+ tomorrow
date , timestamp
- midnight tomorrow
-
- |
- yesterday
+ midnight tomorrow
+
+ |
+ yesterday
date , timestamp
- midnight yesterday
-
- |
- allballs
+ midnight yesterday
+
+ |
+ allballs
time
- 00:00:00.00 UTC
-
-
+ 00:00:00.00 UTC
+
+
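A few of these special strings in action:
SELECT timestamp 'epoch';                     -- 1970-01-01 00:00:00
SELECT time 'allballs';                       -- 00:00:00
SELECT date 'yesterday' = date 'today' - 1;   -- true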
Date/Time Output Styles
- |
- Style Specification
- Description
- Example
-
+ |
+ Style Specification
+ Description
+ Example
+
- |
- ISO
- ISO 8601/SQL standard
- 1997-12-17 07:37:16-08
-
- |
- SQL
- traditional style
- 12/17/1997 07:37:16.00 PST
-
- |
- POSTGRES
- original style
- Wed Dec 17 07:37:16 1997 PST
-
- |
- German
- regional style
- 17.12.1997 07:37:16.00 PST
-
+ |
+ ISO
+ ISO 8601/SQL standard
+ 1997-12-17 07:37:16-08
+
+ |
+ SQL
+ traditional style
+ 12/17/1997 07:37:16.00 PST
+
+ |
+ POSTGRES
+ original style
+ Wed Dec 17 07:37:16 1997 PST
+
+ |
+ German
+ regional style
+ 17.12.1997 07:37:16.00 PST
+
Date Order Conventions
- |
- DateStyle setting
- Input Ordering
- Example Output
-
+ |
+ datestyle Setting
+ Input Ordering
+ Example Output
+
- |
- SQL, DMY>
- day /month /year
- 17/12/1997 15:37:16.00 CET
-
- |
- SQL, MDY>
- month /day /year
- 12/17/1997 07:37:16.00 PST
-
- |
- Postgres, DMY>
- day /month /year
- Wed 17 Dec 07:37:16 1997 PST
-
+ |
+ SQL, DMY>
+ day /month /year
+ 17/12/1997 15:37:16.00 CET
+
+ |
+ SQL, MDY>
+ month /day /year
+ 12/17/1997 07:37:16.00 PST
+
+ |
+ Postgres, DMY>
+ day /month /year
+ Wed 17 Dec 07:37:16 1997 PST
+
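Combining the two tables, a session might look like this (output is illustrative; the time zone shown depends on the server's setting):
SET datestyle TO 'SQL, MDY';
SELECT timestamp with time zone '1997-12-17 07:37:16';
-- 12/17/1997 07:37:16.00 PST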
- Although the date type
- does not have an associated time zone, the
- time type can.
- Time zones in the real world can have no meaning unless
- associated with a date as well as a time
- since the offset may vary through the year with daylight-saving
- time boundaries.
+ Although the date type
+ does not have an associated time zone, the
+ time type can.
+ Time zones in the real world can have no meaning unless
+ associated with a date as well as a time
+ since the offset may vary through the year with daylight-saving
+ time boundaries.
- The default time zone is specified as a constant numeric offset
- from UTC>. It is not possible to adapt to daylight-saving
- time when doing date/time arithmetic across
+ The default time zone is specified as a constant numeric offset
+ from UTC>. It is not possible to adapt to daylight-saving
+ time when doing date/time arithmetic across
- The TZ environment variable on the server host
- is used by the server as the default time zone, if no other is
- specified.
+ The TZ environment variable on the server host
+ is used by the server as the default time zone, if no other is
+ specified.
- The timezone configuration parameter can be
- set in the file postgresql.conf>.
+ The timezone configuration parameter can be
+ set in the file postgresql.conf>.
- The PGTZ environment variable, if set at the
- client, is used by libpq
- applications to send a SET TIME ZONE
- command to the server upon connection.
+ The PGTZ environment variable, if set at the
+ client, is used by libpq
+ applications to send a SET TIME ZONE
+ command to the server upon connection.
- The SQL command SET TIME ZONE
- sets the time zone for the session.
+ The SQL command SET TIME ZONE
+ sets the time zone for the session.
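For example:
SET TIME ZONE 'PST8PDT';
SELECT current_time;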
|
- Name
- Storage Size
- Representation
- Description
+ Name
+ Storage Size
+ Representation
+ Description
|
- point
- 16 bytes
- Point on the plane
- (x,y)
+ point
+ 16 bytes
+ Point on the plane
+ (x,y)
|
- line
- 32 bytes
- Infinite line (not fully implemented)
- ((x1,y1),(x2,y2))
+ line
+ 32 bytes
+ Infinite line (not fully implemented)
+ ((x1,y1),(x2,y2))
|
- lseg
- 32 bytes
- Finite line segment
- ((x1,y1),(x2,y2))
+ lseg
+ 32 bytes
+ Finite line segment
+ ((x1,y1),(x2,y2))
|
- box
- 32 bytes
- Rectangular box
- ((x1,y1),(x2,y2))
+ box
+ 32 bytes
+ Rectangular box
+ ((x1,y1),(x2,y2))
|
- path
- 16+16n bytes
- Closed path (similar to polygon)
- ((x1,y1),...)
+ path
+ 16+16n bytes
+ Closed path (similar to polygon)
+ ((x1,y1),...)
|
- path
- 16+16n bytes
- Open path
- [(x1,y1),...]
+ path
+ 16+16n bytes
+ Open path
+ [(x1,y1),...]
|
- polygon
- 40+16n bytes
- Polygon (similar to closed path)
- ((x1,y1),...)
+ polygon
+ 40+16n bytes
+ Polygon (similar to closed path)
+ ((x1,y1),...)
|
- circle
- 24 bytes
- Circle
- <(x,y),r> (center and radius)
+ circle
+ 24 bytes
+ Circle
+ <(x,y),r> (center and radius)
|
- Name
- Storage Size
- Description
+ Name
+ Storage Size
+ Description
|
- cidr
- 12 or 24 bytes
- IPv4 or IPv6 networks
+ cidr
+ 12 or 24 bytes
+ IPv4 or IPv6 networks
|
- inet
- 12 or 24 bytes
- IPv4 and IPv6 hosts and networks
+ inet
+ 12 or 24 bytes
+ IPv4 and IPv6 hosts and networks
|
- macaddr
- 6 bytes
- MAC addresses
+ macaddr
+ 6 bytes
+ MAC addresses
cidr> Type Input Examples
-
- cidr Input
- cidr Output
- abbrev (cidr )
-
+
+ cidr Input
+ cidr Output
+ abbrev (cidr )
+
- |
- 192.168.100.128/25
- 192.168.100.128/25
- 192.168.100.128/25
-
- |
- 192.168/24
- 192.168.0.0/24
- 192.168.0/24
-
- |
- 192.168/25
- 192.168.0.0/25
- 192.168.0.0/25
-
- |
- 192.168.1
- 192.168.1.0/24
- 192.168.1/24
-
- |
- 192.168
- 192.168.0.0/24
- 192.168.0/24
-
- |
- 128.1
- 128.1.0.0/16
- 128.1/16
-
- |
- 128
- 128.0.0.0/16
- 128.0/16
-
- |
- 128.1.2
- 128.1.2.0/24
- 128.1.2/24
-
- |
- 10.1.2
- 10.1.2.0/24
- 10.1.2/24
-
- |
- 10.1
- 10.1.0.0/16
- 10.1/16
-
- |
- 10
- 10.0.0.0/8
- 10/8
-
- |
- 10.1.2.3/32
- 10.1.2.3/32
+ |
+ 192.168.100.128/25
+ 192.168.100.128/25
+ 192.168.100.128/25
+
+ |
+ 192.168/24
+ 192.168.0.0/24
+ 192.168.0/24
+
+ |
+ 192.168/25
+ 192.168.0.0/25
+ 192.168.0.0/25
+
+ |
+ 192.168.1
+ 192.168.1.0/24
+ 192.168.1/24
+
+ |
+ 192.168
+ 192.168.0.0/24
+ 192.168.0/24
+
+ |
+ 128.1
+ 128.1.0.0/16
+ 128.1/16
+
+ |
+ 128
+ 128.0.0.0/16
+ 128.0/16
+
+ |
+ 128.1.2
+ 128.1.2.0/24
+ 128.1.2/24
+
+ |
+ 10.1.2
+ 10.1.2.0/24
+ 10.1.2/24
+
+ |
+ 10.1
+ 10.1.0.0/16
+ 10.1/16
+
+ |
+ 10
+ 10.0.0.0/8
+ 10/8
+
+ |
+ 10.1.2.3/32
+ 10.1.2.3/32
10.1.2.3/32
-
+
|
- 2001:4f8:3:ba::/64
- 2001:4f8:3:ba::/64
- 2001:4f8:3:ba::/64
-
+ 2001:4f8:3:ba::/64
+ 2001:4f8:3:ba::/64
+ 2001:4f8:3:ba::/64
+
|
- 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128
- 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128
- 2001:4f8:3:ba:2e0:81ff:fe22:d1f1
-
- |
- ::ffff:1.2.3.0/120
- ::ffff:1.2.3.0/120
+ 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128
+ 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/128
+ 2001:4f8:3:ba:2e0:81ff:fe22:d1f1
+
+ |
+ ::ffff:1.2.3.0/120
+ ::ffff:1.2.3.0/120
::ffff:1.2.3/120
-
- |
- ::ffff:1.2.3.0/128
- ::ffff:1.2.3.0/128
+
+ |
+ ::ffff:1.2.3.0/128
+ ::ffff:1.2.3.0/128
::ffff:1.2.3.0/128
-
+
- If you do not like the output format for inet or
- cidr values, try the functions host>,
- text>, and abbrev>.
-
+ If you do not like the output format for inet or
+ cidr values, try the functions host>,
+ text>, and abbrev>.
+
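For instance, using values from the table above:
SELECT abbrev(cidr '10.1.0.0/16');    -- 10.1/16
SELECT host(inet '192.168.1.5/24');   -- 192.168.1.5
SELECT text(inet '192.168.1.5/24');   -- 192.168.1.5/24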
|
- Name
- References
- Description
- Value Example
+ Name
+ References
+ Description
+ Value Example
|
- oid>
- any
- numeric object identifier
- 564182>
+ oid>
+ any
+ numeric object identifier
+ 564182>
|
- regproc>
- pg_proc>
- function name
- sum>
+ regproc>
+ pg_proc>
+ function name
+ sum>
|
- regprocedure>
- pg_proc>
- function with argument types
- sum(int4)>
+ regprocedure>
+ pg_proc>
+ function with argument types
+ sum(int4)>
|
- regoper>
- pg_operator>
- operator name
- +>
+ regoper>
+ pg_operator>
+ operator name
+ +>
|
- regoperator>
- pg_operator>
- operator with argument types
- *(integer,integer)> or -(NONE,integer)>
+ regoperator>
+ pg_operator>
+ operator with argument types
+ *(integer,integer)> or -(NONE,integer)>
|
- regclass>
- pg_class>
- relation name
- pg_type>
+ regclass>
+ pg_class>
+ relation name
+ pg_type>
|
- regtype>
- pg_type>
- data type name
- integer>
+ regtype>
+ pg_type>
+ data type name
+ integer>
A final identifier type used by the system is tid>, or tuple
- identifier. This is the data type of the system column
+ identifier (row identifier). This is the data type of the system column
ctid>. A tuple ID is a pair
(block number, tuple index within block) that identifies the
- physical location of the tuple within its table.
+ physical location of the row within its table.
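For example (mytable is a hypothetical table name):
SELECT ctid, * FROM mytable LIMIT 3;
-- ctid values look like (0,1), (0,2), ...: (block number, tuple index)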
|
- Name
- Description
+ Name
+ Description
|
- any>
- Indicates that a function accepts any input data type whatever.
+ any>
+ Indicates that a function accepts any input data type whatever.
|
- anyarray>
- Indicates that a function accepts any array data type
- (see ).
+ anyarray>
+ Indicates that a function accepts any array data type
+ (see ).
|
- anyelement>
- Indicates that a function accepts any data type
- (see ).
+ anyelement>
+ Indicates that a function accepts any data type
+ (see ).
|
- cstring>
- Indicates that a function accepts or returns a null-terminated C string.
+ cstring>
+ Indicates that a function accepts or returns a null-terminated C string.
|
- internal>
- Indicates that a function accepts or returns a server-internal
- data type.
+ internal>
+ Indicates that a function accepts or returns a server-internal
+ data type.
|
- language_handler>
- A procedural language call handler is declared to return language_handler>.
+ language_handler>
+ A procedural language call handler is declared to return language_handler>.
|
- record>
- Identifies a function returning an unspecified row type.
+ record>
+ Identifies a function returning an unspecified row type.
|
- trigger>
- A trigger function is declared to return trigger.>
+ trigger>
+ A trigger function is declared to return trigger.>
|
- void>
- Indicates that a function returns no value.
+ void>
+ Indicates that a function returns no value.
|
- opaque>
- An obsolete type name that formerly served all the above purposes.
+ opaque>
+ An obsolete type name that formerly served all the above purposes.
Data Definition
The identity (transaction ID) of the inserting transaction for
- this tuple. (Note: In this context, a tuple is an individual
- state of a row; each update of a row creates a new tuple for the
- same logical row.)
+ this row version. (A row version is an individual state of a
+ row; each update of a row creates a new row version for the same
+ logical row.)
The identity (transaction ID) of the deleting transaction, or
- zero for an undeleted tuple. It is possible for this column to
- be nonzero in a visible tuple: That usually indicates that the
+ zero for an undeleted row version. It is possible for this column to
+ be nonzero in a visible row version: That usually indicates that the
deleting transaction hasn't committed yet, or that an attempted
deletion was rolled back.
- The physical location of the tuple within its table. Note that
+ The physical location of the row version within its table. Note that
although the ctid can be used to
- locate the tuple very quickly, a row's
+ locate the row version very quickly, a row's
ctid will change each time it is
updated or moved by VACUUM FULL>. Therefore
ctid is useless as a long-term row
of 2
32> (4 billion) SQL commands
within a single transaction. In practice this limit is not a
problem --- note that the limit is on number of
- SQL commands, not number of tuples processed.
+ SQL commands, not number of rows processed.
- In some cases you may wish to know which table a particular tuple
+ In some cases you may wish to know which table a particular row
originated from. There is a system column called
TABLEOID in each table which can tell you the
originating table:
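A minimal sketch, assuming a hypothetical table mytable:
SELECT relname FROM pg_class
WHERE oid = (SELECT tableoid FROM mytable LIMIT 1);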
indicates double precision . Many of these functions
are provided in multiple forms with different argument types.
Except where noted, any given form of a function returns the same
- datatype as its argument.
+ data type as its argument.
The functions working with double precision data are mostly
implemented on top of the host system's C library; accuracy and behavior in
boundary cases may therefore vary depending on the host system.
|
\f>
- formfeed, as in C
+ form feed, as in C
|
- In addition to these functions, the SQL OVERLAPS> keyword is
+ In addition to these functions, the SQL OVERLAPS> operator is
supported:
( start1 , end1 ) OVERLAPS ( start2 , end2 )
This expression yields true when two time periods (defined by their
endpoints) overlap, false when they do not overlap. The endpoints
- can be specified as pairs of dates, times, or timestamps; or as
- a date, time, or timestamp followed by an interval.
+ can be specified as pairs of dates, times, or time stamps; or as
+ a date, time, or time stamp followed by an interval.
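For example:
SELECT (DATE '2001-02-16', DATE '2001-12-21') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');   -- true
SELECT (DATE '2001-02-16', DATE '2001-12-21') OVERLAPS
       (DATE '2002-10-30', DATE '2002-10-31');   -- false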
the intent is to allow a single transaction to have a consistent
notion of the current
time, so that multiple
modifications within the same transaction bear the same
- timestamp. timeofday()
+ time stamp. timeofday()
returns the wall-clock time and does advance during transactions.
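The difference is easy to see within one transaction:
BEGIN;
SELECT now(), timeofday();
-- some time later, still in the same transaction:
SELECT now(), timeofday();   -- now() is unchanged, timeofday() has advanced
COMMIT;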
|
hostmask (inet )
inet
- construct hostmask for network
+ construct host mask for network
hostmask('192.168.23.20/30')
0.0.0.3
The amcostestimate function is given a list of WHERE clauses that have
been determined to be usable with the index. It must return estimates
of the cost of accessing the index and the selectivity of the WHERE
- clauses (that is, the fraction of main-table tuples that will be
+ clauses (that is, the fraction of main-table rows that will be
retrieved during the index scan). For simple cases, nearly all the
work of the cost estimator can be done by calling standard routines
in the optimizer; the point of having an amcostestimate function is
The index access costs should be computed in the units used by
src/backend/optimizer/path/costsize.c : a sequential disk block fetch
has cost 1.0, a nonsequential fetch has cost random_page_cost, and
- the cost of processing one index tuple should usually be taken as
+ the cost of processing one index row should usually be taken as
cpu_index_tuple_cost (which is a user-adjustable optimizer parameter).
In addition, an appropriate multiple of cpu_operator_cost should be charged
for any comparison operators invoked during index processing (especially
The access costs should include all disk and CPU costs associated with
scanning the index itself, but NOT the costs of retrieving or processing
- the main-table tuples that are identified by the index.
+ the main-table rows that are identified by the index.
The start-up cost
is the part of the total scan cost that must be expended
- before we can begin to fetch the first tuple. For most indexes this can
+ before we can begin to fetch the first row. For most indexes this can
be taken as zero, but an index type with a high start-up cost might want
to set it nonzero.
The indexSelectivity should be set to the estimated fraction of the main
- table tuples that will be retrieved during the index scan. In the case
+ table rows that will be retrieved during the index scan. In the case
of a lossy index, this will typically be higher than the fraction of
- tuples that actually pass the given qual conditions.
+ rows that actually pass the given qual conditions.
The indexCorrelation should be set to the correlation (ranging between
-1.0 and 1.0) between the index order and the table order. This is used
- to adjust the estimate for the cost of fetching tuples from the main
+ to adjust the estimate for the cost of fetching rows from the main
table.
- Estimate and return the fraction of main-table tuples that will be visited
+ Estimate and return the fraction of main-table rows that will be visited
based on the given qual conditions. In the absence of any index-type-specific
knowledge, use the standard optimizer function clauselist_selectivity() :
- Estimate the number of index tuples that will be visited during the
+ Estimate the number of index rows that will be visited during the
scan. For many index types this is the same as indexSelectivity times
- the number of tuples in the index, but it might be more. (Note that the
- index's size in pages and tuples is available from the IndexOptInfo struct.)
+ the number of rows in the index, but it might be more. (Note that the
+ index's size in pages and rows is available from the IndexOptInfo struct.)
/*
* Our generic assumption is that the index pages will be read
* sequentially, so they have cost 1.0 each, not random_page_cost.
- * Also, we charge for evaluation of the indexquals at each index tuple.
+ * Also, we charge for evaluation of the indexquals at each index row.
* All the costs are assumed to be paid incrementally during the scan.
*/
cost_qual_eval(&index_qual_cost, indexQuals);
PostgreSQL>]]>
--enable-thread-safety
- Allow separate libpq and ecpg threads to safely control their
- private connection handles.
+ Allow separate threads in libpq and ECPG programs to safely
+ control their private connection handles.
Calling Stored Functions
- PostgreSQL's jdbc driver fully
+ PostgreSQL's JDBC driver fully
supports calling
PostgreSQL stored
functions.
When calling a function that returns
a refcursor you must cast the return type
- of getObject to
+ of getObject to
a ResultSet
PostgreSQL is an extensible database
- system. You can add your own functions to the backend , which can
+ system. You can add your own functions to the server , which can
then be called from queries, or even add your own data types. As
these are facilities unique to
PostgreSQL ,
we support them from Java, with a set of extension
public Fastpath getFastpathAPI() throws SQLException
- This returns the Fast path API for the
+ This returns the fast-path API for the
current connection. It is primarily used by the Large Object
Returns:
- Fastpath object allowing access to functions on the
+ Fastpath> object allowing access to functions on the
Throws:
- SQLException> by Fastpath when initializing for first time
+ SQLException> by Fastpath> when initializing for first time
exists within the
libpq C interface, and allows a client machine
- to execute a function on the database backend . Most client code
+ to execute a function on the database server . Most client code
will not need to use this method, but it is provided because the
Large Object
API uses it.
the getFastpathAPI() is an extension method,
not part of
JDBC . Once you have a
Fastpath instance, you can use the
- fastpath() methods to execute a backend
+ fastpath() methods to execute a server
function.
FastpathArg args[]) throws SQLException
- Send a function call to the PostgreSQL backend.
+ Send a function call to the PostgreSQL server.
resulttype> - True if the result is an integer, false
for
other results
- args> - FastpathArguments to pass to fastpath
+ args> - FastpathArguments to pass to fast-path call
FastpathArg args[]) throws SQLException
- Send a function call to the PostgreSQL backend by name.
+ Send a function call to the PostgreSQL server by name.
The mapping for the procedure name to function id needs to
exist, usually to an earlier call to addfunction() . This is
the preferred method to call, as function id's can/may change
- between versions of the backend . For an example of how this
+ between versions of the server . For an example of how this
works, refer to org.postgresql.LargeObject
resulttype> - True if the result is an integer, false
for
other results
- args> - FastpathArguments to pass to fastpath
+ args> - FastpathArguments to pass to fast-path call
- Each fastpath call requires an array of arguments, the number and
+ Each fast-path call requires an array of arguments, the number and
type dependent on the function being called. This class
implements methods needed to provide this capability.
Cloneable
This implements a line consisting of two points. Currently line is
-not yet implemented in the backend , but this class ensures that when
+not yet implemented in the server , but this class ensures that when
it's done we're ready for it.
Variables
- The currently recognized parameter keywords are:
+ The currently recognized parameter key words are:
This is the predecessor of PQconnectdb with a fixed
set of parameters. It has the same functionality except that the
- missing parameters will always take on default values. Write NULL or an
+ missing parameters will always take on default values. Write NULL or an
empty string for any one of the fixed parameters that is to be defaulted.
Certain parameter values are reported by the server automatically at
connection startup or whenever their values change.
PQparameterStatus> can be used to interrogate these settings.
-It returns the current value of a parameter if known, or NULL if the parameter
+It returns the current value of a parameter if known, or NULL if the parameter
is not known.
startup is complete, but it could theoretically change during a reset.
The 3.0 protocol will normally be used when communicating with
PostgreSQL> 7.4 or later servers; pre-7.4 servers support
-only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.)
+only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.)
nParams> is the number of parameters supplied; it is the length
of the arrays
paramTypes[]>, paramValues[]>,
paramLengths[]>, and paramFormats[]>. (The
-array pointers may be NULL when nParams> is zero.)
-paramTypes[]> specifies, by OID, the datatypes to be assigned to
-the parameter symbols. If paramTypes> is NULL, or any particular
-element in the array is zero, the backend assigns a data type to the parameter
+array pointers may be NULL when nParams> is zero.)
+paramTypes[]> specifies, by OID, the data types to be assigned to
+the parameter symbols. If paramTypes> is NULL, or any particular
+element in the array is zero, the server assigns a data type to the parameter
symbol in the same way it would do for an untyped literal string.
paramValues[]> specifies the actual values of the parameters.
-A NULL pointer in this array means the corresponding parameter is NULL ;
+A null pointer in this array means the corresponding parameter is null ;
otherwise the pointer points to a zero-terminated text string (for text
-format) or binary data in the format expected by the backend (for binary
+format) or binary data in the format expected by the server (for binary
format).
paramLengths[]> specifies the actual data lengths of
-binary-format parameters. It is ignored for NULL parameters and text-format
-parameters. The array pointer may be NULL when there are no binary
+binary-format parameters. It is ignored for null parameters and text-format
+parameters. The array pointer may be null when there are no binary
parameters.
paramFormats[]> specifies whether parameters are text (put a zero
in the array) or binary (put a one in the array). If the array pointer is
-NULL then all parameters are presumed to be text.
+null then all parameters are presumed to be text.
resultFormat> is zero to obtain results in text format, or one to
obtain results in binary format. (There is not currently a provision to
obtain different result columns in different formats, although that is
-NULL is returned if the column number is out of range.
+NULL is returned if the column number is out of range.
For data in text format, the value returned by PQgetvalue
is a null-terminated character string representation
of the field value. For data in binary format, the value is in the binary
-representation determined by the datatype's typsend> and
+representation determined by the data type's typsend> and
typreceive> functions. (The value is actually followed by
a zero byte in this case too, but that is not ordinarily useful, since
the value is likely to contain embedded nulls.)
-An empty string is returned if the field value is NULL . See
-PQgetisnull> to distinguish NULL s from empty-string values.
+An empty string is returned if the field value is null . See
+PQgetisnull> to distinguish null values from empty-string values.
PQunescapeBytea ,
and PQnotifies .
It is needed by Win32, which can not free memory across
- DLL's, unless multithreaded DLL's (/MD in VC6) are used.
+ DLLs, unless multithreaded DLLs (/MD in VC6) are used.
On other platforms it is the same as free()>.
parameters to be passed to the function; they must match the declared
function argument list. When the
isint> field of a
parameter
- struct is true,
+ structure is true,
the
u.integer> value is sent to the server as an integer
of the indicated length (this must be 1, 2, or 4 bytes); proper
byte-swapping occurs. When
isint> is false, the
indicated number of bytes at
*u.ptr> are sent with no
processing; the data must be in the format expected by the server for
- binary transmission of the function's argument datatype.
+ binary transmission of the function's argument data type.
result_buf is the buffer in which
to place the return value. The caller must have allocated
sufficient space to store the return value. (There is no check!)
-Note that it is not possible to handle NULL arguments, NULL results, nor
+Note that it is not possible to handle null arguments, null results, nor
set-valued results when using this interface.
In
PostgreSQL 6.4 and later,
- the be_pid is that of the notifying backend process,
- whereas in earlier versions it was always the PID of your own
- backend process.
+ the be_pid is that of the notifying server process,
+ whereas in earlier versions it was always the PID of your own
+ backend process.
PQexec in a string that could contain additional
commands, the application must continue fetching results via
PQgetResult> after completing the COPY
- sequence. Only when PQgetResult> returns NULL is it certain
+ sequence. Only when PQgetResult> returns NULL is it certain
that the PQexec command string is done and it is
safe to issue more commands.
Transmits the COPY data in the specified
buffer>, of length
nbytes>, to the server. The result is 1 if the data was sent,
zero if it was not sent because the attempt would block (this case is only
-possible if the connection is in nonblock mode), or -1 if an error occurred.
+possible if the connection is in nonblocking mode), or -1 if an error occurred.
(Use PQerrorMessage to retrieve details if the return
value is -1. If the value is zero, wait for write-ready and try again.)
-The application may divide the COPY datastream into buffer loads of any
-convenient size. Bufferload boundaries have no semantic significance when
-sending. The contents of the datastream must match the data format expected
+The application may divide the COPY data stream into buffer loads of any
+convenient size. Buffer- load boundaries have no semantic significance when
+sending. The contents of the data stream must match the data format expected
by the COPY> command; see
for details.
Ends the
COPY_IN> operation successfully if errormsg>
-is NULL. If errormsg> is not NULL then the COPY>
+is NULL. If errormsg> is not NULL then the COPY>
is forced to fail, with the string pointed to by
errormsg>
used as the error message. (One should not assume that this exact error
message will come back from the server, however, as the server might have
The result is 1 if the termination data was sent,
zero if it was not sent because the attempt would block (this case is only
-possible if the connection is in nonblock mode), or -1 if an error occurred.
+possible if the connection is in nonblocking mode), or -1 if an error occurred.
(Use PQerrorMessage to retrieve details if the return
value is -1. If the value is zero, wait for write-ready and try again.)
Data is always returned one data row at a time; if only a partial row
is available, it is not returned. Successful return of a data row
involves allocating a chunk of memory to hold the data. The
-buffer> parameter must be non-NULL. *buffer>
-is set to point to the allocated memory, or to NULL in cases where no
-buffer is returned. A non-NULL result buffer must be freed using
+buffer> parameter must be non-NULL. *buffer>
+is set to point to the allocated memory, or to NULL in cases where no
+buffer is returned. A non-NULL result buffer must be freed using
PQfreemem> when no longer needed.
-The COPY data stream sent by a series of calls to
+The COPY data stream sent by a series of calls to
PQputline has the same format as that returned by
PQgetlineAsync , except that applications are not
obliged to send exactly one data row per PQputline
-These functions read and write files in the server's filesystem, using the
+These functions read and write files in the server's file system, using the
permissions of the database's owning user. Therefore, their use is restricted
to superusers. (In contrast, the client-side import and export functions
-read and write files in the client's filesystem, using the permissions of
+read and write files in the client's file system, using the permissions of
the client program. Their use is not restricted.)
In normal
PostgreSQL operation, an
UPDATE> or DELETE> of a row does not
- immediately remove the old tuple> (version of the row).
+ immediately remove the old version of the row.
This approach is necessary to gain the benefits of multiversion
- concurrency control (see ): the tuple
+ concurrency control (see ): the row version
must not be deleted while it is still potentially visible to other
- transactions. But eventually, an outdated or deleted tuple is no
+ transactions. But eventually, an outdated or deleted row version is no
longer of interest to any transaction. The space it occupies must be
- reclaimed for reuse by new tuples, to avoid infinite growth of disk
+ reclaimed for reuse by new rows, to avoid infinite growth of disk
space requirements. This is done by running VACUUM>.
The standard form of VACUUM> is best used with the goal of
maintaining a fairly level steady-state usage of disk space. The standard
- form finds old tuples and makes their space available for re-use within
+ form finds old row versions and makes their space available for re-use within
the table, but it does not try very hard to shorten the table file and
return disk space to the operating system. If you need to return disk
space to the operating system you can use VACUUM FULL> ---
VACUUM FULL> is recommended for cases where you know you have
- deleted the majority of tuples in a table, so that the steady-state size
+ deleted the majority of rows in a table, so that the steady-state size
of the table can be shrunk substantially with VACUUM FULL>'s
more aggressive approach.
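A minimal sketch (mytable is a hypothetical table name):
VACUUM mytable;        -- reclaims space for reuse within the table
VACUUM FULL mytable;   -- also tries to shrink the table file itself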
PostgreSQL 's MVCC transaction semantics
depend on being able to compare transaction ID (
XID>)
- numbers: a tuple with an insertion XID greater than the current
+ numbers: a row version with an insertion XID greater than the current
transaction's XID is in the future> and should not be visible
to the current transaction. But since transaction IDs have limited size
(32 bits at this writing) a cluster that runs for a long time (more
that for every normal XID, there are two billion XIDs that are
older> and two billion that are newer>; another
way to say it is that the normal XID space is circular with no
- endpoint. Therefore, once a tuple has been created with a particular
- normal XID, the tuple will appear to be in the past> for
+ endpoint. Therefore, once a row version has been created with a particular
+ normal XID, the row version will appear to be in the past> for
the next two billion transactions, no matter which normal XID we are
- talking about. If the tuple still exists after more than two billion
+ talking about. If the row version still exists after more than two billion
transactions, it will suddenly appear to be in the future. To
- prevent data loss, old tuples must be reassigned the XID
+ prevent data loss, old row versions must be reassigned the XID
FrozenXID> sometime before they reach the
two-billion-transactions-old mark. Once they are assigned this
special XID, they will appear to be in the past> to all
normal transactions regardless of wraparound issues, and so such
- tuples will be good until deleted, no matter how long that is. This
+ row versions will be good until deleted, no matter how long that is. This
reassignment of XID is handled by VACUUM>.
VACUUM>'s normal policy is to reassign FrozenXID>
- to any tuple with a normal XID more than one billion transactions in the
+ to any row version with a normal XID more than one billion transactions in the
past. This policy preserves the original insertion XID until it is not
- likely to be of interest anymore. (In fact, most tuples will probably
+ likely to be of interest anymore. (In fact, most row versions will probably
live and die without ever being frozen>.) With this policy,
the maximum safe interval between VACUUM> runs on any table
is exactly one billion transactions: if you wait longer, it's possible
- that a tuple that was not quite old enough to be reassigned last time
+ that a row version that was not quite old enough to be reassigned last time
is now more than two billion transactions old and has wrapped around
into the future --- i.e., is lost to you. (Of course, it'll reappear
after another two billion transactions, but that's no help.)
VACUUM> with the FREEZE> option uses a more
- aggressive freezing policy: tuples are frozen if they are old enough
+ aggressive freezing policy: row versions are frozen if they are old enough
to be considered good by all open transactions. In particular, if a
VACUUM FREEZE> is performed in an otherwise-idle
- database, it is guaranteed that all> tuples in that
+ database, it is guaranteed that all> row versions in that
database will be frozen. Hence, as long as the database is not
modified in any way, it will not need subsequent vacuuming to avoid
transaction ID wraparound problems. This technique is used by
The simplest production-grade approach to managing log output is to
- send it all to syslog> and let syslog>
- deal with file rotation. To do this, set the configuration parameter
- syslog> to 2 (to log to syslog> only) in
- postgresql.conf>. Then you can send a SIGHUP
- signal to the syslog> daemon whenever you want to force it
- to start writing a new log file. If you want to automate log rotation,
- the logrotate program can be configured to work with log files from syslog.
+ send it all to syslog> and let
+ syslog> deal with file rotation. To do this, set the
+ configuration parameter syslog> to 2 (to log to
+ syslog> only) in postgresql.conf>. Then
+ you can send a SIGHUP signal to the
+ syslog> daemon whenever you want to force it to
+ start writing a new log file. If you want to automate log
+ rotation, the logrotate program can be
+ configured to work with log files from syslog.
|
pg_stat_all_tables>
For each table in the current database, total numbers of
- sequential and index scans, total numbers of tuples returned by
- each type of scan, and totals of tuple insertions, updates,
+ sequential and index scans, total numbers of rows returned by
+ each type of scan, and totals of row insertions, updates,
and deletions.
|
pg_stat_all_indexes>
For each index in the current database, the total number
- of index scans that have used that index, the number of index tuples
- read, and the number of successfully fetched heap tuples. (This may
- be less when there are index entries pointing to expired heap tuples.)
+ of index scans that have used that index, the number of index rows
+ read, and the number of successfully fetched heap rows. (This may
+ be less when there are index entries pointing to expired heap rows.)
pg_stat_get_tuples_returned (oid )
bigint
- Number of tuples read by sequential scans when argument is a table,
- or number of index tuples read when argument is an index
+ Number of rows read by sequential scans when argument is a table,
+ or number of index rows read when argument is an index
pg_stat_get_tuples_fetched (oid )
bigint
- Number of valid (unexpired) table tuples fetched by sequential scans
+ Number of valid (unexpired) table rows fetched by sequential scans
when argument is a table, or fetched by index scans using this index
when argument is an index
pg_stat_get_tuples_inserted (oid )
bigint
- Number of tuples inserted into table
+ Number of rows inserted into table
pg_stat_get_tuples_updated (oid )
bigint
- Number of tuples updated in table
+ Number of rows updated in table
pg_stat_get_tuples_deleted (oid )
bigint
- Number of tuples deleted from table
+ Number of rows deleted from table
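For example (mytable is a hypothetical table name; the regclass cast supplies the required OID argument):
SELECT pg_stat_get_tuples_inserted('mytable'::regclass);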
this is only in the unlikely event that you do not want to try out
your translated messages. When you configure your source tree, be
sure to use the --enable-nls option. This will
- also check for the libintl library and the
+ also check for the libintl library and the
msgfmt program, which all end users will need
anyway. To try out your work, follow the applicable portions of
the installation instructions.
implementation. Later, we will try to arrange it so that if you
use a packaged source distribution, you won't need
xgettext . (From CVS, you will still need
- it.) GNU gettext 0.10.36 or later is currently recommended.
+ it.) GNU Gettext 0.10.36 or later is currently recommended.
- CATALOG_NAME
+ CATALOG_NAME
- AVAIL_LANGUAGES
+ AVAIL_LANGUAGES
- GETTEXT_FILES
+ GETTEXT_FILES
- GETTEXT_TRIGGERS
+ GETTEXT_TRIGGERS
is assumed to contain 8 bits. In addition, the term
item
refers to an individual data value that is stored on a page. In a table,
-an item is a tuple (row); in an index, an item is an index entry.
+an item is a row; in an index, an item is an index entry.
|
Free space
-The unallocated space. All new tuples are allocated from here, generally from the end.
+The unallocated space. All new rows are allocated from here, generally from the end.
|
and a version indicator. Beginning with
PostgreSQL 7.3 the version number is 1; prior
releases used version number 0. (The basic page layout and header format
- has not changed, but the layout of heap tuple headers has.) The page size
+ has not changed, but the layout of heap row headers has.) The page size
is basically only present as a cross-check; there is no support for having
more than one page size in an installation.
- All table tuples are structured the same way. There is a fixed-size
+ All table rows are structured the same way. There is a fixed-size
header (occupying 23 bytes on most machines), followed by an optional null
bitmap, an optional object ID field, and the user data. The header is
detailed
in . The actual user data
- (fields of the tuple) begins at the offset indicated by
+ (columns of the row) begins at the offset indicated by
t_hoff>, which must always be a multiple of the MAXALIGN
distance for the platform.
The null bitmap is
t_xvac
TransactionId
4 bytes
- XID for VACUUM operation moving tuple
+ XID for VACUUM operation moving row version
|
t_ctid
ItemPointerData
6 bytes
- current TID of this or newer tuple
+ current TID of this or newer row version
|
t_natts
from_collapse_limit> (so that explicit joins and subselects
act similarly) or set join_collapse_limit> to 1 (if you want
to control join order with explicit joins). But you might set them
- differently if you are trying to fine-tune the tradeoff between planning
+ differently if you are trying to fine-tune the trade-off between planning
time and run time.
-
Supported Argument and Result Datatypes
+
Supported Argument and Result Data Types
Functions written in
PL/pgSQL can accept
- as arguments any scalar or array datatype supported by the server,
+ as arguments any scalar or array data type supported by the server,
and they can return a result of any of these types. They can also
accept or return any composite type (row type) specified by name.
It is also possible to declare a
PL/pgSQL
PL/pgSQL> functions may also be declared to accept
and return the polymorphic> types
anyelement and anyarray . The actual
- datatypes handled by a polymorphic function can vary from call to
+ data types handled by a polymorphic function can vary from call to
call, as discussed in .
An example is shown in .
PL/pgSQL> functions can also be declared to return
- a set>, or table, of any datatype they can return a single
+ a set>, or table, of any data type they can return a single
instance of. Such a function generates its output by executing
RETURN NEXT> for each desired element of the result set.
When the return type of a
PL/pgSQL
function is declared as a polymorphic type (anyelement
or anyarray ), a special parameter $0
- is created. Its datatype is the actual return type of the function,
+ is created. Its data type is the actual return type of the function,
as deduced from the actual input types (see
linkend="extend-types-polymorphic">).
This allows the function to access its actual return type
$0 is initialized to NULL and can be modified by
the function, so it can be used to hold the return value if desired,
though that is not required. $0 can also be
- given an alias. For example, this function works on any datatype
+ given an alias. For example, this function works on any data type
that has a +> operator:
CREATE FUNCTION add_three_values(anyelement, anyelement, anyelement)
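-- a possible completion of this example, sketched from the description
-- above: $0 holds a value of the function's deduced return type
RETURNS anyelement AS '
DECLARE
    result ALIAS FOR $0;
BEGIN
    result := $1 + $2 + $3;
    RETURN result;
END;
' LANGUAGE plpgsql;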
-
+
Frontend/Backend Protocol
Formats and Format Codes
- Data of a particular datatype might be transmitted in any of several
+ Data of a particular data type might be transmitted in any of several
different
formats>. As of PostgreSQL> 7.4
the only supported formats are text> and binary>,
but the protocol makes provision for future extensions. The desired
The text representation of values is whatever strings are produced
and accepted by the input/output conversion functions for the
- particular datatype. In the transmitted representation, there is
+ particular data type. In the transmitted representation, there is
no trailing null character; the frontend must add one to received
values if it wants to process them as C strings.
(The text format does not allow embedded nulls, by the way.)
Binary representations for integers use network byte order (most
- significant byte first). For other datatypes consult the documentation
+ significant byte first). For other data types consult the documentation
or source code to learn about the binary representation. Keep in mind
- that binary representations for complex datatypes may change across
+ that binary representations for complex data types may change across
server versions; the text format is usually the more portable choice.
The response to a SELECT> query (or other queries that
- return rowsets, such as EXPLAIN> or SHOW>)
+ return row sets, such as EXPLAIN> or SHOW>)
normally consists of RowDescription, zero or more
DataRow messages, and then CommandComplete.
COPY> to or from the frontend invokes special protocol
In the extended protocol, the frontend first sends a Parse message,
which contains a textual query string, optionally some information
- about datatypes of parameter placeholders, and the
+ about data types of parameter placeholders, and the
name of a destination prepared-statement object (an empty string
selects the unnamed prepared statement). The response is
- either ParseComplete or ErrorResponse. Parameter datatypes may be
+ either ParseComplete or ErrorResponse. Parameter data types may be
specified by OID; if not given, the parser attempts to infer the
- datatypes in the same way as it would do for untyped literal string
+ data types in the same way as it would do for untyped literal string
constants.
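 At the SQL level, PREPARE and EXECUTE are a close analogue of the Parse
 and Execute messages; a minimal sketch, using a hypothetical table items:
PREPARE fetch_item (integer) AS
    SELECT * FROM items WHERE item_id = $1;
EXECUTE fetch_item(42);
 Like Parse, PREPARE names the statement and declares its parameter
 types up front.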
unnamed portal) and
a maximum result-row count (zero meaning fetch all rows>).
The result-row count is only meaningful for portals
- containing commands that return rowsets; in other cases the command is
+ containing commands that return row sets; in other cases the command is
always executed to completion, and the row count is ignored.
The possible
responses to Execute are the same as those described above for queries
SET> SQL command executed by the frontend, and this case
is effectively synchronous --- but it is also possible for parameter
status changes to occur because the administrator changed a configuration
- file and then SIGHUP'd the postmaster. Also, if a SET command is
+ file and then sent the SIGHUP signal to the postmaster. Also, if a SET command is
rolled back, an appropriate ParameterStatus message will be generated
to report the current effective value.
- Specifies that a cleartext password is required.
+ Specifies that a clear-text password is required.
- Data that forms part of a COPY data stream. Messages sent
+ Data that forms part of a COPY data stream. Messages sent
from the backend will always correspond to single data rows,
- but messages sent by frontends may divide the datastream
+ but messages sent by frontends may divide the data stream
arbitrarily.
- Specifies the object ID of the parameter datatype.
+ Specifies the object ID of the parameter data type.
- The number of parameter datatypes specified
+ The number of parameter data types specified
(may be zero). Note that this is not an indication of
the number of parameters that might appear in the
query string, only the number that the frontend wants to
- Specifies the object ID of the parameter datatype.
+ Specifies the object ID of the parameter data type.
Placing a zero here is equivalent to leaving the type
unspecified.
- The object ID of the field's datatype.
+ The object ID of the field's data type.
- The datatype size (see pg_type.typlen>).
+ The data type size (see pg_type.typlen>).
Note that negative values denote variable-width types.
-
+
Queries
When a table reference names a table that is the supertable of a
table inheritance hierarchy, the table reference produces rows of
not only that table but all of its subtable successors, unless the
- keyword ONLY> precedes the table name. However, the
+ key word ONLY> precedes the table name. However, the
reference produces only the columns that appear in the named table
--- any columns added in subtables are ignored.
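 A minimal sketch, assuming a supertable cities with a subtable capitals:
SELECT name FROM ONLY cities;
 This returns rows stored in cities itself, skipping any rows that
 physically belong to capitals.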
determined with the > operator.
- Actually, PostgreSQL> uses the default btree
- operator class> for the column's datatype to determine the sort
+ Actually, PostgreSQL> uses the default B-tree
+ operator class> for the column's data type to determine the sort
ordering for ASC> and DESC>. Conventionally,
- datatypes will be set up so that the < and
+ data types will be set up so that the < and
> operators correspond to this sort ordering,
- but a user-defined datatype's designer could choose to do something
+ but a user-defined data type's designer could choose to do something
different.
and a rich set of geometric types.
PostgreSQL can be customized with an
arbitrary number of user-defined data types. Consequently, type
- names are not syntactical keywords, except where required to
+ names are not syntactical key words, except where required to
support special cases in the
SQL standard.
name
- The name (optionally schema-qualified) of a sequence to be altered.
+ The name (optionally schema-qualified) of a sequence to be altered.
increment
- The
- INCREMENT BY increment
- clause is optional. A positive value will make an
- ascending sequence, a negative one a descending sequence.
- If unspecified, the old increment value will be maintained.
+ The clause INCREMENT BY increment is
+ optional. A positive value will make an ascending sequence, a
+ negative one a descending sequence. If unspecified, the old
+ increment value will be maintained.
minvalue
- NO MINVALUE
+ NO MINVALUE
- The optional clause MINVALUE
- minvalue
- determines the minimum value a sequence can generate. If
- NO MINVALUE is specified, the defaults of 1 and -2^63-1 for
- ascending and descending sequences, respectively, will be used.
- If neither option is specified, the current minimum value will
- be maintained.
+ The optional clause MINVALUE
+ minvalue determines
+ the minimum value a sequence can generate. If NO
+ MINVALUE is specified, the defaults of 1 and
+ -2^63-1 for ascending and descending sequences,
+ respectively, will be used. If neither option is specified,
+ the current minimum value will be maintained.
maxvalue
- NO MAXVALUE
+ NO MAXVALUE
- The optional clause MAXVALUE
- maxvalue
- determines the maximum value for the sequence. If
- NO MAXVALUE is specified, the defaults are 2^63-1 and -1 for
- ascending and descending sequences, respectively, will be used. If
- neither option is specified, the current maximum value will be
- maintained.
+ The optional clause MAXVALUE
+ maxvalue determines
+ the maximum value for the sequence. If NO
+ MAXVALUE is specified, the defaults of
+ 2^63-1 and -1 for ascending and descending sequences,
+ respectively, will be used. If neither option is specified,
+ the current maximum value will be maintained.
start
- The optional RESTART WITH
- start
- clause changes the current value of the sequence.
+ The optional clause RESTART WITH
+ start changes the
+ current value of the sequence.
cache
- The CACHE cache option
- enables sequence numbers to be preallocated
- and stored in memory for faster access. The minimum
- value is 1 (only one value can be generated at a time, i.e., no cache).
- If unspecified, the old cache value will be maintained.
+ The clause CACHE
+ cache enables
+ sequence numbers to be preallocated and stored in memory for
+ faster access. The minimum value is 1 (only one value can be
+ generated at a time, i.e., no cache). If unspecified, the old
+ cache value will be maintained.
CYCLE
- The optional CYCLE key word may be used to enable
- the sequence to wrap around when the
- maxvalue or
- minvalue has been
- reached by
- an ascending or descending sequence respectively. If the limit is
- reached, the next number generated will be the
- minvalue or
- maxvalue ,
- respectively.
+ The optional CYCLE key word may be used to enable
+ the sequence to wrap around when the
+ maxvalue or
+ minvalue has been reached by an
+ ascending or descending sequence, respectively. If the limit
+ is reached, the next number generated will be the
+ minvalue or
+ maxvalue, respectively.
-
- NO CYCLE
-
- If the optional NO CYCLE keyword is specified, any
- calls to nextval after the sequence has reached
- its maximum value will return an error. If neither
- CYCLE or NO CYCLE are specified,
- the old cycle behaviour will be maintained.
-
-
-
+
+ NO CYCLE
+
+ If the optional NO CYCLE key word is
+ specified, any calls to nextval after the
+ sequence has reached its maximum value will return an error.
+ If neither CYCLE nor NO
+ CYCLE is specified, the old cycle behavior will be
+ maintained.
+
+
+
Restart a sequence called serial, at 105:
-
ALTER SEQUENCE serial RESTART WITH 105;
-
+
+
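 The clauses described above can also be combined in one statement; an
 illustrative sketch:
ALTER SEQUENCE serial
    INCREMENT BY 2
    MINVALUE 101 MAXVALUE 1001
    RESTART WITH 105
    CACHE 20
    CYCLE;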
The LIKE clause specifies a table from which
- the new table automatically inherits all column names, their datatypes, and
- NOT NULL constraints.
+ the new table automatically inherits all column names, their data types, and
+ not-null constraints.
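 A minimal sketch, assuming an existing table films:
CREATE TABLE films_recent (LIKE films);
 The new table starts out with the same column definitions as films but
 contains no data.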
Unlike INHERITS, the new table and inherited table
representation. If this function is not supplied, the type cannot
participate in binary input. The binary representation should be
chosen to be cheap to convert to internal form, while being reasonably
- portable. (For example, the standard integer datatypes use network
+ portable. (For example, the standard integer data types use network
byte order as the external binary representation, while the internal
representation is in the machine's native byte order.) The receive
function should perform adequate checking to ensure that the value is
The receive function may be declared as taking one argument of type
internal , or two arguments of types internal
and oid . It must return a value of the data type itself.
- (The first argument is a pointer to a StringInfo buffer
+ (The first argument is a pointer to a StringInfo buffer
holding the received byte string; the optional second argument is the
element type in case this is an array type.) Similarly, the optional
send_function converts
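 At the SQL level, a receive function might be declared like this (a
 sketch; complex_recv and the library file name are placeholders):
CREATE FUNCTION complex_recv(internal)
    RETURNS complex
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT;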
CREATE VIEW defines a view of a query. The view
- is not physically materialized. Instead, the query is run everytime
+ is not physically materialized. Instead, the query is run every time
the view is referenced in a query.
CREATE VIEW vista AS SELECT 'Hello World';
is bad form in two ways: the column name defaults to ?column?>,
- and the column datatype defaults to unknown>. If you want a
+ and the column data type defaults to unknown>. If you want a
string literal in a view's result, use something like
CREATE VIEW vista AS SELECT text 'Hello World' AS hello;
shutdown is indicated by removal of the
PID
file. For starting up, a successful psql -l
indicates success. pg_ctl will attempt to
- use the proper port for psql. If the environment variable
- PGPORT exists, that is used. Otherwise, it will see if a port
+ use the proper port for psql>. If the environment variable
+ PGPORT exists, that is used. Otherwise, it will see if a port
has been set in the postgresql.conf file.
If neither of those is used, it will use the default port that
PostgreSQL was compiled with
processed in a single transaction, unless there are explicit
BEGIN/COMMIT commands included in the string to divide it into
multiple transactions. This is different from the behavior when
- the same string is fed to psql's standard input.
+ the same string is fed to psql>'s standard input.
name may be specified in the USING> clause.
ASC> is usually equivalent to USING <> and
DESC> is usually equivalent to USING >>.
- (But the creator of a user-defined datatype can define exactly what the
+ (But the creator of a user-defined data type can define exactly what the
default sort ordering is, and it might correspond to operators with other
names.)
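 For example, these two queries normally sort identically (products is a
 hypothetical table):
SELECT * FROM products ORDER BY price DESC;
SELECT * FROM products ORDER BY price USING >;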
MOVE/FETCH now returns the actual number of rows moved/fetched, or zero
if at the beginning/end of the cursor
- Prior releases would return the tuple count passed to the
+ Prior releases would return the row count passed to the
command, not the actual number of rows FETCHed or MOVEd.
Disable LIMIT #,# syntax; now only LIMIT # OFFSET # supported (Bruce)
Increase identifier length to 63 (Neil, Bruce)
UNION fixes for merging >= 3 columns of different lengths (Tom)
-Add DEFAULT keyword to INSERT, e.g., INSERT ... (..., DEFAULT, ...) (Rod)
+Add DEFAULT key word to INSERT, e.g., INSERT ... (..., DEFAULT, ...) (Rod)
Allow views to have default values using ALTER COLUMN ... SET DEFAULT (Neil)
Fail on INSERTs with column lists that don't supply all column values, e.g., INSERT INTO tab (col1, col2) VALUES ('val1'); (Rod)
Fix for join aliases (Tom)
Multibytes fixes (Tom)
Unicode fixes (Tatsuo)
Optimizer improvements (Tom)
-Fix for whole tuples in functions (Tom)
+Fix for whole rows in functions (Tom)
Fix for pg_ctl and option strings with spaces (Peter E)
ODBC fixes (Hiroshi)
EXTRACT can now take string argument (Thomas)
Allow LIMIT in VIEW (Tom)
Require cursor FETCH to honor LIMIT (Tom)
Allow PRIMARY/FOREIGN Key definitions on inherited columns (Stephan)
-Allow ORDER BY, LIMIT in sub-selects (Tom)
+Allow ORDER BY, LIMIT in subqueries (Tom)
Allow UNION in CREATE RULE (Tom)
Make ALTER/DROP TABLE rollback-able (Vadim, Tom)
Store initdb collation in pg_control so collation cannot be changed (Tom)
New warning code about auto-created table alias entries (Bruce)
Overhaul initdb process (Tom, Peter E)
Overhaul of inherited tables; inherited tables now accessed by default;
- new ONLY keyword prevents it (Chris Bitmead, Tom)
+ new ONLY key word prevents it (Chris Bitmead, Tom)
ODBC cleanups/improvements (Nick Gorham, Stephan Szabo, Zoltan Kovacs,
Michael Fork)
Allow renaming of temp tables (Tom)
pg_dumpall uses CREATE USER or CREATE GROUP rather than using COPY (Peter E)
Overhaul pg_dump (Philip Warner)
Allow pg_hba.conf secondary password file to specify only username (Peter E)
-Allow TEMPORARY or TEMP keyword when creating temporary tables (Bruce)
+Allow TEMPORARY or TEMP key word when creating temporary tables (Bruce)
New memory leak checker (Karel)
New SET SESSION CHARACTERISTICS (Thomas)
Allow nested block comments (Thomas)
Fix TRUNCATE failure on relations with indexes (Tom)
Avoid database-wide restart on write error (Hiroshi)
Fix nodeMaterial to honor chgParam by recomputing its output (Tom)
-Fix VACUUM problem with moving chain of update tuples when source and
- destination of a tuple lie on the same page (Tom)
+Fix VACUUM problem with moving chain of update row versions when source
+ and destination of a row version lie on the same page (Tom)
Fix user.c CommandCounterIncrement (Tom)
Fix for AM/PM boundary problem in to_char() (Karel Zak)
Fix TIME aggregate handling (Tom)
Print current line number when COPY FROM fails (Massimo)
Recognize POSIX time zone e.g. "PST+8" and "GMT-8" (Thomas)
Add DEC as synonym for DECIMAL (Thomas)
-Add SESSION_USER as SQL92 keyword, same as CURRENT_USER (Thomas)
+Add SESSION_USER as SQL92 key word, same as CURRENT_USER (Thomas)
Implement SQL92 column aliases (aka correlation names) (Thomas)
Implement SQL92 join syntax (Thomas)
Make INTERVAL reserved word allowed as a column identifier (Thomas)
New expression subtree code(Tom)
Avoid disk writes for read-only transactions(Vadim)
Fix for removal of temp tables if last transaction was aborted(Bruce)
-Fix to prevent too large tuple from being created(Bruce)
+Fix to prevent too large a row from being created(Bruce)
plpgsql fixes
Allow port numbers 32k - 64k(Bruce)
Add ^ precedence(Bruce)
Port to NetBSD/sun3(Mr. Mutsuki Nakajima)
Port to NetBSD/macppc(Toshimi Aoki)
Fix for tcl/tk configuration(Vince)
-Removed CURRENT keyword for rule queries(Jan)
+Removed CURRENT key word for rule queries(Jan)
NT dynamic loading now works(Daniel Horak)
Add ARM32 support(Andrew McMurry)
Better support for HP-UX 11 and UnixWare
New DECLARE and FETCH feature(Thomas)
libpq's internal structures now not exported(Tom)
Allow up to 8 key indexes(Bruce)
-Remove ARCHIVE keyword, that is no longer used(Thomas)
+Remove ARCHIVE key word, that is no longer used(Thomas)
pg_dump -n flag to suppress quotes around identifiers
disable system columns for views(Jan)
new INET and CIDR types for network addresses(TomH, Paul)
Prevent \do from wrapping(Bruce)
Remove duplicate Russian character set entries
Sunos4 cleanup
-Allow optional TABLE keyword in LOCK and SELECT INTO(Thomas)
+Allow optional TABLE key word in LOCK and SELECT INTO(Thomas)
CREATE SEQUENCE options to allow a negative integer(Thomas)
Add "PASSWORD" as an allowed column identifier(Thomas)
Add checks for UNION target fields(Bruce)
Enhancements
------------
-Subselects with EXISTS, IN, ALL, ANY keywords (Vadim, Bruce, Thomas)
+Subselects with EXISTS, IN, ALL, ANY key words (Vadim, Bruce, Thomas)
New User Manual(Thomas, others)
Speedup by inlining some frequently-called functions
Real deadlock detection, no more timeouts(Bruce)
A minor patch for HP/UX 10 vs 9(Stan)
New pg_attribute.atttypmod for type-specific info like varchar length(Bruce)
UnixWare patches(Billy)
-New i386 'lock' for spin lock asm(Billy)
+New i386 'lock' for spinlock asm(Billy)
Support for multiplexed backends is removed
Start an OpenBSD port
Start an AUX port
Catch non-functional delete attempts(Vadim)
Change time function names to be more consistent(Michael Reifenberg)
Check for zero divides(Michael Reifenberg)
-Fix very old bug which made tuples changed/inserted by a command
+Fix very old bug which made rows changed/inserted by a command
 visible to the command itself (so we had multiple update of
- updated tuples, etc)(Vadim)
+ updated rows, etc.)(Vadim)
Fix for SELECT null, 'fail' FROM pg_am (Patrick)
SELECT NULL as EMPTY_FIELD now allowed(Patrick)
Remove unneeded signal stuff from contrib/pginterface
-Fix OR (where x != 1 or x isnull didn't return tuples with x NULL) (Vadim)
+Fix OR (where x != 1 or x isnull didn't return rows with x NULL) (Vadim)
Fix time_cmp function (Vadim)
Fix handling of functions with non-attribute first argument in
WHERE clauses (Vadim)
Allow use of parameters in target list having aggregates in functions(Vadim)
Added JDBC driver as an interface(Adrian & Peter)
pg_password utility
-Return number of tuples inserted/affected by INSERT/UPDATE/DELETE etc.(Vadim)
+Return number of rows inserted/affected by INSERT/UPDATE/DELETE etc.(Vadim)
Triggers implemented with CREATE TRIGGER (SQL3)(Vadim)
SPI (Server Programming Interface) allows execution of queries inside
C-functions (Vadim)
fix file manager memory leaks, cleanups (Vadim, Massimo)
fix storage manager memory leaks (Vadim)
fix btree duplicates handling (Vadim)
-fix deleted tuples reincarnation caused by vacuum (Vadim)
+fix deleted rows reincarnation caused by vacuum (Vadim)
fix SELECT varchar()/char() INTO TABLE which made zero-length fields(Bruce)
many psql, pg_dump, and libpq memory leaks fixed using Purify (Igor)
* added PQdisplayTuples() to libpq and changed monitor and psql to use it
* added NeXT port (requires SysVIPC implementation)
* added CAST .. AS ... syntax
- * added ASC and DESC keywords
+ * added ASC and DESC key words
* added 'internal' as a possible language for CREATE FUNCTION
internal functions are C functions which have been statically linked
into the postgres backend.
Incompatibilities:
* date formats have to be MM-DD-YYYY (or DD-MM-YYYY if you're using
EUROPEAN STYLE). This follows SQL-92 specs.
- * "delimiters" is now a keyword
+ * "delimiters" is now a key word
Enhancements:
* sql LIKE syntax has been added
(Also, aggregates can now be overloaded, i.e. you can define your
own MAX aggregate to take in a user-defined type.)
* CHANGE ACL removed. GRANT/REVOKE syntax added.
- - Privileges can be given to a group using the "GROUP" keyword.
+ - Privileges can be given to a group using the "GROUP" key word.
For example:
GRANT SELECT ON foobar TO GROUP my_group;
- The keyword 'PUBLIC' is also supported to mean all users.
+ The key word 'PUBLIC' is also supported to mean all users.
Privileges can only be granted or revoked to one user or group
at a time.
* the bug where aggregates of empty tables were not run has been fixed. Now,
aggregates run on empty tables will return the initial conditions of the
aggregates. Thus, COUNT of an empty table will now properly return 0.
- MAX/MIN of an empty table will return a tuple of value NULL.
+ MAX/MIN of an empty table will return a row of value NULL.
* allow the use of \; inside the monitor
* the LISTEN/NOTIFY asynchronous notification mechanism now works
* NOTIFY in rule action bodies now works
libpgtcl changes:
* The -oid option has been added to the "pg_result" tcl command.
- pg_result -oid returns oid of the last tuple inserted. If the
+ pg_result -oid returns oid of the last row inserted. If the
last command was not an INSERT, then pg_result -oid returns "".
* the large object interface is available as pg_lo* tcl commands:
pg_lo_open, pg_lo_close, pg_lo_creat, etc.
-
+
The Rule System
the stage. Old table rows aren't overwritten, and this
is why ROLLBACK is fast. In an UPDATE ,
the new result row is inserted into the table (after stripping the
- CTID>) and in the tuple header of the old row, which the
+ CTID>) and in the row header of the old row, which the
CTID> pointed to, the cmax> and
xmax> entries are set to the current command counter
and current transaction ID. Thus the old row is hidden, and after
Specifies the maximum amount of memory to be used by
VACUUM to keep track of to-be-reclaimed
- tuples. The value is specified in kilobytes, and defaults to
- 8192 kilobytes. Larger settings may improve the speed of
- vacuuming large tables that have many deleted tuples.
+ rows. The value is specified in kilobytes, and defaults to
+ 8192 kB. Larger settings may improve the speed of
+ vacuuming large tables that have many deleted rows.
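 A minimal sketch, assuming this is the vacuum_mem parameter (the value
 is illustrative):
SET vacuum_mem = 16384;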
Sets the query planner's estimate of the cost of processing
- each tuple during a query. This is measured as a fraction of
+ each row during a query. This is measured as a fraction of
the cost of a sequential page fetch. The default is 0.01.
Sets the query planner's estimate of the cost of processing
- each index tuple during an index scan. This is measured as a
+ each index row during an index scan. This is measured as a
fraction of the cost of a sequential page fetch. The default
is 0.001.
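 Both estimates can be adjusted at run time; an illustrative sketch using
 the cpu_tuple_cost and cpu_index_tuple_cost parameters:
SET cpu_tuple_cost = 0.02;
SET cpu_index_tuple_cost = 0.002;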
where the operator token follows the syntax
rules of , or is one of the
- keywords AND , OR , and
+ key words AND , OR , and
NOT , or is a qualified operator name
OPERATOR(>schema>.>operatorname>)>
An array constructor> is an expression that builds an
array value from values for its member elements. A simple array
constructor
- consists of the keyword ARRAY , a left square bracket
+ consists of the key word ARRAY , a left square bracket
[>, one or more expressions (separated by commas) for the
array element values, and finally a right square bracket ]>.
For example,
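SELECT ARRAY[1,2,3+4];
 {1,2,7}
(1 row)
 The element expressions are evaluated before the array value is built.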
Multidimensional array values can be built by nesting array
constructors.
- In the inner constructors, the keyword ARRAY may
+ In the inner constructors, the key word ARRAY may
be omitted. For example, these produce the same result:
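SELECT ARRAY[ARRAY[1,2], ARRAY[3,4]];
 {{1,2},{3,4}}
(1 row)

SELECT ARRAY[[1,2],[3,4]];
 {{1,2},{3,4}}
(1 row)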
It is also possible to construct an array from the results of a
subquery. In this form, the array constructor is written with the
- keyword ARRAY followed by a parenthesized (not
+ key word ARRAY followed by a parenthesized (not
bracketed) subquery. For example:
SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
{2011,1954,1948,1952,1951,1244,1950,2005,1949,1953,2006,31}
(1 row)
- The sub-select must return a single column. The
+ The subquery must return a single column. The
resulting one-dimensional array will have an element for each row in the
- sub-select result, with an element type matching that of the sub-select 's
+ subquery result, with an element type matching that of the subquery 's
output column.
Run through all candidates and keep those that accept preferred types (of the
-input datatype's type category) at the most positions where type conversion
+input data type's type category) at the most positions where type conversion
will be required.
Keep all candidates if none accept preferred types.
If only one candidate remains, use it; else continue to the next step.
of its arguments and the type it is expected to return. The routines are
called get_fn_expr_rettype(FmgrInfo *flinfo)> and
get_fn_expr_argtype(FmgrInfo *flinfo, int argnum)>.
- They return the result or argument type OID, or InvalidOid if the
+ They return the result or argument type OID, or InvalidOid if the
information is not available.
The structure flinfo> is normally accessed as
fcinfo->flinfo>. The parameter argnum>
The least error-prone way to define a related set of comparison operators
- is to write the btree comparison support function first, and then write the
+ is to write the B-tree comparison support function first, and then write the
other functions as one-line wrappers around the support function. This
reduces the odds of getting inconsistent results for corner cases.
Following this approach, we first write
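-- a sketch of that first support function, following the complex
-- example used elsewhere; the C-level body is assumed to return a
-- negative, zero, or positive integer:
CREATE FUNCTION complex_abs_cmp(complex, complex)
    RETURNS integer
    AS 'filename'
    LANGUAGE C IMMUTABLE STRICT;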
PostgreSQL uses operator classes to infer the
properties of operators in more ways than just whether they can be used
with indexes. Therefore, you might want to create operator classes
- even if you have no intention of indexing any columns of your datatype.
+ even if you have no intention of indexing any columns of your data type.
In particular, there are SQL features such as ORDER BY> and
DISTINCT> that require comparison and sorting of values.
- To implement these features on a user-defined datatype,
+ To implement these features on a user-defined data type,
PostgreSQL looks for the default B-tree operator
- class for the datatype. The equals> member of this operator
+ class for the data type. The equals> member of this operator
class defines the system's notion of equality of values for
GROUP BY> and DISTINCT>, and the sort ordering
imposed by the operator class defines the default ORDER BY>
- If there is no default B-tree operator class for a datatype, the system
+ If there is no default B-tree operator class for a data type, the system
will look for a default hash operator class. But since that kind of
operator class only provides equality, in practice it is only enough
to support array equality.
- When there is no default operator class for a datatype, you will get
+ When there is no default operator class for a data type, you will get
errors like could not identify an ordering operator> if you
- try to use these SQL features with the datatype.
+ try to use these SQL features with the data type.
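 A minimal sketch of such a default B-tree operator class, continuing the
 complex example (the comparison operators and complex_abs_cmp are
 assumed to exist already):
CREATE OPERATOR CLASS complex_abs_ops
    DEFAULT FOR TYPE complex USING btree AS
        OPERATOR 1 < ,
        OPERATOR 2 <= ,
        OPERATOR 3 = ,
        OPERATOR 4 >= ,
        OPERATOR 5 > ,
        FUNCTION 1 complex_abs_cmp(complex, complex);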
a WHERE clause like tab1.x = tab2.y>, where tab1.x>
and tab2.y> are of a user-defined type, and suppose that
tab2.y> is indexed. The optimizer cannot generate an
- indexscan unless it can determine how to flip the clause around to
- tab2.y = tab1.x>, because the indexscan machinery expects
+ index scan unless it can determine how to flip the clause around to
+ tab2.y = tab1.x>, because the index-scan machinery expects
to see the indexed column on the left of the operator it is given.
PostgreSQL will
not> simply
assume that this is a valid transformation --- the creator of the
the operator, since of course the referencing operator class couldn't
exist yet. But attempts to use the operator in hash joins will fail
at runtime if no such operator class exists. The system needs the
- operator class to find the datatype-specific hash function for the
- operator's input datatype. Of course, you must also supply a suitable
+ operator class to find the data-type-specific hash function for the
+ operator's input data type. Of course, you must also supply a suitable
hash function before you can create the operator class.
- The function underlying a hashjoinable operator must be marked
+ The function underlying a hash-joinable operator must be marked
immutable or stable. If it is volatile, the system will never
attempt to use the operator for a hash join.
- If a hashjoinable operator has an underlying function that is marked
+ If a hash-joinable operator has an underlying function that is marked
strict, the
- function must also be complete: that is, it should return TRUE or
- FALSE, never NULL, for any two non-NULL inputs. If this rule is
+ function must also be complete: that is, it should return true or
+ false, never null, for any two nonnull inputs. If this rule is
not followed, hash-optimization of IN> operations may
generate wrong results. (Specifically, IN> might return
- FALSE where the correct answer per spec would be NULL ; or it might
- yield an error complaining that it wasn't prepared for a NULL result.)
+ false where the correct answer according to the standard would be
+ null; or it might yield an error complaining that it wasn't prepared
+ for a null result.)
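 A minimal sketch of declaring an equality operator as hash-joinable
 (the type complex and the function complex_eq are assumptions):
CREATE OPERATOR = (
    leftarg = complex,
    rightarg = complex,
    procedure = complex_eq,
    commutator = = ,
    hashes
);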
- The function underlying a mergejoinable operator must be marked
+ The function underlying a merge-joinable operator must be marked
immutable or stable. If it is volatile, the system will never
attempt to use the operator for a merge join.
Optionally, a user-defined type can provide binary input and output
routines. Binary I/O is normally faster but less portable than textual
I/O. As with textual I/O, it is up to you to define exactly what the
- external binary representation is. Most of the built-in datatypes
+ external binary representation is. Most of the built-in data types
try to provide a machine-independent binary representation. For
complex , we will piggy-back on the binary I/O converters
for type float8>:
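 The C-level converters themselves are not shown here; once they exist,
 they are attached when the type is created. A sketch, with names
 following the complex example:
CREATE TYPE complex (
    internallength = 16,
    input = complex_in,
    output = complex_out,
    receive = complex_recv,
    send = complex_send
);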
the total length in bytes of the datum (including itself). The C
functions operating on the data type must be careful to unpack any
toasted values they are handed (this detail can normally be hidden in the
- GETARG macros). Then,
+ GETARG macros). Then,
when running the CREATE TYPE command, specify the
internal length as variable> and select the appropriate
storage option.
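 A minimal sketch of registering such a variable-length type (all names
 hypothetical):
CREATE TYPE blobby (
    internallength = variable,
    input = blobby_in,
    output = blobby_out,
    storage = extended
);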