- linkend="sql-prepare-transaction">).
+ linkend="sql-prepare-transaction"/>).
Setting this parameter to zero (which is the default)
disables the prepared-transaction feature.
This parameter can only be set at server start.
should be set to zero to prevent accidental creation of prepared
transactions. If you are using prepared transactions, you will
probably want max_prepared_transactions to be at
- least as large as , so that every
+ least as large as />, so that every
session can have a prepared transaction pending.
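For example, the sizing advice above can be expressed as a postgresql.conf fragment (the values are illustrative, assuming a hypothetical max_connections of 100, not recommendations):

```ini
# Illustrative sketch: allow every session to have one prepared
# transaction pending by matching the two limits.
max_connections = 100
max_prepared_transactions = 100   # can only be set at server start
```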
Note that when autovacuum runs, up to
- times this memory
+ /> times this memory
may be allocated, so be careful not to set the default value
too high. It may be useful to control for this by separately
- setting .
+ setting />.
Specifies the maximum amount of memory to be used by each
autovacuum worker process. It defaults to -1, indicating that
- the value of should
+ the value of /> should
be used instead. The setting has no effect on the behavior of
VACUUM when run in other contexts.
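A minimal sketch of controlling autovacuum memory separately, as suggested above (the values are illustrative assumptions, not recommendations):

```ini
# Illustrative: cap per-worker autovacuum memory below the limit used
# by manually run maintenance commands.
maintenance_work_mem = 1GB        # used by manual VACUUM, CREATE INDEX, etc.
autovacuum_work_mem = 256MB       # -1 (the default) falls back to maintenance_work_mem
```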
Cost-based Vacuum Delay
- During the execution of
- and
+ During the execution of />
+ and />
commands, the system maintains an
internal counter that keeps track of the estimated cost of the
various I/O operations that are performed. When the accumulated
the OS writes data back in larger batches in the background. Often
that will result in greatly reduced transaction latency, but there
also are some cases, especially with workloads that are bigger than
- , but smaller than the OS's page
+ />, but smaller than the OS's page
cache, where performance might degrade. This setting may have no
effect on some platforms. The valid range is between
0, which disables forced writeback, and
The default is 1 on supported systems, otherwise 0. This value can
be overridden for tables in a particular tablespace by setting the
tablespace parameter of the same name (see
- ).
+ />).
When changing this value, consider also adjusting
- and
- .
+ /> and
+ />.
Sets the maximum number of workers that can be started by a single
Gather or Gather Merge node.
Parallel workers are taken from the pool of processes established by
- , limited by
- . Note that the requested
+ />, limited by
+ />. Note that the requested
number of workers may not actually be available at run time. If this
occurs, the plan will run with fewer workers than expected, which may
be inefficient. The default value is 2. Setting this value to 0
system as an additional user session. This should be taken into
account when choosing a value for this setting, as well as when
configuring other settings that control resource utilization, such
- as . Resource limits such as
+ as />. Resource limits such as
work_mem are applied individually to each worker,
which means the total utilization may be much higher across all
processes than it would normally be for any single process.
For more information on parallel query, see
- .
+ />.
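The per-Gather worker limit can be experimented with at the session level; a minimal sketch (the table name is hypothetical):

```sql
-- Illustrative: request up to four workers per Gather node, then
-- inspect the plan to see how many the planner actually chose.
SET max_parallel_workers_per_gather = 4;
EXPLAIN SELECT count(*) FROM accounts;
```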
Sets the maximum number of workers that the system can support for
parallel queries. The default value is 8. When increasing or
decreasing this value, consider also adjusting
- .
+ />.
Also, note that a setting for this value which is higher than
- will have no effect,
+ /> will have no effect,
since parallel workers are taken from the pool of worker processes
established by that setting.
checkpoint, or when the OS writes data back in larger batches in the
background. Often that will result in greatly reduced transaction
latency, but there also are some cases, especially with workloads
- that are bigger than , but smaller
+ that are bigger than />, but smaller
than the OS's page cache, where performance might degrade. This
setting may have no effect on some platforms. The valid range is
between 0, which disables forced writeback,
For additional information on tuning these settings,
- see .
+ see />.
In minimal level, WAL-logging of some bulk
operations can be safely skipped, which can make those
- operations much faster (see ).
+ operations much faster (see />).
Operations in which this optimization can be applied include:
CREATE TABLE AS
But minimal WAL does not contain enough information to reconstruct the
data from a base backup and the WAL logs, so replica or
higher must be used to enable WAL archiving
- () and streaming replication.
+ (/>) and streaming replication.
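As a sketch of the trade-off described above (illustrative, not a recommendation):

```ini
# Illustrative: "replica" or higher is required for WAL archiving and
# streaming replication; "minimal" trades that away so some bulk
# operations can skip WAL-logging.
wal_level = replica               # can only be set at server start
```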
In logical level, the same information is logged as
If this parameter is on, the
PostgreSQL server
will try to make sure that updates are physically written to
disk, by issuing fsync() system calls or various
- equivalent methods (see ).
+ equivalent methods (see />).
This ensures that the database cluster can recover to a
consistent state after an operating system or hardware crash.
- In many situations, turning off
+ In many situations, turning off />
for noncritical transactions can provide much of the potential
performance benefit of turning off fsync, without
the attendant risks of data corruption.
fsync can only be set in the postgresql.conf
file or on the server command line.
If you turn this parameter off, also consider turning off
- .
+ />.
is on. When off, there can be a delay between
when success is reported to the client and when the transaction is
really guaranteed to be safe against a server crash. (The maximum
- delay is three times .) Unlike
- , setting this parameter to off
+ delay is three times />.) Unlike
+ />, setting this parameter to off
does not create any risk of database inconsistency: an operating
system or database crash might
result in some recent allegedly-committed transactions being lost, but
been aborted cleanly. So, turning synchronous_commit off
can be a useful alternative when performance is more important than
exact certainty about the durability of a transaction. For more
- discussion see .
+ discussion see />.
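Turning off synchronous commit for a single noncritical transaction, as described above, can be sketched as:

```sql
-- Trade durability for latency on one transaction only, without
-- touching fsync or the server-wide default.
BEGIN;
SET LOCAL synchronous_commit TO off;
-- ... noncritical work here ...
COMMIT;
```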
- If is non-empty, this
+ If /> is non-empty, this
parameter also controls whether or not transaction commits will wait
for their WAL records to be replicated to the standby server(s).
When set to on, commits will wait until replies
necessary to change this setting or other aspects of your system
configuration in order to create a crash-safe configuration or
achieve optimal performance.
- These aspects are discussed in .
+ These aspects are discussed in />.
This parameter can only be set in the postgresql.conf
file or on the server command line.
Turning off this parameter does not affect use of
WAL archiving for point-in-time recovery (PITR)
- (see ).
+ (see />).
When this parameter is
on, the
PostgreSQL
server compresses a full page image written to WAL when
- is on or during a base backup.
+ /> is on or during a base backup.
A compressed page image will be decompressed during WAL replay.
The default value is off.
Only superusers can change this setting.
The amount of shared memory used for WAL data that has not yet been
written to disk. The default setting of -1 selects a size equal to
- 1/32nd (about 3%) of , but not less
+ 1/32nd (about 3%) of />, but not less
than 64kB nor more than the size of one WAL
segment, typically 16MB. This value can be set
manually if the automatic choice is too large or too small,
checkpoint, or when the OS writes data back in larger batches in the
background. Often that will result in greatly reduced transaction
latency, but there also are some cases, especially with workloads
- that are bigger than , but smaller
+ that are bigger than />, but smaller
than the OS's page cache, where performance might degrade. This
setting may have no effect on some platforms. The valid range is
between 0, which disables forced writeback,
When archive_mode is enabled, completed WAL segments
are sent to archive storage by setting
- . In addition to off ,
+ />. In addition to off,
to disable, there are two modes: on, and
always. During normal operation, there is no
difference between the two modes, but when set to always
the WAL archiver is enabled also during archive recovery or standby
mode. In always mode, all files restored from the archive
or streamed with streaming replication will be archived (again). See
- for details.
+ /> for details.
archive_mode and archive_command are
Use %% to embed an actual % character in the
command. It is important for the command to return a zero
exit status only if it succeeds. For more information see
- .
+ />.
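Putting the rules above together (the archive directory path is hypothetical):

```ini
# Illustrative: copy each completed WAL segment to an archive directory.
# %p expands to the segment's path, %f to its file name, and %% embeds a
# literal %. The command must return a zero exit status only on success;
# the "test ! -f" guard refuses to overwrite an existing archived file.
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
```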
This parameter can only be set in the postgresql.conf
- The is only invoked for
+ The /> is only invoked for
completed WAL segments. Hence, if your server generates little WAL
traffic (or has slack periods where it does so), there could be a
long delay between the completion of a transaction and its safe
These settings control the behavior of the built-in
streaming replication feature (see
- ). Servers will be either a
+ />). Servers will be either a
master or a standby server. Masters can send data, while standbys
are always receivers of replicated data. When cascading replication
- (see ) is used, Standby server(s)
+ (see />) is used, Standby server(s)
can also be senders, as well as receivers.
Parameters are mainly for Sending and Standby servers, though some
parameters have meaning only on the Master server. Settings may vary
processes). The default is 10. The value 0 means replication is
disabled. WAL sender processes count towards the total number
of connections, so the parameter cannot be set higher than
- . Abrupt streaming client
+ />. Abrupt streaming client
disconnection might cause an orphaned connection slot until
a timeout is reached, so this parameter should be set slightly
higher than the maximum number of expected clients so disconnected
Specifies the maximum number of replication slots
- (see ) that the server
+ (see />) that the server
can support. The default is 10. This parameter can only be set at
server start.
wal_level must be set
These parameters can be set on the master/primary server that is
to send replication data to one or more standby servers.
Note that in addition to these parameters,
- must be set appropriately on the master
+ /> must be set appropriately on the master
server, and optionally WAL archiving can be enabled as
- well (see ).
+ well (see />).
The values of these parameters on standby servers are irrelevant,
although you may wish to set them there in preparation for the
possibility of a standby becoming the master.
Specifies a list of standby servers that can support
synchronous replication , as described in
- .
+ />.
There will be one or more active synchronous standbys;
transactions waiting for commit will be allowed to proceed after
these standby servers confirm receipt of their data.
replication. This is the default configuration. Even when
synchronous replication is enabled, individual transactions can be
configured not to wait for replication by setting the
- parameter to
+ /> parameter to
local or off.
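A minimal sketch of naming synchronous standby candidates (the names are hypothetical application_name values reported by the standbys):

```ini
# Illustrative: either of these standbys may serve as the synchronous
# standby; transactions wait for confirmation from the active one.
synchronous_standby_names = 'standby1, standby2'
```

An individual transaction can still opt out with `SET synchronous_commit = local;` as described above.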
removed as soon as possible, that is, as soon as they are no longer
visible to any open transaction. You may wish to set this to a
non-zero value on a primary server that is supporting hot standby
- servers, as described in . This allows
+ servers, as described in />. This allows
more time for queries on the standby to complete without incurring
conflicts due to early cleanup of rows. However, since the value
is measured in terms of number of write transactions occurring on the
Specifies whether or not you can connect and run queries during
- recovery, as described in .
+ recovery, as described in />.
The default value is on.
This parameter can only be set at server start. It only has effect
during archive recovery or in standby mode.
When Hot Standby is active, this parameter determines how long the
standby server should wait before canceling standby queries that
conflict with about-to-be-applied WAL entries, as described in
- .
+ />.
max_standby_archive_delay applies when WAL data is
being read from WAL archive (and is therefore not current).
The default is 30 seconds. Units are milliseconds if not specified.
When Hot Standby is active, this parameter determines how long the
standby server should wait before canceling standby queries that
conflict with about-to-be-applied WAL entries, as described in
- .
+ />.
max_standby_streaming_delay applies when WAL data is
being received via streaming replication.
The default is 30 seconds. Units are milliseconds if not specified.
choose a different plan.
Better ways to improve the quality of the
plans chosen by the optimizer include adjusting the planner cost
- constants (see ),
- running manually, increasing
+ constants (see />),
+ running /> manually, increasing
the value of the
- linkend="guc-default-statistics-target"> configuration parameter,
+ linkend="guc-default-statistics-target"/> configuration parameter,
and increasing the amount of statistics collected for
specific columns using ALTER TABLE SET
STATISTICS.
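Raising per-column statistics, as mentioned above, can be sketched as follows (the table and column names are hypothetical):

```sql
-- Illustrative: collect more detailed statistics for one column the
-- planner misestimates, then refresh them.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;
```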
Enables or disables the query planner's use of index-only-scan plan
- types (see ).
+ types (see />).
The default is on.
that is part of a series of sequential fetches. The default is 1.0.
This value can be overridden for tables and indexes in a particular
tablespace by setting the tablespace parameter of the same name
- (see ).
+ (see />).
non-sequentially-fetched disk page. The default is 4.0.
This value can be overridden for tables and indexes in a particular
tablespace by setting the tablespace parameter of the same name
- (see ).
+ (see />).
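Overriding the page-cost parameters per tablespace, as noted above, might look like this (the tablespace name is hypothetical):

```sql
-- Illustrative: tell the planner that random reads are nearly as cheap
-- as sequential reads on an SSD-backed tablespace.
ALTER TABLESPACE ssd_space SET (random_page_cost = 1.1);
```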
complex queries (those joining many relations), at the cost of producing
plans that are sometimes inferior to those found by the normal
exhaustive-search algorithm.
- For more information see .
+ For more information see />.
do ANALYZE , but might improve the quality of the
planner's estimates. The default is 100. For more information
on the use of statistics by the
PostgreSQL
- query planner, refer to .
+ query planner, refer to />.
- Refer to for
+ Refer to /> for
more information on using constraint exclusion and partitioning.
resulting FROM list would have no more than
this many items. Smaller values reduce planning time but might
yield inferior query plans. The default is eight.
- For more information see .
+ For more information see />.
- Setting this value to or more
+ Setting this value to /> or more
may trigger use of the GEQO planner, resulting in non-optimal
- plans. See .
+ plans. See />.
the optimal join order, advanced users can elect to
temporarily set this variable to 1, and then specify the join
order they desire explicitly.
- For more information see .
+ For more information see />.
- Setting this value to or more
+ Setting this value to /> or more
may trigger use of the GEQO planner, resulting in non-optimal
- plans. See .
+ plans. See />.
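The technique of fixing the join order manually, described above, can be sketched as (table names are hypothetical):

```sql
-- Illustrative: with the collapse limit at 1, joins are planned exactly
-- in the order written.
SET join_collapse_limit = 1;
SELECT *
FROM a
JOIN b ON a.id = b.a_id
JOIN c ON b.id = c.b_id;
```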
log entries are output in comma-separated
value (CSV) format, which is convenient for
loading logs into programs.
- See for details.
- must be enabled to generate
+ See /> for details.
+ /> must be enabled to generate
CSV-format log output.
log_destination .
PostgreSQL
can log to
syslog facilities
LOCAL0 through
LOCAL7 (see
- linkend="guc-syslog-facility">), but the default
+ linkend="guc-syslog-facility"/>), but the default
syslog configuration on most platforms
will discard all such messages. You will need to add something like:
register an event source and its library with the operating
system so that the Windows Event Viewer can display event
log messages cleanly.
- See for details.
+ See /> for details.
file names. (Note that if there are
any time-zone-dependent % -escapes, the computation
is done in the zone specified
- by .)
+ by />.)
The supported % -escapes are similar to those
listed in the Open Group's
url="http://pubs.opengroup.org/onlinepubs/009695399/functions/strftime.html">strftime
server owner can read or write the log files. The other commonly
useful setting is 0640 , allowing members of the owner's
group to read the files. Note however that to make use of such a
- setting, you'll need to alter to
+ setting, you'll need to alter /> to
store the files somewhere outside the cluster data directory. In
any case, it's unwise to make the log files world-readable, since
they might contain sensitive data.
When using this option together with
- ,
+ />,
the text of statements that are logged because of
log_statement will not be repeated in the
duration log message.
If you are not using
syslog , it is recommended
that you log the PID or session ID using
-
+ />
so that you can link the statement message to the later
duration message using the process ID or session ID.
- explains the message
+ /> explains the message
severity levels used by
PostgreSQL . If logging output
is sent to syslog or Windows'
eventlog , the severity levels are translated
It is typically set by an application upon connection to the server.
The name will be displayed in the pg_stat_activity view
and included in CSV log entries. It can also be included in regular
- log entries via the parameter.
+ log entries via the /> parameter.
Only printable ASCII characters may be used in the
application_name value. Other characters will be
replaced with question marks (?).
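Including the application name in regular log entries, as mentioned above, can be sketched with standard log_line_prefix escapes (%a is the application name, %p the process ID):

```ini
# Illustrative: prefix every log line with application name and PID.
log_line_prefix = '%a [%p] '
```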
These messages are emitted at LOG message level, so by
default they will appear in the server log but will not be sent to the
client. You can change that by adjusting
- and/or
- .
+ /> and/or
+ />.
These parameters are off by default.
The difference between setting this option and setting
- to zero is that
+ /> to zero is that
exceeding log_min_duration_statement forces the text of
the query to be logged, but this option doesn't. Thus, if
log_duration is on and
the logging of DETAIL , HINT ,
QUERY , and CONTEXT error information.
VERBOSE output includes the SQLSTATE error
- code (see also ) and the source code file name, function name,
+ code (see also />) and the source code file name, function name,
and line number that generated the error.
Only superusers can change this setting.
Controls whether a log message is produced when a session waits
- longer than to acquire a
+ longer than /> to acquire a
lock. This is useful in determining if lock waits are causing
poor performance. The default is off.
Only superusers can change this setting.
Causes each replication command to be logged in the server log.
- See for more information about
+ See /> for more information about
replication commands. The default value is off.
Only superusers can change this setting.
Sets the time zone used for timestamps written in the server log.
- Unlike , this value is cluster-wide,
+ Unlike />, this value is cluster-wide,
so that all sessions will report timestamps consistently.
The built-in default is GMT, but that is typically
overridden in
postgresql.conf ;
initdb
will install a setting there corresponding to its system environment.
- See for more information.
+ See /> for more information.
This parameter can only be set in the postgresql.conf
file or on the server command line.
These settings control how process titles of server processes are
modified. Process titles are typically viewed using programs like
ps or, on Windows,
Process Explorer .
- See for details.
+ See /> for details.
When statistics collection is enabled, the data that is produced can be
accessed via the pg_stat and
pg_statio family of system views.
- Refer to for more information.
+ Refer to /> for more information.
Enables timing of database I/O calls. This parameter is off by
default, because it will repeatedly query the operating system for
the current time, which may cause significant overhead on some
- platforms. You can use the tool to
+ platforms. You can use the /> tool to
measure the overhead of timing on your system.
I/O timing information is
- displayed in , in the output of
- when the BUFFERS option is
- used, and by . Only superusers can
+ displayed in />, in the output of
+ /> when the BUFFERS option is
+ used, and by />. Only superusers can
change this setting.
These settings control the behavior of the autovacuum
- feature. Refer to for more information.
+ feature. Refer to /> for more information.
Note that many of these settings can be overridden on a per-table
basis; see
- endterm="sql-createtable-storage-parameters-title">.
+ endterm="sql-createtable-storage-parameters-title"/>.
Controls whether the server should run the
autovacuum launcher daemon. This is on by default; however,
- must also be enabled for
+ /> must also be enabled for
autovacuum to work.
This parameter can only be set in the postgresql.conf
file or on the server command line; however, autovacuuming can be
Note that even when this parameter is disabled, the system
will launch autovacuum processes if necessary to
prevent transaction ID wraparound. See
- linkend="vacuum-for-wraparound"> for more information.
+ linkend="vacuum-for-wraparound"/> for more information.
This parameter can only be set at server start, but the setting
can be reduced for individual tables by
changing table storage parameters.
- For more information see .
+ For more information see />.
400 million multixacts.
This parameter can only be set at server start, but the setting can
be reduced for individual tables by changing table storage parameters.
- For more information see .
+ For more information see />.
Specifies the cost delay value that will be used in automatic
VACUUM operations. If -1 is specified, the regular
- value will be used.
+ /> value will be used.
The default value is 20 milliseconds.
This parameter can only be set in the postgresql.conf
file or on the server command line;
Specifies the cost limit value that will be used in automatic
VACUUM operations. If -1 is specified (which is the
default), the regular
- value will be used. Note that
+ /> value will be used. Note that
the value is distributed proportionally among the running autovacuum
workers, if there is more than one, so that the sum of the limits for
each worker does not exceed the value of this variable.
The current effective value of the search path can be examined
current_schemas
- (see ).
+ (see />).
This is not quite the same as
examining the value of search_path , since
current_schemas shows how the items
- For more information on schema handling, see .
+ For more information on schema handling, see />.
For more information on row security policies,
- see .
+ see />.
This variable is not used for temporary tables; for them,
- is consulted instead.
+ /> is consulted instead.
For more information on tablespaces,
- see .
+ see />.
- See also .
+ See also />.
This parameter is normally on. When set to off , it
disables validation of the function body string during
- linkend="sql-createfunction">. Disabling validation avoids side
+ linkend="sql-createfunction"/>. Disabling validation avoids side
effects of the validation process and avoids false positives due
to problems such as forward references. Set this parameter
to off before loading functions on behalf of other
- Consult and
- linkend="sql-set-transaction"> for more information.
+ Consult /> and
+ linkend="sql-set-transaction"/> for more information.
- Consult for more information.
+ Consult /> for more information.
- Consult for more information.
+ Consult /> for more information.
superuser privilege and results in discarding any previously cached
query plans. Possible values are origin (the default),
replica and local .
- See for
+ See /> for
more information.
longer than the specified duration in milliseconds. This allows any
locks held by that session to be released and the connection slot to be reused;
it also allows tuples visible only to this transaction to be vacuumed. See
- for more details about this.
+ /> for more details about this.
The default value of 0 disables this feature.
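Assuming this describes the idle-in-transaction session timeout (the parameter name is not shown in the text above and is an assumption here), a minimal sketch:

```sql
-- Illustrative: terminate sessions that sit idle inside an open
-- transaction for more than five minutes, releasing their locks and
-- connection slots and allowing their dead tuples to be vacuumed.
SET idle_in_transaction_session_timeout = '5min';
```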
tuples. The default is 150 million transactions. Although users can
set this value anywhere from zero to two billion, VACUUM
will silently limit the effective value to 95% of
- , so that a
+ />, so that a
periodical manual VACUUM has a chance to run before an
anti-wraparound autovacuum is launched for the table. For more
information see
- .
+ />.
The default is 50 million transactions. Although
users can set this value anywhere from zero to one billion,
VACUUM will silently limit the effective value to half
- the value of , so
+ the value of />, so
that there is not an unreasonably short time between forced
autovacuums. For more information see
- linkend="vacuum-for-wraparound">.
+ linkend="vacuum-for-wraparound"/>.
tuples. The default is 150 million multixacts.
Although users can set this value anywhere from zero to two billion,
VACUUM will silently limit the effective value to 95% of
- , so that a
+ />, so that a
periodical manual VACUUM has a chance to run before an
anti-wraparound is launched for the table.
- For more information see .
+ For more information see />.
is 5 million multixacts.
Although users can set this value anywhere from zero to one billion,
VACUUM will silently limit the effective value to half
- the value of ,
+ the value of />,
so that there is not an unreasonably short time between forced
autovacuums.
- For more information see .
+ For more information see />.
Sets the output format for values of type bytea .
Valid values are hex (the default)
and escape (the traditional PostgreSQL
- format). See for more
+ format). See /> for more
information. The bytea type always
accepts both formats on input, regardless of this setting.
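Switching the output format for a session, as described above, can be sketched as:

```sql
-- Illustrative: select the traditional escape output format; input in
-- either hex or escape form is still accepted afterwards.
SET bytea_output = 'escape';
SELECT '\x41'::bytea;
```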
base64 and hex , which
are both defined in the XML Schema standard. The default is
base64 . For further information about
- XML-related functions, see .
+ XML-related functions, see />.
Sets whether DOCUMENT or
CONTENT is implicit when converting between
XML and character string values. See
- linkend="datatype-xml"> for a description of this. Valid
+ linkend="datatype-xml"/> for a description of this. Valid
values are DOCUMENT and
CONTENT . The default is
CONTENT .
The default is four megabytes (4MB ). This setting
can be overridden for individual GIN indexes by changing
index storage parameters.
- See > and >
+ See /> and />
for more information.
and European are synonyms for DMY ; the
keywords US , NonEuro , and
NonEuropean are synonyms for MDY . See
- for more information. The
+ /> for more information. The
built-in default is ISO, MDY, but
initdb will initialize the
configuration file with a setting that corresponds to the
output matching
SQL standard interval literals.
The value postgres (which is the default) will produce
output matching
PostgreSQL releases prior to 8.4
- when the
+ when the />
parameter was set to ISO .
The value postgres_verbose will produce output
matching
PostgreSQL releases prior to 8.4
The IntervalStyle parameter also affects the
interpretation of ambiguous interval input. See
- for more information.
+ /> for more information.
The built-in default is GMT, but that is typically
overridden in
postgresql.conf ;
initdb
will install a setting there corresponding to its system environment.
- See for more information.
+ See /> for more information.
which is a collection that works in most of the world; there are
also 'Australia' and 'India' ,
and other collections can be defined for a particular installation.
- See for more information.
+ See /> for more information.
partially-significant digits; this is especially useful for dumping
float data that needs to be restored exactly. Or it can be set
negative to suppress unwanted digits.
- See also .
+ See also />.
Sets the client-side encoding (character set).
The default is to use the database encoding.
The character sets supported by the
PostgreSQL
- server are described in .
+ server are described in />.
Sets the language in which messages are displayed. Acceptable
- values are system-dependent; see for
+ values are system-dependent; see /> for
more information. If this variable is set to the empty string
(which is the default) then the value is inherited from the
execution environment of the server in a system-dependent way.
Sets the locale to use for formatting monetary amounts, for
example with the to_char family of
functions. Acceptable values are system-dependent; see
- linkend="locale"> for more information. If this variable is
+ linkend="locale"/> for more information. If this variable is
set to the empty string (which is the default) then the value
is inherited from the execution environment of the server in a
system-dependent way.
Sets the locale to use for formatting numbers, for example
with the to_char family of
functions. Acceptable values are system-dependent; see
- linkend="locale"> for more information. If this variable is
+ linkend="locale"/> for more information. If this variable is
set to the empty string (which is the default) then the value
is inherited from the execution environment of the server in a
system-dependent way.
Sets the locale to use for formatting dates and times, for example
with the to_char family of
functions. Acceptable values are system-dependent; see
- linkend="locale"> for more information. If this variable is
+ linkend="locale"/> for more information. If this variable is
set to the empty string (which is the default) then the value
is inherited from the execution environment of the server in a
system-dependent way.
Selects the text search configuration that is used by those variants
of the text search functions that do not have an explicit argument
specifying the configuration.
- See for further information.
+ See /> for further information.
The built-in default is pg_catalog.simple , but
initdb will initialize the
configuration file with a setting that corresponds to the
This variable specifies one or more shared libraries that are to be
preloaded at connection start.
It contains a comma-separated list of library names, where each name
- is interpreted as for the command.
+ is interpreted as for the /> command.
Whitespace between entries is ignored; surround a library name with
double quotes if you need to include whitespace or commas in the name.
The parameter value only takes effect at the start of the connection.
However, unless a module is specifically designed to be used in this way by
non-superusers, this is usually not the right setting to use. Look
- at instead.
+ at /> instead.
This variable specifies one or more shared libraries that are to be
preloaded at connection start.
It contains a comma-separated list of library names, where each name
- is interpreted as for the command.
+ is interpreted as for the /> command.
Whitespace between entries is ignored; surround a library name with
double quotes if you need to include whitespace or commas in the name.
The parameter value only takes effect at the start of the connection.
performance-measurement libraries to be loaded into specific sessions
without an explicit
LOAD command being given. For
- example, could be enabled for all
+ example, /> could be enabled for all
sessions under a given user name by setting this parameter
with ALTER ROLE SET . Also, this parameter can be changed
without restarting the server (but changes only take effect when a new
- Unlike , there is no large
+ Unlike />, there is no large
performance advantage to loading a library at session start rather than
when it is first used. There is some advantage, however, when
connection pooling is used.
This variable specifies one or more shared libraries to be preloaded at
server start.
It contains a comma-separated list of library names, where each name
- is interpreted as for the command.
+ is interpreted as for the /> command.
Whitespace between entries is ignored; surround a library name with
double quotes if you need to include whitespace or commas in the name.
This parameter can only be set at server start. If a specified
parameter is recommended only for libraries that will be used in most
sessions. Also, changing this parameter requires a server restart, so
this is not the right setting to use for short-term debugging tasks,
- say. Use for that
+ say. Use /> for that
instead.
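Preloading a library at server start, as described above, can be sketched as follows (pg_stat_statements is a common example of such a library, used here as an illustration):

```ini
# Illustrative: preload an instrumentation library into every backend.
# Changing this requires a server restart.
shared_preload_libraries = 'pg_stat_statements'
```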
Soft upper limit of the size of the set returned by GIN index scans. For more
- information see .
+ information see />.
- When is set,
+ When /> is set,
this parameter also determines the length of time to wait before
a log message is issued about the lock wait. If you are trying
to investigate locking delays you might want to set a shorter than
The shared lock table tracks locks on
max_locks_per_transaction * (
- linkend="guc-max-connections"> +
- linkend="guc-max-prepared-transactions">) objects (e.g., tables);
+ linkend="guc-max-connections"/> +
+ linkend="guc-max-prepared-transactions"/>) objects (e.g., tables);
hence, no more than this many distinct objects can be locked at
any one time. This parameter controls the average number of object
locks allocated for each transaction; individual transactions
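As a worked example of the sizing formula above (the values shown are illustrative and happen to match the usual defaults):

```
# With these settings the shared lock table can track
# 64 * (100 + 0) = 6400 distinct lockable objects.
max_locks_per_transaction = 64
max_connections = 100
max_prepared_transactions = 0
```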
The shared predicate lock table tracks locks on
max_pred_locks_per_transaction * (
- linkend="guc-max-connections"> +
- linkend="guc-max-prepared-transactions">) objects (e.g., tables);
+ linkend="guc-max-connections"/> +
+ linkend="guc-max-prepared-transactions"/>) objects (e.g., tables);
hence, no more than this many distinct objects can be locked at
any one time. This parameter controls the average number of object
locks allocated for each transaction; individual transactions
predicate-locked before the lock is promoted to covering the whole
relation. Values greater than or equal to zero mean an absolute
limit, while negative values
- mean divided by
+ mean /> divided by
the absolute value of this setting. The default is -2, which keeps
the behavior from previous versions of
PostgreSQL .
This parameter can only be set in the postgresql.conf
- See for more information.
+ See /> for more information.
output of EXPLAIN as well as the results of functions
like pg_get_viewdef . See also the
--quote-all-identifiers option of
- > and >.
+ /> and >.
parameter to determine how string literals will be processed.
The presence of this parameter can also be taken as an indication
that the escape string syntax (E'...' ) is supported.
- Escape string syntax ()
+ Escape string syntax (/>)
should be used if an application desires
backslashes to be treated as escape characters.
- Refer to for related information.
+ Refer to /> for related information.
Reports the size of a disk block. It is determined by the value
of BLCKSZ when building the server. The default
value is 8192 bytes. The meaning of some configuration
- variables (such as ) is
+ variables (such as />) is
influenced by
block_size . See
- linkend="runtime-config-resource"> for information.
+ linkend="runtime-config-resource"/> for information.
Reports whether data checksums are enabled for this cluster.
- See for more information.
+ See /> for more information.
Reports the locale in which sorting of textual data is done.
- See for more information.
+ See /> for more information.
This value is determined when a database is created.
Reports the locale that determines character classifications.
- See for more information.
+ See /> for more information.
This value is determined when a database is created.
Ordinarily this will be the same as lc_collate ,
but for special applications it might be set differently.
Reports the database encoding (character set).
It is determined when the database is created. Ordinarily,
clients need only be concerned with the value of
- linkend="guc-client-encoding">.
+ linkend="guc-client-encoding"/>.
Reports the number of blocks (pages) in a WAL segment file.
The total size of a WAL segment file in bytes is equal to
wal_segment_size multiplied by wal_block_size ;
- by default this is 16MB. See for
+ by default this is 16MB. See /> for
more information.
Generates a great amount of debugging output for the
LISTEN and NOTIFY
- commands. or
- must be
+ commands. /> or
+ /> must be
DEBUG1 or lower to send this output to the
client or server logs, respectively.
Enables logging of recovery-related debugging output that otherwise
would not be logged. This parameter allows the user to override the
- normal setting of , but only for
+ normal setting of />, but only for
specific messages. This is intended for use in debugging Hot Standby.
Valid values are DEBUG5 , DEBUG4 ,
DEBUG3 , DEBUG2 , DEBUG1 , and
- Only has effect if are enabled.
+ Only has effect if /> are enabled.
Detection of a checksum failure during a read normally causes
For convenience there are also single letter command-line option
switches available for some parameters. They are described in
- . Some of these
+ />. Some of these
options exist for historical reasons, and their presence as a
single-letter option does not necessarily indicate an endorsement
to use the option heavily.
This appendix covers extensions and other server plug-in modules found in
- contrib . covers utility
+ contrib . /> covers utility
programs.
When building from the source distribution, these components are not built
automatically, unless you build the "world" target
- (see ).
+ (see />).
You can build and install all of them by running:
make
make install
To make use of one of these modules, after you have installed the code
you need to register the new SQL objects in the database system.
In
PostgreSQL 9.1 and later, this is done by executing
- a command. In a fresh database,
+ a /> command. In a fresh database,
you can simply do
This will update the pre-9.1 objects of the module into a proper
extension object. Future updates to the module will be
- managed by .
+ managed by />.
For more information about extension updates, see
- .
+ />.
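The two registration paths described above can be sketched as follows; module_name stands for whichever module is actually being installed:

```sql
-- In a fresh database: register the module's objects as an extension
CREATE EXTENSION module_name;

-- In a database upgraded from pre-9.1: absorb the existing loose objects
CREATE EXTENSION module_name FROM unpackaged;
```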
Note, however, that some of these modules are not extensions
in this sense, but are loaded into the server in some other way, for instance
by way of
- . See the documentation of each
+ />. See the documentation of each
module for details.
This appendix and the previous one contain information regarding the modules that
can be found in the contrib directory of the
- PostgreSQL distribution. See for
+ PostgreSQL distribution. See /> for
more information about the contrib section in general and
server extensions and plug-ins found in contrib
specifically.
This section covers
PostgreSQL client
applications in contrib . They can be run from anywhere,
independent of where the database server resides. See
- also for information about client
+ also /> for information about client
applications that are part of the core
PostgreSQL
distribution.
This section covers
PostgreSQL server-related
applications in contrib . They are typically run on the
host where the database server resides. See also
- linkend="reference-server"> for information about server applications that
+ linkend="reference-server"/> for information about server applications that
are part of the core
PostgreSQL distribution.
Syntax
- shows the valid external
+ /> shows the valid external
representations for the cube
type. x , y , etc. denote
floating-point numbers.
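A few cube literals of the kind described above, as a hedged sketch (assumes the cube extension is installed):

```sql
-- A point: the two corners coincide, so one list of coordinates suffices
SELECT '(1.0, 2.0)'::cube;

-- A 2-D box given by two opposite corners
SELECT '(0,0),(1,1)'::cube;
```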
Usage
- shows the operators provided for
+ /> shows the operators provided for
type cube .
- shows the available functions.
+ /> shows the available functions.
Convert a custom path to a finished plan. The return value will generally
be a CustomScan object, which the callback must allocate and
- initialize. See for more details.
+ initialize. See /> for more details.
PostgreSQL has a rich set of native data
types available to users. Users can add new types to
PostgreSQL using the
- linkend="sql-createtype"> command.
+ linkend="sql-createtype"/> command.
- shows all the built-in general-purpose data
+ /> shows all the built-in general-purpose data
types. Most of the alternative names listed in the
Aliases
column are the names used internally by
PostgreSQL for historical reasons. In
Numeric types consist of two-, four-, and eight-byte integers,
four- and eight-byte floating-point numbers, and selectable-precision
- decimals. lists the
+ decimals. /> lists the
available types.
The syntax of constants for the numeric types is described in
- . The numeric types have a
+ />. The numeric types have a
full set of corresponding arithmetic operators and
- functions. Refer to for more
+ functions. Refer to /> for more
information. The following sections describe the types in detail.
The maximum allowed precision when explicitly specified in the
type declaration is 1000; NUMERIC without a specified
precision is subject to the limits described in
- linkend="datatype-numeric-table">.
+ linkend="datatype-numeric-table"/>.
- The setting controls the
+ The /> setting controls the
number of extra significant digits included when a floating point
value is converted to text for output. With the default value of
0 , the output is the same on every platform
This section describes a PostgreSQL-specific way to create an
autoincrementing column. Another way is to use the SQL-standard
- identity column feature, described at .
+ identity column feature, described at />.
from the sequence is still "used up" even if a row containing that
value is never successfully inserted into the table column. This
may happen, for example, if the inserting transaction rolls back.
- See nextval() in
+ See nextval() in />
for details.
The money type stores a currency amount with a fixed
fractional precision; see
- linkend="datatype-money-table">. The fractional precision is
- determined by the database's setting.
+ linkend="datatype-money-table"/>. The fractional precision is
+ determined by the database's /> setting.
The range shown in the table assumes there are two fractional digits.
Input is accepted in a variety of formats, including integer and
floating-point literals, as well as typical
- shows the
+ /> shows the
general-purpose character types available in
- Refer to for information about
- the syntax of string literals, and to
+ Refer to /> for information about
+ the syntax of string literals, and to />
for information about available operators and functions. The
database character set determines the character set used to store
textual values; for more information on character set support,
- refer to .
+ refer to />.
CREATE TABLE test1 (a character(4));
INSERT INTO test1 VALUES ('ok');
-SELECT a, char_length(a) FROM test1; --
+SELECT a, char_length(a) FROM test1; -- />
a | char_length
------+-------------
The char_length function is discussed in
- .
+ />.
There are two other fixed-length character types in
PostgreSQL , shown in
- linkend="datatype-character-special-table">. The name
+ linkend="datatype-character-special-table"/>. The name
type exists only for the storage of identifiers
in the internal system catalogs and is not intended for use by the general user. Its
length is currently defined as 64 bytes (63 usable characters plus
The bytea data type allows storage of binary strings;
- see .
+ see />.
input and output:
PostgreSQL 's historical
escape
format, and hex
format. Both
of these are always accepted on input. The output format depends
- on the configuration parameter ;
+ on the configuration parameter />;
the default is hex. (Note that the hex format was introduced in
PostgreSQL 9.0; earlier versions and some
tools don't understand it.)
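A brief sketch of switching between the two output formats via the bytea_output parameter discussed above:

```sql
-- With hex output, values are shown with a leading \x
SET bytea_output = 'hex';
SELECT '\x1234'::bytea;

-- With escape output, non-printable octets appear as octal escapes
SET bytea_output = 'escape';
SELECT '\x1234'::bytea;
```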
literal using escape string syntax).
Backslash itself (octet value 92) can alternatively be represented by
double backslashes.
-
+ />
shows the characters that must be escaped, and gives the alternative
escape sequences where applicable.
The requirement to escape non-printable octets
varies depending on locale settings. In some instances you can get away
with leaving them unescaped. Note that the result in each of the examples
- in was exactly one octet in
+ in /> was exactly one octet in
length, even though the output representation is sometimes
more than one character.
The reason multiple backslashes are required, as shown
- in , is that an input
+ in />, is that an input
string written as a string literal must pass through two parse
phases in the
PostgreSQL server.
The first backslash of each pair is interpreted as an escape
to a single octet with a decimal value of 1. Note that the
single-quote character is not treated specially by bytea ,
so it follows the normal rules for string literals. (See also
- .)
+ />.)
Most printable
octets are represented by their standard
representation in the client character set. The octet with decimal
value 92 (backslash) is doubled in the output.
- Details are in .
+ Details are in />.
PostgreSQL supports the full set of
SQL date and time types, shown in
- linkend="datatype-datetime-table">. The operations available
+ linkend="datatype-datetime-table"/>. The operations available
on these data types are described in
- .
+ />.
Dates are counted according to the Gregorian calendar, even in
years before that calendar was introduced (see
- linkend="datetime-units-history"> for more information).
+ linkend="datetime-units-history"/> for more information).
traditional
POSTGRES , and others.
For some formats, ordering of day, month, and year in date input is
ambiguous and there is support for specifying the expected
- ordering of these fields. Set the parameter
+ ordering of these fields. Set the /> parameter
to MDY to select month-day-year interpretation,
DMY to select day-month-year interpretation, or
YMD to select year-month-day interpretation.
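The effect of the field-ordering setting can be sketched as follows:

```sql
-- The same literal is read differently depending on the field ordering
SET datestyle TO 'SQL, MDY';
SELECT date '01/02/2003';   -- interpreted as January 2, 2003

SET datestyle TO 'SQL, DMY';
SELECT date '01/02/2003';   -- interpreted as February 1, 2003
```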
PostgreSQL is more flexible in
handling date/time input than the
- See
+ See />
for the exact parsing rules of date/time input and for the
recognized text fields including months, days of the week, and
time zones.
Remember that any date or time literal input needs to be enclosed
in single quotes, like text strings. Refer to
- for more
+ /> for more
information.
SQL requires the following syntax
- shows some possible
+ /> shows some possible
inputs for the date type.
Valid input for these types consists of a time of day followed
by an optional time zone. (See
- linkend="datatype-datetime-time-table">
- and .) If a time zone is
+ linkend="datatype-datetime-time-table"/>
+ and />.) If a time zone is
specified in the input for time without time zone ,
it is silently ignored. You can also specify a date but it will
be ignored, except when you use a time zone name that involves a
- Refer to for more information on how
+ Refer to /> for more information on how
to specify time zones.
time zone specified is converted to UTC using the appropriate offset
for that time zone. If no time zone is stated in the input string,
then it is assumed to be in the time zone indicated by the system's
- parameter, and is converted to UTC using the
+ /> parameter, and is converted to UTC using the
offset for the timezone zone.
current timezone zone, and displayed as local time in that
zone. To see the time in another time zone, either change
timezone or use the AT TIME ZONE construct
- (see ).
+ (see />).
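A minimal sketch of the AT TIME ZONE construct mentioned above (the zone name is illustrative):

```sql
-- Display the current transaction timestamp as local time in New York
SELECT now() AT TIME ZONE 'America/New_York';
```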
PostgreSQL supports several
special date/time input values for convenience, as shown in
- linkend="datatype-datetime-special-table">. The values
+ linkend="datatype-datetime-special-table"/>. The values
infinity and -infinity
are specially represented inside the system and will be displayed
unchanged; but the others are simply notational shorthands
CURRENT_TIMESTAMP , LOCALTIME ,
LOCALTIMESTAMP . The latter four accept an
optional subsecond precision specification. (See
- linkend="functions-datetime-current">.) Note that these are
+ linkend="functions-datetime-current"/>.) Note that these are
SQL functions and are not recognized in data input strings.
SQL standard requires the use of the ISO 8601
format. The name of the SQL
output format is a
historical accident.)
- linkend="datatype-datetime-output-table"> shows examples of each
+ linkend="datatype-datetime-output-table"/> shows examples of each
output style. The output of the date and
time types is generally only the date or time part
in accordance with the given examples. However, the
In the
SQL and POSTGRES styles, day appears before
month if DMY field ordering has been specified, otherwise month appears
before day.
- (See
+ (See />
for how this setting also affects interpretation of input values.)
- shows examples.
+ /> shows examples.
The date/time style can be selected by the user using the
SET datestyle command, the
- linkend="guc-datestyle"> parameter in the
+ linkend="guc-datestyle"/> parameter in the
postgresql.conf configuration file, or the
PGDATESTYLE environment variable on the server or
client.
The formatting function to_char
- (see ) is also available as
+ (see />) is also available as
a more flexible way to format date/time output.
All timezone-aware dates and times are stored internally in
UTC . They are converted to local time
- in the zone specified by the configuration
+ in the zone specified by the /> configuration
parameter before being displayed to the client.
A full time zone name, for example America/New_York .
The recognized time zone names are listed in the
pg_timezone_names view (see
- linkend="view-pg-timezone-names">).
+ linkend="view-pg-timezone-names"/>).
PostgreSQL uses the widely-used IANA
time zone data for this purpose, so the same time zone
names are also recognized by much other software.
contrast to full time zone names which can imply a set of daylight
savings transition-date rules as well. The recognized abbreviations
are listed in the
pg_timezone_abbrevs view (see
- linkend="view-pg-timezone-abbrevs">). You cannot set the
- configuration parameters or
- to a time
+ linkend="view-pg-timezone-abbrevs"/>). You cannot set the
+ configuration parameters /> or
+ /> to a time
zone abbreviation, but you can use abbreviations in
date/time input values and with the AT TIME ZONE
operator.
they are obtained from configuration files stored under
.../share/timezone/ and .../share/timezonesets/
of the installation directory
- (see ).
+ (see />).
- The configuration parameter can
+ The /> configuration parameter can
be set in the file postgresql.conf , or in any of the
- other standard ways described in .
+ other standard ways described in />.
There are also some special ways to set it:
of the different units are implicitly added with appropriate
sign accounting. ago negates all the fields.
This syntax is also used for interval output, if
- is set to
+ /> is set to
postgres_verbose .
The string must start with a P , and may include a
T that introduces the time-of-day units. The
available unit abbreviations are given in
- linkend="datatype-interval-iso8601-units">. Units may be
+ linkend="datatype-interval-iso8601-units"/>. Units may be
omitted, and may be specified in any order, but units smaller than
a day must appear after T . In particular, the meaning of
M depends on whether it is before or after
- shows some examples
+ /> shows some examples
of valid interval input.
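A short sketch of ISO 8601 interval input, illustrating the position-dependent meaning of M described above:

```sql
SELECT interval 'P1Y2M3DT4H5M6S';  -- 1 year 2 months 3 days 04:05:06
SELECT interval 'P1M';             -- M before T: one month
SELECT interval 'PT1M';            -- M after T: one minute
```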
postgres_verbose , or iso_8601 ,
using the command SET intervalstyle .
The default is the postgres format.
- shows examples of each
+ /> shows examples of each
output style.
The output of the postgres style matches the output of
PostgreSQL releases prior to 8.4 when the
- parameter was set to ISO .
+ /> parameter was set to ISO .
standard
SQL type
boolean ;
- see .
+ see />.
The boolean type can have several states:
true
, false
, and a third state,
unknown
, which is represented by the
- shows that
+ /> shows that
boolean values are output using the letters
t and f .
Enum types are created using the
- linkend="sql-createtype"> command,
+ linkend="sql-createtype"/> command,
for example:
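A minimal sketch of such a definition (the type and label names are illustrative):

```sql
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
```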
Geometric data types represent two-dimensional spatial
- objects. shows the geometric
+ objects. /> shows the geometric
types available in
PostgreSQL .
A rich set of functions and operators is available to perform various geometric
operations such as scaling, translation, rotation, and determining
- intersections. They are explained in .
+ intersections. They are explained in />.
PostgreSQL offers data types to store IPv4, IPv6, and MAC
- addresses, as shown in . It
+ addresses, as shown in />. It
is better to use these types instead of plain text types to store
network addresses, because
these types offer input error checking and specialized
- operators and functions (see ).
+ operators and functions (see />).
- shows some examples.
+ /> shows some examples.
Refer to
- linkend="sql-syntax-bit-strings"> for information about the syntax
+ linkend="sql-syntax-bit-strings"/> for information about the syntax
of bit string constants. Bit-logical operators and string
manipulation functions are available; see
- linkend="functions-bitstring">.
+ linkend="functions-bitstring"/>.
A bit string value requires 1 byte for each group of 8 bits, plus
5 or 8 bytes overhead depending on the length of the string
(but long values may be compressed or moved out-of-line, as explained
- in for character strings).
+ in /> for character strings).
The tsvector type represents a document in a form optimized
for text search; the tsquery type similarly represents
a text query.
- provides a detailed explanation of this
- facility, and summarizes the
+ /> provides a detailed explanation of this
+ facility, and /> summarizes the
related functions and operators.
A tsvector value is a sorted list of distinct
lexemes , which are words that have been
normalized to merge different variants of the same word
- (see for details). Sorting and
+ (see /> for details). Sorting and
duplicate-elimination are done automatically during input, as shown in
this example:
'fat':2 'rat':3
- Again, see for more detail.
+ Again, see /> for more detail.
functions for UUIDs, but the core database does not include any
function for generating UUIDs, because no single algorithm is well
suited for every application. The
- linkend="uuid-ossp"> module
+ linkend="uuid-ossp"/> module
provides functions that implement several standard algorithms.
- The module also provides a generation
+ The /> module also provides a generation
function for random UUIDs.
Alternatively, UUIDs could be generated by client applications or
other libraries invoked through a server-side function.
advantage over storing XML data in a text field is that it
checks the input values for well-formedness, and there are support
functions to perform type-safe operations on it; see
- linkend="functions-xml">. Use of this data type requires the
+ linkend="functions-xml"/>. Use of this data type requires the
installation to have been built with configure
--with-libxml.
results to the client (which is the normal mode), PostgreSQL
converts all character data passed between the client and the
server and vice versa to the character encoding of the respective
- end; see . This includes string
+ end; see />. This includes string
representations of XML values, such as in the above examples.
This would ordinarily mean that encoding declarations contained in
XML data can become invalid as the character data is converted
- For additional information see .
+ For additional information see />.
PostgreSQL as primary keys for various
system tables. OIDs are not added to user-created tables, unless
WITH OIDS is specified when the table is
- created, or the
+ created, or the />
configuration variable is enabled. Type oid represents
an object identifier. There are also several alias types for
oid : regproc , regprocedure ,
regoper , regoperator , regclass ,
regtype , regrole , regnamespace ,
regconfig , and regdictionary .
- shows an overview.
+ /> shows an overview.
(The system columns are further explained in
- linkend="ddl-system-columns">.)
+ linkend="ddl-system-columns"/>.)
useful in situations where a function's behavior does not
correspond to simply taking or returning a value of a specific
SQL data type.
- linkend="datatype-pseudotypes-table"> lists the existing
+ linkend="datatype-pseudotypes-table"/> lists the existing
pseudo-types.
|
anyelement
Indicates that a function accepts any data type
- (see ).
+ (see />).
|
anyarray
Indicates that a function accepts any array data type
- (see ).
+ (see />).
|
anynonarray
Indicates that a function accepts any non-array data type
- (see ).
+ (see />).
|
anyenum
Indicates that a function accepts any enum data type
- (see and
- ).
+ (see /> and
+ />).
|
anyrange
Indicates that a function accepts any range data type
- (see and
- ).
+ (see /> and
+ />).
|
Date/Time Key Words
- shows the tokens that are
+ /> shows the tokens that are
recognized as names of months.
- shows the tokens that are
+ /> shows the tokens that are
recognized as names of days of the week.
- shows the tokens that serve
+ /> shows the tokens that serve
various modifier purposes.
Since timezone abbreviations are not well standardized,
PostgreSQL provides a means to customize
the set of abbreviations accepted by the server. The
- run-time parameter
+ /> run-time parameter
determines the active set of abbreviations. While this parameter
can be altered by any database user, the possible values for it
are under the control of the database administrator — they
- See also , which provides roughly the same
+ See also />, which provides roughly the same
functionality using a more modern and standards-compliant infrastructure.
server. It is recommended to use the foreign-data wrapper
dblink_fdw when defining the foreign
server. See the example below, as well as
- and
- .
+ /> and
+ />.
libpq -style connection info string, for example
hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres
password=mypasswd.
- For details see .
+ For details see />.
Alternatively, the name of a foreign server.
the unnamed connection, or on a named connection if specified.
To receive notifications via dblink, LISTEN must
first be issued, using dblink_exec .
- For details see > and >.
+ For details see /> and >.
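A hedged sketch of the LISTEN-via-dblink_exec step described above (the connection string and channel name are hypothetical):

```sql
-- Issue LISTEN on the remote side so notifications can later be received
SELECT dblink_exec('dbname=mydb host=127.0.0.1', 'LISTEN my_channel');
```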
SQL does not make any guarantees about the order of the rows in a
table. When a table is read, the rows will appear in an unspecified order,
unless sorting is explicitly requested. This is covered in
- linkend="queries">. Furthermore, SQL does not assign unique
+ linkend="queries"/>. Furthermore, SQL does not assign unique
identifiers to rows, so it is possible to have several completely
identical rows in a table. This is a consequence of the
mathematical model that underlies SQL but is usually not desirable.
built-in data types that fit many applications. Users can also
define their own data types. Most built-in data types have obvious
names and semantics, so we defer a detailed explanation to
- linkend="datatype">. Some of the frequently used data types are
+ linkend="datatype"/>. Some of the frequently used data types are
integer for whole numbers, numeric for
possibly fractional numbers, text for character
strings, date for dates, time for
To create a table, you use the aptly named
- linkend="sql-createtable"> command.
+ linkend="sql-createtable"/> command.
In this command you specify at least a name for the new table, the
names of the columns and the data type of each column. For
example:

CREATE TABLE my_first_table (
    first_column text,
    second_column integer
);

The first column has the name first_column and the type
text ; the second column has the name
second_column and the type integer .
The table and column names follow the identifier syntax explained
- in . The type names are
+ in />. The type names are
usually also identifiers, but there are some exceptions. Note that the
column list is comma-separated and surrounded by parentheses.
If you no longer need a table, you can remove it using the
- linkend="sql-droptable"> command.
+ linkend="sql-droptable"/> command.
For example:
DROP TABLE my_first_table;
If you need to modify a table that already exists, see
- linkend="ddl-alter"> later in this chapter.
+ linkend="ddl-alter"/> later in this chapter.
tables. The remainder of this chapter is concerned with adding
features to the table definition to ensure data integrity,
security, or convenience. If you are eager to fill your tables with
- data now you can skip ahead to and read the
+ data now you can skip ahead to /> and read the
rest of this chapter later.
columns will be filled with their respective default values. A
data manipulation command can also request explicitly that a column
be set to its default value, without having to know what that value is.
- (Details about data manipulation commands are in .)
+ (Details about data manipulation commands are in />.)
where the nextval() function supplies successive values
from a
sequence object (see
- linkend="functions-sequence">). This arrangement is sufficiently common
+ linkend="functions-sequence"/>). This arrangement is sufficiently common
that there's a special shorthand for it:
CREATE TABLE products (
    product_no SERIAL,
    ...
);
The
SERIAL shorthand is discussed further in
- linkend="datatype-serial">.
+ linkend="datatype-serial"/>.
More information about updating and deleting data is in
- linkend="dml">. Also see the description of foreign key constraint
+ linkend="dml"/>. Also see the description of foreign key constraint
syntax in the reference documentation for
- .
+ />.
The object identifier (object ID) of a row. This column is only
present if the table was created using WITH
- OIDS, or if the
+ OIDS, or if the />
configuration variable was set at the time. This column is of type
oid (same name as the column); see
- linkend="datatype-oid"> for more information about the type.
+ linkend="datatype-oid"/> for more information about the type.
The OID of the table containing this row. This column is
particularly handy for queries that select from inheritance
- hierarchies (see ), since without it,
+ hierarchies (see />), since without it,
it's difficult to tell which individual table a row came from. The
tableoid can be joined against the
oid column of
Transaction identifiers are also 32-bit quantities. In a
long-lived database it is possible for transaction IDs to wrap
around. This is not a fatal problem given appropriate maintenance
- procedures; see for details. It is
+ procedures; see /> for details. It is
unwise, however, to depend on the uniqueness of transaction IDs
over the long term (more than one billion transactions).
All these actions are performed using the
-
+ />
command, whose reference page contains details beyond those given
here.
ALTER TABLE products DROP COLUMN description CASCADE;
- See for a description of the general
+ See /> for a description of the general
mechanism behind this.
object vary depending on the object's type (table, function, etc).
For complete information on the different types of privileges
supported by
PostgreSQL , refer to the
- reference
+ /> reference
page. The following sections and chapters will also show you how
those privileges are used.
An object can be assigned to a new owner with an ALTER
command of the appropriate kind for the object, e.g.
- linkend="sql-altertable">. Superusers can always do
+ linkend="sql-altertable"/>. Superusers can always do
this; ordinary roles can only do it if they are both the current owner
of the object (or a member of the owning role) and a member of the new
owning role.
be used to grant a privilege to every role on the system. Also,
group
roles can be set up to help manage privileges when
there are many users of a database — for details see
- .
+ />.
the right to grant it in turn to others. If the grant option is
subsequently revoked then all who received the privilege from that
recipient (directly or through a chain of grants) will lose the
- privilege. For details see the and
- reference pages.
+ privilege. For details see the /> and
+ /> reference pages.
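The grant-option chain described above can be sketched as follows (role and table names are hypothetical):

```sql
-- Pass SELECT on accounts to bob, along with the right to re-grant it
GRANT SELECT ON accounts TO bob WITH GRANT OPTION;

-- Revoking from bob with CASCADE also revokes from everyone bob granted
REVOKE SELECT ON accounts FROM bob CASCADE;
```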
In addition to the SQL-standard privilege
- system available through ,
+ system available through />,
tables can have row security policies that restrict,
on a per-user basis, which rows can be returned by normal queries
or inserted, updated, or deleted by data modification commands.
- Policies are created using the
- command, altered using the command,
- and dropped using the command. To
+ Policies are created using the />
+ command, altered using the /> command,
+ and dropped using the /> command. To
enable and disable row security for a given table, use the
- command.
+ /> command.
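A minimal sketch of the policy commands listed above (the table, policy, and column names are hypothetical):

```sql
-- Enable row security on the table, then restrict which rows are visible
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY account_managers ON accounts
    USING (manager = current_user);
```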
not being applied. For example, when taking a backup, it could be
disastrous if row security silently caused some rows to be omitted
from the backup. In such a situation, you can set the
- configuration parameter
+ /> configuration parameter
to off . This does not in itself bypass row security;
what it does is throw an error if any query's results would get filtered
by a policy. The reason for the error can then be investigated and
- For additional details see
- and .
+ For additional details see />
+ and />.
- To create a schema, use the
+ To create a schema, use the />
command. Give the schema a name
of your choice. For example:
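A minimal command, using the same schema name that appears elsewhere in this section:

```sql
CREATE SCHEMA myschema;
```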
DROP SCHEMA myschema CASCADE;
- See for a description of the general
+ See /> for a description of the general
mechanism behind this.
You can even omit the schema name, in which case the schema name
will be the same as the user name. See
- linkend="ddl-schemas-patterns"> for how this can be useful.
+ linkend="ddl-schemas-patterns"/> for how this can be useful.
- See also for other ways to manipulate
+ See also /> for other ways to manipulate
the schema search path.
public
means every user
. In the
first sense it is an identifier, in the second sense it is a
key word, hence the different capitalization; recall the
- guidelines from .)
+ guidelines from />.)
Given the sample data from the
PostgreSQL
- tutorial (see ), this returns:
+ tutorial (see />), this returns:
name | altitude
capitals table, but this does not happen:
INSERT always inserts into exactly the table
specified. In some cases it is possible to redirect the insertion
- using a rule (see ). However that does not
+ using a rule (see />). However that does not
help for the above case because the cities table
does not contain the column state , and so the
command will be rejected before the rule can be applied.
Table inheritance is typically established when the child table is
created, using the INHERITS clause of the
-
+ />
statement.
Alternatively, a table which is already defined in a compatible way can
have a new parent relationship added, using the INHERIT
- variant of .
+ variant of />.
To do this the new child table must already include columns with
the same names and types as the columns of the parent. It must also include
check constraints with the same names and check expressions as those of the
NO INHERIT variant of ALTER TABLE .
Dynamically adding and removing inheritance links like this can be useful
when the inheritance relationship is being used for table
- partitioning (see ).
+ partitioning (see />).
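Both ways of establishing the link can be sketched with the cities/capitals tables used in this chapter (the towns table is hypothetical, added only to illustrate the ALTER TABLE variant):

```sql
-- Inheritance established when the child is created
CREATE TABLE cities (name text, population real, altitude int);
CREATE TABLE capitals (state char(2)) INHERITS (cities);

-- Or added later, once the child already has all of the
-- parent's columns with matching names and types
CREATE TABLE towns (name text, population real, altitude int);
ALTER TABLE towns INHERIT cities;
```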
if they are inherited
from any parent tables. If you wish to remove a table and all of its
descendants, one easy way is to drop the parent table with the
- CASCADE option (see ).
+ CASCADE option (see />).
- will
+ /> will
propagate any changes in column data definitions and check
constraints down the inheritance hierarchy. Again, dropping
columns that are depended on by other tables is only possible when using
that the data is (also) in the parent table. But
the capitals table could not be updated directly
without an additional grant. In a similar way, the parent table's row
- security policies (see ) are applied to
+ security policies (see />) are applied to
rows coming from child tables during an inherited query. A child table's
policies, if any, are applied only when it is the table explicitly named
in the query; and in that case, any policies attached to its parent(s) are
- Foreign tables (see ) can also
+ Foreign tables (see />) can also
be part of inheritance hierarchies, either as parent or child
tables, just as regular tables can be. If a foreign table is part
of an inheritance hierarchy then any operations not supported by
typically only work on individual, physical tables and do not
support recursing over inheritance hierarchies. The respective
behavior of each individual command is documented in its reference
- page ().
+ page (/>).
called sub-partitioning . Partitions may have their
own indexes, constraints and default values, distinct from those of other
partitions. Indexes must be created separately for each partition. See
- for more details on creating partitioned
+ /> for more details on creating partitioned
tables and partitions.
vice versa. However, it is possible to add a regular or partitioned table
containing data as a partition of a partitioned table, or remove a
partition from a partitioned table turning it into a standalone table;
- see to learn more about the
+ see /> to learn more about the
ATTACH PARTITION and DETACH PARTITION
sub-commands.
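A sketch with hypothetical names, assuming a measurement table range-partitioned by date:

```sql
-- Add an existing, compatible table as the partition for 2024 data
ALTER TABLE measurement
    ATTACH PARTITION measurement_y2024
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- Turn it back into a standalone table
ALTER TABLE measurement DETACH PARTITION measurement_y2024;
```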
inheritance with regular tables. Since a partition hierarchy consisting
of the partitioned table and its partitions is still an inheritance
hierarchy, all the normal rules of inheritance apply as described in
- with some exceptions, most notably:
+ /> with some exceptions, most notably:
Partitions can also be foreign tables
- (see ),
+ (see />),
although these have some limitations that normal tables do not. For
example, data inserted into the partitioned table is not routed to
foreign table partitions.
- Ensure that the
+ Ensure that the />
configuration parameter is not disabled in postgresql.conf .
If it is, queries will not be optimized as desired.
- Ensure that the
+ Ensure that the />
configuration parameter is not disabled in
postgresql.conf .
If it is, queries will not be optimized as desired.
The default (and recommended) setting of
- is actually neither
+ /> is actually neither
on nor off , but an intermediate setting
called partition , which causes the technique to be
applied only to queries that are likely to be working on partitioned
library that can communicate with an external data source, hiding the
details of connecting to the data source and obtaining data from it.
There are some foreign data wrappers available as contrib
- modules; see . Other kinds of foreign data
+ modules; see />. Other kinds of foreign data
wrappers might be found as third party products. If none of the existing
foreign data wrappers suit your needs, you can write your own; see
- linkend="fdwhandler">.
+ linkend="fdwhandler"/>.
For additional information, see
- ,
- ,
- ,
- , and
- .
+ />,
+ />,
+ />,
+ />, and
+ />.
Detailed information on
- these topics appears in .
+ these topics appears in />.
PostgreSQL makes sure that you cannot
drop objects that other objects still depend on. For example,
attempting to drop the products table we considered in
- linkend="ddl-constraints-fk">, with the orders table depending on
+ linkend="ddl-constraints-fk"/>, with the orders table depending on
it, would result in an error message like this:
DROP TABLE products;
LANGUAGE SQL;
- (See for an explanation of SQL-language
+ (See /> for an explanation of SQL-language
functions.)
PostgreSQL will be aware that
the get_color_note function depends on the rainbow
type: dropping the type would force dropping the function, because its
- Refer back to about where the
+ Refer back to /> about where the
server expects to find the shared library files.
but real-world usage will involve including it in a text search
- configuration as described in .
+ configuration as described in />.
That might look like this:
Real-world usage will involve including it in a text search
- configuration as described in .
+ configuration as described in />.
That might look like this:
stored. If the table has any columns with potentially-wide values,
there also might be a
TOAST file associated with the table,
which is used to store values too wide to fit comfortably in the main
- table (see ). There will be one valid index
+ table (see />). There will be one valid index
on the
TOAST table, if present. There also might be indexes
associated with the base table. Each table and index is stored in a
separate disk file — possibly more than one file, if the file would
exceed one gigabyte. Naming conventions for these files are described
- in .
+ in />.
You can monitor disk space in three ways:
- using the SQL functions listed in ,
- using the module, or
+ using the SQL functions listed in />,
+ using the /> module, or
using manual inspection of the system catalogs.
The SQL functions are the easiest to use and are generally recommended.
The remainder of this section shows how to do it by inspection of the
If you cannot free up additional space on the disk by deleting
other things, you can move some of the database files to other file
systems by making use of tablespaces. See
- linkend="manage-ag-tablespaces"> for more information about that.
+ linkend="manage-ag-tablespaces"/> for more information about that.
- To create a new row, use the
+ To create a new row, use the />
command. The command requires the
table name and column values. For
- example, consider the products table from :
+ example, consider the products table from />:
CREATE TABLE products (
product_no integer,
WHERE release_date = 'today';
This provides the full power of the SQL query mechanism (
- linkend="queries">) for computing the rows to be inserted.
+ linkend="queries"/>) for computing the rows to be inserted.
When inserting a lot of data at the same time, consider using
- the command.
- It is not as flexible as the
+ the /> command.
+ It is not as flexible as the />
command, but is more efficient. Refer
- to for more information on improving
+ to /> for more information on improving
bulk loading performance.
- To update existing rows, use the
+ To update existing rows, use the />
command. This requires
three pieces of information:
- Recall from that SQL does not, in general,
+ Recall from /> that SQL does not, in general,
provide a unique identifier for rows. Therefore it is not
always possible to directly specify which row to update.
Instead, you specify which conditions a row must meet in order to
this does not create any ambiguity. Of course, the
WHERE condition does
not have to be an equality test. Many other operators are
- available (see ). But the expression
+ available (see />). But the expression
needs to evaluate to a Boolean result.
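For example, a non-equality condition on the products table (column names assumed from this chapter's running example):

```sql
-- Raise the price of every product costing at most 99.99 by 10%
UPDATE products SET price = price * 1.10 WHERE price <= 99.99;
```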
- You use the
+ You use the />
command to remove rows; the syntax is very similar to the
UPDATE command. For instance, to remove all
rows from the products table that have a price of 10, use:
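A sketch of that command, assuming the column is named price:

```sql
DELETE FROM products WHERE price = 10;
```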
The allowed contents of a RETURNING clause are the same as
a SELECT command's output list
- (see ). It can contain column
+ (see />). It can contain column
names of the command's target table, or value expressions using those
columns. A common shorthand is RETURNING * , which selects
all columns of the target table in order.
- If there are triggers () on the target table,
+ If there are triggers (/>) on the target table,
the data available to RETURNING is the row as modified by
the triggers. Thus, inspecting columns computed by triggers is another
common use-case for RETURNING .
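For example (products table assumed as before; the obsoletion_date column is hypothetical):

```sql
-- Remove obsolete rows and report each deleted row in full
DELETE FROM products
    WHERE obsoletion_date = 'today'
    RETURNING *;
```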
The documentation sources are written in
DocBook , which is a markup language
- superficially similar to
HTML . Both of these
- languages are applications of the Standard Generalized
- Markup Language,
SGML , which is
- essentially a language for describing other languages. In what
- follows, the terms DocBook and
SGML are both
+ defined in
XML . In what
+ follows, the terms DocBook and
XML are both
used, but technically they are not interchangeable.
-
- The PostgreSQL documentation is currently being transitioned from DocBook
- SGML and DSSSL style sheets to DocBook XML and XSLT style sheets. Be
- careful to look at the instructions relating to the PostgreSQL version you
- are dealing with, as the procedures and required tools will change.
-
-
-
DocBook allows an author to specify the
structure and content of a technical document without worrying
This is the definition of DocBook itself. We currently use version
4.2; you cannot use later or earlier versions. You need
- the
SGML and the
XML variant of
- the DocBook DTD of the same version. These will usually be in separate
- packages.
-
-
-
-
-
-
ISO 8879 character entities
-
- These are required by DocBook SGML but are distributed separately
- because they are maintained by ISO.
+ the
XML variant of the DocBook DTD, not
-
-
- This is the base package of
SGML processing. Note
- that we no longer need OpenJade, the
DSSSL
- processor, only the OpenSP package for converting SGML to XML.
-
-
-
-
To install the required packages, use:
-yum install docbook-dtds docbook-style-xsl fop libxslt opensp
+yum install docbook-dtds docbook-style-xsl fop libxslt
Installation on FreeBSD
- The FreeBSD Documentation Project is itself a heavy user of
- DocBook, so it comes as no surprise that there is a full set of
- ports
of the documentation tools available on
- FreeBSD. The following ports need to be installed to build the
- documentation on FreeBSD.
-
-
-
-
-
-
-
-
-
textproc/dsssl-docbook-modular
-
-
-
-
-
-
-
-
-
-
To install the required packages with pkg , use:
-pkg install docbook-sgml docbook-xml docbook-xsl fop libxslt opensp
+pkg install docbook-xml docbook-xsl fop libxslt
available for
Debian GNU/Linux .
To install, simply use:
-apt-get install docbook docbook-xml docbook-xsl fop libxml2-utils opensp xsltproc
+apt-get install docbook-xml docbook-xsl fop libxml2-utils xsltproc
macOS
- If you use MacPorts, the following will get you set up:
-sudo port install docbook-sgml-4.2 docbook-xml-4.2 docbook-xsl fop libxslt opensp
-
+ On macOS, you can build the HTML and man documentation without installing
+ anything extra. If you want to build PDFs or want to install a local copy
+ of DocBook, you can get those from your preferred package manager.
-
-
-
-
Manual Installation from Source
- The manual installation process of the DocBook tools is somewhat
- complex, so if you have pre-built packages available, use them.
- We describe here only a standard setup, with reasonably standard
- installation paths, and no fancy
features. For
- details, you should study the documentation of the respective
- package, and read
SGML introductory material.
-
-
-
-
Installing OpenSP
-
- The installation of OpenSP offers a GNU-style
- ./configure; make; make install build process.
- Details can be found in the OpenSP source distribution. In a nutshell:
-
-./configure --enable-default-catalog=/usr/local/etc/sgml/catalog
-make
-make install
-
- Be sure to remember where you put the default catalog
; you
- will need it below. You can also leave it off, but then you will have to
- set the environment variable SGML_CATALOG_FILES to point
- to the file whenever you use any programs from OpenSP later on. (This
- method is also an option if OpenSP is already installed and you want to
- install the rest of the toolchain locally.)
-
-
-
-
-
Installing the DocBook DTD Kit
-
-
- DocBook V4.2 distribution.
-
-
-
-
- Create the directory
- /usr/local/share/sgml/docbook-4.2 and change
- to it. (The exact location is irrelevant, but this one is
- reasonable within the layout we are following here.)
-
-
$ mkdir /usr/local/share/sgml/docbook-4.2
-
$ cd /usr/local/share/sgml/docbook-4.2
-
-
-
-
-
- Unpack the archive:
-
-
$ unzip -a ...../docbook-4.2.zip
-
- (The archive will unpack its files into the current directory.)
-
-
-
-
- Edit the file
- /usr/local/share/sgml/catalog (or whatever
- you told jade during installation) and put a line like this
- into it:
+ If you use MacPorts, the following will get you set up:
-CATALOG "docbook-4.2/docbook.cat"
+sudo port install docbook-xml-4.2 docbook-xsl fop
-
-
-
-
- ISO 8879 character entities archive, unpack it, and put the
- files in the same directory you put the DocBook files in:
-
-
$ cd /usr/local/share/sgml/docbook-4.2
-
$ unzip ...../ISOEnts.zip
-
-
-
-
-
- Run the following command in the directory with the DocBook and ISO files:
+ If you use Homebrew, use this:
-perl -pi -e 's/iso-(.*).gml/ISO\1/g' docbook.cat
+brew install docbook docbook-xsl fop
- (This fixes a mixup between the names used in the DocBook
- catalog file and the actual names of the ISO character entity
- files.)
-
-
-
-
+
Check the output near the end of the run; it should look something
like this:
-
-checking for onsgmls... onsgmls
-checking for DocBook V4.2... yes
-checking for dbtoepub... dbtoepub
checking for xmllint... xmllint
+checking for DocBook XML V4.2... yes
+checking for dbtoepub... dbtoepub
checking for xsltproc... xsltproc
-checking for osx... osx
checking for fop... fop
-
- If neither onsgmls nor
- nsgmls were found then some of the following tests
- will be skipped. nsgmls is part of the OpenSP
- package. You can pass the environment variable
- NSGMLS to configure to point
- to the programs if they are not found automatically. If
- DocBook V4.2
was not found then you did not install
- the DocBook DTD kit in a place where OpenSP can find it, or you have
- not set up the catalog files correctly. See the installation hints
- above.
+ If xmllint was not found then some of the following
+ tests will be skipped.
We use the DocBook XSL stylesheets to convert
refentry pages to *roff output suitable for man
- pages. The man pages are also distributed as a tar archive,
- similar to the
HTML version. To create the man
- pages, use the commands:
+ pages. To create the man pages, use the command:
The installation instructions are also distributed as plain text,
in case they are needed in a situation where better reading tools
are not available. The INSTALL file
- corresponds to , with some minor
+ corresponds to />, with some minor
changes to account for the different context. To recreate the
file, change to the directory doc/src/sgml
and enter make INSTALL .
The provided functions are shown
- in .
+ in />.
A single operator is provided, shown
- in .
+ in />.
specially marked sections. To build the program, the source code (*.pgc )
is first passed through the embedded SQL preprocessor, which converts it
to an ordinary C program (*.c ), and afterwards it can be processed by a C
- compiler. (For details about the compiling and linking see ).
+ compiler. (For details about the compiling and linking see />).
Converted ECPG applications call functions in the libpq library
through the embedded SQL library (ecpglib), and communicate with
the PostgreSQL server using the normal frontend-backend protocol.
row can also be executed using
EXEC SQL directly. To handle result sets with
multiple rows, an application has to use a cursor;
- see below. (As a special case, an
+ see /> below. (As a special case, an
application can fetch multiple rows at once into an array host
- variable; see .)
+ variable; see />.)
:something are
host variables , that is, they refer to
variables in the C program. They are explained in
- linkend="ecpg-variables">.
+ linkend="ecpg-variables"/>.
For more details about declaration of the cursor,
- see , and
- see for FETCH command
+ see />, and
+ see /> for FETCH command
details.
interface also supports autocommit of transactions (similar to
psql 's default behavior) via the
-t
command-line option to
ecpg (see
- linkend="app-ecpg">) or via the EXEC SQL SET AUTOCOMMIT TO
+ linkend="app-ecpg"/>) or via the EXEC SQL SET AUTOCOMMIT TO
ON statement. In autocommit mode, each command is
automatically committed unless it is inside an explicit transaction
block. This mode can be explicitly turned off using EXEC
For more details about PREPARE ,
- see . Also
- see for more details about using
+ see />. Also
+ see /> for more details about using
placeholders and input parameters.
Using Host Variables
- In you saw how you can execute SQL
+ In /> you saw how you can execute SQL
statements from an embedded SQL program. Some of those statements
only used fixed values and did not provide a way to insert
user-supplied values into statements or have the program process
Another way to exchange values between PostgreSQL backends and ECPG
applications is the use of SQL descriptors, described
- in .
+ in />.
directly. Other PostgreSQL data types, such
as timestamp and numeric can only be
accessed through special library functions; see
- .
+ />.
- shows which PostgreSQL
+ /> shows which PostgreSQL
data types correspond to which C data types. When you wish to
send or receive a value of a given PostgreSQL data type, you
should declare a C variable of the corresponding C data type in
|
decimal
-
decimal This type can only be accessed through special library functions; see .
+
decimal This type can only be accessed through special library functions; see />.
|
numeric
- numeric
+ numeric />
|
|
timestamp
- timestamp
+ timestamp />
|
interval
- interval
+ interval />
|
date
- date
+ date />
|
structure. Applications deal with these types by declaring host
variables in special types and accessing them using functions in
the pgtypes library. The pgtypes library, described in detail
- in contains basic functions to deal
+ in /> contains basic functions to deal
with those types, such that you do not need to send a query to
the SQL server just for adding an interval to a time stamp for
example.
The following subsections describe these special data types. For
more details about pgtypes library functions,
- see .
+ see />.
program has to include pgtypes_date.h , declare a host variable
as the date type and convert a DATE value into a text form using
PGTYPESdate_to_asc() function. For more details about the
- pgtypes library functions, see .
+ pgtypes library functions, see />.
allocating some memory space on the heap, and accessing the
variable using the pgtypes library functions. For more details
about the pgtypes library functions,
- see .
+ see />.
There are two use cases for arrays as host variables. The first
is a way to store some text string in char[]
or VARCHAR[] , as
- explained in . The second use case is to
+ explained in />. The second use case is to
retrieve multiple rows from a query result without using a
cursor. Without an array, to process a query result consisting
of multiple rows, it is required to use a cursor and
You can declare pointers to the most common types. Note however
that you cannot use pointers as target variables of queries
- without auto-allocation. See
+ without auto-allocation. See />
for more information on auto-allocation.
Another workaround is to store arrays in their external string
representation in host variables of type char[]
or VARCHAR[] . For more details about this
- representation, see . Note that
+ representation, see />. Note that
this means that the array cannot be accessed naturally as an
array in the host program (without further processing that parses
the text representation).
To enhance this example, the host variables to store values in
the FETCH command can be gathered into one
structure. For more details about the host variable in the
- structure form, see .
+ structure form, see />.
To switch to the structure, the example can be modified as below.
The two host variables, intval
and textval , become members of
Here is an example using the data type complex from
- the example in . The external string
+ the example in />. The external string
representation of that type is (%lf,%lf) ,
which is defined in the
functions complex_in()
and complex_out() functions
- in . The following example inserts the
+ in />. The following example inserts the
complex type values (1,1)
and (3,3) into the
columns a and b , and select
If a query is expected to return more than one result row, a
cursor should be used, as in the following example.
- (See for more details about the
+ (See /> for more details about the
cursor.)
EXEC SQL BEGIN DECLARE SECTION;
The numeric Type
The numeric type allows calculations with arbitrary precision. See
- for the equivalent type in the
+ /> for the equivalent type in the
PostgreSQL server. Because of the arbitrary precision this
variable needs to be able to expand and shrink dynamically. That's why you
can only create numeric variables on the heap, by means of the
The date Type
The date type in C enables your programs to deal with data of the SQL type
- date. See for the equivalent type in the
+ date. See /> for the equivalent type in the
currently no variable to change that within ECPG.
- shows the allowed input formats.
+ /> shows the allowed input formats.
Valid Input Formats for PGTYPESdate_from_asc
All other characters are copied 1:1 to the output string.
- indicates a few possible formats. This will give
+ /> indicates a few possible formats. This will give
you an idea of how to use this function. All output lines are based on
the same date: November 23, 1959.
day.
- indicates a few possible formats. This will give
+ /> indicates a few possible formats. This will give
you an idea of how to use this function.
The timestamp Type
The timestamp type in C enables your programs to deal with data of the SQL
- type timestamp. See for the equivalent
+ type timestamp. See /> for the equivalent
type in the
PostgreSQL server.
The function returns the parsed timestamp on success. On error,
PGTYPESInvalidTimestamp is returned and errno is
- set to PGTYPES_TS_BAD_TIMESTAMP . See for important notes on this value.
+ set to PGTYPES_TS_BAD_TIMESTAMP . See /> for important notes on this value.
In general, the input string can contain any combination of an allowed
specifiers are silently discarded.
- contains a few examples for input strings.
+ /> contains a few examples for input strings.
Valid Input Formats for PGTYPEStimestamp_from_asc
This is the reverse function to
- linkend="pgtypestimestampfmtasc">. See the documentation there in
+ linkend="pgtypestimestampfmtasc"/>. See the documentation there in
order to find out about the possible formatting mask entries.
The interval Type
The interval type in C enables your programs to deal with data of the SQL
- type interval. See for the equivalent
+ type interval. See /> for the equivalent
type in the
PostgreSQL server.
PGTYPESdecimal_free ).
There are a lot of other functions that deal with the decimal type in the
Informix compatibility mode described in
- linkend="ecpg-informix-compat">.
+ linkend="ecpg-informix-compat"/>.
The following functions can be used to work with the decimal type and are
so using DESCRIPTOR and SQL DESCRIPTOR
produced named SQL Descriptor Areas. Now it is mandatory; omitting
the SQL keyword produces SQLDA Descriptor Areas,
- see .
+ see />.
Note that the SQL keyword is omitted. The paragraphs about
the use cases of the INTO and USING
- keywords in also apply here with an addition.
+ keywords in /> also apply here with an addition.
In a DESCRIBE statement the DESCRIPTOR
keyword can be completely omitted if the INTO keyword is used:
Points to the data. The format of the data is described
- in .
+ in />.
The whole program is shown
- in .
+ in />.
SQLSTATE error codes; therefore a high degree
of consistency can be achieved by using this error code scheme
throughout all applications. For further information see
- .
+ />.
SQLSTATE is also listed. There is, however, no
one-to-one or one-to-many mapping between the two schemes (indeed
it is many-to-many), so you should consult the global
- SQLSTATE listing in
+ SQLSTATE listing in />
in each case.
The complete syntax of the ecpg command is
- detailed in .
+ detailed in />.
ECPGtransactionStatus(const char *connection_name )
returns the current transaction status of the given connection identified by connection_name .
- See and libpq's PQtransactionStatus() for details about the returned status codes.
+ See /> and libpq's PQtransactionStatus() for details about the returned status codes.
For more details about the ECPGget_PGconn() , see
- . For information about the large
- object function interface, see .
+ />. For information about the large
+ object function interface, see />.
- shows an example program that
+ /> shows an example program that
illustrates how to create, write, and read a large object in an
ECPG application.
A safe way to use the embedded SQL code in a C++ application is
hiding the ECPG calls in a C module, which the C++ application code
calls into to access the database, and linking that together with
- the rest of the C++ code. See
+ the rest of the C++ code. See />
about that.
This section describes all SQL commands that are specific to
embedded SQL. Also refer to the SQL commands listed
- in , which can also be used in
+ in />, which can also be used in
embedded SQL, unless stated otherwise.
See Also
-
-
-
+ />
+ />
+ />
See Also
-
-
+ />
+ />
See Also
-
-
-
+ />
+ />
+ />
query
- A or
- command which will provide the
+ A /> or
+ /> command which will provide the
rows to be returned by the cursor.
For the meaning of the cursor options,
- see .
+ see />.
See Also
-
-
-
+ />
+ />
+ />
See Also
-
-
+ />
+ />
See Also
-
-
+ />
+ />
A token identifying which item of information about a column
- to retrieve. See for
+ to retrieve. See /> for
a list of supported items.
See Also
-
-
+ />
+ />
See Also
-
-
+ />
+ />
PREPARE prepares a statement dynamically
specified as a string for execution. This is different from the
- direct SQL statement , which can also
- be used in embedded programs. The
+ direct SQL statement />, which can also
+ be used in embedded programs. The />
command is used to execute either kind of prepared statement.
See Also
-
+ />
See Also
-
-
+ />
+ />
A token identifying which item of information to set in the
- descriptor. See for a
+ descriptor. See /> for a
list of supported items.
See Also
-
-
+ />
+ />
Parameters
- See for a description of the
+ See /> for a description of the
parameters.
Informix-compatible SQLDA Descriptor Areas
Informix-compatible mode supports a different structure than the one described in
- . See below:
+ />. See below:
struct sqlvar_compat
{
that it sets to the current date.
- Internally this function uses the
+ Internally this function uses the />
function.
The function always returns 0 at the moment.
- Internally the function uses the
+ Internally the function uses the />
function.
Internally this function is implemented to use the
- linkend="pgtypesdatedefmtasc"> function. See the reference there for a
+ linkend="pgtypesdatedefmtasc"/> function. See the reference there for a
table of example input.
On success, 0 is returned; a negative value is returned if an error occurred.
- Internally this function uses the
+ Internally this function uses the />
function, see the reference there for examples.
Internally the function is implemented to use the function
- linkend="pgtypesdatemdyjul">.
+ linkend="pgtypesdatemdyjul"/>.
Internally the function is implemented to use the function
- linkend="pgtypesdatedayofweek">.
+ linkend="pgtypesdatedayofweek"/>.
Internally this function uses the
- linkend="pgtypestimestampfromasc"> function. See the reference there
+ linkend="pgtypestimestampfromasc"/> function. See the reference there
for a table with example inputs.
This function is implemented by means of the
- linkend="pgtypestimestampdefmtasc"> function. See the documentation
+ linkend="pgtypestimestampdefmtasc"/> function. See the documentation
there for a list of format specifiers that can be used.
Internally, this function uses the
- linkend="pgtypestimestampfmtasc"> function. See the reference there for
+ linkend="pgtypestimestampfmtasc"/> function. See the reference there for
information on what format mask specifiers can be used.
The function receives the type of the variable to test (t )
as well as a pointer to this variable (ptr ). Note that the
latter needs to be cast to a char*. See the function
- linkend="rsetnull"> for a list of possible variable types.
+ linkend="rsetnull"/> for a list of possible variable types.
Here is an example of how to use this function:
- lists all the error codes defined in
+ /> lists all the error codes defined in
PostgreSQL &version;. (Some are not actually
used at present, but are defined by the SQL standard.)
The error classes are also shown. For each error class there is a
-
-
-
+ />
+ />
+ />
|
- To supplement the trigger mechanism discussed in ,
+ To supplement the trigger mechanism discussed in />,
PostgreSQL also provides event triggers. Unlike regular
triggers, which are attached to a single table and capture only DML events,
event triggers are global to a particular database and are capable of
operations that took place, use the set-returning function
pg_event_trigger_ddl_commands() from the
ddl_command_end event trigger code (see
- ). Note that the trigger fires
+ />). Note that the trigger fires
after the actions have taken place (but before the transaction commits),
and thus the system catalogs can be read as already changed.
database objects. To list the objects that have been dropped, use the
set-returning function pg_event_trigger_dropped_objects() from the
sql_drop event trigger code (see
- ). Note that
+ />). Note that
the trigger is executed after the objects have been deleted from the
system catalogs, so it's not possible to look them up anymore.
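A corresponding sketch for the sql_drop event (again with invented names) reports each object removed by a DROP command; note that the function can only describe the objects from the set-returning function's output, since their catalog entries are already gone:

```sql
-- Hypothetical example: report objects removed by a DROP command.
-- pg_event_trigger_dropped_objects() is only callable from a
-- sql_drop event trigger function.
CREATE FUNCTION report_drops() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        RAISE NOTICE 'dropped % %', obj.object_type, obj.object_identity;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER report_drops ON sql_drop
    EXECUTE FUNCTION report_drops();
```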
For a complete list of commands supported by the event trigger mechanism,
- see .
+ see />.
- Event triggers are created using the command .
+ Event triggers are created using the command />.
In order to create an event trigger, you must first create a function with
the special return type event_trigger . This function
need not (and may not) return a value; the return type serves merely as
Event Trigger Firing Matrix
- lists all commands
+ /> lists all commands
for which event triggers are supported.
Describes the event for which the function is called, one of
"ddl_command_start", "ddl_command_end",
"sql_drop", or "table_rewrite".
- See for the meaning of these
+ See /> for the meaning of these
events.
The event trigger definition associated the function with
the ddl_command_start event. The effect is that all DDL
commands (with the exceptions mentioned
- in ) are prevented from running.
+ in />) are prevented from running.
- After you have compiled the source code (see ),
+ After you have compiled the source code (see />),
declare the function and the triggers:
CREATE FUNCTION noddl() RETURNS event_trigger
- functions (starting in )
+ functions (starting in />)
- aggregates (starting in )
+ aggregates (starting in />)
- data types (starting in )
+ data types (starting in />)
- operators (starting in )
+ operators (starting in />)
- operator classes for indexes (starting in )
+ operator classes for indexes (starting in />)
- packages of related objects (starting in )
+ packages of related objects (starting in />)
types through functions provided by the user and only understands
the behavior of such types to the extent that the user describes
them.
- The built-in base types are described in .
+ The built-in base types are described in />.
Enumerated (enum) types can be considered as a subcategory of base
types. The main difference is that they can be created using just
SQL commands, without any low-level programming.
- Refer to for more information.
+ Refer to /> for more information.
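For instance, creating and using an enum type takes only plain SQL (the type and table names below are illustrative):

```sql
-- An enum type is defined entirely in SQL; no C code is required.
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

CREATE TABLE person (name text, current_mood mood);
INSERT INTO person VALUES ('Moe', 'happy');

-- Enum values compare in their declaration order.
SELECT name FROM person WHERE current_mood > 'sad';
```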
type is automatically created for each base type, composite type, range
type, and domain type. But there are no arrays of arrays. So far as
the type system is concerned, multi-dimensional arrays are the same as
- one-dimensional arrays. Refer to for more
+ one-dimensional arrays. Refer to /> for more
information.
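The point that multi-dimensional arrays are not a distinct type can be seen in a short sketch (table name invented): a column declared as a one-dimensional integer array accepts a two-dimensional value.

```sql
-- integer[] is the single array type over integer; the declared
-- dimensionality is not enforced by the type system.
CREATE TABLE grid (cells integer[]);

INSERT INTO grid VALUES (ARRAY[[1,2],[3,4]]);  -- a 2-D value

SELECT cells[1][2] FROM grid;  -- subscripting the 2-D value works
```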
Composite types, or row types, are created whenever the user
creates a table. It is also possible to use
- <xref linkend="sql-createtype"> to
+ <xref linkend="sql-createtype"/> to