PostgreSQL> also supports a parameter to strip the realm from
the principal. This method is supported for backwards compatibility and is
strongly discouraged as it is then impossible to distinguish different users
- with the same username but coming from different realms. To enable this,
+ with the same user name but coming from different realms. To enable this,
set include_realm> to 0. For simple single-realm
installations, include_realm> combined with the
krb_realm> parameter (which checks that the realm provided
- matches exactly what is in the krb_realm parameter) would be a secure but
+ matches exactly what is in the krb_realm parameter) would be a secure but
less capable option compared to specifying an explicit mapping in
pg_ident.conf>.
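As an illustration of the two approaches (host, realm, and map names here are hypothetical), a single-realm setup using krb_realm and the recommended explicit mapping might look like:

```
# pg_hba.conf -- hypothetical single-realm setup: strip the realm but
# accept only principals from EXAMPLE.COM
host  all  all  0.0.0.0/0  gss  include_realm=0 krb_realm=EXAMPLE.COM

# recommended alternative: keep the realm and map principals explicitly
host  all  all  0.0.0.0/0  gss  include_realm=1 map=myrealm

# pg_ident.conf -- map alice@EXAMPLE.COM to database user alice
myrealm  /^(.*)@EXAMPLE\.COM$  \1
```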
If set to 0, the realm name from the authenticated user principal is
stripped off before being passed through the user name mapping
(). This is discouraged and is
- primairly available for backwards compatibility as it is not secure
- in multi-realm environments unless krb_realm is also used. Users
+ primarily available for backwards compatibility as it is not secure
+ in multi-realm environments unless krb_realm is also used. Users
are recommended to leave include_realm set to the default (1) and to
provide an explicit mapping in pg_ident.conf>.
unless include_realm has been set to 0, in which case
username (or username/hostbased)
- is what is seen as the system username when mapping.
+ is what is seen as the system user name when mapping.
If set to 0, the realm name from the authenticated user principal is
stripped off before being passed through the user name mapping
(). This is discouraged and is
- primairly available for backwards compatibility as it is not secure
- in multi-realm environments unless krb_realm is also used. Users
+ primarily available for backwards compatibility as it is not secure
+ in multi-realm environments unless krb_realm is also used. Users
are recommended to leave include_realm set to the default (1) and to
provide an explicit mapping in pg_ident.conf>.
unless include_realm has been set to 0, in which case
username (or username/hostbased)
- is what is seen as the system username when mapping.
+ is what is seen as the system user name when mapping.
this search, the server disconnects and re-binds to the directory as
this user, using the password specified by the client, to verify that the
login is correct. This mode is the same as that used by LDAP authentication
- schemes in other software, such as Apache mod_authnz_ldap and pam_ldap.
+ schemes in other software, such as Apache mod_authnz_ldap and pam_ldap.
This method allows for significantly more flexibility
in where the user objects are located in the directory, but will cause
two separate connections to the LDAP server to be made.
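A search+bind configuration could be sketched as follows (the server name, base DN, and search attribute are illustrative only):

```
# pg_hba.conf -- hypothetical search+bind LDAP configuration; the
# server first searches for uid=<username> under the base DN, then
# re-binds as the found entry using the client-supplied password
host  all  all  0.0.0.0/0  ldap  ldapserver=ldap.example.net ldapbasedn="dc=example,dc=net" ldapsearchattribute=uid
```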
The default is 1 on supported systems, otherwise 0. This value can
- be overriden for tables in a particular tablespace by setting the
+ be overridden for tables in a particular tablespace by setting the
tablespace parameter of the same name (see
).
In logical> level, the same information is logged as
with hot_standby>, plus information needed to allow
- extracting logical changesets from the WAL. Using a level of
+ extracting logical change sets from the WAL. Using a level of
logical> will increase the WAL volume, particularly if many
tables are configured for REPLICA IDENTITY FULL and
many UPDATE> and DELETE> statements are
listed in the Open Group's
url="http://pubs.opengroup.org/onlinepubs/009695399/functions/strftime.html">strftime
specification.
- Note that the system's <systemitem>strftime> is not used
+ Note that the system's <function>strftime> is not used
directly, so platform-specific (nonstandard) extensions do not work.
The default is postgresql-%Y-%m-%d_%H%M%S.log.
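To see how the default pattern expands, here is a sketch using Python's strftime, which handles these standard POSIX conversion specifiers the same way:

```python
from datetime import datetime

# Expand the default log_filename pattern for a fixed instant.
ts = datetime(2015, 5, 17, 13, 4, 5)
name = ts.strftime("postgresql-%Y-%m-%d_%H%M%S.log")
print(name)  # postgresql-2015-05-17_130405.log
```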
path> must be initialized as for any other path, including
the row-count estimate, start and total cost, and sort ordering provided
- by this path. flags> is a bitmask, which should include
+ by this path. flags> is a bit mask, which should include
CUSTOMPATH_SUPPORT_BACKWARD_SCAN> if the custom path can support
a backward scan and CUSTOMPATH_SUPPORT_MARK_RESTORE> if it
can support mark and restore. Both capabilities are optional.
scan> must be initialized as for any other scan, including
estimated costs, target lists, qualifications, and so on.
- flags> is a bitmask with the same meaning as in
+ flags> is a bit mask with the same meaning as in
CustomPath>.
custom_plans> can be used to store child
Plan> nodes.
that is only used by the custom scan provider itself.
custom_scan_tlist> can be NIL when scanning a base
relation, indicating that the custom scan returns scan tuples that match
- the base relation's rowtype. Otherwise it is a targetlist describing
+ the base relation's row type. Otherwise it is a target list describing
the actual scan tuples. custom_scan_tlist> must be
provided for joins, and could be provided for scans if the custom scan
provider can compute some non-Var expressions.
custom_relids> is set by the core code to the set of
- relations (rangetable indexes) that this scan node handles; except when
+ relations (range table indexes) that this scan node handles; except when
this scan is replacing a join, it will have only one member.
methods> must point to a (usually statically allocated)
object implementing the required custom scan methods, which are further
- ss> is initialized as for any other scanstate,
+ ss> is initialized as for any other scan state,
except that if the scan is for a join rather than a base relation,
ss.ss_currentRelation> is left NULL.
- flags> is a bitmask with the same meaning as in
+ flags> is a bit mask with the same meaning as in
CustomPath> and CustomScan>.
methods> must point to a (usually statically allocated)
object implementing the required custom scan state methods, which are
- Adding a unique constraint will automatically create a unique btree
+ Adding a unique constraint will automatically create a unique B-tree
index on the column or group of columns used in the constraint.
A uniqueness constraint on only some rows can be enforced by creating
a partial index.
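For example (table and column names are hypothetical), uniqueness can be enforced only among rows marked as current:

```sql
-- Hypothetical sketch: at most one current address per person;
-- rows with is_current = false are not constrained
CREATE UNIQUE INDEX addr_one_current
    ON addresses (person_id)
    WHERE is_current;
```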
- Adding a primary key will automatically create a unique btree index
+ Adding a primary key will automatically create a unique B-tree index
on the column or group of columns used in the primary key.
To specify which rows are visible and what rows can be added to the
table with row level security, an expression is required which returns
- a boolean result. This expression will be evaluated for each row prior
+ a Boolean result. This expression will be evaluated for each row prior
to other conditionals or functions which are part of the query. The
one exception to this rule is leakproof functions,
which are guaranteed to not leak information. Two expressions may be
Below is a larger example of how this feature can be used in
- production environments, based on a unix password file.
+ production environments, based on a Unix password file.
The rows returned must match the fdw_scan_tlist> target
- list if one was supplied, otherwise they must match the rowtype of the
+ list if one was supplied, otherwise they must match the row type of the
foreign table being scanned. If you choose to optimize away fetching
columns that are not needed, you should insert nulls in those column
positions, or else generate a fdw_scan_tlist> list with
remote join cannot be found from the system catalogs, the FDW must
fill fdw_scan_tlist> with an appropriate list
of TargetEntry> nodes, representing the set of columns
- it will supply at runtime in the tuples it returns.
+ it will supply at run time in the tuples it returns.
for the row to be re-fetched. Although the rowid> value is
passed as a Datum>, it can currently only be a tid>. The
function API is chosen in hopes that it may be possible to allow other
- datatypes for row IDs in future.
+ data types for row IDs in the future.
is fdw_scan_tlist>, which describes the tuples returned by
the FDW for this plan node. For simple foreign table scans this can be
set to NIL>, implying that the returned tuples have the
- rowtype declared for the foreign table. A non-NIL value must be a
- targetlist (list of TargetEntry>s) containing Vars and/or
+ row type declared for the foreign table. A non-NIL value must be a
+ target list (list of TargetEntry>s) containing Vars and/or
expressions representing the returned columns. This might be used, for
example, to show that the FDW has omitted some columns that it noticed
won't be needed for the query. Also, if the FDW can compute expressions
|
||
jsonb
- Concatentate two jsonb values into a new jsonb value
+ Concatenate two jsonb values into a new jsonb value
'["a", "b"]'::jsonb || '["c", "d"]'::jsonb
|
If the argument to json_strip_nulls> contains duplicate
field names in any object, the result could be semantically somewhat
different, depending on the order in which they occur. This is not an
- issue for jsonb_strip_nulls> since jsonb values never have
+ issue for jsonb_strip_nulls> since jsonb values never have
duplicate object field names.
integer
- Integer bitmask indicating which arguments are not being included in the current
+ Integer bit mask indicating which arguments are not being included in the current
grouping set
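As a sketch (the table is hypothetical), each bit is 1 when the corresponding argument is not part of the current grouping set, with the leftmost argument as the most significant bit:

```sql
-- GROUPING(brand, size) is 0 when grouped by both, 1 (binary 01) for
-- rows where "size" is not grouped, 2 (binary 10) where "brand" is
-- not grouped, and 3 (binary 11) for the grand-total row
SELECT brand, size, GROUPING(brand, size), sum(sales)
FROM items_sold
GROUP BY GROUPING SETS ((brand), (size), ());
```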
boolean
Cancel a backend's current query. This is also allowed if the
- calling role is a member of the role whose backend is being cancelled,
+ calling role is a member of the role whose backend is being canceled,
however, only superusers can cancel superuser backends.
pg_event_trigger_table_rewrite_oid()
Oid
- The Oid of the table about to be rewritten.
+ The OID of the table about to be rewritten.
|
BRIN can support many different indexing strategies,
and the particular operators with which a BRIN index can be used
vary depending on the indexing strategy.
- For datatypes that have a linear sort order, the indexed data
+ For data types that have a linear sort order, the indexed data
corresponds to the minimum and maximum values of the
values in the column for each block range,
which support indexed queries using these operators:
- If any parameter is NULL or an emptry string, the corresponding
+ If any parameter is NULL or an empty string, the corresponding
environment variable (see ) is checked.
If the environment variable is not set either, then the indicated
built-in defaults are used.
- This function is equivalent to PQsslStruct(conn, "OpenSSL"). It should
+ This function is equivalent to PQsslStruct(conn, "OpenSSL"). It should
not be used in new applications, because the returned struct is
specific to OpenSSL and will not be available if another SSL
implementation is used. To check if a connection uses SSL, call
The optional filter_by_origin_cb callback
- is called to determine wheter data that has been replayed
+ is called to determine whether data that has been replayed
from
origin_id is of interest to the
output plugin.
for transactions and changes that have been filtered away.
- This is useful when implementing cascading or multi directional
+ This is useful when implementing cascading or multidirectional
replication solutions. Filtering by the origin makes it possible
to avoid replicating the same changes back and forth in such
setups. While transactions and changes also carry information
. Whole-table
vacuum scans will also occur progressively for all tables, starting with
those that have the oldest multixact-age, if the amount of used member
- storage space exceeds the amount 50% of the addressible storage space.
+ storage space exceeds 50% of the addressable storage space.
Both of these kinds of whole-table scans will occur even if autovacuum is
nominally disabled.
and . When an index is used to enforce
uniqueness or other constraints, it might
be necessary to swap the existing constraint with one enforced by
- the new index. Review this alternate multi-step rebuild approach
+ the new index. Review this alternate multistep rebuild approach
carefully before using it as there are limitations on which
indexes can be reindexed this way, and errors must be handled.
database cluster or backed up individually. Similarly, if you lose
a tablespace (file deletion, disk failure, etc), the database cluster
might become unreadable or unable to start. Placing a tablespace
- on a temporary file system like a ramdisk risks the reliability of
+ on a temporary file system like a RAM disk risks the reliability of
the entire cluster.
ASSERT condition , message ;
- The condition is a boolean
- expression that is expected to always evaluate to TRUE; if it does,
+ The condition is a Boolean
+ expression that is expected to always evaluate to true; if it does,
the ASSERT statement does nothing further. If the
- result is FALSE or NULL, then an ASSERT_FAILURE> exception
+ result is false or null, then an ASSERT_FAILURE> exception
is raised. (If an error occurs while evaluating
the condition, it is
reported as a normal error.)
Testing of assertions can be enabled or disabled via the configuration
- parameter plpgsql.check_asserts>, which takes a boolean
+ parameter plpgsql.check_asserts>, which takes a Boolean
value; the default is on>. If this parameter
is off> then ASSERT> statements do nothing.
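A minimal sketch of ASSERT in a PL/pgSQL function body (the function itself is hypothetical):

```sql
CREATE FUNCTION check_positive(n integer) RETURNS integer AS $$
BEGIN
    -- Raises ASSERT_FAILURE with this message when n <= 0,
    -- unless plpgsql.check_asserts is set to off
    ASSERT n > 0, 'expected a positive argument';
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```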
The frontend must now send a PasswordMessage containing the
- password (with username) encrypted via MD5, then encrypted
+ password (with user name) encrypted via MD5, then encrypted
again using the 4-byte random salt specified in the
AuthenticationMD5Password message. If this is the correct
password, the server responds with an AuthenticationOk,
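The computation can be sketched as follows (the password, user name, and salt here are made up; in practice the 4-byte salt comes from the AuthenticationMD5Password message):

```python
import hashlib

def md5_password_response(password: str, user: str, salt: bytes) -> str:
    # Inner hash: md5(password || username), as lowercase hex.
    inner = hashlib.md5(password.encode() + user.encode()).hexdigest()
    # Outer hash: md5(inner_hex || salt), prefixed with "md5".
    outer = hashlib.md5(inner.encode() + salt).hexdigest()
    return "md5" + outer

resp = md5_password_response("secret", "alice", b"\x01\x02\x03\x04")
print(resp)  # a 35-character string beginning with "md5"
```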
References to the grouping columns or expressions are replaced
- by NULL> values in result rows for grouping sets in which those
+ by null values in result rows for grouping sets in which those
columns do not appear. To distinguish which grouping a particular output
row resulted from, see .
The individual elements of a CUBE> or ROLLUP>
- clause may be either individual expressions, or sub-lists of elements in
- parentheses. In the latter case, the sub-lists are treated as single
+ clause may be either individual expressions, or sublists of elements in
+ parentheses. In the latter case, the sublists are treated as single
units for the purposes of generating the individual grouping sets.
For example:
functions with side-effects.
However, the other side of this coin is that the optimizer is less able to
push restrictions from the parent query down into a WITH> query
- than an ordinary sub-query. The WITH> query will generally be
+ than an ordinary subquery. The WITH> query will generally be
evaluated as written, without suppression of rows that the parent query
might discard afterwards. (But, as mentioned above, evaluation might stop
early if the reference(s) to the query demand only a limited number of
It is possible that the replication delay between servers exceeds the
value of this parameter, in which case no delay is added.
- Note that the delay is calculated between the WAL timestamp as written
+ Note that the delay is calculated between the WAL time stamp as written
on the master and the current time on the standby. Delays in transfer
because of network lag or cascading replication configurations
may reduce the actual wait time significantly. If the system
istemplate
- If true, then this database can be cloned by any user with CREATEDB
+ If true, then this database can be cloned by any user with CREATEDB
privileges; if false, then only superusers or the owner of the
database can clone it.
-
+
allowconn
-
+
connlimit
- To change an integer column containing UNIX timestamps to timestamp
+ To change an integer column containing Unix timestamps to timestamp
with time zone via a USING clause:
ALTER TABLE foo
When creating a comment on a constraint on a table or a domain, these
- parameteres specify the name of the table or domain on which the
+ parameters specify the name of the table or domain on which the
constraint is defined.
istemplate
- If true, then this database can be cloned by any user with CREATEDB
+ If true, then this database can be cloned by any user with CREATEDB
privileges; if false (the default), then only superusers or the owner
of the database can clone it.
-
+
allowconn
-
+
connlimit
It is very difficult to avoid such problems, because of SQL's general
- assumption that NULL is a valid value of every datatype. Best practice
- therefore is to design a domain's constraints so that NULL is allowed,
+ assumption that a null value is a valid value of every data type. Best practice
+ therefore is to design a domain's constraints so that a null value is allowed,
and then to apply column NOT NULL> constraints to columns of
the domain type as needed, rather than directly to the domain type.
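This practice can be sketched as follows (domain and table names are hypothetical); note that a CHECK comparison against a null value yields null, which does not count as a violation, so nulls remain legal in the domain itself:

```sql
-- Hypothetical sketch: the CHECK passes for null inputs, so null
-- stays a valid value of the domain; non-nullness is enforced per
-- column instead
CREATE DOMAIN positive_int AS integer CHECK (VALUE > 0);
CREATE TABLE accounts (
    balance positive_int NOT NULL
);
```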
prevent the inadvertent exposure of data. Functions and operators
marked as leakproof are assumed to be trustworthy, and may be executed
before conditions from security policies and security barrier views.
- In addtion, functions which do not take arguments or which are not
+ In addition, functions which do not take arguments or which are not
passed any arguments from the security barrier view or table do not have
to be marked as leakproof to be executed before security conditions. See
and .
class="PARAMETER">column_name_index or
expression_index use a
particular collation in order to be matched in the inference clause.
- Typically this is omitted, as collations usually do not affect wether or
+ Typically this is omitted, as collations usually do not affect whether or
not a constraint violation occurs. Follows CREATE
INDEX format.
DO NOTHING. Example assumes a unique index has been
defined that constrains values appearing in the
did column on a subset of rows where the
- is_active boolean column evaluates to
+ is_active Boolean column evaluates to
true:
-- This statement could infer a partial unique index on "did"
In plain format, tablespaces are by default backed up to the same
path they have on the server, unless the
- option <replaceable>--tablespace-mapping> is used. Without
+ option <literal>--tablespace-mapping> is used. Without
this option, running a plain format base backup on the same host as the
server will not work if tablespaces are in use, because the backup would
have to be written to the same directory locations as the original
When tar format mode is used, it is the user's responsibility to unpack each
- tar file before starting postgres. If there are additional tablespaces, the
+ tar file before starting the PostgreSQL server. If there are additional tablespaces, the
tar files for them need to be unpacked in the correct locations. In this
- case the symbolic links for those tablespaces will be created by Postgres
+ case the symbolic links for those tablespaces will be created by the server
according to the contents of the tablespace_map> file that is
included in the base.tar> file.
pg_basebackup works with servers of the same
- or an older major version, down to 9.1. However, WAL streaming mode (-X
- stream) only works with server version 9.3 and later, and tar format mode
- (--format=tar) of the current version only works with server version 9.5
+ or an older major version, down to 9.1. However, WAL streaming mode (-X
+ stream) only works with server version 9.3 and later, and tar format mode
+ (--format=tar) of the current version only works with server version 9.5
or later.
- Use the specifed synchronized snapshot when making a dump of the
+ Use the specified synchronized snapshot when making a dump of the
database (see
for more
details).
Write received and decoded transaction data into this
- file. Use -> for stdout.
+ file. Use -> for stdout.
The following command-line options control the database connection parameters.
-
+
The database to connect to. See the description of the actions for
- what this means in detail. This can be a libpq connection string;
+ what this means in detail. This can be a libpq connection string;
see for more information. Defaults
to user name.
- Username to connect as. Defaults to current operating system user
+ User name to connect as. Defaults to current operating system user
name.
- Copy all other files like clog, conf files etc. from the new cluster
- to old cluster. Everything except the relation files.
+ Copy all other files such as clog and configuration files from the new cluster
+ to the old cluster, everything except the relation files.
- Only display records marked with the given TransactionId.
+ Only display records marked with the given transaction ID.
pg_xlogdump> cannot read WAL files with suffix
.partial>. If those files need to be read, .partial>
- suffix needs to be removed from the filename.
+ suffix needs to be removed from the file name.
- Typical output from pgbench looks like:
+ Typical output from pgbench looks like:
transaction type: TPC-B (sort of)
Vacuum all four standard tables before running the test.
- With neither
+ With neither
pgbench_tellers> and pgbench_branches>
tables, and will truncate pgbench_history>.
Notes
- What is the Transaction> Actually Performed in pgbench?
+ What is the Transaction> Actually Performed in pgbench?
The default transaction script issues seven commands per transaction:
pgbench> writes the time taken by each transaction
to a log file. The log file will be named
pgbench_log.nnn>, where
- nnn> is the PID of the pgbench process.
+ nnn> is the PID of the pgbench process.
If the
threads, each will have its own log file. The first worker will use the
same name for its log file as in the standard single worker case.
file_no> identifies which script file was used
(useful when multiple scripts were specified with
and time_epoch>/time_us> are a
- UNIX epoch format timestamp and an offset
+ Unix epoch format time stamp and an offset
in microseconds (suitable for creating an ISO 8601
- timestamp with fractional seconds) showing when
+ time stamp with fractional seconds) showing when
the transaction completed.
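For instance, a time_epoch/time_us pair can be combined into an ISO 8601 time stamp like this (a sketch in Python; the field values are made up):

```python
from datetime import datetime, timezone

time_epoch, time_us = 1420070400, 250000  # hypothetical log fields
ts = datetime.fromtimestamp(time_epoch + time_us / 1e6, tz=timezone.utc)
print(ts.isoformat())  # 2015-01-01T00:00:00.250000+00:00
```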
Field schedule_lag> is the difference between the
transaction's scheduled start time, and the time it actually started, in
interval_start> num_of_transactions> latency_sum> latency_2_sum> min_latency> max_latency> lag_sum> lag_2_sum> min_lag> max_lag> skipped_transactions>
- where interval_start> is the start of the interval (UNIX epoch
- format timestamp), num_of_transactions> is the number of transactions
+ where interval_start> is the start of the interval (Unix epoch
+ format time stamp), num_of_transactions> is the number of transactions
within the interval, latency_sum is a sum of latencies
(so you can compute average latency easily). The following two fields are useful
for variance estimation - latency_sum> is a sum of latencies and
pg_test_fsync
- determine fastest wal_sync_method for PostgreSQL
+ determine fastest wal_sync_method for PostgreSQL
exit and you will have to revert to the old cluster as outlined in
below. To try pg_upgrade again, you will need to modify the old
cluster so the pg_upgrade schema restore succeeds. If the problem is a
- contrib module, you might need to uninstall the contrib module from
+ contrib module, you might need to uninstall the contrib module from
the old cluster and install it in the new cluster after the upgrade,
assuming the module is not being used to store user data.
Show help about
psql and exit. The optional
topic> parameter (defaulting
- to options) selects which part of psql is
+ to options) selects which part of psql is
explained:
commands> describes psql>'s
- backslash commands; options> describes the commandline
- switches that can be passed to psql>;
- and variables> shows help about about psql configuration
+ backslash commands; options> describes the command-line
+ options that can be passed to psql>;
+ and variables> shows help about psql configuration
variables.
unicode_border_style
- Sets the border drawing style for the unicode linestyle to one
+ Sets the border drawing style for the unicode line style to one
of single or double.
unicode_column_style
- Sets the column drawing style for the unicode linestyle to one
+ Sets the column drawing style for the unicode line style to one
of single or double.
unicode_header_style
- Sets the header drawing style for the unicode linestyle to one
+ Sets the header drawing style for the unicode line style to one
of single or double.
Shows help information. The optional
topic> parameter
- (defaulting to commands>) selects which part of psql is
+ (defaulting to commands>) selects which part of psql is
explained:
commands> describes psql>'s
- backslash commands; options> describes the commandline
- switches that can be passed to psql>;
- and variables> shows help about about psql configuration
+ backslash commands; options> describes the command-line
+ options that can be passed to psql>;
+ and variables> shows help about psql configuration
variables.
- This change causes conversions of booleans to strings to
+ This change causes conversions of Booleans to strings to
produce true> or false>, not t>
or f>. Other type conversions may succeed in more cases
than before; for example, assigning a numeric value 3.9> to
2015-04-27 [dcbf594] Stephe..: Improve qual pushdown for RLS and SB views
-->
- Allow non-LEAKPROOF functions to be passed into security barrier
+ Allow non-leakproof functions to be passed into security barrier
views if the function does not reference any table columns
(Dean Rasheed)
-->
Allow recording of transaction
- commit timestamps when configuration parameter
+ commit time stamps when configuration parameter
linkend="guc-track-commit-timestamp">
is enabled (Álvaro Herrera, Petr Jelínek)
- Timestamp information can be accessed using functions
+ Time stamp information can be accessed using functions
linkend="functions-commit-timestamp">pg_xact_commit_timestamp()>>
and pg_last_committed_xact()>.
2014-08-27 [8167a38] Jeff D..: Allow multibyte characters as escape in SIMILA..
-->
- Allow multi-byte characters as escape in
+ Allow multibyte characters as escape in
linkend="functions-similarto-regexp">SIMILAR TO>>
and SUBSTRING>>
(Jeff Davis)
Previously only :=> could be used. This requires removing
the possibility for =>> to be a user-defined operator.
Creation of user-defined =>> operators has been issuing
- warnings since Postgres 9.0.
+ warnings since PostgreSQL 9.0.
-->
Add
POSIX>-compliant rounding for platforms that use
- Postgres-supplied rounding functions (Pedro Gimeno Fortea)
+ PostgreSQL-supplied rounding functions (Pedro Gimeno Fortea)
Add
linkend="monitoring-stats-funcs-table">pg_stat_get_snapshot_timestamp()>>
- to output the timestamp of the statistics snapshot (Matt Kelly)
+ to output the time stamp of the statistics snapshot (Matt Kelly)
Add
psql>
linkend="APP-PSQL-variables">PROMPT>> variables option
- (%l>) to display the multi-line statement line number
+ (%l>) to display the multiline statement line number
(Sawada Masahiko)
2015-05-19 [0b28ea7] Tom Lane: Avoid collation dependence in indexes of syste..
-->
- Change index opclass for columns
+ Change index operator class for columns
linkend="catalog-pg-seclabel">pg_seclabel>>.provider>
and
linkend="catalog-pg-shseclabel">pg_shseclabel>>.provider>
2014-12-08 [8001fe6] Simon ..: Windows: use GetSystemTimePreciseAsFileTime if ..
-->
- Allow higher-precision timestamp resolution on
+ Allow higher-precision time stamp resolution on
class="osname">Windows 8> or Windows
Server 2012> and later Windows systems (Craig Ringer)
2014-06-30 [1b24887] Tom Lane: Allow multi-character source strings in contrib..
-->
- Allow multi-character source strings in
+ Allow multicharacter source strings in
linkend="unaccent">
unaccent>> (Tom Lane)
replay progress in a safe manner. When the applying process, or the whole
cluster, dies, it needs to be possible to find out up to where data has
successfully been replicated. Naive solutions to this like updating a row in
- a table for every replayed transaction have problems like runtime overhead
+ a table for every replayed transaction have problems like run-time overhead
and bloat.
marked as replaying from a remote node (using the
pg_replication_origin_session_setup()
function). Additionally the
LSN and commit
- timestamp of every source transaction can be configured on a per
+ time stamp of every source transaction can be configured on a per
transaction basis using
pg_replication_origin_xact_setup().
If that is done, replication progress will persist in a crash-safe