As explained above, the environment of the operating system provides the
defaults for the locales of a newly initialized database cluster. In
- many cases, this is enough: If the operating system is configured for
- the desired language/territory, then
-
PostgreSQL will
by default also behave
- according to that locale.
+ many cases, this is enough: if the operating system is configured for
+ the desired language/territory, by default
+
PostgreSQL will
also behave according
+ to that locale.
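
For example (a minimal sketch; the data directory path is illustrative and sv_SE is just one possible locale), the locale can instead be chosen explicitly when the cluster is created rather than inherited from the environment:

initdb --locale=sv_SE -D /usr/local/pgsql/data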
The timeout is measured from the time a command arrives at the
server until it is completed by the server. If multiple SQL
- statements appear in a single simple-Query message, the timeout
+ statements appear in a single simple-query message, the timeout
is applied to each statement separately.
(
PostgreSQL versions before 13 usually
treated the timeout as applying to the whole query string.)
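
As a sketch of the difference, suppose statement_timeout has been set to two seconds and the line containing two statements below is then sent as a single simple-query message (psql sends it that way when both statements appear on one input line):

SET statement_timeout = '2s';
SELECT pg_sleep(1.5); SELECT pg_sleep(1.5);

On version 13 and later each SELECT gets its own two-second allowance and completes; releases before 13 would usually have measured the whole string, roughly three seconds, and cancelled it.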
If you need to re-create a standby server while transactions are
- waiting, make sure that the commands pg_backup_start() and
- pg_backup_stop() are run in a session with
+ waiting, make sure that the functions pg_backup_start()
+ and pg_backup_stop() are run in a session with
synchronous_commit = off, otherwise those
requests will wait forever for the standby to appear.
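
A sketch of the backup session under that setting (the label is illustrative; pg_backup_start()/pg_backup_stop() assume PostgreSQL 15 or later):

SET synchronous_commit = off;
SELECT pg_backup_start('rebuild standby');
-- copy the cluster's data directory at this point
SELECT * FROM pg_backup_stop();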
The cumulative statistics system is active during recovery. All scans,
reads, blocks, index usage, etc., will be recorded normally on the
standby. However, WAL replay will not increment relation and database
- specific counters. I.e. replay will not increment pg_stat_all_tables
- columns (like n_tup_ins), nor will reads or writes performed by the
- startup process be tracked in the pg_statio views, nor will associated
- pg_stat_database columns be incremented.
+ specific counters. I.e. replay will not increment
+ pg_stat_all_tables columns (like n_tup_ins),
+ nor will reads or writes performed by the startup process be tracked in the
+ pg_statio_ views, nor will associated
+ pg_stat_database columns be incremented.
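
As an illustration, a query like the following on the standby (the table name is hypothetical) will show seq_scan advancing as the standby is read, while n_tup_ins stays unchanged even though replay is inserting rows:

SELECT seq_scan, n_tup_ins
FROM pg_stat_all_tables
WHERE relname = 'mytable';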
Index expressions are relatively expensive to maintain, because the
derived expression(s) must be computed for each row insertion
- and non-HOT update. However, the index expressions are
+ and non-HOT update. However, the index expressions are
not recomputed during an indexed search, since they are
already stored in the index. In both examples above, the system
sees the query as just WHERE indexedcolumn = 'constant'
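
For instance, a sketch of such an expression index (table and column names are illustrative):

CREATE INDEX people_lower_email_idx ON people (lower(email));
-- The search below is seen as WHERE indexedcolumn = 'constant';
-- the stored lower(email) values are not recomputed during the scan.
SELECT * FROM people WHERE lower(email) = 'alice@example.com';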
- The following examples shows how logical decoding is controlled over the
+ The following examples show how logical decoding is controlled over the
streaming replication protocol, using the
program included in the PostgreSQL
distribution. This requires that client authentication is set up to allow
unnecessary and can be avoided by setting
stats_fetch_consistency to none.
- You can invoke pg_stat_clear_snapshot() to discard the
+ You can invoke pg_stat_clear_snapshot() to discard the
current transaction's statistics snapshot or cached values (if any). The
next use of statistical information will (when in snapshot mode) cause a
new snapshot to be built or (when in cache mode) accessed statistics to be
An expression to assign to the column. If used in a
WHEN MATCHED clause, the expression can use values
from the original row in the target table, and values from the
- <literal>data_source</literal> row.
+ <replaceable>data_source</replaceable> row.
If used in a WHEN NOT MATCHED clause, the
- expression can use values from the <literal>data_source</literal>.
+ expression can use values from the <replaceable>data_source</replaceable>.
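
As a sketch (all table and column names are illustrative), the UPDATE expression in a WHEN MATCHED clause can combine a value from the original target row with one from the data_source, while the WHEN NOT MATCHED expressions draw on the data_source alone:

MERGE INTO customer_account ca
USING recent_transactions t ON t.customer_id = ca.customer_id
WHEN MATCHED THEN
  UPDATE SET balance = ca.balance + t.transaction_value
WHEN NOT MATCHED THEN
  INSERT (customer_id, balance) VALUES (t.customer_id, t.transaction_value);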
A compression detail string can optionally be specified.
If the detail string is an integer, it specifies the compression
level. Otherwise, it should be a comma-separated list of items,
- each of the form keyword or
- keyword=value.
+ each of the form
+ keyword or
+ keyword=value.
Currently, the supported keywords are level,
long, and workers.
The detail string cannot be used when the compression method
- To create a backup of a local server with one tar file for each tablespace
+ To create a backup of the local server with one tar file for each tablespace
compressed with
gzip at level 9, stored in the
directory backup:
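
The command itself is not shown in this fragment; one plausible invocation matching that description, using the gzip:9 detail string discussed above, would be:

pg_basebackup -D backup -Ft --compress=gzip:9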
A compression detail string can optionally be specified. If the
detail string is an integer, it specifies the compression level.
Otherwise, it should be a comma-separated list of items, each of the
- form keyword or keyword=value.
+ form keyword or
+ keyword=value.
Currently, the only supported keyword is level.
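
Purely to illustrate the syntax (this fragment does not identify which tool's --compress option it documents, and gzip is only an assumed example method), such a detail string might look like:

--compress=gzip:level=5

which is equivalent to the integer form --compress=gzip:5.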
Enables decoding of prepared transactions. This option may only be specified with
-
+ .
Do not output commands to select table access methods.
With this option, all objects will be created with whichever
- access method is the default during restore.
+ table access method is the default during restore.
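
Assuming this fragment describes the --no-table-access-method switch of pg_dump (and pg_restore), a minimal sketch:

pg_dump --no-table-access-method mydb > mydb.sql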
- If provided, only display records that modify blocks in the given fork.
+ Only display records that modify blocks in the given fork.
The valid values are main for the main fork,
fsm for the free space map,
vm for the visibility map,
names, and exit.
- Extensions may define custom resource managers, but pg_waldump does
+ Extensions may define custom resource managers, but pg_waldump does
not load the extension module and therefore does not recognize custom
resource managers by name. Instead, you can specify the custom
resource managers as custom### where
- "###" is the three-digit resource manager ID. Names
- of this form will always be considered valid.
+ ### is the three-digit resource manager ID.
+ Names of this form will always be considered valid.
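
A sketch of such invocations (the WAL segment file name is hypothetical, and custom128 assumes a custom resource manager registered with ID 128):

pg_waldump --fork=vm 000000010000000000000001
pg_waldump --rmgr=custom128 000000010000000000000001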
data is generated in pgbench client and then
sent to the server. This uses the client/server bandwidth
extensively through a COPY.
- pgbench uses the FREEZE option with version 14 or later
+ pgbench uses the FREEZE option
+ with version 14 or later
of
PostgreSQL to speed up
subsequent VACUUM, except on the
pgbench_accounts table if partitions are
each SQL command on a single line ending with a semicolon.
- It is assumed that pgbench scripts do not contain incomplete blocks of SQL
+ It is assumed that
pgbench scripts do not contain
+ incomplete blocks of SQL
transactions. If at runtime the client reaches the end of the script without
completing the last transaction block, it will be aborted.
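
A sketch of a script that satisfies these rules, with each SQL command on a single line and only complete transaction blocks (the table and the :scale-based range follow the built-in pgbench schema):

\set aid random(1, 100000 * :scale)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
END;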
- Here is some example output generated with these options:
+ Here is some example output generated with this option:
pgbench --aggregate-interval=10 --time=20 --client=10 --log --rate=1000 --latency-limit=10 --failures-detailed --max-tries=10 test