reformat_dat_file.pl can be adapted to perform
many kinds of bulk changes. Look for its block comments showing where
one-off code can be inserted. In the following example, we are going
- to consolidate two boolean fields in pg_proc
+ to consolidate two Boolean fields in pg_proc
into a char field:
vartype = RECORD,
and wholerowN
for a whole-row Var with
- vartype equal to the table's declared rowtype.
+ vartype equal to the table's declared row type.
Re-use these names when you can (the planner will combine duplicate
requests for identical junk columns). If you need another kind of
junk column besides these, it might be wise to choose a name prefixed
sending a SIGHUP signal to the postmaster
process, which in turn sends SIGHUP to each
of its children.) You can use the
- pg_file_settings and
- pg_hba_file_rules views
+ pg_file_settings and
+ pg_hba_file_rules views
to check the configuration files for possible errors, before reloading.
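As a sketch of the check-before-reload workflow described above (the views and `pg_reload_conf()` are standard PostgreSQL catalogs/functions; the `WHERE` filters shown are one plausible way to surface only problematic lines):

```sql
-- Settings in postgresql.conf that failed to parse or apply
SELECT sourcefile, sourceline, name, error
FROM pg_file_settings
WHERE error IS NOT NULL;

-- Lines in pg_hba.conf with syntax or validation problems
SELECT line_number, error
FROM pg_hba_file_rules
WHERE error IS NOT NULL;

-- If both queries return no rows, it is safe to reload:
SELECT pg_reload_conf();
```

Running the two checks first avoids reloading a configuration that would be partially rejected.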
Depending on which operators you have included in the class, the data
type of query could vary with the operator, since it will
- be whatever type is on the righthand side of the operator, which might
- be different from the indexed data type appearing on the lefthand side.
+ be whatever type is on the right-hand side of the operator, which might
+ be different from the indexed data type appearing on the left-hand side.
(The above code skeleton assumes that only one type is possible; if
not, fetching the query argument value would have to depend
on the operator.) It is recommended that the SQL declaration of
- The indexUnchanged boolean value gives a hint
+ The indexUnchanged Boolean value gives a hint
about the nature of the tuple to be indexed. When it is true,
the tuple is a duplicate of some existing tuple in the index. The
new tuple is a logically unchanged successor MVCC tuple version. This
code, it's better to inspect prop.

If the amproperty method returns true then
it has determined the property test result: it must set *res
- to the boolean value to return, or set *isnull
+ to the Boolean value to return, or set *isnull
to true to return a NULL. (Both of the referenced variables
are initialized to false before the call.)
If the amproperty method returns false then
UPDATE statements may use subscripting in the
SET clause to modify jsonb values. Subscript
- paths must be traversible for all affected values insofar as they exist. For
+ paths must be traversable for all affected values insofar as they exist. For
instance, the path val['a']['b']['c'] can be traversed all
the way to c if every val,
val['a'], and val['a']['b'] is an
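A minimal illustration of that traversability rule (the table and data here are hypothetical, not from the text):

```sql
CREATE TABLE t (val jsonb);
INSERT INTO t VALUES ('{"a": {"b": {"c": 1}}}');

-- Succeeds: val, val['a'], and val['a']['b'] all exist as objects,
-- so the path can be traversed all the way down
UPDATE t SET val['a']['b']['c'] = '2';

-- Would fail for a row where val['a'] is, say, a string or number,
-- because the path val['a']['b'] is then not traversable
```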
There is no environment variable equivalent to this option, and no
facility for looking it up in .pgpass. It can be
used in a service file connection definition. Users with
- more sophisticated uses should consider using openssl engines and
+ more sophisticated uses should consider using OpenSSL engines and
tools like PKCS#11 or USB crypto offload devices.
While the pipeline API was introduced in
PostgreSQL 14, it is a client-side feature
- which doesn't require special server support, and works on any server
+ which doesn't require special server support and works on any server
that supports the v3 extended query protocol.
are being performed in rapid succession. There is usually less benefit
in using pipelined commands when each query takes many multiples of the client/server
round-trip time to execute. A 100-statement operation run on a server
- 300ms round-trip-time away would take 30 seconds in network latency alone
- without pipelining; with pipelining it may spend as little as 0.3s waiting for
+ 300 ms round-trip-time away would take 30 seconds in network latency alone
+ without pipelining; with pipelining it may spend as little as 0.3 s waiting for
results from the server.
Each registered event handler is associated with two pieces of data,
known to
libpq only as opaque
void *
- pointers. There is a passthrough pointer that is provided
+ pointers. There is a pass-through pointer that is provided
by the application when the event handler is registered with a
- PGconn. The passthrough pointer never changes for the
+ PGconn. The pass-through pointer never changes for the
life of the PGconn and all PGresults
generated from it; so if used, it must point to long-lived data.
In addition there is an instance data pointer, which starts
, and PQsetResultInstanceData functions. Note that
- unlike the passthrough pointer, instance data of a PGconn
+ unlike the pass-through pointer, instance data of a PGconn
is not automatically inherited by PGresults created from
- it.  libpq does not know what passthrough
+ it.  libpq does not know what pass-through
and instance data pointers point to (if anything) and will never attempt
to free them — that is the responsibility of the event handler.
Similar to spill-to-disk behavior, streaming is triggered when the total
amount of changes decoded from the WAL (for all in-progress transactions)
exceeds the limit defined by logical_decoding_work_mem setting.
- At that point, the largest toplevel transaction (measured by the amount of memory
+ At that point, the largest top-level transaction (measured by the amount of memory
currently used for decoded changes) is selected and streamed. However, in
some cases we still have to spill to disk even if streaming is enabled
because we exceed the memory threshold but still have not decoded the
Number of transactions spilled to disk once the memory used by
logical decoding to decode changes from WAL has exceeded
logical_decoding_work_mem. The counter gets
- incremented for both toplevel transactions and subtransactions.
+ incremented for both top-level transactions and subtransactions.
plugin after the memory used by logical decoding to decode changes
from WAL for this slot has exceeded
logical_decoding_work_mem. Streaming only
- works with toplevel transactions (subtransactions can't be streamed
+ works with top-level transactions (subtransactions can't be streamed
independently), so the counter is not incremented for subtransactions.
Number of decoded transactions sent to the decoding output plugin for
- this slot. This counts toplevel transactions only, and is not incremented
+ this slot. This counts top-level transactions only, and is not incremented
for subtransactions. Note that this includes the transactions that are
streamed and/or spilled.
The statistics gathered by the module are made available via a
view named pg_stat_statements. This view
contains one row for each distinct combination of database ID, user
- ID, query ID and whether it's a top level statement or not (up to
+ ID, query ID and whether it's a top-level statement or not (up to
the maximum number of distinct statements that the module can track).
The columns of the view are shown in
.
toplevel bool
- True if the query was executed as a top level statement
+ True if the query was executed as a top-level statement
(always true if pg_stat_statements.track is set to
top)
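A query against the view showing the toplevel column in context might look like this (assuming the pg_stat_statements extension is installed; the columns are those described above):

```sql
SELECT userid, dbid, queryid, toplevel, calls, query
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 5;
```

With pg_stat_statements.track = all, the same query text can appear twice, once with toplevel = true and once with toplevel = false, when it runs both directly and inside a function.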
If FINALIZE is specified, a previous
- DETACH CONCURRENTLY invocation that was cancelled or
+ DETACH CONCURRENTLY invocation that was canceled or
interrupted is completed.
At most one partition in a partitioned table can be pending detach at
a time.
The \if and \elif commands read
- their argument(s) and evaluate them as a boolean expression. If the
+ their argument(s) and evaluate them as a Boolean expression. If the
expression yields true then processing continues
normally; otherwise, lines are skipped until a
matching \elif, \else,
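For example, a psql script exercising \if with a Boolean expression (this snippet is illustrative; \gset stores the query result in a psql variable that \if then evaluates):

```sql
SELECT EXISTS (SELECT 1 FROM pg_tables
               WHERE tablename = 'foo') AS found \gset
\if :found
  \echo 'table foo exists'
\else
  \echo 'table foo is missing'
\endif
```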
WAL logs are stored in the directory
pg_wal under the data directory, as a set of
segment files, normally each 16 MB in size (but the size can be changed
- by altering the initdb option). Each segment is
+ by altering the initdb option). Each segment is
divided into pages, normally 8 kB each (this size can be changed via the
configure option). The log record headers
are described in access/xlogrecord.h; the record