attr = TupleDescAttr(tupdesc, i);
/*
- * Tuple header can specify less attributes than tuple descriptor as
+ * Tuple header can specify fewer attributes than tuple descriptor as
* ALTER TABLE ADD COLUMN without DEFAULT keyword does not actually
* change tuples in pages, so attributes with numbers greater than
* (t_infomask2 & HEAP_NATTS_MASK) should be treated as NULL.
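As an illustration of the rule this comment states, a hypothetical helper (not part of the source) built on the real HeapTupleHeaderGetNatts() macro might look like this:

#include "postgres.h"
#include "access/htup_details.h"

/*
 * Attributes numbered beyond what the tuple header records are not stored
 * in the tuple at all; readers must treat them as NULL (or as the
 * attmissingval default, see expand_tuple further below).
 */
static bool
attr_is_stored(HeapTuple tuple, int attnum)
{
    /* HeapTupleHeaderGetNatts reads t_infomask2 & HEAP_NATTS_MASK */
    return attnum <= HeapTupleHeaderGetNatts(tuple->t_data);
}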
name such as de_DE can be considered unique
within a given database even though it would not be unique globally.
Use of the stripped collation names is recommended, since it will
- make one less thing you need to change if you decide to change to
+ make one fewer thing you need to change if you decide to change to
another database encoding. Note however that the default,
C, and POSIX collations can be used regardless of
the database encoding.
of anycompatible and anycompatiblenonarray
inputs, the array element types of anycompatiblearray
inputs, the range subtypes of anycompatiblerange inputs,
- and the multirange subtypes of anycompatiablemultirange
+ and the multirange subtypes of anycompatiblemultirange
inputs. If anycompatiblenonarray is present then the
common type is required to be a non-array type. Once a common type is
identified, arguments in anycompatible
Insert multiple tuples in bulk into the foreign table.
The parameters are the same as for ExecForeignInsert
except slots and planSlots contain
- multiple tuples and *numSlots> specifies the number of
+ multiple tuples and *numSlots specifies the number of
tuples in those arrays.
NULL, attempts to insert into the foreign table will
use ExecForeignInsert.
This function is not used if the INSERT has the
- RETURNING> clause.
+ RETURNING clause.
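For reference, the C signature implied by this description (the same parameters as ExecForeignInsert, with slots, planSlots, and *numSlots as described above) is:

TupleTableSlot **
ExecForeignBatchInsert(EState *estate,
                       ResultRelInfo *rinfo,
                       TupleTableSlot **slots,
                       TupleTableSlot **planSlots,
                       int *numSlots);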
Report the maximum number of tuples that a single
ExecForeignBatchInsert call can handle for
- the specified foreign table. That is, The executor passes at most
- the number of tuples that this function returns to
- ExecForeignBatchInsert.
+ the specified foreign table. The executor passes at most
+ the given number of tuples to ExecForeignBatchInsert.
rinfo is the ResultRelInfo struct describing
the target foreign table.
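A minimal sketch of such a callback; the limit of 128 is an arbitrary value assumed for illustration (a real FDW would typically derive it from server or table options, and postgres_fdw, for instance, disables batching when RETURNING is in use):

#include "postgres.h"
#include "nodes/execnodes.h"

static int
myGetForeignModifyBatchSize(ResultRelInfo *rinfo)
{
    /* One way to detect RETURNING: batching buys nothing in that case. */
    if (rinfo->ri_returningList != NIL)
        return 1;

    /* The executor will pass at most this many tuples per call. */
    return 128;
}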
The FDW is expected to provide a foreign server and/or foreign
The optional filter_prepare_cb callback
is called to determine whether data that is part of the current
- two-phase commit transaction should be considered for decode
- at this prepare stage or as a regular one-phase transaction at
- COMMIT PREPARED time later. To signal that
+ two-phase commit transaction should be considered for decoding
+ at this prepare stage or later as a regular one-phase transaction at
+ COMMIT PREPARED time. To signal that
decoding should be skipped, return true;
false otherwise. When the callback is not
defined, false is assumed (i.e. nothing is
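A sketch of a filter_prepare_cb under an assumed convention made up for this example (transactions whose GID begins with "skip-" are not decoded at the prepare stage and are decoded later as regular one-phase transactions instead):

#include "postgres.h"
#include "replication/logical.h"

static bool
my_filter_prepare_cb(LogicalDecodingContext *ctx,
                     TransactionId xid, const char *gid)
{
    /* Returning true skips decoding at this prepare stage. */
    return strncmp(gid, "skip-", 5) == 0;
}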
The required begin_prepare_cb callback is called
whenever the start of a prepared transaction has been decoded. The
gid field, which is part of the
- txn parameter can be used in this callback to
- check if the plugin has already received this prepare in which case it
- can skip the remaining changes of the transaction. This can only happen
- if the user restarts the decoding after receiving the prepare for a
- transaction but before receiving the commit prepared say because of some
- error.
+ txn parameter, can be used in this callback to
+ check if the plugin has already received this PREPARE
+ in which case it can skip the remaining changes of the transaction.
+ This can only happen if the user restarts the decoding after receiving
+ the PREPARE for a transaction but before receiving
+ the COMMIT PREPARED, say because of some error.
typedef void (*LogicalDecodeBeginPrepareCB) (struct LogicalDecodingContext *ctx,
ReorderBufferTXN *txn);
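A sketch of a begin_prepare_cb in the style of a text-output plugin; the GID taken from txn->gid is simply echoed, and any bookkeeping for detecting an already-received PREPARE (which is plugin-specific) is omitted here:

#include "postgres.h"
#include "lib/stringinfo.h"
#include "replication/logical.h"

static void
my_begin_prepare_cb(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
{
    OutputPluginPrepareWrite(ctx, true);
    appendStringInfo(ctx->out, "BEGIN PREPARE %s", txn->gid);
    OutputPluginWrite(ctx, true);
}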
decoded. The change_cb callback for all modified
rows will have been called before this, if there have been any modified
rows. The
gid field, which is part of the
- txn parameter can be used in this callback.
+ txn parameter, can be used in this callback.
typedef void (*LogicalDecodePrepareCB) (struct LogicalDecodingContext *ctx,
                                        ReorderBufferTXN *txn,
                                        XLogRecPtr prepare_lsn);
The required commit_prepared_cb callback is called
- whenever a transaction commit prepared has been decoded. The
- gid field, which is part of the
- txn parameter can be used in this callback.
+ whenever a transaction COMMIT PREPARED has been decoded.
+ The gid field, which is part of the
+ txn parameter, can be used in this callback.
typedef void (*LogicalDecodeCommitPreparedCB) (struct LogicalDecodingContext *ctx,
                                               ReorderBufferTXN *txn,
                                               XLogRecPtr commit_lsn);
The required rollback_prepared_cb callback is called
- whenever a transaction rollback prepared has been decoded. The
- gid field, which is part of the
- txn parameter can be used in this callback. The
+ whenever a transaction ROLLBACK PREPARED has been
+ decoded. The gid field, which is part of the
+ txn parameter, can be used in this callback. The
parameters
prepare_end_lsn and
prepare_time can be used to check if the plugin
- has received this prepare transaction in which case it can apply the
- rollback, otherwise, it can skip the rollback operation. The
+ has received this PREPARE TRANSACTION, in which case
+ it can apply the rollback; otherwise, it can skip the rollback operation. The
gid alone is not sufficient because the downstream
- node can have prepared transaction with same identifier.
+ node can have a prepared transaction with the same identifier.
typedef void (*LogicalDecodeRollbackPreparedCB) (struct LogicalDecodingContext *ctx,
                                                 ReorderBufferTXN *txn,
                                                 XLogRecPtr prepare_end_lsn,
                                                 TimestampTz prepare_time);
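A sketch of a rollback_prepared_cb, again in the text-output style. The decision described above (apply the rollback only if this PREPARE was ever shipped downstream, judged from prepare_end_lsn and prepare_time) is plugin-specific bookkeeping and is only noted in a comment:

#include "postgres.h"
#include "lib/stringinfo.h"
#include "replication/logical.h"
#include "utils/timestamp.h"

static void
my_rollback_prepared_cb(LogicalDecodingContext *ctx,
                        ReorderBufferTXN *txn,
                        XLogRecPtr prepare_end_lsn,
                        TimestampTz prepare_time)
{
    /*
     * A real plugin would consult its own records here, using
     * prepare_end_lsn and prepare_time, and return early if the PREPARE
     * was never received downstream.
     */
    OutputPluginPrepareWrite(ctx, true);
    appendStringInfo(ctx->out, "ROLLBACK PREPARED %s, prepared at %s",
                     txn->gid, timestamptz_to_str(prepare_time));
    OutputPluginWrite(ctx, true);
}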
the stream_commit_cb callback
(or possibly aborted using the stream_abort_cb callback).
If two-phase commits are supported, the transaction can be prepared using the
- stream_prepare_cb callback, commit prepared using the
+ stream_prepare_cb callback,
+ COMMIT PREPARED using the
commit_prepared_cb callback or aborted using the
rollback_prepared_cb.
- When a prepared transaction is rollbacked using the
+ When a prepared transaction is rolled back using the
ROLLBACK PREPARED, then the
rollback_prepared_cb callback is invoked and when the
prepared transaction is committed using COMMIT PREPARED,
- attribute that will be detoasted as needed. Default value is
+ attributes will be detoasted as needed. Default value is
false.
This function discards all the open connections that are established by
postgres_fdw from the local session to
- the foreign servers. If the connections are used in the current local
+ foreign servers. If the connections are used in the current local
transaction, they are not disconnected and warning messages are reported.
This function returns true if it disconnects
at least one connection, otherwise false.
When changing the definition of or removing a foreign server or
- a user mapping, the corresponding connections are closed.
- But note that if the connections are used in the current local transaction
- at that moment, they are kept until the end of the transaction.
- Closed connections will be established again when they are necessary
- by subsequent queries using a foreign table.
+ a user mapping, the associated connections are closed.
+ But note that if any connections are in use in the current local transaction,
+ they are kept until the end of the transaction.
+ Closed connections will be re-established when they are necessary
+ by future queries using a foreign table.
Once a connection to a foreign server has been established,
- it's usually kept until the local or the corresponding remote
+ it's usually kept until the local or corresponding remote
session exits. To disconnect a connection explicitly,
postgres_fdw_disconnect and
postgres_fdw_disconnect_all functions
- need to be used. For example, these are useful when closing
- the connections that are no longer necessary and then preventing them
- from consuming the foreign server connections capacity too much.
+ may be used. For example, these are useful to close
+ connections that are no longer necessary, thereby releasing
+ connections on the foreign server.
- Identifies the following TupleData message as a old tuple.
- This field is present if the table in which the delete has
+ Identifies the following TupleData message as an old tuple.
+ This field is present if the table in which the delete
happened has REPLICA IDENTITY set to FULL.
allocated for the subscription on the remote host are released. If due to
network breakdown or some other error,
PostgreSQL
is unable to remove the slots, an ERROR will be reported. To proceed in this
- situation, either the user need to retry the operation or disassociate the
+ situation, the user either needs to retry the operation or disassociate the
slot from the subscription and drop the subscription as explained in
.
Before
PostgreSQL version 8.3, the name of
a generated array type was always exactly the element type's name with one
underscore character (_) prepended. (Type names were
- therefore restricted in length to one less character than other names.)
+ therefore restricted in length to one fewer character than other names.)
While this is still usually the case, the array type name may vary from
this in case of maximum-length names or collisions with user type names
that begin with underscore. Writing code that depends on this convention
Drop the index without locking out concurrent selects, inserts, updates,
and deletes on the index's table. A normal DROP INDEX
- acquires exclusive lock on the table, blocking other accesses until the
+ acquires an exclusive lock on the table, blocking other accesses until the
index drop can be completed. With this option, the command instead
waits until conflicting transactions have completed.
The query trees generated from rule actions are thrown into the
rewrite system again, and maybe more rules get applied resulting
- in more or less query trees.
+ in additional or fewer query trees.
So a rule's actions must have either a different
command type or a different result relation than the rule itself is
on, otherwise this recursive process will end up in an infinite loop.
- Data pages are not checksum protected by default, but this can optionally be
- enabled for a cluster. When enabled, each data page will be assigned a
- checksum that is updated when the page is written and verified every time
- the page is read. Only data pages are protected by checksums, internal data
+ By default, data pages are not protected by checksums, but this can optionally be
+ enabled for a cluster. When enabled, each data page will be assigned a
+ checksum that is updated when the page is written and verified each time
+ the page is read. Only data pages are protected by checksums; internal data
structures and temporary files are not.
- Checksums are normally enabled when the cluster is initialized using
+ Checksum verification is normally enabled when the cluster is initialized using
initdb.
They can also be enabled or disabled at a later time as an offline
operation. Data checksums are enabled or disabled at the full cluster
- level, and cannot be specified individually for databases or tables.
+ level, and cannot be specified for individual databases or tables.
- When attempting to recover from corrupt data it may be necessary to bypass
- the checksum protection in order to recover data. To do this, temporarily
- set the configuration parameter .
+ When attempting to recover from corrupt data, it may be necessary to bypass
+ the checksum protection. To do this, temporarily set the configuration
+ parameter .
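The read-side verification described here amounts to recomputing the checksum and comparing it with the value stored in the page header; a sketch using the real pg_checksum_page(), where the page buffer and block number are assumed inputs:

#include "postgres.h"
#include "storage/bufpage.h"
#include "storage/checksum.h"

static bool
page_checksum_ok(char *page, BlockNumber blkno)
{
    PageHeader  phdr = (PageHeader) page;

    /* pg_checksum_page computes the checksum the page should carry */
    return phdr->pd_checksum == pg_checksum_page(page, blkno);
}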
}
/*
- * Expand a tuple which has less attributes than required. For each attribute
+ * Expand a tuple which has fewer attributes than required. For each attribute
* not present in the sourceTuple, if there is a missing value, it will
* be used. Otherwise the attribute will be set to NULL.
*
- * The source tuple must have less attributes than the required number.
+ * The source tuple must have fewer attributes than the required number.
*
* Only one of targetHeapTuple and targetMinimalTuple may be supplied. The
* other argument must be NULL.
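Callers normally reach this logic through the heap_expand_tuple() wrapper; a sketch, where shortTuple and tupleDesc stand in for a tuple with fewer attributes and the table's current descriptor:

#include "postgres.h"
#include "access/htup_details.h"

static HeapTuple
expand_short_tuple(HeapTuple shortTuple, TupleDesc tupleDesc)
{
    /* palloc'd copy; absent attributes come from attmissingval or NULL */
    return heap_expand_tuple(shortTuple, tupleDesc);
}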
* NB: A redo function should normally not call this directly. To get a page
* to modify, use XLogReadBufferForRedoExtended instead. It is important that
* all pages modified by a WAL record are registered in the WAL records, or
- * they will be invisible to tools that that need to know which pages are
- * modified.
+ * they will be invisible to tools that need to know which pages are modified.
*/
Buffer
XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
}
/*
- * get_am_name - given an access method OID name and type, look up its name.
+ * get_am_name - given an access method OID, look up its name.
*/
char *
get_am_name(Oid amOid)
}
/*
- * Look up hash entries for the current tuple in all hashed grouping sets,
- * returning an array of pergroup pointers suitable for advance_aggregates.
+ * Look up hash entries for the current tuple in all hashed grouping sets.
*
* Be aware that lookup_hash_entry can reset the tmpcontext.
*
*
* Information about the aggregates and transition functions are collected
* in the root->agginfos and root->aggtransinfos lists. The 'aggtranstype',
- * 'aggno', and 'aggtransno' fields in are filled in in each Aggref.
+ * 'aggno', and 'aggtransno' fields of each Aggref are filled in.
*
* NOTE: This modifies the Aggrefs in the input expression in-place!
*
* holding ProcArrayLock) exclusively). Thus the xactCompletionCount check
* ensures we would detect if the snapshot would have changed.
*
- * As the snapshot contents are the same as it was before, it is is safe
+ * As the snapshot contents are the same as it was before, it is safe
* to re-enter the snapshot's xmin into the PGPROC array. None of the rows
* visible under the snapshot could already have been removed (that'd
* require the set of running transactions to change) and it fulfills the
* implement @? and @@ operators, which in turn are intended to have
* index support. Thus, it's desirable to make it easier to achieve
* consistency between index scan results and sequential scan results.
- * So, we throw as less errors as possible. Regarding this function,
+ * So, we throw as few errors as possible. Regarding this function,
* such behavior also matches behavior of JSON_EXISTS() clause of
* SQL/JSON. Regarding jsonb_path_match(), this function doesn't have
* an analogy in SQL/JSON, so we define its behavior on our own.
/*
* The calculation so far gave us a selectivity for the "<=" case.
- * We'll have one less tuple for "<" and one additional tuple for
+ * We'll have one fewer tuple for "<" and one additional tuple for
* ">=", the latter of which we'll reverse the selectivity for
* below, so we can simply subtract one tuple for both cases. The
* cases that need this adjustment can be identified by iseq being
* It doesn't make any sense to specify all of the cache's key columns
* here: since the key is unique, there could be at most one match, so
* you ought to use SearchCatCache() instead. Hence this function takes
- * one less Datum argument than SearchCatCache() does.
+ * one fewer Datum argument than SearchCatCache() does.
*
* The caller must not modify the list object or the pointed-to tuples,
* and must call ReleaseCatCacheList() when done with the list.
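A sketch of the intended usage via the SearchSysCacheList1() wrapper, which calls SearchCatCacheList() underneath; the AMOPSTRATEGY cache and the opfamilyoid input are just an example:

#include "postgres.h"
#include "utils/catcache.h"
#include "utils/syscache.h"

static void
scan_opfamily_members(Oid opfamilyoid)
{
    CatCList   *catlist;

    catlist = SearchSysCacheList1(AMOPSTRATEGY,
                                  ObjectIdGetDatum(opfamilyoid));
    for (int i = 0; i < catlist->n_members; i++)
    {
        HeapTuple   tuple = &catlist->members[i]->tuple;

        /* read-only inspection of each matching pg_amop tuple */
        (void) tuple;
    }
    ReleaseCatCacheList(catlist);
}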
chunkoff, rq->path, (int64) rq->offset);
/*
- * We should not receive receive more data than we requested, or
+ * We should not receive more data than we requested, or
* pg_read_binary_file() messed up. We could receive less,
* though, if the file was truncated in the source after we
* checked its size. That's OK, there should be a WAL record of
/*
* If advanceConnectionState changed client to finished state,
- * that's one less client that remains.
+ * that's one fewer client that remains.
*/
if (st->state == CSTATE_FINISHED || st->state == CSTATE_ABORTED)
remains--;
/*
* Maximum length for identifiers (e.g. table names, column names,
- * function names). Names actually are limited to one less byte than this,
+ * function names). Names actually are limited to one fewer byte than this,
* because the length must include a trailing zero byte.
*
* Changing this requires an initdb.
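A sketch of the consequence: copying an identifier into a fixed-size NameData keeps at most NAMEDATALEN - 1 bytes of the name, leaving room for the trailing zero byte (namestrcpy() is the real helper that performs this bounded copy):

#include "postgres.h"
#include "utils/builtins.h"

static void
store_identifier(const char *ident)
{
    NameData    name;

    /* keeps at most NAMEDATALEN - 1 bytes, always zero-terminated */
    namestrcpy(&name, ident);
}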
/*
* Maximum length for identifiers (e.g. table names, column names,
- * function names). Names actually are limited to one less byte than this,
+ * function names). Names actually are limited to one fewer byte than this,
* because the length must include a trailing zero byte.
*
* This should be at least as much as NAMEDATALEN of the database the
<(100,1),115> | ((-15,1),(18.6827201635,82.3172798365),(100,116),(181.317279836,82.3172798365),(215,1),(181.317279836,-80.3172798365),(100,-114),(18.6827201635,-80.3172798365))
(6 rows)
--- Too less points error
+-- Error for insufficient number of points
SELECT f1, polygon(1, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
ERROR: must request at least 2 points
-- Zero radius error
-- only use parallelism when explicitly intending to do so
SET max_parallel_maintenance_workers = 0;
SET max_parallel_workers = 0;
--- A table with with contents that, when sorted, triggers abbreviated
+-- A table with contents that, when sorted, triggers abbreviated
-- key aborts. One easy way to achieve that is to use uuids that all
-- have the same prefix, as abbreviated keys for uuids just use the
-- first sizeof(Datum) bytes.
-- To polygon with fewer points
SELECT f1, polygon(8, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
--- Too less points error
+-- Error for insufficient number of points
SELECT f1, polygon(1, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
-- Zero radius error
SET max_parallel_maintenance_workers = 0;
SET max_parallel_workers = 0;
--- A table with with contents that, when sorted, triggers abbreviated
+-- A table with contents that, when sorted, triggers abbreviated
-- key aborts. One easy way to achieve that is to use uuids that all
-- have the same prefix, as abbreviated keys for uuids just use the
-- first sizeof(Datum) bytes.