certain implementation-level heuristics will fail to identify and
delete even one garbage index tuple (in which case a page split or
deduplication pass resolves the issue of an incoming new tuple not
- fitting on a leaf page). The worst case number of versions that
+ fitting on a leaf page). The worst-case number of versions that
any index scan must traverse (for any single logical row) is an
important contributor to overall system responsiveness and
throughput. A bottom-up index deletion pass targets suspected
This is expected with any B-Tree index that is subject to
significant version churn from UPDATEs that
rarely or never logically modify the columns that the index covers.
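The workload described here can be sketched in SQL (table, column, and index names are illustrative, not taken from the source):

```sql
-- Hypothetical schema: UPDATEs change only "balance", never the
-- indexed "customer_id", so the secondary index sees pure version
-- churn without any logical change to its key columns.
CREATE TABLE accounts (
    id          bigint PRIMARY KEY,
    customer_id bigint,
    balance     numeric
);
CREATE INDEX accounts_customer_idx ON accounts (customer_id);

-- Each UPDATE creates a new row version; bottom-up index deletion
-- lets the index discard obsolete versions instead of splitting
-- leaf pages.
UPDATE accounts SET balance = balance + 1 WHERE id = 42;

-- The on-disk size of the index can stay flat despite the churn:
SELECT pg_relation_size('accounts_customer_idx');
```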
- The average and worst case number of versions per logical row can
+ The average and worst-case number of versions per logical row can
be kept low purely through targeted incremental deletion passes.
It's quite possible that the on-disk size of certain indexes will
never increase by even one single page/block despite
constraints) to use deduplication. This allows leaf pages to
temporarily absorb
extra version churn duplicates.
Deduplication in unique indexes augments bottom-up index deletion,
- especially in cases where a long-running transactions holds a
+ especially in cases where a long-running transaction holds a
snapshot that blocks garbage collection. The goal is to buy time
for the bottom-up index deletion strategy to become effective
again. Delaying page splits until a single long-running
pg_stats_ext_exprs is also designed to present
the information in a more readable format than the underlying catalogs
— at the cost that its schema must be extended whenever the structure
- of statistics in <link linkend="catalog-pg-statistic"><structname>pg_statistic</structname></link> changes.
+ of statistics in <structname>pg_statistic_ext</structname> changes.
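For example, the per-expression statistics can be read from the view directly (the column names below follow the pg_stats_ext_exprs view; the table name is hypothetical):

```sql
-- One row per indexed expression, in a readable form:
SELECT statistics_name, expr, n_distinct
FROM pg_stats_ext_exprs
WHERE tablename = 'orders';
```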
Commands ALTER SUBSCRIPTION ... REFRESH PUBLICATION and
- ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ... with refresh
- option as true cannot be executed inside a transaction block.
+ ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...
+ with refresh option as true cannot be
+ executed inside a transaction block.
These commands also cannot be executed when the subscription has
two_phase commit enabled,
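A minimal sketch of the transaction-block restriction (the subscription name is hypothetical):

```sql
BEGIN;
-- Fails with an error: this form cannot be executed inside a
-- transaction block.
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
ROLLBACK;

-- Outside a transaction block the same command is allowed:
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
```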
By default, initdb will write instructions for how
to start the cluster at the end of its output. This option causes
those instructions to be left out. This is primarily intended for use
- by tools that wrap initdb in platform specific
+ by tools that wrap initdb in platform-specific
behavior, where those instructions are likely to be incorrect.
The status of each kind of extended statistics is shown in a column
named after its statistic kind (e.g. Ndistinct).
- "defined" means that it was requested when creating the statistics,
- and NULL means it wasn't requested.
- You can use pg_stats_ext if you'd like to know whether
- ANALYZE was run and statistics are available to the
- planner.
+ defined means that it was requested when creating
+ the statistics, and NULL means it wasn't requested.
+ You can use pg_stats_ext if you'd like to
+ know whether ANALYZE
+ was run and statistics are available to the planner.
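For example, one way to check whether the statistics have been built (the column names follow the pg_stats_ext view; the table name is hypothetical):

```sql
SELECT statistics_name, kinds, n_distinct, dependencies
FROM pg_stats_ext
WHERE tablename = 'orders';
-- A NULL in a statistics column means that kind was not requested
-- when the statistics object was created, or ANALYZE has not yet
-- populated it.
```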
Note: the compress method is only applied to
- values to be stored. The consistent methods receive query scankeys
- unchanged, without transformation using compress.
+ values to be stored. The consistent methods receive query
+ scankeys unchanged, without transformation
+ using compress.
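As a hedged illustration of where these methods are wired in: for GiST, compress is support function 3 and consistent is support function 1; all function and type names below are hypothetical, and the signatures follow the GiST extensibility interface.

```sql
CREATE OPERATOR CLASS gist_mytype_ops
    DEFAULT FOR TYPE mytype USING gist AS
        OPERATOR 1  &&(mytype, mytype),
        -- consistent sees the query scankey as supplied, untouched
        -- by compress:
        FUNCTION 1  my_consistent(internal, mytype, smallint, oid, internal),
        FUNCTION 2  my_union(internal, internal),
        -- compress is applied only to values being stored:
        FUNCTION 3  my_compress(internal),
        FUNCTION 5  my_penalty(internal, internal, internal),
        FUNCTION 6  my_picksplit(internal, internal),
        FUNCTION 7  my_same(mytype, mytype, internal);
```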
- We can also get the changes of the in-progress transaction and the typical
- output, might be:
+ We can also get the changes of the in-progress transaction, and the typical
+ output might be:
postgres[33712]=#* SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'stream-changes', '1');