Additional Supplied Modules
- This appendix contains information regarding the modules that
+ This appendix and the next one contain information regarding the modules that
can be found in the contrib directory of the
PostgreSQL> distribution.
These include porting tools, analysis utilities,
- When building from the source distribution, these modules are not built
+ This appendix covers extensions and other server plug-in modules found in
+ contrib . The next appendix covers utility
+ programs.
+
+
+ When building from the source distribution, these components are not built
automatically, unless you build the "world" target
(see ).
You can build and install all of them by running:
.
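A sketch of the build commands (assuming a configured source tree and a GNU make toolchain; adjust `gmake` to `make` on platforms where GNU make is the default):

```shell
# Build and install everything under contrib/ in one pass,
# from the top of a configured PostgreSQL source tree.
cd contrib
gmake
gmake install
```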
+ Note, however, that some of these modules are not extensions
+ in this sense, but are loaded into the server in some other way, for instance
+ by way of
+ . See the documentation of each
+ module for details.
+
+
&adminpack;
&auth-delay;
&auto-explain;
&isn;
&lo;
&ltree;
- &oid2name;
&pageinspect;
&passwordcheck;
- &pgarchivecleanup;
- &pgbench;
&pgbuffercache;
&pgcrypto;
&pgfreespacemap;
&pgrowlocks;
- &pgstandby;
&pgstatstatements;
&pgstattuple;
- &pgtestfsync;
- &pgtesttiming;
&pgtrgm;
- &pgupgrade;
&seg;
&sepgsql;
&contrib-spi;
&tsearch2;
&unaccent;
&uuid-ossp;
- &vacuumlo;
&xml2;
+
+
+
+
Additional Supplied Programs
+
+ This appendix and the previous one contain information regarding the modules that
+ can be found in the contrib directory of the
+
PostgreSQL> distribution. See for
+ more information about the contrib section in general and
+ server extensions and plug-ins found in contrib
+ specifically.
+
+
+ This appendix covers utility programs found in contrib .
+ Once installed, either from source or a packaging system, they are found in
+ the bin directory of the
+
PostgreSQL installation and can be used like any
+ other program.
+
+
+
+
Client Applications
+
+ This section covers
PostgreSQL client
+ applications in contrib . They can be run from anywhere,
+ independent of where the database server resides. See
+ also for information about client
+ applications that are part of the core
PostgreSQL
+ distribution.
+
+
+ &oid2name;
+ &pgbench;
+ &vacuumlo;
+
+
+
+
Server Applications
+
+ This section covers
PostgreSQL server-related
+ applications in contrib . They are typically run on the
+ host where the database server resides. See also
+ linkend="reference-server"> for information about server applications that
+ are part of the core
PostgreSQL distribution.
+
+
+ &pgarchivecleanup;
+ &pgstandby;
+ &pgtestfsync;
+ &pgtesttiming;
+ &pgupgrade;
+
+
+
+ Author
+
+ Author (only used in the contrib section)
+
+
+
+
See Also
-
-
oid2name
+
+
+ oid2name
+ 1
+ Application
+
+
+
+ oid2name
+
resolve OIDs and file nodes in a PostgreSQL data directory
+
+
+
+ oid2name
+
+
+
+
+
Description
+
oid2name> is a utility program that helps administrators to
examine the file structure used by PostgreSQL. To make use of it, you need
-
-
Overview
-
oid2name connects to a target database and
extracts OID, filenode, and/or table name information. You can also have
it show database OIDs or tablespace OIDs.
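As a sketch of typical invocations (the database name is hypothetical):

```shell
# List all databases with their OIDs (add connection options as needed).
oid2name
# Show the filenode-to-table mapping for database "mydb".
oid2name -d mydb
# Show tablespaces and their OIDs.
oid2name -s
```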
- sect2 >
+ refsect1 >
- <sect2 >
+ <refsect1 >
+
Options
oid2name accepts the following command-line arguments:
OIDs. Alternatively you can give -s> to get a tablespace
listing.
- sect2 >
+ refsect1 >
-
+
+
Notes
+
+
oid2name> requires a running database server with
+ non-corrupt system catalogs. It is therefore of only limited use
+ for recovering from catastrophic database corruption situations.
+
+
+
+
Examples
----------------------
155156 foo
-
-
-
-
Limitations
-
-
oid2name> requires a running database server with
- non-corrupt system catalogs. It is therefore of only limited use
- for recovering from catastrophic database corruption situations.
-
-
+
- <sect2 >
+ <refsect1 >
Author
- sect2 >
+ refsect1 >
-sect1 >
+refentry >
-
-
pg_archivecleanup
+
+
+ 1
+ Application
+
+
+
+ pg_archivecleanup
+
clean up PostgreSQL WAL archive files
+
+
+
+ pg_archivecleanup
+
+
+
+
+
Description
+
pg_archivecleanup> is designed to be used as an
 archive_cleanup_command to clean up WAL file archives when
 running as a standby server. It can also be used as a standalone program to
 clean WAL file archives.
-
pg_archivecleanup features include:
-
-
-
- Written in C, so very portable and easy to install
-
-
-
- Easy-to-modify source code, with specifically designated
- sections to modify for your own needs
-
-
-
-
-
-
Usage
-
To configure a standby
server to use
pg_archivecleanup>, put this into its
from the same archive location.
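Assuming the target file is the standby's recovery.conf, the entry would look something like (the archive path is hypothetical):

```
archive_cleanup_command = 'pg_archivecleanup /mnt/server/archiverdir %r'
```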
- The full syntax of
pg_archivecleanup>'s command line is
-
-pg_archivecleanup option> ... archivelocation> restartwalfile>
-
When used as a standalone program all WAL files logically preceding the
- <literal>restartwalfile> will be removed from archivelocation>.
+ <replaceable>oldestkeptwalfile> will be removed from archivelocation>.
In this mode, if you specify a .backup> file name, then only the file prefix
- will be used as the <literal>restartwalfile>. This allows you to remove
+ will be used as the <replaceable>oldestkeptwalfile>. This allows you to remove
all WAL files archived prior to a specific base backup without error.
For example, the following example will remove all files older than
WAL file name 000000010000003700000010>:
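A sketch of the standalone invocation (the archive directory name is hypothetical; giving a .backup file name demonstrates the prefix handling described above):

```shell
pg_archivecleanup -d archive 000000010000003700000010.00000020.backup
```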
archivelocation> is a directory readable and writable by the
server-owning user.
- sect2 >
+ refsect1 >
- <sect2 >
-
pg_archivecleanup> Options
+ <refsect1 >
+
Options
pg_archivecleanup accepts the following command-line arguments:
+
+
+
+
Notes
+
+
pg_archivecleanup is designed to work with
+
PostgreSQL> 8.0 and later when used as a standalone utility,
+ or with
PostgreSQL> 9.0 and later when used as an
+ archive cleanup command.
+
-
+
 pg_archivecleanup is written in C and has
+ easy-to-modify source code, with specifically designated sections to modify
+ for your own needs.
+
+
- <sect2 >
+ <refsect1 >
Examples
On Linux or Unix systems, you might use:
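A hypothetical entry, using the -d option for debug output and redirecting it to a log file (paths are placeholders):

```
archive_cleanup_command = 'pg_archivecleanup -d /mnt/standby/archive %r 2>>cleanup.log'
```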
+
-
-
-
-
Supported Server Versions
-
-
pg_archivecleanup is designed to work with
-
PostgreSQL> 8.0 and later when used as a standalone utility,
- or with
PostgreSQL> 9.0 and later when used as an
- archive cleanup command.
-
-
-
-
+
Author
-
+
+
+
+
See Also
-
+
+
+
+
+
-
-
pgbench
+
+
+ 1
+ Application
+
+
+
+ pgbench
+
run a benchmark test on PostgreSQL
+
+
+
+ pgbench
+
+
+ pgbench
+
+
+
+
+
Description
pgbench is a simple program for running benchmark
tests on
PostgreSQL>. It runs the same sequence of SQL
settings. The next line reports the number of transactions completed
and intended (the latter being just the product of number of clients
and number of transactions per client); these will be equal unless the run
- failed before completion. (In <literal >-T> mode, only the actual
+ failed before completion. (In <option >-T> mode, only the actual
number of transactions is printed.)
The last two lines report the number of transactions per second,
figured with and without counting the time to start database sessions.
-
-
Overview
-
The default TPC-B-like transaction test requires specific tables to be
set up beforehand.
pgbench> should be invoked with
- the <literal >-i> (initialize) option to create and populate these
+ the <option >-i> (initialize) option to create and populate these
tables. (When you are testing a custom script, you don't need this
step, but will instead need to do whatever setup your test needs.)
Initialization looks like:
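A minimal sketch (other options as described in the surrounding text):

```shell
pgbench -i dbname
```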
where dbname> is the name of the already-created
- database to test in. (You may also need <literal >-h>,
- <literal>-p>, and/or >-U> options to specify how to
+ database to test in. (You may also need <option >-h>,
+ <option>-p>, and/or >-U> options to specify how to
connect to the database server.)
pgbench_history 0
You can (and, for most purposes, probably should) increase the number
- of rows by using the <literal >-s> (scale factor) option. The
- <literal >-F> (fillfactor) option might also be used at this point.
+ of rows by using the <option >-s> (scale factor) option. The
+ <option >-F> (fillfactor) option might also be used at this point.
Once you have done the necessary setup, you can run your benchmark
- with a command that doesn't include <literal >-i>, that is
+ with a command that doesn't include <option >-i>, that is
pgbench options> dbname>
In nearly all cases, you'll need some options to make a useful test.
- The most important options are <literal >-c> (number of clients),
- <literal>-t> (number of transactions), >-T> (time limit),
- and <literal >-f> (specify a custom script file).
+ The most important options are <option >-c> (number of clients),
+ <option>-t> (number of transactions), >-T> (time limit),
+ and <option >-f> (specify a custom script file).
See below for a full list.
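For instance, a hypothetical run with 10 clients across 2 worker threads for 60 seconds:

```shell
pgbench -c 10 -j 2 -T 60 mydb
```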
+
+
+
+
Options
- shows options that are used
- during database initialization, while
- shows options that are used
- while running benchmarks, and
- shows options that are useful
- in both cases.
+ The following is divided into three subsections. Different options are used
+ during database initialization and while running benchmarks; some options
+ are useful in both cases.
-
-
-
-
pgbench> Initialization Options
+
+
Initialization Options
pgbench accepts the following command-line
-
+ refsect2>
-
-
pgbench> Benchmarking Options
+ <refsect2 id="pgbench-run-options">
+
Benchmarking Options
pgbench accepts the following command-line
Define a variable for use by a custom script (see below).
- Multiple <literal >-D> options are allowed.
+ Multiple <option >-D> options are allowed.
Read transaction script from filename>.
See below for details.
- <literal>-N, -S, and -f>
+ <option>-N, -S, and -f>
are mutually exclusive.
output. With the built-in tests, this is not necessary; the
correct scale factor will be detected by counting the number of
rows in the pgbench_branches> table. However, when testing
- custom benchmarks (<literal >-f> option), the scale factor
+ custom benchmarks (<option >-f> option), the scale factor
will be reported as 1 unless this option is used.
Run the test for this many seconds, rather than a fixed number of
- transactions per client. <literal>-t> and
- <literal>-T> are mutually exclusive.
+ transactions per client. <option>-t> and
+ <option>-T> are mutually exclusive.
Vacuum all four standard tables before running the test.
- With neither <literal>-n> nor >-v>, pgbench will vacuum the
+ With neither <option>-n> nor >-v>, pgbench will vacuum the
pgbench_tellers> and pgbench_branches>
tables, and will truncate pgbench_history>.
-
+ refsect2>
-
-
pgbench> Common Options
+ <refsect2 id="pgbench-common-options">
+
Common Options
pgbench accepts the following command-line
-
+
+
-
+
+
Notes
+
+
What is the Transaction> Actually Performed in pgbench?
- If you specify <literal >-N>, steps 4 and 5 aren't included in the
- transaction. If you specify <literal >-S>, only the SELECT> is
+ If you specify <option >-N>, steps 4 and 5 aren't included in the
+ transaction. If you specify <option >-S>, only the SELECT> is
issued.
-
+ refsect2>
-
+ <refsect2>
Custom Scripts
pgbench has support for running custom
benchmark scenarios by replacing the default transaction script
(described above) with a transaction script read from a file
- (<literal>-f> option). In this case a transaction>
+ (<option>-f> option). In this case a transaction>
counts as one execution of a script file. You can even specify
- multiple scripts (multiple <literal>-f> options), in which
+ multiple scripts (multiple <option>-f> options), in which
case a random one of the scripts is chosen each time a client session
starts a new transaction.
There is a simple variable-substitution facility for script files.
- Variables can be set by the command-line <literal >-D> option,
+ Variables can be set by the command-line <option >-D> option,
explained above, or by the meta commands explained below.
- In addition to any variables preset by <literal >-D> command-line options,
+ In addition to any variables preset by <option >-D> command-line options,
the variable scale> is preset to the current scale factor.
Once set, a variable's
value can be inserted into a SQL command by writing
otherwise they'd not be independently touching different rows.)
-
+ refsect2>
-
+ <refsect2>
Per-Transaction Logging
- With the <literal>-l> option, pgbench> writes the time
+ With the <option>-l> option, pgbench> writes the time
taken by each transaction to a log file. The log file will be named
pgbench_log.nnn> , where
nnn> is the PID of the pgbench process.
- If the <literal >-j> option is 2 or higher, creating multiple worker
+ If the <option >-j> option is 2 or higher, creating multiple worker
threads, each will have its own log file. The first worker will use the
same name for its log file as in the standard single worker case.
The additional log files for the other workers will be named
where time> is the total elapsed transaction time in microseconds,
file_no> identifies which script file was used
- (useful when multiple scripts were specified with <literal >-f>),
+ (useful when multiple scripts were specified with <option >-f>),
and time_epoch>/time_us> are a
UNIX epoch format timestamp and an offset
in microseconds (suitable for creating a ISO 8601
0 202 2038 0 1175850569 2663
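With the field layout described above, the average latency can be pulled from such a log with a quick sketch (the sample lines are invented, apart from the one shown above):

```shell
# Average the per-transaction time (third field, in microseconds)
# over a pgbench per-transaction log such as pgbench_log.<pid>.
printf '%s\n' \
  '0 200 2241 0 1175850568 995598' \
  '0 201 2513 0 1175850569 608' \
  '0 202 2038 0 1175850569 2663' |
awk '{ sum += $3; n++ } END { printf "avg latency: %.1f usec\n", sum/n }'
# prints: avg latency: 2264.0 usec
```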
-
+ refsect2>
-
+ <refsect2>
Per-Statement Latencies
- With the <literal>-r> option, pgbench> collects
+ With the <option>-r> option, pgbench> collects
the elapsed transaction time of each statement executed by every
client. It then reports an average of those values, referred to
as the latency for each statement, after the benchmark has finished.
Comparing average TPS values with and without latency reporting enabled
is a good way to measure if the timing overhead is significant.
-
+ refsect2>
-
+ <refsect2>
Good Practices
In the first place, never> believe any test that runs
- for only a few seconds. Use the <literal>-t> or >-T> option
+ for only a few seconds. Use the <option>-t> or >-T> option
to make the run last at least a few minutes, so as to average out noise.
In some cases you could need hours to get numbers that are reproducible.
It's a good idea to try the test run a few times, to find out if your
For the default TPC-B-like test scenario, the initialization scale factor
- (<literal >-s>) should be at least as large as the largest number of
- clients you intend to test (<literal >-c>); else you'll mostly be
- measuring update contention. There are only <literal >-s> rows in
+ (<option >-s>) should be at least as large as the largest number of
+ clients you intend to test (<option >-c>); else you'll mostly be
+ measuring update contention. There are only <option >-s> rows in
the pgbench_branches> table, and every transaction wants to
- update one of them, so <literal>-c> values in excess of >-s>
+ update one of them, so <option>-c> values in excess of >-s>
will undoubtedly result in lots of transactions blocked waiting for
other transactions.
instances concurrently, on several client machines, against the same
database server.
-
-
-sect1 >
+ refsect2>
+
+refentry >
-
-
pg_standby
+
+
+ 1
+ Application
+
+
+
+ pg_standby
+
supports the creation of a PostgreSQL warm standby server
+
+
+
+ pg_standby
+
+
+
+
+
Description
+
pg_standby> supports creation of a warm standby>
database server. It is designed to be a production-ready program, as well
server manual (see ).
-
pg_standby features include:
-
-
-
- Written in C, so very portable and easy to install
-
-
-
- Easy-to-modify source code, with specifically designated
- sections to modify for your own needs
-
-
-
- Already tested on Linux and Windows
-
-
-
-
-
-
Usage
-
To configure a standby
server to use
pg_standby>, put this into its
where archiveDir> is the directory from which WAL segment
files should be restored.
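Assuming the target file is recovery.conf, a minimal entry would be something like:

```
restore_command = 'pg_standby archiveDir %f %p %r'
```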
- The full syntax of
pg_standby>'s command line is
-
-pg_standby option> ... archivelocation> nextwalfile> xlogfilepath> restartwalfile>
-
- When used within restore_command>, the %f> and
- %p> macros should be specified for nextwalfile>
- and xlogfilepath> respectively, to provide the actual file
- and path required for the restore.
-
If restartwalfile> is specified, normally by using the
%r macro, then all WAL files logically preceding this
- sect2 >
+ refsect1 >
- <sect2 >
+ <refsect1 >
+
Options
pg_standby accepts the following command-line arguments:
-
+
+
+
+
Notes
+
+
pg_standby is designed to work with
+
PostgreSQL> 8.2 and later.
+
+
PostgreSQL> 8.3 provides the %r macro,
+ which is designed to let
pg_standby know the
+ last file it needs to keep. With
PostgreSQL> 8.2, the
+ -k option must be used if archive cleanup is
+ required. This option remains available in 8.3, but its use is deprecated.
+
+
PostgreSQL> 8.4 provides the
+ recovery_end_command> option. Without this option
+ a leftover trigger file can be hazardous.
+
+
+
 pg_standby is written in C and has
+ easy-to-modify source code, with specifically designated sections to modify
+ for your own needs.
+
+
- <sect2 >
+ <refsect1 >
Examples
On Linux or Unix systems, you might use:
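A hypothetical entry (paths and options are placeholders): -d enables debug output, -s sets the sleep interval between archive checks, and -t names a trigger file for failover.

```
restore_command = 'pg_standby -d -s 2 -t /tmp/pgsql.trigger.5442 /mnt/server/archiverdir %f %p %r 2>>standby.log'
```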
network.
-
-
-
-
Supported Server Versions
+
-
pg_standby is designed to work with
-
PostgreSQL> 8.2 and later.
-
-
PostgreSQL> 8.3 provides the %r macro,
- which is designed to let
pg_standby know the
- last file it needs to keep. With
PostgreSQL> 8.2, the
- -k option must be used if archive cleanup is
- required. This option remains available in 8.3, but its use is deprecated.
-
-
PostgreSQL> 8.4 provides the
- recovery_end_command> option. Without this option
- a leftover trigger file can be hazardous.
-
-
-
-
+
Author
-
+
+
+
+
See Also
-
+
+
+
+
+
-
-
pg_test_fsync
+
+
+ 1
+ Application
+
+
+
+ pg_test_fsync
+
determine fastest wal_sync_method for PostgreSQL
+
+
+
+ pg_test_fsync
+
+
+
+
+
Description
+
pg_test_fsync> is intended to give you a reasonable
idea of what the fastest is on your
since many database servers are not speed-limited by their transaction
logs.
+
-
-
Usage
-
-
-pg_test_fsync [options]
-
+
+
Options
pg_test_fsync accepts the following
- sect2 >
+ refsect1 >
- <sect2 >
+ <refsect1 >
Author
-
+
+
+
+
See Also
-
+
+
+
+
+
-
-
pg_test_timing
+
+
+ 1
+ Application
+
+
+
+ pg_test_timing
+ measure timing overhead
+
+
+
+ pg_test_timing
+
+
+
+
+
Description
+
pg_test_timing> is a tool to measure the timing overhead
on your system and confirm that the system time never moves backwards.
Systems that are slow to collect timing data can give less accurate
EXPLAIN ANALYZE results.
+
-
-
Usage
-
-
-pg_test_timing [options]
-
+
+
Options
pg_test_timing accepts the following
- sect2 >
+ refsect1 >
-
+
+
Usage
+
+
Interpreting results
one microsecond (usec).
-
-
+ refsect2>
+ <refsect2>
Measuring executor timing overhead
timing overhead would be less problematic.
-
-
+
+
+
Changing time sources
On some newer Linux systems, it's possible to change the clock
1: 27694571 90.23734%
-
-
+ refsect2>
+ <refsect2>
Clock hardware and timing accuracy
Programmable Interrupt Controller (APIC) timer, and the Cyclone
timer. These timers aim for millisecond resolution.
-
+
+
- <sect2 >
+ <refsect1 >
Author
-
+
+
+
+
See Also
-
+
+
+
+
+
-
-
pg_upgrade
+
+
+ 1
+ Application
+
+
+
+ pg_upgrade
+
upgrade a PostgreSQL server instance
+
+
+
+ pg_upgrade
+
+
+
+
+
Description
+
pg_upgrade> (formerly called pg_migrator>) allows data
stored in
PostgreSQL> data files to be upgraded to a later PostgreSQL>
be checked by
pg_upgrade>.
-
-
Supported Versions
-
pg_upgrade supports upgrades from 8.3.X and later to the current
major release of
PostgreSQL>, including snapshot and alpha releases.
-
+
-
-
-
+
+
Options
pg_upgrade accepts the following command-line arguments:
- sect2 >
+ refsect1 >
-
-
Upgrade Steps
+
+
Usage
+
+ These are the steps to perform an upgrade:
+
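Once the old and new clusters are prepared, the core invocation looks something like this sketch (all paths are hypothetical; -b/-B name the old and new bin directories, -d/-D the old and new data directories):

```shell
pg_upgrade -b /usr/lib/postgresql/old/bin -B /usr/lib/postgresql/new/bin \
           -d /var/lib/postgresql/old/data -D /var/lib/postgresql/new/data
```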
-
+
+
+
+
Notes
+
+
pg_upgrade> does not support upgrading of databases
+ containing these reg*> OID-referencing system data types:
+ regproc>, regprocedure>, regoper>,
+ regoperator>, regconfig>, and
+ regdictionary>. (regtype> can be upgraded.)
+
+
+ All failure, rebuild, and reindex cases will be reported by
+
pg_upgrade> if they affect your installation;
+ post-upgrade scripts to rebuild tables and indexes will be
+ generated automatically.
+
+
+ For deployment testing, create a schema-only copy of the old cluster,
+ insert dummy data, and upgrade that.
+
+
+ If you are upgrading a pre-
PostgreSQL> 9.2 cluster
+ that uses a configuration-file-only directory, you must pass the
+ real data directory location to
pg_upgrade>, and
+ pass the configuration directory location to the server, e.g.
+ -d /real-data-directory -o '-D /configuration-directory'>.
+
+
+ If you want to use link mode and you do not want your old cluster
+ to be modified when the new cluster is started, make a copy of the
+ old cluster and upgrade that in link mode. To make a valid copy
+ of the old cluster, use rsync> to create a dirty
+ copy of the old cluster while the server is running, then shut down
+ the old server and run rsync> again to update the copy with any
+ changes to make it consistent.
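The two-pass copy described above might look like this sketch (paths are hypothetical):

```shell
# First pass: dirty copy taken while the old server is still running.
rsync -a /old-cluster/ /old-cluster-copy/
# ... shut down the old server here ...
# Second pass: pick up any changes, making the copy consistent.
rsync -a /old-cluster/ /old-cluster-copy/
```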
+
-
-
Limitations in Upgrading from> PostgreSQL 8.3
+ <refsect2>
+
Limitations in Upgrading from PostgreSQL 8.3
- Upgrading from PostgreSQL 8.3 has additional restrictions not present
+ Upgrading from PostgreSQL 8.3 has additional restrictions not present
when upgrading from later PostgreSQL releases. For example,
pg_upgrade will not work for upgrading from 8.3 if a user column
is defined as:
possible to upgrade from the MSI installer to the one-click installer.
-
-
-
-
Notes
-
-
pg_upgrade> does not support upgrading of databases
- containing these reg*> OID-referencing system data types:
- regproc>, regprocedure>, regoper>,
- regoperator>, regconfig>, and
- regdictionary>. (regtype> can be upgraded.)
-
-
- All failure, rebuild, and reindex cases will be reported by
-
pg_upgrade> if they affect your installation;
- post-upgrade scripts to rebuild tables and indexes will be
- generated automatically.
-
+
- For deployment testing, create a schema-only copy of the old cluster,
- insert dummy data, and upgrade that.
-
-
- If you are upgrading a pre-
PostgreSQL> 9.2 cluster
- that uses a configuration-file-only directory, you must pass the
- real data directory location to
pg_upgrade>, and
- pass the configuration directory location to the server, e.g.
- -d /real-data-directory -o '-D /configuration-directory'>.
-
-
- If you want to use link mode and you do not want your old cluster
- to be modified when the new cluster is started, make a copy of the
- old cluster and upgrade that in link mode. To make a valid copy
- of the old cluster, use rsync> to create a dirty
- copy of the old cluster while the server is running, then shut down
- the old server and run rsync> again to update the copy with any
- changes to make it consistent.
-
+
-
+
+
See Also
-
+
+
+
+
+
+
+
+
-
-
vacuumlo
+
+
+ 1
+ Application
+
+
+
+ vacuumlo
+
remove orphaned large objects from a PostgreSQL database
+
+
+
+ vacuumlo
+
+
+
+
+
Description
+
vacuumlo> is a simple utility program that will remove any
orphaned> large objects from a
to avoid creating orphaned LOs in the first place.
-
-
Usage
-
-
-vacuumlo [options] database [database2 ... databaseN]
-
-
- All databases named on the command line are processed. Available options
- include:
+ All databases named on the command line are processed.
+
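A hypothetical dry run against one database, combining the options described below:

```shell
# -n: only report what would be removed; -v: verbose progress messages.
vacuumlo -n -v mydb
```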
-
-
- -v
-
-
Write a lot of progress messages.
-
-
+
+
Options
+
- -n >
+ -h hostname>
-
Don't remove anything, just show what would be done.
+
Database server's host.
+
+ -n
+
+
Don't remove anything, just show what would be done.
+
+
+
+
+ -p port>
+
+
Database server's port.
+
+
+
-U username>
+
+ -v
+
+
Write a lot of progress messages.
+
+
+
-w>
--no-password>
-
-
- -h hostname>
-
-
Database server's host.
-
-
-
-
- -p port>
-
-
Database server's port.
-
-
- sect2 >
+ refsect1 >
- <sect2 >
-
Method
+ <refsect1 >
+
Notes
+
vacuumlo works by the following method:
First,
vacuumlo> builds a temporary table which contains all
- of the OIDs of the large objects in the selected database.
-
-
- It then scans through all columns in the database that are of type
- oid> or lo>, and removes matching entries from the
- temporary table. (Note: only types with these names are considered;
- in particular, domains over them are not considered.)
-
-
- The remaining entries in the temporary table identify orphaned LOs.
- These are removed.
+ of the OIDs of the large objects in the selected database. It then scans
+ through all columns in the database that are of type
+ oid> or lo>, and removes matching entries from the temporary
+ table. (Note: Only types with these names are considered; in particular,
+ domains over them are not considered.) The remaining entries in the
+ temporary table identify orphaned LOs. These are removed.
- sect2 >
+ refsect1 >
- <sect2 >
+ <refsect1 >
Author
- sect2 >
+ refsect1 >
-sect1 >
+refentry >