linkend="app-pgrestore"> reference pages for details.
To dump
large objects you must use either the custom or the tar output
format, and use the
- pg_dump>. See the reference pages for details. The
+ pg_dump>. See the reference
+ page for details. The
directory contrib/pg_dumplo> of the
PostgreSQL> source tree also contains a program
that can dump large objects.
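As a sketch, a large-object-capable dump and restore might look like this (the database name mydb is a hypothetical example):

```shell
# Custom-format dump (-Fc) with large objects included via -b;
# the database name "mydb" is a hypothetical example
pg_dump -Fc -b mydb > mydb.dump

# pg_restore understands the custom format and recreates the large objects
pg_restore -d mydb mydb.dump
```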
data files and WAL log on different disks) there may not be any way
to obtain exactly-simultaneous frozen snapshots of all the volumes.
Read your file system documentation very carefully before trusting
- to the consistent-snapshot technique in such situations.
+ to the consistent-snapshot technique in such situations. The safest
+ approach is to shut down the database server for long enough to
+ establish all the frozen snapshots.
modifications made to the data in your
PostgreSQL> database
it will not restore changes made to configuration files (that is,
postgresql.conf>, pg_hba.conf> and
- pg_ident.conf>) after the initial base backup.
+ pg_ident.conf>), since those are edited manually rather
+ than through SQL operations.
You may wish to keep the configuration files in a location that will
- be backed up by your regular file system backup procedures.
+ be backed up by your regular file system backup procedures. See
+ for how to relocate the
+ configuration files.
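As a sketch, backing up the manually edited configuration files might be done with a small helper like this; all paths are hypothetical examples:

```shell
# Copy the manually edited configuration files into a directory that the
# regular file system backup procedure covers; the example paths in the
# comments are hypothetical.
backup_pg_config() {
    confdir=$1       # e.g. /usr/local/pgsql/data
    backupdir=$2     # e.g. /var/backups/pgconf
    mkdir -p "$backupdir" || return 1
    for f in postgresql.conf pg_hba.conf pg_ident.conf; do
        cp "$confdir/$f" "$backupdir/" || return 1
    done
}

# Example call (hypothetical paths):
# backup_pg_config /usr/local/pgsql/data /var/backups/pgconf
```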
in the command.
- It is important for the command to return a zero exit status only
- if it succeeds. The command will> be asked for file
+ It is important for the command to return a zero exit status if and
+ only if it succeeds. The command will> be asked for file
names that are not present in the archive; it must return nonzero
when so asked. Examples:
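One way such a command could be structured, as a sketch (the archive directory layout is a hypothetical example):

```shell
# Sketch of a restore_command helper: it must exit 0 only if the requested
# file was actually restored.  The server deliberately asks for files that
# are not in the archive, and expects a nonzero status then.
restore_wal() {
    archivedir=$1    # e.g. /mnt/server/archivedir (hypothetical)
    walfile=$2       # file name the server asked for (%f)
    dest=$3          # path to restore to (%p)
    test -f "$archivedir/$walfile" || return 1   # not in archive: nonzero
    cp "$archivedir/$walfile" "$dest"            # cp exits 0 only on success
}
```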
that was current when the base backup was taken. If you want to recover
into some child timeline (that is, you want to return to some state that
was itself generated after a recovery attempt), you need to specify the
- target timeline in recovery.conf>. You cannot recover into
+ target timeline ID in recovery.conf>. You cannot recover into
timelines that branched off earlier than the base backup.
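A recovery.conf selecting a child timeline might look like this sketch; the timeline ID and archive path are hypothetical examples:

```
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_timeline = '2'
```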
compatibility
+ This section discusses how to migrate your database data from one
+ PostgreSQL> release to a newer one.
+ The software installation procedure per se> is not the
+ subject of this section; those details are in .
+
+
As a general rule, the internal data storage format is subject to
change between major releases of
PostgreSQL> (where
storage formats. For example, releases 7.0.1, 7.1.2, and 7.2 are
not compatible, whereas 7.1.1 and 7.1.2 are. When you update
between compatible versions, you can simply replace the executables
- and reuse the data area on disk. Otherwise you need to back
- up> your data and restore> it on the new server, using
- pg_dump>. There are checks in place that prevent you
- from using a data area with an incompatible version of
- PostgreSQL, so no harm can be done by
- confusing these things. It is recommended that you use the
- pg_dump> program from the newer version of
- PostgreSQL> to take advantage of any enhancements in
- pg_dump> that may have been made. The precise
- installation procedure is not the subject of this section; those
- details are in .
+ and reuse the data directory on disk. Otherwise you need to back
+ up your data and restore it on the new server. This has to be done
+ using pg_dump>; file system level backup methods
+ obviously won't work. There are checks in place that prevent you
+ from using a data directory with an incompatible version of
+ PostgreSQL, so no great harm can be done by
+ trying to start the wrong server version on a data directory.
+
+
+ It is recommended that you use the pg_dump> and
+ pg_dumpall> programs from the newer version of
+ PostgreSQL>, to take advantage of any enhancements
+ that may have been made in these programs. Current releases of the
+ dump programs can read data from any server version back to 7.0.
to transfer your data. Or use an intermediate file if you want.
Then you can shut down the old server and start the new server at
the port the old one was running at. You should make sure that the
- database is not updated after you run pg_dumpall>,
+ old database is not updated after you run pg_dumpall>,
otherwise you will obviously lose that data. See
linkend="client-authentication"> for information on how to prohibit
- access. In practice you probably want to test your client
- applications on the new setup before switching over.
+ access.
+
+
+ In practice you probably want to test your client
+ applications on the new setup before switching over completely.
+ This is another reason for setting up concurrent installations
+ of old and new versions.
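With old and new servers installed concurrently, the transfer can be done with a pipe; a sketch, with illustrative port numbers:

```shell
# Old server on its usual port 5432, new server temporarily on 6543
# (both port numbers are illustrative)
pg_dumpall -p 5432 | psql -d postgres -p 6543
```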
you of strategic places to perform these steps.
- You will always need a SQL dump (pg_dump> dump) for
- migrating to a new release. File-system-level backups (including
- on-line backups) will not work, for the same reason that you can't
- just do the update in-place: the file formats won't necessarily be
- compatible across major releases.
-
-
When you move the old installation out of the way
- it is no longer perfectly usable. Some parts of the installation
- contain information about where the other parts are located. This
- is usually not a big problem but if you plan on using two
+ it may no longer be perfectly usable. Some of the executable programs
+ contain absolute paths to various installed programs and data files.
+ This is usually not a big problem but if you plan on using two
installations in parallel for a while you should assign them
- different installation directories at build time.
+ different installation directories at build time. (This problem
+ is rectified in PostgreSQL> 8.0 and later, but you
+ need to be wary of moving older installations.)
which is used to store values too wide to fit comfortably in the main
table. There will be one index on the
TOAST> table, if present. There may also be indexes associated
- with the base table.
+ with the base table. Each table and index is stored in a separate disk
+ file — possibly more than one file, if the file would exceed one
+ gigabyte. Naming conventions for these files are described in
+ linkend="file-layout">.
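For example, one way to see the approximate on-disk size of a table from SQL, as a sketch (the table name is hypothetical):

```shell
# relpages is measured in disk blocks (normally 8 kB each);
# "mytable" is a hypothetical table name
psql -c "SELECT relname, relfilenode, relpages FROM pg_class WHERE relname = 'mytable';"
```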
not wait until the disk is completely full to take action.
+
+ If your system supports per-user disk quotas, then the database
+ will naturally be subject to whatever quota is placed on the user
+ the server runs as. Exceeding the quota will have the same bad
+ effects as running out of space entirely.
+
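A minimal sketch of a free-space check for the data directory; the threshold and the example path are arbitrary:

```shell
# Check that the file system holding the data directory still has at
# least threshold_kb kilobytes available (threshold and path arbitrary).
check_free_space() {
    datadir=$1
    threshold_kb=$2
    avail_kb=$(df -Pk "$datadir" | awk 'NR==2 {print $4}')
    test "$avail_kb" -ge "$threshold_kb"
}

# Example: complain when less than about 1 GB is left (hypothetical path)
# check_free_space /usr/local/pgsql/data 1048576 || echo "disk space low"
```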
analyzing performance. Most of this chapter is devoted to describing
PostgreSQL's statistics collector,
but one should not neglect regular Unix monitoring programs such as
- ps> and top>. Also, once one has identified a
+ ps>, top>, iostat>, and vmstat>.
+ Also, once one has identified a
poorly-performing query, further investigation may be needed using
PostgreSQL's
endterm="sql-explain-title"> command.
Viewing Collected Statistics
- Several predefined views are available to show the results of
- statistics collection, listed in
- linkend="monitoring-stats-views-table">. Alternatively, one can
+ Several predefined views, listed in
+ linkend="monitoring-stats-views-table">, are available to show the results
+ of statistics collection. Alternatively, one can
build custom views using the underlying statistics functions.
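For instance, one of the predefined views can be inspected directly from psql; a sketch:

```shell
# One row per server process, with current-activity information
# (exact column set varies between versions)
psql -c "SELECT * FROM pg_stat_activity;"
```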
When using the statistics to monitor current activity, it is important
to realize that the information does not update instantaneously.
- Each individual server process transmits new access counts to the collector
- just before waiting for another client command; so a query still in
+ Each individual server process transmits new block and row access counts to
+ the collector just before going idle; so a query or transaction still in
progress does not affect the displayed totals. Also, the collector itself
- emits new totals at most once per pgstat_stat_interval milliseconds
- (500 by default). So the displayed totals lag behind actual activity.
+ emits a new report at most once per pgstat_stat_interval
+ milliseconds (500 by default). So the displayed information lags behind
+ actual activity. Current-query information is reported to the collector
+ immediately, but is still subject to the
+ pgstat_stat_interval delay before it becomes visible.
Another important point is that when a server process is asked to display
- any of these statistics, it first fetches the most recent totals emitted by
+ any of these statistics, it first fetches the most recent report emitted by
the collector process and then continues to use this snapshot for all
statistical views and functions until the end of its current transaction.
So the statistics will appear not to change as long as you continue the
database administrator to view information about the outstanding
locks in the lock manager. For example, this capability can be used
to:
-
+
-
+
Regression Tests
The regression tests are a comprehensive set of tests for the SQL
implementation in
PostgreSQL. They test
standard SQL operations as well as the extended capabilities of
- PostgreSQL 6.1 onward, the regression
- tests are current for every official release.
Running the Tests
- The regression test can be run against an already installed and
+ The regression tests can be run against an already installed and
running server, or using a temporary installation within the build
tree. Furthermore, there is a parallel
and a
sequential
mode for running the tests. The
======================
- All 93 tests passed.
+ All 96 tests passed.
======================
or otherwise a note about which tests failed. See
- linkend="regress-evaluation"> below for more.
+ linkend="regress-evaluation"> below before assuming that a
+ failure> represents a serious problem.
server, , ]]> then type
gmake installcheck
+
+or for a parallel test
+
+gmake installcheck-parallel
The tests will expect to contact the server at the local host and the
- default port number, unless directed otherwise by PGHOST and PGPORT
- environment variables.
+ default port number, unless directed otherwise by the PGHOST and
+ PGPORT environment variables.
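As a sketch, the tests can be directed at a server elsewhere through the environment (host name and port are illustrative):

```shell
# Host name and port are illustrative examples
PGHOST=myhost.example.com PGPORT=5433 gmake installcheck
```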
diff results/random.out expected/random.out
should produce only one or a few lines of differences. You need
- not worry unless the random test repeatedly fails.
+ not worry unless the random test fails repeatedly.