class="parameter">dbname will not be created by this
- command, you must do that yourself before executing
-
psql> (e.g., with createdb
- class="parameter">dbname>).
psql>
- supports similar options to
pg_dump> for
- controlling the database server location and the user names. See
+ command, you must create it yourself from template0 before executing
+
psql> (e.g., with createdb -t template0
+ dbname>).
+
psql> supports similar options to pg_dump>
+ for controlling the database server location and the user names. See
its reference page for more information.
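+
+ For example (the database name newdb and the file name db.out below are
+ only placeholders), a restore from a plain-text dump might look like:
+
+<programlisting>
+createdb -T template0 newdb
+psql newdb &lt; db.out
+</programlisting>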
+
+ <important>
+ <para>
+ The dumps produced by pg_dump are relative to template0. This means
+ that any languages, procedures, etc. added to template1 will also be
+ dumped by <application>pg_dump</application>. As a result, when restoring, if
+ you are using a customized template1, you must create the empty
+ database from template0, as in the example above.
+ </para>
+ </important>
+
+ <formalpara>
+ <title>Use the custom dump format (V7.1).</title>
+ <para>
+ If PostgreSQL was built on a system with the zlib compression library
+ installed, the custom dump format will compress data as it writes it
+ to the output file. For large databases, this will produce dump sizes
+ similar to those from gzip, but has the added advantage that the tables can be
+ restored selectively. The following command dumps a database using the
+ custom dump format:
+
+<programlisting>
+pg_dump -Fc dbname > filename
+</programlisting>
+
+ See the <application>pg_dump</application> and <application>pg_restore</application>
+ reference pages for details.
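+
+ For example (the archive name db.dump, database name newdb, and table
+ name mytable below are placeholders), restoring from such an archive,
+ either in full or selectively, might look like:
+
+<programlisting>
+pg_restore -d newdb db.dump              # restore the entire archive into the existing database newdb
+pg_restore -t mytable -d newdb db.dump   # restore only the table "mytable"
+</programlisting>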
+ </para>
+ </formalpara>
+
- Large objects are not handled by <application>pg_dump</application>. The
- directory <filename>contrib/pg_dumplo</filename> of the
- <productname>Postgres</productname> source tree contains a program that can
- do that.
+ For reasons of backward compatibility, <application>pg_dump</application> does
+ not dump large objects by default. To dump large objects you must use
+ either the custom or the TAR output format, together with the -b option
+ to <application>pg_dump</application>. See the reference pages for details.
+ The directory <filename>contrib/pg_dumplo</filename> of the
+ <productname>Postgres</productname> source tree also contains a program that can
+ dump large objects.
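+
+ For example (the names mydb, db.dump, and newdb are placeholders), dumping
+ a database together with its large objects and restoring it into an
+ existing database might look like:
+
+<programlisting>
+pg_dump -Fc -b mydb > db.dump    # -Fc selects the custom format, -b includes large objects
+pg_restore -d newdb db.dump      # restore directly into the existing database newdb
+</programlisting>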