HREF="http://www.PostgreSQL.org/docs/awbook.html">
http://www.PostgreSQL.org/docs/awbook.html
.
psql has some nice \d commands to show information about types,
operators, functions, aggregates, etc.
-The web site contains even more documentation.
+Our web site contains even more documentation.
1.9) How do I find out about known bugs or missing features?
Third, submit high-quality patches to pgsql-patches.
There are about a dozen people who have commit privileges to
-the PostgreSQL CVS archive. All of them have submitted so many
+the PostgreSQL CVS archive. They each have submitted so many
high-quality patches that it was a pain for the existing
committers to keep up, and we had confidence that patches they
committed were likely to be of high quality.
commercial databases, though in this mode, an OS crash could cause data
corruption. We are working to provide an intermediate mode that suffers
less performance overhead than full fsync mode, and will allow data
-integrity within 30 seconds of an OS crash. The mode is select-able by
-the database administrator.
+integrity within 30 seconds of an OS crash.
In comparison to MySQL or leaner database systems, we are slower on
inserts/updates because we have transaction overhead. Of course, MySQL
There are two ODBC drivers available, PsqlODBC and OpenLink ODBC.
PsqlODBC is included in the distribution. More information about it can
-ftp://ftp.PostgreSQL.org/pub/odbc/index.html
+ftp://ftp.PostgreSQL.org/pub/odbc/
.
OpenLink ODBC can be obtained from
http://www.openlinksw.com. It works with their standard ODBC client
http://www.phone.net/home/mwm/hotlist/.
-For web integration, PHP is an excellent interface. It is at:
+For web integration, PHP is an excellent interface. It is at
PHP is great for simple stuff, but for more complex cases, many

3.1) Why does initdb fail?
Try these:
3.4) When I try to start the postmaster, I get IpcMemoryCreate errors. Why?
-You either do not have shared memory configured properly in kernel or
+You either do not have shared memory configured properly in your kernel or
you need to enlarge the shared memory available in the kernel. The
exact amount you need depends on your architecture and how many buffers
and backend processes you configure postmaster to run with.
from the local machine. To enable TCP/IP connections, make sure the
postmaster has been started with the -i option, and add an
appropriate host entry to the file
-pgsql/data/pg_hba.conf. See the
-pg_hba.conf manual page.
+pgsql/data/pg_hba.conf.
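For illustration only (the address and authentication method here are made-up examples; see the pg_hba.conf manual page for the real field meanings), a TCP/IP entry looks something like:

```
# allow TCP/IP connections from one trusted host to all databases
host    all    192.168.0.10    255.255.255.255    trust
```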
3.8) Why can't I access the database as the root
If you are doing a lot of INSERTs, consider doing them in a large
-batch using the COPY command. This is much faster than single
+batch using the COPY command. This is much faster than
individual INSERTs. Second, statements not in a BEGIN
WORK/COMMIT transaction block are considered to be in their
own transaction. Consider performing several statements in a single
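As a sketch (the table, columns, and file path are hypothetical), the two faster alternatives look like:

```sql
-- fastest: bulk-load many rows at once from a file
COPY mytable FROM '/tmp/mytable.data';

-- next best: batch individual INSERTs into one transaction
BEGIN WORK;
INSERT INTO mytable VALUES (1, 'one');
INSERT INTO mytable VALUES (2, 'two');
COMMIT;
```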
Several tuning options are available. You can disable
fsync() by starting the postmaster with a -o -F option. This will
prevent
fsync()'s from flushing to disk after every transaction.
You can also use the postmaster -B option to increase the number of
shared memory buffers used by the backend processes. If you make this
-parameter too high, the postmaster may not start up because you've exceeded
+parameter too high, the postmaster may not start because you've exceeded
your kernel's limit on shared memory space.
Each buffer is 8K and the default is 64 buffers.
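So the default buffer cache footprint works out as follows; a larger -B value (the 256 here is just an illustration) scales linearly:

```
default:  64 buffers × 8K = 512K of shared buffers
-B 256:  256 buffers × 8K = 2048K (2MB) of shared buffers
```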
of memory used by the backend process for temporary sorts. The -S value
is measured in kilobytes, and the default is 512 (ie, 512K).
-You can also use the CLUSTER command to group data in base tables to
+You can also use the CLUSTER command to group data in tables to
match an index. See the cluster(l) manual page for more details.
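For example (the index and table names are hypothetical):

```sql
-- physically reorder mytable to match the order of index myindex
CLUSTER myindex ON mytable;
```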
-3.11) What debugging features are available in
+3.11) What debugging features are available?
PostgreSQL has several features that report status information that can
be valuable for debugging purposes.
assert()'s monitor the progress of the backend and halt the program when
something unexpected occurs.
Both postmaster and postgres have several debug options available.
First, whenever you start the postmaster, make sure you send the
standard output and error to a log file, like:
cd /usr/local/pgsql
This will put a server.log file in the top-level PostgreSQL directory.
This file contains useful information about problems or errors
encountered by the server. Postmaster has a -d option that allows even
more detailed information to be reported. The -d option takes a number
that specifies the debug level. Be warned that high debug level values
generate large log files.
If the postmaster is not running, you can actually run the
postgres backend from the command line, and type your SQL statement
directly. This is recommended only for debugging purposes. Note
that a newline terminates the query, not a semicolon. If you have
compiled with debugging symbols, you can use a debugger to see what is
happening. Because the backend was not started from the postmaster, it
is not running in an identical environment and locking/backend
interaction problems may not be duplicated.
startup to delay for n seconds so you can attach with the
debugger and trace through the startup sequence.
The postgres program has -s, -A, and -t options that can be very useful
for debugging and performance measurements.
You can also compile with profiling to see what functions are taking
execution time. The backend profile files will be deposited in the
pgsql/data/base/dbname directory. The client profile file will be put
in the client's current directory.
In PostgreSQL 6.5 and up, the default limit is 32 processes. You can
increase it by restarting the postmaster with a suitable -N
value. With the default configuration you can set -N as large as
-1024; if you need more, increase MAXBACKENDS in
+1024. If you need more, increase MAXBACKENDS in
include/config.h and rebuild. You can set the default value of
-N at configuration time, if you like, using configure's
--with-maxbackends switch.
processes, NPROC, the maximum number of processes per
user, MAXUPRC, and the maximum number of open files,
NFILE and NINODE. The reason that PostgreSQL
-has a limit on the number of allowed backend processes is so that you
-can ensure that your system won't run out of resources.
+has a limit on the number of allowed backend processes is to ensure
+your system won't run out of resources.
In PostgreSQL versions prior to 6.5, the maximum number of backends was
64, and changing it required a rebuild after altering the MaxBackendId
constant in
include/storage/sinvaladt.h.
-3.13) What are the pg_tempNNN.NN files in my
+3.13) What are the pg_sorttempNNN.NN files in my
They are temporary files generated by the query executor. For
example, if a sort needs to be done to satisfy an ORDER BY, and
the sort requires more space than the backend's -S parameter allows,
then temp files are created to hold the extra data.
-The temp files should go away automatically, but might not if a backend
-crashes during a sort. If you have no transactions running at the time,
+The temp files should be deleted automatically, but might not if a backend
+crashes during a sort. If you have no backends running at the time,
it is safe to delete the pg_tempNNN.NN files.
-4.1) The system seems to be confused about
+4.1) Why is the system confused about
commas, decimal points, and date formats.
-Check your locale configuration. PostgreSQL uses the locale settings of
+Check your locale configuration. PostgreSQL uses the locale setting of
the user that ran the postmaster process. There are postgres and psql
SET commands to control the date format. Set those accordingly for
your operating environment.
or the entire query may have to be evaluated until the desired rows have
-
-4.4) How do I get a list of tables, or other
-information I see in
-psql?
+
+4.4) How do I get a list of tables or other
+things I can see in
+psql?
-You can read the source code for psql, file
-pgsql/src/bin/psql/psql.c. It contains SQL commands that generate the
+You can read the source code for psql in file
+pgsql/src/bin/psql/psql.c. It contains SQL commands that generate the
output for psql's backslash commands. You can also start psql
-with the -E option so that it will print out the queries it uses
+with the -E option so it will print out the queries it uses
to execute the commands you give.
4.7) How much database disk space is required to
-store data from a typical flat file?
+store data from a typical text file?
-A PostgreSQL database can require about six and a half times the disk space
+A PostgreSQL database may need about six and a half times the disk space
required to store the data in a flat file.
Consider a file of 300,000 lines with two integers on each line. The
1755 database pages * 8192 bytes per page = 14,376,960 bytes (14MB)
-Indexes do not contain as much overhead, but do contain the data that is
+Indexes do not require as much overhead, but do contain the data that is
being indexed, so they can be large also.
4.8) How do I find out what indices or
4.9) My queries are slow or don't make
-PostgreSQL does not automatically maintain statistics. One has to make
-an explicit VACUUM call to update the statistics. After
+PostgreSQL does not automatically maintain statistics. VACUUM
+must be run to update the statistics. After
statistics are updated, the optimizer knows how many rows are in the table,
and can better decide if it should use indices. Note that the optimizer
does not use indices in cases when the table is small because a
4.12) What is Genetic Query
-The GEQO module in PostgreSQL is intended to solve the query
-optimization problem of joining many tables by means of a Genetic
+The GEQO module speeds query
+optimization when joining many tables by means of a Genetic
Algorithm (GA). It allows the handling of large join queries through
-For further information see the documentation.
-
-
-
4.13) How do I do regular expression searches and
case-insensitive regular expression searching?
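For example, using the ~ (case-sensitive) and ~* (case-insensitive) regular expression operators on a hypothetical table:

```sql
SELECT * FROM person WHERE name ~  '^Bl';   -- matches 'Blaise'
SELECT * FROM person WHERE name ~* '^bl';   -- matches 'Blaise' and 'blaise'
```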
serial/auto-incrementing field?
-PostgreSQL supports SERIAL data type. It auto-creates a
+PostgreSQL supports a SERIAL data type. It auto-creates a
sequence and index on the column. For example, this:
CREATE TABLE person (
4.16.2) How do I get back the generated SERIAL value after an insert?
-Probably the simplest approach is to to retrieve the next SERIAL value from the sequence object with the
-nextval() function
-before inserting and then insert it explicitly. Using the example table in
-4.16.1, that might look like this:
+
+One approach is to retrieve the next SERIAL value from the sequence object with the
+nextval() function
+before inserting and then insert it explicitly. Using the example table in
+4.16.1, that might look like this:
$newSerialID = nextval('person_id_seq');
INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');
You would then also have the new value stored in $newSerialID
for use in other queries (e.g., as a foreign key to the person
table). Note that the automatically-created SEQUENCE object will be named <table>_<serialcolumn>_seq, where table and serialcolumn are the names of your table and your SERIAL column, respectively.
-Similarly, you could retrieve the just-assigned SERIAL value with the currval() function after it was inserted by default, e.g.,
+Alternatively, you could retrieve the just-assigned SERIAL value with the currval() function after it was inserted by default, e.g.,
INSERT INTO person (name) VALUES ('Blaise Pascal');
$newID = currval('person_id_seq');
INSERT statement to look up the default value, though this is probably
the least portable approach. In perl, using DBI with Edmund Mergl's
DBD::Pg module, the oid value is made available via
$sth->{pg_oid_status} after $sth->execute().
4.16.3) Don't currval() and nextval() lead to a race condition with other
concurrent backend processes?
-No. That has been handled by the backends.
+No. This is handled by the backends.
4.17) What is an oid? What is a tid?
-Oids are PostgreSQL's answer to unique row ids. Every row that is
-created in PostgreSQL gets a unique oid. All oids generated during
-initdb are less than 16384 (from backend/access/transam.h). All
-user-created oids are equal or greater that this. By default, all these
-oids are unique not only within a table, or database, but unique within
+OIDs are PostgreSQL's answer to unique row ids. Every row that is
+created in PostgreSQL gets a unique oid. All oids generated during
+initdb are less than 16384 (from backend/access/transam.h). All
+user-created oids are equal to or greater than this. By default, all these
+oids are unique not only within a table or database, but within
the entire PostgreSQL installation.
-PostgreSQL uses oids in its internal system tables to link rows between
-tables. These oids can be used to identify specific user rows and used
-in joins. It is recommended you use column type oid to store oid
-values. See the sql(l) manual page to see the other internal columns.
-You can create an index on the oid field for faster access.
+PostgreSQL uses oids in its internal system tables to link rows between
+tables. These oids can be used to identify specific user rows and used
+in joins. It is recommended you use column type oid to store oid
+values. You can create an index on the
+oid field for faster access.
-Oids are assigned to all new rows from a central area that is used by
-all databases. If you want to change the oid to something else, or if
-you want to make a copy of the table, with the original oid's, there is
+Oids are assigned to all new rows from a central area that is used by
+all databases. If you want to change the oid to something else, or if
+you want to make a copy of the table with the original oids, there is
no reason you can't do it:
CREATE TABLE new_table(old_oid oid, mycol int);
- SELECT INTO new SELECT old_oid, mycol FROM old;
+ INSERT INTO new_table SELECT old_oid, mycol FROM old;
COPY new_table TO '/tmp/pgtable';
DELETE FROM new_table;
COPY new_table WITH OIDS FROM '/tmp/pgtable';
around any use of a large object handle, that is,
surrounding
lo_open
...
lo_close.
-Current PostgreSQL enforces the rule by closing large object handles at
-transaction commit, which will be instantly upon completion of the
-lo_open command if you are not inside a transaction. So the
-first attempt to do anything with the handle will draw invalid large
-obj descriptor. So code that used to work (at least most of the
-time) will now generate that error message if you fail to use a
+Currently PostgreSQL enforces the rule by closing large object handles
+at transaction commit, so the first attempt to do anything with the
+handle will draw invalid large obj descriptor. Code that used
+to work (at least most of the time) will now generate that error message
+if you fail to use a transaction.
If you are using a client interface like ODBC you may need to set
5.3) How can I contribute some nifty new types and
-functions for PostgreSQL?
+functions to PostgreSQL?
Send your extensions to the pgsql-hackers mailing list, and they will