|
- DGUX 5.4R4.11
- m88k
+ DGUX 5.4R4.11>
+ m88k>
6.3
1998-03-01, Brian E Gallew (geek+@cmu.edu)
6.4 probably OK
|
- MkLinux DR1
- PPC750
+ MkLinux DR1>
+ PPC750>
7.0
2001-04-03, Tatsuo Ishii (t-ishii@sra.co.jp)
7.1 needs OS update?
|
- NextStep
- x86
+ NextStep>
+ x86>
6.x
1998-03-01, David Wetzel (dave@turbocat.de)
bit rot suspected
|
- QNX 4.25
- x86
+ QNX 4.25>
+ x86>
7.0
2000-04-01, Dr. Andreas Kardos (kardos@repas-aeg.de)
Spinlock code needs work. See also doc/FAQ_QNX4.
|
- SCO OpenServer 5
- x86
+ SCO OpenServer 5>
+ x86>
6.5
1999-05-25, Andrew Merrill (andrew@compclass.com)
7.1 should work, but no reports; see also doc/FAQ_SCO
|
- System V R4
- m88k
+ System V R4>
+ m88k>
6.2.1
1998-03-01, Doug Winterburn (dlw@seavme.xroads.com)
- needs new TAS spinlock code
+ needs new TAS spinlock code
|
- System V R4
- MIPS
+ System V R4>
+ MIPS>
6.4
1998-10-28, Frank Ridderbusch (ridderbusch.pad@sni.de)
no 64-bit integer
|
- Ultrix
- MIPS
+ Ultrix>
+ MIPS>
7.1
2001-03-26
- TAS spinlock code not detected
+ TAS spinlock code not detected
|
- Ultrix
- VAX
+ Ultrix>
+ VAX>
6.x
1998-03-01
No recent reports. Obsolete?
|
- Windows 9x, ME, NT, 2000 (native)
- x86
+ Windows 9x, ME, NT, 2000> (native)
+ x86>
7.1
2001-03-26, Magnus Hagander (mha@sollentuna.net)
- client-side libraries (libpq and psql) or ODBC/JDBC, no server-side;
+ client-side libraries (libpq> and psql>) or ODBC/JDBC, no server-side;
]]>
for instructions
In normal
PostgreSQL operation, an UPDATE or
DELETE of a row does not immediately remove the old tuple>
(version of the row). This approach is necessary to gain the benefits
- of multi-version concurrency control (see the User's Guide): the tuple
+ of multiversion concurrency control (see the User's Guide): the tuple
must not be deleted while
it is still potentially visible to other transactions. But eventually,
an outdated or deleted tuple is no longer of interest to any transaction.
Clearly, a table that receives frequent updates or deletes will need
to be vacuumed more often than tables that are seldom updated. It may
- be useful to set up periodic cron tasks that vacuum only selected tables,
+ be useful to set up periodic cron> tasks that vacuum only selected tables,
skipping tables that are known not to change often. This is only likely
to be helpful if you have both large heavily-updated tables and large
seldom-updated tables --- the extra cost of vacuuming a small table
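Such a schedule could be sketched as crontab entries like the following (the database name, table name, and times are hypothetical, and this assumes psql can connect without a password prompt):

```
# vacuum the heavily-updated table nightly at 03:30
30 3 * * *   psql -c "VACUUM busy_table" mydb
# vacuum everything, including seldom-updated tables, once a week
45 3 * * 0   psql -c "VACUUM" mydb
```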
statistics updates if the statistical distribution of the data is not
changing much. A simple rule of thumb is to think about how much
the minimum and maximum values of the columns in the table change.
- For example, a timestamp column that contains the time of row update
+ For example, a timestamp column that contains the time of row update
will have a constantly-increasing maximum value as rows are added and
updated; such a column will probably need more frequent statistics
updates than, say, a column containing URLs for pages accessed on a
Prior to
PostgreSQL 7.2, the only defense
- against XID wraparound was to re-initdb at least every 4 billion
+ against XID wraparound was to re-initdb> at least every 4 billion
transactions. This of course was not very satisfactory for high-traffic
sites, so a better solution has been devised. The new approach allows an
- installation to remain up indefinitely, without initdb or any sort of
+ installation to remain up indefinitely, without initdb> or any sort of
restart. The price is this maintenance requirement:
- every table in the database must be VACUUMed at least once every
+ every table in the database must be vacuumed at least once every
billion transactions.
user-created databases that are to be marked datallowconn> =
false> in pg_database>, since there isn't any
convenient way to vacuum a database that you can't connect to. Note
- that VACUUM's automatic warning message about unvacuumed databases will
+ that VACUUM's automatic warning message about unvacuumed databases will
ignore pg_database> entries with datallowconn> =
false>, so as to avoid giving false warnings about these
databases; therefore it's up to you to ensure that such databases are
createdb dbname
- <filename>createdb> does no magic. It connects to the template1
+ <command>createdb> does no magic. It connects to the template1
database and executes the CREATE DATABASE> command,
exactly as described above. It uses
psql> program
- internally. The reference page on createdb contains the invocation
- details. In particular, createdb without any arguments will create
+ internally. The reference page on createdb> contains the invocation
+ details. In particular, createdb> without any arguments will create
a database with the current user name, which may or may not be what
you want.
setenv PGDATA2 /home/postgres/data
- in csh or tcsh. You have to make sure that this environment
+ in csh> or tcsh>. You have to make sure that this environment
variable is always defined in the server environment, otherwise
you won't be able to access that database. Therefore you probably
want to set it in some sort of shell start-up file or server
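For a Bourne-style shell, a start-up file entry might look like this sketch (the path matches the example above; csh users would keep the setenv form):

```
# e.g. in ~postgres/.profile, so every server start sees the variable
PGDATA2=/home/postgres/data
export PGDATA2
```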
-
+
Regression Tests
The tests will expect to contact the server at the local host and the
- default port number, unless directed otherwise by PGHOST and PGPORT
+ default port number, unless directed otherwise by PGHOST and PGPORT
environment variables.
Date and time differences
- Some of the queries in the <quote>timestampe> test will
+ Some of the queries in the <filename>timestampe> test will
fail if you run the test on the day of a daylight-savings time
changeover, or the day before or after one. These queries assume
that the intervals between midnight yesterday, midnight today and
Most of the date and time results are dependent on the time zone
environment. The reference files are generated for time zone
- PST8PDT (Berkeley, California) and there will be apparent
+ PST8PDT (Berkeley, California) and there will be apparent
failures if the tests are not run with that time zone setting.
The regression test driver sets environment variable
PGTZ to PST8PDT, which normally
ensures proper results. However, your system must provide library
- support for the PST8PDT time zone, or the time zone-dependent
+ support for the PST8PDT time zone, or the time zone-dependent
tests will fail. To verify that your machine does have this
support, type the following:
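The exact command is not reproduced in this excerpt; a common way to perform the check (assuming a standard date utility) is:

```shell
# Print the current time using the PST8PDT zone rules. "PST8PDT" is
# also a valid POSIX TZ specification, so most systems will report
# PST or PDT; a GMT result suggests the zone was not recognized.
env TZ=PST8PDT date
```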
The command above should have returned the current system time in
- the PST8PDT time zone. If the PST8PDT database is not available,
+ the PST8PDT time zone. If the PST8PDT database is not available,
then your system may have returned the time in GMT. If the
- PST8PDT time zone is not available, you can set the time zone
+ PST8PDT time zone is not available, you can set the time zone
rules explicitly:
PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ
Some systems using older time zone libraries fail to apply
daylight-savings corrections to dates before 1970, causing
- pre-1970 PDT times to be displayed in PST instead. This will
+ pre-1970 PDT times to be displayed in PST instead. This will
result in localized differences in the test results.
-You might wonder why we don't ORDER all the regress test SELECTs to
+You might wonder why we don't order all the regress test queries explicitly to
get rid of this issue once and for all. The reason is that that would
make the regression tests less useful, not more, since they'd tend
to exercise query plan types that produce ordered results to the
The test name is just the name of the particular regression test
module. The platform pattern is a pattern in the style of
- expr(1) (that is, a regular expression with an implicit
+ expr>1> (that is, a regular expression with an implicit
^ anchor
at the start). It is matched against the platform name as printed
by config.guess followed by
For example: some systems using older time zone libraries fail to apply
daylight-savings corrections to dates before 1970, causing
- pre-1970 PDT times to be displayed in PST instead. This causes a
+ pre-1970 PDT times to be displayed in PST instead. This causes a
few differences in the horology> regression test.
Therefore, we provide a variant comparison file,
horology-no-DST-before-1970.out, which includes
the results to be expected on these systems. To silence the bogus
- failure message on HPPA platforms, resultmap
+ failure message on HPPA platforms, resultmap>
includes
horology/hppa=horology-no-DST-before-1970
- which will trigger on any machine for which config.guess's output
+ which will trigger on any machine for which the output of config.guess
begins with hppa. Other lines
- in resultmap select the variant comparison file for other
+ in resultmap> select the variant comparison file for other
platforms where it's appropriate.
The previous C function manager did not
-handle NULLs properly, nor did it support 64-bit CPU's (Alpha). The new
+handle NULLs properly, nor did it support 64-bit CPUs (Alpha). The new
function manager does. You can continue using your old custom
functions, but you may want to rewrite them in the future to use the new
function manager call interface.
- Updated psql
psql, our interactive terminal monitor, has been
- updated with a variety of new features. See the psql manual page for details.
+ updated with a variety of new features. See the psql manual page for details.
The date/time types datetime and
- timespan have been superceded by the
+ timespan have been superseded by the
SQL92-defined types timestamp and
interval. Although there has been some effort to
ease the transition by allowing
This is basically a cleanup release for 6.5.2. We have added a new
- pgaccess that was missing in 6.5.2, and installed an NT-specific fix.
+ PgAccess> that was missing in 6.5.2, and installed an NT-specific fix.
pg_dump takes advantage of the new
- MVCC features to give a consistant database dump/backup while
+ MVCC features to give a consistent database dump/backup while
the database stays online and available for queries.
We continue to expand our port list, this time including
- WinNT/ix86 and NetBSD/arm32.
+ Windows NT>/ix86> and NetBSD>/arm32>.
New and updated material is present throughout the
documentation. New
FAQs have been
- contributed for SGI and AIX platforms.
+ contributed for SGI> and AIX> platforms.
The Tutorial has introductory information
on
SQL from Stefan Simkovics.
For the User's Guide, there are
Keep the above in mind if you are using
contrib/refint.* triggers for
- referential integrity. Additional technics are required now. One way is
+ referential integrity. Additional techniques are required now. One way is
to use LOCK parent_table IN SHARE ROW EXCLUSIVE MODE
command if a transaction is going to update/delete a primary key and
use LOCK parent_table IN SHARE MODE command if a
Note that if you run a transaction in SERIALIZABLE mode then you must
execute the LOCK commands above before execution of any
- DML statement (SELECT/INSERT/DELETE/UPDATE/FETCH/COPY_TO) in the
transaction.
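Put together, a serializable transaction obeying this ordering might look like the following sketch (the table name and key value are hypothetical):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- lock first: no SELECT/INSERT/DELETE/UPDATE/FETCH/COPY before this point
LOCK parent_table IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM parent_table WHERE pkey = 1;
COMMIT;
```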
-Jan also contributed a second procedural language, PL/pgSQL, to go with the
-original PL/pgTCL procedural language he contributed last release.
+Jan also contributed a second procedural language, PL/pgSQL, to go with the
+original PL/pgTCL procedural language he contributed last release.
-->
-This is a bugfix release for 6.3.x.
+This is a bug-fix release for 6.3.x.
Refer to the release notes for version 6.3 for a more complete summary of new features.
A dump/restore is NOT required for those running 6.3 or 6.3.1. A
-'make distclean', 'make', and 'make install' is all that is required.
+make distclean>, make>, and make install> is all that is required.
This last step should be performed while the postmaster is not running.
You should re-link any custom applications that use
Postgres libraries.
A dump/restore is NOT required for those running 6.3. A
-'make distclean', 'make', and 'make install' is all that is required.
+make distclean>, make>, and make install> is all that is required.
This last step should be performed while the postmaster is not running.
You should re-link any custom applications that use
Postgres libraries.
use them in the target list.
- Second, 6.3 uses unix domain sockets rather than TCP/IP by default. To
+ Second, 6.3 uses Unix domain sockets rather than TCP/IP by default. To
enable connections from other machines, you have to use the new
- postmaster -i option, and of course edit pg_hba.conf. Also, for this
- reason, the format of pg_hba.conf has changed.
+ postmaster -i option, and of course edit pg_hba.conf. Also, for this
+ reason, the format of pg_hba.conf has changed.
- Third, char() fields will now allow faster access than varchar() or
- text. Specifically, the text and varchar() have a penalty for access to
- any columns after the first column of this type. char() used to also
+ Third, char() fields will now allow faster access than varchar() or
+ text. Specifically, the text> and varchar() have a penalty for access to
+ any columns after the first column of this type. char() used to also
have this access penalty, but it no longer does. This may suggest that
you redesign some of your tables, especially if you have short character
- columns that you have defined as varchar() or text. This and other
+ columns that you have defined as varchar() or text. This and other
changes make 6.3 even faster than earlier releases.
See the Administrator's Guide for more
information. There is a new table, pg_shadow, which is used to store
user information and user passwords, and it by default only SELECT-able
- by the postgres super-user. pg_user is now a view of pg_shadow, and is
+ by the postgres super-user. pg_user is now a view of pg_shadow, and is
SELECT-able by PUBLIC. You should keep using pg_user in your
application without changes.
We also have real deadlock detection code. No more sixty-second
- timeouts. And the new locking code implements a FIFO better, so there
+ timeouts. And the new locking code implements a FIFO better, so there
should be less resource starvation during heavy use.
- Many complaints have been made about inadequate documenation in previous
+ Many complaints have been made about inadequate documentation in previous
releases. Thomas has put much effort into many new manuals for this
release. Check out the doc/ directory.
For performance reasons, time travel is gone, but can be implemented
- using triggers (see pgsql/contrib/spi/README). Please check out the new
+ using triggers (see pgsql/contrib/spi/README). Please check out the new
\d command for types, operators, etc. Also, views have their own
permissions now, not based on the underlying tables, so permissions on
- them have to be set separately. Check /pgsql/interfaces for some new
+ them have to be set separately. Check /pgsql/interfaces for some new
ways to talk to
Postgres.
Another way to avoid dump/reload is to use the following SQL command
-from psql to update the existing system table:
+from psql to update the existing system table:
update pg_aggregate set aggfinalfn = 'cash_div_flt8'
restore of the database in 6.2.
-Note that the pg_dump and pg_dumpall utility from 6.2 should be used
+Note that the pg_dump and pg_dumpall utility from 6.2 should be used
to dump the 6.1 database.
- Three new data types (datetime, timespan, and circle) have been added to
+ Three new data types (datetime, timespan, and circle) have been added to
the native set of
Postgres types. Points, boxes, paths, and polygons
- have had their output formats made consistant across the data types.
+ have had their output formats made consistent across the data types.
The polygon output in misc.out has only been spot-checked for correctness
relative to the original regression output.
The float8 regression test fails on at least some platforms. This is due
- to differences in implementations of pow() and exp() and the signaling
+ to differences in implementations of pow() and exp() and the signaling
mechanisms used for overflow and underflow conditions.
Here is a new migration file for 1.02.1. It includes the 'copy' change
-and a script to convert old ascii files.
+and a script to convert old ASCII files.
The following notes are for the benefit of users who want to migrate
-databases from postgres95 1.01 and 1.02 to postgres95 1.02.1.
+databases from Postgres95> 1.01 and 1.02 to Postgres95> 1.02.1.
-If you are starting afresh with postgres95 1.02.1 and do not need
+If you are starting afresh with Postgres95> 1.02.1 and do not need
to migrate old databases, you do not need to read any further.
-In order to upgrade older postgres95 version 1.01 or 1.02 databases to
+In order to upgrade older Postgres95> version 1.01 or 1.02 databases to
version 1.02.1, the following steps are required:
Add the new built-in functions and operators of 1.02.1 to 1.01 or 1.02
databases. This is done by running the new 1.02.1 server against
your own 1.01 or 1.02 database and applying the queries attached at
- the end of thie file. This can be done easily through psql. If your
- 1.01 or 1.02 database is named "testdb" and you have cut the commands
- from the end of this file and saved them in addfunc.sql:
+ the end of this file. This can be done easily through psql>. If your
+ 1.01 or 1.02 database is named testdb and you have cut the commands
+ from the end of this file and saved them in addfunc.sql:
% psql testdb -f addfunc.sql
Dump/Reload Procedure
-If you are trying to reload a pg_dump or text-mode 'copy tablename to
-stdout' generated with a previous version, you will need to run the
-attached sed script on the ASCII file before loading it into the
+If you are trying to reload a pg_dump or text-mode copy tablename to
+stdout generated with a previous version, you will need to run the
+attached sed script on the ASCII file before loading it into the
database. The old format used '.' as end-of-data, while '\.' is now the
end-of-data marker. Also, empty strings are now loaded in as '' rather
than NULL. See the copy manual page for full details.
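The sed script itself is not included in this excerpt; a minimal sketch of the end-of-data fix it describes (the empty-string/NULL handling is omitted, and the sample data is inlined here rather than read from a saved dump file) might be:

```shell
# Rewrite the old end-of-data marker (a lone "." line) to the new
# "\." marker expected by newer COPY input.
printf 'Fred\t28\n.\n' | sed 's/^\.$/\\./'
```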
The following notes are for the benefit of users who want to migrate
-databases from postgres95 1.0 to postgres95 1.01.
+databases from Postgres95> 1.0 to Postgres95> 1.01.
-If you are starting afresh with postgres95 1.01 and do not need
+If you are starting afresh with Postgres95> 1.01 and do not need
to migrate old databases, you do not need to read any further.
-In order to postgres95 version 1.01 with databases created with
-postgres95 version 1.0, the following steps are required:
+In order to use Postgres95> version 1.01 with databases created with
+Postgres95> version 1.0, the following steps are required:
-Set the definition of NAMEDATALEN in src/Makefile.global to 16
- and OIDNAMELEN to 20.
+Set the definition of NAMEDATALEN in src/Makefile.global to 16
+ and OIDNAMELEN to 20.
-If you do, you must create a file name "pg_hba" in your top-level data
- directory (typically the value of your $PGDATA). src/libpq/pg_hba
+If you do, you must create a file named pg_hba in your top-level data
+ directory (typically the value of your $PGDATA). src/libpq/pg_hba
shows an example syntax.
HBA = 1
- in src/Makefile.global
+ in src/Makefile.global
Note that host-based authentication is turned on by default, and if
-Compile and install 1.01, but DO NOT do the initdb step.
+Compile and install 1.01, but DO NOT do the initdb step.
Before doing anything else, terminate your 1.0 postmaster, and
- backup your existing $PGDATA directory.
+ backup your existing $PGDATA directory.
-Set your PGDATA environment variable to your 1.0 databases, but set
+Set your PGDATA environment variable to your 1.0 databases, but set
your path up so that 1.01 binaries are being used.
-Modify the file $PGDATA/PG_VERSION from 5.0 to 5.1
+Modify the file $PGDATA/PG_VERSION from 5.0 to 5.1
Add the new built-in functions and operators of 1.01 to 1.0
databases. This is done by running the new 1.01 server against
your own 1.0 database and applying the queries attached and saving
- in the file 1.0_to_1.01.sql. This can be done easily through psql.
- If your 1.0 database is name "testdb":
+ in the file 1.0_to_1.01.sql. This can be done easily through psql.
+ If your 1.0 database is named testdb:
% psql testdb -f 1.0_to_1.01.sql
04:21 Dual Pentium Pro 180, 224MB, UW-SCSI, Linux 2.0.36, gcc 2.7.2.3 -O2 -m486
- For the linux system above, using UW-SCSI disks rather than (older) IDE
+ For the Linux system above, using UW-SCSI disks rather than (older) IDE
disks leads to a 50% improvement in speed on the regression test.
To add a user account to your system, look for a command
useradd or adduser. The user
- name <quote>postgres> is often used but by no means
+ name <systemitem>postgres> is often used but by no means
required.
current locale is done by changing the value of the environment variable
LC_ALL or LANG. The sort order used
within a particular database cluster is set by initdb
- and cannot be changed later, short of dumping all data, re-initdb,
+ and cannot be changed later, short of dumping all data, rerunning initdb,
and reloading the data. So it's important to make this choice correctly now.
su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgsql/data"
- Then, create a symlink to it in /etc/rc3.d> as
+ Then, create a symbolic link to it in /etc/rc3.d> as
S99postgresql>.
While the
postmaster is running, its
- PID is in the file postmaster.pid in the data
+ PID is in the file postmaster.pid in the data
directory. This is used as an interlock against multiple postmasters
running in the same data directory, and can also be used for
shutting down the postmaster.
- Details about configuring System V IPC facilities are given in
+ Details about configuring System V> IPC> facilities are given in
.
All parameter names are case-insensitive. Every parameter takes a
- value of one of the four types boolean, integer, floating point,
+ value of one of the four types Boolean, integer, floating point,
or string, as described below. Boolean values are
ON, OFF,
TRUE, FALSE,
The configuration file is reread whenever the postmaster receives
- a SIGHUP signal. This signal is also propagated to all running
+ a SIGHUP> signal. This signal is also propagated to all running
backend processes, so that running sessions get the new default.
Alternatively, you can send the signal to only one backend process
directly.
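As an illustration (the parameter name is from this era's postgresql.conf; treat the exact spelling as an assumption), a changed setting picked up on SIGHUP might look like:

```
# postgresql.conf -- edited while the postmaster is running
log_connections = on
```

After saving the file, sending SIGHUP to the postmaster (for example with kill -HUP and the PID recorded in postmaster.pid) makes all running backends adopt the new default.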
- CPU_INDEX_TUPLE_COST (floating point)
+ CPU_INDEX_TUPLE_COST (floating point)
Sets the query optimizer's estimate of the cost of processing
- CPU_OPERATOR_COST (floating point)
+ CPU_OPERATOR_COST (floating point)
Sets the optimizer's estimate of the cost of processing each
- CPU_TUPLE_COST (floating point)
+ CPU_TUPLE_COST (floating point)
Sets the query optimizer's estimate of the cost of processing
- EFFECTIVE_CACHE_SIZE (floating point)
+ EFFECTIVE_CACHE_SIZE (floating point)
Sets the optimizer's assumption about the effective size of
- ENABLE_HASHJOIN (boolean)
+ ENABLE_HASHJOIN (boolean)
Enables or disables the query planner's use of hash-join plan
- ENABLE_INDEXSCAN (boolean)
+ ENABLE_INDEXSCAN (boolean)
Enables or disables the query planner's use of index scan plan
- ENABLE_MERGEJOIN (boolean)
+ ENABLE_MERGEJOIN (boolean)
Enables or disables the query planner's use of merge-join plan
- ENABLE_NESTLOOP (boolean)
+ ENABLE_NESTLOOP (boolean)
Enables or disables the query planner's use of nested-loop
- ENABLE_SEQSCAN (boolean)
+ ENABLE_SEQSCAN (boolean)
Enables or disables the query planner's use of sequential scan
- ENABLE_SORT (boolean)
+ ENABLE_SORT (boolean)
Enables or disables the query planner's use of explicit sort
- ENABLE_TIDSCAN (boolean)
+ ENABLE_TIDSCAN (boolean)
- Enables or disables the query planner's use of TID scan plan
+ Enables or disables the query planner's use of TID> scan plan
types. The default is on. This is mostly useful to debug the
query planner.
genetic query optimization
- GEQO (boolean)
+ GEQO (boolean)
Enables or disables genetic query optimization, which is an
algorithm that attempts to do query planning without
exhaustive search. This is on by default. See also the various
- other GEQO_ settings.
+ other GEQO_ settings.
- GEQO_EFFORT (integer)
- GEQO_GENERATIONS (integer)
- GEQO_POOL_SIZE (integer)
- GEQO_RANDOM_SEED (integer)
- GEQO_SELECTION_BIAS (floating point)
+ GEQO_EFFORT (integer)
+ GEQO_GENERATIONS (integer)
+ GEQO_POOL_SIZE (integer)
+ GEQO_RANDOM_SEED (integer)
+ GEQO_SELECTION_BIAS (floating point)
Various tuning parameters for the genetic query optimization
are between 1 and 80, 40 being the default. Generations
specifies the number of iterations in the algorithm. The
number must be a positive integer. If 0 is specified then
- Effort * Log2(PoolSize) is used. The run time of the algorithm
+ Effort * Log2(PoolSize) is used. The run time of the algorithm
is roughly proportional to the sum of pool size and
generations. The selection bias is the selective pressure
within the population. Values can be from 1.50 to 2.00; the
latter is the default. The random seed can be set to get
- reproduceable results from the algorithm. If it is set to -1
+ reproducible results from the algorithm. If it is set to -1
then the algorithm behaves non-deterministically.
- GEQO_THRESHOLD (integer)
+ GEQO_THRESHOLD (integer)
Use genetic query optimization to plan queries with at least
- KSQO (boolean)
+ KSQO (boolean)
The Key Set Query Optimizer converts
queries whose WHERE clause contains many OR'ed AND clauses
(such as WHERE (a=1 AND b=2) OR (a=2 AND b=3)
...) into a UNION query. This method can be faster
than the default implementation, but it doesn't necessarily
give exactly the same results, since UNION implicitly adds a
SELECT DISTINCT clause to eliminate identical output rows.
- KSQO is commonly used when working with products like
+ KSQO is commonly used when working with products like
Microsoft Access, which tend to
generate queries of this form.
- The KSQO algorithm used to be absolutely essential for queries
+ The KSQO algorithm used to be absolutely essential for queries
with many OR'ed AND clauses, but in
Postgres 7.0 and later the standard
planner handles these queries fairly successfully. Hence the
- RANDOM_PAGE_COST (floating point)
+ RANDOM_PAGE_COST (floating point)
Sets the query optimizer's estimate of the cost of a
- DEBUG_ASSERTIONS (boolean)
+ DEBUG_ASSERTIONS (boolean)
Turns on various assertion checks. This is a debugging aid. If
- DEBUG_LEVEL (integer)
+ DEBUG_LEVEL (integer)
The higher this value is set, the more
- DEBUG_PRINT_QUERY (boolean)
- DEBUG_PRINT_PARSE (boolean)
- DEBUG_PRINT_REWRITTEN (boolean)
- DEBUG_PRINT_PLAN (boolean)
- DEBUG_PRETTY_PRINT (boolean)
+ DEBUG_PRINT_QUERY (boolean)
+ DEBUG_PRINT_PARSE (boolean)
+ DEBUG_PRINT_REWRITTEN (boolean)
+ DEBUG_PRINT_PLAN (boolean)
+ DEBUG_PRETTY_PRINT (boolean)
These flags enable various debugging output to be sent to the
- HOSTNAME_LOOKUP (boolean)
+ HOSTNAME_LOOKUP (boolean)
By default, connection logs only show the IP address of the
- LOG_CONNECTIONS (boolean)
+ LOG_CONNECTIONS (boolean)
Prints a line informing about each successful connection to
- LOG_PID (boolean)
+ LOG_PID (boolean)
Prefixes each server log message with the process id of the
- LOG_TIMESTAMP (boolean)
+ LOG_TIMESTAMP (boolean)
- Prefixes each server log message with a timestamp. The default
+ Prefixes each server log message with a time stamp. The default
is off.
- SHOW_QUERY_STATS (boolean)
- SHOW_PARSER_STATS (boolean)
- SHOW_PLANNER_STATS (boolean)
- SHOW_EXECUTOR_STATS (boolean)
+ SHOW_QUERY_STATS (boolean)
+ SHOW_PARSER_STATS (boolean)
+ SHOW_PLANNER_STATS (boolean)
+ SHOW_EXECUTOR_STATS (boolean)
For each query, write performance statistics of the respective
- SHOW_SOURCE_PORT (boolean)
+ SHOW_SOURCE_PORT (boolean)
Shows the outgoing port number of the connecting host in the
- SYSLOG (integer)
+ SYSLOG (integer)
Postgres allows the use of
- <application>syslog> for logging. If this option
- is set to 1, messages go both to syslog and the standard
- output. A setting of 2 sends output only to syslog. (Some
+ <systemitem>syslog> for logging. If this option
+ is set to 1, messages go both to syslog> and the standard
+ output. A setting of 2 sends output only to syslog>. (Some
messages will still go to the standard output/error.) The
- default is 0, which means syslog is off. This option must be
+ default is 0, which means syslog> is off. This option must be
set at server start.
- To use syslog, the build of
+ To use syslog>, the build of
Postgres must be configured with
the option.
- SYSLOG_FACILITY (string)
+ SYSLOG_FACILITY (string)
This option determines the syslog
- facility to be used when syslog is enabled.
+ facility> to be used when syslog> is enabled.
You may choose from LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4,
LOCAL5, LOCAL6, LOCAL7; the default is LOCAL0. See also the
documentation of your system's
- SYSLOG_IDENT (string)
+ SYSLOG_IDENT (string)
- If logging to syslog is enabled, this option determines the
+ If logging to syslog> is enabled, this option determines the
program name used to identify
syslog log messages. The default
- is <quote>postgres>.
+ is <literal>postgres>.
- TRACE_NOTIFY (boolean)
+ TRACE_NOTIFY (boolean)
Generates a great amount of debugging output for the
- AUSTRALIAN_TIMEZONES (bool)
+ AUSTRALIAN_TIMEZONES (bool)
If set to true, CST, EST,
and SAT are interpreted as Australian
- timezones rather than as North American Central/Eastern
- Timezones and Saturday. The default is false.
+ time zones rather than as North American Central/Eastern
+ time zones and Saturday. The default is false.
timeout
- DEADLOCK_TIMEOUT (integer)
+ DEADLOCK_TIMEOUT (integer)
This is the amount of time, in milliseconds, to wait on a lock
transaction isolation level
- DEFAUL_TRANSACTION_ISOLATION (string)
+ DEFAULT_TRANSACTION_ISOLATION (string)
Each SQL transaction has an isolation level, which can be
- DYNAMIC_LIBRARY_PATH (string)
+ DYNAMIC_LIBRARY_PATH (string)
If a dynamically loadable module needs to be opened and the
- FSYNC (boolean)
+ FSYNC (boolean)
If this option is on, the
Postgres> backend
to block and wait for the operating system to flush the
buffers. Without fsync>, the operating system is
allowed to do its best in buffering, sorting, and delaying
- writes, which can make for a considerable perfomance
+ writes, which can make for a considerable performance
increase. However, if the system crashes, the results of the
last few committed transactions may be lost in part or whole;
in the worst case, unrecoverable data corruption may occur.
some leave it on just to be on the safe side. Because it is
the safe side, on is also the default. If you trust your
operating system, your hardware, and your utility company (or
- better your UPS), you might want to disable fsync.
+ better your UPS), you might want to disable fsync.
- KRB_SERVER_KEYFILE (string)
+ KRB_SERVER_KEYFILE (string)
Sets the location of the Kerberos server key file. See
- MAX_CONNECTIONS (integer)
+ MAX_CONNECTIONS (integer)
Determines how many concurrent connections the database server
- MAX_EXPR_DEPTH (integer)
+ MAX_EXPR_DEPTH (integer)
Sets the maximum expression nesting depth that the parser will
- MAX_FSM_RELATIONS (integer)
+ MAX_FSM_RELATIONS (integer)
Sets the maximum number of relations (tables) for which free space
- MAX_FSM_PAGES (integer)
+ MAX_FSM_PAGES (integer)
Sets the maximum number of disk pages for which free space
- MAX_LOCKS_PER_XACT (integer)
+ MAX_LOCKS_PER_XACT (integer)
The shared lock table is sized on the assumption that at most
- max_locks_per_xact * max_connections distinct objects will need
+ max_locks_per_xact> * max_connections distinct objects will need
to be locked at any one time. The default, 64, has historically
proven sufficient, but you might need to raise this value if you
have clients that touch many different tables in a single transaction.
- PORT (integer)
+ PORT (integer)
The TCP port the server listens on; 5432 by default. This
- SHARED_BUFFERS (integer)
+ SHARED_BUFFERS (integer)
Sets the number of shared memory buffers the database server
- SILENT_MODE (bool)
+ SILENT_MODE (bool)
Runs the postmaster silently. If this option is set, the postmaster
will automatically run in the background and any controlling terminals
- are disassociated, thus no messages are written to stdout or
- stderr (same effect as postmaster's -S option). Unless some
- logging system such as syslog is enabled, using this option is
+ are disassociated, thus no messages are written to standard output or
+ standard error (same effect as postmaster's -S option). Unless some
+ logging system such as syslog> is enabled, using this option is
discouraged since it makes it impossible to see error
messages.
- SORT_MEM (integer)
+ SORT_MEM (integer)
Specifies the amount of memory to be used by internal sorts
and hashes. Each sort will be allowed to use as
much memory as this value specifies before it starts to put
data into temporary files. And don't forget that each running
backend could be doing one or more sorts. So the total memory
- space needed could be many times the value of SORT_MEM.
+ space needed could be many times the value of SORT_MEM.
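As a rough worked example (the numbers below are purely illustrative): 30 running backends, each performing two sorts with SORT_MEM set to 512 kB, could together consume:

```shell
# Total potential sort memory = backends * sorts per backend * sort_mem (kB).
backends=30
sorts_per_backend=2
sort_mem_kb=512
echo "$(( backends * sorts_per_backend * sort_mem_kb )) kB"   # prints "30720 kB"
```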
- SQL_INHERITANCE (bool)
+ SQL_INHERITANCE (bool)
This controls the inheritance semantics, in particular whether
subtables are considered by various
commands by default. This was not the case in versions prior
- to 7.1. If you need the old behaviour you can set this
+ to 7.1. If you need the old behavior you can set this
variable to off, but in the long run you are encouraged to
change your applications to use the ONLY
keyword to exclude subtables. See the SQL language reference
- SSL (boolean)
+ SSL (boolean)
Enables
SSL> connections. Please read
- TCPIP_SOCKET (boolean)
+ TCPIP_SOCKET (boolean)
If this is true, then the server will accept TCP/IP
connections; otherwise only local Unix-domain socket connections are accepted.
- UNIX_SOCKET_DIRECTORY (string)
+ UNIX_SOCKET_DIRECTORY (string)
Specifies the directory of the Unix-domain socket on which the
server is to listen for connections from client applications.
- UNIX_SOCKET_GROUP (string)
+ UNIX_SOCKET_GROUP (string)
Sets the group owner of the Unix domain socket. (The owning
- UNIX_SOCKET_PERMISSIONS (integer)
+ UNIX_SOCKET_PERMISSIONS (integer)
Sets the access permissions of the Unix domain socket. Unix
- VIRTUAL_HOST (string)
+ VIRTUAL_HOST (string)
- Specifies the TCP/IP hostname or address on which the
+ Specifies the TCP/IP host name or address on which the
postmaster is to listen for
connections from client applications. Defaults to
- listening on all configured addresses (including localhost).
+ listening on all configured addresses (including localhost>).
- CHECKPOINT_SEGMENTS (integer)
+ CHECKPOINT_SEGMENTS (integer)
- Maximum distance between automatic WAL checkpoints, in logfile
+ Maximum distance between automatic WAL checkpoints, in log file
segments (each segment is normally 16 megabytes).
This option can only be set at server start or in the
postgresql.conf file.
- CHECKPOINT_TIMEOUT (integer)
+ CHECKPOINT_TIMEOUT (integer)
Maximum time between automatic WAL checkpoints, in seconds.
- COMMIT_DELAY (integer)
+ COMMIT_DELAY (integer)
Time delay between writing a commit record to the WAL buffer and
flushing the buffer out to disk, in microseconds.
- COMMIT_SIBLINGS (integer)
+ COMMIT_SIBLINGS (integer)
Minimum number of concurrent open transactions to require before
performing the COMMIT_DELAY delay.
- WAL_BUFFERS (integer)
+ WAL_BUFFERS (integer)
Number of disk-page buffers in shared memory for WAL log.
- WAL_DEBUG (integer)
+ WAL_DEBUG (integer)
If non-zero, turn on WAL-related debugging output on standard
- WAL_FILES (integer)
+ WAL_FILES (integer)
Number of log files that are created in advance at checkpoint time.
- WAL_SYNC_METHOD (string)
+ WAL_SYNC_METHOD (string)
Method used for forcing WAL updates out to disk. Possible
|
- -B x>
- shared_buffers = x>
+ -B x>
+ shared_buffers = x>
|
- -d x>
- debug_level = x>
+ -d x>
+ debug_level = x>
|
- -F
- fsync = off
+ -F>
+ fsync = off>
|
- -h x>
- virtual_host = x>
+ -h x>
+ virtual_host = x>
|
- -i
- tcpip_socket = on
+ -i>
+ tcpip_socket = on>
|
- -k x>
- unix_socket_directory = x>
+ -k x>
+ unix_socket_directory = x>
|
- -l
- ssl = on
+ -l>
+ ssl = on>
|
- -N x>
- max_connections = x>
+ -N x>
+ max_connections = x>
|
- -p x>
- port = x>
+ -p x>
+ port = x>
|
- -fi, -fh, -fm, -fn, -fs, -ft
- enable_indexscan=off, enable_hashjoin=off,
- enable_mergejoin=off, enable_nestloop=off, enable_seqscan=off,
- enable_tidscan=off
+ -fi>, -fh>, -fm>, -fn>, -fs>, -ft>
+ enable_indexscan=off>, enable_hashjoin=off>,
+ enable_mergejoin=off>, enable_nestloop=off>, enable_seqscan=off>,
+ enable_tidscan=off>
*
|
- -S x>
- sort_mem = x>
+ -S x>
+ sort_mem = x>
*
|
- -s
- show_query_stats = on
+ -s>
+ show_query_stats = on>
*
|
- -tpa, -tpl, -te
- show_parser_stats=on, show_planner_stats=on, show_executor_stats=on
+ -tpa>, -tpl>, -te>
+ show_parser_stats=on>, show_planner_stats=on>, show_executor_stats=on>
*
Shared memory and semaphores are collectively referred to as
- System V IPC> (together with message queues, which are
+ System V> IPC> (together with message queues, which are
not relevant for
Postgres>). Almost all modern
operating systems provide these features, but not all of them have
them turned on or sufficiently sized by default, especially
- systems with BSD heritage. (For the QNX and BeOS ports,
+ systems with BSD heritage. (For the QNX> and BeOS> ports,
Postgres> provides its own replacement
implementation of these facilities.)
When
Postgres> exceeds one of the various hard
- limits of the IPC resources then the postmaster will refuse to
+ limits of the IPC> resources then the postmaster will refuse to
start up and should leave a marginally instructive error message
about which problem was encountered and what needs to be done
about it. (See also .)
- System V IPC parameters>
+ System V> IPC> parameters>
|
SHMMAX>>
Maximum size of shared memory segment (bytes)>
- 250 kB + 8.2kB * buffers + 14.2kB * max_connections or infinity
+ 250 kB + 8.2kB * shared_buffers> + 14.2kB * max_connections> or infinity
|
|
SHMALL>>
Total amount of shared memory available (bytes or pages)>
- if bytes, same as SHMMAX; if pages, ceil(SHMMAX/PAGE_SIZE)>
+ if bytes, same as SHMMAX; if pages, ceil(SHMMAX/PAGE_SIZE)>
|
|
SEMMNI>>
Maximum number of semaphore identifiers (i.e., sets)>
- >= ceil(max_connections / 16)>
+ >= ceil(max_connections / 16)>
|
SEMMNS>>
Maximum number of semaphores system-wide>
- ceil(max_connections / 16) * 17 + room for other applications>
+ ceil(max_connections / 16) * 17 + room for other applications>
|
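The formulas in the table can be evaluated for a concrete configuration; the parameter values below (64 shared buffers, 32 connections) are examples only:

```shell
# SHMMAX estimate in kB: 250 + 8.2 per shared buffer + 14.2 per connection.
awk 'BEGIN { printf "%.1f kB\n", 250 + 8.2*64 + 14.2*32 }'   # prints "1229.2 kB"

# Semaphore needs for max_connections = 32:
max_connections=32
semmni=$(( (max_connections + 15) / 16 ))   # ceil(max_connections/16)
semmns=$(( semmni * 17 ))                   # plus room for other applications
echo "SEMMNI >= $semmni, SEMMNS >= $semmns" # prints "SEMMNI >= 2, SEMMNS >= 34"
```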
- BSD/OS>
+ BSD/OS>>
Shared Memory>
Keep in mind that shared memory is not pageable; it is locked in RAM.
To increase the number of shared buffers supported by the
- postmaster, add the following to your kernel config file. A
+ postmaster, add the following to your kernel configuration file. A
SHMALL> value of 1024 represents 4 MB of shared
memory. The following increases the maximum shared memory area
to 32 MB:
recompile the kernel, and reboot. For those running earlier
releases, use
bpatch> to find the
sysptsize> value in the current kernel. This is
- computed dynamically at bootup.
+ computed dynamically at boot time.
$ bpatch -r sysptsize>
0x9 = 9>
Next, add SYSPTSIZE> as a hard-coded value in the
- kernel config file. Increase the value you found using
+ kernel configuration file. Increase the value you found using
bpatch>. Add 1 for every additional 4 MB of
shared memory you desire.
options "SYSPTSIZE=16"
- sysptsize> can not be changed by sysctl.
+ sysptsize> cannot be changed by sysctl.
- Set the values you want in your kernel config file, e.g.:
+ Set the values you want in your kernel configuration file, e.g.:
options "SEMMNI=40"
options "SEMMNS=240"
- FreeBSD
- NetBSD
- OpenBSD
+ FreeBSD>
+ NetBSD>
+ OpenBSD>
The options SYSVSHM> and SYSVSEM> need
options SEMMNU=256
options SEMMAP=256
- (On NetBSD and OpenBSD the key word is actually
+ (On NetBSD> and OpenBSD> the key word is actually
option>, singular.)
- HP-UX>
+ HP-UX>>
The default settings tend to suffice for normal installations.
database sites.
- IPC parameters can be set in the System
+ IPC> parameters can be set in the System
Administration Manager> (SAM>) under
Kernel Configuration>Configurable Parameters>>.
- Linux>
+ Linux>>
The default shared memory limit (both
- SCO OpenServer>
+ SCO OpenServer>>
In the default configuration, only 512 kB of shared memory per
- Solaris>
+ Solaris>>
At least in version 2.6, the maximum size of a shared memory
- UnixWare>
+ UnixWare>>
On
UnixWare> 7, the maximum size for shared
The system call setrlimit> is responsible for setting
these parameters. The shell's built-in command
ulimit (Bourne shells) or
- limit (csh) is used to control the resource
+ limit (csh>) is used to control the resource
limits from the command line. On BSD-derived systems the file
/etc/login.conf controls what values the
various resource limits are set to upon login. See
done by what signal you send to the server process.
- SIGTERM
+ SIGTERM
- After receiving SIGTERM, the postmaster disallows new
+ After receiving SIGTERM, the postmaster disallows new
connections, but lets existing backends end their work normally.
It shuts down only after all of the backends terminate by client
request.
- SIGINT
+ SIGINT
The postmaster disallows new connections and sends all existing
- backends SIGTERM, which will cause them to abort their current
+ backends SIGTERM, which will cause them to abort their current
transactions and exit promptly. It then waits for the backends to exit
and finally shuts down the database.
This is the Fast Shutdown.
- SIGQUIT
+ SIGQUIT
This is the Immediate Shutdown, which
- will cause the postmaster to send a SIGQUIT to all backends and
+ will cause the postmaster to send a SIGQUIT to all backends and
exit immediately (without properly shutting down the database
system). The backends likewise exit immediately upon receiving
- SIGQUIT. This will lead to recovery (by replaying the WAL log)
+ SIGQUIT. This will lead to recovery (by replaying the WAL log)
upon next start-up. This is recommended only in emergencies.
- It is best not to use SIGKILL to shut down the postmaster. This
+ It is best not to use SIGKILL to shut down the postmaster. This
will prevent the postmaster from releasing shared memory and
semaphores, which you may then have to do by hand.
- The PID of the postmaster process can be found using the
+ The PID> of the postmaster process can be found using the
ps program, or from the file
postmaster.pid in the data directory. So for
example, to do a fast shutdown:
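A fast shutdown could then look like this (the data directory path below is an assumed default installation layout; substitute your own):

```shell
# Send SIGINT (fast shutdown) to the postmaster; its PID is recorded
# on the first line of postmaster.pid in the data directory.
# /usr/local/pgsql/data is an assumed path, not a requirement.
kill -INT "$(head -1 /usr/local/pgsql/data/postmaster.pid)"
```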
For details on how to create your server private key and certificate,
refer to the
OpenSSL> documentation. A simple self-signed
certificate can be used to get started for testing, but a certificate signed
- by a CA (either one of the global CAs or a local one) should be used in
+ by a CA> (either one of the global CAs> or a local one) should be used in
production so the client can verify the server's identity. To create
- a quick self-signed certificate, use the following OpenSSL command:
+ a quick self-signed certificate, use the following OpenSSL command:
openssl req -new -text -out cert.req
- Fill out the information that openssl asks for. Make sure that you enter
+ Fill out the information that openssl> asks for. Make sure that you enter
the local host name as Common Name; the challenge password can be
left blank. The script will generate a key that is passphrase protected;
it will not accept a pass phrase that is less than four characters long.
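Assuming the request step above produced privkey.pem and cert.req, the remaining steps could look like the following sketch, based on standard OpenSSL usage (file names follow the request step; adjust to taste):

```shell
# Remove the passphrase from the private key (acceptable only if the
# resulting file is protected by filesystem permissions), then
# self-sign the certificate request:
openssl rsa -in privkey.pem -out cert.pem
openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert
```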
- Secure TCP/IP Connections with SSH tunnels
+ Secure TCP/IP Connections with SSH tunnels
- First make sure that an <productname>ssh> server is
+ First make sure that an <application>ssh> server is
running properly on the same machine as
Postgres and that you can log in using
- ssh as some user. Then you can establish a secure tunnel with a
+ ssh as some user. Then you can establish a secure tunnel with a
command like this from the client machine:
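A sketch of such a tunnel (the host and user names below are placeholders, not part of the original text): forward a local port to the server's PostgreSQL port, then point the client at the local end.

```shell
# Forward local port 3333 to port 5432 on the database server host
# (db.example.com and "joe" are placeholder names):
ssh -L 3333:db.example.com:5432 joe@db.example.com

# In another shell, connect through the tunnel:
psql -h localhost -p 3333 template1
```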
To the database server it will then look as though you are really
connecting from the server host itself, and it will use whatever
authentication procedure was set up for this user. In order for the
- tunnel setup to succeed you must be allowed to connect via ssh as
+ tunnel setup to succeed you must be allowed to connect via ssh as
that user, just as if you had attempted to log in to start a
terminal session.
initdb) it will have the same name as the
operating system user that initialized the area (and is presumably
being used as the user that runs the server). Customarily, this user
- will be called <quote>postgres>. In order to create more
+ will be called <systemitem>postgres>. In order to create more
users you have to first connect as this initial user.
When a database object is created, it is assigned an owner. The
owner is the user that executed the creation statement. There is
- currenty no polished interface for changing the owner of a database
+ currently no polished interface for changing the owner of a database
object. By default, only an owner (or a superuser) can do anything
with the object. In order to allow other users to use it,
privileges must be granted.
REVOKE ALL ON accounts FROM PUBLIC;
The set of privileges held by the table owner is always implicit
- and is never revokable.
+ and cannot be revoked.
Functions and triggers allow users to insert code into the backend
server that other users may execute without knowing it. Hence, both
- mechanisms permit users to trojan horse
+ mechanisms permit users to Trojan horse
others with relative impunity. The only real protection is tight
control over who can define functions (e.g., write to relations
with SQL fields) and triggers. Audit trails and alerters on the
-
+
Write-Ahead Logging (WAL)
The first obvious benefit of using
WAL is a
significantly reduced number of disk writes, since only the log
file needs to be flushed to disk at the time of transaction
- commit; in multi-user environments, commits of many transactions
+ commit; in multiuser environments, commits of many transactions
may be accomplished with a single fsync() of
the log file. Furthermore, the log file is written sequentially,
and so the cost of syncing the log is much less than the cost of
record to the log with LogInsert but before
performing a LogFlush. This delay allows other
backends to add their commit records to the log so as to have all
- of them flushed with a single log sync. No sleep will occur if fsync
+ of them flushed with a single log sync. No sleep will occur if fsync
is not enabled, or if fewer than COMMIT_SIBLINGS
other backends are currently in active transactions; this avoids
sleeping when it's unlikely that any other backend will commit soon.
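For example, the trade-off described above could be tuned in postgresql.conf (the values below are illustrative only; see the COMMIT_DELAY and COMMIT_SIBLINGS entries above):

```
# Delay the log flush (in microseconds) only when at least this many
# other transactions are active, hoping to share one fsync():
commit_delay = 100
commit_siblings = 5
```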