+ http://www.flex.ro/pgaccess
-Try these:
-
check that you don't have any of the previous version's binaries in
-your path
-
check to see that you have the proper paths set
-
check that the postgres user owns the proper files
-If you see an error message about oidvector, you definately have
+
We also include ecpg, which is an embedded SQL query
+ language interface for C.
+
2.4) What languages are available to
+ communicate with PostgreSQL?
-
3.2) How do I install PostgreSQL somewhere
-other than
/usr/local/pgsql?
-The simplest way is to specify the --prefix option when running configure.
-If you forgot to do that, you can edit Makefile.global and change POSTGRESDIR
-accordingly, or create a
Makefile.custom and define POSTGRESDIR there.
+
C (libpq)
+
C++ (libpq++)
-
3.3) When I start the postmaster, I get a Bad
-System Call or core dumped message. Why?
+
Embedded C (ecpg)
-It could be a variety of problems, but first check to see that you
-have System V extensions installed in your kernel. PostgreSQL requires
-kernel support for shared memory and semaphores.
+
Java (jdbc)
+
Perl (perl5)
-
3.4) When I try to start the postmaster, I
-get
IpcMemoryCreate errors. Why?
+
ODBC (odbc)
-You either do not have shared memory configured properly in your kernel or
-you need to enlarge the shared memory available in the kernel. The
-exact amount you need depends on your architecture and how many buffers
-and backend processes you configure for the postmaster.
-For most systems, with default numbers of buffers and processes, you
+
Python (PyGreSQL)
-
3.5) When I try to start the postmaster, I
-get
IpcSemaphoreCreate errors. Why?
+
TCL (libpgtcl)
-If the error message is IpcSemaphoreCreate: semget failed (No space
-left on device) then your kernel is not configured with enough
-semaphores. Postgres needs one semaphore per potential backend process.
-A temporary solution is to start the postmaster with a smaller limit on
-the number of backend processes. Use -N with a parameter less
-than the default of 32. A more permanent solution is to increase your
-kernel's
SEMMNS and
SEMMNI parameters.
+
C Easy API (libpgeasy)
-If the error message is something else, you might not have semaphore
-support configured in your kernel at all.
+ PHP (http://www.php.net)
+
+
+
+
Administrative Questions
+
-
3.6) How do I prevent other hosts from
-accessing my PostgreSQL database?
+
3.1) Why does initdb fail?
-By default, PostgreSQL only allows connections from the local machine
-using Unix domain sockets. Other machines will not be able to connect
-unless you add the -i flag to the postmaster,
-and enable host-based authentication by modifying the file
-$PGDATA/pg_hba.conf accordingly. This will allow TCP/IP connections.
-
3.7) Why can't I connect to my database from
+
check that you don't have any of the previous version's
+ binaries in your path
-The default configuration allows only unix domain socket connections
-from the local machine. To enable TCP/IP connections, make sure the
-postmaster has been started with the -i option, and add an
-appropriate host entry to the file
-pgsql/data/pg_hba.conf.
+
check to see that you have the proper paths set
-
3.8) Why can't I access the database as the root
+
check that the postgres user owns the proper
+ files
+
-You should not create database users with user id 0 (root). They will be
-unable to access the database. This is a security precaution because
-of the ability of users to dynamically link object modules into the
+
If you see an error message about oidvector, you
+ definitely have a version mismatch.
+
3.2) How do I install PostgreSQL somewhere
+ other than /usr/local/pgsql?
-
3.9) All my servers crash under concurrent
+
The simplest way is to specify the --prefix option when running
+ configure. If you forgot to do that, you can edit
+ Makefile.global and change POSTGRESDIR accordingly, or
+ create a Makefile.custom and define POSTGRESDIR there.
-This problem can be caused by a kernel that is not configured to support
+
3.3) When I start the postmaster, I
+ get a Bad System Call or core dumped message. Why?
+
It could be a variety of problems, but first check to see that
+ you have System V extensions installed in your kernel. PostgreSQL
+ requires kernel support for shared memory and semaphores.
-
3.10) How do I tune the database engine for
+
3.4) When I try to start the
+ postmaster, I get IpcMemoryCreate errors. Why?
-Certainly, indices can speed up queries. The EXPLAIN command
-allows you to see how PostgreSQL is interpreting your query, and which
+
You either do not have shared memory configured properly in your
+ kernel or you need to enlarge the shared memory available in the
+ kernel. The exact amount you need depends on your architecture and
+ how many buffers and backend processes you configure for the
+ postmaster. For most systems, with default numbers of
+ buffers and processes, you need a minimum of ~1MB.
-If you are doing a lot of INSERTs, consider doing them in a large
-batch using the COPY command. This is much faster than
-individual INSERTS. Second, statements not in a BEGIN
-WORK/COMMIT transaction block are considered to be in their
-own transaction. Consider performing several statements in a single
-transaction block. This reduces the transaction overhead. Also
-consider dropping and recreating indices when making large data
+
3.5) When I try to start the
+ postmaster, I get IpcSemaphoreCreate errors.
+ Why?
-There are several tuning options. You can disable
-fsync() by starting the postmaster with a -o -F
-option. This will prevent fsync()'s from flushing to disk after
+
If the error message is IpcSemaphoreCreate: semget failed (No
+ space left on device) then your kernel is not configured with
+ enough semaphores. Postgres needs one semaphore per potential
+ backend process. A temporary solution is to start the
+ postmaster with a smaller limit on the number of backend
+ processes. Use -N with a parameter less than the default of
+ 32. A more permanent solution is to increase your kernel's
+ SEMMNS and SEMMNI parameters.
-You can also use the postmaster -B option to increase the number of
-shared memory buffers used by the backend processes. If you make this
-parameter too high, the postmaster may not start because you've exceeded
-your kernel's limit on shared memory space.
-Each buffer is 8K and the default is 64 buffers.
+
If the error message is something else, you might not have
+ semaphore support configured in your kernel at all.
-You can also use the backend -S option to increase the maximum amount
-of memory used by the backend process for temporary sorts. The -S value
-is measured in kilobytes, and the default is 512 (ie, 512K).
+
3.6) How do I prevent other hosts from
+ accessing my PostgreSQL database?
-You can also use the CLUSTER command to group data in tables to
-match an index. See the
CLUSTER manual page for more details.
+
By default, PostgreSQL only allows connections from the local
+ machine using Unix domain sockets. Other machines will not be able
+ to connect unless you add the -i flag to the
+ postmaster, and enable host-based authentication by
+ modifying the file $PGDATA/pg_hba.conf accordingly. This
+ will allow TCP/IP connections.
+
3.7) Why can't I connect to my database from
+ another machine?
-
3.11) What debugging features are available?
+
The default configuration allows only Unix domain socket
+ connections from the local machine. To enable TCP/IP connections,
+ make sure the postmaster has been started with the -i
+ option, and add an appropriate host entry to the file
+ pgsql/data/pg_hba.conf.
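As an illustration only (the record layout of pg_hba.conf varies between releases, and the subnet below is a placeholder), a host entry permitting password-authenticated TCP/IP connections from one local subnet might look like:

```text
# TYPE  DATABASE  IP_ADDRESS    MASK           AUTH_TYPE
# (placeholder subnet; adjust to match your own network)
host    all       192.168.1.0   255.255.255.0  password
```

Remember that such entries have no effect unless the postmaster was also started with the -i option.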
-PostgreSQL has several features that report status information that can
-
 be valuable for debugging purposes.
+
3.8) Why can't I access the database as the
+ root user?
-First, by running configure with the --enable-cassert option, many
-assert()'s monitor the progress of the backend and halt the program when
-something unexpected occurs.
+
You should not create database users with user id 0 (root). They
+ will be unable to access the database. This is a security
+ precaution because of the ability of users to dynamically link
+ object modules into the database engine.
-Both postmaster and postgres have several debug options available.
-First, whenever you start the postmaster, make sure you send the
-standard output and error to a log file, like:
+
3.9) All my servers crash under concurrent
+ table access. Why?
+
+
This problem can be caused by a kernel that is not configured to
+ support semaphores.
+
+
3.10) How do I tune the database engine for
+ better performance?
+
+
Certainly, indices can speed up queries. The
+ EXPLAIN command allows you to see how PostgreSQL is
+ interpreting your query, and which indices are being used.
+
+
If you are doing a lot of INSERTs, consider doing
+ them in a large batch using the COPY command. This
+ is much faster than individual INSERTS. Second,
+ statements not in a BEGIN WORK/COMMIT transaction
+ block are considered to be in their own transaction. Consider
+ performing several statements in a single transaction block. This
+ reduces the transaction overhead. Also consider dropping and
+ recreating indices when making large data changes.
+
+
There are several tuning options. You can disable fsync()
+ by starting the postmaster with a -o -F option. This
+ will prevent fsync()'s from flushing to disk after every
+ transaction.
+
+
You can also use the postmaster -B option to
+ increase the number of shared memory buffers used by the backend
+ processes. If you make this parameter too high, the
+ postmaster may not start because you've exceeded your
+ kernel's limit on shared memory space. Each buffer is 8K and the
+ default is 64 buffers.
+
+
You can also use the backend -S option to increase the
+ maximum amount of memory used by the backend process for temporary
+ sorts. The -S value is measured in kilobytes, and the
+ default is 512 (i.e., 512K).
+
+
You can also use the CLUSTER command to group
+ data in tables to match an index. See the CLUSTER
+ manual page for more details.
+
+
3.11) What debugging features are
+ available?
+
+
PostgreSQL has several features that report status information
+ that can be valuable for debugging purposes.
+
+
First, by running configure with the --enable-cassert
+ option, many assert()'s monitor the progress of the backend
+ and halt the program when something unexpected occurs.
+
+
Both postmaster and postgres have several debug
+ options available. First, whenever you start the postmaster,
+ make sure you send the standard output and error to a log file,
+ like:
- cd /usr/local/pgsql
- ./bin/postmaster >server.log 2>&1 &
-
-This will put a server.log file in the top-level PostgreSQL directory.
-This file contains useful information about problems or errors
-encountered by the server. Postmaster has a -d option that allows even
-more detailed information to be reported. The -d option takes a number
-that specifies the debug level. Be warned that high debug level values
-generate large log files.
-
-If the postmaster is not running, you can actually run the
-postgres backend from the command line, and type your SQL statement
-directly. This is recommended only for debugging purposes. Note
-that a newline terminates the query, not a semicolon. If you have
-compiled with debugging symbols, you can use a debugger to see what is
-happening. Because the backend was not started from the postmaster, it
-is not running in an identical environment and locking/backend
-interaction problems may not be duplicated.
-
-If the postmaster is running, start psql in one window,
-then find the PID of the postgres process used by
-psql. Use a debugger to attach to the postgres
-PID. You can set breakpoints in the debugger and issue
-queries from psql. If you are debugging postgres startup,
-you can set PGOPTIONS="-W n", then start psql. This will cause
-startup to delay for n seconds so you can attach with the
-debugger and trace through the startup sequence.
-
-The postgres program has -s, -A, and -t options that can be very useful
-for debugging and performance measurements.
-
-You can also compile with profiling to see what functions are taking
-execution time. The backend profile files will be deposited in the
-pgsql/data/base/dbname directory. The client profile file will be put
-in the client's current directory.
-
-
-
3.12) I get 'Sorry, too many clients' when trying
-
-You need to increase the postmaster's limit on how many concurrent backend
-
-In PostgreSQL 6.5 and up, the default limit is 32 processes. You can
-increase it by restarting the postmaster with a suitable -N
-value. With the default configuration you can set -N as large as
-1024. If you need more, increase MAXBACKENDS in
-include/config.h and rebuild. You can set the default value of
--N at configuration time, if you like, using configure's
-
--with-maxbackends switch.
-
-Note that if you make -N larger than 32, you must also increase
--B beyond its default of 64; -B must be at least twice -N, and
-probably should be more than that for best performance. For large
-numbers of backend processes, you are also likely to find that you need
-to increase various Unix kernel configuration parameters. Things to
-check include the maximum size of shared memory blocks,
-SHMMAX; the maximum number of semaphores,
-SEMMNS and SEMMNI; the maximum number of
-processes, NPROC; the maximum number of processes per
-user, MAXUPRC; and the maximum number of open files,
-NFILE and NINODE. The reason that PostgreSQL
-has a limit on the number of allowed backend processes is so
-your system won't run out of resources.
-
-In PostgreSQL versions prior to 6.5, the maximum number of backends was
-64, and changing it required a rebuild after altering the MaxBackendId
-constant in
include/storage/sinvaladt.h.
-
-
3.13) What are the pg_sorttempNNN.NN files in my
-
-They are temporary files generated by the query executor. For
-example, if a sort needs to be done to satisfy an ORDER BY, and
-the sort requires more space than the backend's -S parameter allows,
-then temporary files are created to hold the extra data.
-
-The temporary files should be deleted automatically, but might not if a backend
-crashes during a sort. If you have no backends running at the time,
-it is safe to delete the pg_tempNNN.NN files.
-
-
-
-
-
-
4.1) Why is system confused about
-commas, decimal points, and date formats.
-
-Check your locale configuration. PostgreSQL uses the locale setting of
-the user that ran the postmaster process. There are postgres and psql
-SET commands to control the date format. Set those accordingly for
-your operating environment.
-
-
-
4.2) What is the exact difference between
-binary cursors and normal cursors?
-
-See the
DECLARE manual page for a description.
-
-
4.3) How do I SELECT only the first few
-
-See the
FETCH manual page, or use SELECT ... LIMIT....
-
-The entire query may have to be evaluated, even if you only want the
-first few rows. Consider a query that has an ORDER BY.
-If there is an index that matches the ORDER BY,
-PostgreSQL may be able to evaluate only the first few records requested,
-or the entire query may have to be evaluated until the desired rows have
-
-
4.4) How do I get a list of tables or other
-things I can see in
psql?
-
-You can read the source code for psql in file
-pgsql/src/bin/psql/psql.c. It contains SQL commands that generate the
-output for psql's backslash commands. You can also start psql
-with the -E option so it will print out the queries it uses
-to execute the commands you give.
-
-
-
4.5) How do you remove a column from a
+ cd /usr/local/pgsql
+ ./bin/postmaster >server.log 2>&1 &
+
-We do not support ALTER TABLE DROP COLUMN, but do
-this:
+
This will put a server.log file in the top-level PostgreSQL
+ directory. This file contains useful information about problems or
+ errors encountered by the server. Postmaster has a -d
+ option that allows even more detailed information to be reported.
+ The -d option takes a number that specifies the debug level.
+ Be warned that high debug level values generate large log
+ files.
+
+
If the postmaster is not running, you can actually run
+ the postgres backend from the command line, and type your
+ SQL statement directly. This is recommended only for
+ debugging purposes. Note that a newline terminates the query, not a
+ semicolon. If you have compiled with debugging symbols, you can use
+ a debugger to see what is happening. Because the backend was not
+ started from the postmaster, it is not running in an
+ identical environment and locking/backend interaction problems may
+ not be duplicated.
+
+
If the postmaster is running, start psql in one
+ window, then find the PID of the postgres
+ process used by psql. Use a debugger to attach to the
+ postgres PID. You can set breakpoints in the
+ debugger and issue queries from psql. If you are debugging
+ postgres startup, you can set PGOPTIONS="-W n", then start
+ psql. This will cause startup to delay for n seconds
+ so you can attach with the debugger and trace through the startup
+ sequence.
+
+
The postgres program has -s, -A, and -t
+ options that can be very useful for debugging and performance
+ measurements.
+
+
You can also compile with profiling to see what functions are
+ taking execution time. The backend profile files will be deposited
+ in the pgsql/data/base/dbname directory. The client profile
+ file will be put in the client's current directory.
+
+
3.12) I get 'Sorry, too many clients' when
+ trying to connect. Why?
+
+
You need to increase the postmaster's limit on how many
+ concurrent backend processes it can start.
+
+
In PostgreSQL 6.5 and up, the default limit is 32 processes. You
+ can increase it by restarting the postmaster with a suitable
+ -N value. With the default configuration you can set
+ -N as large as 1024. If you need more, increase
+ MAXBACKENDS in include/config.h and rebuild.
+ You can set the default value of -N at configuration time,
+ if you like, using configure's --with-maxbackends
+ switch.
+
+
Note that if you make -N larger than 32, you must also
+ increase -B beyond its default of 64; -B must be at
+ least twice -N, and probably should be more than that for
+ best performance. For large numbers of backend processes, you are
+ also likely to find that you need to increase various Unix kernel
+ configuration parameters. Things to check include the maximum size
+ of shared memory blocks, SHMMAX; the maximum number
+ of semaphores, SEMMNS and SEMMNI; the
+ maximum number of processes, NPROC; the maximum
+ number of processes per user, MAXUPRC; and the
+ maximum number of open files, NFILE and
+ NINODE. The reason that PostgreSQL has a limit on
+ the number of allowed backend processes is so your system won't run
+ out of resources.
+
+
In PostgreSQL versions prior to 6.5, the maximum number of
+ backends was 64, and changing it required a rebuild after altering
+ the MaxBackendId constant in
+ include/storage/sinvaladt.h.
+
+
3.13) What are the pg_sorttempNNN.NN
+ files in my database directory?
+
+
They are temporary files generated by the query executor. For
+ example, if a sort needs to be done to satisfy an ORDER
+ BY, and the sort requires more space than the backend's
+ -S parameter allows, then temporary files are created to
+ hold the extra data.
+
+
The temporary files should be deleted automatically, but might
+ not if a backend crashes during a sort. If you have no backends
+ running at the time, it is safe to delete the pg_sorttempNNN.NN
+ files.
+
+
+
+
Operational Questions
+
+
+
4.1) Why is the system confused about commas,
+ decimal points, and date formats?
+
+
Check your locale configuration. PostgreSQL uses the locale
+ setting of the user that ran the postmaster process. There
+ are postgres and psql SET commands to control the date format. Set
+ those accordingly for your operating environment.
+
+
4.2) What is the exact difference between
+ binary cursors and normal cursors?
+
+
See the DECLARE manual page for a
+ description.
+
+
4.3) How do I SELECT only the
+ first few rows of a query?
+
+
See the FETCH manual page, or use SELECT ...
+ LIMIT....
+
+
The entire query may have to be evaluated, even if you only want
+ the first few rows. Consider a query that has an ORDER
+ BY. If there is an index that matches the ORDER
+ BY, PostgreSQL may be able to evaluate only the first few
+ records requested, or the entire query may have to be evaluated
+ until the desired rows have been generated.
+
+
4.4) How do I get a list of tables or other
+ things I can see in psql?
+
+
+
You can read the source code for psql in file
+ pgsql/src/bin/psql/psql.c. It contains SQL commands that
+ generate the output for psql's backslash commands. You can also
+ start psql with the -E option so it will print out
+ the queries it uses to execute the commands you give.
+
+
4.5) How do you remove a column from a
+ table?
+
+
We do not support ALTER TABLE DROP COLUMN, but do
+ this:
- SELECT ... -- select all columns but the one you want to remove
- INTO TABLE new_table
- FROM old_table;
- DROP TABLE old_table;
- ALTER TABLE new_table RENAME TO old_table;
-
-
-
-
-
4.6) What is the maximum size for a
+ SELECT ... -- select all columns but the one you want to remove
+ INTO TABLE new_table
+ FROM old_table;
+ DROP TABLE old_table;
+ ALTER TABLE new_table RENAME TO old_table;
+
-These are the limits:
+
4.6) What is the maximum size for a row,
+ table, database?
-Maximum size for a database? unlimited (60GB databases exist)
+Maximum size for a database? unlimited (60GB databases exist)
Maximum size for a table? unlimited on all operating systems
Maximum size for a row? 8k, configurable to 32k
-Maximum number of rows in a table? unlimited
+Maximum number of rows in a table? unlimited
Maximum number of columns in a table? unlimited
-Maximum number of indexes on a table? unlimited
+Maximum number of indexes on a table? unlimited
+ Of course, these are not actually unlimited, but limited to
+ available disk space.
-Of course, these are not actually unlimited, but limited to available
-
-To change the maximum row size, edit include/config.h and change
-BLCKSZ. To use attributes larger than 8K, you can also
-use the large object interface.
-
-The row length limit will be removed in 7.1.
-
+
To change the maximum row size, edit include/config.h and
+ change BLCKSZ. To use attributes larger than 8K, you
+ can also use the large object interface.
-
4.7) How much database disk space is required to
-store data from a typical text file?
+
The row length limit will be removed in 7.1.
-A PostgreSQL database may need six-and-a-half times the disk space
-required to store the data in a flat file.
+
4.7) How much database disk space is required
+ to store data from a typical text file?
+
-Consider a file of 300,000 lines with two integers on each line. The
-flat file is 2.4MB. The size of the PostgreSQL database file containing
-this data can be estimated at 14MB:
+
A PostgreSQL database may need six-and-a-half times the disk
+ space required to store the data in a flat file.
+
Consider a file of 300,000 lines with two integers on each line.
+ The flat file is 2.4MB. The size of the PostgreSQL database file
+ containing this data can be estimated at 14MB:
   36 bytes: each row header (approximate)
 +  8 bytes: two int fields @ 4 bytes each
 +  4 bytes: pointer on page to tuple
 ---------------------------------------
   48 bytes per row

   8192 bytes per page / 48 bytes per row = 171 rows per page (rounded up)

   300,000 data rows / 171 rows per page = 1755 database pages (rounded up)

   1755 database pages * 8192 bytes per page = 14,376,960 bytes (14MB)
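The arithmetic above can be double-checked with a short sketch (it follows the FAQ's rounding of 8192/48 up to 171 rows per page, and of the page count up to 1755):

```python
import math

# Per-row cost from the estimate above: header + two int4 fields + item pointer.
row_bytes = 36 + 8 + 4                      # 48 bytes per row
rows_per_page = round(8192 / row_bytes)     # 170.67, rounded to 171 as in the FAQ
pages = math.ceil(300_000 / rows_per_page)  # 1755 database pages
total_bytes = pages * 8192

print(rows_per_page, pages, total_bytes)    # 171 1755 14376960
```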
-
-
-Indexes do not require as much overhead, but do contain the data that is
-being indexed, so they can be large also.
-
-
4.8) How do I find out what indices or
-operations are defined in the database?
-
-psql has a variety of backslash commands to show such information. Use
-
-Also try the file pgsql/src/tutorial/syscat.source. It
-illustrates many of the SELECTs needed to get information from
-the database system tables.
-
-
-
4.9) My queries are slow or don't make
-use of the indexes. Why?
-
-PostgreSQL does not automatically maintain statistics. VACUUM
-must be run to update the statistics. After
-statistics are updated, the optimizer knows how many rows in the table,
-and can better decide if it should use indices. Note that the optimizer
-does not use indices in cases when the table is small because a
-sequential scan would be faster.
-
-For column-specific optimization statistics, use VACUUM
-ANALYZE. VACUUM ANALYZE is important for complex
-multijoin queries, so the optimizer can estimate the number of rows
-returned from each table, and choose the proper join order. The backend
-does not keep track of column statistics on its own, so VACUUM
-ANALYZE must be run to collect them periodically.
-
-Indexes are usually not used for ORDER BY operations: a
-sequential scan followed by an explicit sort is faster than an indexscan
-of all tuples of a large table, because it takes fewer disk accesses.
-
-When using wild-card operators such as LIKE or ~, indices can
-only be used if the beginning of the search is anchored to the start of
-the string. So, to use indices, LIKE searches should not
-begin with %, and ~(regular expression searches) should
-start with ^.
-
-
4.10) How do I see how the query optimizer is
-
-See the
EXPLAIN manual page.
-
-
4.11) What is an R-tree index?
-
-An R-tree index is used for indexing spatial data. A hash index can't
-handle range searches. A B-tree index only handles range searches in a
-single dimension. R-tree's can handle multi-dimensional data. For
-example, if an R-tree index can be built on an attribute of type point,
-the system can more efficiently answer queries such as "select all points
-within a bounding rectangle."
-
-The canonical paper that describes the original R-tree design is:
-
-Guttman, A. "R-trees: A Dynamic Index Structure for Spatial Searching."
-Proc of the 1984 ACM SIGMOD Int'l Conf on Mgmt of Data, 45-57.
-
-You can also find this paper in Stonebraker's "Readings in Database
-
-Built-in R-trees can handle polygons and boxes. In theory, R-trees can
-be extended to handle higher number of dimensions. In practice,
-extending R-trees requires a bit of work and we don't currently have any
-documentation on how to do it.
-
-
-
4.12) What is Genetic Query
-
-The GEQO module speeds query
-optimization when joining many tables by means of a Genetic
-Algorithm (GA). It allows the handling of large join queries through
-
-
4.13) How do I do regular expression searches and
-case-insensitive regular expression searches?
+
-The ~ operator does regular expression matching, and ~*
-does case-insensitive regular expression matching. There is no
-case-insensitive variant of the LIKE operator, but you can get the
-effect of case-insensitive LIKE with this:
+
Indexes do not require as much overhead, but do contain the data
+ that is being indexed, so they can be large also.
+
+
4.8) How do I find out what indices or
+ operations are defined in the database?
+
+
psql has a variety of backslash commands to show such
+ information. Use \? to see them.
+
+
Also try the file pgsql/src/tutorial/syscat.source. It
+ illustrates many of the SELECTs needed to get
+ information from the database system tables.
+
+
4.9) My queries are slow or don't make use of
+ the indexes. Why?
+
+
PostgreSQL does not automatically maintain statistics.
+ VACUUM must be run to update the statistics. After
+ statistics are updated, the optimizer knows how many rows in the
+ table, and can better decide if it should use indices. Note that
+ the optimizer does not use indices in cases when the table is small
+ because a sequential scan would be faster.
+
+
For column-specific optimization statistics, use VACUUM
+ ANALYZE. VACUUM ANALYZE is important for
+ complex multijoin queries, so the optimizer can estimate the number
+ of rows returned from each table, and choose the proper join order.
+ The backend does not keep track of column statistics on its own, so
+ VACUUM ANALYZE must be run to collect them
+ periodically.
+
+
Indexes are usually not used for ORDER BY
+ operations: a sequential scan followed by an explicit sort is
+ faster than an indexscan of all tuples of a large table, because it
+ takes fewer disk accesses.
+
+
When using wild-card operators such as LIKE or
+ ~, indices can only be used if the beginning of the search
+ is anchored to the start of the string. So, to use indices,
+ LIKE searches should not begin with %, and
+ ~ (regular expression searches) should start with
+ ^.
+
+
4.10) How do I see how the query optimizer
+ is evaluating my query?
+
+
See the EXPLAIN manual page.
+
+
4.11) What is an R-tree index?
+
+
An R-tree index is used for indexing spatial data. A hash index
+ can't handle range searches. A B-tree index only handles range
+ searches in a single dimension. R-trees can handle
+ multi-dimensional data. For example, if an R-tree index can be
+ built on an attribute of type point, the system can more
+ efficiently answer queries such as "select all points within a
+ bounding rectangle."
+
+
The canonical paper that describes the original R-tree design
+ is:
+
+
Guttman, A. "R-trees: A Dynamic Index Structure for Spatial
+ Searching." Proc of the 1984 ACM SIGMOD Int'l Conf on Mgmt of Data,
+ 45-57.
+
+
You can also find this paper in Stonebraker's "Readings in
+ Database Systems".
+
+
Built-in R-trees can handle polygons and boxes. In theory,
+ R-trees can be extended to handle a higher number of dimensions. In
+ practice, extending R-trees requires a bit of work and we don't
+ currently have any documentation on how to do it.
+
+
4.12) What is Genetic Query
+ Optimization?
+
+
The GEQO module speeds query optimization when joining many
+ tables by means of a Genetic Algorithm (GA). It allows the handling
+ of large join queries through nonexhaustive search.
+
+
4.13) How do I do regular expression
+ searches and case-insensitive regular expression searches?
+
+
The ~ operator does regular expression matching, and
+ ~* does case-insensitive regular expression matching. There
+ is no case-insensitive variant of the LIKE operator, but you can
+ get the effect of case-insensitive LIKE with
+ this:
- WHERE lower(textfield) LIKE lower(pattern)
+ WHERE lower(textfield) LIKE lower(pattern)
-
4.14) In a query, how do I detect if a field
-
-You test the column with IS NULL and IS NOT NULL.
-
+
4.14) In a query, how do I detect if a field
+ is NULL?
-
4.15) What is the difference between the
-various character types?
+
You test the column with IS NULL and IS NOT NULL.
+
4.15) What is the difference between the
+ various character types?
Type        Internal Name  Notes
--------------------------------------------------------------------
"char"      char           1 character
CHAR(#)     bpchar         blank-padded to the specified fixed length
VARCHAR(#)  varchar        size specifies maximum length, no padding
TEXT        text           no specific upper limit on length
BYTEA       bytea          variable-length byte array (null-safe)
+
-You will see the internal name when examining system catalogs
-and in some error messages.
-
-The last four types above are "varlena" types (i.e., the first four
-bytes on disk are the length, followed by the data). Thus the actual
-space used is slightly greater than the declared size. However, these
-data types are also subject to compression or being stored out-of-line
-by TOAST, so the space on disk might also be less than expected.
+
You will see the internal name when examining system catalogs
+ and in some error messages.
+
The last four types above are "varlena" types (i.e., the first
+ four bytes on disk are the length, followed by the data). Thus the
+ actual space used is slightly greater than the declared size.
+ However, these data types are also subject to compression or being
+ stored out-of-line by TOAST, so the space on disk might also be
+ less than expected.
-
serial/auto-incrementing field?>
+ 4.16.1) How do I create a serial/auto-incrementing field?
-PostgreSQL supports a SERIAL data type. It auto-creates a
-sequence and index on the column. For example, this:
+
PostgreSQL supports a SERIAL data type. It
+ auto-creates a sequence and index on the column. For example,
+ this:
    CREATE TABLE person (
        id SERIAL,
        name TEXT
    );

is automatically translated into this:
    CREATE SEQUENCE person_id_seq;
    CREATE TABLE person (
        id INT4 NOT NULL DEFAULT nextval('person_id_seq'),
        name TEXT
    );
    CREATE UNIQUE INDEX person_id_key ON person ( id );
See the create_sequence manual page for more information about
sequences. You can also use each row's OID field as a unique
value. However, if you need to dump and reload the database, you
need to use pg_dump's -o option or COPY WITH OIDS option to
preserve the OIDs.
4.16.2) How do I get the value of a SERIAL insert?

One approach is to retrieve the next SERIAL value from the
sequence object with the nextval() function before inserting and
then insert it explicitly. Using the example table in 4.16.1, that
might look like this:
    $newSerialID = nextval('person_id_seq');
    INSERT INTO person (id, name) VALUES ($newSerialID, 'Blaise Pascal');
You would then also have the new value stored in $newSerialID
for use in other queries (e.g., as a foreign key to the person
table). Note that the automatically created SEQUENCE object will
be named <table>_<serialcolumn>_seq, where table and serialcolumn
are the names of your table and your SERIAL column, respectively.
Alternatively, you could retrieve the assigned SERIAL value with
the currval() function after it was inserted by default, e.g.,
    INSERT INTO person (name) VALUES ('Blaise Pascal');
    $newID = currval('person_id_seq');
Finally, you could use the OID returned from the INSERT statement
to look up the default value, though this is probably the least
portable approach. In Perl, using DBI with Edmund Mergl's DBD::Pg
module, the oid value is made available via
$sth->{pg_oid_status} after $sth->execute().

4.16.3) Don't currval() and nextval() lead to a race condition
with other users?

No. This is handled by the backends.
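This is because currval() reports the value most recently handed
out by nextval() in your own backend, not the latest value across
all sessions. A sketch:

```sql
-- Even if another backend runs its own INSERT between these two
-- statements, currval() still returns the id assigned to this
-- session's INSERT.
INSERT INTO person (name) VALUES ('Blaise Pascal');
SELECT currval('person_id_seq');
```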
4.17) What is an OID? What is a TID?

OIDs are PostgreSQL's answer to unique row ids. Every row that
is created in PostgreSQL gets a unique OID. All OIDs generated
during initdb are less than 16384 (from
backend/access/transam.h). All user-created OIDs are equal to or
greater than this. By default, all these OIDs are unique not only
within a table or database, but unique within the entire
PostgreSQL installation.

PostgreSQL uses OIDs in its internal system tables to link rows
between tables. These OIDs can be used to identify specific user
rows and used in joins. It is recommended you use column type OID
to store OID values. You can create an index on the OID field for
faster access.

OIDs are assigned to all new rows from a central area that is
used by all databases. If you want to change the OID to something
else, or if you want to make a copy of the table, with the
original OIDs, there is no reason you can't do it:
    CREATE TABLE new_table(old_oid oid, mycol int);
    SELECT old_oid, mycol INTO new FROM old;
    COPY new TO '/tmp/pgtable';
    DELETE FROM new;
    COPY new WITH OIDS FROM '/tmp/pgtable';
OIDs are stored as 4-byte integers, and will overflow at 4
billion. No one has reported this ever happening, and we plan to
have the limit removed before anyone does.
TIDs are used to identify specific physical rows with block and
offset values. TIDs change after rows are modified or reloaded.
They are used by index entries to point to physical rows.
4.18) What is the meaning of some of the terms used in
PostgreSQL?

Some of the source code and older documentation use terms that
have more common usage. Here are some:
table, relation, class
row, record, tuple
column, field, attribute
retrieve, select
replace, update
append, insert
OID, serial value
portal, cursor
range variable, table name, table alias