-
+
Backup and Restore
 If recovery fails for an external reason, such as a system crash or
 the WAL archive becoming inaccessible, then the recovery can
 simply be restarted and it will resume almost from where it failed.
- Restartable recovery works by writing a restartpoint record to the control
+ Restartable recovery works by writing a restart-point record to the control
file at the first safely usable checkpoint record found after
checkpoint_timeout> seconds.
If we take a backup of the server files whilst a recovery is in progress,
- we will be able to restart the recovery from the last restartpoint.
+ we will be able to restart the recovery from the last restart point.
That backup now has many of the changes from previous WAL archive files,
so this version is now an updated version of the original base backup.
If we need to recover, it will be faster to recover from the
An external program can call pg_xlogfile_name_offset()>
- to find out the filename and the exact byte offset within it of
+ to find out the file name and the exact byte offset within it of
the latest WAL pointer. If the external program regularly polls
the server it can find out how far forward the pointer has
moved. It can then access the WAL file directly and copy those
-
+
amclusterable
bool
- Can an index of this type be CLUSTERed on?
+ Can an index of this type be clustered on?
|
amoptions
regproc
pg_proc.oid
- Function to parse and validate reloptions for an index
+ Function to parse and validate reloptions> for an index
vac_scale_factor
float4
- Multiplier for reltuples to add to
+ Multiplier for reltuples> to add to
vac_base_thresh>
anl_scale_factor
float4
- Multiplier for reltuples to add to
+ Multiplier for reltuples> to add to
anl_base_thresh>
operation. All rows inserted or deleted by transaction IDs before this one
have been marked as known good or deleted. This
is used to determine when commit-log space can be recycled.
- If InvalidTransactionId, then the minimum is unknown and can be
+ If InvalidTransactionId, then the minimum is unknown and can be
determined by scanning pg_class>.relvacuumxid>.
relabeled with a permanent (frozen>) transaction ID in this
database. This is useful to check whether a database must be
vacuumed soon to avoid transaction ID wrap-around problems.
- If InvalidTransactionId, then the minimum is unknown and can be
+ If InvalidTransactionId, then the minimum is unknown and can be
determined by scanning pg_class>.relminxid>.
pg_type.oid
An array with the data types of the function arguments. This includes
- only input arguments (including INOUT arguments), and thus represents
+ only input arguments (including INOUT arguments), and thus represents
the call signature of the function.
pg_type.oid
An array with the data types of the function arguments. This includes
- all arguments (including OUT and INOUT arguments); however, if all the
+ all arguments (including OUT and INOUT arguments); however, if all the
arguments are IN arguments, this field will be null.
Note that subscripting is 1-based, whereas for historical reasons
proargtypes> is subscripted from 0.
An array with the modes of the function arguments, encoded as
- i for IN arguments,
- o for OUT arguments,
- b for INOUT arguments.
- If all the arguments are IN arguments, this field will be null.
+ i for IN> arguments,
+ o for OUT> arguments,
+ b for INOUT> arguments.
+ If all the arguments are IN arguments, this field will be null.
Note that subscripts correspond to positions of
proallargtypes> not proargtypes>.
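 As an illustrative sketch, the relationship between proargtypes>,
 proallargtypes>, and proargmodes> can be examined directly in
 pg_proc>; the function name below is a placeholder:
SELECT proname, proargtypes, proallargtypes, proargmodes
FROM pg_proc
WHERE proname = 'my_function';   -- placeholder name
 For a function declared with only IN arguments, proallargtypes> and
 proargmodes> come back null, as described above.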
Advisory locks can be acquired on keys consisting of either a single
- bigint value or two integer values. A bigint key is displayed with its
+ bigint value or two integer values. A bigint key is displayed with its
high-order half in the classid> column, its low-order half
in the objid> column, and objsubid> equal
to 1. Integer keys are displayed with the first key in the
-
+
Client Authentication
The server will bind to the distinguished name specified as
- base dn> using the username supplied by the client.
+ base dn> using the user name supplied by the client.
If prefix> and suffix> is
- specified, it will be prepended and appended to the username
+ specified, it will be prepended and appended to the user name
before the bind. Typically, the prefix parameter is used to specify
cn=>, or DOMAIN\> in an Active
Directory environment.
-
+
Server Configuration
include 'filename'
- If the filename is not an absolute path, it is taken as relative to
+ If the file name is not an absolute path, it is taken as relative to
the directory containing the referencing configuration file.
Inclusions can be nested.
Specifies the main server configuration file
(customarily called postgresql.conf>).
- This parameter can only be set on the postgres command line.
+ This parameter can only be set on the postgres command line.
If you wish to keep the configuration files elsewhere than the
- data directory, the postgres
+ data directory, the postgres
command-line option or PGDATA environment variable
must point to the directory containing the configuration files,
and the data_directory> parameter must be set in
or power failure. The risks are similar to turning off
fsync>, though smaller. It may be safe to turn off
this parameter if you have hardware (such as a battery-backed disk
- controller) or filesystem software (e.g., Reiser4) that reduces
- the risk of partial page writes to an acceptably low level.
+ controller) or file-system software that reduces
+ the risk of partial page writes to an acceptably low level (e.g., ReiserFS 4).
This controls whether the array input parser recognizes
- unquoted NULL> as specifying a NULL array element.
+ unquoted NULL> as specifying a null array element.
By default, this is on>, allowing array values containing
- NULLs to be entered. However, PostgreSQL> versions
- before 8.2 did not support NULLs in arrays, and therefore would
+ null values to be entered. However, PostgreSQL> versions
+ before 8.2 did not support null values in arrays, and therefore would
treat NULL> as specifying a normal array element with
the string value NULL>. For backwards compatibility with
applications that require the old behavior, this variable can be
- Note that it is possible to create array values containing NULLs
+ Note that it is possible to create array values containing null values
even when this variable is off>.
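 A short illustrative session (the array values are invented) showing the
 effect of the setting:
SET array_nulls = on;
SELECT '{apple,NULL,cherry}'::text[];   -- the second element is a null value
SET array_nulls = off;
SELECT '{apple,NULL,cherry}'::text[];   -- the second element is the string 'NULL', displayed as "NULL"
SELECT ARRAY['apple', NULL, 'cherry'];  -- an array constructor still produces a null element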
-
+
If you have a fast link to the Internet, you may not need
, which instructs
- CVS to use gzip compression for transferred data. But
+ CVS to use gzip compression for transferred data. But
on a modem-speed link, it's a very substantial win.
update -d -P
- This supplies the option to all cvs commands, and the
- and options to cvs update. Then you just have
+ This supplies the option to all cvs> commands, and the
+ and options to cvs update>. Then you just have
to say
cvs update
When you tag more than one file with the same tag you can think
- about the tag as a curve drawn through a matrix of filename vs.
+ about the tag as a curve drawn through a matrix of file name vs.
revision number. Say we have 5 files with the following revisions:
A major advantage to using
CVSup is that it can reliably
replicate the entire CVS repository on your local system,
- allowing fast local access to cvs operations such as
+ allowing fast local access to cvs> operations such as
and . Other advantages include fast synchronization to
the
PostgreSQL server due to an efficient
streaming transfer protocol which only sends the changes since the last update.
- The following is a suggested CVSup config file from
+ The following is a suggested CVSup configuration file from
ftp site
If there is a directory structure in the tar file, then unpack
- the tar file within /usr/local/src and move the binaries into
+ the tar file within /usr/local/src and move the binaries into
the appropriate location as above.
- Install the Modula-3 rpms:
+ Install the Modula-3 RPMs:
# rpm -Uvh pm3*.rpm
-
+
Data Types
- XML> (eXtensible Markup Language) support is not one
+ XML> (Extensible Markup Language) support is not one
capability, but a variety of features supported by a database
system. These capabilities include storage, import/export,
validation, indexing, efficiency of modification, searching,
indexes to index specific
XML> fields. To index the
full contents of
XML> documents, the full-text indexing
tool /contrib/tsearch2> can be used. Of course,
- tsearch2 indexes have no XML> awareness so additional
+ Tsearch2 indexes have no XML> awareness so additional
/contrib/xml2> checks should be added to queries.
- /contrib/xml2> supports XSLT> (XML
+ /contrib/xml2> supports XSLT> (Extensible
Stylesheet Language Transformation).
-
+
Data Definition
However, the need to
recreate the view adds an extra step to adding and dropping
- individual partitions of the dataset.
+ individual partitions of the data set.
Constraint exclusion only works when the query's WHERE>
clause contains constants. A parameterized query will not be
optimized, since the planner cannot know what partitions the
- parameter value might select at runtime. For the same reason,
+ parameter value might select at run time. For the same reason,
stable> functions such as CURRENT_DATE
must be avoided.
- Avoid cross-datatype comparisons in the CHECK>
+ Avoid cross-data type comparisons in the CHECK>
constraints, as the planner will currently fail to prove such
conditions false. For example, the following constraint
will work if x is an integer
The problem is not limited to the bigint data type
— it can occur whenever the default data type of the
constant does not match the data type of the column to which it
- is being compared. Cross-datatype comparisons in the supplied
+ is being compared. Cross-data type comparisons in the supplied
queries are usually OK, just not in the CHECK> conditions.
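 As an illustrative sketch of the cross-type caveat (the table and constraint
 are invented):
CREATE TABLE part_one (
    x integer,
    CHECK ( x = 1 )   -- integer constant: the planner can prove this false for other partitions
);
-- Written instead as CHECK ( x = 1::bigint ), the comparison would be cross-type
-- (integer column against a bigint constant) and could not be used for exclusion.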
-
+
Documentation
SGML_CATALOG_FILES to point to the file
whenever you use
jade later on.
(This method is also an option if OpenJade is already
- installed and you want to install the rest of the toolchain
+ installed and you want to install the rest of the tool chain
locally.)
-
+
As a host variable you can also use arrays, typedefs, structs and
pointers. Moreover there are special types of host variables that exist
- only in ecpg.
+ only in ECPG.
Special types of variables
- ecpg contains some special types that help you to interact easily with
+ ECPG contains some special types that help you to interact easily with
data from the SQL server. For example it has implemented support for
- the varchar, numeric, date, timestamp and interval types.
+ the varchar>, numeric>, date>, timestamp>, and interval> types.
contains basic functions to deal with
those types, such that you do not need to send a query to the SQL
server just for adding an interval to a timestamp for example.
PGTYPESnumeric_to_asc
- Returns a pointer to a malloced string that contains the string
+ Returns a pointer to a string allocated by malloc that contains the string
representation of the numeric type num.
char *PGTYPESnumeric_to_asc(numeric *num, int dscale);
The function converts the numeric value from the variable that
nv> points to into the double variable that dp> points
- to. It retuns 0 on success and -1 if an error occurs, including
+ to. It returns 0 on success and -1 if an error occurs, including
overflow. On overflow, the global variable errno> will be set
to PGTYPES_NUM_OVERFLOW> additionally.
The function converts the numeric value from the variable that
nv> points to into the integer variable that ip>
- points to. It retuns 0 on success and -1 if an error occurs, including
+ points to. It returns 0 on success and -1 if an error occurs, including
overflow. On overflow, the global variable errno> will be set
to PGTYPES_NUM_OVERFLOW> additionally.
The function converts the numeric value from the variable that
nv> points to into the long integer variable that
- lp> points to. It retuns 0 on success and -1 if an error
+ lp> points to. It returns 0 on success and -1 if an error
occurs, including overflow. On overflow, the global variable
errno> will be set to PGTYPES_NUM_OVERFLOW>
additionally.
The function converts the numeric value from the variable that
src> points to into the decimal variable that
- dst> points to. It retuns 0 on success and -1 if an error
+ dst> points to. It returns 0 on success and -1 if an error
occurs, including overflow. On overflow, the global variable
errno> will be set to PGTYPES_NUM_OVERFLOW>
additionally.
The function converts the decimal value from the variable that
src> points to into the numeric variable that
- dst> points to. It retuns 0 on success and -1 if an error
+ dst> points to. It returns 0 on success and -1 if an error
occurs. Since the decimal type is implemented as a limited version of
the numeric type, overflow can not occur with this conversion.
specification. Note that timezones are not supported by ecpg. It can
parse them but does not apply any calculation as the
PostgreSQL> server does for example. Timezone
- specificiers are silently discarded.
+ specifiers are silently discarded.
The following table contains a few examples for input strings:
type which can be created on the heap only, the decimal type can be
created either on the stack or on the heap (by means of the functions
 PGTYPESdecimal_new() and PGTYPESdecimal_free()). There are a lot of other
- functions that deal with the decimal type in the Informix compatibility
+ functions that deal with the decimal type in the Informix compatibility
 mode described in .
- Informix compatibility mode
+ Informix compatibility mode
ecpg can be run in a so-called Informix compatibility mode>. If
- this mode is active, it tries to behave as if it were the Informix
- precompiler for Informix E/SQL. Generally spoken this will allow you to use
+ this mode is active, it tries to behave as if it were the Informix
+ precompiler for Informix E/SQL. Generally speaking, this will allow you to use
the dollar sign instead of the EXEC SQL> primitive to introduce
embedded SQL commands.
against libcompat> that is shipped with ecpg.
- Besides the previously explained syntactic sugar, the Informix compatibility
+ Besides the previously explained syntactic sugar, the Informix compatibility
mode ports some functions for input, output and transformation of data as
well as embedded SQL statements known from E/SQL to ecpg.
- Informix compatibility mode is closely connected to the pgtypeslib library
+ Informix compatibility mode is closely connected to the pgtypeslib library
of ecpg. pgtypeslib maps SQL data types to data types within the C host
- program and most of the additional functions of the Informix compatibility
+ program and most of the additional functions of the Informix compatibility
mode allow you to operate on those C host program types. Note however that
- the extent of the compatibility is limited. It does not try to copy Informix
+ the extent of the compatibility is limited. It does not try to copy Informix
behaviour; it allows you to do more or less the same operations and gives
you functions that have the same name and the same basic behavior but it is
- no drop-in replacement if you are using Informix at the moment. Moreover,
+ no drop-in replacement if you are using Informix at the moment. Moreover,
some of the data types are different. For example,
PostgreSQL's datetime and interval types do not
know about ranges like for example YEAR TO MINUTE> so you won't
ECPG_INFORMIX_NUM_UNDERFLOW> is returned. If the ASCII
representation could not be parsed,
ECPG_INFORMIX_BAD_NUMERIC> is returned or
- ECPG_INFORMIX_BAD_EXPONENT> if this problem ocurred while
+ ECPG_INFORMIX_BAD_EXPONENT> if this problem occurred while
parsing the exponent.
is returned.
- Note that the ecpg implementation differs from the Informix
- implementation. Informix limits an integer to the range from -32767 to
+ Note that the ecpg implementation differs from the Informix
+ implementation. Informix limits an integer to the range from -32767 to
32767, while the limits in the ecpg implementation depend on the
architecture (-INT_MAX .. INT_MAX>).
is returned.
- Note that the ecpg implementation differs from the Informix
- implementation. Informix limits a long integer to the range from
+ Note that the ecpg implementation differs from the Informix
+ implementation. Informix limits a long integer to the range from
-2,147,483,647 to 2,147,483,647, while the limits in the ecpg
implementation depend on the architecture (-LONG_MAX ..
LONG_MAX>).
error.
- Note that ecpg's implementation differs from the Informix
- implementation. In Informix the format can be influenced by setting
+ Note that ecpg's implementation differs from the Informix
+ implementation. In Informix the format can be influenced by setting
environment variables. In ecpg however, you cannot change the output
format.
The function receives the textual representation of the date to convert
(str>) and a pointer to a variable of type date
(d>). This function does not allow you to specify a format
- mask. It uses the default format mask of Informix which is
+ mask. It uses the default format mask of Informix which is
mm/dd/yyyy>. Internally, this function is implemented by
means of rdefmtdate>. Therefore, rstrdate> is
not faster and if you have the choice you should opt for
error occurred.
- Internally this function uses the
+ Internally, this function uses the
linkend="PGTYPEStimestampfmtasc"> function. See the reference there for
- informations on what format mask specifiers can be used.
+ information on what format mask specifiers can be used.
- CCHARTYPE - For a variable of type char or char*
+ CCHARTYPE - For a variable of type char or char*
- CSHORTTYPE - For a variable of type short int
+ CSHORTTYPE - For a variable of type short int
- CINTTYPE - For a variable of type int
+ CINTTYPE - For a variable of type int
- CBOOLTYPE - For a variable of type boolean
+ CBOOLTYPE - For a variable of type boolean
- CFLOATTYPE - For a variable of type float
+ CFLOATTYPE - For a variable of type float
- CLONGTYPE - For a variable of type long
+ CLONGTYPE - For a variable of type long
- CDOUBLETYPE - For a variable of type double
+ CDOUBLETYPE - For a variable of type double
- CDECIMALTYPE - For a variable of type decimal
+ CDECIMALTYPE - For a variable of type decimal
- CDATETYPE - For a variable of type date
+ CDATETYPE - For a variable of type date
- CDTIMETYPE - For a variable of type timestamp
+ CDTIMETYPE - For a variable of type timestamp
Functions return this value if an overflow occurred in a
- calculation. Internally it is defined to -1200 (the Informix
+ calculation. Internally it is defined to -1200 (the Informix
definition).
Functions return this value if an underflow occurred in a calculation.
- Internally it is defined to -1201 (the Informix definition).
+ Internally it is defined to -1201 (the Informix definition).
Functions return this value if an attempt to divide by zero is
- observed. Internally it is defined to -1202 (the Informix definition).
+ observed. Internally it is defined to -1202 (the Informix definition).
Functions return this value if a bad value for a year was found while
- parsing a date. Internally it is defined to -1204 (the Informix
+ parsing a date. Internally it is defined to -1204 (the Informix
definition).
Functions return this value if a bad value for a month was found while
- parsing a date. Internally it is defined to -1205 (the Informix
+ parsing a date. Internally it is defined to -1205 (the Informix
definition).
Functions return this value if a bad value for a day was found while
- parsing a date. Internally it is defined to -1206 (the Informix
+ parsing a date. Internally it is defined to -1206 (the Informix
definition).
Functions return this value if a parsing routine needs a short date
representation but did not get the date string in the right length.
- Internally it is defined to -1209 (the Informix definition).
+ Internally it is defined to -1209 (the Informix definition).
Functions return this value if Internally it is defined to -1210 (the
- Informix definition).
Functions return this value if Internally it is defined to -1211 (the
- Informix definition).
Functions return this value if a parsing routine was supposed to get a
format mask (like mmddyy>) but not all fields were listed
- correctly. Internally it is defined to -1212 (the Informix definition).
+ correctly. Internally it is defined to -1212 (the Informix definition).
the textual representation for a numeric value because it contains
errors or if a routine cannot complete a calculation involving numeric
variables because at least one of the numeric variables is invalid.
- Internally it is defined to -1213 (the Informix definition).
+ Internally it is defined to -1213 (the Informix definition).
Functions return this value if Internally it is defined to -1216 (the
- Informix definition).
Functions return this value if Internally it is defined to -1218 (the
- Informix definition).
Functions return this value if Internally it is defined to -1264 (the
- Informix definition).
-
+
Functions and Operators
Array comparisons compare the array contents element-by-element,
- using the default btree comparison function for the element data type.
+ using the default B-Tree comparison function for the element data type.
In multidimensional arrays the elements are visited in row-major order
(last subscript varies most rapidly).
If the contents of two arrays are equal but the dimensionality is
>> or
>=>,
or has semantics similar to one of these. (To be specific, an operator
- can be a row comparison operator if it is a member of a btree operator
- class, or is the negator of the => member of a btree operator
+ can be a row comparison operator if it is a member of a B-Tree operator
+ class, or is the negator of the => member of a B-Tree operator
class.)
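 For example, a row comparison decided by the second field:
SELECT (1, 2, 3) < (1, 3, 0);   -- true: the first fields are equal, and 2 < 3 decides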
pg_switch_xlog()
text
- Force switch to a new xlog file
+ Force switch to a new transaction log file
|
pg_current_xlog_location()
text
- Get current xlog write location
+ Get current transaction log write location
|
pg_current_xlog_insert_location()
text
- Get current xlog insert location
+ Get current transaction log insert location
|
pg_xlogfile_name_offset(location> text>)
text>, integer>
- Convert xlog location string to filename and decimal byte offset within file
+ Convert transaction log location string to file name and decimal byte offset within file
|
pg_xlogfile_name(location> text>)
text
- Convert xlog location string to filename
+ Convert transaction log location string to file name
arbitrary user-defined label for the backup. (Typically this would be
the name under which the backup dump file will be stored.) The function
writes a backup label file into the database cluster's data directory,
- and then returns the backup's starting xlog location as text. The user
+ and then returns the backup's starting transaction log location as text. The user
need not pay any attention to this result value, but it is provided in
case it is of use.
pg_stop_backup> removes the label file created by
pg_start_backup>, and instead creates a backup history file in
- the xlog archive area. The history file includes the label given to
- pg_start_backup>, the starting and ending xlog locations for
+ the transaction log archive area. The history file includes the label given to
+ pg_start_backup>, the starting and ending transaction log locations for
the backup, and the starting and ending times of the backup. The return
- value is the backup's ending xlog location (which again may be of little
- interest). After noting the ending location, the current xlog insertion
- point is automatically advanced to the next xlog file, so that the
- ending xlog file can be archived immediately to complete the backup.
+ value is the backup's ending transaction log location (which again may be of little
+ interest). After noting the ending location, the current transaction log insertion
+ point is automatically advanced to the next transaction log file, so that the
+ ending transaction log file can be archived immediately to complete the backup.
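 A minimal sketch of how these two calls bracket a base backup (the label is
 arbitrary, and the actual file copy is done by an external tool):
SELECT pg_start_backup('nightly base backup');
-- ... copy the cluster's data directory with tar, rsync, or similar ...
SELECT pg_stop_backup();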
- pg_switch_xlog> moves to the next xlog file, allowing the
+ pg_switch_xlog> moves to the next transaction log file, allowing the
current file to be archived (assuming you are using continuous archiving).
- The result is the ending xlog location within the just-completed xlog file.
- If there has been no xlog activity since the last xlog switch,
+ The result is the ending transaction log location within the just-completed transaction log file.
+ If there has been no transaction log activity since the last transaction log switch,
pg_switch_xlog> does nothing and returns the end location
- of the previous xlog file.
+ of the previous transaction log file.
- pg_current_xlog_location> displays the current xlog write
+ pg_current_xlog_location> displays the current transaction log write
location in the same format used by the above functions. Similarly
- pg_current_xlog_insert_location> displays the current xlog
- insertion point. The insertion point is the logical> end of xlog
+ pg_current_xlog_insert_location> displays the current transaction log
+ insertion point. The insertion point is the logical> end of transaction log
at any instant, while the write location is the end of what has actually
been written out from the server's internal buffers. The write location
is the end of what can be examined from outside the server, and is usually
- what you want if you are interested in archiving partially-complete xlog
+ what you want if you are interested in archiving partially-complete transaction log
files. The insertion point is made available primarily for server
debugging purposes. These are both read-only operations and do not
require superuser permissions.
You can use pg_xlogfile_name_offset> to extract the
- corresponding xlog filename and byte offset from the results of any of the
+ corresponding transaction log file name and byte offset from the results of any of the
above functions. For example:
postgres=# select * from pg_xlogfile_name_offset(pg_stop_backup());
00000001000000000000000D | 4039624
(1 row)
- Similarly, pg_xlogfile_name> extracts just the xlog filename.
- When the given xlog location is exactly at an xlog file boundary, both
- these functions return the name of the preceding xlog file.
- This is usually the desired behavior for managing xlog archiving
+ Similarly, pg_xlogfile_name> extracts just the transaction log file name.
+ When the given transaction log location is exactly at a transaction log file boundary, both
+ these functions return the name of the preceding transaction log file.
+ This is usually the desired behavior for managing transaction log archiving
behavior, since the preceding file is the last one that currently
needs to be archived.
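 For example, combining two of the functions above to see which transaction log
 file holds the current write position:
SELECT pg_xlogfile_name(pg_current_xlog_location());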
pg_stat_file> returns a record containing the file
size, last accessed time stamp, last modified time stamp,
last file status change time stamp (Unix platforms only),
- file creation timestamp (Windows only), and a boolean
+ file creation time stamp (Windows only), and a boolean
indicating if it is a directory. Typical usages include:
SELECT * FROM pg_stat_file('filename');
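 A single field of the returned record can also be selected directly, for example:
SELECT (pg_stat_file('filename')).modification;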
-
+
GiST Indexes
cube
- Indexing for multi-dimensional cubes
+ Indexing for multidimensional cubes
-
+
A Brief History of PostgreSQL
Office (
ARO), the National Science Foundation
(
NSF), and ESL, Inc. The implementation of
POSTGRES began in 1986. The initial
- concepts for the system were presented in
+ concepts for the system were presented in ,
and the definition of the initial data model appeared in
linkend="ROWE87">. The design of the rule system at that time was
described in . The rationale and
, was released to a few external users in
June 1989. In response to a critique of the first rule system
(
), the rule system was redesigned (
- linkend="STON90b">) and Version 2 was released in June 1990 with
+ linkend="STON90b">), and Version 2 was released in June 1990 with
the new rule system. Version 3 appeared in 1991 and added support
for multiple storage managers, an improved query executor, and a
rewritten rule system. For the most part, subsequent releases
-
+
Indexes
indexing strategy.
As an example, the standard distribution of
PostgreSQL includes GIN operator classes
- for one-dimentional arrays, which support indexed
+ for one-dimensional arrays, which support indexed
queries using these operators:
(See for the meaning of
these operators.)
Other GIN operator classes are available in the contrib>
- tsearch2 and intarray modules. For more information see .
+ tsearch2 and intarray modules. For more information see .
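 An illustrative sketch of using such an operator class (table, column, and
 value are invented):
CREATE INDEX doc_tags_idx ON documents USING gin (tags);
SELECT * FROM documents WHERE tags @> ARRAY['postgresql'];   -- a query the GIN index can serve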
-
+
The Information Schema
IN for input parameter,
OUT for output parameter,
- and INOUT for input/ouput parameter.
+ and INOUT for input/output parameter.
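 For example, a sketch of reading this column for one routine (the function name
 is a placeholder, and specific_name> carries a system-generated suffix):
SELECT parameter_name, parameter_mode, data_type
FROM information_schema.parameters
WHERE specific_schema = 'public'
  AND specific_name LIKE 'my_function%'
ORDER BY ordinal_position;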
-
+
command, because it might be exposed in command logs, activity displays,
and so on. Instead, use this function to convert the password to encrypted
form before it is sent. The arguments are the cleartext password, and the SQL
-name of the user it is for. The return value is a malloc'd string, or NULL if
-out-of-memory. The caller may assume the string doesn't contain any special
+name of the user it is for. The return value is a string allocated by
+malloc, or NULL if out of memory.
+The caller may assume the string doesn't contain any special
characters that would require escaping. Use PQfreemem> to free
the result when done with it.
entries first when you are using wildcards.)
If an entry needs to contain : or
\, escape this character with \.
-A hostname of localhost> matches both TCP (hostname
+A host name of localhost> matches both TCP (hostname
localhost>) and Unix domain socket (pghost> empty or the
default socket directory) connections coming from the local machine.
fail if the server does not present a certificate; therefore, to
use this feature the server must also have a root.crt> file.
Certificate Revocation List (CRL) entries are also checked if the file
- ~/.postgresql/root.crl exists (%APPDATA%\postgresql\root.crl
+ ~/.postgresql/root.crl exists (%APPDATA%\postgresql\root.crl
on Microsoft Windows).
- Returns 1 if the libpq is thead-safe and
+ Returns 1 if the libpq is thread-safe and
0 if it is not.
-
+
Large Objects
creates a new large object.
The return value is the OID that was assigned to the new large object,
- or InvalidOid (zero) on failure.
+ or InvalidOid (zero) on failure.
mode is unused and
ignored as of
PostgreSQL 8.1; however, for
specified by lobjId;
if so, failure occurs if that OID is already in use for some large
object. If lobjId
- is InvalidOid (zero) then lo_create> assigns an unused
+ is InvalidOid (zero) then lo_create> assigns an unused
OID (this is the same behavior as lo_creat>).
The return value is the OID that was assigned to the new large object,
- or InvalidOid (zero) on failure.
+ or InvalidOid (zero) on failure.
lo_create> is new as of PostgreSQL
8.1; if this function is run against an older server version, it will
- fail and return InvalidOid.
+ fail and return InvalidOid.
specifies the operating system name of
the file to be imported as a large object.
The return value is the OID that was assigned to the new large object,
- or InvalidOid (zero) on failure.
+ or InvalidOid (zero) on failure.
Note that the file is read by the client interface library, not by
- the server; so it must exist in the client filesystem and be readable
+ the server; so it must exist in the client file system and be readable
by the client application.
-
+
Routine Database Maintenance Tasks
The standard form of VACUUM> can run in parallel with production
- database operations. Commands such as SELECTs, INSERTs, UPDATEs and DELETEs
+ database operations. Commands such as SELECT,
+ INSERT, UPDATE, and DELETE
will continue to function as normal, though you will not be able to modify the
- definition of a table with commands such as ALTER TABLE ADD COLUMN
+ definition of a table with commands such as ALTER TABLE ADD COLUMN
while it is being vacuumed.
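 For example (the table name is a placeholder), this can be issued during normal
 operation:
VACUUM ANALYZE my_table;   -- does not block concurrent SELECT, INSERT, UPDATE, or DELETE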
Beginning in
PostgreSQL 8.0, there are
configuration parameters that can be adjusted to further reduce the
-
+
PL/Perl - Perl Procedural Language
 within stored functions, of the manifold string
 munging operators and functions available for Perl. Parsing
 complex strings may be easier using Perl than it is with the
- string functions and control structures provided in PL/pgsql.
+ string functions and control structures provided in PL/pgSQL.
To install PL/Perl in a particular database, use
Any columns in the declared result data type that are not present in the
- hash will be returned as NULLs.
+ hash will be returned as null values.
-
+
PL/pgSQL - SQL Procedural Language
client and server
Intermediate results that the client does not
- need do not need to be marshalled or transferred between server
+ need do not need to be marshaled or transferred between server
and client
There is no need for additional rounds of query
Any
PL/pgSQL variable name appearing
in the query text is replaced by a parameter symbol, and then the
current value of the variable is provided as the parameter value
- at runtime. This allows the same textual query to do different
+ at run time. This allows the same textual query to do different
things in different calls of the function.
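 A minimal sketch of that substitution (the table and column are invented); the
 variable cutoff> becomes a query parameter whose current value is supplied
 on each call:
CREATE FUNCTION rows_newer_than(cutoff timestamp) RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- "cutoff" in the query text is replaced by a parameter symbol;
    -- its current value is bound when the statement is executed
    SELECT count(*) INTO n FROM my_table WHERE created_at > cutoff;
    RETURN n;
END;
$$ LANGUAGE plpgsql;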
substituted into the rest of the query as usual.
This works for SELECT>,
INSERT>/UPDATE>/DELETE> with
- RETURNING>, and utility commands that return rowset
+ RETURNING>, and utility commands that return row-set
results (such as EXPLAIN>).
Except for the INTO> clause, the SQL command is the same
as it would be written outside
PL/pgSQL.
RAISE EXCEPTION presently always generates
- the same SQLSTATE code, P0001>, no matter what message
+ the same SQLSTATE code, P0001>, no matter what message
it is invoked with. It is possible to trap this exception with
EXCEPTION ... WHEN RAISE_EXCEPTION THEN ...> but there
is no way to tell one RAISE> from another.
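 A sketch of trapping it, shown as a fragment of a PL/pgSQL function body; the
 message and the recovery action are invented:
BEGIN
    RAISE EXCEPTION 'stock level too low';
EXCEPTION
    WHEN raise_exception THEN   -- matches SQLSTATE P0001
        RETURN false;           -- whatever recovery action the function needs
END;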
-
+
PL/Python - Python Procedural Language
args[]; named arguments are also passed as ordinary
variables to the Python script. The result is returned from the Python code
in the usual way, with return or
- yield (in case of a resultset statement).
+ yield (in case of a result-set statement).
-
+
Queries
Each parenthesized list of expressions generates a row in the table.
The lists must all have the same number of elements (i.e., the number
of columns in the table), and corresponding entries in each list must
- have compatible datatypes. The actual datatype assigned to each column
+ have compatible data types. The actual data type assigned to each column
of the result is determined using the same rules as for UNION>
(see ).
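 For instance, a small stand-alone VALUES> list; each parenthesized list
 becomes one row, and the columns resolve to integer and text (with default
 names column1> and column2>):
VALUES (1, 'one'), (2, 'two'), (3, 'three');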
The same, when the column has a default expression that won't automatically
- cast to the new datatype:
+ cast to the new data type:
ALTER TABLE foo
ALTER COLUMN foo_timestamp DROP DEFAULT,
connected to a database can see all the comments for objects in
that database (although only superusers can change comments for
objects that they don't own). For shared objects such as
- databases, roles, and tablespaces comments are stored gloablly
+ databases, roles, and tablespaces comments are stored globally
and any user connected to any database can see all the comments
for shared objects. Therefore, don't put security-critical
information in comments.
DELETE to always fire before the delete action, even a cascading
one. This is considered more consistent. There is also unpredictable
behavior when BEFORE triggers modify rows that are later
- to be modified by referential actions. This can lead to contraint violations
+ to be modified by referential actions. This can lead to constraint violations
or stored data that does not honor the referential constraint.
This command is similar to the corresponding command in the SQL
- standard, aapart from the IF EXISTS>
+ standard, apart from the IF EXISTS>
option, which is a
PostgreSQL> extension.
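 For example, with IF EXISTS> the command merely issues a notice when the
 object is missing (DROP TABLE> is used here purely as an illustration,
 with a placeholder name):
DROP TABLE IF EXISTS my_table;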
But note that the CREATE TYPE command
and the data type extension mechanisms in
- To insert multiple rows using the multi-row VALUES> syntax:
+ To insert multiple rows using the multirow VALUES> syntax:
INSERT INTO films (code, title, did, date_prod, kind) VALUES
-
+
>
- Print the value of the CC macro that was used for building
+ Print the value of the CC variable that was used for building
PostgreSQL>. This shows the C compiler used.
>
- Print the value of the CPPFLAGS macro that was used for building
+ Print the value of the CPPFLAGS variable that was used for building
PostgreSQL>. This shows C compiler switches needed
at preprocessing time (typically, -I> switches).
>
- Print the value of the CFLAGS macro that was used for building
+ Print the value of the CFLAGS variable that was used for building
PostgreSQL>. This shows C compiler switches.
>
- Print the value of the CFLAGS_SL macro that was used for building
+ Print the value of the CFLAGS_SL variable that was used for building
PostgreSQL>. This shows extra C compiler switches
used for building shared libraries.
>
- Print the value of the LDFLAGS macro that was used for building
+ Print the value of the LDFLAGS variable that was used for building
PostgreSQL>. This shows linker switches.
>
- Print the value of the LDFLAGS_SL macro that was used for building
+ Print the value of the LDFLAGS_SL variable that was used for building
PostgreSQL>. This shows linker switches
used for building shared libraries.
>
- Print the value of the LIBS macro that was used for building
+ Print the value of the LIBS variable that was used for building
PostgreSQL>. This normally contains -l>
switches for external libraries linked into
PostgreSQL>.
- Because pg_dump is used to tranfer data
+ Because pg_dump is used to transfer data
to newer versions of
PostgreSQL>, the output of
pg_dump can be loaded into
newer
PostgreSQL> databases. It also can read older
- Set the console font to <quote>Lucida Console>, because the
+ Set the console font to <literal>Lucida Console>, because the
raster font does not work with the ANSI code page.
When VALUES> is used in INSERT>, the values are all
- automatically coerced to the datatype of the corresponding destination
+ automatically coerced to the data type of the corresponding destination
column. When it's used in other contexts, it may be necessary to specify
- the correct datatype. If the entries are all quoted literal constants,
+ the correct data type. If the entries are all quoted literal constants,
coercing the first is sufficient to determine the assumed type for all:
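 An illustrative sketch of that rule, with invented values; the cast on the
 first entry fixes the column type as date> for the whole list:
SELECT * FROM (VALUES ('2006-01-01'::date), ('2006-02-02'), ('2006-03-03')) AS t(d);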
-
+
+
PostgreSQL Coding Conventions
ereport(level, (errmsg_internal("format string", ...)));
- Notice that the SQLSTATE errcode is always defaulted, and the message
+ Notice that the SQLSTATE error code is always defaulted, and the message
string is not included in the internationalization message dictionary.
Therefore, elog> should be used only for internal errors and
low-level debug logging. Any message that is likely to be of interest to
-
+
Server Programming Interface
then you may use the
global pointer SPITupleTable *SPI_tuptable to
access the result rows. Some utility commands (such as
- EXPLAIN>) also return rowsets, and SPI_tuptable>
+ EXPLAIN>) also return row sets, and SPI_tuptable>
will contain the result in these cases too.
-
+
Reliability and the Write-Ahead Log
permanent storage before> modifying the actual page on
disk. By doing this, during crash recovery
PostgreSQL> can
restore partially-written pages. If you have a battery-backed disk
- controller or file-system software (e.g., Reiser4) that prevents partial
- page writes, you can turn off this page imaging by using the
+ controller or file-system software that prevents partial page writes
+ (e.g., ReiserFS 4), you can turn off this page imaging by using the
parameter.
-
+
Interfacing Extensions To Indexes
|
- consistent - determine whether key satifies the
+ consistent - determine whether key satisfies the
query qualifier
1