-
PostgreSQL Date Order Conventions
- Order
+
Postgres Date Order Conventions
+ Date Order
|
Style Specification
+ Description
Example
|
European
+ day/month/year
17/12/1997 15:37:16.00 MET
|
US
+ month/day/year
12/17/1997 07:37:16.00 PST
- interval output looks like the input format, expect that units like
+ interval output looks like the input format, except that units like
week or century are converted to years and days.
In ISO mode the output looks like
+
[ Quantity Units [ ... ] ] [ Days ] Hours:Minutes [ ago ]
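For example, under the conversion rule just described, a query such as the
following (a minimal sketch) should display something like 100 years 14 days:

SELECT interval '1 century 2 weeks';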
Time Zones
-
PostgreSQL endeavors to be compatible with
+
Postgres endeavors to be compatible with
SQL92 definitions for typical usage.
However, the
SQL92 standard has an odd mix of date and
time types and capabilities. Two obvious problems are:
- To address these difficulties,
PostgreSQL
+ To address these difficulties,
Postgres
associates time zones only with date and time
types which contain both date and time,
and assumes local time for any type containing only
-
PostgreSQL obtains time zone support
+
Postgres obtains time zone support
from the underlying operating system for dates between 1902 and
2038 (near the typical date limits for Unix-style
systems). Outside of this range, all dates are assumed to be
Internals
-
PostgreSQL uses Julian dates
+
Postgres uses Julian dates
for all date/time calculations. They have the nice property of correctly
predicting/calculating any date more recent than 4713 BC
to far into the future, using the assumption that the length of the
point is specified using the following syntax:
-( x , y )
- x , y
-where
- x is the x-axis coordinate as a floating point number
- y is the y-axis coordinate as a floating point number
-
+
+( x , y )
+ x , y
+
+
+ where the arguments are
+
+
+
+ x
+
+ The x-axis coordinate as a floating point number.
+
+
+
+
+
+ y
+
+ The y-axis coordinate as a floating point number.
+
+
+
+
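A minimal sketch of point input and output (the table name test_points is
hypothetical):

CREATE TABLE test_points (p point);
INSERT INTO test_points VALUES ('(1.0,2.0)');
SELECT p FROM test_points;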
lseg is specified using the following syntax:
-( ( x1 , y1 ) , ( x2 , y2 ) )
- ( x1 , y1 ) , ( x2 , y2 )
- x1 , y1 , x2 , y2
-where
- (x1,y1) and (x2,y2) are the endpoints of the segment
-
+
+
+( ( x1 , y1 ) , ( x2 , y2 ) )
+ ( x1 , y1 ) , ( x2 , y2 )
+ x1 , y1 , x2 , y2
+
+
+ where the arguments are
+
+
+
+ (x1,y1)
+ (x2,y2)
+
+ The endpoints of the line segment.
+
+
+
+
box is specified using the following syntax:
-( ( x1 , y1 ) , ( x2 , y2 ) )
- ( x1 , y1 ) , ( x2 , y2 )
- x1 , y1 , x2 , y2
-where
- (x1,y1) and (x2,y2) are opposite corners
-
+
+( ( x1 , y1 ) , ( x2 , y2 ) )
+ ( x1 , y1 ) , ( x2 , y2 )
+ x1 , y1 , x2 , y2
+
+
+ where the arguments are
+
+
+
+ (x1,y1)
+ (x2,y2)
+
+ Opposite corners of the box.
+
+
+
+
+
Boxes are output using the first syntax.
The corners are reordered on input to store
the lower left corner first and the upper right corner last.
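Because of this reordering, either corner order is accepted on input; the two
literals below (a small sketch) denote the same box:

SELECT box '((2,2),(0,0))';
SELECT box '((0,0),(2,2))';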
isopen(p)
and
isclosed(p)
- are supplied to select either type in a query.
+ are supplied to test for either type in a query.
path is specified using the following syntax:
-( ( x1 , y1 ) , ... , ( xn , yn ) )
-[ ( x1 , y1 ) , ... , ( xn , yn ) ]
- ( x1 , y1 ) , ... , ( xn , yn )
- ( x1 , y1 , ... , xn , yn )
- x1 , y1 , ... , xn , yn
-where
- (x1,y1),...,(xn,yn) are points 1 through n
- a leading "[" indicates an open path
- a leading "(" indicates a closed path
-
+
+( ( x1 , y1 ) , ... , ( xn , yn ) )
+[ ( x1 , y1 ) , ... , ( xn , yn ) ]
+ ( x1 , y1 ) , ... , ( xn , yn )
+ ( x1 , y1 , ... , xn , yn )
+ x1 , y1 , ... , xn , yn
+
+
+ where the arguments are
+
+
+
+ (x,y)
+
+ Endpoints of the line segments comprising the path.
+ A leading square bracket ("[") indicates an open path, while
+ a leading parenthesis ("(") indicates a closed path.
+
+
+
+
+
Paths are output using the first syntax.
Note that
Postgres versions prior to
v6.1 used a format for paths which had a single leading parenthesis,
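A short sketch of the open/closed distinction, using the isopen predicate
mentioned earlier; the first query returns true (leading square bracket), the
second false (leading parenthesis):

SELECT isopen(path '[(0,0),(1,1),(2,0)]');
SELECT isopen(path '((0,0),(1,1),(2,0))');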
polygon is specified using the following syntax:
-( ( x1 , y1 ) , ... , ( xn , yn ) )
- ( x1 , y1 ) , ... , ( xn , yn )
- ( x1 , y1 , ... , xn , yn )
- x1 , y1 , ... , xn , yn
-where
- (x1,y1),...,(xn,yn) are points 1 through n
-
+
+( ( x1 , y1 ) , ... , ( xn , yn ) )
+ ( x1 , y1 ) , ... , ( xn , yn )
+ ( x1 , y1 , ... , xn , yn )
+ x1 , y1 , ... , xn , yn
+
+
+ where the arguments are
+
+
+
+ (x,y)
+
+ Endpoints of the line segments comprising the boundary of the
+ polygon.
+
+
+
+
+
Polygons are output using the first syntax.
Note that
Postgres versions prior to
v6.1 used a format for polygons which had a single leading parenthesis, the list
- of x-axis coordinates, the list of y-axis coordinates,
+ of x-axis coordinates, the list of y-axis coordinates,
followed by a closing parenthesis.
The built-in function upgradepoly is supplied to convert
polygons dumped and reloaded from pre-v6.1 databases.
circle is specified using the following syntax:
-< ( x , y ) , r >
-( ( x , y ) , r )
- ( x , y ) , r
- x , y , r
-where
- (x,y) is the center of the circle
- r is the radius of the circle
-
+
+< ( x , y ) , r >
+( ( x , y ) , r )
+ ( x , y ) , r
+ x , y , r
+
+
+ where the arguments are
+
+
+
+ (x,y)
+
+ Center of the circle.
+
+
+
+
+
+ r
+
+ Radius of the circle.
+
+
+
+
+
Circles are output using the first syntax.
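All four input forms listed above denote the same circle; a minimal sketch:

SELECT circle '<(0,0),2>';
SELECT circle '((0,0),2)';
SELECT circle '(0,0),2';
SELECT circle '0,0,2';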
- Julian Day
is different from Julian Date
.
+ "Julian Day" is different from "Julian Date".
The Julian calendar was introduced by Julius Caesar in 45 BC. It was
in common use until 1582, when countries started changing to the
-
-
After you have created and registered a user-defined
function, your work is essentially done.
describes how to perform the compilation and
link-editing required before you can load your user-defined
functions into a running
Postgres server.
- Note that
- this process has changed as of Version 4.2.
file with special compiler flags and a shared library
must be produced.
The necessary steps with HP-UX are as follows. The +z
- flag to the HP-UX C compiler produces so-called
- "Position Independent Code" (PIC) and the +u flag
- removes
+ flag to the HP-UX C compiler produces
+ Position Independent Code (PIC)
+ and the +u flag removes
some alignment restrictions that the PA-RISC architecture
normally enforces. The object file must be turned
into a shared library using the HP-UX link editor with
the -b option. This sounds complicated but is actually
very simple, since the commands to do it are just:
+
# simple HP-UX example
% cc +z +u -c foo.c
% ld -b -o foo.sl foo.o
command line.
+
+
+
The purpose of documentation is to make
Postgres
- easier to learn, use, and develop.
+ easier to learn, use, and extend.
The documentation set should describe the
Postgres
system, language, and interfaces.
It should be able to answer
formats:
+
Plain text for pre-installation information.
-
+
+
+
HTML, for on-line browsing and reference.
-
- Hardcopy, for in-depth reading and reference.
-
+
+
+
+ Hardcopy (Postscript or PDF), for in-depth reading and reference.
+
+
+
man pages, for quick reference.
-
+
+
-
Hardcopy Generation for v6.5
+
Hardcopy Generation for v7.0
The hardcopy Postscript documentation is generated by converting the
- Export the result as ASCII Layout
.
+ Export the result as "ASCII Layout".
Using emacs or vi, clean up the tabular information in
- INSTALL. Remove the mailto
+ INSTALL. Remove the "mailto"
URLs for the porting contributors to shrink
the column heights.
Several areas are addressed while generating Postscript
- hardcopy.
+ hardcopy, including RTF repair, ToC generation, and page break
+ adjustments.
- Applixware does not seem to do a complete job of importing
RTF
- generated by jade/MSS. In particular, all text is given the
- Header1
style attribute label, although the text
- formatting itself is acceptable. Also, the Table of Contents page
- numbers do not refer to the section listed in the table, but rather
- refer to the page of the ToC itself.
+
jade, an integral part of the
+ hardcopy procedure, omits specifying a default style for body
+ text. In the past, this undiagnosed problem led to a long process
+ of Table of Contents (ToC) generation. However, with great help
+ from the ApplixWare folks the symptom was diagnosed and a
+ workaround is available.
+
-
- Open a new document in
Applix Words and
- then import the
RTF file.
-
-
- Print out the existing Table of Contents, to mark up in the following
- few steps.
+ Repair the RTF file to correctly specify all
+ styles, in particular the default style. The field can be added
+ using
vi or the following small shell script:
+
+#!/bin/sh
+# fixrtf.sh
+# Utility to repair slight damage in RTF files generated by jade
+#
+for i in $* ; do
+ mv $i $i.orig
+ cat $i.orig | sed 's#\\stylesheet#\\stylesheet{\\s0 Normal;}#' > $i
+done
+
+exit
+
+
+ where the script is adding {\s0 Normal;} as
+ the zero-th style in the document. According to ApplixWare, the
+ RTF standard would prohibit adding an implicit zero-th style,
+ though M$Word happens to handle this case.
- Insert figures into the document. Center each figure on the page using
- the centering margins button.
-
- Not all documents have figures.
- You can grep the
SGML source files for
- the string graphic
to identify those parts of the
- documentation which may have figures. A few figures are replicated in
- various parts of the documentation.
+ Open a new document in
Applix Words and
+ then import the
RTF file.
- Work through the document, adjusting page breaks and table column
- widths.
+ Generate a new ToC using ApplixWare.
+
+
+
+ Select the existing ToC lines, from the beginning of the first
+ character on the first line to the last character of the last
+ line.
+
+
+
+
+ Build a new ToC using
+ Tools.BookBuilding.CreateToC. Select the
+ first three levels of headers for inclusion in the ToC.
+ This will
+ replace the existing lines imported in the RTF with a native
+ ApplixWare ToC.
+
+
+
+
+ Adjust the ToC formatting by using
+ Format.Style, selecting each of the three
+ ToC styles, and adjusting the indents for First and
+ Left. Use the following values:
+
+
+
+ Indent Formatting for Table of Contents
+
+ | Style | First Indent (inches) | Left Indent (inches)
+ | TOC-Heading 1 | 0.6 | 0.6
+ | TOC-Heading 2 | 1.0 | 1.0
+ | TOC-Heading 3 | 1.4 | 1.4
+
- If a bibliography is present, Applix Words seems to mark all remaining
- text after the first title as having an underlined attribute. Select
- all remaining text, turn off underlining using the underlining button,
- then explicitly underline each document and book title.
+ Work through the document to:
+
+
+
+ Adjust page breaks.
+
+
+
+
+ Adjust table column widths.
+
+
+
+
+ Insert figures into the document. Center each figure on the page using
+ the centering margins button on the ApplixWare toolbar.
+
+
+ Not all documents have figures.
+ You can grep the
SGML source files for
+ the string "graphic" to identify those parts of the
+ documentation which may have figures. A few figures are replicated in
+ various parts of the documentation.
+
+
+
+
+
- Work through the document, marking up the ToC hardcopy with the actual
- page number of each ToC entry.
+ Replace the right-justified page numbers in the Examples and
+ Figures portions of the ToC with
+ correct values. This only takes a few minutes per document.
- Replace the right-justified incorrect page numbers in the ToC with
- correct values. This only takes a few minutes per document.
+ If a bibliography is present, remove the short
+ form reference title from each entry. The
+
DocBook stylesheets from Norm Walsh
+ seem to print these out, even though this is a subset of the
+ information immediately following.
- Print
the document
+ "Print" the document
to a file in Postscript format.
This describes an embedded
SQL in
C
- It is written by
Linus Tolke
Permission is granted to copy and use in the same way as you are allowed
- to copy and use the rest of
the PostgreSQL.
+ to copy and use the rest of
PostgreSQL.
+
The following list shows all the known incompatibilities. If you find one
- not listed please notify
Michael
- Meskes. Note, however, that we list only incompatibilities from
+ not listed please notify
+ Note, however, that we list only incompatibilities from
a precompiler of another RDBMS to
ecpg and not
additional
ecpg features that these RDBMS do not
have.
This request is modified
by the input variables, i.e. the variables that were not known at
compile time but are to be entered in the request. Where the variables
- should go the string contains ;
.
+ should go the string contains ";".
to the .profile file in your home directory.
From now on, we will assume that you have added the
Postgres bin directory to your path. In addition, we
- will make frequent reference to
setting a shell
- variable or setting an environment variable
throughout
+ will make frequent reference to "setting a shell
+ variable" or "setting an environment variable" throughout
this document. If you did not fully understand the
last paragraph on modifying your search path, you
should consult the Unix manual pages that describe your
SQL Functions
- <quote>SQL functions</quote> are constructs
+ <firstterm>SQL functions</firstterm> are constructs
defined by the
SQL92 standard which have
function-like syntax but which can not be implemented as simple
functions.
preserve months and years
age('now', timestamp '1957-06-13')
- |
- timestamp(abstime)
- timestamp
- convert to timestamp
- timestamp(abstime 'now')
-
- |
- timestamp(date)
- timestamp
- convert to timestamp
- timestamp(date 'today')
-
- |
- timestamp(date,time)
- timestamp
- convert to timestamp
- timestamp(timestamp '1998-02-24',time '23:07');
-
|
date_part(text,timestamp)
float8
date_trunc('month',abstime 'now')
|
- isfinite(abstime)
- bool
- a finite time?
- isfinite(abstime 'now')
+ interval(reltime)
+ interval
+ convert to interval
+ interval(reltime '4 hours')
|
isfinite(timestamp)
reltime(interval '4 hrs')
|
- interval(reltime)
- interval
- convert to interval
- interval(reltime '4 hours')
+ timestamp(date)
+ timestamp
+ convert to timestamp
+ timestamp(date 'today')
+
+ |
+ timestamp(date,time)
+ timestamp
+ convert to timestamp
+ timestamp(timestamp '1998-02-24',time '23:07');
+
+ |
+ to_char(timestamp,text)
+ text
+ convert to string
+ to_char(timestamp '1998-02-24','DD');
HH12
hour of day (01-12)
+ |
+ HH24
+ hour of day (00-23)
+
|
MI
minute (00-59)
month in Roman Numerals (I-XII; I=JAN) - upper case
|
- rn
+ rm
month in Roman Numerals (I-XII; I=JAN) - lower case
to_timestamp and to_date
skip blank space if the FX option is
- not use. FX Must be specified as the first item
+ not used. FX must be specified as the first item
in the template.
- '\' - must be use as double \\, example '\\HH\\MI\\SS'
+ Backslash ("\") must be specified with a double backslash
+ ("\\"); for example '\\HH\\MI\\SS'.
- '"' - string between a quotation marks is skipen and not is parsed.
- If you want write '"' to output you must use \\", example '\\"YYYY Month\\"'.
+ A double quote ('"') between quotation marks is skipped and is not parsed.
+ If you want to write a double quote to the output you must precede
+ it with a double backslash ('\\"'), for
+ example '\\"YYYY Month\\"'.
- text - the PostgreSQL's to_char() support text without '"', but string
- between a quotation marks is fastly and you have guarantee, that a text
- not will interpreted as a keyword (format-picture), exapmle '"Hello Year: "YYYY'.
+ to_char supports text without a leading
+ double quote ('"'), but any string
+ between quotation marks is processed quickly and is
+ guaranteed not to be interpreted as a template
+ keyword (example: '"Hello Year: "YYYY').
|
area(object)
float8
- area of circle, ...
+ area of item
area(box '((0,0),(1,1))')
|
box(box,box)
box
- boxes to intersection box
+ intersection box
box(box '((0,0),(1,1))',box '((0.5,0.5),(2,2))')
|
center(object)
point
- center of circle, ...
+ center of item
center(box '((0,0),(1,2))')
|
|
length(object)
float8
- length of line segment, ...
+ length of item
length(path '((-1,0),(1,0))')
- |
- length(path)
- float8
- length of path
- length(path '((0,0),(1,1),(2,0))')
-
|
pclose(path)
path
|
box(circle)
box
- convert circle to box
+ circle to box
box('((0,0),2.0)'::circle)
|
box(point,point)
box
- convert points to box
+ points to box
box('(0,0)'::point,'(1,1)'::point)
|
box(polygon)
box
- convert polygon to box
+ polygon to box
box('((0,0),(1,1),(2,0))'::polygon)
|
circle(box)
circle
- convert to circle
+ to circle
circle('((0,0),(1,1))'::box)
|
circle(point,float8)
circle
- convert to circle
+ point to circle
circle('(0,0)'::point,2.0)
|
lseg(box)
lseg
- convert diagonal to lseg
+ box diagonal to lseg
lseg('((-1,0),(1,0))'::box)
|
lseg(point,point)
lseg
- convert to lseg
+ points to lseg
lseg('(-1,0)'::point,'(1,0)'::point)
|
path(polygon)
path
- convert to path
+ polygon to path
path('((0,0),(1,1),(2,0))'::polygon)
|
point(circle)
point
- convert to point (center)
+ center
point('((0,0),2.0)'::circle)
|
point(lseg,lseg)
point
- convert to point (intersection)
+ intersection
point('((-1,0),(1,0))'::lseg, '((-2,-2),(2,2))'::lseg)
|
point(polygon)
point
- center of polygon
+ center
point('((0,0),(1,1),(2,0))'::polygon)
|
polygon(box)
polygon
- convert to polygon with 12 points
+ 12 point polygon
polygon('((0,0),(1,1))'::box)
|
polygon(circle)
polygon
- convert to 12-point polygon
+ 12-point polygon
polygon('((0,0),2.0)'::circle)
|
polygon(npts,circle)
polygon
- convert to npts polygon
+ npts polygon
polygon(12,'((0,0),2.0)'::circle)
|
polygon(path)
polygon
- convert to polygon
+ path to polygon
polygon('((0,0),(1,1),(2,0))'::path)
|
revertpoly(polygon)
polygon
- convert pre-v6.1 polygon
+ to pre-v6.1
revertpoly('((0,0),(1,1),(2,0))'::polygon)
|
upgradepath(path)
path
- convert pre-v6.1 path
+ from pre-v6.1
upgradepath('(1,3,0,0,1,1,2,0)'::path)
|
upgradepoly(polygon)
polygon
- convert pre-v6.1 polygon
+ from pre-v6.1
upgradepoly('(0,1,2,0,1,0)'::polygon)
- By 1996, it became clear that the name Postgres95
would
+ By 1996, it became clear that the name "Postgres95" would
not stand the test of time. We chose a new name,
PostgreSQL, to reflect the relationship
between the original
Postgres and the more
is an index built over a subset of a table; the subset is defined by
supported partial indices with arbitrary
- predicates. I believe IBM's db2 for as/400 supports partial indices
+ predicates. I believe IBM's
DB2
+ for AS/400 supports partial indices
using single-clause predicates.
Madison | 845
- Here the *
after cities indicates that the query should
+ Here the "*" after cities indicates that the query should
be run over cities and all classes below cities in the
inheritance hierarchy. Many of the commands that we
have already discussed -- SELECT,
UPDATE and DELETE --
- support this *
notation, as do others, like
+ support this "*" notation, as do others, like
ALTER TABLE.
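For example, to find the names and altitudes of all cities, including state
capitals, located at an altitude over 500 feet (a sketch assuming the
cities/capitals hierarchy from the tutorial):

SELECT name, altitude
    FROM cities*
    WHERE altitude > 500;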
work with other
make programs. On GNU/Linux systems
GNU make is the default tool, on other systems you may find that
GNU
make is installed under the name
- <quote>gmake</quote>.
+ <literal>gmake</literal>.
We will use that name from now on to indicate
GNU
make, no matter what name it has on your system.
To test for
GNU make enter
% gmake --version
Run the regression tests against the installed server (using the sequential
test method). If you didn't run the tests before installation, you should
definitely do it now.
- For detailed instructions see <xref linkend="regress">.
+ For detailed instructions see <xref linkend="regress">.
- To start playing around
, set up the paths as explained above
+ To start experimenting with
Postgres,
+ set up the paths as explained above
and start the server. To create a database, type
+
> createdb testdb
+
Then enter
+
> psql testdb
+
to connect to that database. At the prompt you can enter SQL commands
and start experimenting.
-
PostgreSQL Installation Guide
-
- Covering v6.5 for general release
-
- The PostgreSQL Development Team
-
+
PostgreSQL Installation Guide
+
+ Covering v7.0 for general release
+
+ The PostgreSQL Development Team
+
-
- Thomas
- Lockhart
- Caltech/JPL
-
-
+
+ Thomas
+ Lockhart
+ Caltech/JPL
+
+
-->
- (last updated 1999-06-01)
-
+ (last updated 2000-05-01)
+
-
-
PostgreSQL is Copyright © 1996-9
-by the Postgres Global Development Group.
-
-
+
+
PostgreSQL is Copyright © 1996-2000
+ by PostgreSQL Inc.
+
+
-
+
-
Summary
-
- developed originally in the UC Berkeley Computer Science Department,
- pioneered many of the object-relational concepts
- now becoming available in some commercial databases.
-It provides SQL92/SQL3 language support,
- transaction integrity, and type extensibility.
-
PostgreSQL is an open-source descendant
- of this original Berkeley code.
-
-
-
-
-
Introduction
-
-This installation procedure makes some assumptions about the desired configuration
-and runtime environment for your system. This may be adequate for many installations,
-and is almost certainly adequate for a first installation. But you may want to
-do an initial installation up to the point of unpacking the source tree
-and installing documentation, and then print or browse the
-Administrator's Guide.
-
-
-
-&ports;
-&install;
-&config;
-&release;
+
Summary
+
+ developed originally in the UC Berkeley Computer Science Department,
+ pioneered many of the object-relational concepts
+ now becoming available in some commercial databases.
+ It provides SQL92/SQL3 language support,
+ transaction integrity, and type extensibility.
+
PostgreSQL is an open-source descendant
+ of this original Berkeley code.
+
+
+
+
+
Introduction
+
+ This installation procedure makes some assumptions about the desired configuration
+ and runtime environment for your system. This may be adequate for many installations,
+ and is almost certainly adequate for a first installation. But you may want to
+ do an initial installation up to the point of unpacking the source tree
+ and installing documentation, and then print or browse the
+ Administrator's Guide.
+
+
+
+ &ports;
+ &install;
+ &config;
+ &release;
PgDatabase::PutLine
or when the last string has been received from the backend using
PgDatabase::GetLine.
- It must be issued or the backend may get out of sync
with
+ It must be issued or the backend may get "out of sync" with
the frontend. Upon return from this function, the backend is ready to
receive the next query.
application programmer's interface to
-
PostgreSQL.
libpq is a set
+
Postgres.
libpq is a set
of library routines that allow client programs to pass queries to the
Postgres backend server and to receive the
results of these queries. libpq is also the
- underlying engine for several other
PostgreSQL
+ underlying engine for several other
Postgres
application interfaces, including libpq++ (C++),
libpgtcl (Tcl),
Perl, and
ecpg. So some aspects of libpq's behavior will be
is leaked for each call to PQconndefaults().
- In PostgreSQL versions before 7.0, PQconndefaults() returned a pointer
+ In Postgres versions before 7.0, PQconndefaults() returned a pointer
to a static array, rather than a dynamically allocated array. That
wasn't thread-safe, so the behavior has been changed.
maintain the PGconn abstraction. Use the accessor functions below to get
at the contents of PGconn. Avoid directly referencing the fields of the
PGconn structure because they are subject to change in the future.
-(Beginning in
PostgreSQL release 6.4, the
+(Beginning in
Postgres release 6.4, the
definition of struct PGconn is not even provided in libpq-fe.h.
If you have old code that accesses PGconn fields directly, you can keep using it
by including libpq-int.h too, but you are encouraged to fix the code
PQprint
Prints out all the tuples and, optionally, the
attribute names to the specified output stream.
-
+
void PQprint(FILE* fout, /* output stream */
const PGresult *res,
const PQprintOpt *po);
pqbool expanded; /* expand tables */
pqbool pager; /* use pager for output if needed */
char *fieldSep; /* field separator */
- char *tableOpt; /* insert to HTML <table ...> */
- char *caption; /* HTML <caption> */
+ char *tableOpt; /* insert to HTML table ... */
+ char *caption; /* HTML caption */
char **fieldName; /* null terminated array of replacement field names */
} PQprintOpt;
-
+
This function was formerly used by
psql
to print query results, but this is no longer the case and this
function is no longer actively supported.
Fast Path
-
PostgreSQL provides a fast path interface to send
+
Postgres provides a fast path interface to send
function calls to the backend. This is a trapdoor into system internals and
can be a potential security hole. Most users will not need this feature.
Asynchronous Notification
-
PostgreSQL supports asynchronous notification via the
+
Postgres supports asynchronous notification via the
LISTEN and NOTIFY commands. A backend registers its interest in a particular
notification condition with the LISTEN command (and can stop listening
with the UNLISTEN command). All backends listening on a
- In
PostgreSQL 6.4 and later,
+ In
Postgres 6.4 and later,
the be_pid is the notifying backend's,
whereas in earlier versions it was always your own backend's
PID.
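As a minimal illustration at the SQL level (the condition name mytable_update
is arbitrary), one session registers its interest and another signals it:

LISTEN mytable_update;
-- meanwhile, in another session:
NOTIFY mytable_update;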
Functions Associated with the COPY Command
- The COPY command in
PostgreSQL has options to read from
+ The COPY command in
Postgres has options to read from
or write to the network connection used by libpq.
Therefore, functions are necessary to access this network
connection directly so applications may take advantage of this capability.
a whole line will be returned at one time. But if the buffer offered by
the caller is too small to hold a line sent by the backend, then a partial
data line will be returned. This can be detected by testing whether the
-last returned byte is \n
or not.
+last returned byte is "\n" or not.
The returned string is not null-terminated. (If you want to add a
terminating null, be sure to pass a bufsize one smaller than the room
actually available.)
const char *string);
Note the application must explicitly send the two
-characters \.
on a final line to indicate to
+characters "\." on a final line to indicate to
the backend that it has finished sending its data.
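A minimal sketch of sending COPY data with these calls; the open connection
conn and the table mytab are assumptions, not part of the library:

/* Sketch: assumes an open PGconn *conn and a table mytab(i int, t text). */
PGresult *res = PQexec(conn, "COPY mytab FROM STDIN");
PQclear(res);
PQputline(conn, "3\thello world\n");
PQputline(conn, "4\tgoodbye world\n");
PQputline(conn, "\\.\n");      /* backslash-period terminator line */
PQendcopy(conn);               /* resynchronize with the backend */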
sent to the backend using PQputline or when the
last string has been received from the backend
using PGgetline. It must be issued or the backend
- may get out of sync
with the frontend. Upon
+ may get "out of sync" with the frontend. Upon
return from this function, the backend is ready to
receive the next query.
The return value is 0 on successful completion,
-By default,
libpq prints
notice
+By default,
libpq prints
"notice"
messages from the backend on stderr,
as well as a few error messages that it generates by itself.
This behavior can be overridden by supplying a callback function that
PGPORT sets the default port or local Unix domain socket
-file extension for communicating with the
PostgreSQL
+file extension for communicating with the
Postgres
backend.
PGDATABASE sets the default
-
PostgreSQL database name.
PGREALM sets the Kerberos realm to use with
-
PostgreSQL, if it is different from the local realm.
-If
PGREALM is set,
PostgreSQL
+
Postgres, if it is different from the local realm.
+If
PGREALM is set,
Postgres
applications will attempt authentication with servers for this realm and use
separate ticket files to avoid conflicts with local
ticket files. This environment variable is only
PGOPTIONS sets additional runtime options for
libpq is thread-safe as of
-
PostgreSQL 7.0, so long as no two threads
+
Postgres 7.0, so long as no two threads
attempt to manipulate the same PGconn object at the same time. In particular,
you can't issue concurrent queries from different threads through the same
connection object. (If you need to run concurrent queries, start up multiple
The code (version 0.2) is available under GNU GPL from
- http://www.chez.com/emarsden/downloads/pg.el
/*--------------------------------------------------------------
- *
- * testlo.c--
- * test using large objects with libpq
- *
- * Copyright (c) 1994, Regents of the University of California
- *
- *
- * IDENTIFICATION
- * /usr/local/devel/pglite/cvs/src/doc/manual.me,v 1.16 1995/09/01 23:55:00 jolly Exp
- *
- *--------------------------------------------------------------
- */
- #include <stdio.h>
- #include "libpq-fe.h"
- #include "libpq/libpq-fs.h"
-
- #define BUFSIZE 1024
-
- /*
- * importFile * import file "in_filename" into database as large object "lobjOid"
- *
- */
- Oid importFile(PGconn *conn, char *filename)
- {
- Oid lobjId;
- int lobj_fd;
- char buf[BUFSIZE];
- int nbytes, tmp;
- int fd;
-
- /*
- * open the file to be read in
- */
- fd = open(filename, O_RDONLY, 0666);
- if (fd < 0) { /* error */
- fprintf(stderr, "can't open unix file %s\n", filename);
- }
-
- /*
- * create the large object
- */
- lobjId = lo_creat(conn, INV_READ|INV_WRITE);
- if (lobjId == 0) {
- fprintf(stderr, "can't create large object\n");
- }
-
- lobj_fd = lo_open(conn, lobjId, INV_WRITE);
- /*
- * read in from the Unix file and write to the inversion file
- */
- while ((nbytes = read(fd, buf, BUFSIZE)) > 0) {
- tmp = lo_write(conn, lobj_fd, buf, nbytes);
- if (tmp < nbytes) {
- fprintf(stderr, "error while reading large object\n");
- }
- }
-
- (void) close(fd);
- (void) lo_close(conn, lobj_fd);
-
- return lobjId;
- }
-
- void pickout(PGconn *conn, Oid lobjId, int start, int len)
- {
- int lobj_fd;
- char* buf;
- int nbytes;
- int nread;
-
- lobj_fd = lo_open(conn, lobjId, INV_READ);
- if (lobj_fd < 0) {
- fprintf(stderr,"can't open large object %d\n",
- lobjId);
- }
-
- lo_lseek(conn, lobj_fd, start, SEEK_SET);
- buf = malloc(len+1);
-
- nread = 0;
- while (len - nread > 0) {
- nbytes = lo_read(conn, lobj_fd, buf, len - nread);
- buf[nbytes] = ' ';
- fprintf(stderr,">>> %s", buf);
- nread += nbytes;
- }
- fprintf(stderr,"\n");
- lo_close(conn, lobj_fd);
- }
-
- void overwrite(PGconn *conn, Oid lobjId, int start, int len)
- {
- int lobj_fd;
- char* buf;
- int nbytes;
- int nwritten;
- int i;
-
- lobj_fd = lo_open(conn, lobjId, INV_READ);
- if (lobj_fd < 0) {
- fprintf(stderr,"can't open large object %d\n",
- lobjId);
- }
-
- lo_lseek(conn, lobj_fd, start, SEEK_SET);
- buf = malloc(len+1);
-
- for (i=0;i<len;i++)
- buf[i] = 'X';
- buf[i] = ' ';
-
- nwritten = 0;
- while (len - nwritten > 0) {
- nbytes = lo_write(conn, lobj_fd, buf + nwritten, len - nwritten);
- nwritten += nbytes;
- }
- fprintf(stderr,"\n");
- lo_close(conn, lobj_fd);
- }
-
- /*
- * exportFile * export large object "lobjOid" to file "out_filename"
- *
- */
- void exportFile(PGconn *conn, Oid lobjId, char *filename)
- {
- int lobj_fd;
- char buf[BUFSIZE];
- int nbytes, tmp;
- int fd;
-
- /*
- * create an inversion "object"
- */
- lobj_fd = lo_open(conn, lobjId, INV_READ);
- if (lobj_fd < 0) {
- fprintf(stderr,"can't open large object %d\n",
- lobjId);
- }
-
- /*
- * open the file to be written to
- */
- fd = open(filename, O_CREAT|O_WRONLY, 0666);
- if (fd < 0) { /* error */
- fprintf(stderr, "can't open unix file %s\n",
- filename);
- }
-
- /*
- * read in from the Unix file and write to the inversion file
- */
- while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0) {
- tmp = write(fd, buf, nbytes);
- if (tmp < nbytes) {
- fprintf(stderr,"error while writing %s\n",
- filename);
- }
- }
-
- (void) lo_close(conn, lobj_fd);
- (void) close(fd);
-
- return;
- }
-
- void
- exit_nicely(PGconn* conn)
- {
- PQfinish(conn);
- exit(1);
- }
-
- int
- main(int argc, char **argv)
- {
- char *in_filename, *out_filename;
- char *database;
- Oid lobjOid;
- PGconn *conn;
- PGresult *res;
-
- if (argc != 4) {
- fprintf(stderr, "Usage: %s database_name in_filename out_filename\n",
- argv[0]);
- exit(1);
- }
-
- database = argv[1];
- in_filename = argv[2];
- out_filename = argv[3];
-
- /*
- * set up the connection
- */
- conn = PQsetdb(NULL, NULL, NULL, NULL, database);
-
- /* check to see that the backend connection was successfully made */
- if (PQstatus(conn) == CONNECTION_BAD) {
- fprintf(stderr,"Connection to database '%s' failed.\n", database);
- fprintf(stderr,"%s",PQerrorMessage(conn));
- exit_nicely(conn);
- }
-
- res = PQexec(conn, "begin");
- PQclear(res);
-
- printf("importing file %s\n", in_filename);
- /* lobjOid = importFile(conn, in_filename); */
- lobjOid = lo_import(conn, in_filename);
- /*
- printf("as large object %d.\n", lobjOid);
-
- printf("picking out bytes 1000-2000 of the large object\n");
- pickout(conn, lobjOid, 1000, 1000);
-
- printf("overwriting bytes 1000-2000 of the large object with X's\n");
- overwrite(conn, lobjOid, 1000, 1000);
- */
-
- printf("exporting large object to file %s\n", out_filename);
- /* exportFile(conn, lobjOid, out_filename); */
- lo_export(conn, lobjOid,out_filename);
-
- res = PQexec(conn, "end");
- PQclear(res);
- PQfinish(conn);
- exit(0);
- }
+ *
+ * testlo.c--
+ * test using large objects with libpq
+ *
+ * Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ * /usr/local/devel/pglite/cvs/src/doc/manual.me,v 1.16 1995/09/01 23:55:00 jolly Exp
+ *
+ *--------------------------------------------------------------
+ */
+#include <stdio.h>
+#include <stdlib.h>             /* for malloc(), exit() */
+#include <unistd.h>             /* for read(), write(), close() */
+#include <fcntl.h>              /* for open() flags */
+#include "libpq-fe.h"
+#include "libpq/libpq-fs.h"
+
+#define BUFSIZE 1024
+
+/*
+ * importFile * import file "in_filename" into database as large object "lobjOid"
+ *
+ */
+Oid importFile(PGconn *conn, char *filename)
+{
+ Oid lobjId;
+ int lobj_fd;
+ char buf[BUFSIZE];
+ int nbytes, tmp;
+ int fd;
+
+ /*
+ * open the file to be read in
+ */
+ fd = open(filename, O_RDONLY, 0666);
+ if (fd < 0) { /* error */
+ fprintf(stderr, "can't open unix file %s\n", filename);
+ }
+
+ /*
+ * create the large object
+ */
+ lobjId = lo_creat(conn, INV_READ|INV_WRITE);
+ if (lobjId == 0) {
+ fprintf(stderr, "can't create large object\n");
+ }
+
+ lobj_fd = lo_open(conn, lobjId, INV_WRITE);
+ /*
+ * read in from the Unix file and write to the inversion file
+ */
+ while ((nbytes = read(fd, buf, BUFSIZE)) > 0) {
+ tmp = lo_write(conn, lobj_fd, buf, nbytes);
+ if (tmp < nbytes) {
+ fprintf(stderr, "error while reading large object\n");
+ }
+ }
+
+ (void) close(fd);
+ (void) lo_close(conn, lobj_fd);
+
+ return lobjId;
+}
+
+void pickout(PGconn *conn, Oid lobjId, int start, int len)
+{
+ int lobj_fd;
+ char* buf;
+ int nbytes;
+ int nread;
+
+ lobj_fd = lo_open(conn, lobjId, INV_READ);
+ if (lobj_fd < 0) {
+ fprintf(stderr,"can't open large object %d\n",
+ lobjId);
+ }
+
+ lo_lseek(conn, lobj_fd, start, SEEK_SET);
+ buf = malloc(len+1);
+
+ nread = 0;
+ while (len - nread > 0) {
+ nbytes = lo_read(conn, lobj_fd, buf, len - nread);
+ buf[nbytes] = '\0';    /* NUL-terminate for %s printing */
+ fprintf(stderr,">>> %s", buf);
+ nread += nbytes;
+ }
+ fprintf(stderr,"\n");
+ lo_close(conn, lobj_fd);
+}
+
+void overwrite(PGconn *conn, Oid lobjId, int start, int len)
+{
+ int lobj_fd;
+ char* buf;
+ int nbytes;
+ int nwritten;
+ int i;
+
+ lobj_fd = lo_open(conn, lobjId, INV_READ);
+ if (lobj_fd < 0) {
+ fprintf(stderr,"can't open large object %d\n",
+ lobjId);
+ }
+
+ lo_lseek(conn, lobj_fd, start, SEEK_SET);
+ buf = malloc(len+1);
+
+ for (i=0;i<len;i++)
+ buf[i] = 'X';
+ buf[i] = '\0';         /* NUL-terminate the fill string */
+
+ nwritten = 0;
+ while (len - nwritten > 0) {
+ nbytes = lo_write(conn, lobj_fd, buf + nwritten, len - nwritten);
+ nwritten += nbytes;
+ }
+ fprintf(stderr,"\n");
+ lo_close(conn, lobj_fd);
+}
+
+/*
+ * exportFile * export large object "lobjOid" to file "out_filename"
+ *
+ */
+void exportFile(PGconn *conn, Oid lobjId, char *filename)
+{
+ int lobj_fd;
+ char buf[BUFSIZE];
+ int nbytes, tmp;
+ int fd;
+
+ /*
+ * create an inversion "object"
+ */
+ lobj_fd = lo_open(conn, lobjId, INV_READ);
+ if (lobj_fd < 0) {
+ fprintf(stderr,"can't open large object %d\n",
+ lobjId);
+ }
+
+ /*
+ * open the file to be written to
+ */
+ fd = open(filename, O_CREAT|O_WRONLY, 0666);
+ if (fd < 0) { /* error */
+ fprintf(stderr, "can't open unix file %s\n",
+ filename);
+ }
+
+ /*
+ * read in from the Unix file and write to the inversion file
+ */
+ while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0) {
+ tmp = write(fd, buf, nbytes);
+ if (tmp < nbytes) {
+ fprintf(stderr,"error while writing %s\n",
+ filename);
+ }
+ }
+
+ (void) lo_close(conn, lobj_fd);
+ (void) close(fd);
+
+ return;
+}
+
+void
+exit_nicely(PGconn* conn)
+{
+ PQfinish(conn);
+ exit(1);
+}
+
+int
+main(int argc, char **argv)
+{
+ char *in_filename, *out_filename;
+ char *database;
+ Oid lobjOid;
+ PGconn *conn;
+ PGresult *res;
+
+ if (argc != 4) {
+ fprintf(stderr, "Usage: %s database_name in_filename out_filename\n",
+ argv[0]);
+ exit(1);
+ }
+
+ database = argv[1];
+ in_filename = argv[2];
+ out_filename = argv[3];
+
+ /*
+ * set up the connection
+ */
+ conn = PQsetdb(NULL, NULL, NULL, NULL, database);
+
+ /* check to see that the backend connection was successfully made */
+ if (PQstatus(conn) == CONNECTION_BAD) {
+ fprintf(stderr,"Connection to database '%s' failed.\n", database);
+ fprintf(stderr,"%s",PQerrorMessage(conn));
+ exit_nicely(conn);
+ }
+
+ res = PQexec(conn, "begin");
+ PQclear(res);
+
+ printf("importing file %s\n", in_filename);
+/* lobjOid = importFile(conn, in_filename); */
+ lobjOid = lo_import(conn, in_filename);
+/*
+ printf("as large object %d.\n", lobjOid);
+
+ printf("picking out bytes 1000-2000 of the large object\n");
+ pickout(conn, lobjOid, 1000, 1000);
+
+ printf("overwriting bytes 1000-2000 of the large object with X's\n");
+ overwrite(conn, lobjOid, 1000, 1000);
+*/
+
+ printf("exporting large object to file %s\n", out_filename);
+/* exportFile(conn, lobjOid, out_filename); */
+ lo_export(conn, lobjOid,out_filename);
+
+ res = PQexec(conn, "end");
+ PQclear(res);
+ PQfinish(conn);
+ exit(0);
+}
White space (i.e., spaces, tabs and newlines) may be
used freely in
SQL queries.
Single-line comments are denoted by two dashes
- (--
). Everything after the dashes up to the end of the
+ ("--"). Everything after the dashes up to the end of the
line is ignored. Multiple-line comments, and comments within a line,
- are denoted by /* ... */
, a convention borrowed
+ are denoted by "/* ... */", a convention borrowed
- To create a new database named <Quote>mydb</Quote> from the command line, type
+ To create a new database named <literal>mydb</literal> from the command line, type
% createdb mydb
Consult with the site administrator
regarding preconfigured alternate database locations.
Any valid environment variable name may be used to reference an alternate location,
- although using variable names with a prefix of <quote>PGDATA</quote> is recommended
+ although using variable names with a prefix of <envar>PGDATA</envar> is recommended
to avoid confusion
and conflict with other variables.
library. This allows you to submit
SQL commands
from C and get answers and status messages back to
your program. This interface is discussed further
- in section ??.
+ in The PostgreSQL Programmer's Guide.
to you and that you can type
SQL queries into a
workspace maintained by the terminal monitor.
The
psql program responds to escape codes that begin
- with the backslash character, \
For example, you
+ with the backslash character, "\". For example, you
can get help on the syntax of various
PostgreSQL SQL commands by typing:
This tells the server to process the query. If you
- terminate your query with a semicolon, the \g
is not
+ terminate your query with a semicolon, the "\g" is not
necessary.
psql will automatically process semicolon terminated queries.
To read queries from a file, say myFile, instead of
prompt.)
White space (i.e., spaces, tabs and newlines) may be
used freely in
SQL queries. Single-line comments are denoted by
- --
. Everything after the dashes up to the end of the
+ "--". Everything after the dashes up to the end of the
line is ignored. Multiple-line comments, and comments within a line,
- are denoted by /* ... */
+ are denoted by "/* ... */".
Notation
- ...
or /usr/local/pgsql/
+ "..." or /usr/local/pgsql/
at the front of a file name is used to represent the
path to the
Postgres superuser's home directory.
In a command synopsis, brackets
- ([
and ]
) indicate an optional phrase or keyword.
+ ("[" and "]") indicate an optional phrase or keyword.
Anything in braces
- ({
and }
) and containing vertical bars
- (|
)
+ ("{" and "}") and containing vertical bars
+ ("|")
indicates that you must choose one.
- In examples, parentheses ((
and )
) are
+ In examples, parentheses ("(" and ")") are
used to group boolean
- expressions. |
is the boolean operator OR.
+ expressions. "|" is the boolean operator OR.
Examples will show commands executed from various accounts and programs.
Commands executed from the root account will be preceded with
- >
.
+ ">".
Commands executed from the
Postgres
- superuser account will be preceeded with %
, while commands
+ superuser account will be preceded with "%", while commands
executed from an unprivileged user's account will be preceded with
- $
.
-
SQL commands will be preceeded with
=>
+ "$".
+
SQL commands will be preceded with
"=>"
or will have no leading prompt, depending on the context.
can I write it using
ODBC calls
or is that only when another database program
- like MS SQL Server or Access needs to access the data?
+ like MS SQL Server or Access needs to access the data?
+
supported on at least some platforms.
-
ApplixWare v4.4.
1 has been
- demonstrated under Linux with
Postgres v
6.4
+
ApplixWare v4.4.
2 has been
+ demonstrated under Linux with
Postgres v
7.0
driver contained in the
Postgres distribution.
command-line argument for
src/configure:
- % ./configure --with-odbc
- % make
+% ./configure --with-odbc
+% make
+
Rebuild the
Postgres distribution:
- % make install
+% make install
+
+
+ Install the ODBC catalog extensions available in
+ PGROOT/contrib/odbc/odbc.sql:
+
+% psql -e template1 < $PGROOT/contrib/odbc/odbc.sql
+
+
+ where specifying template1 as the target
+ database will ensure that all subsequent new databases will
+ have these same definitions.
+
+
This can be overridden from the
make command-line
as
- % make ODBCINST=filename install
+% make ODBCINST=filename install
sources, type:
- % ./configure
- % make
- % make POSTGRESDIR=PostgresTopDir install
+% ./configure
+% make
+% make POSTGRESDIR=PostgresTopDir install
then you can specify various destinations explicitly:
- % make BINDIR=bindir LIBDIR=libdir HEADERDIR=headerdir ODBCINST=instfile install
+% make BINDIR=bindir LIBDIR=libdir HEADERDIR=headerdir ODBCINST=instfile install
or gzipped tarfile to an empty directory. If using the zip package
unzip it with the command
- % unzip -a packagename
+% unzip -a packagename
The option
If you have the gzipped tar package than simply run
- tar -xzf packagename
+% tar -xzf packagename
Create the tar file:
- % cd interfaces/odbc
- % make standalone
+% cd interfaces/odbc
+% make standalone
Configure the standalone installation:
- % ./configure
+% ./configure
The configuration can be done with options:
- % ./configure --prefix=rootdir
- --with-odbc=inidir
+% ./configure --prefix=rootdir --with-odbc=inidir
where installs the libraries and headers in
Compile and link the source code:
- % make ODBCINST=instdir
+% make ODBCINST=instdir
Install the source code:
- % make POSTGRESDIR=targettree install
+% make POSTGRESDIR=targettree install
Here is how you would specify the various destinations explicitly:
- % make BINDIR=bindir
- LIBDIR>libdi>
- HEADERDIR=headerdir install
+% make BINDIR=bindir LIBDIR=libdir HEADERDIR=headerdir install
For example, typing
- % make POSTGRESDIR=/opt/psqlodbc install
+% make POSTGRESDIR=/opt/psqlodbc install
(after you've used
The command
- % make POSTGRESDIR=/opt/psqlodbc HEADERDIR=/usr/local install
+% make POSTGRESDIR=/opt/psqlodbc HEADERDIR=/usr/local install
should cause the libraries to be installed in /opt/psqlodbc/lib and
[ODBC Data Sources] and must contain the following entries:
- Driver = POSTGRESDIR/lib/libpsqlodbc.so
- Database=DatabaseName
- Servername=localhost
- Port=5432
+Driver = POSTGRESDIR/lib/libpsqlodbc.so
+Database=DatabaseName
+Servername=localhost
+Port=5432
+
ApplixWare
find the line that starts with
- #libFor elfodbc /ax/...
+#libFor elfodbc /ax/...
Change line to read
- libFor elfodbc applixroot/applix/axdata/axshlib/lib
+libFor elfodbc applixroot/applix/axdata/axshlib/lib
which will tell elfodbc to look in this directory
described above. You may also want to add the flag
- TextAsLongVarchar=0
+TextAsLongVarchar=0
to the database-specific portion of .odbc.ini
- You should see Starting elfodbc server
+ You should see "Starting elfodbc server"
in the lower left corner of the
data window. If you get an error dialog box, see the debugging section
below.
the axnet process. For example, if
- ps -aucx | grep ax
+% ps -aucx | grep ax
shows
- cary 10432 0.0 2.6 1740 392 ? S Oct 9 0:00 axnet
- cary 27883 0.9 31.0 12692 4596 ? S 10:24 0:04 axmain
+cary 10432 0.0 2.6 1740 392 ? S Oct 9 0:00 axnet
+cary 27883 0.9 31.0 12692 4596 ? S 10:24 0:04 axmain
Then run
- strace -f -s 1024 -p 10432
+% strace -f -s 1024 -p 10432
For example, after getting
- a Cannot launch gateway on server
,
+ a "Cannot launch gateway on server",
I ran strace on axnet and got
- [pid 27947] open("/usr/lib/libodbc.so", O_RDONLY) = -1 ENOENT
- (No such file or directory)
- [pid 27947] open("/lib/libodbc.so", O_RDONLY) = -1 ENOENT
- (No such file or directory)
- [pid 27947] write(2, "/usr2/applix/axdata/elfodbc:
- can't load library 'libodbc.so'\n", 61) = -1 EIO (I/O error)
+[pid 27947] open("/usr/lib/libodbc.so", O_RDONLY) = -1 ENOENT
+(No such file or directory)
+[pid 27947] open("/lib/libodbc.so", O_RDONLY) = -1 ENOENT
+(No such file or directory)
+[pid 27947] write(2, "/usr2/applix/axdata/elfodbc:
+can't load library 'libodbc.so'\n", 61) = -1 EIO (I/O error)
So what is happening is that applix elfodbc is searching for libodbc.so, but it
can't find it. That is why axnet.cnf needed to be changed.
- Enter the value sqldemo
, then click OK.
+ Enter the value "sqldemo", then click OK.
~/axhome/macros/login.am file:
- macro login
- set_set_system_var@("sql_username@","tgl")
- set_system_var@("sql_passwd@","no$way")
- endmacro
+macro login
+set_system_var@("sql_username@","tgl")
+set_system_var@("sql_passwd@","no$way")
+endmacro
- To view all variations of the ||
string concatenation operator,
+ To view all variations of the "||" string concatenation operator,
try
SELECT oprleft, oprright, oprresult, oprcode
|
-
-Element
-
-
-Precedence
-
-
-Description
-
+Element
+Precedence
+Description
unary minus
+
|
:
-boolean inequality
+inequality
|
right
-negation
+logical negation
|
Natural Exponentiation
: 3.0
+
|
@
Absolute value
- The operators ":" and ";" are deprecated, and will be removed in
- the near future. Use the equivalent functions exp() and ln()
+ Two operators, ":" and ";", are now deprecated and will be removed in
+ the next release. Use the equivalent functions exp() and ln()
instead.
RAISE level ''format'' [, identifier [...]];
- Inside the format, %
is used as a placeholder for the
+ Inside the format, "%" is used as a placeholder for the
subsequent comma-separated identifiers. Possible levels are
DEBUG (silently suppressed in production running databases), NOTICE
(written into the database log and forwarded to the client application)
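For example (a minimal sketch; each % in the format is replaced by the
corresponding identifier, and the doubled quotes follow the PL/pgSQL
function-body quoting convention):

RAISE NOTICE ''Employee % not found.'', emp_id;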
Ports
- This manual describes version
6.5 of
Postgres.
+ This manual describes version
7.0 of
Postgres.
The
Postgres developer community has
compiled and tested
Postgres on a
number of platforms. Check
RS6000
v7.0
2000-04-05
|
BSDI 4.01
x86
v7.0
2000-04-04
|
Compaq Tru64 5.0
Alpha
v7.0
2000-04-11
|
FreeBSD 4.0
x86
v7.0
2000-04-04
|
HPUX
PA-RISC
v7.0
2000-04-12
- Both 9.0x and 10.20
+ Both 9.0x and 10.20.
|
IRIX 6.5.6f
MIPS
v6.5.3
2000-02-18
- MIPSPro 7.3.1.1m; full N32 build
+ MIPSPro 7.3.1.1m N32 build.
|
Linux 2.0.x
Alpha
v7.0
2000-04-05
- With patches
+ With published patches.
|
Linux 2.2.x
armv4l
v7.0
2000-04-17
- Regression test needs work
+ Regression test needs work.
|
Linux 2.2.x
x86
v7.0
2000-03-26
|
Linux 2.0.x
MIPS
v7.0
2000-04-13
- Cobalt Qube
+ Cobalt Qube.
|
Linux 2.2.5
Sparc
v7.0
2000-04-02
|
LinuxPPC R4
PPC603e
v7.0
2000-04-13
|
mklinux
PPC750
v7.0
2000-04-13
|
NetBSD 1.4
arm32
v7.0
2000-04-08
- Welche)
+ Welche
|
NetBSD 1.4U
x86
v7.0
2000-03-26
- Welche)
+ Welche
|
NetBSD
m68k
v7.0
2000-04-10
- Mac 8xx
+ Mac 8xx.
|
NetBSD/sparc
Sparc
v7.0
2000-04-13
|
QNX 4.25
x86
v7.0
2000-04-01
-
|
SCO OpenServer 5
x86
v6.5
1999-05-25
|
SCO UnixWare 7
x86
v7.0
2000-04-18
- See FAQ; needs patch for compiler bug
+ See FAQ.
|
Solaris
x86
v7.0
2000-04-12
|
Solaris 2.5.1-2.7
Sparc
v7.0
2000-04-12
|
SunOS 4.1.4
Sparc
v7.0
2000-04-13
-
- |
- SVR4
- MIPS
- v6.4
- 1998-10-28
- No 64-bit int compiler support
|
Windows/Win32
v7.0
2000-04-02
Client-side libraries or ODBC/JDBC. No server-side.
|
WinNT/Cygwin
x86
v7.0
2000-03-30
- Working with the Cygwin library.
+ Uses Cygwin library.
the server-side port of
Postgres uses
the RedHat/Cygnus
Cygwin library and
- toolset.
+ toolset. For
Windows 9x, no
+ server-side port is available due to OS limitations.
tested for v7.0 or v6.5.x:
-
Obsolete Platforms
+
Unsupported Platforms
+ |
+ BeOS
+ x86
+ v7.0
+ 2000-05-01
+ Client-side coming soon?
+
|
DGUX 5.4R4.11
m88k
v6.3
1998-03-01
v6.4 probably OK. Needs new maintainer.
|
NetBSD-current
NS32532
v6.4
1998-10-27
- small problems in date/time math
+ Date math annoyances.
|
NetBSD 1.3
VAX
v6.3
1998-03-01
+ v7.0 should work.
|
SVR4 4.4
m88k
v6.2.1
1998-03-01
- Confirmed with patching; v6.4.x will need TAS spinlock code
+ v6.4.x will need TAS spinlock code.
+
+ |
+ SVR4
+ MIPS
+ v6.4
+ 1998-10-28
+ No 64-bit int.
|
Ultrix
- MIPS,VAX?
+ MIPS, VAX
v6.x
1998-03-01
No recent reports; obsolete?
x86
v6.x
1998-03-01
-
Client-only support; v1.0.9 worked with patches (
+ Client-only support; v1.0.9 worked with patches
+
]>
-
+
+
- <Title>PostgreSQL</Title>
- <BookInfo>
- <ReleaseInfo>Covering v6.5 for general release</ReleaseInfo>
- <BookBiblio>
- <AuthorGroup>
- <CorpAuthor>The PostgreSQL Development Team</CorpAuthor>
- </AuthorGroup>
+ <title>PostgreSQL</title>
+ <bookinfo>
+ <releaseinfo>Covering v7.0 for general release</releaseinfo>
+ <bookbiblio>
+ <authorgroup>
+ <corpauthor>The PostgreSQL Development Team</corpauthor>
+ </authorgroup>
- <Editor>
- <FirstName>Thomas</FirstName>
- <SurName>Lockhart</SurName>
- <Affiliation>
- <OrgName>Caltech/JPL</OrgName>
- </Affiliation>
- </Editor>
+ <editor>
+ <firstname>Thomas</firstname>
+ <surname>Lockhart</surname>
+ <affiliation>
+ <orgname>Caltech/JPL</orgname>
+ </affiliation>
+ </editor>
-->
- <Date>(last updated 1999-06-01)</Date>
- </BookBiblio>
+ <date>(last updated 2000-05-01)</date>
+ </bookbiblio>
- <LegalNotice>
- <Para>
- <ProductName>PostgreSQL</ProductName> is Copyright © 1996-9
- by the Postgres Global Development Group.
- </Para>
- </LegalNotice>
+ <legalnotice>
+ <para>
+ <productname>PostgreSQL</productname> is Copyright © 1996-2000
+ by PostgreSQL Inc.
+ </para>
+ </legalnotice>
- </BookInfo>
+ </bookinfo>
- <Title>Summary</Title>
+ <title>Summary</title>
- <Para>
- <ProductName>Postgres</ProductName>,
+ <para>
+ <productname>Postgres</productname>,
developed originally in the UC Berkeley Computer Science Department,
pioneered many of the object-relational concepts
now becoming available in some commercial databases.
It provides SQL92/SQL3 language support,
transaction integrity, and type extensibility.
- <ProductName>PostgreSQL</ProductName> is an
+ <productname>PostgreSQL</productname> is an
open-source descendant of this original Berkeley code.
- </Para>
- </Preface>
+ </para>
+ </preface>
- <Title>User's Guide</Title>
- <PartIntro>
- <Para>
+ <title>User's Guide</title>
+ <partintro>
+ <para>
Information for users.
- </Para>
- </PartIntro>
+ </para>
+ </partintro>
&intro;
&syntax;
&plan;
&populate;
&commands;
- </Part>
+ </part>
- <Title>Administrator's Guide</Title>
- <PartIntro>
- <Para>
+ <title>Administrator's Guide</title>
+ <partintro>
+ <para>
Installation and maintenance information.
- </Para>
- </PartIntro>
+ </para>
+ </partintro>
&protocol;
&signals;
&compiler;
&bki;
&page;
- </Part>
+ </part>
- <Title>Tutorial</Title>
- <PartIntro>
- <Para>
+ <title>Tutorial</title>
+ <partintro>
+ <para>
Introduction for new users.
- </Para>
- </PartIntro>
+ </para>
+ </partintro>
&sql;
&arch;
&start;
&query;
&advanced;
- </Part>
+ </part>
- <Title>Appendices</Title>
- <PartIntro>
- <Para>
+ <title>Appendices</title>
+ <partintro>
+ <para>
Additional related information.
- </Para>
- </PartIntro>
+ </para>
+ </partintro>
&datetime;
&cvs;
&contacts;
-->
&biblio;
- </Part>
+ </part>
-</Book>
+</book>
+
PostgreSQL Programmer's Guide
- Covering v6.5 for general release
+ Covering v7.0 for general release
The PostgreSQL Development Team
-->
- (last updated 1999-06-19)
+ (last updated 2000-05-01)
-
PostgreSQL is Copyright © 1996-
9
- by the Postgres Global Development Group.
+
PostgreSQL is Copyright © 1996-
2000
+ by PostgreSQL Inc.
&func-ref;
-->
- &trigger;
- &spi;
- &lobj;
- &libpq;
- &libpqpp;
- &libpgtcl;
- &libpgeasy;
- &ecpg;
- &odbc;
- &jdbc;
- &lisp;
+ &trigger;
+ &spi;
+ &lobj;
+ &libpq;
+ &libpqpp;
+ &libpgtcl;
+ &libpgeasy;
+ &ecpg;
+ &odbc;
+ &jdbc;
+ &lisp;
- &sources;
- &arch-dev;
- &options;
- &geqo;
- &protocol;
- &signals;
- &compiler;
- &bki;
- &page;
+ &sources;
+ &arch-dev;
+ &options;
+ &geqo;
+
+ &protocol;
+ &signals;
+ &compiler;
+ &bki;
+ &page;
- ERROR: Unable to create database directory 'xxx'.
+ ERROR: Unable to create database directory 'path'.
ERROR: Could not initialize database directory.
An alternate location can be specified in order to,
for example, store the database on a different disk.
- The path must have been prepared with the
- <xref linkend="APP-INITLOCATION" endterm="APP-INITLOCATION-title">
+ The path must have been prepared with the
+ <xref linkend="APP-INITLOCATION" endterm="APP-INITLOCATION-title">
command.
access methods).
+ Use
+ to remove an index.
+
+
1998-09-09
- Use
- to remove an index.
-
-
Usage
REFERENCES Constraint
-[ CONSTRAINT name ] REFERENCES
-reftable [ ( refcolumn ) ]
-[ MATCH matchtype ]
-[ ON DELETE action ]
-[ ON UPDATE action ]
-[ [ NOT ] DEFERRABLE ]
-[ INITIALLY checktime ]
+[ CONSTRAINT name ] REFERENCES reftable [ ( refcolumn ) ]
+ [ MATCH matchtype ]
+ [ ON DELETE action ]
+ [ ON UPDATE action ]
+ [ [ NOT ] DEFERRABLE ]
+ [ INITIALLY checktime ]
The REFERENCES constraint specifies a rule that a column
REFERENCES Constraint
-[ CONSTRAINT name ]
-FOREIGN KEY ( column [, ...] ) REFERENCES
-reftable [ ( refcolumn [, ...] ) ]
-[ MATCH matchtype ]
-[ ON DELETE action ]
-[ ON UPDATE action ]
-[ [ NOT ] DEFERRABLE ]
-[ INITIALLY checktime ]
+[ CONSTRAINT name ] FOREIGN KEY ( column [, ...] )
+ REFERENCES reftable [ ( refcolumn [, ...] ) ]
+ [ MATCH matchtype ]
+ [ ON DELETE action ]
+ [ ON UPDATE action ]
+ [ [ NOT ] DEFERRABLE ]
+ [ INITIALLY checktime ]
The REFERENCES constraint specifies a rule that a column value is
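As a small sketch (both tables are hypothetical), a column-level REFERENCES
constraint with a referential action:

CREATE TABLE distributors (
    did     DECIMAL(3) PRIMARY KEY,
    name    VARCHAR(40)
);
CREATE TABLE films (
    code    CHARACTER(5) PRIMARY KEY,
    title   CHARACTER VARYING(40),
    did     DECIMAL(3) REFERENCES distributors (did) ON DELETE CASCADE
);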
Table Constraint definition:
-[ CONSTRAINT name ] UNIQUE ( column [, ...] )
+[ CONSTRAINT name ] UNIQUE ( column [, ...] )
[ { INITIALLY DEFERRED | INITIALLY IMMEDIATE } ]
[ [ NOT ] DEFERRABLE ]
Column Constraint definition:
-[ CONSTRAINT name ] UNIQUE
+[ CONSTRAINT name ] UNIQUE
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
included for symmetry with the NOT NULL clause. Since it is the
default for any column, its presence is simply noise.
-[ CONSTRAINT name ] NULL
+[ CONSTRAINT name ] NULL
SQL92 specifies some additional capabilities for NOT NULL:
-[ CONSTRAINT name ] NOT NULL
+[ CONSTRAINT name ] NOT NULL
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
or a domain.
- DEFAULT niladic USER function |
- niladic datetime function |
- NULL
+DEFAULT niladic_user_function | niladic_datetime_function | NULL
-->
as an alternate method for defining a constraint:
-CREATE ASSERTION name CHECK ( condition )
+CREATE ASSERTION name CHECK ( condition )
Domain constraint:
-[ CONSTRAINT name ] CHECK constraint
+[ CONSTRAINT name ] CHECK constraint
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
Table constraint definition:
-[ CONSTRAINT name ] { PRIMARY KEY ( column, ... ) | FOREIGN KEY constraint | UNIQUE constraint | CHECK constraint }
+[ CONSTRAINT name ] { PRIMARY KEY ( column, ... ) | FOREIGN KEY constraint | UNIQUE constraint | CHECK constraint }
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
Column constraint definition:
-[ CONSTRAINT name ] { NOT NULL | PRIMARY KEY | FOREIGN KEY constraint | UNIQUE | CHECK constraint }
+[ CONSTRAINT name ] { NOT NULL | PRIMARY KEY | FOREIGN KEY constraint | UNIQUE | CHECK constraint }
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
INITIALLY IMMEDIATE
- Check constraint only at the end of the transaction. This
- is the default
+ Check constraint only at the end of the transaction. This
+ is the default
INITIALLY DEFERRED
- Check constraint after each statement.
+ Check constraint after each statement.
table constraint definition:
-[ CONSTRAINT name ] CHECK ( VALUE condition )
+[ CONSTRAINT name ] CHECK ( VALUE condition )
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
column constraint definition:
-[ CONSTRAINT name ] CHECK ( VALUE condition )
+[ CONSTRAINT name ] CHECK ( VALUE condition )
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
domain constraint definition:
- [ CONSTRAINT name ]
+ [ CONSTRAINT name]
CHECK ( VALUE condition )
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
Table Constraint definition:
-[ CONSTRAINT name ] PRIMARY KEY ( column [, ...] )
+[ CONSTRAINT name ] PRIMARY KEY ( column [, ...] )
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
Column Constraint definition:
-[ CONSTRAINT name ] PRIMARY KEY
+[ CONSTRAINT name ] PRIMARY KEY
[ {INITIALLY DEFERRED | INITIALLY IMMEDIATE} ]
[ [ NOT ] DEFERRABLE ]
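 A minimal sketch of both forms (names invented):

CREATE TABLE distributors (
    did     DECIMAL(3) PRIMARY KEY            -- column constraint form
);

CREATE TABLE films (
    code    CHAR(5),
    title   VARCHAR(40),
    CONSTRAINT films_pkey PRIMARY KEY (code)  -- table constraint form
);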
initdb [ --pgdata|-D dbdir ]
- [ --sysid|-i sysid ]
- [ --pwprompt|-W ]
- [ --encoding|-E encoding ]
- [ --pglib|-L libdir ]
- [ --noclean | -n ] [ --debug | -d ] [ --template | -t ]
+ [ --sysid|-i sysid ]
+ [ --pwprompt|-W ]
+ [ --encoding|-E encoding ]
+ [ --pglib|-L libdir ]
+ [ --noclean | -n ] [ --debug | -d ] [ --template | -t ]
pg_ctl [-w] [-D datadir][-p path] [-o "options"] start
pg_ctl [-w] [-D datadir] [-m [s[mart]|f[ast]|i[mmediate]]] stop
-pg_ctl [-w] [-D datadir] [-m [s[mart]|f[ast]|i[mmediate]] [-o "options"] restart
+pg_ctl [-w] [-D datadir] [-m [s[mart]|f[ast]|i[mmediate]]]
+       [-o "options"] restart
pg_ctl [-D datadir] status
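 For instance, using the runtime path assumed elsewhere in this manual, a typical start and stop might look like:

pg_ctl -w -D /usr/local/pgsql/data start
pg_ctl -D /usr/local/pgsql/data -m fast stop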
postmaster [ -B nBuffers ] [ -D DataDir ] [ -N maxBackends ] [ -S ]
- [ -d DebugLevel ] [ -i ] [ -l ]
- [ -o BackendOptions ] [ -p port ] [ -n | -s ]
+ [ -d DebugLevel ] [ -i ] [ -l ]
+ [ -o BackendOptions ] [ -p port ] [ -n | -s ]
1999-12-04
-vacuumdb [ connection options ] [ --analyze | -z ] [ --alldb | -a ] [ --verbose | -v ]
- [ --table 'table [ ( column [,...] ) ]' ] [ [-d] dbname ]
+vacuumdb [ options ] [ --analyze | -z ]
+ [ --alldb | -a ] [ --verbose | -v ]
+ [ --table 'table [ ( column [,...] ) ]' ] [ [-d] dbname ]
- [-d, --dbname] dbname
+ -d dbname
+ --dbname dbname
Specifies the name of the database to be cleaned or analyzed.
- -z, --analyze
+ -z
+ --analyze
Calculate statistics on the database for use by the optimizer.
- -a, --alldb
+ -a
+ --alldb
Vacuum all databases.
- -v, --verbose
+ -v
+ --verbose
Print detailed information during processing.
- -t, --table table [ (column [,...]) ]
+ -t table [ (column [,...]) ]
+ --table table [ (column [,...]) ]
Clean or analyze table only.
- -h, --host host
+ -h host
+ --host host
	Specifies the hostname of the machine on which the postmaster is running.
- -p, --port port
+ -p port
+ --port port
	Specifies the Internet TCP/IP port or local Unix domain socket file extension on which the postmaster is listening.
- -U, --username username
+ -U username
+ --username username
Username to connect as.
- -W, --password
+ -W
+ --password
Force password prompt.
- -e, --echo
+ -e
+ --echo
Echo the commands that
vacuumdb generates
- -q, --quiet
+ -q
+ --quiet
Do not display a response.
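 Putting the options together, a typical invocation might be (database and table names invented):

vacuumdb --analyze --verbose --table 'foo (bar)' mydb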
]>
-
-
-
-
PostgreSQL Reference Manual
-
- Covering v6.5 for general release
-
- Jose
- Soares Da Silva
-
- Oliver
- Elphick
-
-
+
+
+
+
+
PostgreSQL Reference Manual
+
+ Covering v6.5 for general release
+
+ Jose
+ Soares Da Silva
+
+ Oliver
+ Elphick
+
+
-    <editor>
-     <firstname>Oliver</firstname>
-     <surname>Elphick</surname>
-    </editor>
+    <editor>
+     <firstname>Oliver</firstname>
+     <surname>Elphick</surname>
+    </editor>
-    <date>(last updated 1999-06-01)</date>
-   </bookbiblio>
+    <date>(last updated 2000-05-01)</date>
+   </bookbiblio>
-  <legalnotice>
-   by the Postgres Global Development Group.
-   </para>
-  </legalnotice>
+  <legalnotice>
+   <para>
 PostgreSQL is © 1998-2000
+    by PostgreSQL Inc.
+   </para>
+  </legalnotice>
-  </bookinfo>
+  </bookinfo>
-  <title>Summary</title>
-  <para>
-   <productname>Postgres</productname>,
-   developed originally in the UC Berkeley Computer Science Department,
-   pioneered many of the object-relational concepts
-   now becoming available in some commercial databases.
-   It provides SQL92/SQL3 language support,
-   transaction integrity, and type extensibility.
 <productname>PostgreSQL</productname> is a public-domain, open source descendant
-   of this original Berkeley code.
-  </para>
-  </preface>
+  <title>Summary</title>
+  <para>
+   <productname>Postgres</productname>,
+   developed originally in the UC Berkeley Computer Science Department,
+   pioneered many of the object-relational concepts
+   now becoming available in some commercial databases.
+   It provides SQL92/SQL3 language support,
+   transaction integrity, and type extensibility.
 <productname>PostgreSQL</productname> is a public-domain, open source descendant
+   of this original Berkeley code.
+  </para>
+  </preface>
-&commands;
+ &commands;
-&biblio;
+ &biblio;
-
-
+
+
+
-
-
Regression Test
-
-Regression test instructions and analysis.
-
-
-
- The PostgreSQL regression tests are a comprehensive set of tests for the
- SQL implementation embedded in PostgreSQL. They test standard SQL
- operations as well as the extended capabilities of PostgreSQL.
-
-
- There are two different ways in which the regression tests can be run:
- the "sequential" method and the "parallel" method. The sequential method
- runs each test script in turn, whereas the parallel method starts up
- multiple server processes to run groups of tests in parallel. Parallel
- testing gives confidence that interprocess communication and locking
- are working correctly. Another key difference is that the sequential
- test procedure uses an already-installed postmaster, whereas the
- parallel test procedure tests a system that has been built but not yet
- installed. (The parallel test script actually does an installation into
- a temporary directory and fires up a private postmaster therein.)
-
-
- Some properly installed and fully functional PostgreSQL installations
- can "fail" some of these regression tests due to artifacts of floating point
- representation and time zone support. The tests are currently evaluated
- using a simple
diff comparison against the
- outputs generated on a reference system, so the results are sensitive to
- small system differences.
- When a test is reported as "failed", always examine the differences
- between expected and actual results; you may well find that the differences
- are not significant.
-
-
- The regression tests were originally developed by Jolly Chen and Andrew Yu,
- and were extensively revised/repackaged by Marc Fournier and Thomas Lockhart.
- From
PostgreSQL v6.1 onward
- the regression tests are current for every official release.
-
-
-
-
Regression Environment
-
-The regression testing notes below assume the following (except where noted):
-
-
-Commands are Unix-compatible. See note below.
-
-
-
-Defaults are used except where noted.
-
-
-
-User postgres is the
Postgres superuser.
-
-
-
-The source path is /usr/src/pgsql (other paths are possible).
-
-
-
-The runtime path is /usr/local/pgsql (other paths are possible).
-
-
-
-
-
- Normally, the regression tests should be run as the postgres user since
- the 'src/test/regress' directory and sub-directories are owned by the
- postgres user. If you run the regression test as another user the
- 'src/test/regress' directory tree must be writeable by that user.
-
-
- It was formerly necessary to run the postmaster with system time zone
- set to PST, but this is no longer required. You can run the regression
- tests under your normal postmaster configuration. The test script will
- set the PGTZ environment variable to ensure that timezone-dependent tests
- produce the expected results. However, your system must provide
- library support for the PST8PDT time zone, or the timezone-dependent
- tests will fail.
- To verify that your machine does have this support, type
- the following:
- setenv TZ PST8PDT
- date
-
-
-
- The "date" command above should have returned the current system time
- in the PST8PDT time zone. If the PST8PDT database is not available, then
- your system may have returned the time in GMT. If the PST8PDT time zone
- is not available, you can set the time zone rules explicitly:
- setenv PGTZ PST8PDT7,M04.01.0,M10.05.03
-
-
-
-
-
-
Directory Layout
-
-
- This should become a table in the previous section.
-
-
-
-
- input/ .... .source files that are converted using 'make all' into
- some of the .sql files in the 'sql' subdirectory
+
+
Regression Test
+
+ Regression test instructions and analysis.
+
+
+
+ The PostgreSQL regression tests are a comprehensive set of tests for the
+ SQL implementation embedded in PostgreSQL. They test standard SQL
+ operations as well as the extended capabilities of PostgreSQL.
+
+
+ There are two different ways in which the regression tests can be run:
+ the "sequential" method and the "parallel" method. The sequential method
+ runs each test script in turn, whereas the parallel method starts up
+ multiple server processes to run groups of tests in parallel. Parallel
+ testing gives confidence that interprocess communication and locking
+ are working correctly. Another key difference is that the sequential
+ test procedure uses an already-installed postmaster, whereas the
+ parallel test procedure tests a system that has been built but not yet
+ installed. (The parallel test script actually does an installation into
+ a temporary directory and fires up a private postmaster therein.)
+
+
+ Some properly installed and fully functional PostgreSQL installations
+ can "fail" some of these regression tests due to artifacts of floating point
+ representation and time zone support. The tests are currently evaluated
+ using a simple
diff comparison against the
+ outputs generated on a reference system, so the results are sensitive to
+ small system differences.
+ When a test is reported as "failed", always examine the differences
+ between expected and actual results; you may well find that the differences
+ are not significant.
+
+
+ The regression tests were originally developed by Jolly Chen and Andrew Yu,
+ and were extensively revised/repackaged by Marc Fournier and Thomas Lockhart.
+ From
PostgreSQL v6.1 onward
+ the regression tests are current for every official release.
+
+
+
+
Regression Environment
+
+ The regression testing notes below assume the following (except where noted):
+
+
+ Commands are Unix-compatible. See note below.
+
+
+
+ Defaults are used except where noted.
+
+
+
+ User postgres is the
Postgres superuser.
+
+
+
+ The source path is /usr/src/pgsql (other paths are possible).
+
+
+
+ The runtime path is /usr/local/pgsql (other paths are possible).
+
+
+
+
+
+ Normally, the regression tests should be run as the postgres user since
+ the 'src/test/regress' directory and sub-directories are owned by the
+ postgres user. If you run the regression test as another user the
+ 'src/test/regress' directory tree must be writeable by that user.
+
+
+ It was formerly necessary to run the postmaster with system time zone
+ set to PST, but this is no longer required. You can run the regression
+ tests under your normal postmaster configuration. The test script will
+ set the PGTZ environment variable to ensure that timezone-dependent tests
+ produce the expected results. However, your system must provide
+ library support for the PST8PDT time zone, or the timezone-dependent
+ tests will fail.
+ To verify that your machine does have this support, type
+ the following:
+
+setenv TZ PST8PDT
+date
+
+
+
+ The "date" command above should have returned the current system time
+ in the PST8PDT time zone. If the PST8PDT database is not available, then
+ your system may have returned the time in GMT. If the PST8PDT time zone
+ is not available, you can set the time zone rules explicitly:
+setenv PGTZ PST8PDT7,M04.01.0,M10.05.03
+
+
+
+ The directory layout for the regression test area is:
+
+
+
Directory Layout
- output/ ... .source files that are converted using 'make all' into
- .out files in the 'expected' subdirectory
- sql/ ...... .sql files used to perform the regression tests
+
+
+ |
+ Directory
+ Description
+
+
+
+
+ |
+ input
+
+ Source files that are converted using
+ make all into
+ some of the .sql files in the
+ sql subdirectory.
+
+
- expected/ . .out files that represent what we *expect* the results to
- look like
+ |
+ output
+
+ Source files that are converted using
+ make all into
+ .out files in the
+ expected subdirectory.
+
+
- results/ .. .out files that contain what the results *actually* look
- like. Also used as temporary storage for table copy testing.
+ |
+ sql
+
+ .sql files used to perform the
+ regression tests.
+
+
- tmp_check/ temporary installation created by parallel testing script.
-
-
-
+ |
+ expected
+
+ .out files that represent what we
+ expect the results to
+ look like.
+
+
+
+ |
+ results
+
+ .out files that contain what the results
+ actually look
+ like. Also used as temporary storage for table copy testing.
+
+
+
+ |
+ tmp_check
+
+ Temporary installation created by parallel testing script.
+
+
+
+
+
+
+
- <Sect1>
- <Title>Regression Test Procedure</Title>
+ <sect1>
+ <title>Regression Test Procedure</title>
- <Para>
+ <para>
Commands were tested on RedHat Linux version 4.2 using the bash shell.
Except where noted, they will probably work on most systems. Commands
- like ps and tar vary wildly on what options you should use on each
- platform. Use common sense before typing in these commands.
-
+  like ps and tar vary
+  wildly in the options you should use on each
+  platform. Use common sense before typing in these commands.
+
- <Procedure>
-  <Title>Postgres Regression Test</Title>
+ <procedure>
+  <title>Postgres Regression Test</title>
- <Step Performance="required">
- <Para>
+ <step performance="required">
+ <para>
Prepare the files needed for the regression test with:
- <ProgramListing>
+ <programlisting>
cd /usr/src/pgsql/src/test/regress
gmake clean
gmake all
-  </ProgramListing>
+  </programlisting>
You can skip "gmake clean" if this is the first time you
are running the tests.
- <Para>
-   This step compiles a <Acronym>C</Acronym>
+  <para>
+   This step compiles a <acronym>C</acronym>
program with PostgreSQL extension functions into a shared library.
Localized SQL scripts and output-comparison files are also created
for the tests that need them. The localization replaces macros in
the source files with absolute pathnames and user names.
-  </Para>
+  </para>
- <Step Performance="optional">
- <Para>
+ <step performance="optional">
+ <para>
If you intend to use the "sequential" test procedure, which tests
an already-installed postmaster, be sure that the postmaster
is running. If it isn't already running,
start the postmaster in an available window by typing
- <ProgramListing>
+ <programlisting>
postmaster
-  </ProgramListing>
+  </programlisting>
or start the postmaster daemon running in the background by typing
- <ProgramListing>
+ <programlisting>
cd
nohup postmaster > regress.log 2>&1 &
-  </ProgramListing>
+  </programlisting>
The latter is probably preferable, since the regression test log
will be quite lengthy (60K or so, in
-  <ProductName>Postgres</ProductName> 7.0) and you might want to
+  <productname>Postgres</productname> 7.0) and you might want to
review it for clues if things go wrong.
-   <Note>
-    <Para>
-     Do not run <FileName>postmaster</FileName> from the root account.
-    </Para>
-   </Note>
-  </Para>
- </Step>
+   <note>
+    <para>
+     Do not run <filename>postmaster</filename> from the root account.
+    </para>
+   </note>
+  </para>
+ </step>
- <Step Performance="required">
- <Para>
+ <step performance="required">
+ <para>
Run the regression tests. For a sequential test, type
- <ProgramListing>
+ <programlisting>
cd /usr/src/pgsql/src/test/regress
gmake runtest
-  </ProgramListing>
+  </programlisting>
For a parallel test, type
- <ProgramListing>
+ <programlisting>
cd /usr/src/pgsql/src/test/regress
gmake runcheck
-  </ProgramListing>
+  </programlisting>
The sequential test just runs the test scripts using your
already-running postmaster.
The parallel test will perform a complete installation of
-  <ProductName>Postgres</ProductName> into a temporary directory,
+  <productname>Postgres</productname> into a temporary directory,
start a private postmaster therein, and then run the test scripts.
Finally it will kill the private postmaster (but the temporary
directory isn't removed automatically).
-  </Para>
- </Step>
+  </para>
+ </step>
- <Step Performance="required">
- <Para>
+ <step performance="required">
+ <para>
You should get on the screen (and also written to file ./regress.out)
a series of statements stating which tests passed and which tests
failed. Please note that it can be normal for some of the tests to
"fail" due to platform-specific variations. See the next section
for details on determining whether a "failure" is significant.
-  </Para>
-  <Para>
+  </para>
+  <para>
Some of the tests, notably "numeric", can take a while, especially
on slower platforms. Have patience.
-  </Para>
- </Step>
+  </para>
+ </step>
- <Step Performance="required">
- <Para>
+ <step performance="required">
+ <para>
After running the tests and examining the results, type
- <ProgramListing>
+ <programlisting>
cd /usr/src/pgsql/src/test/regress
gmake clean
-  </ProgramListing>
+  </programlisting>
to recover the temporary disk space used by the tests.
If you ran a sequential test, also type
- <ProgramListing>
+ <programlisting>
dropdb regression
-  </ProgramListing>
-  </Para>
- </Step>
+  </programlisting>
+  </para>
+ </step>
- </Sect1>
+ </sect1>
- <Sect1>
-  <Title>Regression Analysis</Title>
+ <sect1>
+  <title>Regression Analysis</title>
- <Para>
+ <para>
The actual outputs of the regression tests are in files in the
./results directory. The test script
uses
diff to compare each output file
against its stored reference output; any differences are
saved for your inspection in
./regression.diffs. (Or you can run
diff yourself, if you prefer.)
-  </Para>
+  </para>
- <Para>
+ <para>
The files might not compare exactly. The test script will report
any difference as a "failure", but the difference might be due
to small cross-system differences in error message wording,
math library behavior, etc.
"Failures" of this type do not indicate a problem with
-   <ProductName>Postgres</ProductName>.
-  </Para>
+   <productname>Postgres</productname>.
+  </para>
- <Para>
+ <para>
Thus, it is necessary to examine the actual differences for each
"failed" test to determine whether there is really a problem.
The following paragraphs attempt to provide some guidance in
determining whether a difference is significant or not.
-  </Para>
+  </para>
-  <Sect2>
-   <Title>Error message differences</Title>
+  <sect2>
+   <title>Error message differences</title>
- <Para>
+ <para>
Some of the regression tests involve intentional invalid input values.
Error messages can come from either the Postgres code or from the host
platform system routines. In the latter case, the messages may vary
between platforms, but should reflect similar information. These
differences in messages will result in a "failed" regression test which
can be validated by inspection.
-  </Para>
+  </para>
-  </Sect2>
+  </sect2>
-  <Sect2>
-   <Title>Date and time differences</Title>
+  <sect2>
+   <title>Date and time differences</title>
- <Para>
+ <para>
Most of the date and time results are dependent on timezone environment.
The reference files are generated for timezone PST8PDT (Berkeley,
California) and there will be apparent failures if the tests are not
run with that timezone setting. The regression test driver sets
environment variable PGTZ to PST8PDT to ensure proper results.
-  </Para>
+  </para>
- <Para>
+ <para>
Some of the queries in the "timestamp" test will fail if you run
the test on the day of a daylight-savings time changeover, or the
day before or after one. These queries assume that the intervals
between midnight yesterday, midnight today and midnight tomorrow are
exactly twenty-four hours ... which is wrong if daylight-savings time
went into or out of effect meanwhile.
-  </Para>
+  </para>
- <Para>
+ <para>
There appear to be some systems which do not accept the recommended syntax
for explicitly setting the local time zone rules; you may need to use
a different PGTZ setting on such machines.
-  </Para>
+  </para>
- <Para>
+ <para>
Some systems using older timezone libraries fail to apply daylight-savings
corrections to pre-1970 dates, causing pre-1970 PDT times to be displayed
in PST instead. This will result in localized differences in the test
results.
-  </Para>
+  </para>
-  </Sect2>
+  </sect2>
-  <Sect2>
-   <Title>Floating point differences</Title>
+  <sect2>
+   <title>Floating point differences</title>
- <Para>
-   Some of the tests involve computing 64-bit (<Type>float8</Type>) numbers from table
+   Some of the tests involve computing 64-bit (<type>float8</type>) numbers from table
    columns. Differences in results involving mathematical functions of
-   <Type>float8</Type> columns have been observed. The float8
+   <type>float8</type> columns have been observed. The float8
and geometry tests are particularly prone to small differences
across platforms.
Human eyeball comparison is needed to determine the real significance
of these differences which are usually 10 places to the right of
the decimal point.
-  </Para>
+  </para>
- <Para>
+ <para>
Some systems signal errors from pow() and exp() differently from
the mechanism expected by the current Postgres code.
-  </Para>
+  </para>
-  </Sect2>
+  </sect2>
-  <Sect2>
-   <Title>Polygon differences</Title>
+  <sect2>
+   <title>Polygon differences</title>
- <Para>
+ <para>
   Several of the tests involve operations on geographic data about the
   Oakland/Berkeley CA street map. The map data is expressed as polygons
-   whose vertices are represented as pairs of <Type>float8</Type> numbers (decimal
+   whose vertices are represented as pairs of <type>float8</type> numbers (decimal
latitude and longitude). Initially, some tables are created and
loaded with geographic data, then some views are created which join
two tables using the polygon intersection operator (##), then a select
in the 2nd or 3rd place to the right of the decimal point. The SQL
statements where these problems occur are the following:
- <ProgramListing>
+ <programlisting>
QUERY: SELECT * from street;
QUERY: SELECT * from iexit;
-  </ProgramListing>
-  </Para>
+  </programlisting>
+  </para>
-  </Sect2>
+  </sect2>
-  <Sect2>
-   <Title>Random differences</Title>
+  <sect2>
+   <title>Random differences</title>
- <Para>
+ <para>
There is at least one case in the "random" test script that is
intended to produce
random results. This causes random to fail the regression test
once in a while (perhaps once in every five to ten trials).
Typing
- <ProgramListing>
+ <programlisting>
diff results/random.out expected/random.out
-  </ProgramListing>
+  </programlisting>
should produce only one or a few lines of differences. You need
not worry unless the random test always fails in repeated attempts.
(On the other hand, if the random test is never
reported to fail even in many trials of the regress tests, you
probably should worry.)
-  </Para>
+  </para>
-  </Sect2>
+  </sect2>
-  <Sect2>
-   <Title>The expected files</Title>
+  <sect2>
+   <title>The "expected" files</title>
- <Para>
-   The <FileName>./expected/*.out</FileName> files were adapted from the original monolithic
-   <FileName>expected.input</FileName> file provided by Jolly Chen et al. Newer versions of these
+  <para>
+   The <filename>./expected/*.out</filename> files were adapted from the original monolithic
+   <filename>expected.input</filename> file provided by Jolly Chen et al. Newer versions of these
files generated on various development machines have been substituted after
careful (?) inspection. Many of the development machines are running a
Unix OS variant (FreeBSD, Linux, etc) on Ix86 hardware.
-   The original <FileName>expected.input</FileName> file was created on a SPARC Solaris 2.4
-   system using the <FileName>postgres5-1.02a5.tar.gz</FileName> source tree. It was compared
+   The original <filename>expected.input</filename> file was created on a SPARC Solaris 2.4
+   system using the <filename>postgres5-1.02a5.tar.gz</filename> source tree. It was compared
with a file created on an I386 Solaris 2.4 system and the differences
were only in the floating point polygons in the 3rd digit to the right
of the decimal point.
-   The original <FileName>sample.regress.out</FileName> file was from the postgres-1.01 release
+   The original <filename>sample.regress.out</filename> file was from the postgres-1.01 release
    constructed by Jolly Chen. It may
-   have been created on a DEC ALPHA machine as the <FileName>Makefile.global</FileName>
+   have been created on a DEC ALPHA machine as the <filename>Makefile.global</filename>
in the postgres-1.01 release has PORTNAME=alpha.
-  </Para>
+  </para>
-  </Sect2>
+  </sect2>
- </Sect1>
+ </sect1>
- <Sect1>
-  <Title>Platform-specific comparison files</Title>
+ <sect1>
+  <title>Platform-specific comparison files</title>
- <Para>
+ <para>
Since some of the tests inherently produce platform-specific results,
we have provided a way to supply platform-specific result comparison
files. Frequently, the same variation applies to multiple platforms;
so, to eliminate bogus test "failures" for a particular platform,
you must choose or make a variant result file, and then add a line
to the mapping file, which is "resultmap".
-  </Para>
+  </para>
- <Para>
+ <para>
Each line in the mapping file is of the form
- <ProgramListing>
+ <programlisting>
testname/platformnamepattern=comparisonfilename
-  </ProgramListing>
+  </programlisting>
The test name is just the name of the particular regression test module.
The platform name pattern is a pattern in the style of expr(1) (that is,
a regular expression with an implicit ^ anchor at the start). It is matched
against the platform name as printed by config.guess. The comparison
file name is the name of the substitute result comparison file.
-  </Para>
+  </para>
- <Para>
+ <para>
For example: the int2 regress test includes a deliberate entry of a value
that is too large to fit in int2. The specific error message that is
produced is platform-dependent; our reference platform emits
- <ProgramListing>
+ <programlisting>
ERROR: pg_atoi: error reading "100000": Numerical result out of range
-  </ProgramListing>
+  </programlisting>
but a fair number of other Unix platforms emit
- <ProgramListing>
+ <programlisting>
ERROR: pg_atoi: error reading "100000": Result too large
-  </ProgramListing>
+  </programlisting>
Therefore, we provide a variant comparison file, int2-too-large.out,
that includes this spelling of the error message. To silence the
bogus "failure" message on HPPA platforms, resultmap includes
- <ProgramListing>
+ <programlisting>
int2/hppa=int2-too-large
-  </ProgramListing>
+  </programlisting>
which will trigger on any machine for which config.guess's output
begins with 'hppa'. Other lines in resultmap select the variant
comparison file for other platforms where it's appropriate.
-  </Para>
+  </para>
- </Sect1>
+ </sect1>
-
+
+
+
-->
- This release shows the continued growth of PostgreSQL. There are more
- changes in 7.0 than in any previous release. Don't be concerned this is
- a dot-zero release. We do our best to put out only solid releases, and
+ This release contains improvements in many areas, demonstrating
+ the continued growth of
PostgreSQL.
+ There are more improvements and fixes in 7.0 than in any previous
+ release. The developers have confidence that this is the best
+ release yet; we do our best to put out only solid releases, and
this one is no exception.
Continuing on work started a year ago, the optimizer has been
- overhauled, allowing improved query execution and better performance
+ improved, allowing better query plan selection and faster performance
with less memory usage.
-
+
A dump/restore using
pg_dump
is required for those wishing to migrate data from any
previous release of
Postgres.
- For those upgrading from 6.5.*, you can use
+ For those upgrading from 6.5.*, you may instead use
pg_upgrade to upgrade to this
- release.
+ release; however, a full dump/reload installation is always the
+ most robust method for upgrades.
+
+ Interface and compatibility issues to consider for the new
+ release include:
+
+
+
+ The date/time types datetime and
+      timespan have been superseded by the
+      SQL92-defined types timestamp and
+      interval. Although there has been some effort to
+      ease the transition by accepting
+      the deprecated type names and translating them to the new type
+ names, this mechanism may not be completely transparent to
+ your existing application.
+
+
+
+
+
+
+ The optimizer has been substantially improved in the area of
+ query cost estimation. In some cases, this will result in
+ decreased query times as the optimizer makes a better choice
+ for the preferred plan. However, in a small number of cases,
+ usually involving pathological distributions of data, your
+ query times may go up. If you are dealing with large amounts
+ of data, you may want to check your queries to verify
+ performance.
+
+
+
+
+ interfaces have been upgraded and extended.
+
+
+
+
+ The string function CHAR_LENGTH is now a
+ native function. Previous versions translated this into a call
+ to LENGTH, which could result in
+ ambiguity with other types implementing
+ LENGTH such as the geometric types.
+
+
+
+
+
Bug Fixes
---------
-Prevent function calls with more than maximum number of arguments (Tom)
+Prevent function calls exceeding maximum number of arguments (Tom)
Improve CASE construct (Tom)
Fix SELECT coalesce(f1,0) FROM int4_tbl GROUP BY f1 (Tom)
Fix SELECT sentence.words[0] FROM sentence GROUP BY sentence.words[0] (Tom)
Fix for subselects in INSERT ... SELECT (Tom)
Prevent INSERT ... SELECT ... ORDER BY (Tom)
Fixes for relations greater than 2GB, including vacuum
-Improve communication of system table changes to other running backends (Tom)
-Improve communication of user table modifications to other running backends (Tom)
+Improve propagating system table changes to other backends (Tom)
+Improve propagating user table changes to other backends (Tom)
Fix handling of temp tables in complex situations (Bruce, Tom)
-Allow table locking when tables opened, improving concurrent reliability (Tom)
+Allow table locking at table open, improving concurrent reliability (Tom)
Properly quote sequence names in pg_dump (Ross J. Reedstrom)
Prevent DROP DATABASE while others accessing
Prevent any rows from being returned by GROUP BY if no rows processed (Tom)
Fix SELECT COUNT(1) FROM table WHERE ...' if no rows matching WHERE (Tom)
-Fix pg_upgrade so it works for MVCC(Tom)
+Fix pg_upgrade so it works for MVCC (Tom)
Fix for SELECT ... WHERE x IN (SELECT ... HAVING SUM(x) > 1) (Tom)
Fix for "f1 datetime DEFAULT 'now'" (Tom)
Fix problems with CURRENT_DATE used in DEFAULT (Tom)
Improve recovery after failed disk writes, disk full (Hiroshi)
Fix cases where table is mentioned in FROM but not joined (Tom)
Allow HAVING clause without aggregate functions (Tom)
-Fix for "--" comment and no trailing newline, as seen in Perl
-Improve pg_dump failure error reports (Bruce)
+Fix for "--" comment and no trailing newline, as seen in perl interface
+Improve pg_dump failure error reports (Bruce)
Allow sorts and hashes to exceed 2GB file sizes (Tom)
Fix for pg_dump dumping of inherited rules (Tom)
Fix for NULL handling comparisons (Tom)
Add TRUNCATE command to quickly truncate relation (Mike Mascari)
Fix to give super user and createdb user proper update catalog rights (Peter E)
Allow ecpg bool variables to have NULL values (Christof)
-Issue ecpg error if NULL value is returned to variable with no NULL
-indicator (Christof)
+Issue ecpg error if NULL value for variable with no NULL indicator (Christof)
Allow ^C to cancel COPY command (Massimo)
Add SET FSYNC and SHOW PG_OPTIONS commands (Massimo)
Function name overloading for dynamically-loaded C functions (Frankpitt)
Allow WHERE restriction on ctid (physical heap location) (Hiroshi)
Move pginterface from contrib to interface directory, rename to pgeasy (Bruce)
Change pgeasy connectdb() parameter ordering (Bruce)
-Add DEC and SESSION_USER as reserved words (Thomas)
Require SELECT DISTINCT target list to have all ORDER BY columns (Tom)
Add Oracle's COMMENT ON command (
Mike Mascari)
libpq's PQsetNoticeProcessor function now returns previous hook (Peter E)
Force permissions on PGDATA directory to be secure, even if it exists (Tom)
Added psql LASTOID variable to return last inserted oid (Peter E)
Allow concurrent vacuum and remove pg_vlock vacuum lock file (Tom)
-Add permissions check so only Postgres superuser or table owner can
-vacuum (Peter E)
+Add permissions check for vacuum (Peter E)
New libpq functions to allow asynchronous connections: PQconnectStart(),
PQconnectPoll(), PQresetStart(), PQresetPoll(), PQsetenvStart(),
PQsetenvPoll(), PQsetenvAbort (Ewan Mellor)
create/alter user extension (Peter E)
New postmaster.pid and postmaster.opts under $PGDATA (Tatsuo)
New scripts for create/drop user/db (Peter E)
-Major psql overhaul(Peter E)
-Add const to libpq interface(Peter E)
+Major psql overhaul (Peter E)
+Add const to libpq interface (Peter E)
New libpq function PQoidValue (Peter E)
Show specific non-aggregate causing problem with GROUP BY (Tom)
Make changes to pg_shadow recreate pg_pwd file (Peter E)
Enable backward sequential scan even after reaching EOF (Hiroshi)
Add btree indexing of boolean values, >= and <= (Don Baccus)
Print current line number when COPY FROM fails (Massimo)
-Recognize special case of POSIX time zone: "GMT+8" and "GMT-8" (Thomas)
-Add DEC as synonym for "DECIMAL" (Thomas)
+Recognize POSIX time zone e.g. "PST+8" and "GMT-8" (Thomas)
+Add DEC as synonym for DECIMAL (Thomas)
Add SESSION_USER as SQL92 keyword, same as CURRENT_USER (Thomas)
-Implement column aliases (aka correlation names) and join syntax (Thomas)
-Allow queries like SELECT a FROM t1 tx (a) (Thomas)
-Allow queries like SELECT * FROM t1 NATURAL JOIN t2 (Thomas)
+Implement SQL92 column aliases (aka correlation names) (Thomas)
+Implement SQL92 join syntax (Thomas)
Make INTERVAL reserved word allowed as a column identifier (Thomas)
Implement REINDEX command (Hiroshi)
Accept ALL in aggregate function SUM(ALL col) (Tom)
Improve type casting of int and float constants (Tom)
Cleanups for int8 inputs, range checking, and type conversion (Tom)
Fix for SELECT timespan('21:11:26'::time) (Tom)
-Fix for netmask('x.x.x.x/0') is 255.255.255.255 instead of 0.0.0.0
- (Oleg Sharoiko)
-Add btree index on NUMERIC(Jan)
+netmask('x.x.x.x/0') is 255.255.255.255 instead of 0.0.0.0 (Oleg Sharoiko)
+Add btree index on NUMERIC (Jan)
Perl fix for large objects containing NUL characters (Douglas Thomson)
ODBC fix for large objects (free)
Fix indexing of cidr data type
Made abstime/reltime use int4 instead of time_t (Peter E)
New lztext data type for compressed text fields
Revise code to handle coercion of int and float constants (Tom)
-New C-routines to implement a BIT and BIT VARYING type in /contrib
- (Adriaan Joubert)
+Start at new code to implement a BIT and BIT VARYING type (Adriaan Joubert)
NUMERIC now accepts scientific notation (Tom)
NUMERIC to int4 rounds (Tom)
Convert float4/8 to NUMERIC properly (Tom)
Allow type conversion with NUMERIC (Thomas)
Make ISO date style (2000-02-16 09:33) the default (Thomas)
-Add NATIONAL CHAR [ VARYING ]
+Add NATIONAL CHAR [ VARYING ] (Thomas)
Allow NUMERIC round and trunc to accept negative scales (Tom)
New TIME WITH TIME ZONE type (Thomas)
Add MAX()/MIN() on time type (Thomas)
Add abs(), mod(), fac() for int8 (Thomas)
-Add round(), sqrt(), cbrt(), pow()
-Rename NUMERIC power() to pow()
-Improved TRANSLATE() function
+Rename functions to round(), sqrt(), cbrt(), pow() for float8 (Thomas)
+Add transcendental math functions (e.g. sin(), acos()) for float8 (Thomas)
+Add exp() and ln() for NUMERIC type
+Rename NUMERIC power() to pow() (Thomas)
+Improved TRANSLATE() function (Edwin Ramirez, Tom)
Allow X=-Y operators (Tom)
-Add exp() and ln() as NUMERIC types
-Allow SELECT float8(COUNT(*)) / (SELECT COUNT(*) FROM int4_tbl) FROM int4_tbl
- GROUP BY f1; (Tom)
-Allow LOCALE to use indexes in regular expression searches(Tom)
+Allow SELECT float8(COUNT(*))/(SELECT COUNT(*) FROM t) FROM t GROUP BY f1; (Tom)
+Allow LOCALE to use indexes in regular expression searches (Tom)
Allow creation of functional indexes to use default types
Performance
Allocate large memory requests in fix-sized chunks for performance (Tom)
Fix vacuum's performance by reducing memory allocation requests (Tom)
Implement constant-expression simplification (Bernard Frankpitt, Tom)
-Allow more than first column to be used to determine start of index scan
- (Hiroshi)
+Use secondary columns to determine start of index scan (Hiroshi)
Prevent quadruple use of disk space when doing internal sorting (Tom)
Faster sorting by calling fewer functions (Tom)
Create system indexes to match all system caches (Bruce, Hiroshi)
-Make system caches use system indexes(Bruce)
-Make all system indexes unique(Bruce)
+Make system caches use system indexes (Bruce)
+Make all system indexes unique (Bruce)
Improve pg_statistics management for VACUUM speed improvement (Tom)
Flush backend cache less frequently (Tom, Hiroshi)
COPY now reuses previous memory allocation, improving performance (Tom)
Optimize queries based on LIMIT, OFFSET, and EXISTS qualifications (Tom)
Reduce optimizer internal housekeeping of join paths for speedup (Tom)
Major subquery speedup (Tom)
-Fewer fsync writes when fsync is not disabled(Tom)
-Improved LIKE optimizer estimates(Tom)
-Prevent fsync in SELECT-only queries(Vadim)
-Make index creation use psort code, because it is now faster(Tom)
+Fewer fsync writes when fsync is not disabled (Tom)
+Improved LIKE optimizer estimates (Tom)
+Prevent fsync in SELECT-only queries (Vadim)
+Make index creation use psort code, because it is now faster (Tom)
Allow creation of sort temp tables > 1 Gig
Source Tree Changes
-------------------
Fix for linux PPC compile
New generic expression-tree-walker subroutine (Tom)
-Change form() to varargform() to prevent portability problems.
+Change form() to varargform() to prevent portability problems
Improved range checking for large integers on Alphas
Clean up #include in /include directory (Bruce)
Add scripts for checking includes (Bruce)
Alpha spinlock fix from
Uncle George
Overhaul of optimizer data structures (Tom)
Fix to cygipc library (Yutaka Tanida)
-Allow pgsql to work on newer Cygwin snapshots(Dan)
+Allow pgsql to work on newer Cygwin snapshots (Dan)
New catalog version number (Tom)
-Add Linux ARM.
+Add Linux ARM
Rename heap_replace to heap_update
Update for QNX (Dr. Andreas Kardos)
New platform-specific regression handling (Tom)
Socket interface for client/server connection. This is the default now
so you may need to start
postmaster with the
-<quote>-i</quote> flag.
+<option>-i</option> flag.
-
-Old-style time travel
has been removed. Performance has been improved.
-
-
+
+ Old-style time travel
+ has been removed. Performance has been improved.
+
+
There was a long time where the
Postgres
rule system was considered broken. The use of rules was not
- recommended and the only part working where view rules. And also
- these view rules made problems because the rule system wasn't able
- to apply them properly on other statements than a SELECT (for
+ recommended and the only part working was view rules. And also
+ these view rules gave problems because the rule system wasn't able
+ to apply them properly on statements other than a SELECT (for
example an UPDATE
that used data from a view didn't work).
- During that time, development moved on and many features where
+ During that time, development moved on and many features were
added to the parser and optimizer. The rule system got more and more
out of sync with their capabilities and it became harder and harder
- to start fixing it. Thus, noone did.
+ to start fixing it. Thus, no one did.
- Another situation are cases on UPDATE where it depends on the
+   Another situation involves cases on UPDATE where it depends on the
change of an attribute if an action should be performed or
not. In
Postgres version 6.4, the
attribute specification for rule events is disabled (it will have
- stay tuned). So for now the only way to
create a rule as in the shoelace_log example is to do it with
a rule qualification. That results in an extra query that is
- performed allways, even if the attribute of interest cannot
+ performed always, even if the attribute of interest cannot
change at all because it does not appear in the targetlist
of the initial query. When this is enabled again, it will be
one more advantage of rules over triggers. Optimization of
decision. The rule system will know it by looking up the
targetlist and will suppress the additional query completely
if the attribute isn't touched. So the rule, qualified or not,
- will only do it's scan's if there ever could be something to do.
+ will only do its scans if there ever could be something to do.
+
+
All
Postgres commands that are executed
directly from a Unix shell are
-  found in the directory <quote>.../bin</quote>. Including this directory in
+  found in the directory <filename>.../bin</filename>. Including this directory in
your search path will make executing the commands easier.
-kill(*,signal)
means sending a signal to all backends.
+"kill(*,signal)" means sending a signal to all backends.
+
+
SQL has become the most popular relational query
language.
- The name
SQL
is an abbreviation for
+ The name
"SQL" is an abbreviation for
Structured Query Language.
In 1974 Donald Chamberlin and others defined the
language SEQUEL (Structured English Query
can be formulated using relational algebra can also be formulated
using the relational calculus and vice versa.
This was first proved by E. F. Codd in
-   1972. This proof is based on an algorithm (Codd's reduction
-   algorithm) by which an arbitrary expression of the relational
+ 1972. This proof is based on an algorithm ("Codd's reduction
+ algorithm") by which an arbitrary expression of the relational
calculus can be reduced to a semantically equivalent expression of
relational algebra. For a more detailed discussion on that refer to
the database directories and started the
process. This person does not have to be the Unix
-   superuser (root)
+ superuser ("root")
or the computer system administrator; a person can install and use
Postgres without any special accounts or
privileges.
Throughout this manual, any examples that begin with
-   the character % are commands that should be typed
+ the character "%" are commands that should be typed
at the Unix shell prompt. Examples that begin with the
-   character * are commands in the Postgres query
+ character "*" are commands in the Postgres query
workspace maintained by the terminal monitor.
The
psql program responds to escape
codes that begin
-   with the backslash character, \. For example, you
+   with the backslash character, "\". For example, you
can get help on the syntax of various
commands by typing:
This tells the server to process the query. If you
-   terminate your query with a semicolon, the \g is not
+ terminate your query with a semicolon, the "\g" is not
necessary.
psql will automatically process
semicolon terminated queries.
White space (i.e., spaces, tabs and newlines) may be
used freely in
SQL queries. Single-line
comments are denoted by
-   --. Everything after the dashes up to the end of the
+ "--". Everything after the dashes up to the end of the
line is ignored. Multiple-line comments, and comments within a line,
- are denoted by /* ... */
+ are denoted by "/* ... */".
Any string can be specified as an identifier if surrounded by
double quotes (like this!). Some care is required since
such an identifier will be case sensitive
- and will retain embedded whitespace other special characters.
+ and will retain embedded whitespace and most other special characters.
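 A brief sketch (invented names) of the consequences:

CREATE TABLE "My Table" ("Select" INT4);
SELECT "Select" FROM "My Table";   -- both identifiers must be quoted, with matching case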
LISTEN LOAD LOCK
MOVE
NEW NONE NOTIFY
+OFFSET
RESET
SETOF SHOW
UNLISTEN UNTIL
are allowed to be present as column labels, but not as identifiers:
-CASE COALESCE CROSS CURRENT CURRENT_USER CURRENT_SESSION
-DEC DECIMAL
-ELSE END
-FALSE FOREIGN
+ALL ANY ASC BETWEEN BIT BOTH
+CASE CAST CHAR CHARACTER CHECK COALESCE COLLATE COLUMN
+ CONSTRAINT CROSS CURRENT CURRENT_DATE CURRENT_TIME
+ CURRENT_TIMESTAMP CURRENT_USER
+DEC DECIMAL DEFAULT DESC DISTINCT
+ELSE END EXCEPT EXISTS EXTRACT
+FALSE FLOAT FOR FOREIGN FROM FULL
GLOBAL GROUP
-LOCAL
-NULLIF NUMERIC
-ORDER
-POSITION PRECISION
-SESSION_USER
-TABLE THEN TRANSACTION TRUE
-USER
-WHEN
+HAVING
+IN INNER INTERSECT INTO IS
+JOIN
+LEADING LEFT LIKE LOCAL
+NATURAL NCHAR NOT NULL NULLIF NUMERIC
+ON OR ORDER OUTER OVERLAPS
+POSITION PRECISION PRIMARY PUBLIC
+REFERENCES RIGHT
+SELECT SESSION_USER SOME SUBSTRING
+TABLE THEN TO TRANSACTION TRIM TRUE
+UNION UNIQUE USER
+VARCHAR
+WHEN WHERE
The following are
Postgres
-ADD ALL ALTER AND ANY AS ASC
-BEGIN BETWEEN BOTH BY
-CASCADE CAST CHAR CHARACTER CHECK CLOSE
- COLLATE COLUMN COMMIT CONSTRAINT CREATE
- CURRENT_DATE CURRENT_TIME CURRENT_TIMESTAMP
- CURSOR
+ADD ALTER AND AS
+BEGIN BY
+CASCADE CLOSE COMMIT CREATE CURSOR
DECLARE DEFAULT DELETE DESC DISTINCT DROP
EXECUTE EXISTS EXTRACT
FETCH FLOAT FOR FROM FULL
The following are
SQL92 reserved key words which
are not
Postgres reserved key words, but which
if used as function names are always translated into the function
- length:
+ CHAR_LENGTH:
-CHAR_LENGTH CHARACTER_LENGTH
+CHARACTER_LENGTH
+ The following are not keywords of any kind, but when used in the
+ context of a type name are translated into a native
+
Postgres type, and when used in the
+ context of a function name are translated into a native function:
+
+DATETIME TIMESPAN
+
+
+ (translated to TIMESTAMP and INTERVAL,
+ respectively). This feature is intended to help with
+ transitioning to v7.0, and will be removed in the next full
+ release (likely v7.1).
+
+
The following are either
SQL92
or
SQL3 reserved key words
which are not key words in
Postgres.
These have no proscribed usage in
Postgres
- at the time of writing (v6.5) but may become reserved key words in the
+ at the time of writing (v7.0) but may become reserved key words in the
future:
ALLOCATE ARE ASSERTION AT AUTHORIZATION AVG
-BIT BIT_LENGTH
-CASCADED CATALOG COLLATION CONNECT CONNECTION
- CONTINUE CONVERT CORRESPONDING COUNT
+BIT_LENGTH
+CASCADED CATALOG CHAR_LENGTH CHARACTER_LENGTH COLLATION
+ CONNECT CONNECTION CONTINUE CONVERT CORRESPONDING COUNT
+ CURRENT_SESSION
DATE DEALLOCATE DEC DESCRIBE DESCRIPTOR
DIAGNOSTICS DISCONNECT DOMAIN
ESCAPE EXCEPT EXCEPTION EXEC EXTERNAL
ACCESS AFTER AGGREGATE
BACKWARD BEFORE
-CACHE CREATEDB CREATEUSER CYCLE
+CACHE COMMENT CREATEDB CREATEUSER CYCLE
DATABASE DELIMITERS
EACH ENCODING EXCLUSIVE
-FORWARD FUNCTION
+FORCE FORWARD FUNCTION
HANDLER
INCREMENT INDEX INHERITS INSENSITIVE INSTEAD ISNULL
LANCOMPILER LOCATION
MAXVALUE MINVALUE MODE
-NOCREATEDB NOCREATEUSER NOTHING NOTNULL
+NOCREATEDB NOCREATEUSER NOTHING NOTIFY NOTNULL
OIDS OPERATOR
PASSWORD PROCEDURAL
-RECIPE RENAME RETURNS ROW RULE
+RECIPE REINDEX RENAME RETURNS ROW RULE
SEQUENCE SERIAL SHARE START STATEMENT STDIN STDOUT
-TRUSTED
+TEMP TRUSTED
+UNLISTEN UNTIL
VALID VERSION
-
-
Triggers
-
-
Postgres has various client interfaces
-such as Perl, Tcl, Python and C, as well as two
-Procedural Languages
-(PL). It is also possible
-to call C functions as trigger actions. Note that STATEMENT-level trigger
-events are not supported in the current version. You can currently specify
-BEFORE or AFTER on INSERT, DELETE or UPDATE of a tuple as a trigger event.
-
-
-
-
Trigger Creation
-
- If a trigger event occurs, the trigger manager (called by the Executor)
-initializes the global structure TriggerData *CurrentTriggerData (described
-below) and calls the trigger function to handle the event.
-
-
- The trigger function must be created before the trigger is created as a
-function taking no arguments and returns opaque.
-
-
- The syntax for creating triggers is as follows:
-
- CREATE TRIGGER <trigger name> <BEFORE|AFTER> <INSERT|DELETE|UPDATE>
- ON <relation name> FOR EACH <ROW|STATEMENT>
- EXECUTE PROCEDURE <procedure name> (<function args>);
-
-
-
- The name of the trigger is used if you ever have to delete the trigger.
-It is used as an argument to the DROP TRIGGER command.
-
-
- The next word determines whether the function is called before or after
-the event.
-
-
- The next element of the command determines on what event(s) will trigger
-the function. Multiple events can be specified separated by OR.
-
-
- The relation name determines which table the event applies to.
-
-
- The FOR EACH statement determines whether the trigger is fired for each
-affected row or before (or after) the entire statement has completed.
-
-
- The procedure name is the C function called.
-
-
- The args are passed to the function in the CurrentTriggerData structure.
-The purpose of passing arguments to the function is to allow different
-triggers with similar requirements to call the same function.
-
-
- Also, function may be used for triggering different relations (these
-functions are named as "general trigger functions").
-
-
- As example of using both features above, there could be a general
-function that takes as its arguments two field names and puts the current
-user in one and the current timestamp in the other. This allows triggers to
-be written on INSERT events to automatically track creation of records in a
-transaction table for example. It could also be used as a "last updated"
-function if used in an UPDATE event.
-
-
- Trigger functions return HeapTuple to the calling Executor. This
-is ignored for triggers fired after an INSERT, DELETE or UPDATE operation
-but it allows BEFORE triggers to:
-
- - return NULL to skip the operation for the current tuple (and so the
- tuple will not be inserted/updated/deleted);
- - return a pointer to another tuple (INSERT and UPDATE only) which will
- be inserted (as the new version of the updated tuple if UPDATE) instead
- of original tuple.
-
-
- Note, that there is no initialization performed by the CREATE TRIGGER
-handler. This will be changed in the future. Also, if more than one trigger
-is defined for the same event on the same relation, the order of trigger
-firing is unpredictable. This may be changed in the future.
-
-
- If a trigger function executes SQL-queries (using SPI) then these queries
-may fire triggers again. This is known as cascading triggers. There is no
-explicit limitation on the number of cascade levels.
-
-
- If a trigger is fired by INSERT and inserts a new tuple in the same
-relation then this trigger will be fired again. Currently, there is nothing
-provided for synchronization (etc) of these cases but this may change. At
-the moment, there is function funny_dup17() in the regress tests which uses
-some techniques to stop recursion (cascading) on itself...
-
-
-
-
-
-
Interaction with the Trigger Manager
-
- As mentioned above, when function is called by the trigger manager,
-structure TriggerData *CurrentTriggerData is NOT NULL and initialized. So
-it is better to check CurrentTriggerData against being NULL at the start
-and set it to NULL just after fetching the information to prevent calls to
-a trigger function not from the trigger manager.
-
-
- struct TriggerData is defined in src/include/commands/trigger.h:
-
+
+
Triggers
+
+
Postgres has various client interfaces
+ such as Perl, Tcl, Python and C, as well as three
+ Procedural Languages
+ (PL). It is also possible
+ to call C functions as trigger actions. Note that STATEMENT-level trigger
+ events are not supported in the current version. You can currently specify
+ BEFORE or AFTER on INSERT, DELETE or UPDATE of a tuple as a trigger event.
+
+
+
+
Trigger Creation
+
+ If a trigger event occurs, the trigger manager (called by the Executor)
+ initializes the global structure TriggerData *CurrentTriggerData (described
+ below) and calls the trigger function to handle the event.
+
+
+   The trigger function must be created, before the trigger itself is created, as a
+   function taking no arguments and returning opaque.
+
+
+ The syntax for creating triggers is as follows:
+
+CREATE TRIGGER trigger { BEFORE | AFTER } { INSERT | DELETE | UPDATE [ OR ... ] }
+    ON relation FOR EACH { ROW | STATEMENT }
+    EXECUTE PROCEDURE procedure (args);
+
+
+ where the arguments are:
+
+
+
+
+ trigger
+
+
+ The name of the trigger is
+ used if you ever have to delete the trigger.
+ It is used as an argument to the DROP TRIGGER command.
+
+
+
+
+
+ BEFORE
+ AFTER
+
+ Determines whether the function is called before or after
+ the event.
+
+
+
+
+
+ INSERT
+ DELETE
+ UPDATE
+
+      The next element of the command determines what event(s) will trigger
+ the function. Multiple events can be specified separated by OR.
+
+
+
+
+
+ relation
+
+ The relation name determines which table the event applies to.
+
+
+
+
+
+ ROW
+ STATEMENT
+
+      The FOR EACH clause determines whether the trigger is fired once for each
+      affected row, or only once before (or after) the entire statement has completed.
+
+
+
+
+
+ procedure
+
+ The procedure name is the C function called.
+
+
+
+
+
+ args
+
+ The arguments passed to the function in the CurrentTriggerData structure.
+ The purpose of passing arguments to the function is to allow different
+ triggers with similar requirements to call the same function.
+
+
+     Also, one procedure
+     may be used by triggers on several different relations (such
+     functions are known as "general trigger functions").
+
+
+     As an example of using both features above, there could be a general
+     function that takes as its arguments two field names and puts the current
+     user in one and the current timestamp in the other. This allows triggers to
+     be written on INSERT events to automatically track creation of records in a
+     transaction table for example. It could also be used as a "last updated"
+     function if used in an UPDATE event. A sketch of such a trigger definition
+     appears after this list.
+
+
+
+
+
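 A sketch of such a trigger definition, with invented names (autostamp stands for a C trigger function created beforehand, as described above; ttest is the relation being tracked):

CREATE TRIGGER t_stamp BEFORE INSERT OR UPDATE ON ttest
    FOR EACH ROW
    EXECUTE PROCEDURE autostamp ('modified_by', 'modified_at');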
+
+ Trigger functions return HeapTuple to the calling Executor. This
+ is ignored for triggers fired after an INSERT, DELETE or UPDATE operation
+ but it allows BEFORE triggers to:
+
+
+
+ Return NULL to skip the operation for the current tuple (and so the
+ tuple will not be inserted/updated/deleted).
+
+
+
+
+ Return a pointer to another tuple (INSERT and UPDATE only) which will
+ be inserted (as the new version of the updated tuple if UPDATE) instead
+ of original tuple.
+
+
+
+
+
+ Note that there is no initialization performed by the CREATE TRIGGER
+ handler. This will be changed in the future. Also, if more than one trigger
+ is defined for the same event on the same relation, the order of trigger
+ firing is unpredictable. This may be changed in the future.
+
+
+ If a trigger function executes SQL-queries (using SPI) then these queries
+ may fire triggers again. This is known as cascading triggers. There is no
+ explicit limitation on the number of cascade levels.
+
+
+ If a trigger is fired by INSERT and inserts a new tuple in the same
+ relation then this trigger will be fired again. Currently, there is nothing
+ provided for synchronization (etc) of these cases but this may change. At
+   the moment, there is a function funny_dup17() in the regress tests which uses
+ some techniques to stop recursion (cascading) on itself...
+
+
+
+
+
Interaction with the Trigger Manager
+
+   As mentioned above, when a function is called by the trigger manager, the
+   structure TriggerData *CurrentTriggerData is NOT NULL and initialized. So
+   it is better to check CurrentTriggerData against being NULL at the start,
+   and to set it to NULL just after fetching the information, in order to detect
+   calls to a trigger function that do not come from the trigger manager.
+
+
+ struct TriggerData is defined in src/include/commands/trigger.h:
+
typedef struct TriggerData
{
- TriggerEvent tg_event;
- Relation tg_relation;
- HeapTuple tg_trigtuple;
- HeapTuple tg_newtuple;
- Trigger *tg_trigger;
+ TriggerEvent tg_event;
+ Relation tg_relation;
+ HeapTuple tg_trigtuple;
+ HeapTuple tg_newtuple;
+ Trigger *tg_trigger;
} TriggerData;
-
-
-tg_event
- describes event for which the function is called. You may use the
- following macros to examine tg_event:
-
- TRIGGER_FIRED_BEFORE(event) returns TRUE if trigger fired BEFORE;
- TRIGGER_FIRED_AFTER(event) returns TRUE if trigger fired AFTER;
- TRIGGER_FIRED_FOR_ROW(event) returns TRUE if trigger fired for
- ROW-level event;
- TRIGGER_FIRED_FOR_STATEMENT(event) returns TRUE if trigger fired for
- STATEMENT-level event;
- TRIGGER_FIRED_BY_INSERT(event) returns TRUE if trigger fired by INSERT;
- TRIGGER_FIRED_BY_DELETE(event) returns TRUE if trigger fired by DELETE;
- TRIGGER_FIRED_BY_UPDATE(event) returns TRUE if trigger fired by UPDATE.
-
-tg_relation
- is pointer to structure describing the triggered relation. Look at
- src/include/utils/rel.h for details about this structure. The most
- interest things are tg_relation->rd_att (descriptor of the relation
- tuples) and tg_relation->rd_rel->relname (relation's name. This is not
- char*, but NameData. Use SPI_getrelname(tg_relation) to get char* if
- you need a copy of name).
-
-tg_trigtuple
- is a pointer to the tuple for which the trigger is fired. This is the tuple
- being inserted (if INSERT), deleted (if DELETE) or updated (if UPDATE).
- If INSERT/DELETE then this is what you are to return to Executor if
- you don't want to replace tuple with another one (INSERT) or skip the
- operation.
-
-tg_newtuple
- is a pointer to the new version of tuple if UPDATE and NULL if this is
- for an INSERT or a DELETE. This is what you are to return to Executor if
- UPDATE and you don't want to replace this tuple with another one or skip
- the operation.
-
-tg_trigger
- is pointer to structure Trigger defined in src/include/utils/rel.h:
-
+
+
+ where the members are defined as follows:
+
+
+
+ tg_event
+
+ describes the event for which the function is called. You may use the
+ following macros to examine tg_event:
+
+
+
+ TRIGGER_FIRED_BEFORE(tg_event)
+
+ Returns TRUE if the trigger fired BEFORE.
+
+
+
+
+
+ TRIGGER_FIRED_AFTER(tg_event)
+
+ Returns TRUE if the trigger fired AFTER.
+
+
+
+
+
+ TRIGGER_FIRED_FOR_ROW(tg_event)
+
+ Returns TRUE if the trigger fired for
+ a ROW-level event.
+
+
+
+
+
+ TRIGGER_FIRED_FOR_STATEMENT(tg_event)
+
+ Returns TRUE if the trigger fired for
+ a STATEMENT-level event.
+
+
+
+
+
+ TRIGGER_FIRED_BY_INSERT(tg_event)
+
+ Returns TRUE if the trigger was fired by INSERT.
+
+
+
+
+
+ TRIGGER_FIRED_BY_DELETE(tg_event)
+
+ Returns TRUE if the trigger was fired by DELETE.
+
+
+
+
+
+ TRIGGER_FIRED_BY_UPDATE(tg_event)
+
+ Returns TRUE if the trigger was fired by UPDATE.
+
+
+
+
+
+
+
+
+
+ tg_relation
+
+ is a pointer to a structure describing the triggered relation. Look at
+ src/include/utils/rel.h for details about this structure. The most
+ interesting things are tg_relation->rd_att (the descriptor of the
+ relation tuples) and tg_relation->rd_rel->relname (the relation's
+ name; this is not a char*, but a NameData; use SPI_getrelname(tg_relation)
+ to get a char* if you need a copy of the name).
+
+
+
+
+
+ tg_trigtuple
+
+ is a pointer to the tuple for which the trigger is fired, i.e. the tuple
+ being inserted (if INSERT), deleted (if DELETE) or updated (if UPDATE).
+ If INSERT or DELETE, this is what you should return to the Executor if
+ you don't want to replace the tuple with another one (INSERT) or skip
+ the operation.
+
+
+
+
+
+ tg_newtuple
+
+ is a pointer to the new version of the tuple if UPDATE, and NULL if this
+ is an INSERT or a DELETE. If UPDATE, this is what you should return to
+ the Executor if you don't want to replace this tuple with another one
+ or skip the operation.
+
+
+
+
+
+ tg_trigger
+
+ is a pointer to the structure Trigger, defined in src/include/utils/rel.h:
+
typedef struct Trigger
{
    Oid         tgoid;
    char       *tgname;
    /* ... other members, for internal use only ... */
    int16       tgnargs;
    int16       tgattr[FUNC_MAX_ARGS];
    char      **tgargs;
} Trigger;
-
- tgname is the trigger's name, tgnargs is number of arguments in tgargs,
- tgargs is an array of pointers to the arguments specified in the CREATE
- TRIGGER statement. Other members are for internal use only.
-
-
-
-
-
-
Visibility of Data Changes
-
-
Postgres data changes visibility rule: during a query execution, data
-changes made by the query itself (via SQL-function, SPI-function, triggers)
-are invisible to the query scan. For example, in query
-
- INSERT INTO a SELECT * FROM a
-
-
- tuples inserted are invisible for SELECT' scan. In effect, this
-duplicates the database table within itself (subject to unique index
-rules, of course) without recursing.
-
-
- But keep in mind this notice about visibility in the SPI documentation:
-
- Changes made by query Q are visible by queries which are started after
- query Q, no matter whether they are started inside Q (during the
- execution of Q) or after Q is done.
-
-
-
- This is true for triggers as well so, though a tuple being inserted
-(tg_trigtuple) is not visible to queries in a BEFORE trigger, this tuple
-(just inserted) is visible to queries in an AFTER trigger, and to queries
-in BEFORE/AFTER triggers fired after this!
-
-
-
-
-
Examples
-
- There are more complex examples in in src/test/regress/regress.c and
-in contrib/spi.
-
-
- Here is a very simple example of trigger usage. Function trigf reports
-the number of tuples in the triggered relation ttest and skips the
-operation if the query attempts to insert NULL into x (i.e - it acts as a
-NOT NULL constraint but doesn't abort the transaction).
-
+
+
+ where
+ tgname is the trigger's name, tgnargs is the number of arguments in
+ tgargs, and tgargs is an array of pointers to the arguments specified
+ in the CREATE TRIGGER statement. The other members are for internal use only.
+
+
+
+
+
+
+
+
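+
+ Pulling these pieces together, the body of a trigger function typically
+ begins as in the following sketch (illustrative only, not from the
+ distribution):
+
+#include "executor/spi.h"       /* for SPI_getrelname() */
+#include "commands/trigger.h"   /* TriggerData, CurrentTriggerData */
+
+/* illustrative skeleton */
+HeapTuple
+skel_trig()
+{
+    TriggerData *trigdata = CurrentTriggerData;
+    HeapTuple    rettuple;
+    char        *relname;
+
+    CurrentTriggerData = NULL;
+    if (trigdata == NULL)
+        elog(WARN, "skel_trig: not fired by trigger manager");
+
+    /* for UPDATE return the new version of the tuple,
+       otherwise the tuple being inserted or deleted */
+    if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
+        rettuple = trigdata->tg_newtuple;
+    else
+        rettuple = trigdata->tg_trigtuple;
+
+    /* a palloc'd copy of the relation name */
+    relname = SPI_getrelname(trigdata->tg_relation);
+
+    if (TRIGGER_FIRED_BEFORE(trigdata->tg_event))
+        elog(NOTICE, "skel_trig: BEFORE event on %s, trigger %s, %d args",
+             relname, trigdata->tg_trigger->tgname,
+             trigdata->tg_trigger->tgnargs);
+
+    pfree(relname);
+    return rettuple;
+}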
+
Visibility of Data Changes
+
+
Postgres data changes visibility rule: during a query execution, data
+ changes made by the query itself (via SQL-function, SPI-function, triggers)
+ are invisible to the query scan. For example, in the query
+
+INSERT INTO a SELECT * FROM a;
+
+
+ the tuples inserted are invisible to the SELECT's scan. In effect, this
+ duplicates the database table within itself (subject to unique index
+ rules, of course) without recursing.
+
+
+ But keep in mind this notice about visibility in the SPI documentation:
+
+Changes made by query Q are visible by queries which are started after
+query Q, no matter whether they are started inside Q (during the
+execution of Q) or after Q is done.
+
+
+
+
+ This is true for triggers as well, so although a tuple being inserted
+ (tg_trigtuple) is not visible to queries in a BEFORE trigger, this tuple
+ (just inserted) is visible to queries in an AFTER trigger, and to queries
+ in BEFORE/AFTER triggers fired after this!
+
+
+
+
+
Examples
+
+ There are more complex examples in
+ src/test/regress/regress.c and
+ in contrib/spi.
+
+
+ Here is a very simple example of trigger usage. Function trigf reports
+ the number of tuples in the triggered relation ttest and skips the
+ operation if the query attempts to insert NULL into x (i.e., it acts
+ as a NOT NULL constraint but doesn't abort the transaction).
+
#include "executor/spi.h" /* this is what you need to work with SPI */
#include "commands/trigger.h" /* -"- and triggers */
return (rettuple);
}
-
-
+
+
- Now, compile and
-create table ttest (x int4);
+ Now, compile it, then create the table and the trigger function:
+
+create table ttest (x int4);
create function trigf () returns opaque as
'...path_to_so' language 'c';
+
vac=> create trigger tbefore before insert or update or delete on ttest
for each row execute procedure trigf();
CREATE
x
-
(0 rows)
-
-
-
-
-
+
+
+
+
+
+
+
- <Title>PostgreSQL Tutorial</Title>
- <BookInfo>
- <ReleaseInfo>Covering v6.5 for general release</ReleaseInfo>
- <BookBiblio>
- <AuthorGroup>
- <CorpAuthor>The PostgreSQL Development Team</CorpAuthor>
- </AuthorGroup>
+ <title>PostgreSQL Tutorial</title>
+ <bookinfo>
+ <releaseinfo>Covering v7.0 for general release</releaseinfo>
+ <bookbiblio>
+ <authorgroup>
+ <corpauthor>The PostgreSQL Development Team</corpauthor>
+ </authorgroup>
- <Editor>
- <FirstName>Thomas</FirstName>
- <SurName>Lockhart</SurName>
- <Affiliation>
- <OrgName>Caltech/JPL</OrgName>
- </Affiliation>
- </Editor>
+ <editor>
+ <firstname>Thomas</firstname>
+ <surname>Lockhart</surname>
+ <affiliation>
+ <orgname>Caltech/JPL</orgname>
+ </affiliation>
+ </editor>
-->
- <Date>(last updated 1999-05-19)</Date>
- </BookBiblio>
+ <date>(last updated 2000-05-01)</date>
+ </bookbiblio>
- <LegalNotice>
- <Para>
- <ProductName>PostgreSQL</ProductName> is Copyright © 1996-9
- by the Postgres Global Development Group.
- </Para>
- </LegalNotice>
+ <legalnotice>
+ <para>
+ <productname>PostgreSQL</productname> is Copyright © 1996-2000
+ by PostgreSQL Inc.
+ </para>
+ </legalnotice>
- </BookInfo>
+ </bookinfo>
- <Preface>
- <Title>Summary</Title>
+ <preface>
+ <title>Summary</title>
- <Para>
- <ProductName>Postgres</ProductName>,
+ <para>
+ <productname>Postgres</productname>,
developed originally in the UC Berkeley Computer Science Department,
pioneered many of the object-relational concepts
now becoming available in some commercial databases.
It provides SQL92/SQL3 language support,
transaction integrity, and type extensibility.
- <ProductName>PostgreSQL</ProductName> is an open-source descendant
+ <productname>PostgreSQL</productname> is an open-source descendant
of this original Berkeley code.
- </Para>
- </Preface>
+ </para>
+ </preface>
&intro;
&sql;
-->
-</Book>
+</book>
%allfiles;
]>
-<Book Id="user">
+<book id="user">
- <Title>PostgreSQL User's Guide</Title>
- <BookInfo>
- <ReleaseInfo>Covering v6.5 for general release</ReleaseInfo>
- <BookBiblio>
- <AuthorGroup>
- <CorpAuthor>The PostgreSQL Development Team</CorpAuthor>
- </AuthorGroup>
+ <title>PostgreSQL User's Guide</title>
+ <bookinfo>
+ <releaseinfo>Covering v7.0 for general release</releaseinfo>
+ <bookbiblio>
+ <authorgroup>
+ <corpauthor>The PostgreSQL Development Team</corpauthor>
+ </authorgroup>
- <Editor>
- <FirstName>Thomas</FirstName>
- <SurName>Lockhart</SurName>
- <Affiliation>
- <OrgName>Caltech/JPL</OrgName>
- </Affiliation>
- </Editor>
+ <editor>
+ <firstname>Thomas</firstname>
+ <surname>Lockhart</surname>
+ <affiliation>
+ <orgname>Caltech/JPL</orgname>
+ </affiliation>
+ </editor>
-->
- <Date>(last updated 1999-06-01)</Date>
- </BookBiblio>
+ <date>(last updated 2000-05-01)</date>
+ </bookbiblio>
- <LegalNotice>
- <Para>
- <ProductName>PostgreSQL</ProductName> is Copyright © 1996-9
- by the Postgres Global Development Group.
- </Para>
- </LegalNotice>
+ <legalnotice>
+ <para>
+ <productname>PostgreSQL</productname> is Copyright © 1996-2000
+ by PostgreSQL Inc.
+ </para>
+ </legalnotice>
- </BookInfo>
+ </bookinfo>
- <Title>Summary</Title>
+ <title>Summary</title>
- <Para>
- <ProductName>Postgres</ProductName>,
+ <para>
+ <productname>Postgres</productname>,
developed originally in the UC Berkeley Computer Science Department,
pioneered many of the object-relational concepts
now becoming available in some commercial databases.
It provides SQL92/SQL3 language support,
transaction integrity, and type extensibility.
- <ProductName>PostgreSQL</ProductName> is an open-source descendant
+ <productname>PostgreSQL</productname> is an open-source descendant
of this original Berkeley code.
- </Para>
- </Preface>
+ </para>
+ </preface>
&intro;
&syntax;
-->
-</Book>
+</book>
simply returns a base type, such as int4:
- CREATE FUNCTION one() RETURNS int4
- AS 'SELECT 1 as RESULT' LANGUAGE 'sql';
+CREATE FUNCTION one() RETURNS int4
+ AS 'SELECT 1 as RESULT' LANGUAGE 'sql';
- SELECT one() AS answer;
+SELECT one() AS answer;
- +-------+
- |answer |
- +-------+
- |1 |
- +-------+
+ +-------+
+ |answer |
+ +-------+
+ |1 |
+ +-------+
and $2:
- CREATE FUNCTION add_em(int4, int4) RETURNS int4
- AS 'SELECT $1 + $2;' LANGUAGE 'sql';
+CREATE FUNCTION add_em(int4, int4) RETURNS int4
+ AS 'SELECT $1 + $2;' LANGUAGE 'sql';
- SELECT add_em(1, 2) AS answer;
+SELECT add_em(1, 2) AS answer;
- +-------+
- |answer |
- +-------+
- |3 |
- +-------+
+ +-------+
+ |answer |
+ +-------+
+ |3 |
+ +-------+
salary would be if it were doubled:
- CREATE FUNCTION double_salary(EMP) RETURNS int4
- AS 'SELECT $1.salary * 2 AS salary;' LANGUAGE 'sql';
-
- SELECT name, double_salary(EMP) AS dream
- FROM EMP
- WHERE EMP.cubicle ~= '(2,1)'::point;
-
-
- +-----+-------+
- |name | dream |
- +-----+-------+
- |Sam | 2400 |
- +-----+-------+
+CREATE FUNCTION double_salary(EMP) RETURNS int4
+ AS 'SELECT $1.salary * 2 AS salary;' LANGUAGE 'sql';
+
+SELECT name, double_salary(EMP) AS dream
+ FROM EMP
+ WHERE EMP.cubicle ~= '(2,1)'::point;
+
+
+ +-----+-------+
+ |name | dream |
+ +-----+-------+
+ |Sam | 2400 |
+ +-----+-------+
notation attribute(class) and class.attribute interchangeably:
- --
- -- this is the same as:
- -- SELECT EMP.name AS youngster FROM EMP WHERE EMP.age < 30
- --
- SELECT name(EMP) AS youngster
- FROM EMP
- WHERE age(EMP) < 30;
-
- +----------+
- |youngster |
- +----------+
- |Sam |
- +----------+
+--
+-- this is the same as:
+-- SELECT EMP.name AS youngster FROM EMP WHERE EMP.age < 30
+--
+SELECT name(EMP) AS youngster
+ FROM EMP
+ WHERE age(EMP) < 30;
+
+ +----------+
+ |youngster |
+ +----------+
+ |Sam |
+ +----------+
that returns a single EMP instance:
- CREATE FUNCTION new_emp() RETURNS EMP
- AS 'SELECT \'None\'::text AS name,
- 1000 AS salary,
- 25 AS age,
- \'(2,2)\'::point AS cubicle'
- LANGUAGE 'sql';
+CREATE FUNCTION new_emp() RETURNS EMP
+ AS 'SELECT \'None\'::text AS name,
+ 1000 AS salary,
+ 25 AS age,
+ \'(2,2)\'::point AS cubicle'
+ LANGUAGE 'sql';
entire instance into another function.
- SELECT name(new_emp()) AS nobody;
+SELECT name(new_emp()) AS nobody;
- +-------+
- |nobody |
- +-------+
- |None |
- +-------+
+ +-------+
+ |nobody |
+ +-------+
+ |None |
+ +-------+
with function calls.
- SELECT new_emp().name AS nobody;
- WARN:parser: syntax error at or near "."
+SELECT new_emp().name AS nobody;
+WARN:parser: syntax error at or near "."
specified as the function's returntype.
- CREATE FUNCTION clean_EMP () RETURNS int4
- AS 'DELETE FROM EMP WHERE EMP.salary <= 0;
- SELECT 1 AS ignore_this'
- LANGUAGE 'sql';
-
- SELECT clean_EMP();
-
- +--+
- |x |
- +--+
- |1 |
- +--+
-
+CREATE FUNCTION clean_EMP () RETURNS int4
+ AS 'DELETE FROM EMP WHERE EMP.salary <= 0;
+SELECT 1 AS ignore_this'
+ LANGUAGE 'sql';
+
+SELECT clean_EMP();
+
+ +--+
+ |x |
+ +--+
+ |1 |
+ +--+
Suppose funcs.c looks like:
- #include <string.h>
- #include "postgres.h"
+#include <string.h>
+#include "postgres.h"
- /* By Value */
-
- int
- add_one(int arg)
- {
- return(arg + 1);
- }
-
- /* By Reference, Fixed Length */
+/* By Value */
- Point *
- makepoint(Point *pointx, Point *pointy )
- {
- Point *new_point = (Point *) palloc(sizeof(Point));
-
- new_point->x = pointx->x;
- new_point->y = pointy->y;
-
- return new_point;
- }
-
- /* By Reference, Variable Length */
-
- text *
- copytext(text *t)
- {
- /*
- * VARSIZE is the total size of the struct in bytes.
- */
- text *new_t = (text *) palloc(VARSIZE(t));
- memset(new_t, 0, VARSIZE(t));
- VARSIZE(new_t) = VARSIZE(t);
- /*
- * VARDATA is a pointer to the data region of the struct.
- */
- memcpy((void *) VARDATA(new_t), /* destination */
- (void *) VARDATA(t), /* source */
- VARSIZE(t)-VARHDRSZ); /* how many bytes */
- return(new_t);
- }
-
- text *
- concat_text(text *arg1, text *arg2)
- {
- int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
- text *new_text = (text *) palloc(new_text_size);
-
- memset((void *) new_text, 0, new_text_size);
- VARSIZE(new_text) = new_text_size;
- strncpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1)-VARHDRSZ);
- strncat(VARDATA(new_text), VARDATA(arg2), VARSIZE(arg2)-VARHDRSZ);
- return (new_text);
- }
+int
+add_one(int arg)
+{
+ return(arg + 1);
+}
+
+/* By Reference, Fixed Length */
+
+Point *
+makepoint(Point *pointx, Point *pointy )
+{
+ Point *new_point = (Point *) palloc(sizeof(Point));
+
+ new_point->x = pointx->x;
+ new_point->y = pointy->y;
+
+ return new_point;
+}
+
+/* By Reference, Variable Length */
+
+text *
+copytext(text *t)
+{
+ /*
+ * VARSIZE is the total size of the struct in bytes.
+ */
+ text *new_t = (text *) palloc(VARSIZE(t));
+ memset(new_t, 0, VARSIZE(t));
+ VARSIZE(new_t) = VARSIZE(t);
+ /*
+ * VARDATA is a pointer to the data region of the struct.
+ */
+ memcpy((void *) VARDATA(new_t), /* destination */
+ (void *) VARDATA(t), /* source */
+ VARSIZE(t)-VARHDRSZ); /* how many bytes */
+ return(new_t);
+}
+
+text *
+concat_text(text *arg1, text *arg2)
+{
+ int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
+ text *new_text = (text *) palloc(new_text_size);
+
+ memset((void *) new_text, 0, new_text_size);
+ VARSIZE(new_text) = new_text_size;
+ strncpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1)-VARHDRSZ);
+ strncat(VARDATA(new_text), VARDATA(arg2), VARSIZE(arg2)-VARHDRSZ);
+ return (new_text);
+}
- CREATE FUNCTION add_one(int4) RETURNS int4
- AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
-
- CREATE FUNCTION makepoint(point, point) RETURNS point
- AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
-
- CREATE FUNCTION concat_text(text, text) RETURNS text
- AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
-
- CREATE FUNCTION copytext(text) RETURNS text
- AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
+CREATE FUNCTION add_one(int4) RETURNS int4
+ AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
+
+CREATE FUNCTION makepoint(point, point) RETURNS point
+ AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
+
+CREATE FUNCTION concat_text(text, text) RETURNS text
+ AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
+
+CREATE FUNCTION copytext(text) RETURNS text
+ AS 'PGROOT/tutorial/funcs.so' LANGUAGE 'c';
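+
+ As a quick check, the new functions can then be called from SQL
+ (illustrative calls; note that makepoint takes its x from the first
+ argument and its y from the second, per the C code above):
+
+-- illustrative checks
+SELECT add_one(3);                                 -- returns 4
+SELECT makepoint('(1,2)'::point, '(3,4)'::point);  -- returns (1,4)
+SELECT concat_text('abc', 'def');                  -- returns abcdef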
In the query above, we can define c_overpaid as:
- #include "postgres.h"
- #include "executor/executor.h" /* for GetAttributeByName() */
-
- bool
- c_overpaid(TupleTableSlot *t, /* the current instance of EMP */
- int4 limit)
- {
- bool isnull = false;
- int4 salary;
- salary = (int4) GetAttributeByName(t, "salary", &isnull);
- if (isnull)
- return (false);
- return(salary > limit);
- }
+#include "postgres.h"
+#include "executor/executor.h" /* for GetAttributeByName() */
+
+bool
+c_overpaid(TupleTableSlot *t, /* the current instance of EMP */
+ int4 limit)
+{
+ bool isnull = false;
+ int4 salary;
+ salary = (int4) GetAttributeByName(t, "salary", &isnull);
+ if (isnull)
+ return (false);
+ return(salary > limit);
+}
call would look like:
- char *str;
- ...
- str = (char *) GetAttributeByName(t, "name", &isnull)
+char *str;
+...
+str = (char *) GetAttributeByName(t, "name", &isnull);
know about the c_overpaid function:
- * CREATE FUNCTION c_overpaid(EMP, int4) RETURNS bool
- AS 'PGROOT/tutorial/obj/funcs.so' LANGUAGE 'c';
+CREATE FUNCTION c_overpaid(EMP, int4) RETURNS bool
+ AS 'PGROOT/tutorial/obj/funcs.so' LANGUAGE 'c';
-<chapter id="xtypes">
-<title>Extending SQL: Types</title>
-<para>
- As previously mentioned, there are two kinds of types
- in <productname>Postgres</productname>: base types (defined in a programming language)
- and composite types (instances).
- Examples in this section up to interfacing indices can
- be found in <filename>complex.sql</filename> and <filename>complex.c</filename>. Composite examples
- are in <filename>funcs.sql</filename>.
-</para>
-
-<sect1>
-<title>User-Defined Types</title>
-
-<sect2>
-<title>Functions Needed for a User-Defined Type</title>
+<chapter id="xtypes">
+ <title>Extending SQL: Types</title>
+ <para>
+ As previously mentioned, there are two kinds of types
+ in <productname>Postgres</productname>: base types (defined in a programming language)
+ and composite types (instances).
+ Examples in this section up to interfacing indices can
+ be found in <filename>complex.sql</filename> and <filename>complex.c</filename>. Composite examples
+ are in <filename>funcs.sql</filename>.
+ </para>
+
+ <sect1>
+ <title>User-Defined Types</title>
+
+ <sect2>
+ <title>Functions Needed for a User-Defined Type</title>
A user-defined type must always have input and output
functions. These functions determine how the type
appears in strings (for input by the user and output to
delimited character string.
Suppose we want to define a complex type which represents
complex numbers. Naturally, we choose to represent a
- complex in memory as the following
C structure:
- typedef struct Complex {
- double x;
- double y;
- } Complex;
-
+ complex in memory as the following
C structure:
+
+typedef struct Complex {
+ double x;
+ double y;
+} Complex;
+
+
and a string of the form (x,y) as the external string
representation.
These functions are usually not hard to write, especially
the output function. However, there are a number of points
to remember:
-
-
-
When defining your external (string) representation,
- remember that you must eventually write a
- complete and robust parser for that representation
- as your input function!
- Complex *
- complex_in(char *str)
- {
- double x, y;
- Complex *result;
- if (sscanf(str, " ( %lf , %lf )", &x, &y) != 2) {
- elog(WARN, "complex_in: error in parsing %s", str);
- return NULL;
- }
- result = (Complex *)palloc(sizeof(Complex));
- result->x = x;
- result->y = y;
- return (result);
- }
-
-
- The output function can simply be:
- char *
- complex_out(Complex *complex)
- {
- char *result;
- if (complex == NULL)
- return(NULL);
- result = (char *) palloc(60);
- sprintf(result, "(%g,%g)", complex->x, complex->y);
- return(result);
- }
-
-
-
-
-
You should try to make the input and output
- functions inverses of each other. If you do
- not, you will have severe problems when you need
- to dump your data into a file and then read it
- back in (say, into someone else's database on
- another computer). This is a particularly common
- problem when floating-point numbers are
- involved.
-
-
-
-
- To define the
complex type, we need to create the two
+
+
+
When defining your external (string) representation,
+ remember that you must eventually write a
+ complete and robust parser for that representation
+ as your input function!
+
+Complex *
+complex_in(char *str)
+{
+ double x, y;
+ Complex *result;
+ if (sscanf(str, " ( %lf , %lf )", &x, &y) != 2) {
+ elog(WARN, "complex_in: error in parsing %s", str);
+ return NULL;
+ }
+ result = (Complex *)palloc(sizeof(Complex));
+ result->x = x;
+ result->y = y;
+ return (result);
+}
+
+
+ The output function can simply be:
+
+char *
+complex_out(Complex *complex)
+{
+ char *result;
+ if (complex == NULL)
+ return(NULL);
+ result = (char *) palloc(60);
+ sprintf(result, "(%g,%g)", complex->x, complex->y);
+ return(result);
+}
+
+
+
+
+
+ You should try to make the input and output
+ functions inverses of each other. If you do
+ not, you will have severe problems when you need
+ to dump your data into a file and then read it
+ back in (say, into someone else's database on
+ another computer). This is a particularly common
+ problem when floating-point numbers are
+ involved.
+
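+
+ For instance, the %g format used by complex_out above keeps only six
+ significant digits, so a dumped value would not reload exactly. A
+ dump-safe variant might print more digits (a sketch; seventeen
+ significant digits are enough to reproduce any IEEE double exactly):
+
+/* illustrative dump-safe variant of the output function */
+char *
+complex_out(Complex *complex)
+{
+    char *result;
+
+    if (complex == NULL)
+        return(NULL);
+    result = (char *) palloc(60);
+    /* %.17g round-trips an IEEE double through text exactly */
+    sprintf(result, "(%.17g,%.17g)", complex->x, complex->y);
+    return(result);
+}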
+
+
+
+ To define the
complex type, we need to create the two
user-defined functions complex_in and complex_out
before creating the type:
- CREATE FUNCTION complex_in(opaque)
- RETURNS complex
- AS 'PGROOT/tutorial/obj/complex.so'
- LANGUAGE 'c';
-
- CREATE FUNCTION complex_out(opaque)
- RETURNS opaque
- AS 'PGROOT/tutorial/obj/complex.so'
- LANGUAGE 'c';
-
- CREATE TYPE complex (
- internallength = 16,
- input = complex_in,
- output = complex_out
- );
-
-
-
- As discussed earlier,
Postgres fully supports arrays of
- base types. Additionally,
Postgres supports arrays of
+
+CREATE FUNCTION complex_in(opaque)
+ RETURNS complex
+ AS 'PGROOT/tutorial/obj/complex.so'
+ LANGUAGE 'c';
+
+CREATE FUNCTION complex_out(opaque)
+ RETURNS opaque
+ AS 'PGROOT/tutorial/obj/complex.so'
+ LANGUAGE 'c';
+
+CREATE TYPE complex (
+ internallength = 16,
+ input = complex_in,
+ output = complex_out
+);
+
+
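+ With the type in place, it can be used like any built-in type (an
+ illustrative check, assuming the shared object has been built):
+
+-- illustrative
+CREATE TABLE test_complex (a complex);
+INSERT INTO test_complex VALUES ('(1.0,2.5)');
+SELECT a FROM test_complex;
+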
+
+ As discussed earlier,
Postgres fully supports arrays of
+ base types. Additionally,
Postgres supports arrays of
user-defined types as well. When you define a type,
- <ProductName>Postgresame> automatically provides support for arrays of
+ <productname>Postgresame> automatically provides support for arrays of
that type. For historical reasons, the array type has
the same name as the user-defined type with the
underscore character _ prepended.
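+
+ For example (illustrative; the element syntax is that accepted by the
+ complex_in function above, and elements containing commas must be
+ double-quoted within the array literal):
+
+-- illustrative
+CREATE TABLE test_complex_arr (vals complex[]);
+INSERT INTO test_complex_arr VALUES ('{"(1,2)","(3,4)"}');
+SELECT vals[1] FROM test_complex_arr;
+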
Composite types do not need any function defined on
them, since the system already understands what they
look like inside.
-
-
-
-
Large Objects
+
+
+
+
+
Large Objects
The types discussed to this point are all "small"
objects -- that is, they are smaller than 8KB in size.
-
- 1024 longwords == 8192 bytes. In fact, the type must be considerably
- smaller than 8192 bytes, since the Postgres tuple
- and page overhead must also fit into this 8KB limitation.
- The actual value that fits depends on the machine architecture.
-
-
+
+ 1024 longwords == 8192 bytes. In fact, the type must be considerably
+ smaller than 8192 bytes, since the Postgres tuple
+ and page overhead must also fit into this 8KB limitation.
+ The actual value that fits depends on the machine architecture.
+
+
If you require a larger type for something like a document
retrieval system or for storing bitmaps, you will
- need to use the Postgres large object interface.
-
-
-
-
+ need to use the Postgres large object
+ interface, or will need to recompile the
+ Postgres backend to use internal
+ storage blocks greater than 8 kbytes.
+
+
+
+
+
+
in the chapter on data types.
For two-digit years, the significant transition year is 1970, not 2000;
- e.g. 70-01-01 is interpreted as 1970-01-01,
- whereas 69-01-01 is interpreted as 2069-01-01.
+ e.g. "70-01-01" is interpreted as 1970-01-01,
+ whereas "69-01-01" is interpreted as 2069-01-01.
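+
+ For example (an illustrative session, assuming the ambiguous first
+ field is taken as the year):
+
+-- illustrative
+SELECT date '70-01-01';   -- yields 1970-01-01
+SELECT date '69-01-01';   -- yields 2069-01-01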