as there is in
psql>. To continue a command
across multiple lines, you must type backslash just before each
newline except the last one.
- Also, you won't have any of the conveniences of readline processing
+ Also, you won't have any of the conveniences of command-line editing
(no command history, for example).
- To quit the backend, type EOF (control-D, usually).
+ To quit the backend, type EOF> (Control-D, usually).
Examples
- Set DateStyle to its default value:
+ Set DateStyle> to its default value:
RESET DateStyle;
- Set Geqo to its default value:
+ Set geqo> to its default value:
RESET GEQO;
(i.e., all combined rows that pass its ON condition), plus one copy of each
row in the left-hand table for which there was no right-hand row that
passed the ON condition. This left-hand row is extended to the full
- width of the joined table by inserting NULLs for the right-hand columns.
- Note that only the JOIN's own ON or USING condition is considered while
+ width of the joined table by inserting null values for the right-hand columns.
+ Note that only the JOIN>'s own ON or USING condition is considered while
deciding which rows have matches. Outer ON or WHERE conditions are
applied afterwards.
- Optionally one may add the keyword DESC (descending)
- or ASC (ascending) after each column name in the ORDER BY clause.
- If not specified, ASC is assumed by default. Alternatively, a
- specific ordering operator name may be specified. ASC is equivalent
- to USING < and DESC is equivalent to USING >.
+ Optionally one may add the key word DESC> (descending)
+ or ASC> (ascending) after each column name in the
+ ORDER BY> clause. If not specified, ASC> is
+ assumed by default. Alternatively, a specific ordering operator
+ name may be specified. ASC> is equivalent to
+ USING <> and DESC> is equivalent to
+ USING >>.
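 For illustration, a query along these lines (the table and column
 names are hypothetical) sorts one column descending and another
 ascending; the USING forms are equivalent spellings:

SELECT name, age FROM employees ORDER BY age DESC, name ASC;
SELECT name, age FROM employees ORDER BY age USING >, name USING <;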
The UNION operator computes the collection (set union) of the rows
returned by the queries involved.
- The two SELECTs that represent the direct operands of the UNION must
+ The two SELECT statements that represent the direct operands of the UNION must
produce the same number of columns, and corresponding columns must be
of compatible data types.
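 As a minimal sketch (the table names are hypothetical), each SELECT
 below produces one column of the same type, so the union is valid:

SELECT city FROM stores
UNION
SELECT city FROM suppliers;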
PostgreSQL allows one to omit
the FROM clause from a query. This feature
-was retained from the original PostQuel query language. It has
+was retained from the original PostQUEL query language. It has
a straightforward use to compute the results of simple expressions:
SELECT 2+2;

 ?column?
----------
        4
(1 row)
-Some other DBMSes cannot do this except by introducing a dummy one-row
+Some other SQL databases cannot do this except by introducing a dummy one-row
table to do the select from. A less obvious use is to abbreviate a
normal select from one or more tables:
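 A sketch of that abbreviation, using a hypothetical table foo: a table
 referenced only by qualified column names is added to the FROM clause
 implicitly, so

SELECT foo.x;

 is treated as shorthand for

SELECT foo.x FROM foo;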
- DATESTYLE
+ DATESTYLE>
Choose the date/time representation style. Two separate
- ISO
+ ISO>
Use ISO 8601-style dates and times (YYYY-MM-DD
- SQL
+ SQL>
Use Oracle/Ingres-style dates and times. Note that this
- PostgreSQL
+ PostgreSQL>
Use traditional
PostgreSQL format.
- German
+ German>
Use dd.mm.yyyy for numeric date representations.
- European
+ European>
Use dd/mm/yyyy for numeric date representations.
- NonEuropean
- US
+ NonEuropean>
+ US>
Use mm/dd/yyyy for numeric date representations.
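 For example, the style can be changed and inspected with ordinary SET
 and SHOW commands:

SET DATESTYLE TO 'ISO';
SHOW DATESTYLE;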
- There are several now-deprecated means for setting the datestyle
+ There are several now-deprecated means for setting the date style
in addition to the normal methods of setting it via SET> or
a configuration-file entry:
Setting the client's PGDATESTYLE environment variable.
- If PGDATESTYLE is set in the frontend environment of a client
- based on libpq, libpq will automatically set DATESTYLE to the
- value of PGDATESTYLE during connection start-up. This is
+ If PGDATESTYLE is set in the frontend environment of a client
+ based on libpq>, libpq> will automatically set DATESTYLE> to the
+ value of PGDATESTYLE during connection start-up. This is
equivalent to a manually issued SET DATESTYLE>.
Shows the server-side multibyte encoding. (At present, this
parameter can be shown but not set, because the encoding is
- determined at initdb time.)
+ determined at initdb> time.)
If the PGTZ environment variable is set in the frontend
- environment of a client based on libpq, libpq will automatically
+ environment of a client based on libpq>, libpq> will automatically
SET TIMEZONE to the value of
PGTZ during connection start-up.
-
+
2001-04-21
The session user identifier may be changed only if the initial session
user (the authenticated user) had the
superuser privilege. Otherwise, the command is accepted only if it
- specifies the authenticated username.
+ specifies the authenticated user name.
UNLISTEN
is used to remove an existing NOTIFY registration.
- UNLISTEN cancels any existing registration of the current
+ UNLISTEN cancels any existing registration of the current
PostgreSQL session as a listener on the notify
condition notifyname.
The special condition wildcard * cancels all listener registrations
as a name up to 64 characters long.
- The backend does not complain if you UNLISTEN something you were not
+ The backend does not complain if you unlisten something you were not
listening for.
Each backend will automatically execute UNLISTEN * when
exiting.
- Once UNLISTEN has been executed, further NOTIFY commands will be
+ Once UNLISTEN> has been executed, further NOTIFY> commands will be
ignored:
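 A sketch of the behavior (the exact notification message depends on
 the client interface):

LISTEN virtual;
NOTIFY virtual;
-- an asynchronous notification "virtual" is delivered to this session
UNLISTEN virtual;
NOTIFY virtual;
-- no notification is delivered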
VACUUM reclaims storage occupied by deleted tuples.
In normal
PostgreSQL operation, tuples that
- are DELETEd or obsoleted by UPDATE are not physically removed from
+ are deleted or obsoleted by UPDATE are not physically removed from
their table; they remain present until a VACUUM is
done. Therefore it's necessary to do VACUUM
periodically, especially on frequently-updated tables.
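 For example, assuming a frequently-updated table named accounts:

VACUUM accounts;
VACUUM ANALYZE accounts;  -- also refresh planner statistics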
- -d dbname>
- --dbname dbname>
+ -d dbname>
+ --dbname dbname>
Specifies the name of the database to be cleaned or analyzed.
- -a
- --all
+ -a>
+ --all>
Vacuum all databases.
- -f
- --full
+ -f>
+ --full>
Perform full
vacuuming.
- -v
- --verbose
+ -v>
+ --verbose>
Print detailed information during processing.
- -z
- --analyze
+ -z>
+ --analyze>
Calculate statistics for use by the optimizer.
- -t table [ (column [,...]) ]
- --table table [ (column [,...]) ]
+ -t table> [ (column> [,...]) ]
+ --table table> [ (column> [,...]) ]
Clean or analyze table only.
- -h host>
- --host host>
+ -h host>
+ --host host>
Specifies the host name of the machine on which the
- -p port>
- --port port>
+ -p port>
+ --port port>
Specifies the Internet TCP/IP port or local Unix domain socket file
- -U username>
- --username username>
+ -U username>
+ --username username>
User name to connect as
- -W
- --password
+ -W>
+ --password>
Force password prompt.
- -e
- --echo
+ -e>
+ --echo>
Echo the commands that
vacuumdb generates
- -q
- --quiet
+ -q>
+ --quiet>
Do not display a response.
-
+
Regression Tests
The parallel regression test starts quite a few processes under your
user ID. Presently, the maximum concurrency is twenty parallel test
- scripts, which means sixty processes --- there's a backend, a psql,
- and usually a shell parent process for the psql for each test script.
+ scripts, which means sixty processes --- there's a backend, a psql>,
+ and usually a shell parent process for the psql> for each test script.
So if your system enforces a per-user limit on the number of processes,
make sure this limit is at least seventy-five or so, else you may get
random-seeming failures in the parallel test. If you are not in
problem by providing alternative result files that together are
known to handle a large number of locales. For example, for the
char
test, the expected file
- char.out handles the C and POSIX locales,
+ char.out handles the C> and POSIX> locales,
and the file char_1.out handles many other
locales. The regression test driver will automatically pick the
best file to match against when checking for success and for
-
PL/pgSQL
Now uses portals for SELECT loops, allowing huge result sets (Jan)
CURSOR and REFCURSOR support (Jan)
-
Psql
\d displays indexes in unique, primary groupings (Christopher Kings-Lynne)
Allow trailing semicolons in backslash commands (Greg Sabino Mullane)
-
Libpq
New function PQescapeString() to escape quotes in command strings (Florian Weimer)
New function PQescapeBytea() escapes binary strings for use as SQL string literals
-
ECPG
EXECUTE ... INTO implemented (Christof Petig)
Multiple row descriptor support (e.g. CARDINALITY) (Christof Petig)
The previous C function manager did not
-handle NULLs properly, nor did it support 64-bit
-CPU's (Alpha). The new
+handle null values properly, nor did it support 64-bit
+CPUs (Alpha). The new
function manager does. You can continue using your old custom
functions, but you may want to rewrite them in the future to use the new
function manager call interface.
- SQL92 join syntax is now supported, though only as INNER JOINs
- for this release. JOIN, NATURAL JOIN, JOIN/USING, JOIN/ON are
- available, as are column correlation names.
+ SQL92 join syntax is now supported, though only as
+ INNER JOIN> for this release. JOIN>,
+ NATURAL JOIN>, JOIN>/USING>,
+ and JOIN>/ON> are available, as are
+ column correlation names.
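 For illustration, given hypothetical tables t1 and t2 that share a
 column a, these forms are all accepted:

SELECT * FROM t1 INNER JOIN t2 ON t1.a = t2.a;
SELECT * FROM t1 JOIN t2 USING (a);
SELECT * FROM t1 NATURAL JOIN t2;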
-We have optional multiple-byte character set support from Tatsuo Iishi
+We have optional multiple-byte character set support from Tatsuo Ishii
to complement our existing locale support.
- The "random" results in the random test should cause the "random" test
- to be "failed", since the regression tests are evaluated using a simple
- diff. However, "random" does not seem to produce random results on my
- test machine (Linux/gcc/i686).
+ The random> results in the random test should cause the
+ random> test to be failed>, since the
+ regression tests are evaluated using a simple diff. However,
+ random> does not seem to produce random results on my test
+ machine (Linux/gcc>/i686).
-If you are loading an older binary copy or non-stdout copy, there is no
+If you are loading an older binary copy or non-stdout> copy, there is no
end-of-data character, and hence no conversion necessary.
-
+
The Rule System
DELETE queries don't need a target list because they don't
- produce any result. In fact the planner will add a special CTID
+ produce any result. In fact the planner will add a special CTID>
entry to the empty target list. But this is after the rule
system and will be discussed later. For the rule system the
target list is empty.
expressions from the SET attribute = expression part of the query.
The planner will add missing columns by inserting expressions that
copy the values from the old row into the new one. And it will add
- the special CTID entry just as for DELETE too.
+ the special CTID> entry just as for DELETE too.
To resolve this problem, another entry is added to the target list
- in UPDATE (and also in DELETE) statements: the current tuple ID (CTID).
+ in UPDATE (and also in DELETE) statements: the current tuple ID (CTID>).
This is a system attribute containing the file
block number and position in the block for the row. Knowing the table,
- the CTID can be used to retrieve the original t1 row to be updated.
- After adding the CTID to the target list, the query actually looks like
+ the CTID> can be used to retrieve the original t1 row to be updated.
+ After adding the CTID> to the target list, the query actually looks like
SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a;
- Now another detail of PostgreSQL enters the
- stage. At this moment, table rows aren't overwritten and this is why
- ABORT TRANSACTION is fast. In an UPDATE, the new result row is inserted
- into the table (after stripping CTID) and in the tuple header of the row
- that CTID pointed to the cmax and xmax entries are set to the current
- command counter and current transaction ID. Thus the old row is hidden
- and after the transaction committed the vacuum cleaner can really move
- it out.
+ Now another detail of PostgreSQL enters
+ the stage. At this moment, table rows aren't overwritten and this
+ is why ABORT TRANSACTION is fast. In an UPDATE, the new result row
+ is inserted into the table (after stripping CTID>) and
+ in the tuple header of the row that CTID> pointed to
+ the cmax> and xmax> entries are set to the
+ current command counter and current transaction ID. Thus the old
+ row is hidden and after the transaction committed the vacuum
+ cleaner can really move it out.
env PGOPTIONS='-c geqo=off' psql
- (This works for any libpq-based client application, not just
+ (This works for any libpq>-based client application, not just
psql.) Note that this won't work for
options that are fixed when the server is started, such as the port
number.
Determines whether EXPLAIN VERBOSE> uses the indented
- or non-indented format for displaying detailed querytree dumps.
+ or non-indented format for displaying detailed query-tree dumps.
LOG_PID (boolean)
- Prefixes each server message in the logfile with the process ID of
+ Prefixes each server message in the log file with the process ID of
the backend process. This is useful to sort out which messages
pertain to which connection. The default is off. This parameter
- does not affect messages logged via syslog(), which always contain
+ does not affect messages logged via syslog>, which always contain
the process ID.
This variable specifies the order in which namespaces are searched
- when an object (table, datatype, function, etc) is referenced by a
+ when an object (table, data type, function, etc) is referenced by a
simple name with no schema component. When there are objects of
identical names in different namespaces, the one found first
in the search path is used. An object that is not in any of the
However, filtered forms in
Microsoft
Access generate queries that appear to use
expr> = NULL to test for
- NULLs, so if you use that interface to access the database you
+ null values, so if you use that interface to access the database you
might want to turn this option on. Since expressions of the
form expr> = NULL always
return NULL (using the correct interpretation) they are not
|
IS
- test for TRUE, FALSE, UNKNOWN, NULL
+ IS TRUE>, IS FALSE>, IS UNKNOWN>, IS NULL>
|
ISNULL
- test for NULL
+ test for null
|
NOTNULL
- test for NOT NULL
+ test for not null
|
The first form of aggregate expression invokes the aggregate
across all input rows for which the given expression yields a
- non-NULL value. (Actually, it is up to the aggregate function
- whether to ignore NULLs or not --- but all the standard ones do.)
+ non-null value. (Actually, it is up to the aggregate function
+ whether to ignore null values or not --- but all the standard ones do.)
The second form is the same as the first, since
ALL is the default. The third form invokes the
- aggregate for all distinct non-NULL values of the expression found
+ aggregate for all distinct non-null values of the expression found
in the input rows. The last form invokes the aggregate once for
- each input row regardless of NULL or non-NULL values; since no
+ each input row regardless of null or non-null values; since no
particular input value is specified, it is generally only useful
for the count() aggregate function.
For example, count(*) yields the total number
of input rows; count(f1) yields the number of
- input rows in which f1 is non-NULL;
+ input rows in which f1 is non-null;
count(distinct f1) yields the number of
- distinct non-NULL values of f1.
+ distinct non-null values of f1.
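 In SQL form, assuming a table tab with a column f1:

SELECT count(*) FROM tab;            -- total number of input rows
SELECT count(f1) FROM tab;           -- rows where f1 is non-null
SELECT count(distinct f1) FROM tab;  -- distinct non-null values of f1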
to the type that a value expression must produce (for example, when it is
assigned to a table column); the system will automatically apply a
type cast in such cases. However, automatic casting is only done for
- cast functions that are marked okay to apply implicitly>
+ cast functions that are marked OK to apply implicitly>
in the system catalogs. Other cast functions must be invoked with
explicit casting syntax. This restriction is intended to prevent
surprising conversions from being applied silently.
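 A sketch of the distinction, assuming a table prices with a column
 amount of type numeric: the integer value is converted implicitly in
 the assignment context, while other conversions use explicit syntax:

INSERT INTO prices (amount) VALUES (42);  -- implicit cast from integer
SELECT CAST('41.5' AS numeric);           -- explicit cast syntax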
It is an error to use a query that
returns more than one row or more than one column as a scalar subquery.
(But if, during a particular execution, the subquery returns no rows,
- there is no error; the scalar result is taken to be NULL.)
+ there is no error; the scalar result is taken to be null.)
The subquery can refer to variables from the surrounding query,
which will act as constants during any one evaluation of the subquery.
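 For example (using hypothetical tables states and cities), the
 subquery below is evaluated once for each row of states:

SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name)
    FROM states;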
See also .
Triggers
- PostgreSQL has various server-side function
- interfaces. Server-side functions can be written in SQL, PL/pgSQL,
- Tcl, or C. Trigger functions can be written in any of these
- languages except SQL. Note that statement-level trigger events are not
- supported in the current version. You can currently specify BEFORE or
- AFTER on INSERT, DELETE or UPDATE of a tuple as a trigger event.
+ PostgreSQL has various server-side
+ function interfaces. Server-side functions can be written in SQL,
+ C, or any defined procedural language. Trigger functions can be
+ written in C and most procedural languages, but not in SQL. Note that
+ statement-level trigger events are not supported in the current
+ version. You can currently specify BEFORE or AFTER on INSERT,
+ DELETE or UPDATE of a tuple as a trigger event.
If a trigger event occurs, the trigger manager (called by the Executor)
- sets up a TriggerData information structure (described below) and calls
+ sets up a TriggerData> information structure (described below) and calls
the trigger function to handle the event.
The trigger function must be defined before the trigger itself can be
created. The trigger function must be declared as a
function taking no arguments and returning type trigger>.
- (The trigger function receives its input through a TriggerData
+ (The trigger function receives its input through a TriggerData>
structure, not in the form of ordinary function arguments.)
If the function is written in C, it must use the version 1>
function manager interface.
The syntax for creating triggers is:
CREATE TRIGGER trigger [ BEFORE | AFTER ] [ INSERT | DELETE | UPDATE [ OR ... ] ]
ON relation FOR EACH [ ROW | STATEMENT ]
EXECUTE PROCEDURE procedure
(args);
-
+
where the arguments are:
args
- The arguments passed to the function in the TriggerData structure.
+ The arguments passed to the function in the TriggerData> structure.
This is either empty or a list of one or more simple literal
constants (which will be passed to the function as strings).
triggers with similar requirements to call the same function.
As an example, there could be a generalized trigger
function that takes as its arguments two field names and puts the
- current user in one and the current timestamp in the other.
+ current user in one and the current time stamp in the other.
Properly written, this trigger function would be independent of
the specific table it is triggering on. So the same function
could be used for INSERT events on any table with suitable fields,
- Trigger functions return a HeapTuple to the calling Executor. The return
+ Trigger functions return a HeapTuple> to the calling executor. The return
value is ignored for triggers fired AFTER an operation,
but it allows BEFORE triggers to:
- Return NULL to skip the operation for the current tuple (and so the
- tuple will not be inserted/updated/deleted).
+ Return a NULL> pointer to skip the operation for the
+ current tuple (and so the tuple will not be
+ inserted/updated/deleted).
- If more than one trigger
- is defined for the same event on the same relation, the triggers will
- be fired in alphabetical order by name. In the case of BEFORE triggers,
- the possibly-modified tuple returned by each trigger becomes the input
- to the next trigger. If any BEFORE trigger returns NULL, the operation
- is abandoned and subsequent triggers are not fired.
+ If more than one trigger is defined for the same event on the same
+ relation, the triggers will be fired in alphabetical order by
+ name. In the case of BEFORE triggers, the possibly-modified tuple
+ returned by each trigger becomes the input to the next trigger.
+ If any BEFORE trigger returns NULL>, the operation is
+ abandoned and subsequent triggers are not fired.
The interface described here applies for
PostgreSQL 7.1 and later.
- Earlier versions passed the TriggerData pointer in a global
- variable CurrentTriggerData.
+ Earlier versions passed the TriggerData> pointer in a global
+ variable CurrentTriggerData>.
When a function is called by the trigger manager, it is not passed any
normal parameters, but it is passed a context> pointer pointing to a
- TriggerData structure. C functions can check whether they were called
+ TriggerData> structure. C functions can check whether they were called
from the trigger manager or not by executing the macro
CALLED_AS_TRIGGER(fcinfo), which expands to
- ((fcinfo)->context != NULL && IsA((fcinfo)->context, TriggerData))
-
- If this returns TRUE, then it is safe to cast fcinfo->context to type
+((fcinfo)->context != NULL && IsA((fcinfo)->context, TriggerData))
+
+ If this returns true, then it is safe to cast fcinfo->context> to type
TriggerData * and make use of the pointed-to
- TriggerData structure.
- The function must not alter the TriggerData
+ TriggerData> structure.
+ The function must not alter the TriggerData>
structure or any of the data it points to.
- type
+ type>
Always T_TriggerData if this is a trigger event.
- tg_event
+ tg_event>
describes the event for which the function is called. You may use the
- tg_relation
+ tg_relation>
- is a pointer to structure describing the triggered relation. Look at
- src/include/utils/rel.h for details about this structure. The most
- interesting things are tg_relation->rd_att (descriptor of the relation
- tuples) and tg_relation->rd_rel->relname (relation's name. This is not
- char*, but NameData. Use SPI_getrelname(tg_relation) to get char* if
- you need a copy of name).
+ is a pointer to a structure describing the triggered
+ relation. Look at utils/rel.h> for details about
+ this structure. The most interesting things are
+ tg_relation->rd_att> (descriptor of the relation
+ tuples) and tg_relation->rd_rel->relname>
+ (relation's name. This is not char*>, but
+ NameData>. Use
+ SPI_getrelname(tg_relation)> to get char*> if you
+ need a copy of the name).
- tg_trigtuple
+ tg_trigtuple>
is a pointer to the tuple for which the trigger is fired. This is the tuple
- tg_newtuple
+ tg_newtuple>
- is a pointer to the new version of tuple if UPDATE and NULL if this is
+ is a pointer to the new version of the tuple if UPDATE and NULL> if this is
for an INSERT or a DELETE. This is what you are to return to Executor if
UPDATE and you don't want to replace this tuple with another one or skip
the operation.
- tg_trigger
+ tg_trigger>
- is pointer to structure Trigger defined in src/include/utils/rel.h:
+ is a pointer to the structure Trigger> defined in utils/rel.h>:
typedef struct Trigger
{
Oid tgoid;
int16 tgattr[FUNC_MAX_ARGS];
char **tgargs;
} Trigger;
-
+
- where
- tgname is the trigger's name, tgnargs is number of arguments in tgargs,
- tgargs is an array of pointers to the arguments specified in the CREATE
- TRIGGER statement. Other members are for internal use only.
+ where tgname> is the trigger's name,
+ tgnargs> is the number of arguments in
+ tgargs>, tgargs> is an array of
+ pointers to the arguments specified in the CREATE TRIGGER
+ statement. Other members are for internal use only.
changes made by the query itself (via SQL-function, SPI-function, triggers)
are invisible to the query scan. For example, in query
INSERT INTO a SELECT * FROM a;
-
+
tuples inserted are invisible for SELECT scan. In effect, this
duplicates the database table within itself (subject to unique index
This is true for triggers as well so, though a tuple being inserted
- (tg_trigtuple) is not visible to queries in a BEFORE trigger, this tuple
+ (tg_trigtuple>) is not visible to queries in a BEFORE trigger, this tuple
(just inserted) is visible to queries in an AFTER trigger, and to queries
in BEFORE/AFTER triggers fired after this!
- Here is a very simple example of trigger usage. Function trigf reports
- the number of tuples in the triggered relation ttest and skips the
- operation if the query attempts to insert NULL into x (i.e - it acts as a
- NOT NULL constraint but doesn't abort the transaction).
+ Here is a very simple example of trigger usage. Function trigf> reports
+ the number of tuples in the triggered relation ttest> and skips the
+ operation if the query attempts to insert a null value into x (i.e., it acts as a
+ not-null constraint but doesn't abort the transaction).
-#include "executor/spi.h" /* this is what you need to work with SPI */
-#include "commands/trigger.h" /* -"- and triggers */
+#include "executor/spi.h" /* this is what you need to work with SPI */
+#include "commands/trigger.h" /* -"- and triggers */
extern Datum trigf(PG_FUNCTION_ARGS);
Datum
trigf(PG_FUNCTION_ARGS)
{
- TriggerData *trigdata = (TriggerData *) fcinfo->context;
- TupleDesc tupdesc;
- HeapTuple rettuple;
- char *when;
- bool checknull = false;
- bool isnull;
- int ret, i;
-
- /* Make sure trigdata is pointing at what I expect */
- if (!CALLED_AS_TRIGGER(fcinfo))
- elog(ERROR, "trigf: not fired by trigger manager");
-
- /* tuple to return to Executor */
- if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
- rettuple = trigdata->tg_newtuple;
- else
- rettuple = trigdata->tg_trigtuple;
-
- /* check for NULLs ? */
- if (!TRIGGER_FIRED_BY_DELETE(trigdata->tg_event) &&
- TRIGGER_FIRED_BEFORE(trigdata->tg_event))
- checknull = true;
-
- if (TRIGGER_FIRED_BEFORE(trigdata->tg_event))
- when = "before";
- else
- when = "after ";
-
- tupdesc = trigdata->tg_relation->rd_att;
-
- /* Connect to SPI manager */
- if ((ret = SPI_connect()) < 0)
- elog(INFO, "trigf (fired %s): SPI_connect returned %d", when, ret);
-
- /* Get number of tuples in relation */
- ret = SPI_exec("SELECT count(*) FROM ttest", 0);
-
- if (ret < 0)
- elog(NOTICE, "trigf (fired %s): SPI_exec returned %d", when, ret);
-
- /* count(*) returns int8 as of PG 7.2, so be careful to convert */
- i = (int) DatumGetInt64(SPI_getbinval(SPI_tuptable->vals[0],
- SPI_tuptable->tupdesc,
- 1,
- &isnull));
-
- elog (NOTICE, "trigf (fired %s): there are %d tuples in ttest", when, i);
-
- SPI_finish();
-
- if (checknull)
- {
- (void) SPI_getbinval(rettuple, tupdesc, 1, &isnull);
- if (isnull)
- rettuple = NULL;
- }
-
- return PointerGetDatum(rettuple);
+ TriggerData *trigdata = (TriggerData *) fcinfo->context;
+ TupleDesc tupdesc;
+ HeapTuple rettuple;
+ char *when;
+ bool checknull = false;
+ bool isnull;
+ int ret, i;
+
+ /* Make sure trigdata is pointing at what I expect */
+ if (!CALLED_AS_TRIGGER(fcinfo))
+ elog(ERROR, "trigf: not fired by trigger manager");
+
+ /* tuple to return to Executor */
+ if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
+ rettuple = trigdata->tg_newtuple;
+ else
+ rettuple = trigdata->tg_trigtuple;
+
+ /* check for null values */
+ if (!TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)
+ && TRIGGER_FIRED_BEFORE(trigdata->tg_event))
+ checknull = true;
+
+ if (TRIGGER_FIRED_BEFORE(trigdata->tg_event))
+ when = "before";
+ else
+ when = "after ";
+
+ tupdesc = trigdata->tg_relation->rd_att;
+
+ /* Connect to SPI manager */
+ if ((ret = SPI_connect()) < 0)
+ elog(INFO, "trigf (fired %s): SPI_connect returned %d", when, ret);
+
+ /* Get number of tuples in relation */
+ ret = SPI_exec("SELECT count(*) FROM ttest", 0);
+
+ if (ret < 0)
+ elog(NOTICE, "trigf (fired %s): SPI_exec returned %d", when, ret);
+
+ /* count(*) returns int8 as of PG 7.2, so be careful to convert */
+ i = (int) DatumGetInt64(SPI_getbinval(SPI_tuptable->vals[0],
+ SPI_tuptable->tupdesc,
+ 1,
+ &isnull));
+
+ elog (NOTICE, "trigf (fired %s): there are %d tuples in ttest", when, i);
+
+ SPI_finish();
+
+ if (checknull)
+ {
+ (void) SPI_getbinval(rettuple, tupdesc, 1, &isnull);
+ if (isnull)
+ rettuple = NULL;
+ }
+
+ return PointerGetDatum(rettuple);
}
-
+
Now, compile and create the trigger function:
CREATE FUNCTION trigf () RETURNS TRIGGER AS
-'...path_to_so' LANGUAGE 'C';
+'...path_to_so' LANGUAGE C;
CREATE TABLE ttest (x int4);
-
+
vac=> CREATE TRIGGER tbefore BEFORE INSERT OR UPDATE OR DELETE ON ttest
FOR EACH ROW EXECUTE PROCEDURE trigf();
CREATE
-- Insertion skipped and AFTER trigger is not fired
vac=> SELECT * FROM ttest;
-x
--
+ x
+---
(0 rows)
vac=> INSERT INTO ttest VALUES (1);
remember what we said about visibility.
INSERT 167793 1
vac=> SELECT * FROM ttest;
-x
--
-1
+ x
+---
+ 1
(1 row)
vac=> INSERT INTO ttest SELECT x * 2 FROM ttest;
remember what we said about visibility.
INSERT 167794 1
vac=> SELECT * FROM ttest;
-x
--
-1
-2
+ x
+---
+ 1
+ 2
(2 rows)
-vac=> UPDATE ttest SET x = null WHERE x = 2;
+vac=> UPDATE ttest SET x = NULL WHERE x = 2;
INFO: trigf (fired before): there are 2 tuples in ttest
UPDATE 0
vac=> UPDATE ttest SET x = 4 WHERE x = 2;
INFO: trigf (fired after ): there are 2 tuples in ttest
UPDATE 1
vac=> SELECT * FROM ttest;
-x
--
-1
-4
+ x
+---
+ 1
+ 4
(2 rows)
vac=> DELETE FROM ttest;
remember what we said about visibility.
DELETE 2
vac=> SELECT * FROM ttest;
-x
--
+ x
+---
(0 rows)
-
+
Another bit of default behavior for a strict> transition function
is that the previous state value is retained unchanged whenever a
- NULL input value is encountered. Thus, NULLs are ignored. If you
+ NULL input value is encountered. Thus, null values are ignored. If you
need some other behavior for NULL inputs, just define your transition
function as non-strict, and code it to test for NULL inputs and do
whatever is needed.
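 As a sketch, the hypothetical aggregate below relies on the strict
 built-in function int4pl as its transition function, so null inputs
 simply leave the running sum unchanged:

CREATE AGGREGATE sum_nonnull (
    BASETYPE = int4,
    SFUNC = int4pl,    -- strict, so null inputs are ignored
    STYPE = int4,
    INITCOND = '0'
);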
meaning that
the system should automatically assume a NULL result if any input
value is NULL. By doing this, we avoid having to check for NULL inputs
- in the function code. Without this, we'd have to check for NULLs
+ in the function code. Without this, we'd have to check for null values
explicitly, for example by checking for a null pointer for each
pass-by-reference argument. (For pass-by-value arguments, we don't
even have a way to check!)
either base (scalar) data types, or composite (multi-column) data types.
The API is split into two main components: support for returning
composite data types, and support for returning multiple rows
- (set returning functions or SRFs).
+ (set returning functions or SRF>s).
- Returning Tuples (Composite Types)
+ Returning Rows (Composite Types)
The Table Function API support for returning composite data types
- (or tuples) starts with the AttInMetadata struct. This struct holds
- arrays of individual attribute information needed to create a tuple from
- raw C strings. It also saves a pointer to the TupleDesc. The information
- carried here is derived from the TupleDesc, but it is stored here to
- avoid redundant CPU cycles on each call to a Table Function. In the
- case of a function returning a set, the AttInMetadata struct should be
- computed once during the first call and saved for re-use in later calls.
+ (or rows) starts with the AttInMetadata>
+ structure. This structure holds arrays of individual attribute
+ information needed to create a row from raw C strings. It also
+ saves a pointer to the TupleDesc>. The information
+ carried here is derived from the TupleDesc>, but it
+ is stored here to avoid redundant CPU cycles on each call to a
+ table function. In the case of a function returning a set, the
+ AttInMetadata> structure should be computed
+ once during the first call and saved for re-use in later calls.
typedef struct AttInMetadata
{
int32 *atttypmods;
} AttInMetadata;
- To assist you in populating this struct, several functions and a macro
+
+
+ To assist you in populating this structure, several functions and a macro
are available. Use
TupleDesc RelationNameGetTupleDesc(const char *relname)
- to get a TupleDesc based on a specified relation, or
+ to get a TupleDesc> based on a specified relation, or
TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases)
- to get a TupleDesc based on a type OID. This can be used to
- get a TupleDesc for a base (scalar) or composite (relation) type. Then
+ to get a TupleDesc> based on a type OID. This can
+ be used to get a TupleDesc> for a base (scalar) or
+ composite (relation) type. Then
AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc)
- will return a pointer to an AttInMetadata struct, initialized based on
- the given TupleDesc. AttInMetadata can be used in conjunction with
- C strings to produce a properly formed tuple. The metadata is stored here
- to avoid redundant work across multiple calls.
+ will return a pointer to an AttInMetadata>,
+ initialized based on the given
+ TupleDesc>. AttInMetadata> can be
+ used in conjunction with C strings to produce a properly formed
+ tuple. The metadata is stored here to avoid redundant work across
+ multiple calls.
To return a tuple you must create a tuple slot based on the
- TupleDesc. You can use
+ TupleDesc>. You can use
TupleTableSlot *TupleDescGetSlot(TupleDesc tupdesc)
to initialize this tuple slot, or obtain one through other (user provided)
- means. The tuple slot is needed to create a Datum for return by the
+ means. The tuple slot is needed to create a Datum> for return by the
function. The same slot can (and should) be re-used on each call.
- After constructing an AttInMetadata structure,
+ After constructing an AttInMetadata> structure,
HeapTuple BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values)
- can be used to build a HeapTuple given user data in C string form.
- "values" is an array of C strings, one for each attribute of the return
- tuple. Each C string should be in the form expected by the input function
- of the attribute data type. In order to return a NULL value for
- one of the attributes, the corresponding pointer in the "values" array
- should be set to NULL. This function will need to be called again
- for each tuple you return.
+ can be used to build a HeapTuple> given user data
+ in C string form. "values" is an array of C strings, one for
+ each attribute of the return tuple. Each C string should be in
+ the form expected by the input function of the attribute data
+ type. In order to return a null value for one of the attributes,
+ the corresponding pointer in the values> array
+ should be set to NULL>. This function will need to
+ be called again for each tuple you return.
- Building a tuple via TupleDescGetAttInMetadata and BuildTupleFromCStrings
- is only convenient if your function naturally computes the values to
- be returned as text strings. If your code naturally computes the
- values as a set of Datums, you should instead use the underlying
- heap_formtuple routine to convert the Datums directly into a tuple.
- You will still need the TupleDesc and a TupleTableSlot, but not
- AttInMetadata.
+ Building a tuple via TupleDescGetAttInMetadata> and
+ BuildTupleFromCStrings> is only convenient if your
+ function naturally computes the values to be returned as text
+ strings. If your code naturally computes the values as a set of
+ Datums, you should instead use the underlying
+ heap_formtuple> routine to convert the
+ Datums directly into a tuple. You will still need
+ the TupleDesc> and a TupleTableSlot>,
+ but not AttInMetadata>.
Once you have built a tuple to return from your function, the tuple must
- be converted into a Datum. Use
+ be converted into a Datum>. Use
TupleGetDatum(TupleTableSlot *slot, HeapTuple tuple)
- to get a Datum given a tuple and a slot. This Datum can be returned
- directly if you intend to return just a single row, or it can be used
- as the current return value in a set-returning function.
+ to get a Datum> given a tuple and a slot. This
+ Datum> can be returned directly if you intend to return
+ just a single row, or it can be used as the current return value
+ in a set-returning function.
Returning Sets
- A set-returning function (SRF) is normally called once for each item it
- returns. The SRF must therefore save enough state to remember what it
- was doing and return the next item on each call. The Table Function API
- provides the FuncCallContext struct to help control this process.
- fcinfo->flinfo->fn_extra> is used to
- hold a pointer to FuncCallContext across calls.
+ A set-returning function (SRF>) is normally called
+ once for each item it returns. The SRF> must
+ therefore save enough state to remember what it was doing and
+ return the next item on each call. The Table Function API
+ provides the FuncCallContext> structure to help
+ control this process. fcinfo->flinfo->fn_extra>
+ is used to hold a pointer to FuncCallContext>
+ across calls.
typedef struct
{
- /*
- * Number of times we've been called before.
- *
- * call_cntr is initialized to 0 for you by SRF_FIRSTCALL_INIT(), and
- * incremented for you every time SRF_RETURN_NEXT() is called.
- */
- uint32 call_cntr;
-
- /*
- * OPTIONAL maximum number of calls
- *
- * max_calls is here for convenience ONLY and setting it is OPTIONAL.
- * If not set, you must provide alternative means to know when the
- * function is done.
- */
- uint32 max_calls;
-
- /*
- * OPTIONAL pointer to result slot
- *
- * slot is for use when returning tuples (i.e. composite data types)
- * and is not needed when returning base (i.e. scalar) data types.
- */
- TupleTableSlot *slot;
-
- /*
- * OPTIONAL pointer to misc user provided context info
- *
- * user_fctx is for use as a pointer to your own struct to retain
- * arbitrary context information between calls for your function.
- */
- void *user_fctx;
-
- /*
- * OPTIONAL pointer to struct containing arrays of attribute type input
- * metainfo
- *
- * attinmeta is for use when returning tuples (i.e. composite data types)
- * and is not needed when returning base (i.e. scalar) data types. It
- * is ONLY needed if you intend to use BuildTupleFromCStrings() to create
- * the return tuple.
- */
- AttInMetadata *attinmeta;
-
- /*
- * memory context used for structures which must live for multiple calls
- *
- * multi_call_memory_ctx is set by SRF_FIRSTCALL_INIT() for you, and used
- * by SRF_RETURN_DONE() for cleanup. It is the most appropriate memory
- * context for any memory that is to be re-used across multiple calls
- * of the SRF.
- */
- MemoryContext multi_call_memory_ctx;
-
-} FuncCallContext;
+ /*
+ * Number of times we've been called before.
+ *
+ * call_cntr is initialized to 0 for you by SRF_FIRSTCALL_INIT(), and
+ * incremented for you every time SRF_RETURN_NEXT() is called.
+ */
+ uint32 call_cntr;
+
+ /*
+ * OPTIONAL maximum number of calls
+ *
+ * max_calls is here for convenience ONLY and setting it is OPTIONAL.
+ * If not set, you must provide alternative means to know when the
+ * function is done.
+ */
+ uint32 max_calls;
+
+ /*
+ * OPTIONAL pointer to result slot
+ *
+ * slot is for use when returning tuples (i.e. composite data types)
+ * and is not needed when returning base (i.e. scalar) data types.
+ */
+ TupleTableSlot *slot;
+
+ /*
+ * OPTIONAL pointer to misc user provided context info
+ *
+ * user_fctx is for use as a pointer to your own struct to retain
+ * arbitrary context information between calls for your function.
+ */
+ void *user_fctx;
+
+ /*
+ * OPTIONAL pointer to struct containing arrays of attribute type input
+ * metainfo
+ *
+ * attinmeta is for use when returning tuples (i.e. composite data types)
+ * and is not needed when returning base (i.e. scalar) data types. It
+ * is ONLY needed if you intend to use BuildTupleFromCStrings() to create
+ * the return tuple.
+ */
+ AttInMetadata *attinmeta;
+
+ /*
+ * memory context used for structures which must live for multiple calls
+ *
+ * multi_call_memory_ctx is set by SRF_FIRSTCALL_INIT() for you, and used
+ * by SRF_RETURN_DONE() for cleanup. It is the most appropriate memory
+ * context for any memory that is to be re-used across multiple calls
+ * of the SRF.
+ */
+ MemoryContext multi_call_memory_ctx;
+} FuncCallContext;
- An SRF uses several functions and macros that automatically manipulate
- the FuncCallContext struct (and expect to find it via
- fn_extra>). Use
+ An SRF> uses several functions and macros that
+ automatically manipulate the FuncCallContext>
+ structure (and expect to find it via fn_extra>). Use
SRF_IS_FIRSTCALL()
SRF_FIRSTCALL_INIT()
- to initialize the FuncCallContext struct. On every function call,
+ to initialize the FuncCallContext>. On every function call,
including the first, use
SRF_PERCALL_SETUP()
- to properly set up for using the FuncCallContext struct and clearing
- any previously returned data left over from the previous pass.
+ to properly set up for using the FuncCallContext>
+ and clearing any previously returned data left over from the
+ previous pass.
SRF_RETURN_NEXT(funcctx, result)
- to return it to the caller. (The result>
- must be a Datum, either a single value or a tuple prepared as described
- earlier.) Finally, when your function is finished returning data, use
+ to return it to the caller. (The result> must be a
+ Datum>, either a single value or a tuple prepared as
+ described earlier.) Finally, when your function is finished
+ returning data, use
SRF_RETURN_DONE(funcctx)
- to clean up and end the SRF.
+ to clean up and end the SRF>.
- The palloc memory context that is current when the SRF is called is
+ The memory context that is current when the SRF> is called is
a transient context that will be cleared between calls. This means
- that you do not need to be careful about pfree'ing everything
- you palloc; it will go away anyway. However, if you want to allocate
+ that you do not need to pfree> everything
+ you palloc>; it will go away anyway. However, if you want to allocate
any data structures to live across calls, you need to put them somewhere
else. The memory context referenced by
multi_call_memory_ctx> is a suitable location for any
- data that needs to survive until the SRF is finished running. In most
+ data that needs to survive until the SRF> is finished running. In most
cases, this means that you should switch into
multi_call_memory_ctx> while doing the first-call setup.
- A complete example of a simple SRF returning a composite type looks like:
+ A complete example of a simple SRF> returning a composite type looks like:
PG_FUNCTION_INFO_V1(testpassbyval);
Datum
- See contrib/tablefunc for more examples of Table Functions.
+ See contrib/tablefunc> for more examples of table functions.
programming language functions. Be warned: this section
of the manual will not make you a programmer. You must
have a good understanding of
C
- (including the use of pointers and the malloc memory manager)
+ (including the use of pointers)
before trying to write
C functions for
use with
PostgreSQL. While it may
be possible to load functions written in languages other
The call handler is called in the same way as any other function:
It receives a pointer to a
- FunctionCallInfoData struct containing
+ FunctionCallInfoData> structure containing
argument values and information about the called function, and it
is expected to return a Datum result (and possibly
set the isnull field of the
- FunctionCallInfoData struct, if it wishes
+ FunctionCallInfoData> structure, if it wishes
to return an SQL NULL result). The difference between a call
handler and an ordinary callee function is that the
flinfo->fn_oid field of the
- FunctionCallInfoData struct will contain
+ FunctionCallInfoData> structure will contain
the OID of the actual function to be called, not of the call
handler itself. The call handler must use this field to determine
which function to execute. Also, the passed argument list has
over a new type, nor associate operators of a new type with secondary
indexes.
To do these things, we must define an operator class>
- for the new datatype. We will describe operator classes in the
+ for the new data type. We will describe operator classes in the
context of a running example: a new operator
class for the B-tree access method that stores and
sorts complex numbers in ascending absolute value order.
Prior to
PostgreSQL release 7.3, it was
- necesssary to make manual additions to
+ necessary to make manual additions to
pg_amop>, pg_amproc>, and
pg_opclass> in order to create a user-defined
operator class. That approach is now deprecated in favor of
access method needs to be able to use to work with a particular data type.
Operator classes are so called because one thing they specify is the set
of WHERE-clause operators that can be used with an index (ie, can be
- converted into an indexscan qualification). An operator class may also
+ converted into an index scan qualification). An operator class may also
specify some support procedures> that are needed by the
internal operations of the index access method, but do not directly
correspond to any WHERE-clause operator that can be used with the index.
It is possible to define multiple operator classes for the same
- input datatype and index access method. By doing this, multiple
- sets of indexing semantics can be defined for a single datatype.
+ input data type and index access method. By doing this, multiple
+ sets of indexing semantics can be defined for a single data type.
For example, a B-tree index requires a sort ordering to be defined
- for each datatype it works on.
- It might be useful for a complex-number datatype
+ for each data type it works on.
+ It might be useful for a complex-number data type
to have one B-tree operator class that sorts the data by complex
absolute value, another that sorts by real part, and so on.
Typically one of the operator classes will be deemed most commonly
useful and will be marked as the default operator class for that
- datatype and index access method.
+ data type and index access method.
comparison it is. Instead, the index access method defines a set of
strategies>, which can be thought of as generalized operators.
Each operator class shows which actual operator corresponds to each
- strategy for a particular datatype and interpretation of the index
+ strategy for a particular data type and interpretation of the index
semantics.
In short, an operator class must specify a set of operators that express
- each of these semantic ideas for the operator class's datatype.
+ each of these semantic ideas for the operator class's data type.
Just as with operators, the operator class identifies which specific
- functions should play each of these roles for a given datatype and
+ functions should play each of these roles for a given data type and
semantic interpretation. The index access method specifies the set
of functions it needs, and the operator class identifies the correct
functions to use by assigning support function numbers> to them.
OPERATOR 1 < (complex, complex) ,
- but there is no need to do so when the operators take the same datatype
+ but there is no need to do so when the operators take the same data type
we are defining the operator class for.
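 Putting the pieces together, a complete definition for the running
 example might look like this (assuming the comparison operators and a
 support function complex_abs_cmp have already been created):

CREATE OPERATOR CLASS complex_abs_ops
    DEFAULT FOR TYPE complex USING btree AS
        OPERATOR        1       < ,
        OPERATOR        2       <= ,
        OPERATOR        3       = ,
        OPERATOR        4       >= ,
        OPERATOR        5       > ,
        FUNCTION        1       complex_abs_cmp(complex, complex);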
At present, only the GiST access method supports a
- STORAGE> type that's different from the column datatype.
+ STORAGE> type that's different from the column data type.
The GiST compress> and decompress> support
- routines must deal with datatype conversion when STORAGE>
+ routines must deal with data-type conversion when STORAGE>
is used.
Providing a negator is very helpful to the query optimizer since
- it allows expressions like NOT (x = y) to be simplified into
+ it allows expressions like NOT (x = y)> to be simplified into
x <> y. This comes up more often than you might think, because
- NOTs can be inserted as a consequence of other rearrangements.
+ NOT> operations can be inserted as a consequence of other rearrangements.
- MERGES (SORT1, SORT2, LTCMP, GTCMP)
+ MERGES> (SORT1>, SORT2>, LTCMP>, GTCMP>)
The MERGES clause, if present, tells the system that
it is permissible to use the merge join method for a join based on this
operator. MERGES> only makes sense for binary operators that
return boolean>, and in practice the operator must represent
- equality for some datatype or pair of datatypes.
+ equality for some data type or pair of data types.
it is possible to merge-join two
distinct data types so long as they are logically compatible. For
example, the int2-versus-int4 equality operator
- is mergejoinable.
+ is merge-joinable.
We only need sorting operators that will bring both data types into a
logically compatible sequence.
Execution of a merge join requires that the system be able to identify
- four operators related to the mergejoin equality operator: less-than
- comparison for the left input datatype, less-than comparison for the
- right input datatype, less-than comparison between the two datatypes, and
- greater-than comparison between the two datatypes. (These are actually
- four distinct operators if the mergejoinable operator has two different
- input datatypes; but when the input types are the same the three
+ four operators related to the merge-join equality operator: less-than
+ comparison for the left input data type, less-than comparison for the
+ right input data type, less-than comparison between the two data types, and
+ greater-than comparison between the two data types. (These are actually
+ four distinct operators if the merge-joinable operator has two different
+ input data types; but when the input types are the same the three
less-than operators are all the same operator.)
It is possible to
specify these operators individually by name, as the SORT1>,
- The input datatypes of the four comparison operators can be deduced
- from the input types of the mergejoinable operator, so just as with
+ The input data types of the four comparison operators can be deduced
+ from the input types of the merge-joinable operator, so just as with
COMMUTATOR>, only the operator names need be given in these
clauses. Unless you are using peculiar choices of operator names,
it's sufficient to write MERGES> and let the system fill in
There are additional restrictions on operators that you mark
- mergejoinable. These restrictions are not currently checked by
+ merge-joinable. These restrictions are not currently checked by
CREATE OPERATOR, but errors may occur when
the operator is used if any are not true:
- A mergejoinable equality operator must have a mergejoinable
+ A merge-joinable equality operator must have a merge-joinable
commutator (itself if the two data types are the same, or a related
equality operator if they are different).
- If there is a mergejoinable operator relating any two data types
- A and B, and another mergejoinable operator relating B to any
- third data type C, then A and C must also have a mergejoinable
- operator; in other words, having a mergejoinable operator must
+ If there is a merge-joinable operator relating any two data types
+ A and B, and another merge-joinable operator relating B to any
+ third data type C, then A and C must also have a merge-joinable
+ operator; in other words, having a merge-joinable operator must
be transitive.
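 A sketch of declaring such an operator, assuming a suitable equality
 function complex_eq already exists:

CREATE OPERATOR = (
    leftarg = complex,
    rightarg = complex,
    procedure = complex_eq,
    commutator = = ,
    merges
);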
In
PostgreSQL versions before 7.3,
the MERGES> shorthand was not available: to make a
- mergejoinable operator one had to write both SORT1> and
+ merge-joinable operator one had to write both SORT1> and
SORT2> explicitly. Also, the LTCMP> and
GTCMP>
options did not exist; the names of those operators were hardwired as
In a default
PostgreSQL installation,
the handler for the
PL/pgSQL language
is built and installed into the library
- directory. If Tcl/Tk support is configured in, the handlers for
- PL/Tcl and PL/TclU are also built and installed in the same
- location. Likewise, the PL/Perl and PL/PerlU handlers are built
- and installed if Perl support is configured, and PL/Python is
+ directory. If Tcl/Tk> support is configured in, the handlers for
+ PL/Tcl> and PL/TclU> are also built and installed in the same
+ location. Likewise, the PL/Perl> and PL/PerlU> handlers are built
+ and installed if Perl support is configured, and PL/Python> is
installed if Python support is configured. The
createlang script automates
linkend="xplang-install-cr1"> and