Postgres Australian Time Zones
If the token is numeric only, then it is either a single field
or an ISO-8601 concatenated date (e.g. 19990113 for January 13, 1999)
or time (e.g. 141516 for 14:15:16).
If there are more than 4 digits,
and if no other date fields have been previously read, then interpret
as a concatenated date (e.g. 19990118). 8
and 6 digits are interpreted as year, month, and day, while 7
and 5 digits are interpreted as year and day of year, respectively.
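For instance, a concatenated date can be given directly as a date literal (a brief illustration; the displayed format depends on the current DATESTYLE setting):

SELECT DATE '19990113';
-- the input is interpreted as January 13, 1999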
- "Julian Day" is different from "Julian Date".
+ Julian Day
is different from Julian Date
.
The Julian calendar was introduced by Julius Caesar in 45 BC. It was
in common use until the year 1582, when countries started changing to the
Gregorian calendar.
Documentation
The SGML source code is converted to RTF, then
imported into ApplixWare-4.4.1.
After a little cleanup (see the following
section) the output is printed to a PostScript file.
The library has some methods that are hidden but may prove
useful.
This means the host variable is of type bool and
the field in the
Postgres database
is neither 't' nor 'f'.
100, Data not found line %d.
This is a normal error that tells you that what you are querying cannot
be found or you are at the end of the cursor.
Oracle version 7.0 on AIX 3 uses OS-supported locks in shared
memory that allow an application designer to link an application
in a single tasking way. Instead of starting one client
process per application process, both the database part and the
application part run in the same process. In later versions of
Oracle this is no longer supported.
message 'no data found'
The error message for no data in:
exec sql insert select from statement
or shared library) that implements a new type or function
and
Postgres will load it as required. Code written
in SQL is even more trivial to add to the server.
This ability to modify its operation on the fly makes
Postgres uniquely suited for rapid prototyping of new
applications and storage structures.
Types are divided into base types and composite types.
Base types are those, like int4, that are implemented
in a language such as
C. They generally correspond to
what are often known as "abstract data types";
Postgres
can only operate on such types through methods provided
by the user and only understands the behavior of such
types to the extent that the user describes them.
Postgres stores these types
in only one way (within the
file that stores all rows of a table) but the
user can look inside at the attributes of these types
from the query language and optimize their retrieval by
(for example) defining indexes on the attributes.
Postgres base types are further
According to the "comp.ai.genetic" FAQ it cannot be stressed too
strongly that a
GA is not a pure random search for a solution to a
problem. A
GA uses stochastic processes, but the result is distinctly
non-random (better than random).
Postgres has undergone several major releases since
then. The first demoware system became operational
in 1987 and was shown at the 1988
ACM-SIGMOD
Conference. We released Version 1, described in
,
By 1996, it became clear that the name Postgres95 would
not stand the test of time. We chose a new name,
PostgreSQL, to reflect the relationship
between the original
Postgres and the more
- The "start-up cost" is the part of the total scan cost that must be expended
+ The start-up cost
is the part of the total scan cost that must be expended
before we can begin to fetch the first tuple. For most indexes this can
be taken as zero, but an index type with a high start-up cost might want
to set it nonzero.
In previous versions of
Postgres, the
default was not to get access to child tables. This was found to
be error prone and is also in violation of SQL99. Under the old
syntax, to get the sub-tables you append "*" to the table name.
For example
SELECT * from cities*;
You can still explicitly specify scanning child tables by appending
- "*", as well as explicitly specify not scanning child tables by
+ *, as well as explicitly specify not scanning child tables by
writing ONLY. But beginning in version 7.1, the default
behavior for an undecorated table name is to scan its child tables
too, whereas before the default was not to do so. To get the old
The only time you would access this class, is to use the create()
methods. These are not used by the driver, but issue one or more
- "create table" statements to the database, based on a Java Object
+ CREATE TABLE statements to the database, based on a Java Object
or Class that you want to serialize.
String original)
Encrypt a password given the clear-text password and a
-"salt".
+salt
.
Parameters:
salt - A two-character string representing the salt
Usage
This would work if table "table" has fields "control" and "name"
(and, perhaps, other fields):
pg_select $pgconn "SELECT * from table" array {
mode can be any OR'ing together of INV_READ and INV_WRITE.
The OR delimiter character is "|".
[pg_lo_creat $conn "INV_READ|INV_WRITE"]
Usage
Mode can be either "r", "w", or "rw".
whence
whence can be SEEK_CUR, SEEK_END, or SEEK_SET
whence
can be SEEK_CUR, SEEK_END, or SEEK_SET.
PgDatabase::PutLine
or when the last string has been received from the backend using
PgDatabase::GetLine.
It must be issued or the backend may get out of sync with
the frontend. Upon return from this function, the backend is ready to
receive the next query.
requiressl
Set to 1 to require SSL connection to the backend. Libpq
will then refuse to connect if the server does not support
SSL. Set to 0 (default) to negotiate with server.
If any parameter is unspecified, then the corresponding
environment variable (see the Environment Variables section)
is checked. If the environment variable is not set either,
then hardwired defaults are used.
The return value is a pointer to an abstract struct
PQoidStatus
Returns a string with the object id of the tuple inserted, if the
SQL command was an INSERT.
(The string will be "0" if the INSERT did not insert exactly one
row, or if the target table does not have OIDs.) If the command
was not an INSERT, returns an empty string.
int PQconsumeInput(PGconn *conn);
PQconsumeInput normally returns 1 indicating "no error",
but returns 0 if there was some kind of trouble (in which case
PQerrorMessage is set). Note that the result does not say
whether any input data was actually collected. After calling
PQconsumeInput to read the input. It can then call
PQisBusy, followed by PQgetResult
if PQisBusy returns false (0). It can also call
PQnotifies to detect NOTIFY messages (see Asynchronous Notification, below).
Notice that the application must check to see if a
new line consists of the two characters "\.",
which indicates that the backend server has finished sending
the results of the copy command.
If the application might
receive lines that are more than length-1 characters long,
care is needed to be sure one recognizes the "\." line correctly
(and does not, for example, mistake the end of a long data line
for a terminator line).
The code in
a whole line will be returned at one time. But if the buffer offered by
the caller is too small to hold a line sent by the backend, then a partial
data line will be returned. This can be detected by testing whether the
last returned byte is "\n" or not.
The returned string is not null-terminated. (If you want to add a
terminating null, be sure to pass a
bufsize one smaller than the room
actually available.)
const char *string);
Note the application must explicitly send the two
characters "\." on a final line to indicate to
the backend that it has finished sending its data.
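To make the terminator concrete, the data stream for a client-side COPY ends with a line containing only backslash and period. The sketch below assumes a hypothetical two-column table named mytable, with column values separated by tab characters:

COPY mytable FROM stdin;
1	one
2	two
\.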
sent to the backend using PQputline or when the
last string has been received from the backend
using PGgetline. It must be issued or the backend
may get out of sync with the frontend. Upon
return from this function, the backend is ready to
receive the next query.
The return value is 0 on successful completion,
By default, libpq prints notice
messages from the backend on stderr,
as well as a few error messages that it generates by itself.
This behavior can be overridden by supplying a callback function that
The Inversion large object implementation breaks large
objects up into chunks and stores the chunks in
tuples in the database. A B-tree index guarantees fast
searches for the correct chunk number when doing random
access reads and writes.
workspace maintained by the terminal monitor.
The
psql program responds to escape
codes that begin
with the backslash character, "\". For example, you
can get help on the syntax of various
Postgres SQL commands by typing:
Single-line comments are denoted by two dashes
("--"). Everything after the dashes up to the end of the
line is ignored. Multiple-line comments, and comments within a line,
are denoted by "/* ... */", a convention borrowed
to you and that you can type
SQL queries into a
workspace maintained by the terminal monitor.
The
psql program responds to escape codes that begin
with the backslash character, "\". For example, you
can get help on the syntax of various
PostgreSQL SQL commands by typing:
This tells the server to process the query. If you
terminate your query with a semicolon, the "\g" is not
necessary.
psql will automatically process semicolon terminated queries.
To read queries from a file, say myFile, instead of
prompt.)
White space (i.e., spaces, tabs and newlines) may be
used freely in
SQL queries. Single-line comments are denoted by
- "--". Everything after the dashes up to the end of the
+ --. Everything after the dashes up to the end of the
line is ignored. Multiple-line comments, and comments within a line,
are denoted by "/* ... */".
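For instance, both comment styles can appear in the same script (a trivial illustration):

SELECT 1;    -- everything from the dashes to the end of the line is ignored
/* a comment that
   spans multiple lines */
SELECT 2;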
the row still exists at the time it is returned (i.e. sometime after the
current transaction began); the row might have been modified or deleted
by an already-committed transaction that committed after this one started.
Even if the row is still valid "now", it could be changed or deleted
before the current transaction does a commit or rollback.
Another way to think about it is that each
transaction sees a snapshot of the database contents, and concurrently
executing transactions may very well see different snapshots. So the
whole concept of "now" is somewhat suspect anyway. This is not normally
a big problem if the client applications are isolated from each other,
but if the clients can communicate via channels outside the database
then serious confusion may ensue.
In a command synopsis, brackets
- ("[" and "]") indicate an optional phrase or keyword.
+ ([ and ]) indicate an optional phrase or keyword.
Anything in braces
- ("{" and "}") and containing vertical bars
- ("|")
+ ({ and }) and containing vertical bars
+ (|)
indicates that you must choose one.
The Ready message will appear in the lower left corner of the data
window. This indicates that you can now enter queries.
Search for null_clause = "NULL".
Change this to null_clause = "".
Enter the value "sqldemo", then click OK.
In this nested-loop join, the outer scan is the same index scan we had
in the example before last, and so its cost and row count are the same
because we are applying the "unique1 < 50" WHERE clause at that node.
The "t1.unique2 = t2.unique2" clause isn't relevant yet, so it doesn't
affect row count of the outer scan. For the inner scan, the unique2 value of the
current
outer-scan tuple is plugged into the inner index scan
to produce an index qualification like
- "t2.unique2 = constant". So we get the
+ t2.unique2 = constant. So we get the
same inner-scan plan and costs that we'd get from, say, explain select
* from tenk2 where unique2 = 42. The costs of the loop node are then set
on the basis of the cost of the outer scan, plus one repetition of the
of the two scans' row counts, but that's not true in general, because
in general you can have WHERE clauses that mention both relations and
so can only be applied at the join point, not to either input scan.
For example, if we added "WHERE ... AND t1.hundred < t2.hundred",
that would decrease the output row count of the join node, but not change
either input scan.
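The join discussed here can be reproduced against the regression-test tables with a query along the following lines (a sketch; the cost numbers printed by the planner will differ from system to system):

EXPLAIN SELECT *
FROM tenk1 t1, tenk2 t2
WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2;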
This plan proposes to extract the 50 interesting rows of tenk1
using ye same olde index scan, stash them into an in-memory hash table,
and then do a sequential scan of tenk2, probing into the hash table
for possible matches of "t1.unique2 = t2.unique2" at each tenk2 tuple.
The cost to read tenk1 and set up the hash table is entirely start-up
cost for the hash join, since we won't get any tuples out until we can
start reading tenk2. The total time estimate for the join also
PL/Python - Python Procedural Language
dictionary, as mentioned above.
When a function is used in a trigger, the dictionary TD contains
transaction related values. The trigger tuples are in TD["new"]
and/or TD["old"] depending on the trigger event. TD["event"]
contains the event as a string ("INSERT", "UPDATE", "DELETE", or
"UNKNOWN"). TD["when"] contains one of ("BEFORE", "AFTER", or
"UNKNOWN"). TD["level"] contains one of ("ROW", "STATEMENT", or
"UNKNOWN"). TD["name"] contains the trigger name, and TD["relid"]
contains the relation id of the table on which the trigger occurred.
If the trigger was called with arguments they are available
- in TD["args"][0] to TD["args"][(n -1)]
+ in TD["args"][0]> to TD["args"][(n -1)]>.
If the trigger "when" is "BEFORE", you may return None or "OK"
from the Python function to indicate the tuple is unmodified,
"SKIP" to abort the event, or "MODIFIED" to indicate you've
modified the tuple.
When working with dynamic queries you will have to face
escaping of single quotes in
PL/pgSQL. Please refer to the
table available in the "Porting from Oracle PL/SQL" chapter
for a detailed explanation that will save you some effort.
When you use the ELSE IF statement, you are actually
nesting an IF statement inside the ELSE
statement. Thus you need one END IF statement for each
nested IF and one for the parent IF-ELSE.
Once a record or row has been assigned to a RECORD variable,
you can use the "." (dot) notation to access fields in that
record:
DECLARE
spi_execp).
Think about a query string like
"SELECT '$val' AS ret"
where the Tcl variable val actually contains "doesn't". This would result
in the final query string
-"SELECT 'doesn't' AS ret"
-
+SELECT 'doesn't' AS ret
+
which would cause a parse error during
spi_exec or
spi_prepare.
It should contain
-"SELECT 'doesn''t' AS ret"
-
+SELECT 'doesn''t' AS ret
+
and has to be written as
-"SELECT '[ quote $val ]' AS ret"
-
+SELECT '[ quote $val ]' AS ret
+
The most important thing to remember about bug reporting is to state all
the facts and only facts. Do not speculate what you think went wrong, what
- "it seemed to do", or which part of the program has a fault.
+ it seemed to do
, or which part of the program has a fault.
If you are not familiar with the implementation you would probably guess
wrong and not help us a bit. And even if you are, educated explanations are
a great supplement to but no substitute for facts. If we are going to fix
please try to isolate the offending queries. We will probably not set up a
web server to reproduce your problem. In any case remember to provide
the exact input files, do not guess that the problem happens for
- "large files" or "mid-size databases", etc. since this
+ large files
or mid-size databases
, etc. since this
information is too inexact to be of use.
The output you expected is very important to state. If you just write
- "This command gives me that output." or "This is not
- what I expected.", we might run it ourselves, scan the output, and
+ This command gives me that output.
or
This is not
+ what I expected., we might run it ourselves, scan the output, and
think it looks OK and is exactly what we expected. We should not have to
spend the time to decode the exact semantics behind your commands.
Especially refrain from merely saying that "This is not what SQL says/Oracle
does." Digging out the correct behavior from SQL
is not a fun undertaking, nor do we all know how all the other relational
databases out there behave. (If your problem is a program crash you can
obviously omit this item.)
Platform information. This includes the kernel name and version, C library,
processor, memory information. In most cases it is sufficient to report
the vendor and version, but do not assume everyone knows what exactly
- "Debian" contains or that everyone runs on Pentiums. If
+ Debian
contains or that everyone runs on Pentiums. If
you have installation problems then information about compilers, make,
etc. is also necessary.
When writing a bug report, please choose non-confusing terminology.
The software package as such is called PostgreSQL,
sometimes Postgres for short. (Sometimes
the abbreviation Pgsql is used but don't do that.) When you
are specifically talking about the backend server, mention that, do not
just say "Postgres crashes". The interactive frontend is called
psql and is for all intents and purposes completely separate
from the backend.
Frontend/Backend Protocol
A character array of exactly n bytes interpreted as a
null-terminated string. The zero byte is omitted if there is
insufficient room. If s is specified it is the literal value.
Eg. LimString32, LimString64("user").
A conventional C null-terminated string with no length
limitation.
If s is specified it is the literal value.
Eg. String, String("user").
Specifies the value of the field itself in
ASCII
characters. n is the above
size minus 4.
There is no trailing zero byte in the field data; the front
end must add one if it wants one.
The cancel request code. The value is chosen to contain
- "1234" in the most significant 16 bits, and "5678" in the
+ 1234> in the most significant 16 bits, and 5678> in the
least 16 significant bits. (To avoid confusion, this code
must not be the same as any protocol version number.)
The name of the cursor. This will be "blank" if the cursor is
implicit.
PyGreSQL - Python Interface
Find the directory where your Setup
file lives (usually ??/Modules) in
the
Python source hierarchy and
copy or symlink the pgmodule.c file there.
If you want a shared module, make sure that the
- "*shared*" keyword is uncommented and
+ *shared* keyword is uncommented and
add the above line below it. You used to need to install
your shared modules with make sharedinstall but this no
longer seems to be true.
Signature
12-byte sequence "PGBCOPY\n\377\r\n\0" --- note that the null
is a required part of the signature. (The signature is designed to allow
easy identification of files that have been munged by a non-8-bit-clean
transfer. This signature will be changed by newline-translation
If the state transition function is declared strict in pg_proc,
then it cannot be called with NULL inputs. With such a transition
function, aggregate execution behaves as follows. NULL input values
are ignored (the function is not called and the previous state value
If the final function is declared strict, then it will not
be called when the ending state value is NULL; instead a NULL result
will be output automatically. (Of course this is just the normal
behavior of strict functions.) In any case the final function has
as an environment variable name, which must be known to the
server process. This way the database administrator can
exercise control over locations in which databases can be created.
(A customary choice is, e.g., PGDATA2.)
If the server is compiled with ALLOW_ABSOLUTE_DBPATHS
(not so by default), absolute path names, as identified by
a leading slash
(e.g., /usr/local/pgsql/data),
are allowed as well.
- "$" cannot be defined as a single-character operator,
+ $ cannot be defined as a single-character operator,
although it can be part of a multi-character operator name.
- "--" and "/*" cannot appear anywhere in an operator name,
+ -- and /* cannot appear anywhere in an operator name,
since they will be taken as the start of a comment.
A multi-character operator name cannot end in "+" or "-",
unless the name also contains at least one of these characters:
~ ! @ # % ^ & | ` ? $
When working with non-SQL-standard operator names, you will usually
need to separate adjacent operators with spaces to avoid ambiguity.
For example, if you have defined a left-unary operator named "@",
you cannot write X*@Y; you must write
X* @Y to ensure that
Postgres reads it as two operator names
- The operator "!=" is mapped to "<>" on input, so these two names
+ The operator != is mapped to <> on input, so these two names
are always equivalent.
that obtain numbers from the same sequence, a nextval operation
is never rolled back; that is, once a value has been fetched it is
considered used, even if the transaction that did the nextval later
aborts. This means that aborted transactions may leave unused holes
in the sequence of assigned values. setval operations are never
rolled back, either.
Each backend uses its own cache to store preallocated numbers.
Numbers that are cached but not used in the current session will be
lost, resulting in holes in the sequence.
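A short sketch of this behavior (the sequence name is arbitrary):

CREATE SEQUENCE serial START 101;
SELECT nextval('serial');
-- returns 101; the value stays used even if this transaction later aborts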
CREATE TABLE will enter a new, initially empty table
into the current database. The table will be owned by the user issuing the
command.
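A minimal sketch of such a command (the column definitions are purely illustrative):

CREATE TABLE films (
    code     char(5),
    title    varchar(40),
    kind     varchar(10)
);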
A table constraint is an integrity constraint defined on one or
more columns of a table. The four variations of "Table
Constraint" are:
UNIQUE
CHECK
NULL clause
- The NULL "constraint" (actually a non-constraint) is a
+ The NULL constraint
(actually a non-constraint) is a
Postgres extension to SQL92 that is
included for symmetry with the NOT NULL clause (and for compatibility
with some other RDBMSes). Since it is the
being inserted (for INSERT and
UPDATE operations only). If
the trigger fires after the event, all changes, including the
last insertion, update, or deletion, are visible to the trigger.
Storage alignment requirement of the data type. If specified, must
be char, int2, int4, or double;
the default is int4.
Storage technique for the data type. If specified, must
be plain, external, extended, or main;
the default is plain.
output_function
performs the reverse transformation. Both
the input and output functions must be declared to take
one or two arguments of type opaque.
positive integer, or variable length,
in which case Postgres assumes that the new type has the
same format
as the Postgres-supplied data type, text.
To indicate that a type is variable length, set
internallength
to .
A default value is optionally available in case a user
wants some specific bit pattern to mean "data not present."
Specify the default with the DEFAULT keyword.
How does the user specify that bit pattern and associate
it with the fact that the data is not present?
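Putting the pieces together, a definition might look like the following sketch; it assumes that input and output functions named complex_in and complex_out already exist and that the type occupies 16 bytes internally:

CREATE TYPE complex (
    internallength = 16,
    input = complex_in,
    output = complex_out,
    alignment = double
);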
As an example, if a query returns a value of one from an integer column,
you would get a string of '1' with a default cursor
whereas with a binary cursor you would get
a 4-byte value equal to control-A ('^A').
Postgres does not resolve
byte ordering or representation issues for binary cursors.
Therefore, if your client machine and server machine use different
representations (e.g., "big-endian" versus "little-endian"),
you will probably not want your data returned in
binary format.
However, binary cursors may be a
SQL92 allows one to repetitively retrieve the cursor
- at its "current position" using the syntax
+ at its current position
using the syntax
FETCH RELATIVE 0 FROM cursor.
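A brief sketch of declaring and reading from a binary cursor (the films table is used only for illustration):

BEGIN;
DECLARE myportal BINARY CURSOR FOR SELECT * FROM films;
FETCH FORWARD 5 IN myportal;
CLOSE myportal;
COMMIT;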
which is referenced. See the examples at the end.
- In order to use this command you must be logged in (using 'su', for example)
In order to use this command you must be logged in (using su, for example)
Commonly, the notify condition name is the same as the name of some table in
the database, and the notify event essentially means "I changed this table,
take a look at it to see what's new". But no such association is enforced by
the NOTIFY and LISTEN commands. For
example, a database designer could use several different condition names
to signal different sorts of changes to a single table.
after the transaction is completed (either committed or aborted). Again, the
reasoning is that if a notify were delivered within a transaction that was
later aborted, one would want the notification to be undone somehow---but
the backend cannot take back a notify once it has sent it to the frontend.
So notify events are only delivered between transactions. The upshot of this
is that applications using NOTIFY for real-time signaling
should try to keep their transactions short.
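A minimal sketch of the mechanism (the condition name virtual is arbitrary): one session registers interest with LISTEN, and another signals it with NOTIFY once its transaction commits.

LISTEN virtual;
-- meanwhile, in another session, typically after modifying some table:
NOTIFY virtual;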
Restore elements in list-file only, and in the
order they appear in the file. Lines can be moved and may also be commented out by placing a ';' at the
start of the line.
In normal operation,
psql provides a prompt with
the name of the database to which
psql is currently
connected, followed by the string "=>". For example,
$ psql testdb
Welcome to psql, the PostgreSQL interactive terminal.
test=> \z
Access permissions for database "test"
Relation | Access permissions
----------+-------------------------------------
specified expressions, keeping only the first row of each set of
duplicates. The DISTINCT ON expressions are interpreted using the
same rules as for ORDER BY items; see below.
- Note that "the first row" of each set is unpredictable
+ Note that the first row
of each set is unpredictable
unless ORDER BY is used to ensure that the desired
row appears first. For example,
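a query along the following lines (a sketch assuming a hypothetical weather_reports table) returns the most recent report for each location, because the ORDER BY places the desired row first within each group:

SELECT DISTINCT ON (location) location, time, report
FROM weather_reports
ORDER BY location, time DESC;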
SELECT Clause
In the SQL92 standard, the optional keyword AS
is just noise and can be
omitted without affecting the meaning.
The
Postgres parser requires this keyword when
renaming output columns because the type extensibility features lead to
parsing ambiguities
in this context. AS is optional in FROM items, however.
The DISTINCT ON phrase is not part of
SQL92.
UNLISTEN cancels any existing registration of the current
Postgres session as a listener on the notify
condition notifyname.
The special condition wildcard "*" cancels all listener registrations
for the current session.
- Change word "Drama" with "Dramatic" on column kind:
+ Change word Drama> with Dramatic> on column kind>:
UPDATE films
The query rewrite rule system (the "rule system" from now on)
is totally different from stored procedures and triggers.
It modifies queries to
take rules into consideration, and then passes the modified
It's the simplest SELECT Al can do on our views, so we take this
to explain the basics of view rules.
The SELECT * FROM shoelace was interpreted by the parser and
produced the parsetree
It turns out that the planner will collapse this tree into a two-level
query tree: the bottommost selects will be pulled up into the middle
select since there's no need to process them separately. But the
middle select will remain separate from the top, because it contains
aggregate functions. If we pulled those up it would change the behavior
in mind.
In the following, "update rules" means rules that are defined
ON INSERT, UPDATE or DELETE.
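As a sketch of what such a rule looks like, the log_shoelace rule discussed below could be created roughly as follows (column names such as sl_name and sl_avail, and the shoelace_log table, are assumed from the surrounding example):

CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data
    WHERE NEW.sl_avail <> OLD.sl_avail
    DO INSERT INTO shoelace_log
       VALUES (NEW.sl_name, NEW.sl_avail, current_user, current_timestamp);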
WHERE bpchareq(shoelace_data.sl_name, 'sl7');
There is a rule log_shoelace that is ON UPDATE with the rule
qualification expression
FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok;
Now the first rule shoelace_ok_ins is applied and turns it
into
hole, but in fact it isn't. If this would not work, the secretary
could set up a table with the same columns as phone_number and
copy the data there once per day. Then it's his own data and
he can grant access to everyone he wants. A GRANT means "I trust you".
If someone you trust does the thing above, it's time to
think it over and then REVOKE.
will cause indexes to be sorted in an order that prevents them from
being used for LIKE and regular-expression searches. If you need
good performance of such searches, you should set your current locale
- to "C" and re-run initdb. On most systems, setting the
+ to C> and re-run initdb. On most systems, setting the
current locale is done by changing the value of the environment variable
LC_ALL or LANG. The sort order used
within a particular database cluster is set by initdb
Array describing what parameters get NULLs
'n' indicates NULL allowed
' ' (a space) indicates NULL not allowed
where delim is the delimiter character
for the type, as recorded in its pg_type
entry. (For all built-in types, this is the comma character
- ",".) Each val is either a constant
+ ,>
.) Each val is either a constant
of the array element type, or a sub-array. An example of an
array constant is
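a single-quoted string in which the whole array and each sub-array are enclosed in braces and the values are separated by the delimiter; for instance (an illustrative two-dimensional integer array):

'{{1,2,3},{4,5,6},{7,8,9}}'

This constant describes a 3-by-3 array, each inner brace pair supplying one row.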
- "$" (dollar) cannot be a single-character operator, although it
+ $> (dollar) cannot be a single-character operator, although it
can be part of a multiple-character operator name.
A multiple-character operator name cannot end in "+" or "-",
unless the name also contains at least one of these characters:
~ ! @ # % ^ & | ` ? $
When working with non-SQL-standard operator names, you will usually
need to separate adjacent operators with spaces to avoid ambiguity.
For example, if you have defined a left-unary operator named "@",
you cannot write X*@Y; you must write
X* @Y to ensure that
Postgres reads it as two operator names
The precedence and associativity of the operators is hard-wired
into the parser. Most operators have the same precedence and are
left-associative. This may lead to non-intuitive behavior; for
example the Boolean operators "<" and ">" have a different
precedence than the Boolean operators "<=" and ">=". Also,
you will sometimes need to add parentheses when using combinations
of binary and unary operators. For instance
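(a sketch using the factorial operator !, which Postgres defines as a postfix operator):

SELECT (5 !) - 6;

Written without the parentheses, 5 ! - 6 would be taken as 5 ! (-6), because the parser cannot know in advance that ! is intended as a postfix rather than an infix operator.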
The trigger function must be created before the trigger is created as a
function taking no arguments and returning opaque. If the function is
written in C, it must use the "version 1" function manager interface.
Also, the same procedure may be used for triggers on different relations
(such functions are called general trigger functions).
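A sketch of attaching such a function to a table (all of the names here are hypothetical, and the function is assumed to already exist, taking no arguments and returning opaque):

CREATE TRIGGER mytable_stamp
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW
    EXECUTE PROCEDURE my_trigger_func();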
function that takes as its arguments two field names and puts the current
user in one and the current timestamp in the other. This allows triggers to
be written on INSERT events to automatically track creation of records in a
transaction table for example. It could also be used as a "last updated"
function if used in an UPDATE event.
When a function is called by the trigger manager, it is not passed any
normal parameters, but it is passed a "context" pointer pointing to a
TriggerData structure. C functions can check whether they were called
from the trigger manager or not by executing the macro
CALLED_AS_TRIGGER(fcinfo), which expands to
User-defined types, of which the parser has no a-priori knowledge, should be
-"higher" in the type hierarchy. In mixed-type expressions, native types shall always
+higher
in the type hierarchy. In mixed-type expressions, native types shall always
be converted to a user-defined type (of course, only if conversion is necessary).
and finds that there are candidates accepting both string-category and
bit-string-category inputs. Since string category is preferred when available,
that category is selected, and then the
-"preferred type" for strings, text, is used as the specific
+preferred type
for strings, text, is used as the specific
type to resolve the unknown literals to.
If any input arguments are "unknown", check the type categories accepted
at those argument positions by the remaining candidates. At each position,
-select "string"
+select string
category if any candidate accepts that category (this bias towards string
is appropriate since an unknown-type literal does look like a string).
Otherwise, if all the remaining candidates accept the same type category,
Actually, the parser is aware that text and varchar
-are "binary compatible", meaning that one can be passed to a function that
+are binary compatible>, meaning that one can be passed to a function that
accepts the other without doing any physical conversion. Therefore, no
explicit type conversion call is really inserted in this case.
b
(2 rows)
Here, the unknown-type literal 'b' will be resolved as type text.
If we define an aggregate that does not use a final function,
we have an aggregate that computes a running function of
the column values from each row. Sum is an
example of this kind of aggregate. Sum starts at
zero and always adds the current row's value to
its running total. For example, if we want to make a Sum
aggregate to work on a data type for complex numbers,
(In practice, we'd just name the aggregate sum, and rely on
Postgres to figure out which kind
of sum to apply to a complex column.)
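A sketch of what such a definition could look like, assuming a complex data type and an addition function complex_add already exist (the initial condition is the complex zero):

CREATE AGGREGATE complex_sum (
    sfunc = complex_add,
    basetype = complex,
    stype = complex,
    initcond = '(0,0)'
);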
The above definition of Sum will return zero (the initial
state condition) if there are no non-null input values.
Perhaps we want to return NULL in that case instead --- SQL92
expects Sum to behave that way. We can do this simply by
omitting the initcond phrase, so that the initial state
condition is NULL. Ordinarily this would mean that the sfunc
would need to check for a NULL state-condition input, but for
Sum and some other simple aggregates like Max and Min,
it's sufficient to insert the first non-null input value into
the state variable and then start applying the transition function
at the second non-null input value.
Postgres
will do that automatically if the initial condition is NULL and
the transition function is marked strict (i.e., not to be called
for NULL inputs).
Another bit of default behavior for a strict transition function
is that the previous state value is retained unchanged whenever a
NULL input value is encountered. Thus, NULLs are ignored. If you
need some other behavior for NULL inputs, just define your transition
Average is a more complex example of an aggregate. It requires
two pieces of running state: the sum of the inputs and the count
of the number of inputs. The final result is obtained by dividing
these quantities. Average is typically implemented by using a
Arguments to the SQL function may be referenced in the queries using
a $n syntax: $1 refers to the first argument, $2 to the second, and so
on. If an argument is complex, then a dot
notation (e.g. $1.emp) may be
used to access attributes of the argument or
to invoke functions.
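For instance, a sketch of an SQL function using the dot notation on a composite argument (it assumes an emp table whose rows carry a salary attribute):

CREATE FUNCTION double_salary(emp) RETURNS int4
    AS 'SELECT $1.salary * 2'
    LANGUAGE 'sql';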
Two different calling conventions are currently used for C functions.
The newer "version 1" calling convention is indicated by writing
a PG_FUNCTION_INFO_V1() macro call for the function,
as illustrated below. Lack of such a macro indicates an old-style
- ("version 0") function. The language name specified in CREATE FUNCTION
- is 'C' in either case. Old-style functions are now deprecated
+ ("version 0") function. The language name specified in CREATE FUNCTION
+ is C in either case. Old-style functions are now deprecated
because of portability problems and lack of functionality, but they
are still supported for compatibility reasons.
The following table gives the C type required for parameters in the C
functions that will be loaded into Postgres. The "Defined In"
column gives the actual header file (in the
.../src/backend/
directory) in which the equivalent C type is defined. Note that you should
Internally,
Postgres regards a
base type as a "blob of memory." The user-defined
functions that you define over a type in turn define the
way that
Postgres can operate
on it. That is,
Postgres will
Notice that we have specified the functions as strict, meaning that
the system should automatically assume a NULL result if any input
value is NULL. By doing this, we avoid having to check for NULL inputs
in the function code. Without this, we'd have to check for NULLs
must appear in the same source file (conventionally it's written
just before the function itself). This macro call is not needed
- for "internal"-language functions, since Postgres currently assumes
+ for internal>-language functions, since Postgres currently assumes
all internal functions are version-1. However, it is
required for dynamically-loaded functions.
An example is that in coding add_one_float8, we no longer need to
be aware that float8 is a pass-by-reference type. Another
example is that the GETARG macros for variable-length types hide
the need to deal with fetching "toasted" (compressed or
out-of-line) values. The old-style copytext
and concat_text functions shown above are
actually wrong in the presence of toasted values, because they
impose a strict ordering on keys, lesser to greater. Since
Postgres allows the user to define operators,
Postgres cannot look at the name of an operator
- (e.g., ">" or "<") and tell what kind of comparison it is. In fact,
+ (e.g., >> or <>) and tell what kind of comparison it is. In fact,
some access methods don't impose any ordering at all. For example,
R-trees express a rectangle-containment relationship,
whereas a hashed data structure expresses only bitwise similarity based
needs some consistent way of taking a qualification in your query,
looking at the operator and then deciding if a usable index exists. This
implies that
Postgres needs to know, for
example, that the "<=" and ">" operators partition a
uses strategies to express these relationships between
operators and the way they can be used to scan indexes.
does, amorderstrategy is the number of the strategy
routine that corresponds to the ordering operator. For example, B-tree
has amorderstrategy = 1 which is its
- "less than" strategy number.
+ less than
strategy number.
The final routine in the
- file is the "support routine" mentioned when we discussed the amsupport
+ file is the support routine
mentioned when we discussed the amsupport
column of the pg_am table. We will use this
later on. For now, ignore it.
c.oprname = '<';
Now do this for the other operators, substituting for the "1" in the
second line above and the "<" in the last line. Note the order:
"less than" is 1, "less than or equal" is 2, "equal" is 3, "greater
than or equal" is 4, and "greater than" is 5.
The final step is registration of the "support routine" previously
described in our discussion of pg_am. The
oid of this support routine is stored in the
pg_amproc table, keyed by the operator class
Every operator is "syntactic sugar" for a call to an
underlying function that does the real work; so you must
first create the underlying function before you can create
the operator. However, an operator is not
commutator of the operator being defined. We say that operator A is the
commutator of operator B if (x A y) equals (y B x) for all possible input
values x,y. Notice that B is also the commutator of A. For example,
operators '<' and '>' for a particular data type are usually each other's
commutators, and operator '+' is usually commutative with itself.
But operator '-' is usually not commutative with anything.
is the negator of operator B if both return boolean results and
(x A y) equals NOT (x B y) for all possible inputs x,y.
Notice that B is also the negator of A.
For example, '<' and '>=' are a negator pair for most data types.
An operator can never validly be its own negator.
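A sketch of an operator definition that declares both its commutator and its negator (the complex type and the comparison procedure are assumed to exist already; the names are illustrative):

CREATE OPERATOR = (
    leftarg = complex,
    rightarg = complex,
    procedure = complex_eq,
    commutator = = ,
    negator = <>
);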
scalargtsel for > or >=
It might seem a little odd that these are the categories, but they
make sense if you think about it. '=' will typically accept only
a small fraction of the rows in a table; '<>' will typically reject
only a small fraction. '<' will accept a fraction that depends on
where the given constant falls in the range of values for that table
column (which, it just so happens, is information collected by
ANALYZE and made available to the selectivity estimator).
'<=' will accept a slightly larger fraction than '<' for the same
comparison constant, but they're close enough to not be worth
distinguishing, especially since we're not likely to do better than a
rough guess anyhow. Similar remarks apply to '>' and '>='.
time intervals is not bitwise equality; the interval equality operator
considers two time intervals equal if they have the same
duration, whether or not their endpoints are identical. What this means
is that a join using "=" between interval fields would yield different
results if implemented as a hash join than if implemented another way,
because a large fraction of the pairs that should match will hash to
different values and will never be compared by the hash join. But
Merge join is based on the idea of sorting the left and righthand tables
into order and then scanning them in parallel. So, both data types must
be capable of being fully ordered, and the join operator must be one
that can only succeed for pairs of values that fall at the "same place"
in the sort order. In practice this means that the join operator must
behave like equality. But unlike hashjoin, where the left and right
data types had better be the same (or at least bitwise equivalent),
In practice you should only write SORT clauses for an '=' operator,
and the two referenced operators should always be named '<'. Trying
to use merge join with operators named anything else will result in
hopeless confusion, for reasons we'll see in a moment.
There must be '<' and '>' ordering operators having the same left and
right input data types as the mergejoinable operator itself. These
operators must be named '<' and '>'; you do
not have any choice in the matter, since there is no provision for
specifying them explicitly. Note that if the left and right data types
are different, neither of these operators is the same as either
are documented in the current User's Guide
in the chapter on data types.
For two-digit years, the significant transition year is 1970, not 2000;
- e.g. "70-01-01" is interpreted as 1970-01-01,
- whereas "69-01-01" is interpreted as 2069-01-01.
+ e.g. 70-01-01 is interpreted as 1970-01-01,
+ whereas 69-01-01 is interpreted as 2069-01-01.
Any Y2K problems in the underlying OS related to obtaining the
current time may propagate into apparent Y2K problems in