Use of symbolic references is enabled in a particular catalog column
by attaching BKI_LOOKUP(lookuprule )
to the column's definition, where lookuprule
- is the name of the referenced catalog, e.g. pg_proc .
+ is the name of the referenced catalog, e.g., pg_proc .
BKI_LOOKUP can be attached to columns of
type Oid , regproc , oidvector ,
or Oid[] ; in the latter two cases it implies performing a
start with the . character are also ignored, to
prevent mistakes since such files are hidden on some platforms. Multiple
files within an include directory are processed in file name order
- (according to C locale rules, i.e. numbers before letters, and
+ (according to C locale rules, i.e., numbers before letters, and
uppercase letters before lowercase ones).
With this parameter enabled, you can still create ordinary global
users. Simply append @ when specifying the user
- name in the client, e.g. joe@ . The @
+ name in the client, e.g., joe@ . The @
will be stripped off before the user name is looked up by the
server.
disabled, but the server continues to accumulate WAL segment files in
the expectation that a command will soon be provided. Setting
archive_command to a command that does nothing but
- return true, e.g. /bin/true (REM on
+ return true, e.g., /bin/true (REM on
Windows), effectively disables
archiving, but also breaks the chain of WAL files needed for
archive recovery, so it should only be used in unusual circumstances.
This parameter specifies that recovery should end as soon as a
- consistent state is reached, i.e. as early as possible. When restoring
+ consistent state is reached, i.e., as early as possible. When restoring
from an online backup, this means the point where taking the backup
ended.
variable has been set
earlier in the configuration file). Preferred style is to use a
numeric offset from UTC, or you can write a full time zone name,
- e.g. Europe/Helsinki not EEST .
+ e.g., Europe/Helsinki not EEST .
if your data is likely to be completely in cache, such as when
the database is smaller than the total server memory, decreasing
random_page_cost can be appropriate. Storage that has a low random
- read cost relative to sequential, e.g. solid-state drives, might
+ read cost relative to sequential, e.g., solid-state drives, might
also be better modeled with a lower value for random_page_cost,
e.g., 1.1 .
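As a sketch of the tuning described above (the value 1.1 comes from the text; using ALTER SYSTEM is just one of several ways to set the parameter):

```sql
-- Lower random_page_cost for SSD-backed storage, per the guidance above.
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```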
rows that can be locked; that value is unlimited. The default,
64, has historically proven sufficient, but you might need to
raise this value if you have queries that touch many different
- tables in a single transaction, e.g. query of a parent table with
+ tables in a single transaction, e.g., query of a parent table with
many children. This parameter can only be set at server start.
with assertions enabled. That is the case if the
macro USE_ASSERT_CHECKING is defined
when
PostgreSQL is built (accomplished
- e.g. by the configure option
+ e.g., by the configure option
--enable-cassert ). By
default
PostgreSQL is built without
assertions.
very large number of digits. It is especially recommended for
storing monetary amounts and other quantities where exactness is
required. Calculations with numeric values yield exact
- results where possible, e.g. addition, subtraction, multiplication.
+ results where possible, e.g., addition, subtraction, multiplication.
However, calculations on numeric values are very slow
compared to the integer types, or to the floating-point types
described in the next section.
|
unknown
- Identifies a not-yet-resolved type, e.g. of an undecorated
+ Identifies a not-yet-resolved type, e.g., of an undecorated
string literal.
- However, if the default value is volatile (e.g.
+ However, if the default value is volatile (e.g.,
clock_timestamp() )
each row will need to be updated with the value calculated at the time
ALTER TABLE is executed. To avoid a potentially
schema (assuming that the objects' own privilege requirements are
also met). Essentially this allows the grantee to look up
objects within the schema. Without this permission, it is still
- possible to see the object names, e.g. by querying system catalogs.
+ possible to see the object names, e.g., by querying system catalogs.
Also, after revoking this permission, existing sessions might have
statements that have previously performed this lookup, so this is not
a completely secure way to prevent object access.
EXEC SQL FETCH NEXT FROM mycursor INTO SQL DESCRIPTOR mydesc;
If the result set is empty, the Descriptor Area will still contain
- the metadata from the query, i.e. the field names.
+ the metadata from the query, i.e., the field names.
sqllen
- Contains the binary length of the field. e.g. 4 bytes for ECPGt_int .
+ Contains the binary length of the field, e.g., 4 bytes for ECPGt_int .
FREE cursor_name
- Due to the differences how ECPG works compared to Informix's ESQL/C (i.e. which steps
+ Due to the differences in how ECPG works compared to Informix's ESQL/C (i.e., which steps
are purely grammar transformations and which steps rely on the underlying run-time library)
there is no FREE cursor_name statement in ECPG. This is because in ECPG,
DECLARE CURSOR doesn't translate to a function call into
An extension is relocatable if it is possible to move
its contained objects into a different schema after initial creation
- of the extension. The default is false , i.e. the
+ of the extension. The default is false , i.e., the
extension is not relocatable.
See for more information.
NO_INSTALLCHECK
- don't define an
installcheck target, useful e.g. if tests require special configuration, or don't use
pg_regress
+ don't define an
installcheck target, useful e.g., if tests require special configuration, or don't use
pg_regress
In to_timestamp and to_date ,
- if the year format specification is less than four digits, e.g.
+ if the year format specification is less than four digits, e.g.,
YYY , and the supplied year is less than four digits,
- the year will be adjusted to be nearest to the year 2020, e.g.
+ the year will be adjusted to be nearest to the year 2020, e.g.,
95 becomes 1995.
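A quick illustration of the rule above (a hypothetical query, assuming a server where this adjustment applies):

```sql
SELECT to_date('95', 'YYY');  -- 95 is adjusted to the year nearest 2020, giving 1995-01-01
```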
|
objsubid
integer
- Sub-object ID (e.g. attribute number for a column)
+ Sub-object ID (e.g., attribute number for a column)
|
command_tag
|
objsubid
integer
- Sub-object ID (e.g. attribute number for a column)
+ Sub-object ID (e.g., attribute number for a column)
|
original
Contains the values of row
- attributes (i.e. the data) for a
+ attributes (i.e., the data) for a
relation .
The heap is realized within one or more
file segments
this is unacceptable, either the middleware or the application
must query such values from a single server and then use those
values in write queries. Another option is to use this replication
- option with a traditional primary-standby setup, i.e. data modification
+ option with a traditional primary-standby setup, i.e., data modification
queries are sent only to the primary and are propagated to the
standby servers via primary-standby replication, not by the replication
middleware. Care must also be taken that all
Set up continuous archiving on the primary to an archive directory
accessible from the standby, as described
in . The archive location should be
- accessible from the standby even when the primary is down, i.e. it should
+ accessible from the standby even when the primary is down, i.e., it should
reside on the standby server itself or another trusted server, not on
the primary server.
- Data Definition Language (DDL): e.g. CREATE INDEX
+ Data Definition Language (DDL): e.g., CREATE INDEX
WAL file control commands will not work during recovery,
- e.g. pg_start_backup , pg_switch_wal etc.
+ e.g., pg_start_backup , pg_switch_wal etc.
linkend="indexes-index-only-scans">index-only scans on
the given column, by returning the indexed column values for an index entry
in the form of an IndexTuple . The attribute number
- is 1-based, i.e. the first column's attno is 1. Returns true if supported,
+ is 1-based, i.e., the first column's attno is 1. Returns true if supported,
else false. If the access method does not support index-only scans at all,
the amcanreturn field in its IndexAmRoutine
struct can be set to NULL.
same purpose.
From the
Visual Studio Command Prompt , you can
change the targeted CPU architecture, build type, and target OS by using the
- vcvarsall.bat command, e.g.
+ vcvarsall.bat command, e.g.,
vcvarsall.bat x64 10.0.10240.0 to target Windows 10
with a 64-bit release build. See -help for the other
options of vcvarsall.bat . All commands should be run from
installations C:\Program Files\GnuWin32 .
Consider installing into C:\GnuWin32 or use the
NTFS short name path to GnuWin32 in your PATH environment setting
- (e.g. C:\PROGRA~1\GnuWin32 ).
+ (e.g., C:\PROGRA~1\GnuWin32 ).
Conversely, if PQconnectPoll(conn) last returned
PGRES_POLLING_WRITING , wait until the socket is ready
to write, then call PQconnectPoll(conn) again.
- On the first iteration, i.e. if you have yet to call
+ On the first iteration, i.e., if you have yet to call
PQconnectPoll , behave as if it last returned
PGRES_POLLING_WRITING . Continue this loop until
PQconnectPoll(conn) returns
hostaddr , and port options accept a comma-separated
list of values. The same number of elements must be given in each
option that is specified, such
- that e.g. the first hostaddr corresponds to the first host name,
+ that, e.g., the first hostaddr corresponds to the first host name,
the second hostaddr corresponds to the second host name, and so
forth. As an exception, if only one port is specified, it
applies to all the hosts.
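For instance, a connection string pairing each host with its own port might look like this sketch (host names are hypothetical):

```shell
psql "host=db1.example.com,db2.example.com port=5432,5433 dbname=mydb"
```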
If a password file is used, you can have different passwords for
different hosts. All the other connection options are the same for every
- host in the list; it is not possible to e.g. specify different
host in the list; it is not possible to, e.g., specify different
usernames for different hosts.
Maximum wait for connection, in seconds (write as a decimal integer,
- e.g. 10 ). Zero, negative, or not specified means
+ e.g., 10 ). Zero, negative, or not specified means
wait indefinitely. The minimum allowed timeout is 2 seconds, therefore
a value of 1 is interpreted as 2 .
This timeout applies separately to each host name or IP address.
cipher
- A short name of the ciphersuite used, e.g.
+ A short name of the ciphersuite used, e.g.,
"DHE-RSA-DES-CBC3-SHA" . The names are specific
to each SSL implementation.
again. Repeat until
returns 0. (It is necessary to check for
read-ready and drain the input with ,
- because the server can block trying to send us data, e.g. NOTICE
+ because the server can block trying to send us data, e.g., NOTICE
messages, and won't read our data until we read its.) Once
returns 0, wait for the socket to be
read-ready and then read the response as described above.
For a connection to be known SSL-secured, SSL usage must be configured
on both the client and the server before the connection
is made. If it is only configured on the server, the client may end up
- sending sensitive information (e.g. passwords) before
+ sending sensitive information (e.g., passwords) before
it knows that the server requires high security. In libpq, secure
connections can be ensured
by setting the sslmode parameter to verify-full or
Streaming of Large Transactions for Logical Decoding
- The basic output plugin callbacks (e.g. begin_cb ,
+ The basic output plugin callbacks (e.g., begin_cb ,
change_cb , commit_cb and
message_cb ) are only invoked when the transaction
actually commits. The changes are still decoded from the transaction
currently used for decoded changes) is selected and streamed. However, in
some cases we still have to spill to the disk even if streaming is enabled
because if we cross the memory limit but we still have not decoded the
- complete tuple e.g. only decoded toast table insert but not the main table
+ complete tuple, e.g., having decoded only the toast table insert but not the main table
insert.
When the server shuts down cleanly, a permanent copy of the statistics
data is stored in the pg_stat subdirectory, so that
statistics can be retained across server restarts. When recovery is
- performed at server start (e.g. after immediate shutdown, server crash,
+ performed at server start (e.g., after immediate shutdown, server crash,
and point-in-time recovery), all statistics counters are reset.
In
PostgreSQL , you can request any of
the four standard transaction isolation levels, but internally only
- three distinct isolation levels are implemented, i.e. PostgreSQL's
+ three distinct isolation levels are implemented, i.e., PostgreSQL's
Read Uncommitted mode behaves like Read Committed. This is because
it is the only sensible way to map the standard isolation levels to
PostgreSQL's multiversion concurrency control architecture.
. Of course, this plan may turn
out to be slower than the serial plan which the planner preferred, but
this will not always be the case. If you don't get a parallel
- plan even with very small values of these settings (e.g. after setting
+ plan even with very small values of these settings (e.g., after setting
them both to zero), there may be some reason why the query planner is
unable to generate a parallel plan for your query. See
and
Functions and aggregates must be marked PARALLEL UNSAFE if
they write to the database, access sequences, change the transaction state
- even temporarily (e.g. a PL/pgSQL function which establishes an
+ even temporarily (e.g., a PL/pgSQL function which establishes an
EXCEPTION block to catch errors), or make persistent changes to
settings. Similarly, functions must be marked PARALLEL
RESTRICTED if they access temporary tables, client connection state,
Place the database cluster's data directory in a memory-backed
- file system (i.e.
RAM disk). This eliminates all
+ file system (i.e., RAM disk). This eliminates all
database disk I/O, but limits data storage to the amount of
available memory (and perhaps swap).
A superuser may override this check on a per-user-mapping basis by setting
- the user mapping option password_required 'false' , e.g.
+ the user mapping option password_required 'false' , e.g.,
ALTER USER MAPPING FOR some_non_superuser SERVER loopback_nopw
OPTIONS (ADD password_required 'false');
the server, the connection will be rejected (for example, this would occur
if the client requested protocol version 4.0, which does not exist as of
this writing). If the minor version requested by the client is not
- supported by the server (e.g. the client requests version 3.1, but the
+ supported by the server (e.g., the client requests version 3.1, but the
server supports only 3.0), the server may either reject the connection or
may respond with a NegotiateProtocolVersion message containing the highest
minor protocol version which it supports. The client may then choose either
by the client, but does support an earlier version of the protocol;
this message indicates the highest supported minor version. This
message will also be sent if the client requested unsupported protocol
- options (i.e. beginning with _pq_. ) in the
+ options (i.e., beginning with _pq_. ) in the
startup packet. This message will be followed by an ErrorResponse or
a message indicating the success or failure of authentication.
( )
)
- This is commonly used for analysis over hierarchical data; e.g. total
+ This is commonly used for analysis over hierarchical data; e.g., total
salary by department, division, and company-wide total.
CUBE ( e1 , e2 , ... )
- represents the given list and all of its possible subsets (i.e. the power
+ represents the given list and all of its possible subsets (i.e., the power
set). Thus
CUBE ( a, b, c )
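Spelled out, the power set of three grouping columns contains eight grouping sets, so CUBE ( a, b, c ) is equivalent to:

```sql
GROUPING SETS (
    ( a, b, c ),
    ( a, b    ),
    ( a,    c ),
    ( a       ),
    (    b, c ),
    (    b    ),
    (       c ),
    (         )
)
```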
Collation order (LC_COLLATE ) to use in the new database.
- This affects the sort order applied to strings, e.g. in queries with
+ This affects the sort order applied to strings, e.g., in queries with
ORDER BY, as well as the order used in indexes on text columns.
The default is to use the collation order of the template database.
See below for additional restrictions.
Character classification (LC_CTYPE ) to use in the new
- database. This affects the categorization of characters, e.g. lower,
+ database. This affects the categorization of characters, e.g., lower,
upper and digit. The default is to use the character classification of
the template database. See below for additional restrictions.
A list of values for the
associated filter_variable
for which the trigger should fire. For TAG , this means a
- list of command tags (e.g. 'DROP FUNCTION' ).
+ list of command tags (e.g., 'DROP FUNCTION' ).
The name of the language that the function is implemented in.
It can be sql , c ,
internal , or the name of a user-defined
- procedural language, e.g. plpgsql . Enclosing the
+ procedural language, e.g., plpgsql . Enclosing the
name in single quotes is deprecated and requires matching case.
Functions should be labeled parallel unsafe if they modify any database
state, or if they make changes to the transaction such as using
sub-transactions, or if they access sequences or attempt to make
- persistent changes to settings (e.g. setval ). They should
+ persistent changes to settings (e.g., setval ). They should
be labeled as parallel restricted if they access temporary tables,
client connection state, cursors, prepared statements, or miscellaneous
backend-local state which the system cannot synchronize in parallel mode
- (e.g. setseed cannot be executed other than by the group
+ (e.g., setseed cannot be executed other than by the group
leader because a change made by another process would not be reflected
in the leader). In general, if a function is labeled as being safe when
it is restricted or unsafe, or if it is labeled as being restricted when
The name of the language that the procedure is implemented in.
It can be sql , c ,
internal , or the name of a user-defined
- procedural language, e.g. plpgsql . Enclosing the
+ procedural language, e.g., plpgsql . Enclosing the
name in single quotes is deprecated and requires matching case.
Examples
- Create table t1 with two functionally dependent columns, i.e.
+ Create table t1 with two functionally dependent columns, i.e.,
knowledge of a value in the first column is sufficient for determining the
value in the other column. Then functional dependency statistics are built
on those columns:
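The elided example presumably resembles the following sketch (table name, data, and statistics-object name are illustrative):

```sql
CREATE TABLE t1 (
    a   int,
    b   int
);

-- a and b are functionally dependent: the value of b is determined by a
INSERT INTO t1 SELECT i/100, i/500
  FROM generate_series(1, 1000000) s(i);

ANALYZE t1;

CREATE STATISTICS s1 (dependencies) ON a, b FROM t1;

ANALYZE t1;  -- recollect statistics, now including the functional dependencies
```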
one or more columns on which the uniqueness is not enforced.
Note that although the constraint is not enforced on the included columns,
it still depends on them. Consequently, some operations on these columns
- (e.g. DROP COLUMN ) can cause cascaded constraint and
+ (e.g., DROP COLUMN ) can cause cascaded constraint and
index deletion.
of columns to be specified which will be included in the non-key portion
of the index. Although uniqueness is not enforced on the included columns,
the constraint still depends on them. Consequently, some operations on the
- included columns (e.g. DROP COLUMN ) can cause cascaded
+ included columns (e.g., DROP COLUMN ) can cause cascaded
constraint and index deletion.
initdb initializes the database cluster's default
locale and character set encoding. The character set encoding,
collation order (LC_COLLATE ) and character set classes
- (LC_CTYPE , e.g. upper, lower, digit) can be set separately
+ (LC_CTYPE , e.g., upper, lower, digit) can be set separately
for a database when it is created. initdb determines
those settings for the template1 database, which will
serve as the default for all other databases.
--if-exists
- Use conditional commands (i.e. add an IF EXISTS
+ Use conditional commands (i.e., add an IF EXISTS
clause) when cleaning database objects. This option is not valid
unless --clean is also specified.
--if-exists
- Use conditional commands (i.e. add an IF EXISTS
+ Use conditional commands (i.e., add an IF EXISTS
clause) to drop databases and other objects. This option is not valid
unless --clean is also specified.
--if-exists
- Use conditional commands (i.e. add an IF EXISTS
+ Use conditional commands (i.e., add an IF EXISTS
clause) to drop database objects. This option is not valid
unless --clean is also specified.
from the WAL archive to the pg_wal directory, or run
pg_rewind with the
-c option to
automatically retrieve them from the WAL archive. The use of
-
pg_rewind is not limited to failover, e.g. a standby
+
pg_rewind is not limited to failover, e.g., a standby
server can be promoted, run some write transactions, and then rewound
to become a standby again.
transaction to finish. The wait time is called the schedule lag time,
and its average and maximum are also reported separately. The
transaction latency with respect to the actual transaction start time,
- i.e. the time spent executing the transaction in the database, can be
+ i.e., the time spent executing the transaction in the database, can be
computed by subtracting the schedule lag time from the reported
latency.
client per thread and there are no external or data dependencies.
From a statistical viewpoint reproducing runs exactly is a bad idea because
it can hide the performance variability or improve performance unduly,
- e.g. by hitting the same pages as a previous run.
+ e.g., by hitting the same pages as a previous run.
However, it may also be of great help for debugging, for instance
re-running a tricky case which leads to an error.
Use wisely.
Remember to take the sampling rate into account when processing the
log file. For example, when computing TPS values, you need to multiply
- the numbers accordingly (e.g. with 0.01 sample rate, you'll only get
+ the numbers accordingly (e.g., with 0.01 sample rate, you'll only get
1/100 of the actual TPS).
2.0 / parameter , that is a relative
1.0 / parameter around the mean; for instance, if
parameter is 4.0, 67% of values are drawn from the
- middle quarter (1.0 / 4.0) of the interval (i.e. from
+ middle quarter (1.0 / 4.0) of the interval (i.e., from
3.0 / 8.0 to 5.0 / 8.0 ) and 95% from
the middle half (2.0 / 4.0 ) of the interval (second and third
quartiles). The minimum allowed parameter
and max_lag , are only present if the --rate
option is used.
They provide statistics about the time each transaction had to wait for the
- previous one to finish, i.e. the difference between each transaction's
+ previous one to finish, i.e., the difference between each transaction's
scheduled start time and the time it actually started.
The very last field, skipped ,
is only present if the --latency-limit option is used, too.
pg_upgrade (formerly called
pg_migrator ) allows data
stored in
PostgreSQL data files to be upgraded to a later
PostgreSQL
major version without the data dump/reload typically required for
- major version upgrades, e.g. from 9.5.8 to 9.6.4 or from 10.7 to 11.2.
- It is not required for minor version upgrades, e.g. from 9.6.2 to 9.6.3
+ major version upgrades, e.g., from 9.5.8 to 9.6.4 or from 10.7 to 11.2.
+ It is not required for minor version upgrades, e.g., from 9.6.2 to 9.6.3
or from 10.1 to 10.2.
pg_upgrade does its best to
- make sure the old and new clusters are binary-compatible, e.g. by
+ make sure the old and new clusters are binary-compatible, e.g., by
checking for compatible compile-time settings, including 32/64-bit
binaries. It is important that
any external modules are also binary compatible, though this cannot
Optionally move the old cluster
- If you are using a version-specific installation directory, e.g.
+ If you are using a version-specific installation directory, e.g.,
/opt/PostgreSQL/&majorversion; , you do not need to move the old cluster. The
graphical installers all use version-specific installation directories.
- If your installation directory is not version-specific, e.g.
+ If your installation directory is not version-specific, e.g.,
/usr/local/pgsql , it is necessary to move the current PostgreSQL install
directory so it does not interfere with the new
PostgreSQL installation.
Once the current
PostgreSQL server is shut down, it is safe to rename the
Install any custom shared object files (or DLLs) used by the old cluster
- into the new cluster, e.g. pgcrypto.so ,
+ into the new cluster, e.g., pgcrypto.so ,
whether they are from contrib
- or some other source. Do not install the schema definitions, e.g.
+ or some other source. Do not install the schema definitions, e.g.,
CREATE EXTENSION pgcrypto , because these will be upgraded
from the old cluster.
Also, any custom full text search files (dictionary, synonym,
Save any configuration files from the old standbys' configuration
- directories you need to keep, e.g. postgresql.conf
+ directories you need to keep, e.g., postgresql.conf
(and any files included by it), postgresql.auto.conf ,
pg_hba.conf , because these will be overwritten
or removed in the next step.
on the standby. The directory structure under the specified
directories on the primary and standbys must match. Consult the
rsync manual page for details on specifying the
- remote directory, e.g.
+ remote directory, e.g.,
rsync --archive --delete --hard-links --size-only --no-inc-recursive /opt/PostgreSQL/9.5 \
pg_upgrade completes. (Automatic deletion is not
possible if you have user-defined tablespaces inside the old data
directory.) You can also delete the old installation directories
- (e.g. bin , share ).
+ (e.g., bin , share ).
If you are upgrading a pre-
PostgreSQL 9.2 cluster
that uses a configuration-file-only directory, you must pass the
real data directory location to
pg_upgrade , and
- pass the configuration directory location to the server, e.g.
+ pass the configuration directory location to the server, e.g.,
-d /real-data-directory -o '-D /configuration-directory' .
copy with any changes to make it consistent. (--checksum
is necessary because rsync only has file modification-time
granularity of one second.) You might want to exclude some
- files, e.g.
postmaster.pid , as documented in
+ files, e.g., postmaster.pid , as documented in
linkend="backup-lowlevel-base-backup"/>. If your file system supports
file system snapshots or copy-on-write file copies, you can use that
to make a backup of the old cluster and tablespaces, though the snapshot
To start postgres with a specific
- port, e.g. 1234:
+ port, e.g., 1234:
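The elided command is presumably along these lines:

```shell
postgres -p 1234
```

A client would then connect with a matching port option, such as psql -p 1234.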
Prepared statements potentially have the largest performance advantage
when a single session is being used to execute a large number of similar
statements. The performance difference will be particularly
- significant if the statements are complex to plan or rewrite, e.g.
+ significant if the statements are complex to plan or rewrite, e.g.,
if the query involves a join of many tables or requires
the application of several rules. If the statement is relatively simple
to plan and rewrite but relatively expensive to execute, the
psql returns 0 to the shell if it
- finished normally, 1 if a fatal error of its own occurs (e.g. out of memory,
+ finished normally, 1 if a fatal error of its own occurs (e.g., out of memory,
file not found), 2 if the connection to the server went bad
and the session was not interactive, and 3 if an error occurred in a
script and the variable ON_ERROR_STOP was set.
In latex-longtable format, this controls
the proportional width of each column containing a left-aligned
data type. It is specified as a whitespace-separated list of values,
- e.g. '0.2 0.2 0.6' . Unspecified output columns
+ e.g., '0.2 0.2 0.6' . Unspecified output columns
use the last specified value.
psql starts up. Tab-completion is also
supported, although the completion logic makes no claim to be an
SQL parser. The queries generated by tab-completion
- can also interfere with other SQL commands, e.g. SET
+ can also interfere with other SQL commands, e.g., SET
TRANSACTION ISOLATION LEVEL.
If for some reason you do not like the tab completion, you
can turn it off by putting this in a file named
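The elided snippet is readline configuration; an application-specific conditional (a sketch, assuming the file in question is ~/.inputrc) would look like:

```
$if psql
set disable-completion on
$endif
```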
which is what should be used to refer to the origin across systems, is
free-form text . It should be used in a way that makes conflicts
between replication origins created by different replication solutions
- unlikely; e.g. by prefixing the replication solution's name to it.
+ unlikely; e.g., by prefixing the replication solution's name to it.
The OID is used only to avoid having to store the long version
in situations where space efficiency is important. It should never be shared
across systems.
manner. Replay progress for all replication origins can be seen in the
pg_replication_origin_status
- view. An individual origin's progress, e.g. when resuming
+ view. An individual origin's progress, e.g., when resuming
replication, can be acquired using
pg_replication_origin_progress()
for any origin or
output plugin callbacks (see )
generated by the session is tagged with the replication origin of the
generating session. This allows treating them differently in the output
- plugin, e.g. ignoring all but locally-originating rows. Additionally
+ plugin, e.g., ignoring all but locally-originating rows. Additionally
the
filter_by_origin_cb callback can be used
to filter the logical decoding change stream based on the
be migrated in-place from one major
PostgreSQL
version to another. Upgrades can be performed in minutes,
particularly with --link mode. It requires steps similar to
-
pg_dumpall above, e.g. starting/stopping the server,
+
pg_dumpall above, e.g., starting/stopping the server,
running
initdb . The
pg_upgrade
linkend="pgupgrade">documentation outlines the necessary steps.
commands.
SELinux provides a feature to allow trusted
code to run using a security label different from that of the client,
generally for the purpose of providing highly controlled access to
- sensitive data (e.g. rows might be omitted, or the precision of stored
+ sensitive data (e.g., rows might be omitted, or the precision of stored
values might be reduced). Whether or not a function acts as a trusted
procedure is controlled by its security label and the operating system
security policy. For example:
Both macros with arguments and static inline
functions may be used. The latter are preferable if there are
- multiple-evaluation hazards when written as a macro, as e.g. the
+ multiple-evaluation hazards when written as a macro, as e.g., the
case with
#define Max(x, y) ((x) > (y) ? (x) : (y))
When the definition of an inline function references symbols
- (i.e. variables, functions) that are only available as part of the
+ (i.e., variables, functions) that are only available as part of the
backend, the function may not be visible when included from frontend
code.
- Returns the name of the protocol used for the SSL connection (e.g. TLSv1.0
+ Returns the name of the protocol used for the SSL connection (e.g., TLSv1.0,
TLSv1.1, or TLSv1.2).
Returns the name of the cipher used for the SSL connection
- (e.g. DHE-RSA-AES256-SHA).
+ (e.g., DHE-RSA-AES256-SHA).
necessary for each tuple to have a tuple identifier (
TID )
consisting of a block number and an item number (see also
linkend="storage-page-layout"/>). It is not strictly necessary that the
- sub-parts of
TIDs have the same meaning they e.g. have
+ sub-parts of
TIDs have the same meaning they e.g., have
for heap , but if bitmap scan support is desired (it is
optional), the block number needs to provide locality.
- A GiST index can be covering, i.e. use the INCLUDE
+ A GiST index can be covering, i.e., use the INCLUDE
clause. Included columns can have data types without any GiST operator
class. Included attributes will be stored uncompressed.
allows the implementation of very fast searches with online update.
Partitioning can be done at the database level using table inheritance,
or by distributing documents over
- servers and collecting external search results, e.g. via
+ servers and collecting external search results, e.g., via
linkend="ddl-foreign-data">Foreign Data access.
The latter is possible because ranking functions use
only local information.
overhead can reduce performance, especially if journaling
causes file system data to be flushed
to disk. Fortunately, data flushing during journaling can
- often be disabled with a file system mount option, e.g.
+ often be disabled with a file system mount option, e.g.,
data=writeback on a Linux ext3 file system.
Journaled file systems do improve boot speed after a crash.
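For example, on Linux such a mount option could be set in /etc/fstab (device and mount point are hypothetical):

```
/dev/sdb1  /var/lib/postgresql  ext3  data=writeback  0  2
```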
Besides SELECT queries, the commands can include data
modification queries (INSERT ,
UPDATE , and DELETE ), as well as
- other SQL commands. (You cannot use transaction control commands, e.g.
+ other SQL commands. (You cannot use transaction control commands, e.g.,
COMMIT , SAVEPOINT , and some utility
- commands, e.g.
VACUUM , in
SQL functions.)
+ commands, e.g., VACUUM , in
SQL functions.)
However, the final command
must be a SELECT or have a RETURNING
clause that returns whatever is
exceptions. Any exceptions must be caught and appropriate errors
passed back to the C interface. If possible, compile C++ with
-fno-exceptions to eliminate exceptions entirely; in such
- cases, you must check for failures in your C++ code, e.g. check for
+ cases, you must check for failures in your C++ code, e.g., check for
NULL returned by new() .
The calling SELECT statement doesn't necessarily have to be
just SELECT * — it can reference the output
columns by name or join them to other tables. The function produces a
- virtual table with which you can perform any operation you wish (e.g.
+ virtual table with which you can perform any operation you wish (e.g.,
aggregation, joining, sorting, etc.). So we could also have:
SELECT t.title, p.fullname, p.email