Joe Conway [Thu, 22 Dec 2016 17:47:36 +0000 (09:47 -0800)]
Make dblink try harder to form useful error messages
When libpq encounters a connection-level error, e.g. runs out of memory
while forming a result, there will be no error associated with PGresult,
but a message will be placed into PGconn's error buffer. postgres_fdw
takes care to use the PGconn error message when PGresult does not have
one, but dblink has been negligent in that regard. Modify dblink to mirror
what postgres_fdw has been doing.
Back-patch to all supported branches.
Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
Joe Conway [Thu, 22 Dec 2016 17:19:08 +0000 (09:19 -0800)]
Protect dblink from invalid options when using postgres_fdw server
When dblink uses a postgres_fdw server name for its connection, it
is possible for the connection to have options that are invalid
with dblink (e.g. "updatable"). The recommended way to avoid this
problem is to use dblink_fdw servers instead. However there are use
cases for using postgres_fdw, and possibly other FDWs, for dblink
connection options, therefore protect against trying to use any
options that do not apply by using is_valid_dblink_option() when
building the connection string from the options.
Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
is_valid_dblink_option() did not yet exist, and neither did
postgres_fdw, at least in the postgres source tree. Given the lack
of previous complaints, fixing that seems too invasive/not worth it.
Author: Corey Huinker
Reviewed-By: Joe Conway
Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
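For illustration, a minimal sketch of the scenario (the server, host, and role names are placeholders, not taken from the commit):
    CREATE EXTENSION postgres_fdw;
    CREATE EXTENSION dblink;
    CREATE SERVER fdw_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'localhost', dbname 'postgres', updatable 'true');
    CREATE USER MAPPING FOR CURRENT_USER SERVER fdw_srv OPTIONS (user 'postgres');
    -- "updatable" is a postgres_fdw option, not a libpq connection option;
    -- dblink now filters it out instead of passing it through and failing.
    SELECT dblink_connect('conn1', 'fdw_srv');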
Tom Lane [Thu, 22 Dec 2016 16:19:04 +0000 (11:19 -0500)]
Give a useful error message if uuid-ossp is built without preconfiguration.
Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without
having told configure you meant to; you could just cd into that directory
and "make". That no longer works because the code depends on configure to
have done header and library probes, but the ensuing error messages are
not so easy to interpret if you're not an old C hand. We've gotten a
couple of complaints recently from people trying to do this the low-tech
way, so add an explicit #error directing the user to use --with-uuid.
(In principle we might want to do something similar in the other
optionally-built contrib modules; but I don't think any of the others have
ever worked without preconfiguration, so there are no bad habits to break
people of.)
Back-patch to 9.4 where the previous commit came in.
Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com
Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com
Michael Meskes [Thu, 22 Dec 2016 07:28:13 +0000 (08:28 +0100)]
Fix buffer overflow on particularly named files and clarify documentation about output file naming.
Patch by Tsunakawa, Takayuki
Joe Conway [Wed, 21 Dec 2016 23:48:28 +0000 (15:48 -0800)]
Improve dblink error message when remote does not provide it
When dblink or postgres_fdw detects an error on the remote side of the
connection, it will try to construct a local error message as best it
can using libpq's PQresultErrorField(). When no primary message is
available, it was bailing out with an unhelpful "unknown error". Make
that message better and more style guide compliant. Per discussion
on hackers.
Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.
Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
Tom Lane [Wed, 21 Dec 2016 22:39:32 +0000 (17:39 -0500)]
Fix detection of unfinished Unicode surrogate pair at end of string.
The U&'...' and U&"..." syntaxes silently discarded a surrogate pair
start (that is, a code between U+D800 and U+DBFF) if it occurred at
the very end of the string. This seems like an obvious oversight,
since we throw an error for every other invalid combination of surrogate
characters, including the very same situation in E'...' syntax.
This has been wrong since the pair processing was added (in 9.0),
so back-patch to all supported branches.
Discussion: https://postgr.es/m/19113.1482337898@sss.pgh.pa.us
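A hedged reproduction sketch, assuming a UTF8 database (the literals are illustrative):
    -- A lone surrogate-pair start at the very end of the string was silently
    -- dropped instead of raising an error.
    SELECT U&'\D83D';     -- should now raise an invalid-surrogate error
    SELECT E'\uD83D';     -- same situation in E'...' syntax, already rejected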
Stephen Frost [Wed, 21 Dec 2016 20:05:14 +0000 (15:05 -0500)]
Improve ALTER TABLE documentation
The ALTER TABLE documentation wasn't terribly clear when it came to
which commands could be combined together and what it meant when they
were.
In particular, SET TABLESPACE *can* be combined with other commands,
when it's operating against a single table, but not when multiple tables
are being moved with ALL IN TABLESPACE. Further, the actions are
applied together but not really in 'parallel', at least today.
Pointed out by: Amit Langote
Improved wording from Tom.
Back-patch to 9.4, where the ALL IN TABLESPACE option was added.
Discussion: https://www.postgresql.org/message-id/14c535b4-13ef-0590-1b98-76af355a0763%40lab.ntt.co.jp
Stephen Frost [Wed, 21 Dec 2016 18:47:23 +0000 (13:47 -0500)]
Fix dumping of casts and transforms using built-in functions
In pg_dump.c dumpCast() and dumpTransform(), we would happily ignore the
cast or transform if it happened to use a built-in function because we
weren't including the information about built-in functions when querying
pg_proc from getFuncs().
Modify the query in getFuncs() to also gather information about
functions which are used by user-defined casts and transforms (where
"user-defined" means "has an OID >= FirstNormalObjectId"). This also
adds to the TAP regression tests for 9.6 and master to cover these
types of objects.
Back-patch all the way for casts, back to 9.5 for transforms.
Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
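For illustration, a user-defined cast of the affected kind might look like this (the particular cast is hypothetical; age(timestamptz) is simply a convenient built-in function):
    CREATE CAST (timestamptz AS interval)
        WITH FUNCTION age(timestamptz) AS ASSIGNMENT;
    -- Before the fix, pg_dump silently omitted such a cast because the
    -- implementation function is built in and was not fetched by getFuncs().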
Stephen Frost [Wed, 21 Dec 2016 18:47:23 +0000 (13:47 -0500)]
For 8.0 servers, get last built-in oid from pg_database
We didn't start ensuring that all built-in objects had OIDs less than
16384 until 8.1, so for 8.0 servers we still need to query the value out
of pg_database. We need this, in particular, to distinguish which casts
were built-in and which were user-defined.
For HEAD, we only worry about going back to 8.0, for the back-branches,
we also ensure that 7.0-7.4 work.
Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
Dean Rasheed [Wed, 21 Dec 2016 17:03:54 +0000 (17:03 +0000)]
Fix order of operations in CREATE OR REPLACE VIEW.
When CREATE OR REPLACE VIEW acts on an existing view, don't update the
view options until after the view query has been updated.
This is necessary in the case where CREATE OR REPLACE VIEW is used on
an existing view that is not updatable, and the new view is updatable
and specifies the WITH CHECK OPTION. In this case, attempting to apply
the new options to the view before updating its query fails, because
the options are applied using the ALTER TABLE infrastructure which
checks that WITH CHECK OPTION is only applied to an updatable view.
If new columns are being added to the view, that is also done using
the ALTER TABLE infrastructure, but it is important that that still be
done before updating the view query, because the rules system checks
that the query columns match those on the view relation. Added a
comment to explain that, in case someone is tempted to move that to
where the view options are now being set.
Back-patch to 9.4 where WITH CHECK OPTION was added.
Report: https://postgr.es/m/CAEZATCUp%3Dz%3Ds4SzZjr14bfct_bdJNwMPi-gFi3Xc5k1ntbsAgQ%40mail.gmail.com
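A hedged sketch of the failing sequence (table and view names are placeholders):
    CREATE TABLE t (a int);
    CREATE VIEW v AS SELECT a FROM t GROUP BY a;   -- not auto-updatable
    -- Replacing it with an updatable definition plus WITH CHECK OPTION used to
    -- fail, because the option was applied before the new query was installed.
    CREATE OR REPLACE VIEW v AS
        SELECT a FROM t WHERE a > 0
        WITH LOCAL CHECK OPTION;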
Magnus Hagander [Mon, 19 Dec 2016 09:11:04 +0000 (10:11 +0100)]
Fix base backup rate limiting in presence of slow i/o
When source i/o on disk was too slow compared to the rate limiting
specified, the system could end up with a negative value for sleep that
it never got out of, which caused rate limiting to effectively be
turned off.
Discussion: https://postgr.es/m/CABUevEy_-e0YvL4ayoX8bH_Ja9w%2BBHoP6jUgdxZuG2nEj3uAfQ%40mail.gmail.com
Analysis by me, patch by Antonin Houska
Tom Lane [Sun, 18 Dec 2016 03:24:13 +0000 (22:24 -0500)]
In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs().
Oversight in commit b8cc8f947. I just noticed this causes compiler
warnings on FreeBSD, and it really ought to cause warnings elsewhere too:
all references I can find say that <arpa/inet.h> is required for these.
We have a lot of code elsewhere that thinks that both <netinet/in.h>
and <arpa/inet.h> should be included for these functions, so do it that
way here too, even though <arpa/inet.h> ought to be sufficient according
to the references I consulted.
Back-patch to 9.4 where the previous commit landed.
Heikki Linnakangas [Fri, 16 Dec 2016 10:50:20 +0000 (12:50 +0200)]
Fix off-by-one in memory allocation for quote_literal_cstr().
The calculation didn't take into account the NULL terminator. That led
to overwriting the palloc'd buffer by one byte, if the input consists
entirely of backslashes. For example "format('%L', E'\\')".
Fixes bug #14468. Backpatch to all supported versions.
Report: https://www.postgresql.org/message-id/20161216105001.13334.42819%40wrigleys.postgresql.org
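The example from the report can be run directly:
    -- Input consisting entirely of backslashes; before the fix this wrote one
    -- byte past the palloc'd buffer.
    SELECT format('%L', E'\\');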
Tom Lane [Thu, 15 Dec 2016 19:32:42 +0000 (14:32 -0500)]
Sync our copy of the timezone library with IANA release tzcode2016j.
This is a trivial update (consisting in fact only in the addition of
a comment). The point is just to get back to being synced with an
official release of tzcode, rather than some ad-hoc point in their
commit history, which is where commit 1f87181e1 left it.
Kevin Grittner [Wed, 14 Dec 2016 01:05:12 +0000 (19:05 -0600)]
Back-patch fcff8a575198478023ada8a48e13b50f70054766 as a bug fix.
When there is both a serialization failure and a unique violation,
throw the former rather than the latter. When initially pushed,
this was viewed as a feature to assist application framework
developers, so that they could more accurately determine when to
retry a failed transaction, but a test case presented by Ian
Jackson has shown that this patch can prevent serialization
anomalies in some cases where a unique violation is caught within a
subtransaction, the work of that subtransaction is discarded, and
no error is thrown. That makes this a bug fix, so it is being
back-patched to all supported branches where it is not already
present (i.e., 9.2 to 9.5).
Discussion: https://postgr.es/m/1481307991[email protected]
Discussion: https://postgr.es/m/[email protected]
Tom Lane [Sun, 11 Dec 2016 23:04:28 +0000 (18:04 -0500)]
Use "%option prefix" to set API names in ecpg's lexer.
Back-patch commit 92fb64983 into the pre-9.6 branches.
Without this, ecpg fails to build with the latest version of flex.
It's not unreasonable that people would want to compile our old branches
with recent tools. Per report from Дилян Палаузов.
Discussion: https://postgr.es/m/d845c1af-e18d-6651-178f-9f08cdf37e10@aegee.org
Tom Lane [Sun, 11 Dec 2016 22:44:16 +0000 (17:44 -0500)]
Build backend/parser/scan.l and interfaces/ecpg/preproc/pgc.l standalone.
Back-patch commit 72b1e3a21 into the pre-9.6 branches.
As noted in the original commit, this has some extra benefits: we can
narrow the scope of the -Wno-error flag that's forced on scan.c. Also,
since these grammar and lexer files are so large, splitting them into
separate build targets should have some advantages in build speed,
particularly in parallel or ccache'd builds.
However, the real reason for doing this now is that it avoids symbol-
redefinition warnings (or worse) with the latest version of flex.
It's not unreasonable that people would want to compile our old branches
with recent tools. Per report from Дилян Палаузов.
Discussion: https://postgr.es/m/d845c1af-e18d-6651-178f-9f08cdf37e10@aegee.org
Tom Lane [Sun, 11 Dec 2016 18:09:57 +0000 (13:09 -0500)]
Prevent crash when ts_rewrite() replaces a non-top-level subtree with null.
When ts_rewrite()'s replacement argument is an empty tsquery, it's supposed
to simplify any operator nodes whose operand(s) become NULL; but it failed
to do that reliably, because dropvoidsubtree() only examined the top level
of the result tree. Rather than make a second recursive pass, let's just
give the responsibility to dofindsubquery() to simplify while it's doing
the main replacement pass. Per report from Andreas Seltenreich.
Artur Zakirov, with some cosmetic changes by me. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/[email protected]
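A hedged illustration (this particular query is hypothetical, not the reported one):
    -- Replacing a non-top-level subtree with an empty tsquery should simplify
    -- the parent operator node rather than leave a dangling operand behind.
    SELECT ts_rewrite('a & (b | c)'::tsquery, 'b | c'::tsquery, ''::tsquery);
    -- expected to simplify to just 'a'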
Tom Lane [Fri, 9 Dec 2016 20:27:23 +0000 (15:27 -0500)]
Be more careful about Python refcounts while creating exception objects.
PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception
objects, evidently supposing that PyModule_AddObject would do that --- but
it doesn't. This left us in a situation where a Python garbage collection
cycle could result in deletion of exception object(s), causing server
crashes or wrong answers if the exception objects are used later in the
session.
In addition, PLy_generate_spi_exceptions didn't bother to test for
a null result from PyErr_NewException, which at best is inconsistent
with the code in PLy_add_exceptions. And PLy_add_exceptions, while it
did do Py_INCREF on the exceptions it makes, waited to do that till
after some PyModule_AddObject calls, creating a similar risk for
failure if garbage collection happened within those calls.
To fix, refactor to have just one piece of code that creates an
exception object and adds it to the spiexceptions module, bumping the
refcount first.
Also, let's add an additional refcount to represent the pointer we're
going to store in a C global variable or hash table. This should only
matter if the user does something weird like delete the spiexceptions
Python module, but lack of paranoia has caused us enough problems in
PL/Python already.
The fact that PyModule_AddObject doesn't do a Py_INCREF of its own
explains the need for the Py_INCREF added in commit 4c966d920, so we
can improve the comment about that; also, this means we really want
to do that before, not after, the PyModule_AddObject call.
The missing Py_INCREF in PLy_generate_spi_exceptions was reported and
diagnosed by Rafa de la Torre; the other fixes by me. Back-patch
to all supported branches.
Discussion: https://postgr.es/m/CA+Fz15kR1OXZv43mDrJb3XY+1MuQYWhx5kx3ea6BRKQp6ezGkg@mail.gmail.com
Tom Lane [Fri, 9 Dec 2016 17:01:14 +0000 (12:01 -0500)]
Fix reporting of column typmods for multi-row VALUES constructs.
expandRTE() and get_rte_attribute_type() reported the exprType() and
exprTypmod() values of the expressions in the first row of the VALUES as
being the column type/typmod returned by the VALUES RTE. That's fine for
the data type, since we coerce all expressions in a column to have the same
common type. But we don't coerce them to have a common typmod, so it was
possible for rows after the first one to return values that violate the
claimed column typmod. This leads to the incorrect result seen in bug
#14448 from Hassan Mahmood, as well as some other corner-case misbehaviors.
The desired behavior is the same as we use in other type-unification
cases: report the common typmod if there is one, but otherwise return -1
indicating no particular constraint.
We fixed this in HEAD by deriving the typmods during transformValuesClause
and storing them in the RTE, but that's not a feasible solution in the back
branches. Instead, just use a brute-force approach of determining the
correct common typmod during expandRTE() and get_rte_attribute_type().
Simple testing says that that doesn't really cost much, at least not in
common cases where expandRTE() is only used once per query. It turns out
that get_rte_attribute_type() is typically never used at all on VALUES
RTEs, so the inefficiency there is of no great concern.
Report: https://postgr.es/m/20161205143037[email protected]
Discussion: https://postgr.es/m/27429.1480968538@sss.pgh.pa.us
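A hedged illustration of the problem (types chosen only for the example):
    -- The VALUES column used to claim the first row's typmod, varchar(2), even
    -- though the second row yields a longer value; the fix reports -1 (no
    -- constraint) when the rows' typmods differ.
    SELECT x FROM (VALUES ('ab'::varchar(2)), ('wxyz'::varchar(4))) AS v(x);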
Robert Haas [Thu, 8 Dec 2016 19:09:09 +0000 (14:09 -0500)]
Log the creation of an init fork unconditionally.
Previously, it was thought that this only needed to be done for the
benefit of possible standbys, so wal_level = minimal skipped it.
But that's not safe, because during crash recovery we might replay
XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record which recursively
removes the directory that contains the new init fork. So log it
always.
The user-visible effect of this bug is that if you create a database
or tablespace, then create an unlogged table, then crash without
checkpointing, then restart, accessing the table will fail, because
it won't have been properly reset. This commit fixes that.
Michael Paquier, per a report from Konstantin Knizhnik. Wording of
the comments per a suggestion from me.
Tom Lane [Wed, 7 Dec 2016 17:39:24 +0000 (12:39 -0500)]
Restore psql's SIGPIPE setting if popen() fails.
Ancient oversight in PageOutput(): if popen() fails, we'd better reset
the SIGPIPE handler before returning stdout, because ClosePager() won't.
Noticed while fixing the empty-PAGER issue.
Tom Lane [Wed, 7 Dec 2016 17:19:56 +0000 (12:19 -0500)]
Handle empty or all-blank PAGER setting more sanely in psql.
If the PAGER environment variable is set but contains an empty string,
psql would pass it to "sh" which would silently exit, causing whatever
query output we were printing to vanish entirely. This is quite
mystifying; it took a long time for us to figure out that this was the
cause of Joseph Brenner's trouble report. Rather than allowing that
to happen, we should treat this as another way to specify "no pager".
(We could alternatively treat it as selecting the default pager, but
it seems more likely that the former is what the user meant to achieve
by setting PAGER this way.)
Nonempty, but all-white-space, PAGER values have the same behavior, and
it's pretty easy to test for that, so let's handle that case the same way.
Most other cases of faulty PAGER values will result in the shell printing
some kind of complaint to stderr, which should be enough to diagnose the
problem, so we don't need to work harder than this. (Note that there's
been an intentional decision not to be very chatty about apparent failure
returns from the pager process, since that may happen if, eg, the user
quits the pager with control-C or some such. I'd just as soon not start
splitting hairs about which exit codes might merit making our own report.)
libpq's old PQprint() function was already on board with ignoring empty
PAGER values, but for consistency, make it ignore all-white-space values
as well.
It's been like this a long time, so back-patch to all supported branches.
Discussion: https://postgr.es/m/CAFfgvXWLOE2novHzYjmQK8-J6TmHz42G8f3X0SORM44+stUGmw@mail.gmail.com
Noah Misch [Sat, 3 Dec 2016 20:46:36 +0000 (15:46 -0500)]
Make pgwin32_putenv() visit debug CRTs.
This has no effect in the most conventional case, where no relevant DLL
uses a debug build. For an example where it does matter, given a debug
build of MIT Kerberos, the krb_server_keyfile parameter usually had no
effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all
supported versions).
Christian Ullrich, reviewed by Michael Paquier.
Noah Misch [Sat, 3 Dec 2016 20:46:35 +0000 (15:46 -0500)]
Remove wrong CloseHandle() call.
In accordance with its own documentation, invoke CloseHandle() only when
directed in the documentation for the function that furnished the
handle. GetModuleHandle() does not so direct. We have been issuing
this call only in the rare event that a CRT DLL contains no "_putenv"
symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all
supported versions).
Christian Ullrich, reviewed by Michael Paquier.
Noah Misch [Sat, 3 Dec 2016 20:46:35 +0000 (15:46 -0500)]
Refine win32env.c cosmetics.
Replace use of plain 0 as a null pointer constant. In comments, update
terminology and lessen redundancy. Back-patch to 9.2 (all supported
versions) for the convenience of back-patching the next two commits.
Christian Ullrich and Noah Misch, reviewed (in earlier versions) by
Michael Paquier.
Tom Lane [Wed, 30 Nov 2016 18:34:14 +0000 (13:34 -0500)]
Doc: improve description of trim() and related functions.
Per bug #14441 from Mark Pether, the documentation could be misread,
mainly because some of the examples failed to show what happens with
a multicharacter "characters to trim" string. Also, while the text
description in most of these entries was fairly clear that the
"characters" argument is a set of characters not a substring to match,
some of them used variant wording that was a bit less clear.
trim() itself suffered from both deficiencies and was thus pretty
misinterpretable.
Also fix failure to explain which of LEADING/TRAILING/BOTH is the
default.
Discussion: https://postgr.es/m/20161130011710[email protected]
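An example of the point being clarified, namely that the second argument is a set of characters to strip rather than a substring to match:
    SELECT trim(both 'xy' from 'yyxHelloxyx');   -- returns 'Hello'
    SELECT trim('  Hello  ');                    -- BOTH is the default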
Stephen Frost [Tue, 29 Nov 2016 15:35:12 +0000 (10:35 -0500)]
Clarify pg_dump -b documentation
The documentation around the -b/--blobs option to pg_dump seemed to
imply that it might be possible to add blobs to a "schema-only" dump or
similar. Clarify that blobs are data and therefore will only be
included in dumps where data is being included, even when -b is used to
request blobs be included.
The -b option has been around since before 9.2, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/20161119173316[email protected]
Magnus Hagander [Sun, 27 Nov 2016 16:10:02 +0000 (17:10 +0100)]
Mention server start requirement for ssl parameters
Fix that the documentation for three ssl related parameters did not
specify that they can only be changed at server start.
Michael Paquier
Tom Lane [Sat, 26 Nov 2016 18:31:35 +0000 (13:31 -0500)]
Fix test about ignoring extension dependencies during extension scripts.
Commit 08dd23cec introduced an exception to the rule that extension member
objects can only be dropped as part of dropping the whole extension,
intending to allow such drops while running the extension's own creation or
update scripts. However, the exception was only applied at the outermost
recursion level, because it was modeled on a pre-existing check to ignore
dependencies on objects listed in pendingObjects. Bug #14434 from Philippe
Beaudoin shows that this is inadequate: in some cases we can reach an
extension member object by recursion from another one. (The bug concerns
the serial-sequence case; I'm not sure if there are other cases, but there
might well be.)
To fix, revert 08dd23cec's changes to findDependentObjects() and instead
apply the creating_extension exception regardless of stack level.
Having seen this example, I'm a bit suspicious that the pendingObjects
logic is also wrong and such cases should likewise be allowed at any
recursion level. However, changing that would interact in subtle ways
with the recursion logic (at least it would need to be moved to after the
recursing-from check). Given that the code's been like that a long time,
I'll refrain from touching it without a clear example showing it's wrong.
Back-patch to all active branches. In HEAD and 9.6, where suitable
test infrastructure exists, add a regression test case based on the
bug report.
Report: <20161125151448[email protected]>
Discussion: <13224.1480177514@sss.pgh.pa.us>
Tom Lane [Fri, 25 Nov 2016 18:44:48 +0000 (13:44 -0500)]
Check for pending trigger events on far end when dropping an FK constraint.
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT,
we refuse the drop if there are any pending trigger events on the named
table; this ensures that we won't remove the pg_trigger row that will be
consulted by those events. But we should make the same check for the
referenced relation, else we might remove a due-to-be-referenced pg_trigger
row for that relation too, resulting in "could not find trigger NNN" or
"relation NNN has no triggers" errors at commit. Per bug #14431 from
Benjie Gillam. Back-patch to all supported branches.
Report: <20161124114911[email protected]>
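A hedged reproduction sketch (the constraint name assumes the default naming convention):
    CREATE TABLE pk (a int PRIMARY KEY);
    CREATE TABLE fk (a int REFERENCES pk DEFERRABLE INITIALLY DEFERRED);
    INSERT INTO pk VALUES (1);
    INSERT INTO fk VALUES (1);
    BEGIN;
    DELETE FROM pk;                            -- queues a deferred event on pk
    ALTER TABLE fk DROP CONSTRAINT fk_a_fkey;  -- now refused while that event is
                                               -- pending; used to succeed and then
                                               -- fail at COMMIT
    COMMIT;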
Tom Lane [Wed, 23 Nov 2016 18:45:56 +0000 (13:45 -0500)]
Make sure ALTER TABLE preserves index tablespaces.
When rebuilding an existing index, ALTER TABLE correctly kept the
physical file in the same tablespace, but it messed up the pg_class
entry if the index had been in the database's default tablespace
and "default_tablespace" was set to some non-default tablespace.
This led to an inaccessible index.
Fix by fixing pg_get_indexdef_string() to always include a tablespace
clause, whether or not the index is in the default tablespace. The
previous behavior was installed in commit 537e92e41, and I think it just
wasn't thought through very clearly; certainly the possible effect of
default_tablespace wasn't considered. There's some risk in changing the
behavior of this function, but there are no other call sites in the core
code. Even if it's being used by some third party extension, it's fairly
hard to envision a usage that is okay with a tablespace clause being
appended some of the time but can't handle it being appended all the time.
Back-patch to all supported versions.
Code fix by me, investigation and test cases by Michael Paquier.
Discussion: <1479294998857-5930602[email protected]>
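A hedged sketch of the failing sequence ('myspace' stands in for an existing non-default tablespace):
    CREATE TABLE t (a int PRIMARY KEY);        -- index in the database default tablespace
    SET default_tablespace = 'myspace';
    ALTER TABLE t ALTER COLUMN a TYPE bigint;  -- rebuilds the primary key index
    -- Before the fix, the rebuilt index's pg_class entry claimed 'myspace'
    -- while its file stayed in the old tablespace, leaving it inaccessible.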
Tom Lane [Tue, 22 Nov 2016 22:56:16 +0000 (17:56 -0500)]
Doc: improve documentation about composite-value usage.
Create a section specifically for the syntactic rules around whole-row
variable usage, such as expansion of "foo.*". This was previously
documented only haphazardly, with some critical info buried in
unexpected places like xfunc-sql-composite-functions. Per repeated
questions in different mailing lists.
Discussion: <16288.1479610770@sss.pgh.pa.us>
Tom Lane [Tue, 22 Nov 2016 19:02:52 +0000 (14:02 -0500)]
Doc: add a section in Part II concerning RETURNING.
There are assorted references to RETURNING in Part II, but nothing
that would qualify as an explanation of the feature, which seems
like an oversight considering how useful it is. Add something.
Noted while looking for a place to point a cross-reference to ...
Tom Lane [Tue, 22 Nov 2016 01:39:28 +0000 (20:39 -0500)]
Make contrib/test_decoding regression tests safe for CZ locale.
A little COLLATE "C" goes a long way.
Pavel Stehule, per suggestion from Craig Ringer
Discussion:
Tom Lane [Mon, 21 Nov 2016 23:21:56 +0000 (18:21 -0500)]
Fix PGLC_localeconv() to handle errors better.
The code was intentionally not very careful about leaking strdup'd
strings in case of an error. That was forgivable probably, but it
also failed to notice strdup() failures, which could lead to subsequent
null-pointer-dereference crashes, since many callers unsurprisingly
didn't check for null pointers in the struct lconv fields. An even
worse problem is that it could throw error while we were setlocale'd
to a non-C locale, causing unwanted behavior in subsequent libc calls.
Rewrite to ensure that we cannot throw elog(ERROR) until after we've
restored the previous locale settings, or at least attempted to.
(I'm sorely tempted to make restore failure be a FATAL error, but
will refrain for the moment.) Having done that, it's not much more
work to ensure that we clean up strdup'd storage on the way out, too.
This code is substantially the same in all supported branches, so
back-patch all the way.
Michael Paquier and Tom Lane
Discussion:
Tom Lane [Sun, 20 Nov 2016 19:26:19 +0000 (14:26 -0500)]
Prevent multicolumn expansion of "foo.*" in an UPDATE source expression.
Because we use transformTargetList() for UPDATE as well as SELECT
tlists, the code accidentally tried to expand a "*" reference into
several columns. This is nonsensical, because the UPDATE syntax
provides exactly one target column to put the value into. The
immediate result was that transformUpdateTargetList() got confused
and reported "UPDATE target count mismatch --- internal error".
It seems better to treat such a reference as a plain whole-row
variable, as it would be in other contexts. (This could produce
useful results when the target column is of composite type.)
Fix by tweaking transformTargetList() to perform *-expansion only
conditionally, depending on its exprKind parameter.
Back-patch to 9.3. The problem exists further back, but a fix would be
much more invasive before that, because transformTargetList() wasn't
told what kind of list it was working on. Doesn't seem worth the
trouble given the lack of field reports. (I only noticed it because
I was checking the code while trying to improve the documentation about
how we handle "foo.*".)
Discussion: <4308.1479595330@sss.pgh.pa.us>
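A hedged sketch of the now-accepted usage (table names are illustrative):
    CREATE TABLE src (a int, b text);
    CREATE TABLE dst (id int, c src);
    INSERT INTO src VALUES (1, 'x');
    INSERT INTO dst VALUES (1, NULL);
    -- "src.*" in the source expression is now treated as a whole-row variable
    -- instead of being expanded into columns, which used to raise
    -- "UPDATE target count mismatch --- internal error".
    UPDATE dst SET c = src.* FROM src WHERE dst.id = src.a;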
Tom Lane [Thu, 17 Nov 2016 19:59:26 +0000 (14:59 -0500)]
Improve pg_dump/pg_restore --create --if-exists logic.
Teach it not to complain if the dropStmt attached to an archive entry
is actually spelled CREATE OR REPLACE VIEW, since that will happen due to
an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it
print a WARNING and then emit the dropStmt unmodified. That seems like a
much saner behavior than Assert'ing or dumping core due to a null-pointer
dereference, which is what would happen before :-(.
Back-patch to 9.4 where this option was introduced.
Discussion: <19092.1479325184@sss.pgh.pa.us>
Alvaro Herrera [Thu, 17 Nov 2016 16:31:30 +0000 (13:31 -0300)]
Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
require complex interlocking that matched the requirements on the
master. This required an O(N) operation that became a significant
problem with large indexes, causing replication delays of seconds or in
some cases minutes while the XLOG_BTREE_VACUUM was replayed.
This commit skips the “pin scan” that was previously required, by
observing in detail when and how it is safe to do so, with full
documentation. The pin scan is skipped only in replay; the VACUUM code
path on master is not touched here.
No tests included. Manual tests using an additional patch to view WAL records
and their timing have shown the change in WAL records and their handling has
successfully reduced replication delay.
This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375
by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is
possible because this depends on catalog scans being MVCC. I (Álvaro)
additionally updated a slight problem in the README, which explains why
this touches the 9.6 and master branches.
Tom Lane [Tue, 15 Nov 2016 21:17:19 +0000 (16:17 -0500)]
Allow DOS-style line endings in ~/.pgpass files.
On Windows, libc will mask \r\n line endings for us, since we read the
password file in text mode. But that doesn't happen on Unix. People
who share password files across both systems might have \r\n line endings
in a file they use on Unix, so as a convenience, ignore trailing \r.
Per gripe from Josh Berkus.
In passing, put the existing check for empty line somewhere where it's
actually useful, ie after stripping the newline not before.
Vik Fearing, adjusted a bit by me
Discussion: <0de37763-5843-b2cc-855e-5d0e5df25807@agliodbs.com>
Tom Lane [Tue, 15 Nov 2016 20:55:36 +0000 (15:55 -0500)]
Account for catalog snapshot in PGXACT->xmin updates.
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting
for whether MyPgXact->xmin could be cleared or advanced. In normal
transactions this was masked by the fact that the transaction snapshot
would be older, but during backend startup and certain utility commands
it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had
been cleared, meaning that recently-deleted rows could be pruned even
though this snapshot could still see them, causing unexpected catalog
lookup failures. This effect appears to be the explanation for a recent
failure on buildfarm member piculet.
To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever
it is valid.
In the previous logic, it was possible for the CatalogSnapshot to remain
valid across waits for client input, but with this change that would mean
it delays advance of global xmin in cases where it did not before. To
avoid possibly causing new table-bloat problems with clients that sit idle
for long intervals, add code to invalidate the CatalogSnapshot before
waiting for client input. (When the backend is busy, it's unlikely that
the CatalogSnapshot would be the oldest snap for very long, so we don't
worry about forcing early invalidation of it otherwise.)
In passing, remove the CatalogSnapshotStale flag in favor of using
"CatalogSnapshot != NULL" to represent validity, as we do for the other
special snapshots in snapmgr.c. And improve some obsolete comments.
No regression test because I don't know a deterministic way to cause this
failure. But the stress test shown in the original discussion provokes
"cache lookup failed for relation 1255" within a few dozen seconds for me.
Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's
quite easy to produce similar failures with the same test case in branches
before 9.4. But MVCC catalog scans were supposed to fix that.)
Discussion: <16447.1478818294@sss.pgh.pa.us>
Alvaro Herrera [Mon, 14 Nov 2016 14:14:34 +0000 (11:14 -0300)]
Fix duplication in ALTER MATERIALIZE VIEW synopsis
Commit 3c4cf080879b should have removed SET TABLESPACE from the synopsis
of ALTER MATERIALIZE VIEW as a possible "action" when it added a
separate line for it in the main command listing, but failed to.
Repair.
Backpatch to 9.4, like the aforementioned commit.
Magnus Hagander [Tue, 8 Nov 2016 17:34:59 +0000 (18:34 +0100)]
Fix typo
Magnus Hagander [Mon, 7 Nov 2016 13:47:30 +0000 (14:47 +0100)]
Fix handling of symlinked pg_stat_tmp and pg_replslot
This was already fixed in HEAD as part of 6ad8ac60 but was not
backpatched.
Also change the way pg_xlog is handled to be the same as the other
directories.
Patch from me with pg_xlog addition from Michael Paquier, test updates
from David Steele.
Tom Lane [Sun, 6 Nov 2016 19:43:13 +0000 (14:43 -0500)]
Rationalize and document pltcl's handling of magic ".tupno" array element.
For a very long time, pltcl's spi_exec and spi_execp commands have had
a behavior of storing the current row number as an element of output
arrays, but this was never documented. Fix that.
For an equally long time, pltcl_trigger_handler had a behavior of silently
ignoring ".tupno" as an output column name, evidently so that the result
of spi_exec could be used directly as a trigger result tuple. Not sure
how useful that really is, but in any case it's bad that it would break
attempts to use ".tupno" as an actual column name. We can fix it by not
checking for ".tupno" until after we check for a column name match. This
comports with the effective behavior of spi_exec[p] that ".tupno" is only
magic when you don't have an actual column named that.
In passing, wordsmith the description of returning modified tuples from
a pltcl trigger.
Noted while working on Jim Nasby's patch to support composite results
from pltcl. The inability to return trigger tuples using ".tupno" as
a column name is a bug, so back-patch to all supported branches.
Tom Lane [Sun, 6 Nov 2016 15:45:58 +0000 (10:45 -0500)]
More zic cleanup.
The workaround the IANA guys chose to get rid of the clang warning
we'd silenced in commit 23ed2ba81 turns out not to satisfy Coverity.
Go back to the previous solution, ie, remove the useless comparison
to SIZE_MAX. (In principle, there could be machines out there where
it's not useless because ptrdiff_t is wider than size_t. But the whole
thing is pretty academic anyway, as we could never approach this limit
for any sane estimate of the amount of data that zic will ever be asked
to work with.)
Also, s/lineno/lineno_t/g, because if we accept their decision to start
using "lineno" as a typedef, it is going to have very unpleasant
consequences in our next pgindent run. Noted that while fooling with
pltcl yesterday.
Tom Lane [Fri, 4 Nov 2016 14:44:16 +0000 (10:44 -0400)]
Sync our copy of the timezone library with IANA tzcode
This patch absorbs some unreleased fixes for symlink manipulation bugs
introduced in tzcode 2016g. Ordinarily I'd wait around for a released
version, but in this case it seems like we could do with extra testing,
in particular checking whether it works in EDB's VMware build environment.
This corresponds to commit aec59156abbf8472ba201b6c7ca2592f9c10e077 in
https://github.com/eggert/tz.
Per a report from Sandeep Thakkar, building in an environment where hard
links are not supported in the timezone data installation directory failed,
because upstream code refactoring had broken the case of symlinking from an
existing symlink. Further experimentation also showed that the symlinks
were sometimes made incorrectly, with too many or too few "../"'s in the
symlink contents.
Back-patch of commit 1f87181e12beb067d21b79493393edcff14c190b.
Report:
Discussion: http://mm.icann.org/pipermail/tz/2016-November/024431.html
Tom Lane [Sun, 30 Oct 2016 21:35:43 +0000 (17:35 -0400)]
Fix nasty performance problem in tsquery_rewrite().
tsquery_rewrite() tries to find matches to subsets of AND/OR conditions;
for example, in the query 'a | b | c' the substitution subquery 'a | c'
should match and lead to replacement of the first and third items.
That's fine, but the matching algorithm apparently takes about O(2^N)
for an N-clause query (I say "apparently" because the code is also both
unintelligible and uncommented). We could probably do better than that
even without any extra assumptions --- but actually, we know that the
subclauses are sorted, indeed are depending on that elsewhere in this very
same function. So we can just scan the two lists a single time to detect
matches, as though we were doing a merge join.
Also do a re-flattening call (QTNTernary()) in tsquery_rewrite_query, just
to make sure that the tree fits the expectations of the next search cycle.
I didn't try to devise a test case for this, but I'm pretty sure that the
oversight could have led to failure to match in some cases where a match
would be expected.
Improve comments, and also stick a CHECK_FOR_INTERRUPTS into
dofindsubquery, just in case it's still too slow for somebody.
Per report from Andreas Seltenreich. Back-patch to all supported branches.
Discussion: <[email protected]>
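The example from the description, runnable directly:
    -- The substitution query 'a | c' matches a subset of 'a | b | c', so both
    -- matched items are replaced as a group.
    SELECT ts_rewrite('a | b | c'::tsquery, 'a | c'::tsquery, 'x'::tsquery);
    -- expected to be equivalent to 'b | x'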
Tom Lane [Sun, 30 Oct 2016 19:24:40 +0000 (15:24 -0400)]
Fix bogus tree-flattening logic in QTNTernary().
QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c',
which is all well and good, but it tries to do that to NOT nodes as well,
so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to
be done only on AND and OR nodes, and add a test case illustrating the bug.
In passing, provide some comments for the sadly naked functions in
tsquery_util.c, and simplify some baroque logic in QTNFree(), which
I think may have been leaking some items it intended to free.
Noted while investigating a complaint from Andreas Seltenreich.
Back-patch to all supported versions.
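A hedged illustration of the NOT-node point (the exact call is hypothetical):
    -- '!!a' and '!a' are not equivalent, so flattening must leave NOT nodes alone.
    SELECT ts_rewrite('!!a'::tsquery, 'a'::tsquery, 'b'::tsquery);
    -- expected '!!b'; before the fix the tree could collapse to a single NOT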
Robert Haas [Thu, 27 Oct 2016 18:27:40 +0000 (14:27 -0400)]
If the stats collector dies during Hot Standby, restart it.
This bug exists as far back as 9.0, when Hot Standby was introduced,
so back-patch to all supported branches.
Report and patch by Takayuki Tsunakawa, reviewed by Michael Paquier
and Kuntal Ghosh.
Robert Haas [Thu, 27 Oct 2016 15:19:51 +0000 (11:19 -0400)]
Fix possible pg_basebackup failure on standby with "include WAL".
If a restartpoint flushed no dirty buffers, it could fail to update
the minimum recovery point, leading to a minimum recovery point prior
to the starting REDO location. perform_base_backup() would interpret
that as meaning that no WAL files at all needed to be included in the
backup, failing an internal sanity check. To fix, have restartpoints
always update the minimum recovery point to just after the checkpoint
record itself, so that the file (or files) containing the checkpoint
record will always be included in the backup.
Code by Amit Kapila, per a design suggestion by me, with some
additional work on the code comment by me. Test case by Michael
Paquier. Report by Kyotaro Horiguchi.
Tom Lane [Wed, 26 Oct 2016 21:05:06 +0000 (17:05 -0400)]
Fix incorrect trigger-property updating in ALTER CONSTRAINT.
The code to change the deferrability properties of a foreign-key constraint
updated all the associated triggers to match; but a moment's examination of
the code that creates those triggers in the first place shows that only
some of them should track the constraint's deferrability properties. This
leads to odd failures in subsequent exercise of the foreign key, as the
triggers are fired at the wrong times. Fix that, and add a regression test
comparing the trigger properties produced by ALTER CONSTRAINT with those
you get by creating the constraint as-intended to begin with.
Per report from James Parks. Back-patch to 9.4 where this ALTER
functionality was introduced.
Report:
Tom Lane [Wed, 26 Oct 2016 17:40:41 +0000 (13:40 -0400)]
Fix not-HAVE_SYMLINK code in zic.c.
I broke this in commit f3094920a. Apparently it's dead code anyway,
at least as far as our buildfarm is concerned (and the upstream IANA
code doesn't worry at all about symlink() not being present).
But as long as the rest of our code is willing to guard against not
having symlink(), this should too. Noted while investigating a
tangentially-related complaint from Sandeep Thakkar.
Back-patch to keep branches in sync.
Tom Lane [Mon, 24 Oct 2016 20:12:53 +0000 (16:12 -0400)]
Stamp 9.4.10.
Peter Eisentraut [Mon, 24 Oct 2016 16:00:00 +0000 (12:00 -0400)]
Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 475f2bcc7c56f293db4e62d31f85b3bfc0f9f279
Tom Lane [Mon, 24 Oct 2016 02:13:28 +0000 (22:13 -0400)]
Release notes for 9.6.1, 9.5.5, 9.4.10, 9.3.15, 9.2.19, 9.1.24.
Tom Lane [Sun, 23 Oct 2016 19:01:24 +0000 (15:01 -0400)]
Avoid testing tuple visibility without buffer lock in RI_FKey_check().
Despite the argumentation I wrote in commit 7a2fe85b0, it's unsafe to do
this, because in corner cases it's possible for HeapTupleSatisfiesSelf
to try to set hint bits on the target tuple; and at least since 8.2 we
have required the buffer content lock to be held while setting hint bits.
The added regression test exercises one such corner case. Unpatched, it
causes an assertion failure in assert-enabled builds, or otherwise would
cause a hint bit change in a buffer we don't hold lock on, which given
the right race condition could result in checksum failures or other data
consistency problems. The odds of a problem in the field are probably
pretty small, but nonetheless back-patch to all supported branches.
Report: <19391.1477244876@sss.pgh.pa.us>
Tom Lane [Sat, 22 Oct 2016 18:04:51 +0000 (14:04 -0400)]
Improve documentation about use of Linux huge pages.
Show how to get the system's huge page size, rather than misleadingly
referring to PAGE_SIZE (which is usually understood to be the regular
page size). Show how to confirm whether huge pages have been allocated.
Minor wordsmithing. Back-patch to 9.4 where this section appeared.
Tom Lane [Fri, 21 Oct 2016 15:01:35 +0000 (11:01 -0400)]
Doc: wording tweak for PERL, PYTHON, TCLSH configuration variables.
Replace "Full path to ..." with "Full path name of ...". At least one
user has misinterpreted the existing wording as meaning "Directory
containing ...".
Tom Lane [Thu, 20 Oct 2016 21:17:50 +0000 (17:17 -0400)]
Fix EXPLAIN so that it doesn't emit invalid XML in corner cases.
With track_io_timing = on, EXPLAIN (ANALYZE, BUFFERS) will emit fields
named like "I/O Read Time". The slash makes that invalid as an XML
element name, so that adding FORMAT XML would produce invalid XML.
We already have code in there to translate spaces to dashes, so let's
generalize that to convert anything that isn't a valid XML name character,
viz letters, digits, hyphens, underscores, and periods. We could just
reject slashes, which would run a bit faster. But the fact that this went
unnoticed for so long doesn't give me a warm feeling that we'd notice the
next creative violation, so let's make it a permanent fix.
Reported by Markus Winand, though this isn't his initial patch proposal.
Back-patch to 9.2 where track_io_timing was added. The problem is only
latent in 9.1, so I don't feel a need to fix it there.
Discussion: <E0BF6A45-68E8-45E6-918F-741FB332C6BB@winand.at>
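A reproduction sketch (any query will do):
    SET track_io_timing = on;
    -- The "I/O Read Time" field name used to produce an invalid XML element name.
    EXPLAIN (ANALYZE, BUFFERS, FORMAT XML) SELECT count(*) FROM pg_class;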
Tom Lane [Thu, 20 Oct 2016 19:40:07 +0000 (15:40 -0400)]
Sync our copy of the timezone library with IANA release tzcode2016h.
This absorbs a fix for a symlink-manipulation bug in zic that was
introduced in 2016g. It probably isn't interesting for our use-case,
but I'm not quite sure, so let's update while we're at it.
Tom Lane [Thu, 20 Oct 2016 19:20:11 +0000 (15:20 -0400)]
Update time zone data files to tzdata release 2016h.
(Didn't I just do this? Oh well.)
DST law changes in Palestine. Historical corrections for Turkey.
Switch to numeric abbreviations for Asia/Colombo.
Tom Lane [Thu, 20 Oct 2016 03:32:08 +0000 (23:32 -0400)]
Another portability fix for tzcode2016g update.
clang points out that SIZE_MAX wouldn't fit into an int, which means
this comparison is pretty useless. Per report from Thomas Munro.
Tom Lane [Wed, 19 Oct 2016 23:28:11 +0000 (19:28 -0400)]
Windows portability fix.
Per buildfarm.
Tom Lane [Wed, 19 Oct 2016 22:55:52 +0000 (18:55 -0400)]
Sync our copy of the timezone library with IANA release tzcode2016g.
This is mostly to absorb some corner-case fixes in zic for year-2037
timestamps. The other changes that have been made are unlikely to affect
our usage, but nonetheless we may as well take 'em.
Tom Lane [Wed, 19 Oct 2016 22:11:49 +0000 (18:11 -0400)]
Suppress "Factory" zone in pg_timezone_names view for tzdata >= 2016g.
IANA got rid of the really silly "abbreviation" and replaced it with one
that's only moderately silly. But it's still pointless, so keep on not
showing it.
Tom Lane [Wed, 19 Oct 2016 21:56:38 +0000 (17:56 -0400)]
Update time zone data files to tzdata release 2016g.
DST law changes in Turkey. Historical corrections for America/Los_Angeles,
Europe/Kirov, Europe/Moscow, Europe/Samara, and Europe/Ulyanovsk.
Rename Asia/Rangoon to Asia/Yangon, with a backward compatibility link.
The IANA crew continue their campaign to replace invented time zone
abbreviations with numeric GMT offsets. This update changes numerous zones
in Antarctica and the former Soviet Union, for instance Antarctica/Casey
now reports "+08" not "AWST" in the pg_timezone_names view. I kept these
abbreviations in the tznames/ data files, however, so that we will still
accept them for input. (We may want to start trimming those files someday,
but today is not that day.)
An exception is that since IANA no longer claims that "AMT" is in use
in Armenia for GMT+4, I replaced it in the Default file with GMT-4,
corresponding to Amazon Time which is in use in South America. It may be
that that meaning is also invented and IANA will drop it in a future
update; but for now, it seems silly to give pride of place to a meaning
not traceable to IANA over one that is.
Heikki Linnakangas [Wed, 19 Oct 2016 11:43:34 +0000 (14:43 +0300)]
Fix WAL-logging of FSM and VM truncation.
When a relation is truncated, it is important that the FSM is truncated as
well. Otherwise, after recovery, the FSM can return a page that has been
truncated away, leading to errors like:
ERROR: could not read block 28991 in file "base/16390/572026": read only 0 of 8192 bytes
We were using MarkBufferDirtyHint() to dirty the buffer holding the last
remaining page of the FSM, but during recovery, that might in fact not
dirty the page, and the FSM update might be lost.
To fix, use the stronger MarkBufferDirty() function. MarkBufferDirty()
requires us to do WAL-logging ourselves, to protect from a torn page, if
checksumming is enabled.
Also fix an oversight in visibilitymap_truncate: it also needs to WAL-log
when checksumming is enabled.
Analysis by Pavan Deolasee.
Discussion:
Backpatch to 9.3, where we got data checksums.
Tom Lane [Tue, 18 Oct 2016 16:24:46 +0000 (12:24 -0400)]
Fix cidin() to handle values above 2^31 platform-independently.
CommandId is declared as uint32, and values up to 4G are indeed legal.
cidout() handles them properly by treating the value as unsigned int.
But cidin() was just using atoi(), which has platform-dependent behavior
for values outside the range of signed int, as reported by Bart Lengkeek
in bug #14379. Use strtoul() instead, as xidin() does.
In passing, make some purely cosmetic changes to make xidin/xidout
look more like cidin/cidout; the former didn't have a monopoly on
best practice IMO.
Neither xidin nor cidin make any attempt to throw error for invalid input.
I didn't change that here, and am not sure it's worth worrying about
since neither is really a user-facing type. The point is just to ensure
that indubitably-valid inputs work as expected.
It's been like this for a long time, so back-patch to all supported
branches.
Report: <20161018152550[email protected]>
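A quick check of the indubitably-valid-input case discussed above:
    -- CommandId is unsigned; values above 2^31 must parse the same on all platforms.
    SELECT '3000000000'::cid;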
Tom Lane [Fri, 14 Oct 2016 20:28:34 +0000 (16:28 -0400)]
Fix assorted integer-overflow hazards in varbit.c.
bitshiftright() and bitshiftleft() would recursively call each other
infinitely if the user passed INT_MIN for the shift amount, due to integer
overflow in negating the shift amount. To fix, clamp to -VARBITMAXLEN.
That doesn't change the results since any shift distance larger than the
input bit string's length produces an all-zeroes result.
Also fix some places that seemed inadequately paranoid about input typmods
exceeding VARBITMAXLEN. While a typmod accepted by anybit_typmodin() will
certainly be much less than that, at least some of these spots are
reachable with user-chosen integer values.
Andreas Seltenreich and Tom Lane
Discussion: <[email protected]>
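A hedged reproduction sketch of the INT_MIN case:
    -- Negating INT_MIN overflowed, so the two shift functions called each other
    -- until the stack ran out; an oversized shift should simply yield all zeroes.
    SELECT B'10101' >> (-2147483648)::int;
    SELECT B'10101' << (-2147483648)::int;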
Tom Lane [Thu, 13 Oct 2016 21:05:15 +0000 (17:05 -0400)]
Fix another bug in merging of inherited CHECK constraints.
It's not good for an inherited child constraint to be marked connoinherit;
that would result in the constraint not propagating to grandchild tables,
if any are created later. The code mostly prevented this from happening
but there was one case that was missed.
This is somewhat related to commit e55a946a8, which also tightened checks
on constraint merging. Hence, back-patch to 9.2 like that one. This isn't
so much because there's a concrete feature-related reason to stop there,
as to avoid having more distinct behaviors than we have to in this area.
Amit Langote
Discussion: <b28ee774-7009-313d-dd55-5bdd81242c41@lab.ntt.co.jp>
Tom Lane [Thu, 13 Oct 2016 19:06:46 +0000 (15:06 -0400)]
Try to find out the actual hugepage size when making a MAP_HUGETLB request.
Even if Linux's mmap() is okay with a partial-hugepage request, munmap()
is not, as reported by Chris Richards. Therefore it behooves us to try
a bit harder to find out the actual hugepage size, instead of assuming
that we can skate by with a guess.
For the moment, just look into /proc/meminfo to find out the default
hugepage size, and use that. Later, on kernels that support requests
for nondefault sizes, we might try to consider other alternatives.
But that smells more like a new feature than a bug fix, especially if
we want to provide any way for the DBA to control it, so leave it for
another day.
I set this up to allow easy addition of platform-specific code for
non-Linux platforms, if needed; but right now there are no reports
suggesting that we need to work harder on other platforms.
Back-patch to 9.4 where hugepage support was introduced.
Discussion: <31056.1476303954@sss.pgh.pa.us>
Tom Lane [Thu, 13 Oct 2016 17:59:56 +0000 (13:59 -0400)]
Clean up handling of anonymous mmap'd shared-memory segment.
Fix detaching of the mmap'd segment to have its own on_shmem_exit callback,
rather than piggybacking on the one for detaching from the SysV segment.
That was confusing, and given the distance between the two attach calls,
it was trouble waiting to happen.
Make the detaching calls idempotent by clearing AnonymousShmem to show
we've already unmapped. I spent quite a bit of time yesterday trying
to find a path that would allow the munmap()'s to be done twice, and
while I did not succeed, it seems silly that there's even a question.
Make the #ifdef logic less confusing by separating "do we want to use
anonymous shmem" from EXEC_BACKEND. Even though there's no current
scenario where those conditions are different, it is not helpful for
different places in the same file to be testing EXEC_BACKEND for what
are fundamentally different reasons.
Don't do on_exit_reset() in StartBackgroundWorker(). At best that's
useless (InitPostmasterChild would have done it already) and at worst
it could zap some callback that's unrelated to shared memory.
Improve comments, and simplify the huge_pages enablement logic slightly.
Back-patch to 9.4 where hugepage support was introduced.
Arguably this should go into 9.3 as well, but the code looks
significantly different there, and I doubt it's worth the
trouble of adapting the patch given I can't show a live bug.
Tom Lane [Wed, 12 Oct 2016 22:01:43 +0000 (18:01 -0400)]
Revert addition of PGDLLEXPORT in PG_FUNCTION_INFO_V1 macro.
This turns out not to be as harmless as I thought: MSVC will complain
if it sees an "extern" declaration without PGDLLEXPORT and then one with.
(Seems fairly silly, given that this can be changed after the fact by the
linker, but there you have it.) Therefore, contrib modules that have
extern's for V1 functions in header files are falling over in the
buildfarm, since none of those externs are marked PGDLLEXPORT.
We might or might not conclude that we're willing to plaster those
declarations with PGDLLEXPORT in HEAD, but in any case there's no way we're
going to ship this change in the back branches. Third-party authors would
not thank us for breaking their code in a minor release. Hence, revert
the addition of PGDLLEXPORT (but let's keep the extra info in the comment).
If we do the other changes we can revert this commit in HEAD.
Per buildfarm.
Tom Lane [Wed, 12 Oct 2016 16:45:50 +0000 (12:45 -0400)]
Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro.
This isn't really necessary for our own code, because we use a .DEF file
in MSVC builds (see gendef.pl), or --export-all-symbols in MinGW and
Cygwin builds, to ensure that all global symbols in loadable modules
will be exported on Windows. However, third-party authors might use
different build processes that need this marker, and it's harmless
enough for our own builds.
To some extent, this is an oversight in commit e7128e8db, so back-patch
to 9.4 where that was added.
Laurenz Albe
Discussion: <A737B7A37273E048B164557ADEF4A58B539300BD@ntex2010a.host.magwien.gv.at>
Heikki Linnakangas [Wed, 12 Oct 2016 09:07:54 +0000 (12:07 +0300)]
Fix copy-pasto in comment.
Amit Langote
Tom Lane [Tue, 11 Oct 2016 14:08:45 +0000 (10:08 -0400)]
Improve documentation for CREATE RECURSIVE VIEW.
It was perhaps not entirely clear that internal self-references shouldn't
be schema-qualified even if the view name is written with a schema.
Spell it out.
Discussion: <[email protected]>
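An illustration of the clarified rule, assuming a schema named myschema exists:
    CREATE RECURSIVE VIEW myschema.nums (n) AS
        VALUES (1)
      UNION ALL
        SELECT n + 1 FROM nums WHERE n < 5;   -- self-reference is "nums", not "myschema.nums"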
Tom Lane [Mon, 10 Oct 2016 14:35:58 +0000 (10:35 -0400)]
In PQsendQueryStart(), avoid leaking any left-over async result.
Ordinarily there would not be an async result sitting around at this
point, but it appears that in corner cases there can be. Considering
all the work we're about to launch, it's hardly going to cost anything
noticeable to check.
It's been like this forever, so back-patch to all supported branches.
Report:
Tom Lane [Sat, 8 Oct 2016 23:29:27 +0000 (19:29 -0400)]
Fix two bugs in merging of inherited CHECK constraints.
Historically, we've allowed users to add a CHECK constraint to a child
table and then add an identical CHECK constraint to the parent. This
results in "merging" the two constraints so that the pre-existing
child constraint ends up with both conislocal = true and coninhcount > 0.
However, if you tried to do it in the other order, you got a duplicate
constraint error. This is problematic for pg_dump, which needs to issue
separated ADD CONSTRAINT commands in some cases, but has no good way to
ensure that the constraints will be added in the required order.
And it's more than a bit arbitrary, too. The goal of complaining about
duplicated ADD CONSTRAINT commands can be served if we reject the case of
adding a constraint when the existing one already has conislocal = true;
but if it has conislocal = false, let's just make the ADD CONSTRAINT set
conislocal = true. In this way, either order of adding the constraints
has the same end result.
Another problem was that the code allowed creation of a parent constraint
marked convalidated that is merged with a child constraint that is
!convalidated. In this case, an inheritance scan of the parent table could
emit some rows violating the constraint condition, which would be an
unexpected result given the marking of the parent constraint as validated.
Hence, forbid merging of constraints in this case. (Note: valid child and
not-valid parent seems fine, so continue to allow that.)
Per report from Benedikt Grundmann. Back-patch to 9.2 where we introduced
possibly-not-valid check constraints. The second bug obviously doesn't
apply before that, and I think the first doesn't either, because pg_dump
only gets into this situation when dealing with not-valid constraints.
Report:
Discussion: <22108.1475874586@sss.pgh.pa.us>
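A standalone sketch of the two rules above (plain C modelling the described behaviour, not the backend's merge code; field names follow pg_constraint):

    #include <stdbool.h>

    typedef struct
    {
        bool    conislocal;     /* locally defined on this table? */
        int     coninhcount;    /* number of parents supplying it */
        bool    convalidated;   /* checked against existing rows? */
    } ConstraintState;

    /*
     * is_local: true for a plain ADD CONSTRAINT on the table itself,
     * false when the constraint arrives by inheritance from a parent.
     * new_validated: whether the incoming constraint is marked valid.
     * Returning false means the merge must be rejected with an error.
     */
    static bool
    merge_with_existing(ConstraintState *existing, bool is_local,
                        bool new_validated)
    {
        if (is_local)
        {
            if (existing->conislocal)
                return false;               /* true duplicate: complain */
            existing->conislocal = true;    /* adopt the inherited copy */
        }
        else
        {
            if (new_validated && !existing->convalidated)
                return false;               /* would hide violating rows */
            existing->coninhcount++;
        }
        return true;
    }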
Tom Lane [Sat, 8 Oct 2016 22:43:01 +0000 (18:43 -0400)]
Remove user_relns() SRF from regression tests.
Back-patch commit 0dba54f1666ead71c54ce100b39efda67596d297 into the older branches.
This test is almost as much of a patching hazard there as it is
in HEAD, and it has no more reason to be needed than it does in HEAD.
I went back as far as 9.2; I judged 9.1 not worth the trouble since
it's on the verge of being EOL'd.
Heikki Linnakangas [Fri, 7 Oct 2016 18:48:21 +0000 (21:48 +0300)]
Make TAP test suites work when @INC does not contain the current dir.
Recent Perl and/or new Linux distributions are starting to remove "." from
the @INC list by default. That breaks pg_rewind and ssl test suites, which
use helper perl modules that reside in the same directory. To fix, add the
current source directory explicitly to prove's include dir.
The vcregress.pl script probably also needs something like this, but I
wasn't able to remove '.' from @INC on Windows to test this, and don't want
to try doing that blindly.
Discussion: <20160908204529[email protected]>
Heikki Linnakangas [Fri, 7 Oct 2016 09:53:45 +0000 (12:53 +0300)]
Clear OpenSSL error queue after failed X509_STORE_load_locations() call.
Leaving the error in the error queue used to be harmless, because the
X509_STORE_load_locations() call used to be the last step in
initialize_SSL(), and we would clear the queue before the next
SSL_connect() call. But the previous commit moved things around. The symptom
was that if a CRL file was not found, and one of the subsequent
initialization steps, like loading the client certificate or private key,
failed, we would incorrectly print the "no such file" error message from
the earlier X509_STORE_load_locations() call as the reason.
Backpatch to all supported versions, like the previous patch.
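Roughly the pattern involved, as a hedged sketch (OpenSSL calls only; the helper name and the libpq surroundings are illustrative):

    #include <openssl/ssl.h>
    #include <openssl/err.h>

    /* If loading the optional CRL file fails, drop the resulting
     * "no such file" entry so that a later, unrelated failure cannot
     * pick it up as its reported cause. */
    static void
    load_crl_if_present(SSL_CTX *ctx, const char *crlfile)
    {
        X509_STORE *store = SSL_CTX_get_cert_store(ctx);

        if (store == NULL ||
            X509_STORE_load_locations(store, crlfile, NULL) != 1)
            ERR_clear_error();
    }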
Heikki Linnakangas [Fri, 7 Oct 2016 09:20:39 +0000 (12:20 +0300)]
Don't share SSL_CTX between libpq connections.
There were several issues with the old coding:
1. There was a race condition, if two threads opened a connection at the
same time. We used a mutex around SSL_CTX_* calls, but that was not
enough, e.g. if one thread called SSL_CTX_load_verify_locations() with one
path, and another thread set it with a different path, before the first
thread got to establish the connection.
2. Opening two different connections, with different sslrootcert settings,
seemed to fail outright with "SSL error: block type is not 01". Not sure
why.
3. We created the SSL object, before calling SSL_CTX_load_verify_locations
and SSL_CTX_use_certificate_chain_file on the SSL context. That was
wrong, because the options set on the SSL context are propagated to the
SSL object, when the SSL object is created. If they are set after the
SSL object has already been created, they won't take effect until the
next connection. (This is bug #14329)
At least some of these could've been fixed while still using a shared
context, but it would've been more complicated and error-prone. To keep
things simple, let's just use a separate SSL context for each connection,
and accept the overhead.
Backpatch to all supported versions.
Report, analysis and test case by Kacper Zuk.
Discussion: <20160920101051[email protected]>
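A hedged sketch of the per-connection pattern this moves to (plain OpenSSL; file-name parameters and error reporting are simplified placeholders):

    #include <openssl/ssl.h>

    static SSL *
    open_ssl_for_connection(int sock, const char *rootcert,
                            const char *cert, const char *key)
    {
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());
        SSL     *ssl = NULL;

        if (ctx == NULL)
            return NULL;
        /* Configure the context completely first ... */
        if (SSL_CTX_load_verify_locations(ctx, rootcert, NULL) == 1 &&
            SSL_CTX_use_certificate_chain_file(ctx, cert) == 1 &&
            SSL_CTX_use_PrivateKey_file(ctx, key, SSL_FILETYPE_PEM) == 1)
        {
            /* ... and only then create the SSL object, which inherits the
             * context's settings at creation time. */
            ssl = SSL_new(ctx);
            if (ssl != NULL)
                SSL_set_fd(ssl, sock);
        }
        /* The SSL object holds its own reference to the context. */
        SSL_CTX_free(ctx);
        return ssl;
    }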
Andres Freund [Tue, 4 Oct 2016 05:11:36 +0000 (22:11 -0700)]
Correct logical decoding restore behaviour for subtransactions.
Before initializing iteration over a subtransaction's changes, the last
few changes were not spilled to disk. That's correct if the transaction
didn't spill to disk, but otherwise... This bug can lead to missed or
misordered subtransaction contents when they were spilled to disk.
Move spilling of the remaining in-memory changes to
ReorderBufferIterTXNInit(), where it can easily be applied to the top
transaction and, if present, subtransactions.
Since this code had too many bugs already, noticeably increase test
coverage.
Fixes: #14319
Reported-By: Huan Ruan
Discussion: <20160909012610[email protected]>
Backport: 9.4-, where logical decoding was added
Tom Lane [Sat, 1 Oct 2016 21:15:10 +0000 (17:15 -0400)]
Do ClosePostmasterPorts() earlier in SubPostmasterMain().
In standard Unix builds, postmaster child processes do ClosePostmasterPorts
immediately after InitPostmasterChild, that is almost immediately after
being spawned. This is important because we don't want children holding
open the postmaster's end of the postmaster death watch pipe.
However, in EXEC_BACKEND builds, SubPostmasterMain was postponing this
responsibility significantly, in order to make it slightly more convenient
to pass the right flag value to ClosePostmasterPorts. This is bad,
particularly seeing that process_shared_preload_libraries() might invoke
nearly-arbitrary code. Rearrange so that we do it as soon as we've
fetched the socket FDs via read_backend_variables().
Also move the comment explaining about randomize_va_space to before the
call of PGSharedMemoryReAttach, which is where it's relevant. The old
placement was appropriate when the reattach happened inside
CreateSharedMemoryAndSemaphores, but that was a long time ago.
Back-patch to 9.3; the patch doesn't apply cleanly before that, and
it doesn't seem worth a lot of effort given that we've had no actual
field complaints traceable to this.
Discussion: <4157.1475178360@sss.pgh.pa.us>
Magnus Hagander [Fri, 30 Sep 2016 09:19:30 +0000 (11:19 +0200)]
Retry opening new segments in pg_xlogdump --follow
There is a small window between when the server closes out the existing
segment and the new one is created. Put a loop around the open call in
this case to make sure we wait for the new file to actually appear.
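A hedged sketch of the retry idea (the helper name, retry count, and sleep interval are illustrative, not the pg_xlogdump code):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Retry opening the next segment for a short while: there is a window
     * between the server closing the old segment and creating the new one. */
    static int
    open_segment_with_retry(const char *path, int max_tries)
    {
        for (int i = 0; i < max_tries; i++)
        {
            int fd = open(path, O_RDONLY, 0);

            if (fd >= 0)
                return fd;
            if (errno != ENOENT)
                break;              /* only "not there yet" is worth retrying */
            usleep(100 * 1000);     /* wait 100 ms before the next attempt */
        }
        return -1;
    }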
Robert Haas [Wed, 28 Sep 2016 16:38:33 +0000 (12:38 -0400)]
worker_spi: Call pgstat_report_stat.
Without this, statistics changes accumulated by the worker never get
reported to the stats collector, which is bad.
Julien Rouhaud
Alvaro Herrera [Tue, 27 Sep 2016 04:05:21 +0000 (01:05 -0300)]
Include <sys/select.h> where needed
<sys/select.h> is required by POSIX.1-2001 to get the prototype of
select(2), but nearly no systems enforce that because older standards
let you get away with including some other headers. Recent OpenBSD
hacking has removed that frail touch of friendliness, however, which
broke some compiles; fix all the way back to 9.1 by adding the required
standard header. Only vacuumdb.c was reported to fail, but it seems easier to
fix the whole lot in one fell swoop.
Per bug #14334 by Sean Farrell.
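The fix amounts to adding the standard include wherever select(2) and fd_set are used, guarded in the tree's usual way (a sketch; the HAVE_SYS_SELECT_H configure symbol is assumed):

    #ifdef HAVE_SYS_SELECT_H
    #include <sys/select.h>     /* select(2), fd_set, FD_SET, FD_ISSET */
    #endif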
Tom Lane [Mon, 26 Sep 2016 15:50:35 +0000 (11:50 -0400)]
Document has_type_privilege().
Evidently an oversight in commit 729205571. Back-patch to 9.2 where
privileges for types were introduced.
Report: <20160922173517[email protected]>
Tom Lane [Fri, 23 Sep 2016 19:50:00 +0000 (15:50 -0400)]
Install TAP test infrastructure so it's available for extension testing.
When configured with --enable-tap-tests, "make install" will now install
the Perl support files for TAP testing where PGXS will find them.
This allows extensions to rely on $(prove_check) even when being built
out-of-tree. Back-patch to 9.4 where we first started to support TAP
testing, to reduce the number of cases extension makefiles need to
consider.
Craig Ringer
Discussion:
Tom Lane [Fri, 23 Sep 2016 18:22:07 +0000 (14:22 -0400)]
Doc: fix examples of # operators so they actually work.
These worked as-is until around 7.0, but fail in newer versions because
there are more operators named "#". Besides, it's a bit inconsistent that
only two of the examples on this page lack type names on their constants.
Report: <20160923081530[email protected]>
Tom Lane [Fri, 23 Sep 2016 17:49:27 +0000 (13:49 -0400)]
Fix incorrect logic for excluding range constructor functions in pg_dump.
Faulty AND/OR nesting in the WHERE clause of getFuncs' SQL query led to
dumping range constructor functions if they are part of an extension
and we're in binary-upgrade mode. Actually, we don't want to dump them
separately even then, since CREATE TYPE AS RANGE will create the range's
constructor functions regardless. Per report from Andrew Dunstan.
It looks like this mistake was introduced by me, in commit b985d4877, in
perhaps-overzealous refactoring to reduce code duplication. I'm suitably
embarrassed.
Report: <34854939-02d7-f591-5677-ce2994104599@dunslane.net>
Tom Lane [Fri, 23 Sep 2016 14:09:52 +0000 (10:09 -0400)]
Don't trust CreateFileMapping() to clear the error code on success.
We must test GetLastError() even when CreateFileMapping() returns a
non-null handle. If that value were left over from some previous system
call, we might be fooled into thinking the segment already existed.
Experimentation on Windows 7 suggests that CreateFileMapping() clears
the error code on success, but it is not documented to do so, so let's
not rely on that happening in all Windows releases.
Amit Kapila
Discussion: <20811.1474390987@sss.pgh.pa.us>
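A hedged sketch of the defensive pattern (Win32 API only; the helper name and the surrounding shared-memory bookkeeping are illustrative):

    #include <stdbool.h>
    #include <windows.h>

    /* Clear the thread's error code before the call, then check it even on
     * success, so a stale ERROR_ALREADY_EXISTS cannot fool us into thinking
     * the segment was already there. */
    static HANDLE
    create_mapping_checked(const char *name, DWORD size, bool *already_exists)
    {
        HANDLE  hmap;

        SetLastError(0);
        hmap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                 0, size, name);
        *already_exists = (hmap != NULL &&
                           GetLastError() == ERROR_ALREADY_EXISTS);
        return hmap;
    }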
Tom Lane [Fri, 23 Sep 2016 13:54:11 +0000 (09:54 -0400)]
Avoid using PostmasterRandom() for DSM control segment ID.
Commits 470d886c3 et al intended to fix the problem that the postmaster
selected the same "random" DSM control segment ID on every start. But
using PostmasterRandom() for that destroys the intended property that the
delay between random_start_time and random_stop_time will be unpredictable.
(Said delay is probably already more predictable than we could wish, but
that doesn't mean that reducing it by a couple orders of magnitude is OK.)
Revert the previous patch and add a comment warning against misuse of
PostmasterRandom. Fix the original problem by calling srandom() early in
PostmasterMain, using a low-security seed that will later be overwritten
by PostmasterRandom.
Discussion: <20789.1474390434@sss.pgh.pa.us>
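A hedged sketch of the early seeding step (the seed expression and helper name are illustrative; the point is a cheap, low-security seed that PostmasterRandom's own seeding later supersedes):

    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    /* Seed random() early in postmaster startup so callers that run before
     * the first PostmasterRandom() still get varying numbers. */
    static void
    seed_random_early(void)
    {
        srandom((unsigned int) (getpid() ^ time(NULL)));
    }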
Tom Lane [Thu, 22 Sep 2016 15:34:44 +0000 (11:34 -0400)]
Be sure to rewind the tuplestore read pointer in non-leader CTEScan nodes.
ExecInitCteScan supposed that it didn't have to do anything to the extra
tuplestore read pointer it gets from tuplestore_alloc_read_pointer.
However, it needs this read pointer to be positioned at the start of the
tuplestore, while tuplestore_alloc_read_pointer is actually defined as
cloning the current position of read pointer 0. In normal situations
that accidentally works because we initialize the whole plan tree at once,
before anything gets read. But it fails in an EvalPlanQual recheck, as
illustrated in bug #14328 from Dima Pavlov. To fix, just forcibly rewind
the pointer after tuplestore_alloc_read_pointer. The cost of doing so is
negligible unless the tuplestore is already in TSS_READFILE state, which
wouldn't happen in normal cases. We could consider altering tuplestore's
API to make that case cheaper, but that would make for a more invasive
back-patch and it doesn't seem worth it.
This has been broken probably for as long as we've had CTEs, so back-patch
to all supported branches.
Discussion: <32468.1474548308@sss.pgh.pa.us>
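A hedged sketch of the shape of the fix (a backend fragment, not standalone code; the eflags argument and the surrounding ExecInitCteScan details are elided):

    /* Rewind the freshly allocated read pointer: it is created as a clone
     * of read pointer 0's current position, not at the tuplestore's start. */
    int readptr = tuplestore_alloc_read_pointer(tuplestorestate, 0);

    tuplestore_select_read_pointer(tuplestorestate, readptr);
    tuplestore_rescan(tuplestorestate);
    tuplestore_select_read_pointer(tuplestorestate, 0);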
Heikki Linnakangas [Wed, 21 Sep 2016 10:14:48 +0000 (13:14 +0300)]
Fix pgbench's calculation of average latency, when -T is not used.
If the test duration was given in # of transactions (-t or no option),
rather than as a duration (-T), the latency average was always printed as 0.
It has been broken ever since the display of latency average was added,
in 9.4.
Fabien Coelho
Discussion:
1607131015370.7486@sto>
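The corrected computation has to start from the measured elapsed time, which is nonzero in both modes; a hedged standalone sketch of the arithmetic (names are illustrative):

    /* Average per-transaction latency in milliseconds: total client time
     * (elapsed wall-clock time times the number of client connections)
     * divided by the number of transactions actually executed. */
    static double
    latency_average_ms(double elapsed_sec, int nclients, long total_xacts)
    {
        if (total_xacts <= 0)
            return 0.0;
        return 1000.0 * elapsed_sec * nclients / total_xacts;
    }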
Peter Eisentraut [Tue, 20 Sep 2016 16:00:00 +0000 (12:00 -0400)]
doc: Fix documentation to match actual make output
based on patch from Takeshi Ideriha
Peter Eisentraut [Tue, 20 Sep 2016 16:00:00 +0000 (12:00 -0400)]
doc: Correct ALTER USER MAPPING example
The existing example threw an error.
From: gabrielle
Robert Haas [Tue, 20 Sep 2016 16:24:44 +0000 (12:24 -0400)]
Use PostmasterRandom(), not random(), for DSM control segment ID.
Otherwise, every startup gets the same "random" value, which is
definitely not what was intended.
Robert Haas [Tue, 20 Sep 2016 16:04:41 +0000 (12:04 -0400)]
Retry DSM control segment creation if Windows indicates access denied.
Otherwise, attempts to run multiple postmasters running on the same
machine may fail, because Windows sometimes returns ERROR_ACCESS_DENIED
rather than ERROR_ALREADY_EXISTS when there is an existing segment.
Hitting this bug is much more likely because of another defect not
fixed by this patch, namely that dsm_postmaster_startup() uses
random() which returns the same value every time. But that's not
a reason not to fix this.
Kyotaro Horiguchi and Amit Kapila, reviewed by Michael Paquier
Discussion:
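A hedged sketch of the classification this implies, reusing the SetLastError/GetLastError discipline sketched earlier (the enum and helper name are illustrative):

    #include <windows.h>

    typedef enum { SEG_CREATED, SEG_COLLISION, SEG_ERROR } SegCreateResult;

    /* Windows may report ERROR_ACCESS_DENIED instead of ERROR_ALREADY_EXISTS
     * for a segment belonging to another postmaster, so treat both as a
     * collision; the caller then retries with a different segment name. */
    static SegCreateResult
    try_create_segment(const char *name, DWORD size, HANDLE *hmap)
    {
        DWORD   err;

        SetLastError(0);
        *hmap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                  0, size, name);
        err = GetLastError();
        if (*hmap == NULL)
            return (err == ERROR_ALREADY_EXISTS || err == ERROR_ACCESS_DENIED)
                ? SEG_COLLISION : SEG_ERROR;
        if (err == ERROR_ALREADY_EXISTS)
        {
            CloseHandle(*hmap);     /* we merely opened an existing segment */
            *hmap = NULL;
            return SEG_COLLISION;
        }
        return SEG_CREATED;
    }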