-
+
Regression Tests
======================
- All 96 tests passed.
+ All 98 tests passed.
======================
PostgreSQL installations can
fail
some of these regression tests due to
platform-specific artifacts such as varying floating-point representation
- and time zone support. The tests are currently evaluated using a simple
+ and message wording. The tests are currently evaluated using a simple
diff comparison against the outputs
generated on a reference system, so the results are sensitive to
small system differences. When a test is reported as
can run diff yourself, if you prefer.)
+ If for some reason a particular platform generates a failure
+ for a given test, but inspection of the output convinces you that
+ the result is valid, you can add a new comparison file to silence
+ the failure report in future test runs. See
+ the discussion of variant comparison files below for details.
+
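
Adding such a comparison file is just a matter of copying the verified actual output into place. The sketch below simulates that step in a scratch directory; the layout mimics `src/test/regress` (actual outputs under `results/`, comparison files under `expected/`), and `char_2.out` is an invented variant name used only for illustration:

```python
# Hypothetical illustration of installing a new comparison file.
# In a real source tree the directories would be
# src/test/regress/results and src/test/regress/expected.
import pathlib
import shutil
import tempfile

tree = pathlib.Path(tempfile.mkdtemp())   # stand-in for src/test/regress
(tree / "results").mkdir()
(tree / "expected").mkdir()
(tree / "results" / "char.out").write_text("sample output\n")

# After inspecting the diff and judging the actual output valid,
# install it as an additional comparison file:
shutil.copy(tree / "results" / "char.out", tree / "expected" / "char_2.out")
print(sorted(p.name for p in (tree / "expected").iterdir()))  # ['char_2.out']
```

On the next run, the test driver will accept either the original expected file or the newly installed variant.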
+
Error message differences
there may be differences due to sort order and follow-up
failures. The regression test suite is set up to handle this
problem by providing alternative result files that together are
- known to handle a large number of locales. For example, for the
- char test, the expected file
- char.out handles the C and POSIX locales,
- and the file char_1.out handles many other
- locales. The regression test driver will automatically pick the
- best file to match against when checking for success and for
- computing failure differences. (This means that the regression
- tests cannot detect whether the results are appropriate for the
- configured locale. The tests will simply pick the one result
- file that works best.)
-
-
- If for some reason the existing expected files do not cover some
- locale, you can add a new file. The naming scheme is
- testname_digit.out.
- The actual digit is not significant. Remember that the
- regression test driver will consider all such files to be equally
- valid test results. If the test results are platform-specific,
- the technique described in
- should be used instead.
+ known to handle a large number of locales.
Date and time differences
- A few of the queries in the horology test will
- fail if you run the test on the day of a daylight-saving time
- changeover, or the day after one. These queries expect that
- the intervals between midnight yesterday, midnight today and
- midnight tomorrow are exactly twenty-four hours — which is wrong
- if daylight-saving time went into or out of effect meanwhile.
-
-
-
- Because USA daylight-saving time rules are used, this problem always
- occurs on the first Sunday of April, the last Sunday of October,
- and their following Mondays, regardless of when daylight-saving time
- is in effect where you live. Also note that the problem appears or
- disappears at midnight Pacific time (UTC-7 or UTC-8), not midnight
- your local time. Thus the failure may appear late on Saturday or
- persist through much of Tuesday, depending on where you live.
-
-
-
Most of the date and time results are dependent on the time zone
environment. The reference files are generated for time zone
PST8PDT, so apparent failures will result if the tests are not
run with that time zone setting.
-
-
- Platform-specific comparison files
+
+
+ Variant Comparison Files
+
+ Since some of the tests inherently produce environment-dependent
+ results, we have provided ways to specify alternative expected
+ result files. Each regression test can have several comparison files
+ showing possible results on different platforms. There are two
+ independent mechanisms for determining which comparison file is used
+ for each test.
+
- Since some of the tests inherently produce platform-specific
- results, we have provided a way to supply platform-specific result
- comparison files. Frequently, the same variation applies to
- multiple platforms; rather than supplying a separate comparison
- file for every platform, there is a mapping file that defines
- which comparison file to use. So, to eliminate bogus test
- failures
- for a particular platform, you must choose
- or make a variant result file, and then add a line to the mapping
- file, which is src/test/regress/resultmap.
+ The first mechanism allows comparison files to be selected for
+ specific platforms. There is a mapping file,
+ src/test/regress/resultmap, that defines
+ which comparison file to use for each platform.
+ To eliminate bogus test failures
+ for a particular platform,
+ you first choose or make a variant result file, and then add a line to the
+ resultmap file.
:gcc or :cc, depending on
whether you use the GNU compiler or the system's native compiler
(on systems where there is a difference). The comparison file
- name is the name of the substitute result comparison file.
+ name is the base name of the substitute result comparison file.
in resultmap select the variant comparison file for other
platforms where it's appropriate.
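
For concreteness, a resultmap entry might look like the line below. This is a sketch rather than the definitive syntax (the exact line format has varied across releases); it expresses "for the float8 test on a Cygwin x86 platform, compare against float8-small-is-zero.out":

```
float8/i.86-pc-cygwin=float8-small-is-zero
```

The left-hand side names the test and a pattern matching the platform string, and the right-hand side gives the base name of the substitute comparison file.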
+
+ The second selection mechanism for variant comparison files is
+ much more automatic: it simply uses the best match among
+ several supplied comparison files. The regression test driver
+ script considers both the standard comparison file for a test,
+ testname.out, and variant files named
+ testname_digit.out
+ (where the digit is any single digit
+ 0-9). If any such file is an exact match,
+ the test is considered to pass; otherwise, the one that generates
+ the shortest diff is used to create the failure report. (If
+ resultmap includes an entry for the particular
+ test, then the base testname is the substitute
+ name given in resultmap.)
+
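
The best-match rule can be sketched in a few lines of Python. This is purely illustrative: the real test driver is not written in Python, and `pick_best` is an invented name.

```python
# Illustrative sketch of the "best match" selection rule: an exact
# match means the test passes; otherwise the candidate producing the
# shortest diff is used to build the failure report.
import difflib

def pick_best(actual, candidates):
    """actual: list of output lines; candidates: dict mapping comparison
    file name -> list of expected lines.  Returns (name, diff), where an
    empty diff means some candidate matched exactly."""
    best_name, best_diff = None, None
    for name, expected in candidates.items():
        if expected == actual:
            return name, []                      # exact match: pass
        diff = list(difflib.unified_diff(expected, actual,
                                         fromfile=name, tofile="actual"))
        if best_diff is None or len(diff) < len(best_diff):
            best_name, best_diff = name, diff    # shortest diff so far
    return best_name, best_diff

# With no exact match, the failure report comes from the closest file:
name, diff = pick_best(["a", "b", "x"],
                       {"t.out": ["a", "b", "c"], "t_1.out": ["z", "z"]})
print(name)  # t.out -- its diff against the actual output is shortest
```

Note that "shortest diff" is a heuristic for readability of the failure report, not a judgment about which variant is correct.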
+
+ For example, for the char test, the comparison file
+ char.out contains results that are expected
+ in the C> and POSIX> locales, while
+ the file char_1.out contains results sorted as
+ they appear in many other locales.
+
+
+ The best-match mechanism was devised to cope with locale-dependent
+ results, but it can be used in any situation where the test results
+ cannot be predicted easily from the platform name alone. A limitation of
+ this mechanism is that the test driver cannot tell which variant is
+ actually correct for the current environment; it will just pick
+ the variant that seems to work best. Therefore it is safest to use this
+ mechanism only for variant results that you are willing to consider
+ equally valid in all contexts.
+