- Sets the maximum amount of memory to be used by a query operation
+ Sets the base maximum amount of memory to be used by a query operation
(such as a sort or hash table) before writing to temporary disk files.
If this value is specified without units, it is taken as kilobytes.
The default value is four megabytes (4MB).
Note that for a complex query, several sort or hash operations might be
- running in parallel; each operation will be allowed to use as much memory
- as this value specifies before it starts to write data into temporary
- files. Also, several running sessions could be doing such operations
- concurrently. Therefore, the total memory used could be many
- times the value of work_mem; it is necessary to
- keep this fact in mind when choosing the value. Sort operations are
- used for ORDER BY, DISTINCT, and
- merge joins.
+ running in parallel; each operation will generally be allowed
+ to use as much memory as this value specifies before it starts
+ to write data into temporary files. Also, several running
+ sessions could be doing such operations concurrently.
+ Therefore, the total memory used could be many times the value
+ of work_mem; it is necessary to keep this
+ fact in mind when choosing the value. Sort operations are used
+ for ORDER BY, DISTINCT,
+ and merge joins.
Hash tables are used in hash joins, hash-based aggregation, and
hash-based processing of IN subqueries.
+ Hash-based operations are generally more sensitive to memory
+ availability than equivalent sort-based operations. The
+ memory available for hash tables is computed by multiplying
+ work_mem by
+ hash_mem_multiplier. This makes it
+ possible for hash-based operations to use an amount of memory
+ that exceeds the usual work_mem base
+ amount.
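
To make the documented arithmetic concrete, here is a minimal standalone C sketch (not PostgreSQL source; the settings used are assumptions):

#include <stdio.h>

int
main(void)
{
    int    work_mem_kb = 4096;          /* work_mem = 4MB, the default */
    double hash_mem_multiplier = 2.0;   /* assumed non-default setting */

    /*
     * Sort-based operations are limited to work_mem; hash-based
     * operations get work_mem times hash_mem_multiplier.
     */
    double hash_mem_kb = work_mem_kb * hash_mem_multiplier;

    printf("sort limit: %d kB, hash limit: %.0f kB\n",
           work_mem_kb, hash_mem_kb);
    return 0;
}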
+
+
+
+
+
+ hash_mem_multiplier (floating point)
+
+
hash_mem_multiplier configuration parameter
+
+
+
+ Used to compute the maximum amount of memory that hash-based
+ operations can use. The final limit is determined by
+ multiplying work_mem by
+ hash_mem_multiplier. The default value is
+ 1.0, which makes hash-based operations subject to the same
+ simple work_mem maximum as sort-based
+ operations.
+
+ Consider increasing hash_mem_multiplier in
+ environments where spilling by query operations is a regular
+ occurrence, especially when simply increasing
+ work_mem results in memory pressure (memory
+ pressure typically takes the form of intermittent out of
+ memory errors). A setting of 1.5 or 2.0 may be effective with
+ mixed workloads. Higher settings in the range of 2.0 - 8.0 or
+ more may be effective in environments where
+ work_mem has already been increased to 40MB
+ or more.
+
-S work-mem
- Specifies the amount of memory to be used by internal sorts and hashes
- before resorting to temporary disk files. See the description of the
- work_mem configuration parameter in
- linkend="runtime-config-resource-memory"/>.
+ Specifies the base amount of memory to be used by sorts and
+ hash tables before resorting to temporary disk files. See the
+ description of the work_mem configuration
+ parameter in linkend="runtime-config-resource-memory"/>.
system running out of memory, you can avoid the problem by changing
your configuration. In some cases, it may help to lower memory-related
configuration parameters, particularly
- shared_buffers
- and work_mem. In
- other cases, the problem may be caused by allowing too many connections
- to the database server itself. In many cases, it may be better to reduce
+ shared_buffers,
+ work_mem, and
+ hash_mem_multiplier.
+ In other cases, the problem may be caused by allowing too many
+ connections to the database server itself. In many cases, it may
+ be better to reduce
max_connections
and instead make use of external connection-pooling software.
{
TupleHashTable hashtable;
Size entrysize = sizeof(TupleHashEntryData) + additionalsize;
+ int hash_mem = get_hash_mem();
MemoryContext oldcontext;
bool allow_jit;
Assert(nbuckets > 0);
- /* Limit initial table size request to not more than work_mem */
- nbuckets = Min(nbuckets, (long) ((work_mem * 1024L) / entrysize));
+ /* Limit initial table size request to not more than hash_mem */
+ nbuckets = Min(nbuckets, (long) ((hash_mem * 1024L) / entrysize));
oldcontext = MemoryContextSwitchTo(metacxt);
* entries (and initialize new transition states), we instead spill them to
* disk to be processed later. The tuples are spilled in a partitioned
* manner, so that subsequent batches are smaller and less likely to exceed
- * work_mem (if a batch does exceed work_mem, it must be spilled
+ * hash_mem (if a batch does exceed hash_mem, it must be spilled
* recursively).
*
* Spilled data is written to logical tapes. These provide better control
*
* Note that it's possible for transition states to start small but then
* grow very large; for instance in the case of ARRAY_AGG. In such cases,
- * it's still possible to significantly exceed work_mem. We try to avoid
+ * it's still possible to significantly exceed hash_mem. We try to avoid
* this situation by estimating what will fit in the available memory, and
* imposing a limit on the number of groups separately from the amount of
* memory consumed.
/*
* Used to make sure initial hash table allocation does not exceed
- * work_mem. Note that the estimate does not include space for
+ * hash_mem. Note that the estimate does not include space for
* pass-by-reference transition data values, nor for the representative
* tuple of each group.
*/
}
/*
- * Set limits that trigger spilling to avoid exceeding work_mem. Consider the
+ * Set limits that trigger spilling to avoid exceeding hash_mem. Consider the
* number of partitions we expect to create (if we do spill).
*
* There are two limits: a memory limit, and also an ngroups limit. The
{
int npartitions;
Size partition_mem;
+ int hash_mem = get_hash_mem();
- /* if not expected to spill, use all of work_mem */
- if (input_groups * hashentrysize < work_mem * 1024L)
+ /* if not expected to spill, use all of hash_mem */
+ if (input_groups * hashentrysize < hash_mem * 1024L)
{
if (num_partitions != NULL)
*num_partitions = 0;
- *mem_limit = work_mem * 1024L;
+ *mem_limit = hash_mem * 1024L;
*ngroups_limit = *mem_limit / hashentrysize;
return;
}
HASHAGG_WRITE_BUFFER_SIZE * npartitions;
/*
- * Don't set the limit below 3/4 of work_mem. In that case, we are at the
+ * Don't set the limit below 3/4 of hash_mem. In that case, we are at the
* minimum number of partitions, so we aren't going to dramatically exceed
* hash_mem anyway.
*/
- if (work_mem * 1024L > 4 * partition_mem)
- *mem_limit = work_mem * 1024L - partition_mem;
+ if (hash_mem * 1024L > 4 * partition_mem)
+ *mem_limit = hash_mem * 1024L - partition_mem;
else
- *mem_limit = work_mem * 1024L * 0.75;
+ *mem_limit = hash_mem * 1024L * 0.75;
if (*mem_limit > hashentrysize)
*ngroups_limit = *mem_limit / hashentrysize;
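
As a worked illustration of the limit arithmetic in this hunk, consider the following standalone sketch (all numbers are hypothetical; it mirrors, but is not, the nodeAgg.c code):

#include <stdio.h>

int
main(void)
{
    long   hash_mem_bytes = 8192 * 1024L;   /* assume hash_mem = 8MB */
    long   partition_mem = 1536 * 1024L;    /* assumed spill-buffer total */
    double hashentrysize = 64.0;            /* assumed bytes per group */
    long   mem_limit;

    /* Reserve buffer space only when it stays under 1/4 of hash_mem. */
    if (hash_mem_bytes > 4 * partition_mem)
        mem_limit = hash_mem_bytes - partition_mem;
    else
        mem_limit = hash_mem_bytes * 0.75;

    printf("mem_limit = %ld bytes, ngroups_limit = %ld\n",
           mem_limit, (long) (mem_limit / hashentrysize));
    return 0;
}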
int partition_limit;
int npartitions;
int partition_bits;
+ int hash_mem = get_hash_mem();
/*
* Avoid creating so many partitions that the memory requirements of the
- * open partition files are greater than 1/4 of work_mem.
+ * open partition files are greater than 1/4 of hash_mem.
*/
partition_limit =
- (work_mem * 1024L * 0.25 - HASHAGG_READ_BUFFER_SIZE) /
+ (hash_mem * 1024L * 0.25 - HASHAGG_READ_BUFFER_SIZE) /
HASHAGG_WRITE_BUFFER_SIZE;
mem_wanted = HASHAGG_PARTITION_FACTOR * input_groups * hashentrysize;
/* make enough partitions so that each one is likely to fit in memory */
- npartitions = 1 + (mem_wanted / (work_mem * 1024L));
+ npartitions = 1 + (mem_wanted / (hash_mem * 1024L));
if (npartitions > partition_limit)
npartitions = partition_limit;
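
The partition-count logic can be illustrated with a standalone sketch (the HASHAGG_* values below are assumed stand-ins for the real constants, and the real code additionally rounds the result):

#include <stdio.h>

/* Assumed stand-ins for nodeAgg.c's constants, not the real definitions. */
#define HASHAGG_PARTITION_FACTOR   1.5
#define HASHAGG_READ_BUFFER_SIZE   8192
#define HASHAGG_WRITE_BUFFER_SIZE  8192

int
main(void)
{
    long   hash_mem_bytes = 4096 * 1024L;   /* assume hash_mem = 4MB */
    double input_groups = 1e6;              /* hypothetical estimate */
    double hashentrysize = 64.0;

    /* Cap open-partition buffer memory at 1/4 of hash_mem. */
    int partition_limit =
        (hash_mem_bytes * 0.25 - HASHAGG_READ_BUFFER_SIZE) /
        HASHAGG_WRITE_BUFFER_SIZE;

    /* Make enough partitions that each is likely to fit in memory. */
    double mem_wanted = HASHAGG_PARTITION_FACTOR * input_groups * hashentrysize;
    int    npartitions = 1 + (int) (mem_wanted / hash_mem_bytes);

    if (npartitions > partition_limit)
        npartitions = partition_limit;

    printf("npartitions = %d (cap %d)\n", npartitions, partition_limit);
    return 0;
}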
#include "port/atomics.h"
#include "port/pg_bitutils.h"
#include "utils/dynahash.h"
+#include "utils/guc.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/syscache.h"
hashtable->spaceAllowed = space_allowed;
hashtable->spaceUsedSkew = 0;
hashtable->spaceAllowedSkew =
- hashtable->spaceAllowed * SKEW_WORK_MEM_PERCENT / 100;
+ hashtable->spaceAllowed * SKEW_HASH_MEM_PERCENT / 100;
hashtable->chunks = NULL;
hashtable->current_chunk = NULL;
hashtable->parallel_state = state->parallel_state;
void
ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
- bool try_combined_work_mem,
+ bool try_combined_hash_mem,
int parallel_workers,
size_t *space_allowed,
int *numbuckets,
int nbatch = 1;
int nbuckets;
double dbuckets;
+ int hash_mem = get_hash_mem();
/* Force a plausible relation size if no info */
if (ntuples <= 0.0)
inner_rel_bytes = ntuples * tupsize;
/*
- * Target in-memory hashtable size is work_mem kilobytes.
+ * Target in-memory hashtable size is hash_mem kilobytes.
*/
- hash_table_bytes = work_mem * 1024L;
+ hash_table_bytes = hash_mem * 1024L;
/*
- * Parallel Hash tries to use the combined work_mem of all workers to
- * avoid the need to batch. If that won't work, it falls back to work_mem
+ * Parallel Hash tries to use the combined hash_mem of all workers to
+ * avoid the need to batch. If that won't work, it falls back to hash_mem
* per worker and tries to process batches in parallel.
*/
- if (try_combined_work_mem)
+ if (try_combined_hash_mem)
hash_table_bytes += hash_table_bytes * parallel_workers;
*space_allowed = hash_table_bytes;
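
A small sketch of the combined-budget computation, under assumed values (hypothetical illustration, not PostgreSQL source):

#include <stdio.h>

int
main(void)
{
    long hash_table_bytes = 4096 * 1024L;  /* per-process hash_mem, assumed 4MB */
    int  parallel_workers = 3;

    /* Leader and workers pool their budgets for one shared hash table. */
    hash_table_bytes += hash_table_bytes * parallel_workers;

    printf("combined budget: %ld bytes (leader + %d workers)\n",
           hash_table_bytes, parallel_workers);
    return 0;
}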
*/
if (useskew)
{
- skew_table_bytes = hash_table_bytes * SKEW_WORK_MEM_PERCENT / 100;
+ skew_table_bytes = hash_table_bytes * SKEW_HASH_MEM_PERCENT / 100;
/*----------
* Divisor is:
/*
* Set nbuckets to achieve an average bucket load of NTUP_PER_BUCKET when
* memory is filled, assuming a single batch; but limit the value so that
- * the pointer arrays we'll try to allocate do not exceed work_mem nor
+ * the pointer arrays we'll try to allocate do not exceed hash_mem nor
* MaxAllocSize.
*
* Note that both nbuckets and nbatch must be powers of 2 to make
long bucket_size;
/*
- * If Parallel Hash with combined work_mem would still need multiple
- * batches, we'll have to fall back to regular work_mem budget.
+ * If Parallel Hash with combined hash_mem would still need multiple
+ * batches, we'll have to fall back to regular hash_mem budget.
*/
- if (try_combined_work_mem)
+ if (try_combined_hash_mem)
{
ExecChooseHashTableSize(ntuples, tupwidth, useskew,
false, parallel_workers,
}
/*
- * Estimate the number of buckets we'll want to have when work_mem is
+ * Estimate the number of buckets we'll want to have when hash_mem is
* entirely full. Each bucket will contain a bucket pointer plus
* NTUP_PER_BUCKET tuples, whose projected size already includes
* overhead for the hash code, pointer to the next tuple, etc.
/*
* Buckets are simple pointers to hashjoin tuples, while tupsize
* includes the pointer, hash code, and MinimalTupleData. So buckets
- * should never really exceed 25% of work_mem (even for
- * NTUP_PER_BUCKET=1); except maybe for work_mem values that are not
+ * should never really exceed 25% of hash_mem (even for
+ * NTUP_PER_BUCKET=1); except maybe for hash_mem values that are not
* 2^N bytes, where we might get more because of doubling. So let's
* look for 50% here.
*/
/* Figure out how many batches to use. */
if (hashtable->nbatch == 1)
{
+ int hash_mem = get_hash_mem();
+
/*
* We are going from single-batch to multi-batch. We need
* to switch from one large combined memory budget to the
- * regular work_mem budget.
+ * regular hash_mem budget.
*/
- pstate->space_allowed = work_mem * 1024L;
+ pstate->space_allowed = hash_mem * 1024L;
/*
- * The combined work_mem of all participants wasn't
+ * The combined hash_mem of all participants wasn't
* enough. Therefore one batch per participant would be
* approximately equivalent and would probably also be
* insufficient. So try two batches per participant,
/*
* Check if our space limit would be exceeded. To avoid choking on
- * very large tuples or very low work_mem setting, we'll always allow
+ * very large tuples or very low hash_mem setting, we'll always allow
* each backend to allocate at least one chunk.
*/
if (hashtable->batches[0].at_least_one_chunk &&
return true;
}
+
+/*
+ * Get a hash_mem value by multiplying the work_mem GUC's value by the
+ * hash_mem_multiplier GUC's value.
+ *
+ * Returns a work_mem style KB value that hash-based nodes (including but not
+ * limited to hash join) use in place of work_mem. This is subject to the
+ * same restrictions as work_mem itself. (There is no such thing as the
+ * hash_mem GUC, but it's convenient for our callers to pretend that there
+ * is.)
+ *
+ * Exported for use by the planner, as well as other hash-based executor
+ * nodes. This is a rather random place for this, but there is no better
+ * place.
+ */
+int
+get_hash_mem(void)
+{
+ double hash_mem;
+
+ Assert(hash_mem_multiplier >= 1.0);
+
+ hash_mem = (double) work_mem * hash_mem_multiplier;
+
+ /*
+ * guc.c enforces a MAX_KILOBYTES limitation on work_mem in order to
+ * support the assumption that raw derived byte values can be stored in
+ * 'long' variables. The returned hash_mem value must also meet this
+ * assumption.
+ *
+ * We clamp the final value rather than throw an error because it should
+ * be possible to set work_mem and hash_mem_multiplier independently.
+ */
+ if (hash_mem < MAX_KILOBYTES)
+ return (int) hash_mem;
+
+ return MAX_KILOBYTES;
+}
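
The clamping behavior can be demonstrated in isolation. In this standalone sketch, MAX_KILOBYTES and the GUC variables are mocked locally rather than taken from PostgreSQL headers:

#include <limits.h>
#include <stdio.h>

#define MAX_KILOBYTES (INT_MAX / 1024)  /* assumed stand-in for guc.h's limit */

static int    work_mem = 2000000;       /* kB; hypothetical large setting */
static double hash_mem_multiplier = 1000.0;

static int
get_hash_mem_sketch(void)
{
    double hash_mem = (double) work_mem * hash_mem_multiplier;

    /* Clamp so derived byte values still fit in a 'long'. */
    if (hash_mem < MAX_KILOBYTES)
        return (int) hash_mem;
    return MAX_KILOBYTES;
}

int
main(void)
{
    /* 2000000 kB * 1000.0 exceeds the limit, so the result is clamped. */
    printf("hash_mem = %d kB\n", get_hash_mem_sketch());
    return 0;
}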
* PHJ_BUILD_HASHING_INNER so we can skip loading.
*
* Initially we try to plan for a single-batch hash join using the combined
- * work_mem of all participants to create a large shared hash table. If that
+ * hash_mem of all participants to create a large shared hash table. If that
* turns out either at planning or execution time to be impossible then we
- * fall back to regular work_mem sized hash tables.
+ * fall back to regular hash_mem sized hash tables.
*
* To avoid deadlocks, we never wait for any barrier unless it is known that
* all other backends attached to it are actively executing the node or have
* Get hash table size that executor would use for inner relation.
*
* XXX for the moment, always assume that skew optimization will be
- * performed. As long as SKEW_WORK_MEM_PERCENT is small, it's not worth
+ * performed. As long as SKEW_HASH_MEM_PERCENT is small, it's not worth
* trying to determine that for sure.
*
* XXX at some point it might be interesting to try to account for skew
ExecChooseHashTableSize(inner_path_rows_total,
inner_path->pathtarget->width,
true, /* useskew */
- parallel_hash, /* try_combined_work_mem */
+ parallel_hash, /* try_combined_hash_mem */
outer_path->parallel_workers,
&space_allowed,
&numbuckets,
Cost run_cost = workspace->run_cost;
int numbuckets = workspace->numbuckets;
int numbatches = workspace->numbatches;
+ int hash_mem;
Cost cpu_per_tuple;
QualCost hash_qual_cost;
QualCost qp_qual_cost;
}
/*
- * If the bucket holding the inner MCV would exceed work_mem, we don't
+ * If the bucket holding the inner MCV would exceed hash_mem, we don't
* want to hash unless there is really no other alternative, so apply
* disable_cost. (The executor normally copes with excessive memory usage
* by splitting batches, but obviously it cannot separate equal values
- * that way, so it will be unable to drive the batch size below work_mem
+ * that way, so it will be unable to drive the batch size below hash_mem
* when this is true.)
*/
+ hash_mem = get_hash_mem();
if (relation_byte_size(clamp_row_est(inner_path_rows * innermcvfreq),
inner_path->pathtarget->width) >
- (work_mem * 1024L))
+ (hash_mem * 1024L))
startup_cost += disable_cost;
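
A simplified standalone sketch of this guard (relation_byte_size is approximated here as rows times width, and all numbers, including the disable_cost stand-in, are hypothetical):

#include <stdio.h>

int
main(void)
{
    double disable_cost = 1.0e10;   /* assumed stand-in for costsize.c's value */
    double startup_cost = 1000.0;
    double inner_rows = 1e6;        /* hypothetical planner estimates */
    double inner_mcv_freq = 0.30;
    int    width = 40;
    long   hash_mem_bytes = 4096 * 1024L;

    /*
     * Batch splitting cannot separate duplicates of the inner MCV, so if
     * that single bucket alone would overflow hash_mem, discourage the
     * hash join by adding disable_cost.
     */
    if (inner_rows * inner_mcv_freq * width > hash_mem_bytes)
        startup_cost += disable_cost;

    printf("startup_cost = %.1f\n", startup_cost);
    return 0;
}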
/*
double dNumGroups)
{
Query *parse = root->parse;
+ int hash_mem = get_hash_mem();
/*
* If we're not being offered sorted input, then only consider plans that
* can be done entirely by hashing.
*
- * We can hash everything if it looks like it'll fit in work_mem. But if
+ * We can hash everything if it looks like it'll fit in hash_mem. But if
* the input is actually sorted despite not being advertised as such, we
* prefer to make use of that in order to use less memory.
*
- * If none of the grouping sets are sortable, then ignore the work_mem
+ * If none of the grouping sets are sortable, then ignore the hash_mem
* limit and generate a path anyway, since otherwise we'll just fail.
*/
if (!is_sorted)
/*
* gd->rollups is empty if we have only unsortable columns to work
- * with. Override work_mem in that case; otherwise, we'll rely on the
+ * with. Override hash_mem in that case; otherwise, we'll rely on the
* sorted-input case to generate usable mixed paths.
*/
- if (hashsize > work_mem * 1024L && gd->rollups)
+ if (hashsize > hash_mem * 1024L && gd->rollups)
return; /* nope, won't fit */
/*
{
List *rollups = NIL;
List *hash_sets = list_copy(gd->unsortable_sets);
- double availspace = (work_mem * 1024.0);
+ double availspace = (hash_mem * 1024.0);
ListCell *lc;
/*
/*
* We treat this as a knapsack problem: the knapsack capacity
- * represents work_mem, the item weights are the estimated memory
+ * represents hash_mem, the item weights are the estimated memory
* usage of the hashtables needed to implement a single rollup,
* and we really ought to use the cost saving as the item value;
* however, currently the costs assigned to sort nodes don't
rollup->numGroups);
/*
- * If sz is enormous, but work_mem (and hence scale) is
+ * If sz is enormous, but hash_mem (and hence scale) is
* small, avoid integer overflow here.
*/
k_weights[i] = (int) Min(floor(sz / scale),
* XXX If an ANY subplan is uncorrelated, build_subplan may decide to hash
* its output. In that case it would've been better to specify full
* retrieval. At present, however, we can only check hashability after
- * we've made the subplan :-(. (Determining whether it'll fit in work_mem
+ * we've made the subplan :-(. (Determining whether it'll fit in hash_mem
* is the really hard part.) Therefore, we don't want to be too
* optimistic about the percentage of tuples retrieved, for fear of
* selecting a plan that's bad for the materialization case.
plan = create_plan(subroot, best_path);
- /* Now we can check if it'll fit in work_mem */
+ /* Now we can check if it'll fit in hash_mem */
/* XXX can we check this at the Path stage? */
if (subplan_is_hashable(plan))
{
subplan_is_hashable(Plan *plan)
{
double subquery_size;
+ int hash_mem = get_hash_mem();
/*
- * The estimated size of the subquery result must fit in work_mem. (Note:
+ * The estimated size of the subquery result must fit in hash_mem. (Note:
* we use heap tuple overhead here even though the tuples will actually be
* stored as MinimalTuples; this provides some fudge factor for hashtable
* overhead.)
*/
subquery_size = plan->plan_rows *
(MAXALIGN(plan->plan_width) + MAXALIGN(SizeofHeapTupleHeader));
- if (subquery_size > work_mem * 1024L)
+ if (subquery_size > hash_mem * 1024L)
return false;
return true;
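
The size test above can be reproduced standalone; in this sketch, MAXALIGN and the tuple-header size are approximations (8-byte alignment, 23-byte header) rather than values taken from PostgreSQL headers:

#include <stdio.h>

#define MAXALIGN(x) (((x) + 7) & ~7)   /* assumed 8-byte alignment */

int
main(void)
{
    double plan_rows = 100000.0;       /* hypothetical planner estimate */
    int    plan_width = 36;            /* bytes per row, hypothetical */
    int    tuple_header = 23;          /* stand-in for SizeofHeapTupleHeader */
    long   hash_mem_bytes = 8192 * 1024L;

    /* Per-tuple size uses heap-tuple overhead as a fudge factor. */
    double subquery_size =
        plan_rows * (MAXALIGN(plan_width) + MAXALIGN(tuple_header));

    printf("estimated %.0f bytes: %s\n", subquery_size,
           subquery_size > hash_mem_bytes ? "not hashable" : "hashable");
    return 0;
}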
const char *construct)
{
int numGroupCols = list_length(groupClauses);
+ int hash_mem = get_hash_mem();
bool can_sort;
bool can_hash;
Size hashentrysize;
/*
* Don't do it if it doesn't look like the hashtable will fit into
- * work_mem.
+ * hash_mem.
*/
hashentrysize = MAXALIGN(input_path->pathtarget->width) + MAXALIGN(SizeofMinimalTupleHeader);
- if (hashentrysize * dNumGroups > work_mem * 1024L)
+ if (hashentrysize * dNumGroups > hash_mem * 1024L)
return false;
/*
- * See if the estimated cost is no more than doing it the other way.
+ * See if the estimated cost is no more than doing it the other way. We
+ * deliberately give the hash case more memory when hash_mem exceeds
+ * standard work_mem (i.e., when hash_mem_multiplier exceeds 1.0).
*
* We need to consider input_plan + hashagg versus input_plan + sort +
* group. Note that the actual result plan might involve a SetOp or
* planner.c).
*/
int hashentrysize = subpath->pathtarget->width + 64;
+ int hash_mem = get_hash_mem();
- if (hashentrysize * pathnode->path.rows > work_mem * 1024L)
+ if (hashentrysize * pathnode->path.rows > hash_mem * 1024L)
{
/*
* We should not try to hash. Hack the SpecialJoinInfo to
* enough to not use a multiple of work_mem, and one typically would not
* have many large foreign-key validations happening concurrently. So
* this seems to meet the criteria for being considered a "maintenance"
- * operation, and accordingly we use maintenance_work_mem.
+ * operation, and accordingly we use maintenance_work_mem. However, we
+ * must also set hash_mem_multiplier to 1, since it is surely not okay to
+ * let that get applied to the maintenance_work_mem value.
*
* We use the equivalent of a function SET option to allow the setting to
* persist for exactly the duration of the check query. guc.c also takes
(void) set_config_option("work_mem", workmembuf,
PGC_USERSET, PGC_S_SESSION,
GUC_ACTION_SAVE, true, 0, false);
+ (void) set_config_option("hash_mem_multiplier", "1",
+ PGC_USERSET, PGC_S_SESSION,
+ GUC_ACTION_SAVE, true, 0, false);
if (SPI_connect() != SPI_OK_CONNECT)
elog(ERROR, "SPI_connect failed");
elog(ERROR, "SPI_finish failed");
/*
- * Restore work_mem.
+ * Restore work_mem and hash_mem_multiplier.
*/
AtEOXact_GUC(true, save_nestlevel);
* enough to not use a multiple of work_mem, and one typically would not
* have many large foreign-key validations happening concurrently. So
* this seems to meet the criteria for being considered a "maintenance"
- * operation, and accordingly we use maintenance_work_mem.
+ * operation, and accordingly we use maintenance_work_mem. However, we
+ * must also set hash_mem_multiplier to 1, since it is surely not okay to
+ * let that get applied to the maintenance_work_mem value.
*
* We use the equivalent of a function SET option to allow the setting to
* persist for exactly the duration of the check query. guc.c also takes
(void) set_config_option("work_mem", workmembuf,
PGC_USERSET, PGC_S_SESSION,
GUC_ACTION_SAVE, true, 0, false);
+ (void) set_config_option("hash_mem_multiplier", "1",
+ PGC_USERSET, PGC_S_SESSION,
+ GUC_ACTION_SAVE, true, 0, false);
if (SPI_connect() != SPI_OK_CONNECT)
elog(ERROR, "SPI_connect failed");
elog(ERROR, "SPI_finish failed");
/*
- * Restore work_mem.
+ * Restore work_mem and hash_mem_multiplier.
*/
AtEOXact_GUC(true, save_nestlevel);
}
bool enableFsync = true;
bool allowSystemTableMods = false;
int work_mem = 4096;
+double hash_mem_multiplier = 1.0;
int maintenance_work_mem = 65536;
int max_parallel_maintenance_workers = 2;
NULL, NULL, NULL
},
+ {
+ {"hash_mem_multiplier", PGC_USERSET, RESOURCES_MEM,
+ gettext_noop("Multiple of work_mem to use for hash tables."),
+ NULL,
+ GUC_EXPLAIN
+ },
+ &hash_mem_multiplier,
+ 1.0, 1.0, 1000.0,
+ NULL, NULL, NULL
+ },
+
{
{"bgwriter_lru_multiplier", PGC_SIGHUP, RESOURCES_BGWRITER,
gettext_noop("Multiple of the average buffer usage to free per round."),
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
+#hash_mem_multiplier = 1.0 # 1-1000.0 multiplier on hash table work_mem
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#logical_decoding_work_mem = 64MB # min 64kB
* outer relation tuples with these hash values are matched against that
* table instead of the main one. Thus, tuples with these hash values are
* effectively handled as part of the first batch and will never go to disk.
- * The skew hashtable is limited to SKEW_WORK_MEM_PERCENT of the total memory
+ * The skew hashtable is limited to SKEW_HASH_MEM_PERCENT of the total memory
* allowed for the join; while building the hashtables, we decrease the number
* of MCVs being specially treated if needed to stay under this limit.
*
#define SKEW_BUCKET_OVERHEAD MAXALIGN(sizeof(HashSkewBucket))
#define INVALID_SKEW_BUCKET_NO (-1)
-#define SKEW_WORK_MEM_PERCENT 2
+#define SKEW_HASH_MEM_PERCENT 2
#define SKEW_MIN_OUTER_FRACTION 0.01
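
A trivial sketch of how the renamed constant carves out the skew hashtable's share of the join's memory budget (the budget value is assumed):

#include <stdio.h>

#define SKEW_HASH_MEM_PERCENT 2

int
main(void)
{
    long space_allowed = 4096 * 1024L;  /* assumed hash table budget */
    long skew_bytes = space_allowed * SKEW_HASH_MEM_PERCENT / 100;

    printf("skew hashtable budget: %ld of %ld bytes\n",
           skew_bytes, space_allowed);
    return 0;
}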
/*
extern void ExecHashTableReset(HashJoinTable hashtable);
extern void ExecHashTableResetMatchFlags(HashJoinTable hashtable);
extern void ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
- bool try_combined_work_mem,
+ bool try_combined_hash_mem,
int parallel_workers,
size_t *space_allowed,
int *numbuckets,
extern bool enableFsync;
extern PGDLLIMPORT bool allowSystemTableMods;
extern PGDLLIMPORT int work_mem;
+extern PGDLLIMPORT double hash_mem_multiplier;
extern PGDLLIMPORT int maintenance_work_mem;
extern PGDLLIMPORT int max_parallel_maintenance_workers;
extern bool BackupInProgress(void);
extern void CancelBackup(void);
+/* in executor/nodeHash.c */
+extern int get_hash_mem(void);
+
#endif /* MISCADMIN_H */