Oskari Saarenmaa. Backpatch to stable branches where applicable.
newdbl[2];
/*
- * We are allways using "double" timestamps here. Precision should be good
+ * We are always using "double" timestamps here. Precision should be good
* enough.
*/
orgdbl[0] = ((double) origentry->lower);
PG_RETURN_POINTER(entry);
}
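
For context, the hunk above sits in a btree_gist penalty function that casts the key bounds to double before doing the range arithmetic, which is what the fixed comment is explaining. A minimal standalone sketch of the idea (an illustration of the pattern only, not btree_gist's actual penalty_num() logic):

    /* Sketch: penalty as the amount by which adding the new key's range
     * would enlarge the original range, computed on double bounds. */
    static float
    range_extension_penalty(double org_lower, double org_upper,
                            double new_lower, double new_upper)
    {
        double merged_lower = (new_lower < org_lower) ? new_lower : org_lower;
        double merged_upper = (new_upper > org_upper) ? new_upper : org_upper;

        /* zero when the new key already fits inside the original range */
        return (float) ((merged_upper - merged_lower) - (org_upper - org_lower));
    }
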
-/* Returns a better readable representaion of variable key ( sets pointer ) */
+/* Returns a more readable representation of a variable key (sets pointer) */
GBT_VARKEY_R
gbt_var_key_readable(const GBT_VARKEY *k)
{
Max(LL_COORD(b, i), UR_COORD(b, i))
);
}
- /* continue on the higher dimemsions only present in 'a' */
+ /* continue on the higher dimensions only present in 'a' */
for (; i < DIM(a); i++)
{
result->x[i] = Max(0,
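
The Max(0, ...) above reflects contrib/cube's convention that a cube of lower dimensionality is implicitly padded with zero coordinates. A hedged sketch of that union rule over plain arrays (the real code operates on the NDBOX type; this assumes dim_a >= dim_b):

    #define Min(x, y) ((x) < (y) ? (x) : (y))
    #define Max(x, y) ((x) > (y) ? (x) : (y))

    /* Union of two boxes given as lower-left/upper-right coordinate
     * arrays; dimensions that 'b' lacks are treated as if they were 0. */
    static void
    box_union_sketch(const double *a_ll, const double *a_ur, int dim_a,
                     const double *b_ll, const double *b_ur, int dim_b,
                     double *out_ll, double *out_ur)
    {
        int i;

        for (i = 0; i < dim_b; i++)     /* dimensions present in both */
        {
            out_ll[i] = Min(a_ll[i], b_ll[i]);
            out_ur[i] = Max(a_ur[i], b_ur[i]);
        }
        for (; i < dim_a; i++)          /* higher dimensions only in 'a' */
        {
            out_ll[i] = Min(0, a_ll[i]);
            out_ur[i] = Max(0, a_ur[i]);
        }
    }
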
- errdetail_log_plural(const char *fmt_singuar, const char
+ errdetail_log_plural(const char *fmt_singular, const char
*fmt_plural, unsigned long n, ...) is like
errdetail_log, but with support for various plural forms of
the message.
* repl information, as appropriate.
*
* NOTE: it's debatable whether to use heap_deform_tuple() here or just
- * heap_getattr() only the non-replaced colums. The latter could win if
+ * heap_getattr() on only the non-replaced columns. The latter could win if
* there are many replaced columns and few non-replaced ones. However,
* heap_deform_tuple costs only O(N) while the heap_getattr way would cost
* O(N^2) if there are many non-replaced columns, so it seems better to
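
To make the O(N) vs. O(N^2) claim concrete: fetching attributes one at a time can force heap_getattr() to re-walk earlier attributes whenever offsets are not cacheable (null or variable-width columns precede), while one deforming pass touches each attribute exactly once. A hedged sketch contrasting the two (extract_all_attrs is a hypothetical helper, not PostgreSQL code):

    #include "postgres.h"
    #include "access/htup_details.h"

    /* Hypothetical helper showing both extraction strategies. */
    static void
    extract_all_attrs(HeapTuple tuple, TupleDesc tupdesc,
                      Datum *values, bool *isnull)
    {
        int attnum;

        /* Per-attribute fetch: worst case O(N^2), since each heap_getattr()
         * may have to re-scan preceding attributes to find its offset. */
        for (attnum = 1; attnum <= tupdesc->natts; attnum++)
            values[attnum - 1] = heap_getattr(tuple, attnum, tupdesc,
                                              &isnull[attnum - 1]);

        /* Single pass: O(N), every attribute deformed in order. */
        heap_deform_tuple(tuple, tupdesc, values, isnull);
    }
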
* locking */
/*
- * remove readed pages from pending list, at this point all
- * content of readed pages is in regular structure
+ * remove read pages from pending list; at this point all
+ * content of read pages is in the regular structure
*/
if (shiftList(index, metabuffer, blkno, stats))
{
* We first consider splits where b is the lower bound of an entry.
* We iterate through all entries, and for each b, calculate the
* smallest possible a. Then we consider splits where a is the
- * uppper bound of an entry, and for each a, calculate the greatest
+ * upper bound of an entry, and for each a, calculate the greatest
* possible b.
*
* In the above example, the first loop would consider splits:
}
/*
- * Iterate over upper bound of left group finding greates possible
+ * Iterate over upper bound of left group, finding greatest possible
* lower bound of right group.
*/
i1 = nentries - 1;
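
A hedged sketch of the two sweeps this comment describes, over plain sorted arrays rather than the real GiST entry vectors; consider_split() is a stand-in for the actual candidate evaluation, and degenerate splits with an empty group are not filtered out here:

    #include <float.h>

    typedef struct
    {
        double lower;
        double upper;
    } Bounds;

    extern void consider_split(double a, double b);   /* hypothetical */

    /* byLower is sorted by ascending lower bound, byUpper by ascending
     * upper bound; each candidate split (a, b) takes a as the greatest
     * upper bound of the left group and b as the smallest lower bound
     * of the right group. */
    static void
    enumerate_candidate_splits(const Bounds *byLower, const Bounds *byUpper,
                               int nentries)
    {
        int     i1,
                i2;
        double  a,
                b;

        /* Splits where b is the lower bound of an entry: anything whose
         * lower bound is below b is forced into the left group, so the
         * smallest feasible a is the running maximum of those uppers. */
        i1 = 0;
        a = -DBL_MAX;               /* left group empty so far */
        for (i2 = 0; i2 < nentries; i2++)
        {
            b = byLower[i2].lower;
            while (i1 < i2 && byLower[i1].lower < b)
            {
                if (byLower[i1].upper > a)
                    a = byLower[i1].upper;
                i1++;
            }
            consider_split(a, b);
        }

        /* Splits where a is the upper bound of an entry: anything whose
         * upper bound is above a is forced into the right group, so the
         * greatest feasible b is the running minimum of those lowers. */
        i1 = nentries - 1;
        b = DBL_MAX;                /* right group empty so far */
        for (i2 = nentries - 1; i2 >= 0; i2--)
        {
            a = byUpper[i2].upper;
            while (i1 > i2 && byUpper[i1].upper > a)
            {
                if (byUpper[i1].lower < b)
                    b = byUpper[i1].lower;
                i1--;
            }
            consider_split(a, b);
        }
    }
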
*
* The initial tuple is assumed to be already locked.
*
- * This function doesn't check visibility, it just inconditionally marks the
+ * This function doesn't check visibility, it just unconditionally marks the
* tuple(s) as locked. If any tuple in the updated chain is being deleted
* concurrently (or updated with the key being modified), sleep until the
* transaction doing it is finished.
/*
* NB -- some of these transformations are only valid because we
* know the return Xid is a tuple updater (i.e. not merely a
- * locker.) Also note that the only reason we don't explicitely
+ * locker.) Also note that the only reason we don't explicitly
* worry about HEAP_KEYS_UPDATED is because it lives in
* t_infomask2 rather than t_infomask.
*/
*
* Crash-Safety: This module diverts from the usual patterns of doing WAL
* since it cannot rely on checkpoint flushing out all buffers and thus
- * waiting for exlusive locks on buffers. Usually the XLogInsert() covering
+ * waiting for exclusive locks on buffers. Usually the XLogInsert() covering
* buffer modifications is performed while the buffer(s) that are being
- * modified are exlusively locked guaranteeing that both the WAL record and
+ * modified are exclusively locked guaranteeing that both the WAL record and
* the modified heap are on either side of the checkpoint. But since the
* mapping files we log aren't in shared_buffers that interlock doesn't work.
*
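
The "usual pattern" this comment contrasts itself with looks roughly like the following. This is a hedged sketch in the modern XLogBeginInsert() idiom; RM_EXAMPLE_ID and XLOG_EXAMPLE_INFO are invented placeholders, and older releases passed an rdata chain to XLogInsert() instead:

    #include "postgres.h"
    #include "miscadmin.h"
    #include "access/xloginsert.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"

    /* Hypothetical routine showing the ordinary protocol: the WAL insert
     * happens while the modified buffer is exclusively locked, so a
     * checkpoint cannot slip between the WAL record and the page change. */
    static void
    log_page_change(Buffer buffer)
    {
        XLogRecPtr  recptr;

        LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
        START_CRIT_SECTION();

        /* ... apply the change to the page here ... */
        MarkBufferDirty(buffer);

        XLogBeginInsert();
        XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
        recptr = XLogInsert(RM_EXAMPLE_ID, XLOG_EXAMPLE_INFO); /* placeholders */
        PageSetLSN(BufferGetPage(buffer), recptr);

        END_CRIT_SECTION();
        LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
    }
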
/*
* The TID qual expressions will be computed once, any other baserestrict
- * quals once per retrived tuple.
+ * quals once per retrieved tuple.
*/
cost_qual_eval(&tid_qual_cost, tidquals, root);
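
A hedged sketch of how that rule turns into arithmetic, patterned on PostgreSQL's TID-scan costing: qpqual_cost covers all restriction clauses including the TID quals, so the TID quals' per-tuple cost is charged once at startup and backed out of the per-tuple charge.

    /* tid_qual_cost: cost of the TID quals, evaluated once per scan;
     * qpqual_cost: cost of all restriction quals, TID quals included. */
    startup_cost += qpqual_cost.startup + tid_qual_cost.per_tuple;
    cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple
        - tid_qual_cost.per_tuple;
    run_cost += cpu_per_tuple * ntuples;
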
/*
* Would this oper be found (given the right args) by regoperatorin?
- * If not, or if caller explicitely requests it, we need to qualify
+ * If not, or if caller explicitly requests it, we need to qualify
* it.
*/
if (force_qualify || !OperatorIsVisible(operator_oid))
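
A hedged sketch of what the qualify branch then does, patterned on the operator-formatting code; buf is assumed to be an already-initialized StringInfoData, and oprnamespace/oprname stand in for the operator's catalog fields:

    if (force_qualify || !OperatorIsVisible(operator_oid))
    {
        char   *nspname = get_namespace_name(oprnamespace);

        appendStringInfo(&buf, "%s.", quote_identifier(nspname));
    }
    appendStringInfoString(&buf, oprname);
    /* yields e.g. "=" when the operator is visible, "myschema.===" when
     * schema qualification is needed (names invented for illustration) */
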