Update the PG12-specific append test. Earlier, the test produced
a different chunk order in plans across PostgreSQL versions because
the ordering of the inserted data was not deterministic.
The ordering has since been made deterministic, and this change updates
the PG12 output accordingly, making the plan output consistent with
other PostgreSQL versions.
`INSERT`s on a hypertable require wrapping the top-level
`ModifyTable` plan node with a `CustomScan` node (`HypertableInsert`)
that sets up the tuple routing that occurs during execution. However,
using `CustomScan` nodes at the top level requires implementing a
number of workarounds for assumptions the planner makes about how
top-level nodes deal with targetlists.
The existing handling wasn't robust enough, and problems occurred when
JIT compilation was enabled for expression evaluation. This change
improves the handling so that it also works with JIT.
Intuitively, our `HypertableInsert` node should adopt the target list
of the `ModifyTable` subplan without further projection. For a
`CustomScan` this means setting the "input" `custom_scan_tlist` to the
`ModifyTable`'s target list and having an "output" `targetlist` that
references the `TupleDesc` that is created from the
`custom_scan_tlist` at execution time. While this seems
straightforward, several aspects of how `ModifyTable` nodes are
handled in the planner complicate this:
- First, `ModifyTable` doesn't set a `targetlist` when the node is
created. It is only set later in `set_plan_references` (`setrefs.c`)
if there's a `RETURNING` clause. Thus, there's no `targetlist`
available when the `HypertableInsert` plan is created.
- Second, top-level plan nodes, except for `ModifyTable` nodes, need
to have a `targetlist` matching `root->processed_tlist`. This is
asserted in `apply_tlist_labeling`, which is called in `create_plan`
(`createplan.c`) immediately after the `HypertableInsert` plan node
is created. `ModifyTable` is exempted because it doesn't always have
a `targetlist` that matches `processed_tlist`. So, even if we had
access to `ModifyTable`'s targetlist at plan creation we wouldn't be
able to use it since the `HypertableInsert` is a `CustomScan` and
thus not exempted.
- Third, a `CustomScan`'s `targetlist` should reference the attributes
of the `TupleDesc` that gets created from the `custom_scan_tlist` at
the start of execution. This means we need to turn the `targetlist`
into all `Var`s with attribute numbers that correspond to the
`TupleDesc` rather than to the result relation of the `ModifyTable`.
To get around these issues, we set the `CustomScan`'s `targetlist` to
`root->processed_tlist` when we create the plan node, and at the end
of planning when the `ModifyTable`'s `targetlist` is set, we go back
and fix up the `CustomScan`'s `targetlist`.
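As a rough sketch of that fix-up step, assuming illustrative function
and variable names rather than the actual TimescaleDB functions, the
`CustomScan`'s output `targetlist` can be rebuilt from the
`ModifyTable`'s `targetlist` using `INDEX_VAR` references once
`set_plan_references` has run:

```c
#include "postgres.h"
#include "nodes/makefuncs.h"
#include "nodes/plannodes.h"

/*
 * Hedged sketch, not the actual TimescaleDB function: called after
 * set_plan_references() has filled in the ModifyTable targetlist. The
 * ModifyTable targetlist becomes the "input" custom_scan_tlist, and the
 * "output" targetlist is rebuilt as INDEX_VAR Vars referencing the
 * TupleDesc that will be built from custom_scan_tlist at executor
 * startup.
 */
static void
fixup_hypertable_insert_tlist(CustomScan *cscan, ModifyTable *mt)
{
	List	   *output_tlist = NIL;
	AttrNumber	attno = 1;
	ListCell   *lc;

	cscan->custom_scan_tlist = mt->plan.targetlist;

	foreach(lc, mt->plan.targetlist)
	{
		TargetEntry *tle = lfirst_node(TargetEntry, lc);
		/* Point at attribute 'attno' of the scan tuple rather than at
		 * the result relation of the ModifyTable. */
		Var		   *var = makeVarFromTargetEntry(INDEX_VAR, tle);

		output_tlist = lappend(output_tlist,
							   makeTargetEntry((Expr *) var, attno,
											   tle->resname, tle->resjunk));
		attno++;
	}

	cscan->scan.plan.targetlist = output_tlist;
}
```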
If the telemetry response is malformed, strange errors will be
generated in the log because `DirectFunctionCall2` expects the called
function to return a non-NULL result and will throw an error if it is
NULL.
By printing the response in the log we can debug what went wrong.
The telemetry response processing is handled in the function
`process_response`, which was an internal function and cannot be tested
using unit tests.
This commit renames the function to follow conventions for extern
functions and adds test functions and tests to check that it can handle
well-formed responses.
No tests for malformed responses are added since the function cannot
currently handle them.
When comparing versions in the telemetry module when processing the
HTTP response, a direct call of `texteq` is made, but without using a
collation. This generates unnecessary errors in the log and also causes
the telemetry job to abort.
This commit fixes that by using the "C" collation when comparing the
version strings.
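A minimal sketch of the kind of call intended, with illustrative
function and variable names (the actual telemetry code may differ):

```c
#include "postgres.h"
#include "fmgr.h"
#include "catalog/pg_collation.h"
#include "utils/builtins.h"

/*
 * Hedged sketch: compare two version strings using an explicit "C"
 * collation so that texteq() does not error out because no collation
 * could be determined. Names here are illustrative.
 */
static bool
version_strings_equal(text *a, text *b)
{
	return DatumGetBool(DirectFunctionCall2Coll(texteq,
												C_COLLATION_OID,
												PointerGetDatum(a),
												PointerGetDatum(b)));
}
```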
PG12 introduced support for custom table access methods. While one
could previously set a custom table access method on a hypertable, it
wasn't propagated to chunks. This is now fixed, and a test is added to
show that chunks inherit the table access method of the parent
hypertable.
This change modifies the query test to use the test template
mechanism so that we can capture the plan differences introduced by
Postgres 12 pruning append nodes.
The PG12 output file also contains a plan change in which a
GroupAggregate is replaced by a less efficient Sort plus
HashAggregate. This new behavior is still being investigated, but it
is still a correct plan (just potentially suboptimal).
The `CMakeLists.txt` in `src/compat` was conditionally reusing the
variable `SOURCES` inherited from the parent directory, which contained
relative paths unrelated to this directory. This caused `target_sources`
to generate a warning.
This commit fixes this by setting `SOURCES` to be empty first in the
file.
Drop foreign key constraints from uncompressed chunks during
compression. This allows data deletions in FK-referenced tables to
cascade to compressed chunks. The foreign key constraints are restored
during decompression.
This change modifies the multi_transaction_index test to sort the
chunk_index selection commands by index_name. This fixes an issue
where the order would vary depending on the version of PostgreSQL.
This also made the multi_transaction_index test a test template so
that we can capture the explain difference for a plan that has an
append node pruned in PostgreSQL 12.
This just addresses some minor test changes for PostgreSQL 12.
Specifically, it changes the tests to set client_min_messages to
error, as fatal is no longer a valid setting. It also hides a new
NOTICE message in bgw_job_delete, which included a PID and was thus
nondeterministic.
Some variables are no longer used in PG12, which generates warnings.
This commit removes the variables from the code along with all their
uses.
PostgreSQL 12 introduced space optimizations for indexes, which caused
the adaptive chunking test to fail since its measure of chunk size
includes indexes that now report different sizes.
To fix this, the adaptive chunking test now has version-specific
output files.
This change moves the custom_type, ddl, ddl_single, insert, and
partition tests under test/sql, as well as the move and reorder
tests under /tsl/test/sql to our test template framework. This
allows them to have different golden files for different versions of
Postgres.
With the exception of the reorder test, all of these tests produced
new output in PG12 which only differed by the exclusion of append
nodes from plan explanations (this exclusion is a new feature of
PG12). The reorder test had some of these modified plans, but also
included a test using a table with OIDs, which is not valid in PG12.
This test was modified to allow the table creation to fail, and we
captured the expected output in the new golden file for PG12.
PG12 allows users to add a WHERE clause when copying from a file into
a table. This change adds support for such clauses on hypertables. It
also fixes an issue that would have arisen when a table being copied
into had a trigger that caused a row to be skipped.
The `pg_dump` command has slightly different informational output
across PostgreSQL versions, which causes problems for tests. This
change makes sure that all tests that use `pg_dump` go through the
appropriate wrapper script, where we can better control the output to
make it the same across PostgreSQL versions.
Note that the main `pg_dump` test still fails for other reasons that
will be fixed separately.
When `CLUSTER` is run in verbose mode on a hypertable, it prints the
chunks that are being clustered. The ordering of these chunks differs
in PG12 compared to previous versions, which caused the 'cluster' test
to fail.
The chunk index mappings that decide which chunks to cluster are now
sorted on chunk OID to make the output predictable across PostgreSQL
versions. This comes at a slight cost, but the overhead should be
negligible compared to the actual clustering.
One could optionally do the sorting only when "verbose" output is
requested, but this doesn't seem worthwhile as it requires additional
code logic to handle both cases. It can always be done later as an
optimization.
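A minimal sketch of such a sort, assuming an illustrative mapping
struct and names (the actual chunk index mapping structure in the
codebase may differ):

```c
#include "postgres.h"

/*
 * Hedged sketch with illustrative names: sort the chunk/index mappings
 * on chunk OID before clustering so that VERBOSE output lists chunks in
 * a stable order across PostgreSQL versions.
 */
typedef struct ChunkIndexMapping
{
	Oid			chunkoid;		/* chunk to cluster */
	Oid			indexoid;		/* index to cluster on */
} ChunkIndexMapping;

static int
chunk_index_mapping_cmp(const void *a, const void *b)
{
	const ChunkIndexMapping *cim_a = a;
	const ChunkIndexMapping *cim_b = b;

	if (cim_a->chunkoid < cim_b->chunkoid)
		return -1;
	if (cim_a->chunkoid > cim_b->chunkoid)
		return 1;
	return 0;
}

/* Usage, assuming an array of mappings:
 *   qsort(mappings, nmappings, sizeof(ChunkIndexMapping),
 *         chunk_index_mapping_cmp);
 */
```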
PostgreSQL 12 removes Append and MergeAppend nodes when there is only
one child node to scan. This removal happens when creating the plan
from the selected path, which confused the constraint-aware append
node since it expected an Append as child.
To fix this issue, constraint-aware append is no longer applied when
there is only one child in the append. Note that ChunkAppend is not
affected since it completely replaces the Append path, thus PostgreSQL
won't remove the Append since it is no longer there.
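A rough sketch of that check, with an illustrative function name (the
actual decision logic in the constraint-aware append code may differ):

```c
#include "postgres.h"
#include "optimizer/pathnode.h"

/*
 * Hedged sketch with an illustrative name: only wrap an append path in a
 * constraint-aware append node when it has more than one child. With a
 * single child, PG12's create_plan() elides the Append/MergeAppend node,
 * and the custom node would not find the child plan it expects.
 */
static bool
can_use_constraint_aware_append(Path *path)
{
	List	   *subpaths = NIL;

	switch (nodeTag(path))
	{
		case T_AppendPath:
			subpaths = castNode(AppendPath, path)->subpaths;
			break;
		case T_MergeAppendPath:
			subpaths = castNode(MergeAppendPath, path)->subpaths;
			break;
		default:
			return false;
	}

	return list_length(subpaths) > 1;
}
```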
In addition, this change updates the expected file for the append test
on PG12, which had not previously been updated.
PG12 introduced a new feature that allows defining auto-generated
columns. This PR adds a check that prevents using such columns for
hypertable partitioning.
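A minimal sketch of such a check, assuming illustrative names and
TimescaleDB's PG12_GE version macro (the actual validation code may
differ):

```c
#include "postgres.h"
#include "access/attnum.h"
#include "utils/rel.h"

/*
 * Hedged sketch with illustrative names: reject a partitioning column
 * that is a generated column. The attgenerated attribute field only
 * exists on PG12 and later.
 */
static void
validate_partitioning_column(Relation rel, AttrNumber attno)
{
#if PG12_GE
	Form_pg_attribute attr =
		TupleDescAttr(RelationGetDescr(rel), AttrNumberGetAttrOffset(attno));

	if (attr->attgenerated != '\0')
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
				 errmsg("cannot use generated column \"%s\" to partition a hypertable",
						NameStr(attr->attname))));
#endif
}
```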
The `ddl_hook` test failed on PG12 because dropped objects in the DDL
hook had a different order compared to previous versions. This change
therefore sorts the dropped objects list before processing so that the
order of objects printed in the test is predictable.
The sorting is done on object type, which assumes that the order of
objects of the same type is already predictable. This seems sufficient
for the test to pass.
The `plan_gapfill` test output for PG12 wasn't updated after the notice
about an expired license was removed from the test. This change updates
the test to account for the removal of the license message.
The WITH OIDS option doesn't exist in PG12. This fix moves the test of
blocking compression for tables WITH OIDS into a separate file, which
is run only for PostgreSQL versions before 12.
PostgreSQL 12 changed the log level in client tools, such as
`pg_dump`, which makes some of our tests fail due to different log
level labels.
This change filters and modifies the log level output of `pg_dump` in
earlier PostgreSQL versions to adopt the new PostgreSQL 12 format.
PostgreSQL 12 changed the default rounding of floating-point values
for display. This caused different output for PostgreSQL 12
in some tests. Setting `extra_float_digits=0` in our test
configuration adopts the old behavior for all PostgreSQL versions,
thus fixing the test output issues.
Cache queries support multiple optional behaviors, such as "missing
ok" (do not fail on cache miss) and "no create" (do not create a new
entry if one doesn't exist in the cache). With multiple boolean
parameters, the query API has become unwieldy, so this change turns
these booleans into a single flags parameter.
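A minimal sketch of such a flags parameter, with illustrative names
(the actual cache API may differ):

```c
/*
 * Hedged sketch with illustrative names: fold the optional cache query
 * behaviors into a single bit-flags parameter instead of multiple
 * booleans.
 */
typedef enum CacheQueryFlags
{
	CACHE_FLAG_NONE = 0,
	CACHE_FLAG_MISSING_OK = 1 << 0, /* do not fail on a cache miss */
	CACHE_FLAG_NOCREATE = 1 << 1,	/* do not create a new entry on a miss */
} CacheQueryFlags;

/*
 * Example of how a cache lookup might take the flags (the function name
 * is illustrative):
 *
 *   ht = hypertable_cache_get_entry(cache, relid,
 *                                   CACHE_FLAG_MISSING_OK | CACHE_FLAG_NOCREATE);
 */
```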
This change refactors our main planner hooks in `planner.c` with the
intention of providing a consistent way to classify planned relations
across hooks. In our hooks, we'd like to know whether a planned
relation (`RelOptInfo`) is one of the following:
* Hypertable
* Hypertable child (a hypertable can appear as a child of itself)
* Chunk as a child of hypertable (from expansion)
* Chunk as standalone (operation directly on chunk)
* Any other relation
Previously, there was no way to consistently know which of these one
was dealing with. Instead, a mix of various functions was used without
"remembering" the classification for reuse in later sections of the
code.
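A minimal sketch of such a classification, using illustrative names
(the actual enum in `planner.c` may differ):

```c
/*
 * Hedged sketch with illustrative names: the relation classes listed
 * above, remembered per planned relation so that later planner hook
 * stages can reuse the classification instead of recomputing it.
 */
typedef enum TsRelType
{
	TS_REL_HYPERTABLE,			/* hypertable */
	TS_REL_HYPERTABLE_CHILD,	/* hypertable as a child of itself */
	TS_REL_CHUNK_CHILD,			/* chunk as a child of a hypertable (expansion) */
	TS_REL_CHUNK_STANDALONE,	/* operation directly on a chunk */
	TS_REL_OTHER,				/* any other relation */
} TsRelType;
```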
When classifying relations according to the above categories, the only
source of truth about a relation is our catalog metadata. In case of
hypertables, this is cached in the hypertable cache. However, this
cache is read-through, so, in case of a cache miss, the metadata will
always be scanned to resolve a new entry. To avoid unnecessary
metadata scans, this change introduces a way to do cache-only
queries. This requires maintaining a single warmed cache throughout
planning and is enabled by using a planner-global cache object. The
pre-planning query processing warms the cache by populating it with
all hypertables in the to-be-planned query.
The `CustomScan` node is oddly hard-coded to use fixed tuple table
slots in its initialization, thus expecting both the in and out slots
(scan and result slots) to always have the same type (underlying ops).
However, when implementing an append-type node on `CustomScan`, the
children that supply tuples can have slots of any type. In PG12, the
regular `Append` node just passes on these slots, signaling this by
setting `resultopsfixed = false`.
This change adopts the behavior of `Append` for our custom
`ChunkAppend` and `ConstraintAwareAppend` nodes.
An alternative would be to copy the contents of the child's slot into
the `CustomScan`'s own scan slot to get it to the "expected" type. But
this would require materializing the tuple, which seems unnecessary.
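A minimal sketch of what the chosen approach looks like in a custom
node's initialization, with an illustrative function name and
TimescaleDB's PG12_GE macro (the actual node code may differ):

```c
#include "postgres.h"
#include "nodes/execnodes.h"

/*
 * Hedged sketch with an illustrative function name: mimic what PG12's
 * ExecInitAppend() does for the regular Append node, declaring that the
 * result slot type is not fixed so child tuples in slots of any type can
 * be passed through without materialization.
 */
static void
custom_append_begin(CustomScanState *node, EState *estate, int eflags)
{
#if PG12_GE
	node->ss.ps.resultopsset = true;
	node->ss.ps.resultopsfixed = false;
#endif

	/* ... remaining initialization of the custom append node ... */
}
```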
The expected output for the cagg query test on PG12 differs from PG11
due to the removal of unnecessary nodes from the query plans. The costs
also differ slightly between the PG12 and PG11 query plans.
The PG12 implementation of ExecBRInsertTriggers changes the given slot
in place and returns a Boolean indicating success, which differs from
earlier implementations that returned a new slot value. This commit
fixes a bug in the compatibility macro and its usage.
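A minimal sketch of such a compatibility wrapper, with an illustrative
macro name and TimescaleDB's PG12_GE macro (the actual macro in the
codebase may differ):

```c
/*
 * Hedged sketch with an illustrative macro name: present a uniform
 * "return true unless the row should be skipped" interface. On PG12,
 * ExecBRInsertTriggers() modifies the slot in place and returns a bool.
 * On earlier versions it returns the (possibly different) slot, with
 * NULL meaning the row was skipped, so the result must be assigned back
 * to the caller's slot variable (slot must therefore be an lvalue).
 */
#if PG12_GE
#define ExecBRInsertTriggersCompat(estate, relinfo, slot) \
	ExecBRInsertTriggers(estate, relinfo, slot)
#else
#define ExecBRInsertTriggersCompat(estate, relinfo, slot) \
	(((slot) = ExecBRInsertTriggers(estate, relinfo, slot)) != NULL)
#endif
```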
New planner code was imported for PG12, but this code isn't needed for
other versions and resulted in compilation errors. This change fixes
these errors by not compiling the code for versions where it is not
needed.
This change adopts the PG12 table/index scan API for the adaptive
chunking functions that scan for min/max values in a table.
PostgreSQL versions older than 12 use compatibility functions and
wrappers for the new API.
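A minimal sketch of what such compatibility wrappers might look like,
with illustrative names (the actual wrappers in the codebase may
differ):

```c
/*
 * Hedged sketch with illustrative wrapper names: use the PG12 table
 * access method scan API where available and fall back to the old heap
 * scan functions on earlier versions.
 */
#if PG12_GE
#include "access/tableam.h"
#define ts_scan_begin(rel, snapshot, nkeys, keys) \
	table_beginscan(rel, snapshot, nkeys, keys)
#define ts_scan_end(scan) table_endscan(scan)
#else
#include "access/heapam.h"
#define ts_scan_begin(rel, snapshot, nkeys, keys) \
	heap_beginscan(rel, snapshot, nkeys, keys)
#define ts_scan_end(scan) heap_endscan(scan)
#endif
```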
A lot of planner code was imported to support hypertable expansion in
PG12. This code has now been moved to an `import` directory to avoid
mixing it with regular TimescaleDB code.
It is not necessary to create a new range table entry for each chunk
during inserts. Instead, we can point to the range table entry of the
hypertable's root table.
The INSERT and COPY paths have been refactored to better handle
differences between PostgreSQL versions. In particular, PostgreSQL 12
introduced the new table access method (AM) API, which ties tuple
table slots to specific table AM implementations, requiring more
careful management of those data structures.
The code tries to adopt the new (PG12) API to the extent possible,
providing compatibility layers and functions for older PostgreSQL
versions where needed.
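As an illustration of the kind of compatibility layer involved, a
hedged sketch with an illustrative wrapper name (the actual
compatibility code may differ):

```c
/*
 * Hedged sketch with an illustrative wrapper name: on PG12, create a
 * tuple table slot bound to the relation's table access method; on
 * earlier versions, fall back to a plain heap-tuple slot. The reglist
 * argument is simply ignored pre-12 in this simplified sketch.
 */
#if PG12_GE
#include "access/tableam.h"
#define ts_slot_create_for_rel(rel, reglist) table_slot_create(rel, reglist)
#else
#include "executor/tuptable.h"
#include "utils/rel.h"
#define ts_slot_create_for_rel(rel, reglist) \
	MakeSingleTupleTableSlot(RelationGetDescr(rel))
#endif
```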
Replace relation_close with the appropriate specific functions. Remove
a TODO comment. Remove unnecessary headers. Capitalize a new macro.
Remove code duplication in compatibility macros. Improve a comment.
Correct conditions in #ifdefs, add missing includes, remove and
rearrange existing includes, and replace PG12 with PG12_GE for forward
compatibility. Also fix a number of places, missed earlier, where
relation_close should be replaced with table_close.
relation_open is a general function that is wrapped by more specific
functions for each relation type. This commit replaces calls to the
general function with the specific functions, which check that the
relation is of the correct type.
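A small hedged illustration of the substitution, assuming PG12 (or the
corresponding compatibility wrappers on earlier versions); the OIDs and
lock levels are placeholders:

```c
#include "postgres.h"
#include "access/genam.h"
#include "access/table.h"
#include "storage/lockdefs.h"
#include "utils/rel.h"

/*
 * Hedged illustration: prefer the type-specific open/close functions,
 * which check the relation's relkind, over the generic relation_open().
 */
static void
open_with_specific_functions(Oid tableid, Oid indexid)
{
	Relation	table = table_open(tableid, AccessShareLock);	/* was relation_open() */
	Relation	index = index_open(indexid, AccessShareLock);	/* was relation_open() */

	/* ... work with the relations ... */

	index_close(index, AccessShareLock);
	table_close(table, AccessShareLock);
}
```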