The modify table state is not created for empty tables, which leads
to a NULL pointer dereference.
Starting with PG12 the planner injects a gating plan node above
any node that has pseudoconstant quals.
To fix this, we need to check for such a gating node and handle the case.
We could optionally prune the extra node, since there's already
such a node below ChunkDispatch.
Fixes #1883.
Change appveyor script to run regression tests against PG12 on
Windows. This is a short-term solution using the official PG alpine
image. EXEC_BACKEND is not enabled.
When running a parallel ChunkAppend query, the code would use a
subplan selection routine for the leader that would return an empty
slot unless it determined there were no workers started. While this
had the desired effect of forcing the workers to do the work of
evaluating the subnodes, returning an empty slot is not always safe.
In particular, Append nodes will interpret an empty slot as a sign
that a given subplan has completed, so a plan resulting in a
parallel MergeAppend under an Append node (very possible under a
UNION of hypertables) might fail to execute some or all of the
subplans.
This change modifies the ChunkAppend so that the leader uses the
same subplan function as the workers. This may result in the leader
being less responsive as it tries to fetch a batch of results on its
own if no worker has any results yet. However, if this isn't the
desired behavior, PostgreSQL already exposes a GUC option,
parallel_leader_participation, which will prevent the leader from
executing any subplans.
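Leader participation can be controlled with the standard PostgreSQL GUC, e.g.:

```sql
-- Prevent the leader from executing any subplans itself, leaving
-- all subplan evaluation to the parallel workers.
SET parallel_leader_participation = off;
```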
Windows builds, for Debug and Release configurations, are now tested
using GitHub Actions. The build test covers all currently supported
PostgreSQL versions (9.6, 10, 11, 12) using a build matrix.
Currently, the TimescaleDB regression test suite is not run since it
requires a Windows-specific test client that we do not
support. Instead, the binaries are tested by doing a `CREATE
EXTENSION` to verify that they load and run without, e.g., missing
symbols or similar.
The test configuration is optimized for speed by using a cache for the
PostgreSQL installation. This avoids repeated downloads and
installations of PostgreSQL from EnterpriseDB's servers. But it also
means that we won't have a full installation on cache hits, which
means no running PostgreSQL service. This requires manual PostgreSQL
starts, which is what we want anyway since we need to preload our
extension.
It's anticipated that this build configuration can be extended to
produce release binaries in the future.
This change removes a `DEBUG1` level log message that was added to the
`ts_extension_is_loaded` check. The problem with this message is that
it is called very frequently, e.g., in planning, and it will flood the
output log.
This commit adds the optimizations we do for regular time_bucket,
pushing down sort orders to the underlying time column, to time_bucket_gapfill.
This is made slightly more invasive by the fact that time_bucket_gapfill is
marked as VOLATILE (to prevent postgres from moving it in inappropriate ways).
We handle this by treating the time_bucket_gapfill EquivalenceClass as not
volatile, and letting the usual sort machinery do its job. This should be
safe since we've already planned all
the per-relation paths and have installed the appropriate metadata to know we
will gapfill.
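For illustration, a query of the kind that benefits from this (the `metrics` hypertable and its columns are hypothetical):

```sql
-- The ORDER BY on the gapfilled bucket can now be pushed down to a
-- sorted scan of the underlying time column.
SELECT time_bucket_gapfill('1 hour', time) AS bucket,
       avg(value)
FROM metrics
WHERE time >= '2020-01-01' AND time < '2020-01-02'
GROUP BY bucket
ORDER BY bucket;
```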
The `plan_hypertable_cache` test used `current_date` to generate data,
which is inherently flaky since it can create a different number of
chunks depending on which date you start at. When the number of chunks
differ, the test output changes too.
Reorder policy does not skip compressed chunks when selecting the
next chunk to apply reordering to. This causes an error in job
execution since it's not possible to reorder compressed chunks.
With this fix the job excludes compressed chunks from selection.
Fixes #1810
When classify_relation is called for relations of subqueries it
would not be able to correctly classify the relation unless it
was already in cache. This patch changes classify_relation to
call get_hypertable without CACHE_FLAG_NOCREATE when the RangeTblEntry
has the inheritance flag set.
If a binary upgrade is in progress (when using `pg_upgrade`) the
per-database setting of `timescaledb.restoring` can be included in the
dump, which causes `pg_upgrade` to fail.
This commit fixes this by checking the global variable
`IsBinaryUpgrade` and not refreshing cache if we are in the middle of
doing a binary upgrade.
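The problematic setting is a per-database GUC of this form (database name hypothetical):

```sql
-- When included in a dump, this per-database setting caused
-- `pg_upgrade` to fail before this fix.
ALTER DATABASE tsdb SET timescaledb.restoring = 'on';
```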
Fixes #1844
The internal chunk API is updated to avoid returning `Chunk` objects
that are marked `dropped=true` along with some refactoring, hardening,
and cleanup of the internal chunk APIs. In particular, apart from
being returned in a dropped state, chunks could also be returned in a
partial state (without all fields set, partial constraints,
etc.). None of this is allowed as of this change. Further, lock
handling was unclear when joining chunk metadata from different
catalog tables. This is made clear by having chunks built within
nested scan loops so that proper locks are held when joining in
additional metadata (such as constraints).
This change also fixes issues with dropped chunks that caused chunk
metadata to be processed many times instead of just once, leading to
potential bugs or bad performance.
In particular, since the introduction of the `dropped` flag, chunk
metadata can exist in two states:
1. `dropped=false`
2. `dropped=true`
When dropping chunks (e.g., via `drop_chunks`,
`DROP TABLE <chunk>`, or `DROP TABLE <hypertable>`) there are also two
modes of dropping:
1. DELETE row
2. UPDATE row and SET dropped=true
The deletion mode and the current state of the chunk lead to a
cross-product resulting in 4 cases when dropping/deleting a chunk:
1. DELETE row when dropped=false
2. DELETE row when dropped=true
3. UPDATE row when dropped=false
4. UPDATE row when dropped=true
Unfortunately, the code didn't distinguish between these cases. In
particular, case (4) should not be able to happen, but since it did, it
led to a recursive loop where an UPDATE created a new tuple that was
then visited again in the same loop, and so on.
To fix this recursive loop and make the code for dropping chunks less
error-prone, a number of assertions have been added, including some
new lightweight scan functions to access chunk information without
building a full-blown chunk.
This change also removes the need to provide the number of constraints
when scanning for chunks. This was really just a hint, and it is no
longer needed since all constraints are now joined in.
The formatting requires `clang-format` version 7 or 8, but on a later
distro the script will find a version that cannot be used and default
to using Docker, even if the user installs an earlier version of
`clang-format` alongside the default one.
This commit fixes this by looking for `clang-format-8` and
`clang-format-7` before `clang-format`.
Trying to use an invalid time for a job raises an error.
This case should be checked by the scheduler. Failure to
do so results in the scheduler being killed.
This change fixes various compiler warnings that show up on different
compilers and platforms. In particular, MSVC is sensitive to functions
that do not return a value after throwing an error since it doesn't
realize that the code path is not reachable.
When copying from standard input, the range table was not set up to
handle the constraints for the target table and was instead initialized
to NULL. In addition, the range table index was set to zero, causing an
underflow when executing the constraint check. This commit fixes this
by initializing the range table and setting the index correctly.
The code worked correctly for PG12, so the code is also refactored to
ensure that the range table and index are set the same way in all
versions.
Fixes #1840
When doing a UNION ALL query between a hypertable and a regular
table, the hypertable would not get expanded, leading to no data
from the hypertable being included in the result set.
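A minimal query of the affected shape (table and column names hypothetical):

```sql
-- Before this fix, rows from the hypertable `metrics` could be
-- missing entirely from the result.
SELECT time, value FROM metrics
UNION ALL
SELECT time, value FROM plain_table;
```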
PostgreSQL 12 has been supported since 1.7, while 9.6 and 10 are deprecated.
This PR updates the build-from-source instructions to reflect this change
and uses 1.7.0 in the examples.
Add a custom target `clang-tidy` that will run `run-clang-tidy` if
it is available and the compilation database is enabled. If the
compilation database is not enabled but `run-clang-tidy` is found,
a warning will be printed and the custom target will not be added.
Due to the changes to the default view behaviour of continuous
aggregates, we need a new test suite for the update tests for 1.7.0.
This patch also changes the update tests to run on cron for 9.6 and 10,
and on pull requests for 11 and 12.
The view definition for the realtime aggregation union view
would use an INT Const as argument to cagg_watermark, but
the function argument is defined as OID, leading to a cast being
inserted in a view definition restored from a backup. This leads
to a difference between the original view definition and a view
definition from a restored view.
The symbol `pgwin32_socket_strerror` was undefined on Windows builds at
PG12 and later. This is because the function was removed and
`pg_strerror` was introduced to be used on all platforms instead.
This commit fixes the issue by ensuring to use `pg_strerror` on PG12
and later, and `pgwin32_socket_strerror` on Windows builds before PG12.
Found using `clang-tidy`.
This release adds major new features and bugfixes since the 1.6.1 release.
We deem it moderate priority for upgrading.
This release adds the long-awaited support for PostgreSQL 12 to TimescaleDB.
This release also adds a new default behavior when querying continuous
aggregates that we call real-time aggregation. A query on a continuous
aggregate will now combine materialized data with recent data that has
yet to be materialized.
Note that only newly created continuous aggregates will have this
real-time query behavior, although it can be enabled on existing
continuous aggregates with a configuration setting as follows:
ALTER VIEW continuous_view_name SET (timescaledb.materialized_only=false);
This release also moves several data management lifecycle features
to the Community version of TimescaleDB (from Enterprise), including
data reordering and data retention policies.
**Major Features**
* #1456 Add support for PostgreSQL 12
* #1685 Add support for real-time aggregation on continuous aggregates
**Bugfixes**
* #1665 Add ignore_invalidation_older_than to timescaledb_information.continuous_aggregates view
* #1750 Handle undefined ignore_invalidation_older_than
* #1757 Restrict watermark to max for continuous aggregates
* #1769 Add rescan function to CompressChunkDml CustomScan node
* #1785 Fix last_run_success value in continuous_aggregate_stats view
* #1801 Include parallel leader in plan execution
* #1808 Fix ts_hypertable_get_all for compressed tables
* #1828 Ignore dropped chunks in compressed_chunk_stats
**Licensing changes**
* Reorder and policies around reorder and drop chunks are now
accessible to community users, not just enterprise
* Gapfill functionality no longer warns about expired license
**Thanks**
* @t0k4rt for reporting an issue with parallel chunk append plans
* @alxndrdude for reporting an issue when trying to insert into compressed chunks
* @Olernov for reporting and fixing an issue with show_chunks and drop_chunks for compressed hypertables
* @mjb512 for reporting an issue with INSERTs in CTEs in cached plans
* @dmarsh19 for reporting and fixing an issue with dropped chunks in compressed_chunk_stats
There is a potential null pointer dereference in that `raw_hypertable`
might be NULL when counting the number of continuous aggregates
attached. This commit fixes this by assuming that no continuous
aggregates are attached if `raw_hypertable` is NULL.
Found using `clang-tidy`.
When calling show_chunks or drop_chunks without specifying
a particular hypertable, TimescaleDB iterates through all
existing hypertables and builds a list. While doing this,
it adds the internal '_compressed_hypertable_*' tables,
which leads to incorrect behaviour of the
ts_chunk_get_chunks_in_time_range function. This fix
filters out the internal compressed tables while scanning
in the ts_hypertable_get_all function.
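The affected calls are the forms without an explicit hypertable argument, e.g. (pre-2.0 API):

```sql
-- These iterate over all hypertables; the internal
-- _compressed_hypertable_* tables are now filtered out.
SELECT show_chunks();
SELECT drop_chunks(older_than => interval '1 month');
```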
This adds a test for INSERTs with cached plans. This test caused
a segfault before 1.7 but was fixed independently by the refactoring
of the INSERT path when adding PG12 support.
The memory leak job pulls TSDB dev tools from Bitbucket, but only two
files from that repository are needed.
This commit copies the files from the `tsdb-dev-tools` repository, removes
the need to clone the repository, updates the memory leak job to use
these files, and removes the two secrets containing the environment
variables `USR` and `PASSWORD`.
This change removes some place-holder code for supporting `INSTEAD OF`
triggers in the code that implements `COPY` support on
hypertables. The code was introduced when updating the copy path with
PostgreSQL 12 changes. However, such triggers are only supported on
views, and, since the hypertable copy path only inserts on chunks, the
code isn't needed.
PG12 by default inlines CTEs from the WITH clause into the rest of the
query. MATERIALIZED is used in the rowsecurity tests to produce the same
query plans as in PG11. This commit adds query plan tests with the
default behavior, which is equivalent to NOT MATERIALIZED.
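For reference, the two spellings in PG12:

```sql
-- PG12 default: the CTE may be inlined into the outer query
-- (equivalent to NOT MATERIALIZED).
WITH c AS (SELECT * FROM t WHERE x > 0)
SELECT * FROM c;

-- Forces the PG11-style plan: the CTE is computed once and scanned.
WITH c AS MATERIALIZED (SELECT * FROM t WHERE x > 0)
SELECT * FROM c;
```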
Since hypertable expansion happens later in PG12, the propagated
constraints will not be pushed down to the actual scans but stay
as join filters. This patch adds the constraint to baserestrictinfo
or as a join filter depending on the constraint type.