We used to add join clauses that referenced a compressed column at the
level of the compressed scan, and later remove them. This is wrong and
useless, so we simply don't add them anymore.
When looking up a pathkey for the compressed scan, we used to do a lot of
work, including a quadratic lookup through all the equivalence members,
only to arrive at the same canonical pathkey we started from. Remove this
useless code for a significant planning speedup.
This uncovers two bugs, in the parameterization of decompressed paths and
in the generation of equivalence members for segmentby columns; fix them
as well.
The parallel version of the ChunkAppend node uses shared memory to
coordinate the plan selection for the parallel workers. If the workers
perform the startup exclusion individually, they may each choose different
subplans (e.g., due to a "constant" function that claims to be constant
but returns different results). In that case, the workers disagree about
the plans, which leads to hard-to-debug problems and out-of-bounds reads
when pstate->next_plan is used for subplan selection.
With this patch, startup exclusion is only performed in the parallel
leader. The leader stores this information in shared memory. The
parallel workers read the information from shared memory and don't
perform startup exclusion.
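A minimal sketch of this leader-only flow, with purely illustrative type and function names (the real ChunkAppend state structures and the startup-exclusion routine differ), could look like this:

```c
#include "postgres.h"
#include "access/parallel.h"	/* IsParallelWorker() */

/* illustrative shared state; one flag per subplan, filled in by the leader */
typedef struct ParallelChunkAppendState
{
	bool		initialized;
	bool		valid_plans[FLEXIBLE_ARRAY_MEMBER];
} ParallelChunkAppendState;

/* hypothetical helper standing in for the actual startup exclusion code */
extern void perform_startup_exclusion(bool *valid_plans, int nplans);

static void
choose_valid_subplans(bool *valid_plans, int nplans, ParallelChunkAppendState *pstate)
{
	if (!IsParallelWorker())
	{
		/* leader: run startup exclusion exactly once and publish the result */
		perform_startup_exclusion(valid_plans, nplans);
		memcpy(pstate->valid_plans, valid_plans, sizeof(bool) * nplans);
		pstate->initialized = true;
	}
	else
	{
		/* worker: adopt the leader's selection instead of re-evaluating */
		Assert(pstate->initialized);
		memcpy(valid_plans, pstate->valid_plans, sizeof(bool) * nplans);
	}
}
```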
The telemetry isolation test `telemetry_iso` does not test anything and
does not seem to work, so it is removed. The debug waitpoint was taken
in the same session, so the waitpoint was not waited on.
This moves the definitions of `debug_waitpoint_enable`,
`debug_waitpoint_disable`, and `debug_waitpoint_id` so that they are
always defined in debug builds, and modifies the existing tests accordingly.
This means that it is no longer necessary to generate isolation test
files from templates (in most cases), and it will be straightforward to
use these functions in debug builds.
The debug utilities can be disabled by setting the option
`ENABLE_DEBUG_UTILS` to `OFF`.
Because the postman-echo endpoint redirects HTTP requests to HTTPS, we
get an unexpected 301 response in the tests, leading to repeated test
failures. This commit removes these function calls.
This PR improves the way the number of parallel workers for the
DecompressChunk node is calculated. Since
1a93c2d482b50a43c105427ad99e6ecb58fcac7f, no partial paths are generated
for small relations, which could cause a fallback to a sequential plan
and a performance regression. This patch ensures that a partial path is
created again for all relations.
This release contains bug fixes since the 2.11.1 release.
We recommend that you upgrade at the next available opportunity.
**Features**
* #5923 Feature flags for TimescaleDB features
**Bugfixes**
* #5680 Fix DISTINCT query with JOIN on multiple segmentby columns
* #5774 Fix two bugs in decompression sorted merge code
* #5786 Ensure pg_config --cppflags are passed
* #5906 Fix quoting owners in SQL scripts
* #5912 Fix crash in 1-step integer policy creation
**Thanks**
* @mrksngl for submitting a PR to fix extension upgrade scripts
* @ericdevries for reporting an issue with DISTINCT queries using
  segmentby columns of a compressed hypertable
Added logical replication messages (PG14+) as markers for (partial)
decompression events (mutable compression), which makes it possible to
differentiate inserts happening as part of decompression from actual
inserts by the user, and to filter the former out of the event stream.
While some tools may be interested in all events, syncing the pure
"state" (without internal behavior) is required for others.
As of now this PR is missing tests. I wonder if anyone has a good idea
how to create an automatic test for it.
PG16 replaces the pg_foo_ownercheck() functions with a common
object_ownercheck() function. Added a new compat function for
pg_class_ownercheck(), which is affected by this change, and replaced all
its callers.
postgres/postgres@afbfc029
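A compat shim in this spirit (a sketch only, assuming a `PG16_GE` version macro for conditional compilation; only the pg_class case is shown) could look like:

```c
#include "postgres.h"
#include "catalog/pg_class_d.h"	/* RelationRelationId */
#include "utils/acl.h"

#if PG16_GE
#define pg_class_ownercheck_compat(relid, roleid) \
	object_ownercheck(RelationRelationId, relid, roleid)
#else
#define pg_class_ownercheck_compat(relid, roleid) pg_class_ownercheck(relid, roleid)
#endif
```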
PG16 replaced most of the aclcheck functions with a common object_aclcheck
function. Updated the various aclcheck calls in the code to use the new
function when compiled with PG16.
postgres/postgres@c727f511
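As a sketch (assuming the same `PG16_GE` macro; the function-ACL case is shown as one example), the corresponding shim could be:

```c
#include "postgres.h"
#include "catalog/pg_proc_d.h"	/* ProcedureRelationId */
#include "utils/acl.h"

#if PG16_GE
#define pg_proc_aclcheck_compat(proc_oid, roleid, mode) \
	object_aclcheck(ProcedureRelationId, proc_oid, roleid, mode)
#else
#define pg_proc_aclcheck_compat(proc_oid, roleid, mode) \
	pg_proc_aclcheck(proc_oid, roleid, mode)
#endif
```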
PG16 also optimized the PlaceHolderInfo lookups to perform in constant
time, so there is no need for an additional cheap/quick test using
bms_overlap to see if the PHV might be evaluated in the outer rels.
postgres/postgres@6569ca439
postgres/postgres@b3ff6c742
We need to ensure that we only try to take a lock if a valid transaction
is active. Otherwise, an assert is hit because of an error being raised
within error handling.
Fixes #5917
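A minimal sketch of the guard (the helper name and lock mode are illustrative, not the actual call site):

```c
#include "postgres.h"
#include "access/xact.h"	/* IsTransactionState() */
#include "storage/lmgr.h"	/* LockRelationOid() */

static void
lock_relation_if_in_transaction(Oid relid)
{
	/*
	 * Only take the lock when a transaction is active; otherwise an error
	 * raised while locking would fire during error handling and trip the
	 * "error within an error" assertion.
	 */
	if (!IsTransactionState())
		return;

	LockRelationOid(relid, AccessShareLock);
}
```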
PG16 adds a new parameter to DefineIndex, total_parts, that takes in the
total number of direct and indirect partitions of the relation. Updated
all the callers to pass either the actual number if it is known or -1 if
it is unknown at that point.
postgres/postgres@27f5c712
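A hedged sketch of such a compat wrapper (assuming a `PG16_GE` macro; callers that do know the partition count would pass it instead of -1):

```c
#if PG16_GE
#define DefineIndexCompat(tableId, stmt, indexRelationId, parentIndexId, parentConstraintId, \
						  is_alter_table, check_rights, check_not_in_use, skip_build, quiet) \
	DefineIndex(tableId, stmt, indexRelationId, parentIndexId, parentConstraintId, \
				-1 /* total_parts unknown here */, is_alter_table, check_rights, \
				check_not_in_use, skip_build, quiet)
#else
#define DefineIndexCompat(tableId, stmt, indexRelationId, parentIndexId, parentConstraintId, \
						  is_alter_table, check_rights, check_not_in_use, skip_build, quiet) \
	DefineIndex(tableId, stmt, indexRelationId, parentIndexId, parentConstraintId, \
				is_alter_table, check_rights, check_not_in_use, skip_build, quiet)
#endif
```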
PG16 adds a new boolean parameter to the ExecInsertIndexTuples function
to denote whether the index is a BRIN index, which is then used to
determine if the index update can be skipped. The upstream change also
removes the INDEX_ATTR_BITMAP_ALL enum value.
Adapt to these changes by updating the compat function to accommodate the
new parameter added to the ExecInsertIndexTuples function and by using an
alternative for the removed INDEX_ATTR_BITMAP_ALL enum value.
postgres/postgres@19d8e23
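A sketch of the corresponding compat wrapper (assuming `PG16_GE`; passing false for the new parameter keeps the pre-PG16 behavior of updating all indexes):

```c
#if PG16_GE
#define ExecInsertIndexTuplesCompat(rri, slot, estate, update, noDupErr, specConflict, arbiterIndexes) \
	ExecInsertIndexTuples(rri, slot, estate, update, noDupErr, specConflict, arbiterIndexes, \
						  /* onlySummarizing = */ false)
#else
#define ExecInsertIndexTuplesCompat(rri, slot, estate, update, noDupErr, specConflict, arbiterIndexes) \
	ExecInsertIndexTuples(rri, slot, estate, update, noDupErr, specConflict, arbiterIndexes)
#endif
```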
Since we want to be able to handle updates with unusual user names, we
add some to the update tests and create policies on them. This creates
jobs with the unusual names as owners.
This PR adds several GUCs that allow enabling/disabling major
timescaledb features:
- enable_hypertable_create
- enable_hypertable_compression
- enable_cagg_create
- enable_policy_create
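A minimal sketch of how one of these flags could be registered through the standard GUC machinery (the variable name, description, and context are illustrative, not the exact implementation):

```c
#include "postgres.h"
#include "utils/guc.h"

static bool ts_guc_enable_hypertable_create = true;

static void
register_feature_flags(void)
{
	DefineCustomBoolVariable("timescaledb.enable_hypertable_create",
							 "Enable hypertable creation",
							 /* long_desc */ NULL,
							 &ts_guc_enable_hypertable_create,
							 /* bootValue */ true,
							 PGC_USERSET,
							 /* flags */ 0,
							 /* check_hook */ NULL,
							 /* assign_hook */ NULL,
							 /* show_hook */ NULL);
}
```

The feature code would then consult the flag and raise an error when the corresponding feature is disabled.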
When referring to a role from a string type, it must be properly quoted
using pg_catalog.quote_ident before it can be cast to regrole.
Fixed this, especially in update scripts.
PG16 removes the recursion-marker values used to handle certain
subcommands during ALTER TABLE execution and provides an alternative
flag. Removed the references to these recursion-marker values from the
timescaledb code.
postgres/postgres@840ff5f4
Previously, when a retention policy existed on the underlying hypertable,
we would get a segmentation fault when trying to add a Cagg refresh
policy, because a bool was passed instead of a pointer-to-bool argument
to the function `ts_jsonb_get_int64_field` in a particular code path.
Fixed by passing the expected pointer type.
Fixes #5907
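For illustration only, assuming a `Jsonb *config` available in the surrounding policy code and an invented field name; the point is that the third argument must be a pointer:

```c
bool		found = false;

/*
 * Buggy pattern (sketch): passing the bool by value makes the callee treat
 * the value as a pointer, so it dereferences an invalid address and crashes.
 *
 *     ts_jsonb_get_int64_field(config, "refresh_lag", found);
 *
 * Fixed pattern: pass the address so the callee can report whether the
 * field was present.
 */
int64		lag = ts_jsonb_get_int64_field(config, "refresh_lag", &found);
```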
* Remove unneeded data from batch states to use less memory.
  * Keep only the compressed column data, because only those columns need
    per-row work; the other columns don't change.
* Adjust the batch memory context size so that the bulk decompression
  results fit into it.
* Determine whether we're going to use bulk decompression for each column
  at planning time, not at execution time.
* Introduce a "batch queue" to unify control flow for normal and batch
  sorted merge decompression.
* In batch sorted merge, compare batches on the scan slot, not on the
  projected slot. This avoids keeping a second slot in the batch states,
  and projection can be performed after we find the top batch.
  * This requires some custom code to build sort infos relative to the
    scan tuple rather than the targetlist.
* Return a reference to the current top batch scan tuple as the result of
  DecompressChunk exec instead of copying it out. It is guaranteed to live
  until the next exec call, which is the usual lifetime guarantee.
This is needed to prepare for vectorized filters.
When enabling compression on a table, there is a check of unique
and primary key *constraints*, but no check for a plain unique index.
This means that it is possible to create a compressed table without
getting a warning that you should include the index columns in the
segmentby setting, which can lead to suboptimal query times.
This commit adds a check for unique indexes as well and ensures that a
warning similar to the one for a unique constraint is printed.
Fixes #5892
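A rough sketch of such a check (the function name and message wording are illustrative; the real code also inspects the index columns against the compression settings):

```c
#include "postgres.h"
#include "access/genam.h"		/* index_open(), index_close() */
#include "nodes/pg_list.h"
#include "utils/rel.h"
#include "utils/relcache.h"		/* RelationGetIndexList() */

static void
warn_on_unique_indexes(Relation rel)
{
	List	   *indexes = RelationGetIndexList(rel);
	ListCell   *lc;

	foreach(lc, indexes)
	{
		Relation	index = index_open(lfirst_oid(lc), AccessShareLock);

		if (index->rd_index->indisunique)
			ereport(WARNING,
					(errmsg("consider including the unique index columns in "
							"the compression segmentby setting")));
		index_close(index, AccessShareLock);
	}
	list_free(indexes);
}
```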
Commit cdea343cc updated the gh_matrix_builder.py script but failed
to import the PG_LATEST variable into the script, thus breaking the CI.
Import that variable to fix the CI tests.
Removed the PG12-specific macros and all the now-dead code. Also updated
the test cases that had workarounds in place to make them compatible
with PG12.
When a job finishes execution, whether with an error or successfully,
this commit prints the execution time of the job in the log, together
with a message identifying which job finished.
For continuous aggregate refreshes, the number of rows deleted from and
inserted into the materialization table will also be printed.
When the uncompressed part of a partially compressed chunk is read by a
non-partial path and the compressed part by a partial path, the append
node on top could process the uncompressed part multiple times because
the path was declared as a partial path and the append node assumed it
could be executed in all workers in parallel without producing
duplicates.
This PR fixes the declaration of the path.
For continuous aggregates with a variable bucket size, the interval
was wrongly modified during validation. This is now corrected by
creating a copy of the interval structure for validation purposes
and keeping the original structure untouched.
Fixes #5734
This patch adds support for passing continuous aggregate names to
`chunk_detailed_size` to align it with the behavior of other functions
such as `show_chunks`, `drop_chunks`, and `hypertable_size`.
This patch adds support for passing continuous aggregate names to the
`set_chunk_time_interval` function to align it with functions such as
`show_chunks` and `drop_chunks`.
It reuses the previously existing function in chunk.c that finds a
hypertable or resolves a continuous aggregate to its underlying
hypertable, but moves the function to hypertable.c and exports it from
there. There is some discussion about whether this functionality should
stay in chunk.c; however, it feels wrong in that file now that it is
exported.
This commit is a follow-up to #5515, which added support for ALTER TABLE
... REPLICA IDENTITY (FULL | INDEX) on hypertables.
It allows executing this command against materialized hypertables to
enable update/delete operations on continuous aggregates when logical
replication is enabled for them.
* Restore default batch context size to fix a performance regression on
sorted batch merge plans.
* Support reverse direction.
* Improve gorilla decompression by computing prefix sums of tag bitmaps
during decompression.
The ts_set_flags_32 function takes a bitmap and flags and returns an
updated bitmap. However, if the returned value is not used, the function
call has no effect, and an unused result may indicate improper use of
this function.
This patch adds the pg_nodiscard qualifier to the function, which
triggers a compiler warning if the returned value is not used.
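A sketch of the change and of the usage it enforces (the function body shown here and the flag value in the example call are illustrative):

```c
#include "postgres.h"	/* pg_nodiscard */

static inline pg_nodiscard uint32
ts_set_flags_32(uint32 bitmap, uint32 flags)
{
	return bitmap | flags;
}

/*
 * Correct usage assigns the result back:
 *
 *     job->flags = ts_set_flags_32(job->flags, HYPOTHETICAL_FLAG);
 *
 * A bare call such as `ts_set_flags_32(job->flags, HYPOTHETICAL_FLAG);`
 * has no effect and now triggers an unused-result compiler warning.
 */
```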
In #4664 we introduced fixed schedules for jobs. This was done by
adding the parameters `fixed_schedule`, `initial_start`, and `timezone`
to our add_job and add_policy APIs.
These fields were not updatable via alter_job, so it was not possible
to switch from one type of schedule to another without dropping and
recreating existing jobs and policies.
This patch adds the missing parameters to alter_job to enable switching
from one type of schedule to another.
Fixes #5681
The backport script for PRs does not have permission to backport PRs
that include workflow changes, so these PRs are excluded from being
automatically backported.
Failed CI run:
https://github.com/timescale/timescaledb/actions/runs/5387338161/jobs/9780701395
> refusing to allow a GitHub App to create or update workflow
> `.github/workflows/xxx.yaml` without `workflows` permission)