The code we inherited from Postgres expects that if we have a constant null
or false clause, it will be the only one, but that's not true for runtime
chunk exclusion, because we don't try to fold such restrictinfos after
evaluating the mutable functions. Fix it to also work for multiple
restrictinfos.
Job ids are locked using an advisory lock rather than a row lock on the
jobs table, but this lock is not taken in the job API functions
(`alter_job`, `delete_job`, etc.), which appears to cause a race
condition resulting in the addition of multiple rows with the same job id.
This commit adds an advisory `RowExclusiveLock` on the job id while
altering it to match the advisory locks taken while performing other
modifications.
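A minimal sketch of the pattern in plain SQL (the job id is hypothetical,
and the real lock is taken internally by the C code, not by the user):

```sql
-- Serialize concurrent modifications of job 1005 with a
-- transaction-scoped advisory lock keyed on the job id.
BEGIN;
SELECT pg_advisory_xact_lock(1005);
SELECT alter_job(1005, scheduled => false);
COMMIT;  -- the advisory lock is released automatically at commit
```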
Closes #4863
Compress chunk interval is set using an ALTER TABLE statement.
This change makes it so you can update the compress chunk interval
while keeping the rest of the compression settings intact.
Updating it will only affect chunks that are compressed and rolled
up after the change.
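For example (hypertable name hypothetical), only the interval changes
while the other compression settings are kept:

```sql
-- 'metrics' already has compression enabled; update only the compress
-- chunk interval, leaving segmentby/orderby settings untouched.
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_chunk_time_interval = '24 hours'
);
```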
The dist_move_chunk test causes CI to hang when compiled and run with
PG15, as explained in #4972.
Also fixed schema permission issues in data_node and dist_param tests.
This patch reports a warning, on release builds only, when upgrading to the
new timescaledb extension if there exist any caggs with partial aggregates.
It also restricts users from creating caggs with the old format on
TimescaleDB with PG15.
This allows us to perform a nested loop join of a small outer local
table to an inner distributed hypertable, without downloading the
entire hypertable to the access node.
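For example, a query of the following shape (table and column names
hypothetical) can now run as a nested loop with a parameterized scan on
the data nodes:

```sql
-- 'devices' is a small local table on the access node and 'measurements'
-- a distributed hypertable; the inner scan is parameterized on
-- d.device_id instead of fetching the whole hypertable.
SELECT *
FROM devices d
JOIN measurements m ON m.device_id = d.device_id
WHERE d.location = 'lab';
```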
When deleting a job in the test, the job does not necessarily terminate
immediately, so wait for log entries from the job before checking the
jobs table.
Fixed #4859
This was a leftover from the original implementation, where we didn't add
tests for a time dimension using `timestamp without time zone`.
Fixed it by handling this data type and adding regression tests.
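A rough sketch, with hypothetical names, of the now-supported case:

```sql
-- A hypertable keyed on a plain 'timestamp' (without time zone) column;
-- migrating its cagg to the new format now works.
CREATE TABLE conditions (ts timestamp NOT NULL, temperature float);
SELECT create_hypertable('conditions', 'ts');
-- ... create a continuous aggregate 'conditions_summary' in the old format ...
CALL cagg_migrate('conditions_summary');
```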
Fixes #4956
Add a new function, `alter_data_node()`, which can be used to change
the data node's configuration originally set up via `add_data_node()`
on the access node.
The new function introduces a new option "available" that allows
configuring the availability of the data node. Setting
`available=>false` means that the node should no longer be used for
reads and writes. Only read "failover" is implemented as part of this
change, however.
To fail over reads, the alter data node function finds all the chunks
for which the unavailable data node is the "primary" query target and
"fails over" to a chunk replica on another data node instead. If some
chunks do not have a replica to fail over to, a warning will be
raised.
When a data node is available again, the function can be used to
switch back to using the data node for queries.
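For example (node name hypothetical):

```sql
-- Mark data node 'dn1' as unavailable; reads fail over to chunk
-- replicas on other data nodes where possible.
SELECT alter_data_node('dn1', available => false);

-- When the node is back, switch queries to it again.
SELECT alter_data_node('dn1', available => true);
```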
Closes #2104
compress_segmentby should never be on a column with random() values,
since that results in very inefficient compression: the batches will
only have one tuple each.
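For instance (schema hypothetical), segment by a low-cardinality column
such as a device id instead:

```sql
-- Good: many rows share each device_id, so compressed batches are large.
-- Segmenting by a random()-filled column would leave ~1 tuple per batch.
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);
```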
The downgrade script printed a message in which the same variable was used
for both the upgrade and the downgrade version. This patch corrects the
output to use the correct variables.
Editing often leaves trailing whitespace behind, and since some editors
automatically remove trailing whitespace, this creates diffs with more
changed lines than necessary.
Add a check that files do not have trailing whitespace and fail if any
is found.
We don't want to support BitmapScans below DecompressChunk,
as this adds additional complexity for little benefit.
This fixes a bug that can happen when we have a BitmapScan
that is parameterized on a compressed column, which leads to
an execution failure with an error about incorrect attribute
types in the expression.
The current implementation updates the jobs table directly; to make it
consistent with other parts of the code, we changed it to use the
`alter_job` API to enable and disable jobs during the migration.
This refactoring is related to #4863.
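In SQL terms, the migration now toggles jobs roughly like this (job id
hypothetical):

```sql
-- Disable the job for the duration of the migration ...
SELECT alter_job(1000, scheduled => false);
-- ... and re-enable it once the migration has finished.
SELECT alter_job(1000, scheduled => true);
```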
On PG15, CustomScan is not projection capable by default, so the planner
wraps this node in a Result node. This change in PG15 causes test result
files containing EXPLAIN output to fail. This patch fixes the plan outputs.
Fixes #4833
We have a rare condition where a debug build asserts on more than one
job with the same job id. Since it is hard to create a reproduction,
this commit adds a printout for those conditions and prints all the
jobs with that job id to the postgres log.
Part of #4863
A chunk is in this state when it is compressed but also has
uncompressed data in the uncompressed chunk. Individual tuples
can only ever exist in one area or the other. This is a preparation
patch to add support for an uncompressed staging area for DML operations.
The initial version of the check did not include a detailed message
about the code failure in the CI output and did not check for
expressions with operands in the wrong order.
PG15 introduces a new flag, CUSTOMPATH_SUPPORT_PROJECTION, which tells
whether a planner node is projection capable. A CustomScan created by
TimescaleDB is not projection capable by default, which causes the
CustomScan node to be wrapped in a Result node. The update-query logic on
a hypertable is based on the assumption that the lefttree of a
"ModifyTable" plan node is a CustomScan node. With PG15 this assumption
is broken, which causes "ERROR: variable not found in subplan target list".
Fixes #4834
When using `override => true` the migration procedure renames the
current cagg using the suffix `_old` and renames the newly created cagg,
suffixed `_new`, to the original name.
The problem was that the `copy policies` step was executed after the
`override` step, so the new cagg name could not be found anymore because
it had already been renamed to the original name, leaving the policy
orphaned (without a connection to the materialization hypertable).
Fixed it by reordering the steps, executing `copy policies` before the
`override` step. Also made some adjustments to properly copy all
`bgw_job` columns even if this catalog table was changed.
Fixes #4885
This patch adds two new fields to the telemetry report,
`stats_by_job_type` and `errors_by_sqlerrcode`. Both report results
grouped by job type (different types of policies or
user-defined actions).
The patch also adds a new field to the `bgw_job_stats` table,
`total_duration_errors` to separate the duration of the failed runs
from the duration of successful ones.
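Assuming the fields appear under the names above, the report can be
inspected with:

```sql
-- Print the full telemetry report, including the new
-- stats_by_job_type and errors_by_sqlerrcode fields.
SELECT jsonb_pretty(get_telemetry_report());
```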
The Postgres source code defines the macro `OidIsValid()` to check whether
an Oid is valid or not (comparing against the `InvalidOid` value). See
`src/include/c.h` in the Postgres source tree.
Changed all direct comparisons against `InvalidOid` to `OidIsValid()`
calls and added a coccinelle check to make sure future changes use it
correctly.
Trying to resume a failed Continuous Aggregate migration raises an
exception saying the migration plan already exists, but this is wrong;
the expected behaviour is to resume the migration and continue from the
last failed step.
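With the fix, re-running the procedure resumes from the last failed step
(cagg name hypothetical):

```sql
-- First attempt fails partway through ...
CALL cagg_migrate('conditions_summary');
-- ... re-running resumes from the last failed step instead of
-- raising "migration plan already exists".
CALL cagg_migrate('conditions_summary');
```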
This change introduces a new option to the compression procedure which
decouples the uncompressed chunk interval from the compressed chunk
interval. It does this by rolling multiple uncompressed chunks up into one
compressed chunk as part of the compression procedure. The main use-case
is to allow much smaller uncompressed chunks than compressed ones. This
has several advantages:
- Reduce the size of btrees on uncompressed data (thus allowing faster
inserts because those indexes are memory-resident).
- Decrease disk-space usage for uncompressed data.
- Reduce number of chunks over historical data.
From a UX point of view, we simply add a compression WITH-clause option
`compress_chunk_time_interval`. The user should set that according to
their needs for constraint exclusion over historical data. Ideally, it
should be a multiple of the uncompressed chunk interval and so we throw
a warning if it is not.
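For example (table name hypothetical, assuming a 1 hour uncompressed
chunk interval), 24 uncompressed chunks roll up into each compressed
chunk:

```sql
-- Enable compression with a compressed chunk interval that is a
-- multiple (24x) of the 1 hour uncompressed chunk interval.
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_chunk_time_interval = '24 hours'
);
```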
The version 15 pg_dump program does not log any messages with log level <
PG_LOG_WARNING to stdout. Since this check is not present in version 14,
we see the corresponding tests fail with missing log information.
This patch fixes the issue by suppressing that log information, so that
the tests pass on all versions of PostgreSQL.
Fixes #4832
Look for the binary with the exact version before looking for the
generic name, to prevent failure when clang-format is lower than the
required version but clang-format-14 exists.