If a data node goes down for whatever reason, then DML activity to
chunks residing on (or targeted to) that DN will start erroring out.
We now handle this by marking the target chunk as "stale" for this
DN by changing the metadata on the access node. This allows us to
continue doing DML to replicas of the same chunk data on other DNs
in the setup. This obviously only works for chunks which have
"replication_factor" > 1. Note that chunks which do not undergo any
change will continue to carry the appropriate DN-related metadata on
the AN.
This means that such "stale" chunks become under-replicated and need
to be re-balanced using the copy_chunk functionality, for example by a
microservice or a similar external process.
Fixes #4846
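For illustration, a minimal sketch of the kind of setup this applies
to, assuming two data nodes and a hypothetical "metrics" table; only
hypertables created with replication_factor > 1 have chunk replicas
that DML can fail over to:

```sql
-- Hypothetical setup: chunks are replicated across two data nodes, so
-- a chunk can be marked "stale" on one DN while DML continues against
-- its replica on the other DN.
SELECT add_data_node('dn1', host => 'dn1.internal');
SELECT add_data_node('dn2', host => 'dn2.internal');

CREATE TABLE metrics (time timestamptz NOT NULL, device int, value float);
SELECT create_distributed_hypertable('metrics', 'time',
                                     replication_factor => 2);
```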
This function drops chunks on a specified data node if those chunks are
not known by the access node.
Call drop_stale_chunks() automatically when a data node becomes
available again.
Fix #4848
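As an illustration only — the schema and argument list below are
assumptions, not confirmed by this commit — a manual invocation might
look like this:

```sql
-- Assumed call shape (schema and signature are hypothetical): drop the
-- chunks on data node 'dn1' that the access node no longer knows about.
SELECT _timescaledb_internal.drop_stale_chunks('dn1');
```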
When truncating a cagg that had another cagg defined on
top of it, the nested cagg would not get invalidated accordingly.
That was because we were not adding a corresponding entry in
the hypertable invalidation log for the materialization hypertable
of the base cagg.
This commit adds an invalidation entry in the table so that
subsequent refreshes see and properly process this invalidation.
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
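A sketch of the scenario, using hypothetical cagg names (an hourly
cagg with a daily cagg defined on top of it, both assumed to exist):

```sql
-- Truncating the base cagg now records an invalidation for its
-- materialization hypertable, so the nested cagg picks it up on the
-- next refresh instead of keeping stale data.
TRUNCATE metrics_hourly;
CALL refresh_continuous_aggregate('metrics_daily', NULL, NULL);
```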
We're facing some weird `portal snapshot` issues when running the
`refresh_continuous_aggregate` procedure from other procedures.
Fixed it by skipping the Refresh Continuous Aggregate step in
`cagg_migrate` and warning users to run it manually after the
migration finishes.
Fixes #4913
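For example (names are hypothetical, and the "_new" suffix of the
migrated cagg is an assumption here), the refresh now has to be
issued by hand after the migration completes:

```sql
CALL cagg_migrate('conditions_summary');
-- The refresh step is skipped by the migration procedure, so run it
-- manually against the migrated cagg ("_new" suffix assumed).
CALL refresh_continuous_aggregate('conditions_summary_new', NULL, NULL);
```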
Since we now use the date as part of the cache key, to ensure no
stale cache entries hide build failures, we need to make sure a
cache entry is present before workflows that depend on the cache
are run.
Commit #4668 introduced hierarchical caggs. This patch adds
a field `num_caggs_nested` to the telemetry report to include the
number of caggs defined on top of other caggs.
This patch changes an Assert in get_or_add_baserel_from_cache to an
Ensure, so that this check is also performed in release builds. This
is done to detect metadata corruption at an early stage.
PR #4668 introduced Hierarchical Continuous Aggregates (aka
Continuous Aggregates on top of other Continuous Aggregates) but
unfortunately we missed fixing the regression tests on PG15.
In commit 1f807153 we added a CI check for trailing whitespace in
our source code files (.c and .h).
This commit adds SQL test files (.sql and .sql.in) to this check.
Enable users to create Hierarchical Continuous Aggregates (aka
Continuous Aggregates on top of other Continuous Aggregates).
With this PR users can create levels of aggregation granularity in
Continuous Aggregates, making the refresh process even faster.
A caveat of this feature is that at the upper levels we can end up
with an "average of averages". To get the "real average" we can
instead rely on the "stats_agg" TimescaleDB Toolkit function, which
calculates and stores partials that can be finalized with other
Toolkit functions like "average" and "sum", as sketched below.
Closes #1400
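A sketch of what this enables, assuming a hypothetical "conditions"
hypertable and the timescaledb_toolkit extension for the stats_agg
partials:

```sql
-- Hourly cagg that stores stats_agg() partials instead of plain avg().
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       stats_agg(temperature) AS stats
FROM conditions
GROUP BY bucket
WITH NO DATA;

-- Daily cagg defined on top of the hourly cagg: rolling up the
-- partials gives the "real average" rather than an average of
-- hourly averages.
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', bucket) AS bucket,
       rollup(stats) AS stats
FROM conditions_hourly
GROUP BY 1
WITH NO DATA;

-- Finalize the partials when querying.
SELECT bucket, average(stats) AS daily_avg FROM conditions_daily;
```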
Commit 9f4dcea30135d1e36d1c452d631fc8b8743b3995 introduces the
get_or_add_baserel_from_cache function. It contains a performance
regression, since an expensive metadata scan
(ts_chunk_get_hypertable_id_by_relid) is performed even when it could be
avoided.
The commit 9f4dcea30135d1e36d1c452d631fc8b8743b3995 introduces frozen
chunks. Checking whether a chunk is frozen or not has so far been done
in the query planner. If it is not possible to determine which chunks
are affected by a query in the planner (e.g., due to a cast in the WHERE
condition), all chunks are checked. This leads to (1) increased
planning time and (2) the situation that a single frozen chunk can
cause queries to be rejected, even if the frozen chunk is not addressed
by the query.
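For example, a predicate like the following (hypothetical table)
cannot be used for chunk exclusion at plan time because of the cast,
so previously every chunk, including frozen ones, was checked:

```sql
-- The cast on the time column defeats plan-time chunk exclusion, so
-- all chunks used to be checked for the frozen flag.
SELECT * FROM metrics WHERE time::date = '2022-11-01';
```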
An INSERT .. SELECT query containing distributed hypertables generates
a plan with a DataNodeCopy node, which is not supported. The issue is
in the function tsl_create_distributed_insert_path(), where we decide
whether to generate a DataNodeCopy or a DataNodeDispatch node based on
the kind of query. On PG15, for an INSERT .. SELECT query, the
timescaledb planner generates DataNodeCopy because rte->subquery is set
to NULL. This is due to a commit in PG15 where rte->subquery is set to
NULL as part of a fix.
This patch checks whether the SELECT subquery has distributed
hypertables by looking into root->parse->jointree, which represents
the subquery.
Fixes #4983
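The affected query shape, with hypothetical distributed hypertables,
is roughly:

```sql
-- On PG15 this used to be planned with DataNodeCopy even though the
-- SELECT part reads from a distributed hypertable, which is not
-- supported for this plan node.
INSERT INTO metrics_dist_archive
SELECT * FROM metrics_dist
WHERE time < now() - interval '30 days';
```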
PG15 introduced a ProcSignalBarrier mechanism in the drop database
implementation to force all backends to close the file handles for
dropped tables. The backend that is executing the drop database command
will emit a new process signal barrier and wait for other backends to
accept it. But the backend which is executing the delete_data_node
function will not be able to process the above-mentioned signal, as it
will be stuck waiting for the drop database query to return. Thus the
two backends end up waiting for each other, causing a deadlock.
Fixed it by using the async API to execute the drop database command
from delete_data_node instead of the blocking remote_connection_cmdf_ok
call.
Fixes #4838
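The call that could previously deadlock is the data node deletion that
also drops the remote database, e.g. (node name hypothetical, and the
drop_database option of delete_data_node is assumed here):

```sql
-- Drop the remote database as part of node deletion; on PG15 this
-- could deadlock before the fix because of the ProcSignalBarrier wait.
SELECT delete_data_node('dn1', drop_database => true);
```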
The code we inherited from Postgres expects that if we have a constant
null or false clause, it is going to be the only one, but that's not
true for runtime chunk exclusion because we don't try to fold such
restrictinfos after evaluating the mutable functions. Fix it to also
work for multiple restrictinfos.
Job ids are locked using an advisory lock rather than a row lock on the
jobs table, but this lock is not taken in the job API functions
(`alter_job`, `delete_job`, etc.), which appears to cause a race
condition resulting in the addition of multiple rows with the same job
id. This commit adds an advisory `RowExclusiveLock` on the job id while
altering it, to match the advisory locks taken while performing other
modifications.
Closes #4863
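Illustrative only (the job id is hypothetical): the job API calls now
take the same advisory lock on the job id as other modifications, so
concurrent sessions running calls like these serialize instead of
racing:

```sql
-- Two sessions touching the same job concurrently now block on the
-- advisory lock on job id 1000 instead of racing on the jobs table.
SELECT alter_job(1000, scheduled => false);
SELECT delete_job(1000);
```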
Compress chunk interval is set using an ALTER TABLE statement.
This change makes it so you can update the compress chunk interval
while keeping the rest of the compression settings intact.
Updating it will only affect chunks that are compressed and rolled
up after the change.
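A sketch, assuming the option is named
timescaledb.compress_chunk_time_interval and a hypothetical compressed
hypertable "metrics":

```sql
-- Update only the compress chunk interval; segmentby/orderby settings
-- configured earlier stay intact. Only chunks compressed (and rolled
-- up) after this change are affected.
ALTER TABLE metrics
  SET (timescaledb.compress_chunk_time_interval = '24 hours');
```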
The dist_move_chunk test causes the CI to hang when compiled and run
with PG15, as explained in #4972.
Also fixed schema permission issues in the data_node and dist_param
tests.
This patch reports a warning when upgrading to the new timescaledb
extension if there exist any caggs with partial aggregates (release
builds only). It also restricts users from creating caggs in the old
format on timescaledb with PG15.
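For reference, the "old format" corresponds to creating a cagg with
timescaledb.finalized = false; a hypothetical example of what is now
rejected on PG15:

```sql
-- Rejected on PG15 after this patch: a cagg in the old (partials)
-- format; the same view without "finalized = false" is accepted.
CREATE MATERIALIZED VIEW conditions_daily_old
WITH (timescaledb.continuous, timescaledb.finalized = false) AS
SELECT time_bucket('1 day', time) AS bucket,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket
WITH NO DATA;
```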
This allows us to perform a nested loop join of a small outer local
table to an inner distributed hypertable, without downloading the
entire hypertable to the access node.
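The kind of query this targets, with hypothetical tables (a small
local `devices` table joined to a distributed `metrics_dist`
hypertable):

```sql
-- With a parameterized inner path, only the rows matching each outer
-- device_id are fetched from the data nodes instead of pulling the
-- whole hypertable to the access node.
SELECT m.*
FROM devices d
JOIN metrics_dist m ON m.device_id = d.device_id
WHERE d.site = 'lab-3';
```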
When deleting a job in the test, the job does not necessarily terminate
immediately, so wait for a log entries from the job before checking the
jobs table.
Fixed#4859
It was a leftover from the original implementation where we didn't add
tests for time dimension using `timestamp without timezone`.
Fixed it by dealing with this datatype and added regression tests.
Fixes#4956
Add a new function, `alter_data_node()`, which can be used to change
the data node's configuration originally set up via `add_data_node()`
on the access node.
The new function introduces an option "available" that allows
configuring the availability of the data node. Setting
`available=>false` means that the node should no longer be used for
reads and writes. Only read "failover" is implemented as part of this
change, however.
To fail over reads, the alter data node function finds all the chunks
for which the unavailable data node is the "primary" query target and
"fails over" to a chunk replica on another data node instead. If some
chunks do not have a replica to fail over to, a warning will be
raised.
When a data node is available again, the function can be used to
switch back to using the data node for queries.
Closes #2104
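Usage sketch (the data node name is hypothetical):

```sql
-- Mark the node as unavailable: reads fail over to chunk replicas on
-- other data nodes where such replicas exist.
SELECT alter_data_node('dn1', available => false);

-- Once the node is reachable again, switch back to using it for
-- queries.
SELECT alter_data_node('dn1', available => true);
```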
compress_segmentby should never be used on a column with random()
values, as that will result in very inefficient compression: the
batches will only have 1 tuple each.
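For example, a low-cardinality column such as a device identifier
makes a sensible segmentby, while a random or effectively unique
value does not (table and column names are hypothetical):

```sql
-- Good: many rows share each device_id, so batches compress well.
-- A column filled with random() values would instead put roughly one
-- tuple in each batch.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);
```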