Allow dropping raw chunks on the raw hypertable while keeping the continuous aggregate. This allows for downsampling data and allows users to save on TCO. We only allow dropping such data when the dropped data is older than the `ignore_invalidation_older_than` parameter on all of the associated continuous aggregates. This ensures that any modifications to the dropped region of data are never reflected in the continuous aggregate, and thus avoids semantic ambiguity if chunks are dropped but later recreated by an insert.

Before we drop a chunk we need to make sure to process any continuous aggregate invalidations that were registered on data inside the chunk. Thus we add options to materialization to perform materialization transactionally, to only process invalidations, and to process invalidations only before a given timestamp.

We fix drop_chunks and the policy to properly treat `cascade_to_materialization` as a tri-state variable (unknown, true, false). Existing policy rows that are false change to NULL (unknown), while true stays true since it was explicitly set. Remove the form data for bgw_policy_drop_chunk because there is no good way to represent the tri-state variable in the form data.

When dropping chunks with cascade_to_materialization = false, all invalidations on the chunks are processed before dropping the chunk. If we are so far behind that even the completion threshold is inside the chunks being dropped, we error. There are two reasons we error: 1) we can't safely process new ranges transactionally without taking heavyweight locks and potentially locking the entire system; 2) if the completion threshold is that far behind, the system probably has some serious issues anyway.
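
A minimal sketch of the intended usage, assuming the 1.x-era SQL syntax; the table and view names (`conditions`, `conditions_summary`) are hypothetical and exact signatures may differ by version:

    -- Continuous aggregate that ignores invalidations older than 30 days,
    -- making raw chunks older than that eligible for dropping.
    CREATE VIEW conditions_summary
    WITH (timescaledb.continuous,
          timescaledb.ignore_invalidation_older_than = '30 days') AS
    SELECT time_bucket('1 day', time) AS bucket, avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY bucket;

    -- Drop raw chunks older than 3 months while keeping the aggregate's data.
    -- Pending invalidations on the dropped chunks are processed first.
    SELECT drop_chunks(interval '3 months', 'conditions',
                       cascade_to_materialization => false);

With `cascade_to_materialization => false`, only the raw chunks are removed; passing true would also delete the corresponding region from the materialization, and leaving it unset (NULL) keeps the previous "unknown" behavior for policies.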