During some tests on #7104 I noticed some flakiness in the Continuous
Aggregates migration tests caused by unstable output ordering.
Fixed it by explicitly ordering policies by their IDs.
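A minimal sketch of the kind of fix applied; the view and columns below
are real TimescaleDB catalog objects, but the exact test query is
hypothetical:

    -- make test output deterministic by ordering policies (background
    -- jobs) by their IDs instead of relying on scan order
    SELECT job_id, proc_name, hypertable_name
    FROM timescaledb_information.jobs
    WHERE proc_name LIKE 'policy_%'
    ORDER BY job_id;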
The function time_bucket_ng is deprecated. This PR adds a migration path
for existing CAggs. Since time_bucket and time_bucket_ng use different
default origin values, a custom origin is set where needed so that
time_bucket creates the same buckets time_bucket_ng has created so far.
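A hedged sketch of the rewrite, not the migration code itself; the table
and column names are hypothetical, and the origin value shown is an
assumption for illustration:

    -- before: deprecated function, implicit time_bucket_ng origin
    SELECT timescaledb_experimental.time_bucket_ng('1 month', ts)
    FROM conditions;

    -- after: time_bucket with an explicit origin chosen so the
    -- bucket boundaries match the ones time_bucket_ng produced
    SELECT time_bucket('1 month', ts, origin => '2000-01-01'::timestamptz)
    FROM conditions;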
The CAgg migration path contained two bugs; this PR fixes both. A typo
in the column type prevented 'timestamp with time zone' buckets from
being handled properly. In addition, a custom datestyle setting could
cause errors while parsing the generated timestamp values.
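To illustrate the second bug (hypothetical values, not the fixed code
itself): the same date literal parses differently depending on the
session's datestyle, so generated timestamp text must not depend on it.

    SET datestyle TO 'SQL, DMY';
    SELECT '02/01/2023'::date;  -- 2 January 2023 under DMY
    SET datestyle TO 'SQL, MDY';
    SELECT '02/01/2023'::date;  -- 1 February 2023 under MDY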
Fixes: #5359
Historically we have preserved chunk metadata because the old Continuous
Aggregate format stores the `chunk_id` column in the materialization
hypertable; to avoid leaving dangling chunk ids there, we just marked
chunks as dropped instead of deleting their metadata.
In #4269 we introduced a new Continuous Aggregate format that no longer
stores the `chunk_id` in the materialization hypertable, so it is safe
to also remove the metadata when dropping a chunk, provided all
associated Continuous Aggregates use the new format.
Also added a post-update SQL script to clean up unnecessary
dropped-chunk metadata in our catalog.
Closes #6570
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions (see the sketch after the list):
- cagg_watermark(integer)
- cagg_watermark_materialized(integer)
- hypertable_invalidation_log_delete(integer)
- invalidation_cagg_log_add_entry(integer,bigint,bigint)
- invalidation_hyper_log_add_entry(integer,bigint,bigint)
- invalidation_process_cagg_log(integer,integer,regtype,bigint,bigint,integer[],bigint[],bigint[])
- invalidation_process_cagg_log(integer,integer,regtype,bigint,bigint,integer[],bigint[],bigint[],text[])
- invalidation_process_hypertable_log(integer,integer,regtype,integer[],bigint[],bigint[])
- invalidation_process_hypertable_log(integer,integer,regtype,integer[],bigint[],bigint[],text[])
- materialization_invalidation_log_delete(integer)
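A hedged sketch of what changes for callers; the target schema name
below is an assumption for illustration (this commit does not name it),
and the hypertable id is hypothetical:

    -- call sites change from the old qualification to the new one
    SELECT _timescaledb_internal.cagg_watermark(1);   -- before
    SELECT _timescaledb_functions.cagg_watermark(1);  -- after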
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- cagg_migrate_create_plan(_timescaledb_catalog.continuous_agg,text,boolean,boolean)
- cagg_migrate_execute_copy_data(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_copy_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_create_new_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_disable_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_drop_old_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_enable_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_override_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_plan(_timescaledb_catalog.continuous_agg)
- cagg_migrate_execute_refresh_new_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_plan_exists(integer)
- cagg_migrate_pre_validation(text,text,text)
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Co-authored-by: Sven Klemm <sven@timescale.com>
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- to_unix_microseconds(timestamptz)
- to_timestamp(bigint)
- to_timestamp_without_timezone(bigint)
- to_date(bigint)
- to_interval(bigint)
- interval_to_usec(interval)
- time_to_internal(anyelement)
- subtract_integer_from_now(regclass, bigint)
During `cagg_migrate` execution, if the user sets the `drop_old`
parameter to `true` the routine drops the old Continuous Aggregate,
leading to an inconsistent state: the catalog code does not handle
this table as a normal catalog table, so its records are not removed
when a Continuous Aggregate is dropped. The same problem occurs if
you manually drop the old Continuous Aggregate after the migration.
Fixed it by removing the useless Foreign Key and adding a new column
named `user_view_definition` to the main plan table, storing the
original user view definition for troubleshooting purposes.
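A hedged sketch of the catalog change; the plan table name follows the
step table named elsewhere in this changelog, and the constraint name
is hypothetical:

    -- sketch only: drop the FK that tied the plan to the cagg catalog
    -- row, and keep the original view definition for troubleshooting
    ALTER TABLE _timescaledb_catalog.continuous_agg_migrate_plan
        DROP CONSTRAINT continuous_agg_migrate_plan_mat_hypertable_id_fkey,
        ADD COLUMN user_view_definition text;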
Fixes #5662
We're facing some weird `portal snapshot` issues when the
`refresh_continuous_aggregate` procedure is called from other procedures.
Fixed it by skipping the Refresh Continuous Aggregate step in
`cagg_migrate` and warning users to run it manually after the execution.
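For example (the cagg name is hypothetical; without `override => true`
the new cagg keeps the `_new` suffix mentioned below):

    CALL cagg_migrate('conditions_summary_daily');
    -- as instructed by the warning, refresh the new cagg manually,
    -- here over its whole range
    CALL refresh_continuous_aggregate('conditions_summary_daily_new',
                                      NULL, NULL);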
Fixes #4913
It was a leftover from the original implementation, where we didn't add
tests for a time dimension using `timestamp without time zone`.
Fixed it by handling this datatype and adding regression tests.
Fixes #4956
The current implementation updates the jobs table directly; to make it
consistent with other parts of the code we changed it to use the
`alter_job` API instead to enable and disable jobs during the
migration, as sketched below. This refactoring is related to #4863.
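A minimal sketch of the API usage (the job id is hypothetical):

    -- disable the policy job while the migration runs ...
    SELECT alter_job(1000, scheduled => false);
    -- ... and re-enable it afterwards
    SELECT alter_job(1000, scheduled => true);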
When using `override => true` the migration procedure renames the
current cagg with the suffix `_old` and renames the newly created one
(suffixed `_new`) to the original name.
The problem was that the `copy policies` step was executed after the
`override` step, so we could no longer find the new cagg by name: it
had already been renamed to the original name, leaving the policy
orphaned (without a connection to the materialization hypertable).
Fixed it by reordering the steps, executing `copy policies` before
the `override` step. Also made some adjustments to properly copy all
`bgw_job` columns even if this catalog table has changed.
Fixes #4885
Trying to resume a failed Continuous Aggregate migration raised an
exception that the migration plan already exists, but this is wrong:
the expected behaviour is to resume the migration and continue from
the last failed step.
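With this fix, re-running the same call resumes the existing plan
(the cagg name is hypothetical):

    -- after addressing the cause of the failure, just call it again;
    -- the existing plan is reused and execution continues from the
    -- last failed step instead of raising "plan already exists"
    CALL cagg_migrate('conditions_summary_daily');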
After migrating a Continuous Aggregate from the old format to the new
one using the `cagg_migrate` procedure, we ended up with the following
problems:
* Refresh policy was not copied from the OLD to the NEW cagg;
* Compression settings were not copied from the OLD to the NEW cagg.
Fixed it by properly copying the refresh policy and setting the
`timescaledb.compress=true` flag on the new cagg, as sketched below.
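A hedged sketch of what the fix now does for the new cagg (names and
intervals are hypothetical):

    -- carry the compression setting over to the new cagg ...
    ALTER MATERIALIZED VIEW conditions_summary_daily_new
        SET (timescaledb.compress = true);
    -- ... and recreate the refresh policy that the old cagg had
    SELECT add_continuous_aggregate_policy('conditions_summary_daily_new',
        start_offset => INTERVAL '1 month',
        end_offset => INTERVAL '1 hour',
        schedule_interval => INTERVAL '1 hour');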
Fixes #4710
Removed the underscore character prefix '_' from the parameter names of
the procedure `cagg_migrate`. The new signature is:
    cagg_migrate(
        IN cagg regclass,
        IN override boolean DEFAULT false,
        IN drop_old boolean DEFAULT false
    )
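For example, the parameters can now be passed by their new names (the
cagg name matches the example used later in this changelog):

    CALL cagg_migrate('conditions_summary_daily',
                      override => true,
                      drop_old => true);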
Timescale 2.7 released a new version of Continuous Aggregates (#4269)
that stores the final aggregation state instead of the byte array of
the partial aggregate state, offering multiple opportunities for
optimization as well as a more compact form.
When upgrading to Timescale 2.7, newly created Continuous Aggregates
use the new format, but existing Continuous Aggregates keep using the
format they were defined with.
This change adds a procedure to upgrade existing Continuous Aggregates
from the old format to the new one with a simple call:

    test=# CALL cagg_migrate('conditions_summary_daily');

Closes #4424