The `include_tiered_data` option allows users to override the value of
`timescaledb.enable_tiered_reads`, defined at the instance level, for a
particular continuous aggregate policy.
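A minimal sketch of using the option, assuming a continuous aggregate
named `metrics_hourly` (the offsets and intervals are illustrative):

    SELECT add_continuous_aggregate_policy('metrics_hourly',
        start_offset        => INTERVAL '1 month',
        end_offset          => INTERVAL '1 hour',
        schedule_interval   => INTERVAL '1 hour',
        include_tiered_data => true);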
This adds a set of aliases for existing compression functions and
views to better reflect the column store capabilities of compressed
hypertables.
The procedures `convert_to_rowstore` and `convert_to_columnstore` are
added as aliases to `decompress_chunk` and `compress_chunk`
respectively. We change these from functions to procedures to be able
to split the conversion into multiple transactions and avoid heavy
locking for longer periods.
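For example, converting an existing chunk back and forth (the chunk
name below is illustrative):

    CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk');
    CALL convert_to_rowstore('_timescaledb_internal._hyper_1_1_chunk');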
The procedures `add_columnstore_policy` and `remove_columnstore_policy`
are added as aliases for `add_compression_policy` and
`remove_compression_policy`, respectively.
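A minimal sketch, assuming a hypertable named `metrics` and that the
policy accepts an `after` interval:

    CALL add_columnstore_policy('metrics', after => INTERVAL '7 days');
    CALL remove_columnstore_policy('metrics');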
The functions `hypertable_columnstore_stats` and
`chunk_columnstore_stats` are added as aliases for
`hypertable_compression_stats` and `chunk_compression_stats`
respectively.
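For example, assuming a hypertable named `metrics`:

    SELECT * FROM hypertable_columnstore_stats('metrics');
    SELECT * FROM chunk_columnstore_stats('metrics');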
The views `hypertable_columnstore_settings`,
`chunk_columnstore_settings`, and `columnstore_settings` are added as
aliases for the corresponding views.
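Assuming the new views live alongside the existing compression
settings views in the `timescaledb_information` schema, they can be
queried like:

    SELECT * FROM timescaledb_information.hypertable_columnstore_settings;
    SELECT * FROM timescaledb_information.chunk_columnstore_settings;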
We also add aliases for parameters to the functions and procedures
that accept them (see the sketch after this list):
- The parameter `timescaledb.enable_columnstore` is an alias for
`timescaledb.compress`
- The parameter `timescaledb.segmentby` is an alias for
`timescaledb.compress_segmentby`.
- The parameter `timescaledb.orderby` is an alias for
`timescaledb.compress_orderby`.
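A minimal sketch of the new option names, assuming a hypertable
`metrics` with columns `device_id` and `time`:

    ALTER TABLE metrics SET (
        timescaledb.enable_columnstore = true,
        timescaledb.segmentby = 'device_id',
        timescaledb.orderby = 'time DESC');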
Instead of using the `compress_using` parameter with a table access
method name, the boolean parameter `hypercore_use_access_method` is now
used, which avoids having to provide an access method name when using
the table access method for compression.
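A minimal sketch, assuming the parameter is exposed on
`compress_chunk` and a hypertable named `metrics`:

    SELECT compress_chunk(c, hypercore_use_access_method => true)
      FROM show_chunks('metrics', older_than => INTERVAL '7 days') AS c;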
The retention and compression policies can now use the
`drop_created_before` and `compress_created_before` arguments,
respectively, to select chunks based on their creation times.
Creation times are not yet supported for CAggs.
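For example, assuming a hypertable named `metrics`:

    SELECT add_retention_policy('metrics',
        drop_created_before => INTERVAL '6 months');
    SELECT add_compression_policy('metrics',
        compress_created_before => INTERVAL '7 days');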
Currently, the next start of a scheduled background job is
calculated by adding the `schedule_interval` to its finish
time. This does not allow scheduling jobs to execute at fixed
times, as the next execution is "shifted" by the job duration.
This commit introduces the option to execute a job on a fixed
schedule instead. Users are expected to provide an `initial_start`
parameter to which subsequent job executions are aligned. The next
start is calculated by computing the next `time_bucket` of the finish
time with `initial_start` as the origin.
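An illustration of the alignment (not the internal implementation):
with a one-hour `schedule_interval` and `initial_start` at midnight, a
run finishing at 10:23 gets its next start at 11:00:

    SELECT time_bucket(INTERVAL '1 hour',
                       TIMESTAMPTZ '2024-01-01 10:23:00+00',  -- finish time
                       origin => TIMESTAMPTZ '2024-01-01 00:00:00+00')
           + INTERVAL '1 hour' AS next_start;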
An `initial_start` parameter is added to the compression, retention,
reorder and continuous aggregate `add_policy` signatures. By passing
it upon policy creation, users indicate that the policy will execute
on a fixed schedule; if `initial_start` is not provided, the policy
runs on a drifting schedule.
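For example, a compression policy on a fixed schedule, assuming a
hypertable named `metrics`:

    SELECT add_compression_policy('metrics',
        compress_after => INTERVAL '7 days',
        initial_start  => '2024-01-01 00:00:00+00');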
To allow users to pick a drifting schedule when registering a
user-defined action (UDA), an additional parameter `fixed_schedule` is
added to `add_job`; setting it to false selects the old behavior.
Additionally, an optional TEXT parameter, `timezone`, is added to both
`add_job` and `add_policy` signatures, to address the 1-hour shift in
execution time caused by DST switches. As internally the next start of
a fixed schedule job is calculated using `time_bucket`, the `timezone`
parameter allows using timezone-aware buckets to calculate
the next start.
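A minimal sketch for a user-defined action, assuming a procedure
`custom_vacuum_job` has already been created:

    -- 'custom_vacuum_job' is a hypothetical procedure defined elsewhere.
    SELECT add_job('custom_vacuum_job', INTERVAL '1 day',
        initial_start  => '2024-01-01 01:00:00+00',
        fixed_schedule => true,
        timezone       => 'Europe/Berlin');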
At the time of adding or updating policies, it is checked whether the
policies are compatible with each other and with those already on the
CAgg.
These checks are:
- refresh and compression policies should not overlap
- refresh and retention policies should not overlap
- compression and retention policies should not overlap
Co-authored-by: Markos Fountoulakis <markos@timescale.com>
- Add infinity for refresh window range
  To create an open-ended refresh policy, use +/- infinity for
  `end_offset` and `start_offset`, respectively.
- Add `remove_all_policies` function
  This removes all the policies on a given CAgg.
- Remove parameter `refresh_schedule_interval`
- Fix downgrade scripts
- Fix IF EXISTS case
Co-authored-by: Markos Fountoulakis <markos@timescale.com>
This simplifies the process of adding policies for CAggs. Now, with a
single SQL statement, all the policies can be added for a given CAgg.
Similarly, all the policies can be removed or modified via a single
SQL statement.
This also adds a new function as well as a view to show all the
policies on a continuous aggregate.
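A sketch of the consolidated API; the schema, parameter names, and
the CAgg name are assumptions based on the experimental interface:

    SELECT timescaledb_experimental.add_policies('metrics_hourly',
        refresh_start_offset => INTERVAL '1 month',
        refresh_end_offset   => INTERVAL '1 hour',
        compress_after       => INTERVAL '3 months',
        drop_after           => INTERVAL '1 year');
    SELECT timescaledb_experimental.show_policies('metrics_hourly');
    SELECT timescaledb_experimental.remove_all_policies('metrics_hourly');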
Add a parameter `schedule_interval` to retention and
compression policies to allow users to define the schedule
interval. Fall back to previous default if no value is
specified.
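For example, assuming a hypertable named `metrics`:

    SELECT add_retention_policy('metrics', drop_after => INTERVAL '1 year',
        schedule_interval => INTERVAL '12 hours');
    SELECT add_compression_policy('metrics', compress_after => INTERVAL '7 days',
        schedule_interval => INTERVAL '1 hour');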
Fixes #3806
This patch locks down search_path in extension install and update
scripts to only contain pg_catalog, this requires that any reference
in those scripts is fully qualified. Additionally we add explicit
create commands to all update scripts for objects added to the
public schema. This change will make update scripts fail if a
function with an identical signature already exists when installing
or upgrading, instead of reusing the existing object.
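A hedged sketch of the pattern (object names are illustrative):
references are schema-qualified, and plain CREATE rather than CREATE
OR REPLACE is used, so a pre-existing object with the same signature
makes the script fail:

    SET LOCAL search_path TO pg_catalog;
    CREATE FUNCTION public.example_api(val pg_catalog.int8)
    RETURNS pg_catalog.int8 LANGUAGE SQL AS $$ SELECT val $$;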
Renaming the parameter `hypertable_or_cagg` in the functions
`drop_chunks` and `show_chunks` to `relation`, and changing the
parameter name from `main_table` to `hypertable` or `relation`,
depending on context.
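For example, using the renamed parameter with named-argument notation
(the hypertable name is illustrative):

    SELECT show_chunks(relation => 'metrics', older_than => INTERVAL '1 month');
    SELECT drop_chunks(relation => 'metrics', older_than => INTERVAL '1 year');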
This moves the SQL definitions for policy and job APIs to their
separate files to improve code structure. Previously, all of these
user-visible API functions were located in the `bgw_scheduler.sql`
file, mixing internal and public functions and APIs.
To improve the structure, all API-related functions are now located
in their own distinct SQL files that have the `_api.sql` file
ending. Internal policy functions have been moved to
`policy_internal.sql`.