This change makes the detach_data_node() function consistent with
other data node management functions by adding the missing
`if_attached` argument.
The function will not show an error if the data node is not
attached and `if_attached` is set to true.
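For illustration, a minimal call (the node and hypertable names are
hypothetical):

    SELECT detach_data_node('dn1',
                            hypertable  => 'conditions',
                            if_attached => true);

With `if_attached => true`, this completes without error even if `dn1`
is not attached to `conditions`.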
Issue: #2506
This release adds major new features and bugfixes since the 1.7.4 release.
We deem it moderate priority for upgrading.
This release adds the long-awaited support for distributed hypertables to
TimescaleDB. With 2.0, users can create distributed hypertables across
multiple instances of TimescaleDB, configured so that one instance serves
as an access node and multiple others as data nodes. All queries for a
distributed hypertable are issued to the access node, but inserted data
and queries are pushed down across data nodes for greater scale and
performance.
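For example, after adding data nodes from the access node, a
distributed hypertable can be created much like a regular one (node,
host, and table names are illustrative):

    SELECT add_data_node('dn1', host => 'dn1.example.com');
    SELECT add_data_node('dn2', host => 'dn2.example.com');

    CREATE TABLE conditions (
        time   TIMESTAMPTZ NOT NULL,
        device TEXT,
        temp   FLOAT
    );
    SELECT create_distributed_hypertable('conditions', 'time', 'device');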
This release also adds support for user-defined actions, allowing users to
define custom functions or procedures that are run by the TimescaleDB
automation framework.
In addition to these major new features, the 2.0 branch introduces _breaking_ changes
to APIs and existing features, such as continuous aggregates. These changes are not
backwards compatible and might require changes to clients and/or scripts that rely on
the previous APIs. Please review our updated documentation and test
thoroughly to ensure compatibility with your existing applications.
The most notable breaking changes in APIs are:
- Redefined functions for policies
- A continuous aggregate is now created with `CREATE MATERIALIZED VIEW`
  instead of `CREATE VIEW`, and automated refreshing requires adding a policy
  via `add_continuous_aggregate_policy` (see the example after this list)
- Redesign of informational views, including new (and more general) views for
information about policies and user-defined actions
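For example, a continuous aggregate is now created and refreshed on a
schedule like this (the table and view names are illustrative):

    CREATE MATERIALIZED VIEW conditions_summary
    WITH (timescaledb.continuous) AS
      SELECT device,
             time_bucket(INTERVAL '1 hour', time) AS bucket,
             avg(temp)
      FROM conditions
      GROUP BY device, bucket;

    SELECT add_continuous_aggregate_policy('conditions_summary',
           start_offset      => INTERVAL '1 month',
           end_offset        => INTERVAL '1 hour',
           schedule_interval => INTERVAL '1 hour');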
This release candidate is upgradable, so if you are on a previous release (e.g., 1.7.4)
you can upgrade to the release candidate and later expect to be able to upgrade to the
final 2.0 release. However, please carefully consider your compatibility requirements
_before_ upgrading.
**Major Features**
* #1923 Add support for distributed hypertables
* #2006 Add support for user-defined actions
* #2435 Move enterprise features to community
* #2437 Update Timescale License
**Minor Features**
* #2011 Constify TIMESTAMPTZ OP INTERVAL in constraints
* #2105 Support moving compressed chunks
**Bugfixes**
* #1843 Improve handling of "dropped" chunks
* #1886 Change ChunkAppend leader to use worker subplan
* #2116 Propagate privileges from hypertables to chunks
* #2263 Fix timestamp overflow in time_bucket optimization
* #2270 Fix handling of non-reference counted TupleDescs in gapfill
* #2325 Fix rename constraint/rename index
* #2370 Fix detection of hypertables in subqueries
* #2376 Fix caggs width expression handling on int based hypertables
* #2416 Check insert privileges to create chunk
* #2428 Allow owner change of continuous aggregate
* #2436 Propagate grants in continuous aggregates
This change updates the timescaledb_information.job_stats view to
check whether a job is currently scheduled in the bgw_config table.
If it is not, the `job_status` field will show `Paused` and the
`next_start` field will be NULL.
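For example (the job id is illustrative):

    SELECT job_status, next_start
      FROM timescaledb_information.job_stats
     WHERE job_id = 1000;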
Fixes #2488
This change renames the parameter `hypertable_or_cagg` in the functions
`drop_chunks` and `show_chunks` to `relation`, and changes the parameter
name `main_table` to `hypertable` or `relation`, depending on context.
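With the new names, calls using named parameters look like this (the
table name is illustrative):

    SELECT drop_chunks(relation => 'conditions',
                       older_than => INTERVAL '3 months');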
This change will add an invalidation to the
materialization_invalidation_log for any region earlier than the
ignore_invalidation_older_than parameter when updating a continuous
aggregate to 2.0. This is needed as we do not record invalidations
in this region prior to 2.0 and there is no way to ensure the
aggregate is up to date within this range.
Fixes #2450
This patch removes enterprise license support and moves the
move_chunk() function under the community license (TSL).
The license validation code has been reworked and simplified.
The previously used `timescaledb.license_key` GUC has been renamed to
`timescaledb.license`.
This change also makes the testing code stricter about the license in
use: the Apache test suite can now test only Apache-licensed
functions.
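For example, with the renamed GUC (the two accepted values are
`timescale` and `apache`):

    -- postgresql.conf
    timescaledb.license = 'timescale'   -- or 'apache'

    -- verify the active license from a session
    SHOW timescaledb.license;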
Fixes #2359
Commit 8e1e6036 changed chunk compression to disable autovacuum
on compressed chunks but did not apply the setting to chunks
compressed before that change. This patch therefore disables
autovacuum on chunks compressed with previous versions as well.
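The setting involved is the standard PostgreSQL storage parameter,
applied per chunk roughly as follows (the chunk name is illustrative):

    ALTER TABLE _timescaledb_internal.compress_hyper_2_4_chunk
      SET (autovacuum_enabled = false);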
As part of the 2.0 continuous aggregate changes, we are removing the
continuous_aggs_completed_threshold table. However, this may result
in currently running aggregates being considered complete even if
their completed threshold hadn't reached the invalidation threshold.
This change fixes that by adding an entry to the invalidation log
for any such aggregates.
Fixes #2314
This change removes the catalog options `refresh_lag`,
`max_interval_per_job` and `ignore_invalidation_older_than`, which are
no longer used.
Closes #2396
Rename the `refresh_interval` field in
`timescaledb_information.continuous_aggregate` view to match the
parameter name in `add_continuous_aggregate_policy`.
The `is_compressed` column of the `timescaledb_information.chunks`
view was defined as TEXT instead of BOOLEAN because `true` and
`false` were specified using string literals. This change defines
the column as BOOLEAN.
Fixes #2409
Stats for policies are exposed via the policy_stats view. This
change removes the continuous aggregate stats view, since the
thresholds it exposed are not relevant with the new API.
This change filters materialized hypertables from the hypertables
view, similar to how internal compression hypertables are
filtered.
Materialized hypertables are internal objects created as a side effect
of creating a continuous aggregate, and these internal hypertables are
still listed in the continuous_aggregates view.
Fixes #2383
When rebuilding the bgw_job table, the update script would not
preserve the state of the job-id sequence and reset it back to the
default, leading to failed job inserts until the sequence caught up.
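A sketch of the kind of fix needed, assuming the internal sequence and
table names in the `_timescaledb_config` schema:

    SELECT setval('_timescaledb_config.bgw_job_id_seq',
                  coalesce((SELECT max(id) FROM _timescaledb_config.bgw_job),
                           1000));  -- 1000: assumed default start for user jobs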
When the extension is updated to 2.0, we need to migrate
existing ignore_invalidation_older_than settings to the new
continuous aggregate policy framework.
The ignore_invalidation_older_than setting is mapped to the
start_interval of the refresh policy. If the default value is used,
it is mapped to a NULL start_interval; otherwise it is converted to
an interval value.
This moves the SQL definitions for policy and job APIs to their
separate files to improve code structure. Previously, all of these
user-visible API functions were located in the `bgw_scheduler.sql`
file, mixing internal and public functions and APIs.
To improve the structure, all API-related functions are now located
in their own distinct SQL files that have the `_api.sql` file
ending. Internal policy functions have been moved to
`policy_internal.sql`.
This change simplifies the name of the functions for adding and
removing a continuous aggregate policy. The functions are renamed
from:
- `add_refresh_continuous_aggregate_policy`
- `remove_refresh_continuous_aggregate_policy`
to
- `add_continuous_aggregate_policy`
- `remove_continuous_aggregate_policy`
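For example, with the new names (the view name is illustrative, and the
parameter names assume the final 2.0 API):

    SELECT add_continuous_aggregate_policy('conditions_summary',
           start_offset      => INTERVAL '1 month',
           end_offset        => INTERVAL '1 hour',
           schedule_interval => INTERVAL '1 hour');

    SELECT remove_continuous_aggregate_policy('conditions_summary');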
Fixes #2320
With the new continuous aggregate API, some of
the parameters used to create a continuous agg are
now obsolete. Remove refresh_lag, max_interval_per_job
and ignore_invalidation_older_than information from
timescaledb_information.continuous_aggregates.
This maintenance release contains bugfixes since the 1.7.3 release. We deem it
high priority for upgrading if TimescaleDB is deployed with replicas (synchronous
or asynchronous).
In particular the fixes contained in this maintenance release address an issue with
running queries on compressed hypertables on standby nodes.
**Bugfixes**
* #2340 Remove tuple lock on select path
The function `cagg_watermark` returns the time threshold at which
materialized data ends and raw query data begins in a real-time
aggregation query (union view).
The watermark is simply the completed threshold of the continuous
aggregate materializer. However, since the completed threshold will no
longer exist with the new continuous aggregates, the watermark
function has been changed to return the end of the last bucket in the
materialized hypertable.
In most cases, the completed threshold is the same as the end of the
last materialized bucket. However, there are situations when it is
not; for example, when there is a filter in the view query, some
buckets might not be materialized because no data matched the
filter. The completed threshold would move ahead regardless. For
instance, if there is only data from "device_2" in the raw hypertable
and the aggregate has a filter `device=1`, there will be no buckets
materialized although the completed threshold moves forward. Therefore
the new watermark function might sometimes return a lower watermark
than the old function. A similar situation explains the different
output in one of the union view tests.
This change renames the function to approximate_row_count() and adds
support for regular tables. It returns a row count estimate for the
given table instead of a table list.
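For example (the table name is illustrative):

    SELECT approximate_row_count('conditions');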
Support add and remove continuous aggregate policy functions.
Integrate policy execution with the refresh API for continuous
aggregates.
The old API adds a job automatically for a continuous aggregate.
This is an explicit step with the new API, so this functionality is
removed.
Refactor some of the utility functions so that the code can be shared
by multiple policies.
We change the syntax for defining continuous aggregates to use `CREATE
MATERIALIZED VIEW` rather than `CREATE VIEW`. The command still creates
a view, whereas a regular `CREATE MATERIALIZED VIEW` creates a table.
An error is raised if `CREATE VIEW` is used to create a continuous
aggregate, redirecting the user to `CREATE MATERIALIZED VIEW`.
In a similar vein, `DROP MATERIALIZED VIEW` is used for continuous
aggregates and continuous aggregates cannot be dropped with `DROP
VIEW`.
Continuous aggregates are altered using `ALTER MATERIALIZED VIEW`
rather than `ALTER VIEW`, so we ensure that it works for `ALTER
MATERIALIZED VIEW` and gives an error if you try to use `ALTER VIEW` to
change a continuous aggregate.
Note that we allow `ALTER VIEW ... SET SCHEMA` to be used with the
partial view as well as with the direct view, so this is handled as a
special case.
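For example (the view name is illustrative):

    ALTER MATERIALIZED VIEW conditions_summary
      SET (timescaledb.materialized_only = true);

    DROP MATERIALIZED VIEW conditions_summary;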
Fixes #2233
Co-authored-by: Erik Nordström <erik@timescale.com>
Co-authored-by: Mats Kindahl <mats@timescale.com>
This maintenance release contains bugfixes since the 1.7.2 release. We deem it high
priority for upgrading.
In particular the fixes contained in this maintenance release address issues in compression,
drop_chunks and the background worker scheduler.
**Bugfixes**
* #2059 Improve inferring start and stop arguments from gapfill query
* #2067 Support moving compressed chunks
* #2068 Apply SET TABLESPACE for compressed chunks
* #2090 Fix index creation with IF NOT EXISTS for existing indexes
* #2092 Fix delete on tables involving hypertables with compression
* #2164 Fix telemetry installed_time format
* #2184 Fix background worker scheduler memory consumption
* #2222 Fix `negative bitmapset member not allowed` in decompression
* #2255 Propagate privileges from hypertables to chunks
* #2256 Fix segfault in chunk_append with space partitioning
* #2259 Fix recursion in cache processing
* #2261 Lock dimension slice tuple when scanning
**Thanks**
* @akamensky for reporting an issue with drop_chunks and ChunkAppend with space partitioning
* @dewetburger430 for reporting an issue with setting tablespace for compressed chunks
* @fvannee for reporting an issue with cache invalidation
* @nexces for reporting an issue with ChunkAppend on space-partitioned hypertables
* @PichetGoulu for reporting an issue with index creation and IF NOT EXISTS
* @prathamesh-sonpatki for contributing a typo fix
* @sezaru for reporting an issue with background worker scheduler memory consumption
This change removes, simplifies, and unifies code related to
`drop_chunks` and `show_chunks`. As a result of prior changes to
`drop_chunks`, e.g., making table relid mandatory and removing
cascading options, there's an opportunity to clean up and simplify the
rather complex code for dropping and showing chunks.
In particular, `show_chunks` is now consistent with `drop_chunks`; the
relid argument is mandatory, a continuous aggregate can be used in
place of a hypertable, and the input time ranges are checked and
handled in the same way.
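For example, `show_chunks` now accepts the same named parameters and
can take a continuous aggregate in place of a hypertable (names are
illustrative):

    SELECT show_chunks('conditions_summary',
                       older_than => INTERVAL '1 week');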
Unused code is also removed; for instance, code that cascaded the
dropping of chunks to continuous aggregates remained in the code base
even though the option no longer exists.
This patch adds functionality to schedule arbitrary functions
or procedures as background jobs.
New functions:
    add_job(
        proc REGPROC,
        schedule_interval INTERVAL,
        config JSONB DEFAULT NULL,
        initial_start TIMESTAMPTZ DEFAULT NULL,
        scheduled BOOL DEFAULT true
    )
Add a job that runs `proc` every `schedule_interval`. `proc` can
be either a function or a procedure implemented in any language.
delete_job(job_id INTEGER)
Deletes the job.
run_job(job_id INTEGER)
Execute a job in the current session.
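A minimal usage sketch (the procedure name and config are illustrative,
and it assumes the job procedure takes the job id and a JSONB config):

    CREATE PROCEDURE custom_action(job_id INT, config JSONB)
    LANGUAGE plpgsql AS $$
    BEGIN
        -- the job body; here we just log the invocation
        RAISE NOTICE 'job % running with config %', job_id, config;
    END
    $$;

    SELECT add_job('custom_action', INTERVAL '1 hour',
                   config => '{"retention_period": "1 month"}');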
This patch changes the signature from cagg_watermark(oid) to
cagg_watermark(int). Since this is an API-breaking change, it could
not be done in an earlier release.
When a continuous aggregate is refreshed, it also needs to move the
invalidation threshold in case the refresh window stretches beyond the
current threshold. The new invalidation threshold must be set in its
own transaction during the refresh, which can only be done if the
refresh command is a procedure.
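Since the refresh command is a procedure, a manual refresh is invoked
with `CALL` (the view name and refresh window are illustrative):

    CALL refresh_continuous_aggregate('conditions_summary',
                                      '2020-01-01', '2020-06-01');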
This patch adds policies to the update test to ensure their
configuration is properly migrated during updates. This patch
also fixes inconsistent background job application names
and adjusts them in the update script.
This patch changes the application name for background worker jobs
to include the job_id which makes the application name unique and
allows joining against pg_stat_activity to get a list of currently
running background worker processes. This change also makes
identifying misbehaving jobs easier from the postgres log as the
application name can be included in the log line.
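For example, running background workers can be listed along these
lines (the exact application_name format, with the job id in
brackets, is an assumption):

    SELECT pid, application_name
      FROM pg_stat_activity
     WHERE backend_type = 'background worker'
       AND application_name LIKE '%[%]%';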