This release contains bug fixes since the 2.14.1 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #6655 Fix segfault in cagg_validate_query
* #6660 Fix refresh on empty CAgg with variable bucket
* #6670 Don't try to compress osm chunks
**Thanks**
* @kav23alex for reporting a segfault in cagg_validate_query
So far, bucket_origin was defined as a Timestamp but used as a
TimestampTz in many places. This commit changes this and unifies the
usage of the variable.
The catalog table continuous_aggs_bucket_function is currently only used
for variable bucket sizes. Information about fixed-size buckets is
stored only in the table continuous_agg. This causes some problems
(e.g., we have redundant fields for the bucket_size, and fixed-size buckets
with offsets are not supported).
This commit is the first in a series of commits that refactor the catalog
for the CAgg time_bucket function. The goals are:
* Remove the CAgg redundant attributes in the catalog
* Create an entry in continuous_aggs_bucket_function for all CAggs
that use time_bucket
This first commit refactors the continuous_aggs_bucket_function table
and prepares it for more generic use. Not all attributes are used yet;
this will change in follow-up PRs.
Historically we have preserved chunk metadata because the old format of the
Continuous Aggregate has the `chunk_id` column in the materialization
hypertable, so in order not to leave dangling chunk ids there we just
marked chunks as dropped when dropping them.
In #4269 we introduced a new Continuous Aggregate format that doesn't
store the `chunk_id` in the materialization hypertable anymore, so it's
safe to also remove the metadata when dropping a chunk, provided all associated
Continuous Aggregates are in the new format.
Also added a post-update SQL script to clean up unnecessary dropped chunk
metadata in our catalog.
Closes #6570
This release contains bug fixes since the 2.14.0 release.
We recommend that you upgrade at the next available opportunity.
**Features**
* #6630 Add views for per chunk compression settings
**Bugfixes**
* #6636 Fixes extension update of compressed hypertables with dropped columns
* #6637 Reset sequence numbers on non-rollup compression
* #6639 Disable default indexscan for compression
* #6651 Fix DecompressChunk path generation with per chunk settings
**Thanks**
* @anajavi for reporting an issue with extension update of compressed hypertables
While the downgrade script doesn't combine multiple versions into a single
script, since we only create the script for the previous version, fixing
this will make backporting in the release branch easier.
Adjust DecompressChunk path generation to use the per-chunk settings
and not the hypertable settings when building compression info.
This patch also fixes the missing chunk configuration generation
in the update script, which was masked by this bug.
Version 2.14.0 removes the multi-node code. However, there were a few
leftovers for the handling of distributed CAggs. This commit cleans up
the CAgg code and removes the no longer needed functions:
invalidation_cagg_log_add_entry(integer,bigint,bigint);
invalidation_hyper_log_add_entry(integer,bigint,bigint);
materialization_invalidation_log_delete(integer);
invalidation_process_cagg_log(integer,integer,regtype,bigint,
bigint,integer[],bigint[],bigint[]);
invalidation_process_cagg_log(integer,integer,regtype,bigint,
bigint,integer[],bigint[],bigint[],text[]);
invalidation_process_hypertable_log(integer,integer,regtype,
integer[],bigint[],bigint[]);
invalidation_process_hypertable_log(integer,integer,regtype,
integer[],bigint[],bigint[],text[]);
hypertable_invalidation_log_delete(integer);
This release contains performance improvements and bug fixes since
the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.
In addition, it includes these noteworthy features:
* Ability to change compression settings on existing compressed hypertables at any time.
New compression settings take effect on any new chunks that are compressed after the change.
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations to avoid decompressing large numbers of tuples and causing storage issues (100k limit by default, configurable; see the sketch after this list)
* Helper functions for determining compression settings
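A rough illustration of the configurable limit mentioned in the list above; the GUC name used here is an assumption based on the feature description and is not spelled out in these notes:

```sql
-- Assumed GUC name for the DML decompression limit introduced by #6566;
-- 100000 matches the default mentioned above.
SET timescaledb.max_tuples_decompressed_per_dml_transaction = 100000;
-- Presumably setting it to 0 lifts the limit entirely (assumption).
SET timescaledb.max_tuples_decompressed_per_dml_transaction = 0;
```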
**For this release only**, you will need to restart the database before running `ALTER EXTENSION`
**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13,
Multi-node is no longer supported starting with TimescaleDB 2.14.
TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md).
If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).
**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which, starting with TimescaleDB 2.14,
works on both uncompressed and partially compressed chunks.
The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress
old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter).
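A sketch of the recommended usage going forward; the chunk name is hypothetical, and the `recompress` parameter name is the one introduced in this release:

```sql
-- Fully compress an uncompressed or partially compressed chunk.
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');

-- Recompress an already compressed chunk so it picks up changed compression settings.
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk', recompress => true);
```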
**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk
**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables.
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive
**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of
CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
This patch deprecates the recompress_chunk procedure as all that
functionality is covered by compress_chunk now. This patch also adds a
new optional boolean argument to compress_chunk to force applying
changed compression settings to existing compressed chunks.
If a lot of chunks are involved, the current PL/pgSQL function
that computes the size of each chunk via a nested loop is pretty slow.
Additionally, the current functionality makes a system call to get the
file size on disk for each chunk every time this function is called,
which again slows things down. We now have an approximate function which
is implemented in C to avoid the issues in the PL/pgSQL function.
Additionally, this function uses per-backend caching via the
smgr layer to compute the approximate size cheaply. The PG cache
invalidation clears the cached size for a chunk when DML happens
on it, so the size cache is able to pick up the latest size in a
matter of minutes. Also, due to the backend caching, any long-running
session will only fetch the latest data for new or modified chunks and can
use the cached data (which is calculated afresh the first time around)
effectively for older chunks.
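A minimal sketch of calling the new C implementation; the function name `hypertable_approximate_size` is an assumption (the feature list above only says "approximate hypertable size"), and the hypertable `metrics` is hypothetical:

```sql
-- Approximate size in bytes, served from the per-backend cache when possible.
SELECT hypertable_approximate_size('metrics');
SELECT pg_size_pretty(hypertable_approximate_size('metrics'));
```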
Logging and caching related tables from the timescaledb extension
should not be dumped using pg_dump. Our scripts specify a few such
unwanted tables. Apart from being unnecessary, the "job_errors" table had
some restricted permissions, causing additional problems in pg_dump.
We no longer include such tables in the dump.
Fixes #5449
This patch changes the dump configuration for
_timescaledb_catalog.metadata to include all entries. To allow loading
logical dumps with this configuration, an insert trigger is added that
turns uniqueness conflicts into updates so they do not block the restore.
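An illustrative sketch of such a trigger (the names below are hypothetical, not the extension's actual implementation): a BEFORE INSERT trigger that turns a duplicate key into an UPDATE so restoring a logical dump does not fail on pre-existing metadata rows.

```sql
CREATE FUNCTION metadata_insert_as_update() RETURNS trigger AS $$
BEGIN
    -- If the key already exists, absorb the INSERT as an UPDATE.
    UPDATE _timescaledb_catalog.metadata SET value = NEW.value WHERE key = NEW.key;
    IF FOUND THEN
        RETURN NULL;  -- suppress the conflicting INSERT
    END IF;
    RETURN NEW;       -- no conflict: let the INSERT proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER metadata_insert_as_update
    BEFORE INSERT ON _timescaledb_catalog.metadata
    FOR EACH ROW EXECUTE FUNCTION metadata_insert_as_update();
```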
This patch implements changes to the compressed hypertable to allow per-chunk
configuration. To enable this, the compressed hypertable can no
longer be in an inheritance tree, as the schema of the compressed chunk
is determined by the compression settings. While this patch implements
all the underlying infrastructure changes, the restrictions for changing
compression settings remain intact and will be lifted in a follow-up patch.
Previously we would only check whether data nodes were defined or distributed
hypertables were present, which might not be true on data nodes themselves. We now
prevent the update on any installation that has dist_uuid defined, which
is also true on data nodes.
The extension state is not easily accessible in release builds, which
makes debugging issues with the loader very difficult. This commit
introduces a new schema `_timescaledb_debug` and makes the function
`ts_extension_get_state` available also in release builds as
`_timescaledb_debug.extension_state`.
See #1682
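On a release build the state can now be inspected directly; schema and function name as given above:

```sql
SELECT _timescaledb_debug.extension_state();
```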
This release contains bug fixes since the 2.13.0 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #6365 Use numrows_pre_compression in approximate row count
* #6377 Use processed group clauses in PG16
* #6384 Change bgw_log_level to use PGC_SUSET
* #6393 Disable vectorized sum for expressions.
* #6408 Fix groupby pathkeys for gapfill in PG16
* #6428 Fix index matching during DML decompression
* #6439 Fix compressed chunk permission handling on PG16
* #6443 Fix lost concurrent CAgg updates
* #6454 Fix unique expression indexes on compressed chunks
* #6465 Fix use of freed path in decompression sort logic
**Thanks**
* @MA-MacDonald for reporting an issue with gapfill in PG16
* @aarondglover for reporting an issue with unique expression indexes on compressed chunks
* @adriangb for reporting an issue with security barrier views on pg16
Before this change, during downgrade script generation we would
always fetch the pre-update script from the previous version and
prepend it to the generated scripts. This limits what can be
referenced in the pre-update script and also what is possible
within the downgrade itself.
This patch splits the pre-update script into a generic part that
is used for both update and downgrade, and an update-specific part. We could
later also add a downgrade-specific part, but currently it is not
needed. This change is necessary because we reference a timescaledb
view in the pre-update script, which prevents changes to that view.
Remove the code used by multinode to handle remote connections.
This patch completely removes tsl/src/remote and any remaining
distributed hypertable checks.
This patch drops the catalog table _timescaledb_catalog.hypertable_compression
and stores those settings in _timescaledb_catalog.compression_settings instead.
The storage format is changed: the new table has 1 entry per relation
instead of 1 entry per column and has no dependency on hypertables.
All other aspects of compression remain the same. This refactoring is
done to enable per-chunk compression settings in a follow-up patch.
This release contains performance improvements, an improved hypertable DDL API
and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next
available opportunity.
In addition, it includes these noteworthy features:
* Full PostgreSQL 16 support for all existing features
* Vectorized aggregation execution for sum()
* Track chunk creation time used in retention/compression policies
**Deprecation notice: Multi-node support**
TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node
support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it
[here](docs/MultiNodeDeprecation.md).
If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).
**PostgreSQL 13 deprecation announcement**
We will continue supporting PostgreSQL 13 until April 2024. Closer to that time, we will
announce the specific version of TimescaleDB in which PostgreSQL 13 support will not be
included going forward.
**Starting from TimescaleDB 2.13.0**
* No Amazon Machine Images (AMI) are published. If you previously used an AMI, please
use another [installation method](https://docs.timescale.com/self-hosted/latest/install/)
* Continuous Aggregates are materialized only (non-realtime) by default
**Features**
* #5575 Add chunk-wise sorted paths for compressed chunks
* #5761 Simplify hypertable DDL API
* #5890 Reduce WAL activity by freezing compressed tuples immediately
* #6050 Vectorized aggregation execution for sum()
* #6062 Add metadata for chunk creation time
* #6077 Make Continuous Aggregates materialized only (non-realtime) by default
* #6177 Change show_chunks/drop_chunks using chunk creation time
* #6178 Show batches/tuples decompressed during DML operations in EXPLAIN output
* #6185 Keep track of catalog version
* #6227 Use creation time in retention/compression policy
* #6307 Add SQL function cagg_validate_query
**Bugfixes**
* #6188 Add GUC for setting background worker log level
* #6222 Allow enabling compression on hypertable with unique expression index
* #6240 Check if worker registration succeeded
* #6254 Fix exception detail passing in compression_policy_execute
* #6264 Fix missing bms_del_member result assignment
* #6275 Fix negative bitmapset member not allowed in compression
* #6280 Potential data loss when compressing a table with a partial index that matches compression order.
* #6289 Add support for startup chunk exclusion with aggs
* #6290 Repair relacl on upgrade
* #6297 Fix segfault when creating a cagg using a NULL width in time bucket function
* #6305 Make timescaledb_functions.makeaclitem strict
* #6332 Fix typmod and collation for segmentby columns
* #6339 Fix tablespace with constraints
* #6343 Enable segmentwise recompression in compression policy
**Thanks**
* @fetchezar for reporting an issue with compression policy error messages
* @jflambert for reporting the background worker log level issue
* @torazem for reporting an issue with compression and large oids
* @fetchezar for reporting an issue in the compression policy
* @lyp-bobi for reporting an issue with tablespace with constraints
* @pdipesh02 for contributing to the implementation of the metadata for chunk creation time,
the generalized hypertable API, and show_chunks/drop_chunks using chunk creation time
* @lkshminarayanan for all his work on PG16 support
If a hypertable uses a non-default tablespace for its primary or
unique constraints with additional DEFERRABLE or INITIALLY DEFERRED
characteristics, then any chunk creation fails with a syntax error. We
now set the tablespace via a separate command for such constraints on
the chunks.
Fixes #6338
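A sketch of the failing scenario described above (the table and tablespace names are hypothetical, and the tablespace must already exist):

```sql
CREATE TABLE metrics (
    "time"    timestamptz NOT NULL,
    device_id int,
    value     float,
    CONSTRAINT metrics_pkey PRIMARY KEY ("time", device_id)
        USING INDEX TABLESPACE history_space
        DEFERRABLE INITIALLY DEFERRED
);
SELECT create_hypertable('metrics', 'time');

-- Before the fix, the first chunk-creating INSERT failed with a syntax error;
-- the constraint's tablespace is now applied to the chunk via a separate command.
INSERT INTO metrics VALUES (now(), 1, 1.0);
```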
We added a workaround for segmentby columns with incorrect typmod
and collation in c73c5a74b9 but did not adjust pre-existing relations.
This patch fixes any existing relations where the segmentby columns
of compressed chunks have incorrect typmod and collation, and removes
the code workaround.
The retention and compression policies can now use drop_created_before
and compress_created_before arguments respectively to specify chunk
selection using their creation times.
We don't support creation times for CAggs, yet.
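A sketch of the new arguments on a hypothetical hypertable `metrics`; the argument names are the ones described above:

```sql
-- Drop chunks created more than six months ago.
SELECT add_retention_policy('metrics', drop_created_before => INTERVAL '6 months');

-- Compress chunks created more than seven days ago.
SELECT add_compression_policy('metrics', compress_created_before => INTERVAL '7 days');
```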
With this function it is possible to execute the Continuous Aggregate query
validation over an arbitrary query string, without the need to actually
create the Continuous Aggregate.
It can be used, for example, to check the most frequent queries (perhaps
using `pg_stat_statements`), validate them, and check whether there are queries
that could potentially be turned into a Continuous Aggregate.
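A sketch of validating a candidate query; the schema `_timescaledb_functions` is an assumption and the query string and table are hypothetical:

```sql
SELECT *
  FROM _timescaledb_functions.cagg_validate_query(
    $$ SELECT time_bucket('1 hour', time) AS bucket, avg(value)
         FROM metrics
        GROUP BY bucket $$);
```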
If users have accidentally been removed from `pg_authid` as a result of
bugs where dropping a user did not revoke privileges from all tables
where they had privileges, it will not be possible to create new chunks
since these require the user to be found when copying the privileges
for the parent table (either compressed hypertable or normal
hypertable).
To fix the situation, we repair the `pg_class` table when updating the
extension by modifying the `relacl` for relations and removing any user
that does not have an entry in `pg_authid`.
A repair function `_timescaledb_functions.repair_relation_acls` is
added that performs the job. A version of `makeaclitem` from PG16 that accepts
a comma-separated list of privileges, used as part of the repair, is also added as
`_timescaledb_functions.makeaclitem`.
- Updated the show_chunks and drop_chunks APIs to get the affected
  chunks using chunk creation time metadata, based on a
  "date/time/interval"-like boundary specified even for INTEGER
  partitioning columns (see the sketch below).
- We honor the "integer_now" function if it is specified, so as to keep
  backwards compatibility with the existing behavior.
Co-authored-by: Dipesh Pandit <dipesh@timescale.com>
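A sketch of the adjusted APIs on a hypothetical hypertable `events`; the argument name `created_before` is an assumption, since the text above does not spell it out:

```sql
-- Select or drop chunks by their creation time rather than by the data they contain.
SELECT show_chunks('events', created_before => now() - INTERVAL '3 months');
SELECT drop_chunks('events', created_before => now() - INTERVAL '3 months');
```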
When we compress a chunk, we create a new compressed chunk for storing
the compressed data. So far, the tuples were just inserted into the
compressed chunk and frozen by a later vacuum run.
However, freezing tuples causes WAL activity, which can be optimized because
the compressed chunk is created in the same transaction as the tuples.
This patch reduces the WAL activity by storing these tuples directly as
frozen and preventing a freeze operation in the future. This approach is
similar to PostgreSQL's COPY FREEZE.
This release contains bug fixes since the 2.12.1 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #6155 Align gapfill bucket generation with time_bucket
* #6181 Ensure fixed_schedule field is populated
* #6210 Fix EXPLAIN ANALYZE for compressed DML
This patch stores the current catalog version in an internal
table to allow us to verify catalog and code version match.
When doing dump/restore, people occasionally report very unusual
errors, and during investigation it is discovered that they loaded
a dump from an older version and ran it with a later code version.
This allows us to detect mismatches between the installed code version
and the loaded dump version. The version number in the metadata table
will be kept updated in upgrade and downgrade scripts.
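A sketch of how such a mismatch could be inspected manually; the metadata key name used here is an assumption:

```sql
-- Catalog version recorded in the metadata table (key name assumed).
SELECT value FROM _timescaledb_catalog.metadata WHERE key = 'timescaledb_version';
-- Code version of the installed extension.
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';
```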