This release contains performance improvements, an improved hypertable DDL API and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next available opportunity.

In addition, it includes these noteworthy features:

* Full PostgreSQL 16 support for all existing features
* Vectorized aggregation execution for sum()
* Track chunk creation time used in retention/compression policies

**Deprecation notice: Multi-node support**

TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the [migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**PostgreSQL 13 deprecation announcement**

We will continue supporting PostgreSQL 13 until April 2024. Closer to that time, we will announce the specific version of TimescaleDB in which PostgreSQL 13 support will no longer be included.

**Starting from TimescaleDB 2.13.0**

* No Amazon Machine Images (AMI) are published. If you previously used an AMI, please use another [installation method](https://docs.timescale.com/self-hosted/latest/install/)
* Continuous Aggregates are materialized only (non-realtime) by default

**Features**
* #5575 Add chunk-wise sorted paths for compressed chunks
* #5761 Simplify hypertable DDL API
* #5890 Reduce WAL activity by freezing compressed tuples immediately
* #6050 Vectorized aggregation execution for sum()
* #6062 Add metadata for chunk creation time
* #6077 Make Continuous Aggregates materialized only (non-realtime) by default
* #6177 Change show_chunks/drop_chunks using chunk creation time
* #6178 Show batches/tuples decompressed during DML operations in EXPLAIN output
* #6185 Keep track of catalog version
* #6227 Use creation time in retention/compression policy
* #6307 Add SQL function cagg_validate_query

**Bugfixes**
* #6188 Add GUC for setting background worker log level
* #6222 Allow enabling compression on hypertable with unique expression index
* #6240 Check if worker registration succeeded
* #6254 Fix exception detail passing in compression_policy_execute
* #6264 Fix missing bms_del_member result assignment
* #6275 Fix negative bitmapset member not allowed in compression
* #6280 Fix potential data loss when compressing a table with a partial index that matches compression order
* #6289 Add support for startup chunk exclusion with aggs
* #6290 Repair relacl on upgrade
* #6297 Fix segfault when creating a cagg using a NULL width in time bucket function
* #6305 Make timescaledb_functions.makeaclitem strict
* #6332 Fix typmod and collation for segmentby columns
* #6339 Fix tablespace with constraints
* #6343 Enable segmentwise recompression in compression policy

**Thanks**
* @fetchezar for reporting an issue with compression policy error messages
* @jflambert for reporting the background worker log level issue
* @torazem for reporting an issue with compression and large oids
* @fetchezar for reporting an issue in the compression policy
* @lyp-bobi for reporting an issue with tablespace with constraints
* @pdipesh02 for contributing to the implementation of the metadata for chunk creation time, the generalized hypertable API, and show_chunks/drop_chunks using chunk creation time
* @lkshminarayanan for all his work on PG16 support
## General principles for statements in update/downgrade scripts

- The `search_path` for these scripts will be locked down to `pg_catalog, pg_temp`. Locking down `search_path` happens in `pre-update.sql`. Therefore all object references need to be fully qualified unless they reference objects from `pg_catalog`. Use `@extschema@` to refer to the target schema of the installation (resolves to `public` by default). See the sketch after this list.
- Creating objects must not use `IF NOT EXISTS`, as this would introduce privilege escalation vulnerabilities.
- All functions should have an explicit `search_path`. Setting an explicit `search_path` will prevent SQL function inlining for functions and transaction control for procedures, so for some functions/procedures it is acceptable to not have an explicit `search_path`. Special care needs to be taken with those functions/procedures by either setting `search_path` in the function body or having only fully qualified object references, including operators.
- When generating the install scripts, `CREATE OR REPLACE` will be changed to `CREATE` to prevent users from precreating extension objects. Since we need `CREATE OR REPLACE` for update scripts, and we don't want to maintain two versions of the SQL files containing the function definitions, we use `CREATE OR REPLACE` in those.
- Any object added in a new version needs to have an equivalent `CREATE` statement in the update script, without `OR REPLACE`, to prevent precreation of the object.
- The creation of new metadata tables needs to be part of modfiles, similar to `ALTER`s of such tables. Otherwise, later modfiles cannot rely on those tables being present.
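As a minimal sketch of these principles, here is what a function definition in the update scripts might look like. The function name and body are illustrative, not taken from the actual scripts:

```sql
-- Hypothetical example following the principles above. CREATE OR REPLACE is
-- used because this is shared with update scripts; install-script generation
-- rewrites it to CREATE. search_path is pinned, so every reference, including
-- the + operator, is fully qualified.
CREATE OR REPLACE FUNCTION @extschema@.example_add_one(val pg_catalog.int4)
    RETURNS pg_catalog.int4
    LANGUAGE SQL
    SET search_path TO pg_catalog, pg_temp
AS $$
    SELECT val OPERATOR(pg_catalog.+) 1;
$$;
```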
## Extension updates
This directory contains "modfiles" (SQL scripts) with modifications that are applied when updating from one version of the extension to another.
The actual update scripts are compiled from modfiles by concatenating them with the current source code (which should come at the end of the resulting update script). Update scripts can "jump" several versions by using multiple modfiles in order. There are two types of modfiles:
- Transition modfiles named `<from>--<to>.sql`, where `from` and `to` indicate the (adjacent) versions being transitioned between. Transition modfiles are concatenated to form the lineage from an origin version to any later version.
- Origin modfiles named `<version>.sql`, which are included only in update scripts that originate at the particular version given in the name. So, for instance, `0.7.0.sql` is only included in the script moving from `0.7.0` to the current version, but not in, e.g., the update script for `0.4.0` to the current version. These files typically contain fixes for bugs that are specific to the origin version but are no longer present in the transition modfiles.
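For illustration, the update script originating at `0.7.0` might be assembled from modfiles like this. The file names only sketch the scheme; the real lineage is defined by the build files:

```sql
-- Illustrative assembly order of an update script originating at 0.7.0:
--   0.7.0.sql             -- origin modfile: fixes specific to 0.7.0
--   0.7.0--0.8.0.sql      -- transition modfiles, concatenated in order
--   0.8.0--0.9.0.sql
--   ...
--   <current source code> -- the function definitions etc. come last
```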
## Notes on post_update.sql

We use a special config var (`timescaledb.update_script_stage`) to signal that dependencies have been set up and that timescaledb-specific queries can now be enabled. This is useful if we want to, for example, modify objects that need timescaledb-specific syntax as part of the extension update.

The scripts in `post_update.sql` are executed as part of the `ALTER EXTENSION` statement.
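A minimal sketch of the staging around the post-update statements, assuming the config var takes a stage name and is cleared afterwards (the exact values are an assumption, not taken from the build files):

```sql
-- Assumed staging pattern in a generated update script:
SET timescaledb.update_script_stage = 'post';
-- timescaledb-specific statements can run here, for example DDL that
-- requires the extension's own syntax (illustrative placeholder):
-- ALTER MATERIALIZED VIEW my_cagg SET (timescaledb.materialized_only = true);
SET timescaledb.update_script_stage = '';
```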
Note that modfiles that contain no changes need not exist as a file. Transition modfiles must, however, be listed in the `CMakeLists.txt` file in the parent directory for an update script to be built for that version.
## Extension downgrades
You can enable the generation of a downgrade file by setting `GENERATE_DOWNGRADE_SCRIPT` to `ON`, for example:

```
./bootstrap -DGENERATE_DOWNGRADE_SCRIPT=ON
```
To support downgrades to previous versions of the extension, it is necessary to run CMake from a Git repository, since the generation of a downgrade script requires access to the previous version's files, which are used to generate the update script. In addition, we only generate a downgrade script to the immediately preceding version, not to any earlier versions.
The source and target versions can be found in the `version.config` file in the root of the source tree, where `version` is the source version and `downgrade_to_version` is the target version. Note that we have a separate field for the downgrade.
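For example, a `version.config` might contain entries like these (the version numbers are illustrative):

```
version = 2.13.0
downgrade_to_version = 2.12.2
```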
A downgrade file consists of:

- A prolog that is retrieved from the target version.
- A version-specific piece of code that exists on the source version.
- An epilog that is retrieved from the target version.

The prolog consists of the files mentioned in the `PRE_UPDATE_FILES` variable in the target version of `cmake/ScriptFiles.cmake`.

The version-specific code is found in the source version of the file `sql/updates/reverse-dev.sql`.

The epilog consists of the files in the variables `SOURCE_FILES`, `SET_POST_UPDATE_STAGE`, `POST_UPDATE_FILES`, and `UNSET_UPDATE_STAGE`, in that order.
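Put together, a generated downgrade script is assembled roughly in this order (a sketch based on the description above):

```sql
-- Illustrative assembly order of a downgrade script:
--   <PRE_UPDATE_FILES of the target version>      -- prolog
--   sql/updates/reverse-dev.sql of the source     -- version-specific code
--   <SOURCE_FILES of the target version>          -- epilog, in this order:
--   <SET_POST_UPDATE_STAGE>
--   <POST_UPDATE_FILES>
--   <UNSET_UPDATE_STAGE>
```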
For downgrades to work correctly, some rules need to be followed:

- If you add new objects in `sql/updates/latest-dev.sql`, you need to remove them in the version-specific downgrade file. The `sql/updates/pre-update.sql` in the target version does not know about objects created in the source version, so they need to be dropped explicitly (see the sketch after this list).
- Since `sql/updates/pre-update.sql` can be executed on a later version of the extension, it might be that some objects have been removed and do not exist. Hence `DROP` calls need to use `IF EXISTS`.
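A minimal sketch of such a version-specific entry in `reverse-dev.sql`, assuming a hypothetical function was added in `latest-dev.sql`:

```sql
-- Drop an object that exists only in the source version. IF EXISTS keeps the
-- statement safe if the object has already been removed.
DROP FUNCTION IF EXISTS @extschema@.example_add_one(pg_catalog.int4);
```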
Note that, in contrast to update scripts, downgrade scripts are not built by composing several downgrade scripts into a more extensive downgrade script. Downgrade scripts are intended to be used only in special cases and are not intended to be used to move up and down between versions at will, which is why we only generate a downgrade script to the immediately preceding version.
## When releasing a new version

When releasing a new version, please rename the file `reverse-dev.sql` to `<version>--<downgrade_to_version>.sql` and add that name to the `REV_FILES` variable in `sql/CMakeLists.txt`. This will allow generation of downgrade scripts for any version in that list, but currently only the script to the immediately preceding version is built.