This release contains performance improvements and bug fixes since the 2.14.2 release. We recommend that you upgrade at the next available opportunity.

In addition, it includes these noteworthy features:

* Support `time_bucket` with `origin` and/or `offset` on Continuous Aggregates
* Compression improvements:
  - Improve expression pushdown
  - Add minmax sparse indexes when compressing columns with btree indexes
  - Make compression use the default functions
  - Vectorize filters in the WHERE clause that contain text equality operators and LIKE expressions

**Deprecation warning**

* Starting with this release, it is no longer possible to create a Continuous Aggregate using `time_bucket_ng`, and it will be completely removed in upcoming releases.
* We recommend that users [migrate their old Continuous Aggregate format to the new one](https://docs.timescale.com/use-timescale/latest/continuous-aggregates/migrate/), because support for the old format will be completely removed in upcoming releases, which would prevent them from migrating.
* This is the last release supporting PostgreSQL 13.

**For on-premise users and this release only**, you will need to run [this SQL script](https://github.com/timescale/timescaledb-extras/blob/master/utils/2.15.X-fix_hypertable_foreign_keys.sql) after running `ALTER EXTENSION`. More details can be found in the pull request [#6797](https://github.com/timescale/timescaledb/pull/6797).
**Features**
* #6382 Support `time_bucket` with origin and offset in CAggs
* #6696 Improve defaults for compression `segment_by` and `order_by`
* #6705 Add sparse minmax indexes for compressed columns that have uncompressed btree indexes
* #6754 Allow DROP CONSTRAINT on compressed hypertables
* #6767 Add metadata table `_timescaledb_internal.bgw_job_stat_history` for tracking job execution history
* #6798 Prevent usage of deprecated time_bucket_ng in CAgg definitions
* #6810 Add telemetry for access methods
* #6811 Remove no longer relevant timescaledb.allow_install_without_preload GUC
* #6837 Add migration path for CAggs using time_bucket_ng
* #6865 Update the watermark when truncating a CAgg

**Bugfixes**
* #6617 Fix error in show_chunks
* #6621 Remove metadata when dropping chunks
* #6677 Fix snapshot usage in CAgg invalidation scanner
* #6698 Define meaning of 0 retries for jobs as no retries
* #6717 Fix handling of compressed tables with primary or unique index in COPY path
* #6726 Fix constify cagg_watermark using window function when querying a CAgg
* #6729 Fix NULL start value handling in CAgg refresh
* #6732 Fix CAgg migration with custom timezone / date format settings
* #6752 Remove custom autovacuum setting from compressed chunks
* #6770 Fix plantime chunk exclusion for OSM chunk
* #6789 Fix deletes with subqueries and compression
* #6796 Fix a crash involving a view on a hypertable
* #6797 Fix foreign key constraint handling on compressed hypertables
* #6816 Fix handling of chunks with no constraints
* #6820 Fix a crash when ts_hypertable_insert_blocker was called directly
* #6849 Use non-orderby compressed metadata in compressed DML
* #6867 Clean up compression settings when deleting a compressed CAgg
* #6869 Fix compressed DML with constraints of form value OP column
* #6870 Fix bool expression pushdown for queries on compressed chunks

**Thanks**
* @brasic for reporting a crash when ts_hypertable_insert_blocker was called directly
* @bvanelli for reporting an issue with the jobs retry count
* @djzurawsk for reporting an error when dropping chunks
* @Dzuzepppe for reporting an issue with DELETEs using a subquery on compressed chunks working incorrectly
* @hongquan for reporting a 'timestamp out of range' error during CAgg migrations
* @kevcenteno for reporting an issue with the show_chunks API showing incorrect output when 'created_before/created_after' was used with time-partitioned columns
* @mahipv for starting work on the job history PR
* @rovo89 for reporting constify cagg_watermark not working when using a window function to query a CAgg
General principles for statements in update/downgrade scripts
- The `search_path` for these scripts will be locked down to `pg_catalog, pg_temp`. Locking down `search_path` happens in `pre-update.sql`. Therefore, all object references need to be fully qualified unless they reference objects from `pg_catalog`. Use `@extschema@` to refer to the target schema of the installation (resolves to `public` by default).
- Creating objects must not use `IF NOT EXISTS`, as this will introduce privilege escalation vulnerabilities.
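As an illustration, a statement written under these rules might look like the following sketch (the function and table names are hypothetical, not actual TimescaleDB objects):

```sql
-- All object references are fully qualified, because search_path is locked
-- down to pg_catalog, pg_temp while the script runs. @extschema@ resolves to
-- the schema the extension is installed in.
CREATE FUNCTION @extschema@.example_abs(val integer)
RETURNS integer
LANGUAGE sql
AS $$ SELECT pg_catalog.abs(val) $$;

-- Plain CREATE (no IF NOT EXISTS): if a user precreated this object, the
-- update fails instead of silently adopting the precreated table.
CREATE TABLE @extschema@.example_settings (key text PRIMARY KEY, value text);
```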
- All functions should have an explicit `search_path`. Setting an explicit `search_path` will prevent SQL function inlining for functions and transaction control for procedures, so for some functions/procedures it is acceptable to not have an explicit `search_path`. Special care needs to be taken with those functions/procedures, by either setting `search_path` in the function body or having only fully qualified object references, including operators.
- When generating the install scripts, `CREATE OR REPLACE` will be changed to `CREATE` to prevent users from precreating extension objects. Since we need `CREATE OR REPLACE` for update scripts and we don't want to maintain two versions of the SQL files containing the function definitions, we use `CREATE OR REPLACE` in those.
- Any object added in a new version needs to have an equivalent `CREATE` statement in the update script, without `OR REPLACE`, to prevent precreation of the object.
- The creation of new metadata tables needs to be part of modfiles, similar to `ALTER`s of such tables. Otherwise, later modfiles cannot rely on those tables being present.
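A minimal sketch of a function definition with an explicit `search_path` (the function name is hypothetical):

```sql
-- SET search_path pins name resolution for the function body; note that this
-- disables SQL function inlining, which is why some functions instead qualify
-- every reference (including operators) in the body.
CREATE OR REPLACE FUNCTION @extschema@.example_add_one(val integer)
RETURNS integer
LANGUAGE sql
SET search_path TO pg_catalog, pg_temp
AS $$ SELECT val + 1 $$;
```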
Extension updates
This directory contains "modfiles" (SQL scripts) with modifications that are applied when updating from one version of the extension to another.
The actual update scripts are compiled from modfiles by concatenating them with the current source code (which should come at the end of the resulting update script). Update scripts can "jump" several versions by using multiple modfiles in order. There are two types of modfiles:
- Transition modfiles named `<from>--<to>.sql`, where `from` and `to` indicate the (adjacent) versions the script transitions between. Transition modfiles are concatenated to form the lineage from an origin version to any later version.
- Origin modfiles named `<version>.sql`, which are included only in update scripts that originate at the particular version given in the name. So, for instance, `0.7.0.sql` is only included in the script moving from `0.7.0` to the current version, but not in, e.g., the update script for `0.4.0` to the current version. These files typically contain fixes for bugs that are specific to the origin version but are no longer present in the transition modfiles.
Notes on post_update.sql
We use a special config var (`timescaledb.update_script_stage`) to signal that dependencies have been set up and that timescaledb-specific queries can now be enabled. This is useful if we want to, for example, modify objects that need timescaledb-specific syntax as part of the extension update. The scripts in `post_update.sql` are executed as part of the `ALTER EXTENSION` statement.
Note that modfiles that contain no changes need not exist as a file. Transition modfiles must, however, be listed in the `CMakeLists.txt` file in the parent directory for an update script to be built for that version.
Extension downgrades
You can enable the generation of a downgrade file by setting `GENERATE_DOWNGRADE_SCRIPT` to `ON`, for example:

```
./bootstrap -DGENERATE_DOWNGRADE_SCRIPT=ON
```
To support downgrades to previous versions of the extension, it is necessary to execute CMake from a Git repository, since the generation of a downgrade script requires access to the previous version's files that are used to generate an update script. In addition, we only generate a downgrade script to the immediately preceding version and not to any earlier versions.
The source and target versions can be found in the `version.config` file in the root of the source tree, where `version` is the source version and `downgrade_to_version` is the target version. Note that we have a separate field for the downgrade.
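For illustration, the two fields in `version.config` might look like this (the version numbers are made up, not taken from any actual release):

```
version = 2.15.0-dev
downgrade_to_version = 2.14.2
```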
A downgrade file consists of:
- A prolog that is retrieved from the target version.
- A version-specific piece of code that exists on the source version.
- An epilog that is retrieved from the target version.
The prolog consists of the files mentioned in the `PRE_UPDATE_FILES` variable in the target version of `cmake/ScriptFiles.cmake`.

The version-specific code is found in the source version of the file `sql/updates/reverse-dev.sql`.

The epilog consists of the files in the variables `SOURCE_FILES`, `SET_POST_UPDATE_STAGE`, `POST_UPDATE_FILES`, and `UNSET_UPDATE_STAGE`, in that order.
For downgrades to work correctly, some rules need to be followed:
- If you add new objects in `sql/updates/latest-dev.sql`, you need to remove them in the version-specific downgrade file. The `sql/updates/pre-update.sql` of the target version does not know about objects created in the source version, so they need to be dropped explicitly.
- Since `sql/updates/pre-update.sql` can be executed on a later version of the extension, it might be that some objects have been removed and do not exist. Hence `DROP` calls need to use `IF EXISTS`.
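A version-specific downgrade file following these rules might look like this sketch (the dropped objects are hypothetical):

```sql
-- sql/updates/reverse-dev.sql: remove objects added in latest-dev.sql.
-- IF EXISTS is required because the objects may already be absent when this
-- runs against a later version of the extension.
DROP FUNCTION IF EXISTS @extschema@.example_helper(integer);
DROP TABLE IF EXISTS @extschema@.example_settings;
```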
Note that, in contrast to update scripts, downgrade scripts are not built by composing several downgrade scripts into a more extensive downgrade script. Downgrade scripts are intended to be used only in special cases and are not intended to be used to move up and down between versions at will, which is why we only generate a downgrade script to the immediately preceding version.
When releasing a new version
When releasing a new version, please rename the file `reverse-dev.sql` to `<version>--<downgrade_to_version>.sql` and add that name to the `REV_FILES` variable in `sql/CMakeLists.txt`. This will allow generation of downgrade scripts for any version in that list, but it is currently not added.