This release contains performance improvements for compressed hypertables and continuous aggregates and bug fixes since the 2.11.2 release. We recommend that you upgrade at the next available opportunity.

This release moves all internal functions from the _timescaledb_internal schema into the _timescaledb_functions schema. This separates code from internal data objects and improves security by allowing more restrictive permissions for the code schema. If you are calling any of those internal functions, you should adjust your code as soon as possible. This version also includes a compatibility layer that allows calling them in the old location, but that layer will be removed in 2.14.0.

**PostgreSQL 12 support removal announcement**

Following the deprecation announcement for PostgreSQL 12 in TimescaleDB 2.10, PostgreSQL 12 is not supported starting with TimescaleDB 2.12. The currently supported PostgreSQL major versions are 13, 14 and 15. PostgreSQL 16 support will be added with a following TimescaleDB release.

**Features**
* #5137 Insert into index during chunk compression
* #5150 MERGE support on hypertables
* #5515 Make hypertables support replica identity
* #5586 Index scan support during UPDATE/DELETE on compressed hypertables
* #5596 Support for partial aggregations at chunk level
* #5599 Enable ChunkAppend for partially compressed chunks
* #5655 Improve the number of parallel workers for decompression
* #5758 Enable altering job schedule type through `alter_job`
* #5805 Make logrepl markers for (partial) decompressions
* #5809 Relax invalidation threshold table-level lock to row-level when refreshing a Continuous Aggregate
* #5839 Support CAgg names in chunk_detailed_size
* #5852 Make set_chunk_time_interval CAggs aware
* #5868 Allow ALTER TABLE ... REPLICA IDENTITY (FULL|INDEX) on materialized hypertables (continuous aggregates)
* #5875 Add job exit status and runtime to log
* #5909 CREATE INDEX ONLY ON hypertable creates index on chunks

**Bugfixes**
* #5860 Fix interval calculation for hierarchical CAggs
* #5894 Check unique indexes when enabling compression
* #5951 _timescaledb_internal.create_compressed_chunk doesn't account for existing uncompressed rows
* #5988 Move functions to _timescaledb_functions schema
* #5788 Chunk_create must add an existing table or fail
* #5872 Fix duplicates on partially compressed chunk reads
* #5918 Fix crash in COPY from program returning error
* #5990 Place data in first/last function in correct mctx
* #5991 Call eq_func correctly in time_bucket_gapfill
* #6015 Correct row count in EXPLAIN ANALYZE INSERT .. ON CONFLICT output
* #6035 Fix server crash on UPDATE of compressed chunk
* #6044 Fix server crash when using duplicate segmentby column
* #6045 Fix segfault in set_integer_now_func
* #6053 Fix approximate_row_count for CAggs
* #6081 Improve compressed DML datatype handling
* #6084 Propagate parameter changes to decompress child nodes

**Thanks**
* @ajcanterbury for reporting a problem with lateral joins on compressed chunks
* @alexanderlaw for reporting multiple server crashes
* @lukaskirner for reporting a bug with monthly continuous aggregates
* @mrksngl for reporting a bug with unusual user names
* @willsbit for reporting a crash in time_bucket_gapfill
## General principles for statements in update/downgrade scripts

- The `search_path` for these scripts will be locked down to `pg_catalog, pg_temp`. Locking down `search_path` happens in `pre-update.sql`. Therefore all object references need to be fully qualified unless they reference objects from `pg_catalog`. Use `@extschema@` to refer to the target schema of the installation (resolves to `public` by default).
- Creating objects must not use `IF NOT EXISTS`, as this would introduce privilege escalation vulnerabilities.
- All functions should have an explicit `search_path`. Setting an explicit `search_path` will prevent SQL function inlining for functions and transaction control for procedures, so for some functions/procedures it is acceptable to not have an explicit `search_path`. Special care needs to be taken with those functions/procedures, either by setting `search_path` in the function body or by having only fully qualified object references, including operators.
- When generating the install scripts, `CREATE OR REPLACE` will be changed to `CREATE` to prevent users from precreating extension objects. Since we need `CREATE OR REPLACE` for update scripts and we don't want to maintain two versions of the SQL files containing the function definitions, we use `CREATE OR REPLACE` in those.
- Any object added in a new version needs to have an equivalent `CREATE` statement in the update script without `OR REPLACE` to prevent precreation of the object. (See the sketch after this list.)
- The creation of new metadata tables needs to be part of modfiles, similar to `ALTER`s of such tables. Otherwise, later modfiles cannot rely on those tables being present.
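A minimal sketch combining these rules (the function name and body are hypothetical, not part of the extension):

```sql
-- Hypothetical update-script function: plain CREATE (no OR REPLACE, no
-- IF NOT EXISTS), fully qualified references, and an explicit search_path.
CREATE FUNCTION @extschema@.example_larger(a integer, b integer)
RETURNS integer
LANGUAGE sql
SET search_path TO pg_catalog, pg_temp
AS $$
    SELECT pg_catalog.int4larger(a, b);
$$;
```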
## Extension updates
This directory contains "modfiles" (SQL scripts) with modifications that are applied when updating from one version of the extension to another.
The actual update scripts are compiled from modfiles by concatenating them with the current source code (which should come at the end of the resulting update script). Update scripts can "jump" several versions by using multiple modfiles in order. There are two types of modfiles:
- Transition modfiles named `<from>--<to>.sql`, where `from` and `to` indicate the (adjacent) versions being transitioned between. Transition modfiles are concatenated to form the lineage from an origin version to any later version. (A hypothetical transition modfile is sketched after this list.)
- Origin modfiles named `<version>.sql`, which are included only in update scripts that originate at the particular version given in the name. So, for instance, `0.7.0.sql` is only included in the script moving from `0.7.0` to the current version, but not in, e.g., the update script for `0.4.0` to the current version. These files typically contain fixes for bugs that are specific to the origin version and are no longer present in the transition modfiles.
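For illustration, a transition modfile might contain a single catalog change like the following (the table and column are hypothetical):

```sql
-- Hypothetical contents of a transition modfile such as 1.2.0--1.3.0.sql:
-- a metadata-table change, fully qualified and without IF NOT EXISTS.
ALTER TABLE _timescaledb_catalog.example_metadata
    ADD COLUMN note text;
```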
## Notes on post_update.sql

We use a special config var (`timescaledb.update_script_stage`) to signal that dependencies have been set up and that timescaledb-specific queries can now be enabled. This is useful if we want to, for example, modify objects that need timescaledb-specific syntax as part of the extension update. The scripts in post_update.sql are executed as part of the `ALTER EXTENSION` statement. (A sketch of how the stage variable brackets such statements is shown below.)
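A minimal sketch of this pattern (the stage value `'post'` is an assumption, not taken from the source):

```sql
-- Signal that dependencies are set up; timescaledb-specific
-- statements may now run (the value 'post' is an assumption).
SET timescaledb.update_script_stage TO 'post';

-- ... statements that need timescaledb-specific syntax go here ...

RESET timescaledb.update_script_stage;
```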
Note that modfiles that contain no changes need not exist as a file. Transition modfiles must, however, be listed in the `CMakeLists.txt` file in the parent directory for an update script to be built for that version.
## Extension downgrades

You can enable the generation of a downgrade file by setting `GENERATE_DOWNGRADE_SCRIPT` to `ON`, for example:

```
./bootstrap -DGENERATE_DOWNGRADE_SCRIPT=ON
```
To support downgrades to previous versions of the extension, CMake must be executed from a Git repository, since generating a downgrade script requires access to the previous version's files, which are used to generate an update script. In addition, we only generate a downgrade script to the immediately preceding version, not to any earlier versions.
The source and target versions can be found in the `version.config` file in the root of the source tree, where `version` is the source version and `downgrade_to_version` is the target version. Note that we have a separate field for the downgrade. (An example file is shown below.)
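For example, a `version.config` might look like this (the version numbers are placeholders):

```
version = 2.12.0
downgrade_to_version = 2.11.2
```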
A downgrade file consists of:
- A prolog that is retrieved from the target version.
- A version-specific piece of code that exists on the source version.
- An epilog that is retrieved from the target version.
The prolog consists of the files mentioned in the `PRE_UPDATE_FILES` variable in the target version of `cmake/ScriptFiles.cmake`.

The version-specific code is found in the source version of the file `sql/updates/reverse-dev.sql`.

The epilog consists of the files in the variables `SOURCE_FILES`, `SET_POST_UPDATE_STAGE`, `POST_UPDATE_FILES`, and `UNSET_UPDATE_STAGE`, in that order.
For downgrades to work correctly, some rules need to be followed:

- If you add new objects in `sql/updates/latest-dev.sql`, you need to remove them in the version-specific downgrade file. The `sql/updates/pre-update.sql` in the target version does not know about objects created in the source version, so they need to be dropped explicitly. (A sketch is shown after this list.)
- Since `sql/updates/pre-update.sql` can be executed on a later version of the extension, it might be that some objects have been removed and do not exist. Hence `DROP` calls need to use `IF EXISTS`.
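A minimal sketch of a version-specific downgrade entry (the function name and signature are hypothetical, matching the example earlier in this document):

```sql
-- Hypothetical reverse-dev.sql entry: drop an object that latest-dev.sql
-- added; IF EXISTS guards against the object already being absent.
DROP FUNCTION IF EXISTS @extschema@.example_larger(integer, integer);
```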
Note that, in contrast to update scripts, downgrade scripts are not built by composing several downgrade scripts into a more extensive one. Downgrade scripts are intended to be used only in special cases and are not intended for moving up and down between versions at will, which is why we only generate a downgrade script to the immediately preceding version.
## When releasing a new version

When releasing a new version, please rename the file `reverse-dev.sql` to `<version>--<downgrade_to_version>.sql` and add that name to the `REV_FILES` variable in `sql/CMakeLists.txt`. This will allow generation of downgrade scripts for any version in that list, but this is currently not done.