Fabrízio de Royes Mello defe4ef581 Release 2.15.0
This release contains performance improvements and bug fixes since
the 2.14.2 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:
* Support `time_bucket` with `origin` and/or `offset` on Continuous Aggregates
* Compression improvements:
  - Improve expression pushdown
  - Add minmax sparse indexes when compressing columns with btree indexes
  - Make compression use the default functions
  - Vectorize filters in WHERE clause that contain text equality operators and LIKE expressions
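For instance, the new `origin`/`offset` support allows bucket boundaries to be anchored in a Continuous Aggregate definition. A hedged sketch (the table and column names below are hypothetical):

```sql
-- Anchor daily buckets to a custom origin inside a Continuous Aggregate.
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', observed_at,
                   origin => '2024-01-01'::timestamptz) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;
```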

**Deprecation warning**
* Starting with this release, it is no longer possible to create Continuous Aggregates using `time_bucket_ng`, and the function will be completely removed in upcoming releases.
* We recommend that users [migrate their old Continuous Aggregate format to the new one](https://docs.timescale.com/use-timescale/latest/continuous-aggregates/migrate/), because support for the old format will be completely removed in upcoming releases.
* This is the last release supporting PostgreSQL 13.

**For on-premise users and this release only**, you will need to run [this SQL script](https://github.com/timescale/timescaledb-extras/blob/master/utils/2.15.X-fix_hypertable_foreign_keys.sql) after running `ALTER EXTENSION`. More details can be found in the pull request [#6797](https://github.com/timescale/timescaledb/pull/6797).
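As a sketch of the upgrade sequence in psql (the script path is illustrative and depends on where you downloaded timescaledb-extras):

```sql
-- Run as the first command in a fresh session:
ALTER EXTENSION timescaledb UPDATE TO '2.15.0';
-- Then run the fix-up script from timescaledb-extras:
\i 2.15.X-fix_hypertable_foreign_keys.sql
```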

**Features**
* #6382 Support for time_bucket with origin and offset in CAggs
* #6696 Improve defaults for compression segment_by and order_by
* #6705 Add sparse minmax indexes for compressed columns that have uncompressed btree indexes
* #6754 Allow DROP CONSTRAINT on compressed hypertables
* #6767 Add metadata table `_timescaledb_internal.bgw_job_stat_history` for tracking job execution history
* #6798 Prevent usage of deprecated time_bucket_ng in CAgg definition
* #6810 Add telemetry for access methods
* #6811 Remove no longer relevant timescaledb.allow_install_without_preload GUC
* #6837 Add migration path for CAggs using time_bucket_ng
* #6865 Update the watermark when truncating a CAgg

**Bugfixes**
* #6617 Fix error in show_chunks
* #6621 Remove metadata when dropping chunks
* #6677 Fix snapshot usage in CAgg invalidation scanner
* #6698 Define meaning of 0 retries for jobs as no retries
* #6717 Fix handling of compressed tables with primary or unique index in COPY path
* #6726 Fix constify cagg_watermark using window function when querying a CAgg
* #6729 Fix NULL start value handling in CAgg refresh
* #6732 Fix CAgg migration with custom timezone / date format settings
* #6752 Remove custom autovacuum setting from compressed chunks
* #6770 Fix plantime chunk exclusion for OSM chunk
* #6789 Fix deletes with subqueries and compression
* #6796 Fix a crash involving a view on a hypertable
* #6797 Fix foreign key constraint handling on compressed hypertables
* #6816 Fix handling of chunks with no constraints
* #6820 Fix a crash when the ts_hypertable_insert_blocker was called directly
* #6849 Use non-orderby compressed metadata in compressed DML
* #6867 Clean up compression settings when deleting compressed cagg
* #6869 Fix compressed DML with constraints of form value OP column
* #6870 Fix bool expression pushdown for queries on compressed chunks

**Thanks**
* @brasic for reporting a crash when the ts_hypertable_insert_blocker was called directly
* @bvanelli for reporting an issue with the jobs retry count
* @djzurawsk for reporting an error when dropping chunks
* @Dzuzepppe for reporting an issue with DELETEs using subqueries on compressed chunks working incorrectly
* @hongquan for reporting a 'timestamp out of range' error during CAgg migrations
* @kevcenteno for reporting an issue with the show_chunks API showing incorrect output when 'created_before/created_after' was used with time-partitioned columns
* @mahipv for starting work on the job history PR
* @rovo89 for reporting that constify cagg_watermark was not working when using a window function to query a CAgg
2024-05-06 12:40:40 -03:00

General principles for statements in update/downgrade scripts

  1. The search_path for these scripts will be locked down to pg_catalog, pg_temp. Locking down search_path happens in pre-update.sql. Therefore all object references need to be fully qualified unless they reference objects from pg_catalog. Use @extschema@ to refer to the target schema of the installation (resolves to public by default).
  2. Creating objects must not use IF NOT EXISTS as this will introduce privilege escalation vulnerabilities.
  3. All functions should have an explicit search_path. Setting an explicit search_path prevents SQL function inlining for functions and transaction control for procedures, so for some functions/procedures it is acceptable to not set it. Special care needs to be taken with those functions/procedures by either setting search_path in the function body or using only fully qualified object references, including operators.
  4. When generating the install scripts, CREATE OR REPLACE will be changed to CREATE to prevent users from precreating extension objects. Since we need CREATE OR REPLACE for update scripts and we don't want to maintain two versions of the SQL files containing the function definitions, we use CREATE OR REPLACE in those.
  5. Any object added in a new version needs to have an equivalent CREATE statement in the update script without OR REPLACE to prevent precreation of the object.
  6. The creation of new metadata tables needs to be part of modfiles, similar to ALTERs of such tables. Otherwise, later modfiles cannot rely on those tables being present.
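Principles 1 and 3 can be illustrated with a sketch of a function definition as it might appear in an update script (the function name and body are hypothetical):

```sql
-- Hypothetical helper added by an update script.
-- @extschema@ resolves to the extension's target schema at install time.
CREATE OR REPLACE FUNCTION @extschema@.example_chunk_count(hypertable_id integer)
RETURNS bigint
LANGUAGE sql
STABLE
-- Principle 3: pin search_path so unqualified references cannot be hijacked.
SET search_path TO pg_catalog, pg_temp
AS $$
    -- Principle 1: fully qualify all references outside pg_catalog.
    SELECT pg_catalog.count(*)
    FROM _timescaledb_catalog.chunk c
    WHERE c.hypertable_id = example_chunk_count.hypertable_id;
$$;
```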

Extension updates

This directory contains "modfiles" (SQL scripts) with modifications that are applied when updating from one version of the extension to another.

The actual update scripts are compiled from modfiles by concatenating them with the current source code (which should come at the end of the resulting update script). Update scripts can "jump" several versions by using multiple modfiles in order. There are two types of modfiles:

  • Transition modfiles named <from>--<to>.sql, where <from> and <to> indicate the (adjacent) versions being transitioned between. Transition modfiles are concatenated to form the lineage from an origin version to any later version.
  • Origin modfiles named <version>.sql, which are included only in update scripts that originate at the particular version given in the name. So, for instance, 0.7.0.sql is only included in the script moving from 0.7.0 to the current version, but not in, e.g., the update script for 0.4.0 to the current version. These files typically contain fixes for bugs that are specific to the origin version but are no longer present in the transition modfiles.

Notes on post_update.sql

We use a special config var (timescaledb.update_script_stage) to signal that dependencies have been set up and that timescaledb-specific queries can now be enabled. This is useful if we want to, for example, modify objects that need timescaledb-specific syntax as part of the extension update. The scripts in post_update.sql are executed as part of the ALTER EXTENSION statement.
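As a sketch, the generated update script brackets the timescaledb-specific part roughly like this (the literal value assigned to the GUC is an assumption; see SET_POST_UPDATE_STAGE in cmake/ScriptFiles.cmake for the actual value):

```sql
-- Mark that dependencies are set up and timescaledb syntax may be used.
SET timescaledb.update_script_stage = 'post';  -- value is an assumption
-- ... statements from post_update.sql that rely on timescaledb syntax ...
RESET timescaledb.update_script_stage;
```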

Note that modfiles that contain no changes need not exist as a file. Transition modfiles must, however, be listed in the CMakeLists.txt file in the parent directory for an update script to be built for that version.

Extension downgrades

You can enable the generation of a downgrade file by setting GENERATE_DOWNGRADE_SCRIPT to ON, for example:

./bootstrap -DGENERATE_DOWNGRADE_SCRIPT=ON

To support downgrades to previous versions of the extension, it is necessary to execute CMake from a Git repository, since the generation of a downgrade script requires access to the previous version files that are used to generate an update script. In addition, we only generate a downgrade script to the immediately preceding version and not to any earlier versions.

The source and target versions can be found in the file version.config in the root of the source tree, where version is the source version and downgrade_to_version is the target version. Note that we have a separate field for the downgrade.
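For illustration (the version numbers are hypothetical), version.config might contain:

```
version = 2.15.0
downgrade_to_version = 2.14.2
```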

A downgrade file consists of:

  • A prolog that is retrieved from the target version.
  • A version-specific piece of code that exists on the source version.
  • An epilog that is retrieved from the target version.

The prolog consists of the files mentioned in the PRE_UPDATE_FILES variable in the target version of cmake/ScriptFiles.cmake.

The version-specific code is found in the source version of the file sql/updates/reverse-dev.sql.

The epilog consists of the files in the variables SOURCE_FILES, SET_POST_UPDATE_STAGE, POST_UPDATE_FILES, and UNSET_UPDATE_STAGE, in that order.

For downgrades to work correctly, some rules need to be followed:

  1. If you add new objects in sql/updates/latest-dev.sql, you need to remove them in the version-specific downgrade file. The sql/updates/pre-update.sql in the target version does not know about objects created in the source version, so they need to be dropped explicitly.
  2. Since sql/updates/pre-update.sql can be executed on a later version of the extension, it might be that some objects have been removed and do not exist. Hence DROP calls need to use IF EXISTS.
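A sketch of a version-specific downgrade file following these rules (all object names are made up for illustration):

```sql
-- sql/updates/reverse-dev.sql (sketch; object names are hypothetical)
-- Rule 1: remove objects that sql/updates/latest-dev.sql added.
-- Rule 2: IF EXISTS guards against the object already being gone.
DROP FUNCTION IF EXISTS @extschema@.example_new_function(integer);
DROP TABLE IF EXISTS _timescaledb_catalog.example_new_table;
```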

Note that, in contrast to update scripts, downgrade scripts are not built by composing several downgrade scripts into a more extensive downgrade script. The downgrade scripts are intended to be used only in special cases and are not intended to be used to move up and down between versions at will, which is why we only generate a downgrade script to the immediately preceding version.

When releasing a new version

When releasing a new version, please rename the file reverse-dev.sql to <version>--<downgrade_to_version>.sql and add that name to the REV_FILES variable in sql/CMakeLists.txt. This allows downgrade scripts to be generated for any version in that list, although currently none are added.