930 Commits

Alexander Kuzmenkov
98780f7d59
Release 2.17.2 -- main branch (#7421) 2024-11-07 12:30:15 +00:00
Sven Klemm
64c36719fb Remove obsolete job
policy_job_error_retention was removed in 2.15.0 but we did not
get rid of the job that called it back then. This patch removes
the defunct job definition calling that function.
2024-10-22 08:39:40 +02:00
Fabrízio de Royes Mello
3c707bf28a Release 2.17.1 on main
This release contains performance improvements and bug fixes since
the 2.17.0 release. We recommend that you upgrade at the next
available opportunity.

**Features**
* #7360 Add chunk skipping GUC

**Bugfixes**
* #7335 Change log level used in compression
* #7342 Fix collation for in-memory tuple filtering

**Thanks**
* @gmilamjr for reporting an issue with the log level of compression messages
* @hackbnw for reporting an issue with collation during tuple filtering
2024-10-21 15:16:05 -03:00
Mats Kindahl
e0a7a6f6e1 Hyperstore renamed to hypercore
This changes the names of all symbols, comments, files, and functions
to use "hypercore" rather than "hyperstore".
2024-10-16 13:13:34 +02:00
Mats Kindahl
406901d838 Rename files using "hyperstore" to use "hypercore"
Files and directories with "hyperstore" in their names are renamed
to use "hypercore".
2024-10-16 13:13:34 +02:00
Erik Nordström
f5eae6dc70 Support hyperstore in compression policy
Make sure that hyperstore can be used in a compression policy by
setting `compress_using => 'hyperstore'` in the policy configuration.
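A minimal sketch of such a policy configuration (the hypertable name and interval are hypothetical; the `compress_using` parameter comes from this commit):

```sql
-- Compress chunks older than a week using hyperstore (names are illustrative):
SELECT add_compression_policy('metrics',
                              compress_after => INTERVAL '7 days',
                              compress_using => 'hyperstore');
```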
2024-10-16 13:13:34 +02:00
Mats Kindahl
1025eb4def Add tests for hash index scans
Duplicate the existing index tests to use hash indexes and make sure
they work correctly. We rename the existing file so that we can have
tests for all the different kinds of indexes.

We also rename the existing `explain_anonymize` to
`explain_analyze_anonymize` and introduce a non-analyze version with
the original name.

We also modify the permissions of the `_timescaledb_debug` schema and
the `_timescaledb_debug.is_compressed_tid` function to make sure we can
use them correctly.
2024-10-16 13:13:34 +02:00
Erik Nordström
d1a2ea4961 Make compress_chunk() work with Hyperstore
The `compress_chunk()` function can now be used to create hyperstores
by passing the option `compress_using => 'hyperstore'` to the
function.

Using the `compress_chunk()` function is an alternative to using
`ALTER TABLE my_hyperstore SET ACCESS METHOD` that is compatible with
the existing way of compressing hypertable chunks. It will also make
it easier to support hyperstore compression via compression policies.

Additionally, implement "fast migration" to hyperstore when a table is
already compressed. In that case, simply update the PG catalog to say
that the table is using hyperstore as TAM without rewriting the
table. This fast migration works with both `...SET ACCESS METHOD`
and `compress_chunk()`.
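A hedged sketch of both paths (chunk name hypothetical):

```sql
-- Create a hyperstore via the compression API:
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk',
                      compress_using => 'hyperstore');

-- Equivalent alternative; on an already-compressed chunk this triggers
-- the catalog-only "fast migration" described above:
ALTER TABLE _timescaledb_internal._hyper_1_1_chunk SET ACCESS METHOD hyperstore;
```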
2024-10-16 13:13:34 +02:00
Erik Nordström
2216e9238d Use Hyperstore naming in more places
After adopting "Hyperstore" as the name for the TAM, some places in
the code still use the old "compression AM" naming scheme, so convert
those to use Hyperstore naming.
2024-10-16 13:13:34 +02:00
Erik Nordström
e5fd18728c Add VACUUM support in hyperstore
Implement vacuum by internally calling vacuum on both the compressed
and non-compressed relations.

Since hyperstore indexes are defined on the non-compressed relation,
vacuuming the compressed relation won't clean up compressed tuples
from those indexes. To handle this, a proxy index is defined on each
compressed relation in order to direct index vacuum calls to the
corresponding indexes on the hyperstore relation. The proxy index also
translates the encoded TIDs stored in the index to proper TIDs for the
compressed relation.
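A sketch of the user-facing behavior (table name hypothetical):

```sql
-- A plain VACUUM on a hyperstore relation internally vacuums both the
-- compressed and non-compressed relations:
VACUUM my_hyperstore;
```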
2024-10-16 13:13:34 +02:00
Mats Kindahl
ab9f072df7 Replace compressionam with hyperstore
Replace "compressionam" in all functions and symbols with "hyperstore".
2024-10-16 13:13:34 +02:00
Mats Kindahl
00999801e2 Add test for SELECT FOR UPDATE
Add a check that SELECT FOR UPDATE does not crash, as well as an
isolation test to make sure that it locks rows properly.

Also add a debug function to check whether a TID is for a compressed tuple.
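A sketch of both checks (table name hypothetical; the debug function name appears in the related commits in this series):

```sql
-- Should lock the matching rows without crashing:
SELECT * FROM metrics WHERE device_id = 1 FOR UPDATE;

-- Debug helper: does this TID point to a compressed tuple?
SELECT ctid, _timescaledb_debug.is_compressed_tid(ctid)
FROM metrics
LIMIT 5;
```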
2024-10-16 13:13:34 +02:00
Mats Kindahl
1373ec31f8 Rename compression TAM to hyperstore
The access method and associated tests are renamed to "hyperstore".
2024-10-16 13:13:34 +02:00
Erik Nordström
cb8c756a1d Add initial compression TAM
Implement the table-access method API around compression in order to
have, among other things, seamless index support on compressed data.

The current functionality is rudimentary, but common operations work,
including sequential scans.
2024-10-16 13:13:34 +02:00
Fabrízio de Royes Mello
8d34c86d22 Remove some cagg_migrate test flakiness
During some tests on #7104 I noticed some flakiness in the Continuous
Aggregates migration tests due to output ordering.

Fixed it by explicitly ordering policies by their IDs.
2024-10-14 03:53:22 -03:00
Pallavi Sontakke
5858892d54
Release 2.17.0
This release adds support for PostgreSQL 17, significantly improves the
performance of continuous aggregate refreshes,
and contains performance improvements for analytical queries and delete
operations over compressed hypertables.
We recommend that you upgrade at the next available opportunity.

**Highlighted features in TimescaleDB v2.17.0**

* Full PostgreSQL 17 support for all existing features. TimescaleDB
v2.17 is available for PostgreSQL 14, 15, 16, and 17.

* Significant performance improvements for continuous aggregate
policies: continuous aggregate refresh is now using
`merge` instead of deleting old materialized data and re-inserting.

This update can dramatically decrease the amount of data that must be
written to the continuous aggregate when there is a small number of
changes, reduce the `I/O` cost of refreshing a continuous aggregate, and
generate less Write-Ahead Log (`WAL`).
Overall, continuous aggregate policies will be more lightweight, use
fewer system resources, and complete faster.

* Increased performance for real-time analytical queries over compressed
hypertables:
we are excited to introduce additional Single Instruction, Multiple Data
(`SIMD`) vectorization optimizations to our
engine by supporting vectorized execution for queries that group by
the `segment_by` column(s) and
aggregate using the basic aggregate functions (`sum`, `count`, `avg`,
`min`, `max`); see the sketch after this list.

Stay tuned for more to come in follow-up releases! Support for grouping
on additional columns, filtered aggregation,
  vectorized expressions, and `time_bucket` is coming soon.

* Improved performance of deletes on compressed hypertables when a large
amount of data is affected.

This improvement speeds up operations that delete whole segments by
skipping the decompression step.
It is enabled for all deletes that filter by the `segment_by` column(s).
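As referenced above, a sketch of a query shape the new vectorized execution covers (schema hypothetical):

```sql
-- Grouping on a segment_by column and aggregating with the basic
-- aggregate functions is now vectorized:
SELECT device_id, sum(value), avg(value), min(value), max(value), count(*)
FROM metrics
GROUP BY device_id;
```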

**PostgreSQL 14 deprecation announcement**

We will continue supporting PostgreSQL 14 until April 2025. Closer to
that time, we will announce the specific
version of TimescaleDB in which PostgreSQL 14 support will not be
included going forward.

**Features**
* #6882: Allow delete of full segments on compressed chunks without
decompression.
* #7033: Use `merge` statement on continuous aggregates refresh.
* #7126: Add functions to show the compression information.
* #7147: Vectorize partial aggregation for `sum(int4)` with grouping on
`segment by` columns.
* #7204: Track additional extensions in telemetry.
* #7207: Refactor the `decompress_batches_scan` functions for easier
maintenance.
* #7209: Add a function to drop the `osm` chunk.
* #7275: Add support for the `returning` clause for `merge`.
* #7200: Vectorize common aggregate functions like `min`, `max`, `sum`,
`avg`, `stddev`, `variance` for compressed columns
of arithmetic types, when there is grouping on `segment by` columns or
no grouping.

**Bugfixes**
* #7187: Fix the string literal length for the `compressed_data_info`
function.
* #7191: Fix creating default indexes on chunks when migrating the data.
* #7195: Fix the `segment by` and `order by` checks when dropping a
column from a compressed hypertable.
* #7201: Use the generic extension description when building `apt` and
`rpm` loader packages.
* #7227: Add an index to the `compression_chunk_size` catalog table.
* #7229: Fix the foreign key constraints where the index and the
constraint column order are different.
* #7230: Do not propagate the foreign key constraints to the `osm`
chunk.
* #7234: Release the cache after accessing the cache entry.
* #7258: Force English in the `pg_config` command executed by `cmake` to
avoid the unexpected building errors.
* #7270: Fix the memory leak in compressed DML batch filtering.
* #7286: Fix the index column check while searching for the index.
* #7290: Add check for null offset for continuous aggregates built on
top of continuous aggregates.
* #7301: Make foreign key behavior for hypertables consistent.
* #7318: Fix chunk skipping range filtering.
* #7320: Set the license specific extension comment in the install
script.

**Thanks**
* @MiguelTubio for reporting and fixing the Windows build error.
* @posuch for reporting the misleading extension description in the generic loader packages.
* @snyrkill for discovering and reporting the issue with continuous
aggregates built on top of continuous aggregates.

---------

Signed-off-by: Pallavi Sontakke <pallavi@timescale.com>
Signed-off-by: Yannis Roussos <iroussos@gmail.com>
Signed-off-by: Sven Klemm <31455525+svenklemm@users.noreply.github.com>
Co-authored-by: Yannis Roussos <iroussos@gmail.com>
Co-authored-by: atovpeko <114177030+atovpeko@users.noreply.github.com>
Co-authored-by: Sven Klemm <31455525+svenklemm@users.noreply.github.com>
2024-10-08 15:37:13 +02:00
Sven Klemm
2959bd4294 Set license specific extension comment in install script
Remove the license information from the control file as we package
that with the generic loader and will be used for both Apache licensed
and TSL licensed versions of the extension. This patch adjusts
the extension comment in the install and update script and adds the
appropriate license information.

The effective change from this PR is that `pg_available_extensions`
will show the generic message, but `\dx` will show the correct info
depending on what you installed. This setup is mainly due to packaging
constraints, as we have 3 extension packages: the loader package, the
Apache 2 package, and the community package. The loader package is
shared between Apache 2 and community and has the control file with the
extension comment, so we don't know which timescaledb version it is
used with in that package.
2024-10-08 07:13:35 +02:00
Ante Kresic
0ac3e3429f Removal of sequence number in compression
Sequence numbers were an optimization for ordering batches based on the
orderby configuration setting. They were used for ordered append and
for avoiding sorting compressed data when it matched the query ordering.
However, with the ability to modify compressed data, the bookkeeping of
sequence numbers became more of a hassle. Removing them and
using the metadata columns for ordering reduces that burden while
keeping all the existing optimizations that relied on the sequences
in place.
2024-09-30 13:45:47 +02:00
Erik Nordström
616080ef10 Grant USAGE on debug schema
Make it possible for regular users to execute functions in the schema
`_timescaledb_debug`.

As a consequence, the `extension_state()` function in this schema can
now be executed by "public". This is the only function in this schema
and there is no security risk in making it accessible.
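A sketch of what this enables for a regular user:

```sql
-- Now executable by any user, not just privileged roles:
SELECT _timescaledb_debug.extension_state();
```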
2024-09-19 17:31:19 +02:00
Ildar Musin
01231bafd4 Add function to drop the OSM chunk
The function is used by OSM to disable tiering. It removes catalog records
associated with the OSM chunk and resets the hypertable status.
2024-09-04 11:29:32 +02:00
Mats Kindahl
e1eeedb276 Add index to compression_chunk_size catalog table
During upgrade, the function `remove_dropped_chunk_metadata` is used to
update the metadata tables and remove data for chunks marked as
dropped. The function iterates over the chunks of the provided hypertable
and internally does a sequential scan of the `compression_chunk_size` table
to locate the `compressed_chunk_id`, resulting in quadratic execution
time. This is usually not noticed for a small number of chunks, but for a
large number of chunks it becomes a problem.

This commit fixes this by adding an index to the `compression_chunk_size`
catalog table, turning the sequential scan into an index scan.
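A hedged sketch of the shape of the fix (the index name and exact column list are assumptions; the commit only states that an index was added to support `compressed_chunk_id` lookups):

```sql
-- An index on the looked-up column turns the per-chunk sequential scan
-- into an index scan:
CREATE INDEX compression_chunk_size_compressed_chunk_id_idx
    ON _timescaledb_catalog.compression_chunk_size (compressed_chunk_id);
```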
2024-09-04 10:28:13 +02:00
Sven Klemm
bedc86e3d2 Create temporary bgw function in _timescaledb_functions schema
Only creating functions/procedures in a dedicated schema allows us to
further lock down the _timescaledb_internal schema by removing the EXECUTE
privilege for it.
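A hedged sketch of the lockdown this enables (the exact statement is an assumption based on the commit message):

```sql
-- With no user-facing functions left in the internal schema, its
-- EXECUTE privilege can be revoked wholesale:
REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA _timescaledb_internal FROM PUBLIC;
```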
2024-08-24 15:36:11 +02:00
Sven Klemm
8bb266acd1 Add ts_update_placeholder function
To avoid introducing shared-object dependencies on functions in extension
update scripts, we use this stub function as a placeholder whenever we
need to reference C functions in the update scripts.
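A hedged sketch of the pattern (the SQL function name and signature are hypothetical; only `ts_update_placeholder` comes from this commit):

```sql
-- In the update script, point new SQL functions at the stub; the final
-- versioned script later replaces it with the real C symbol:
CREATE OR REPLACE FUNCTION _timescaledb_functions.new_internal_func()
RETURNS void
AS '$libdir/timescaledb', 'ts_update_placeholder'
LANGUAGE C VOLATILE;
```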
2024-08-19 09:47:00 +02:00
Pallavi Sontakke
74a2e5ae7b
2.16.1 post release
Add 2.16.1 to update tests and adjust version configuration
2024-08-07 15:32:39 +05:30
Sven Klemm
90ecc0d58f Release 2.16.1
This release contains bug fixes since the 2.16.0 release. We recommend
that you upgrade at the next available opportunity.

**Bugfixes**
* #7182 Fix untier_chunk for hypertables with foreign keys
2024-08-06 16:51:45 +02:00
Erik Nordström
19239ff8dd Add function to show compression information
Add a function that can be used on a compressed data value to show
some metadata information, such as the compression algorithm used and
the presence of any null values.
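A hedged sketch (the schema and call site are assumptions; `compressed_data_info` is the function name given in the 2.17.0 notes above):

```sql
-- Inspect a compressed value from a compressed chunk (names illustrative):
SELECT _timescaledb_functions.compressed_data_info(value)
FROM _timescaledb_internal.compress_hyper_2_2_chunk
LIMIT 1;
```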
2024-08-05 17:34:41 +02:00
Sven Klemm
801d32c63c Post-release adjustments for 2.16.0 2024-08-01 07:08:34 +02:00
Sven Klemm
e4eb666ca3 Release 2.16.0
This release contains significant performance improvements when working with compressed data, extended join
support in continuous aggregates, and the ability to define foreign keys from regular tables towards hypertables.
We recommend that you upgrade at the next available opportunity.

In TimescaleDB v2.16.0 we:

* Introduce multiple performance focused optimizations for data manipulation operations (DML) over compressed chunks.

  Improved upsert performance by more than 100x in some cases and more than 1000x in some update/delete scenarios.

* Add the ability to define chunk skipping indexes on non-partitioning columns of compressed hypertables

  TimescaleDB v2.16.0 extends chunk exclusion to use those skipping (sparse) indexes when queries filter on the relevant columns,
  and prune chunks that do not include any relevant data for calculating the query response.

* Offer new options for use cases that require foreign keys defined.

  You can now add foreign keys from regular tables towards hypertables. We have also removed
  some really annoying locks in the reverse direction that blocked access to referenced tables
  while compression was running.

* Extend Continuous Aggregates to support more types of analytical queries.

  More types of joins are supported, additional equality operators on join clauses, and
  support for joins between multiple regular tables.

**Highlighted features in this release**

* Improved query performance through chunk exclusion on compressed hypertables.

  You can now define chunk skipping indexes on compressed chunks for any column with one of the following
  integer data types: `smallint`, `int`, `bigint`, `serial`, `bigserial`, `date`, `timestamp`, `timestamptz`.

  After you call `enable_chunk_skipping` on a column, TimescaleDB tracks the min and max values for
  that column. TimescaleDB uses that information to exclude chunks for queries that filter on that
  column and would not find any data in those chunks (see the sketch after this list).

* Improved upsert performance on compressed hypertables.

  By using index scans to verify constraints during inserts on compressed chunks, TimescaleDB speeds
  up some ON CONFLICT clauses by more than 100x.

* Improved performance of updates, deletes, and inserts on compressed hypertables.

  By filtering data while accessing the compressed data and before decompressing, TimescaleDB has
  improved performance for updates and deletes on all types of compressed chunks, as well as inserts
  into compressed chunks with unique constraints.

  By signaling constraint violations without decompressing, or decompressing only when matching
  records are found in the case of updates, deletes and upserts, TimescaleDB v2.16.0 speeds
  up those operations more than 1000x in some update/delete scenarios, and 10x for upserts.

* You can add foreign keys from regular tables to hypertables, with support for all types of cascading options.
  This is useful for hypertables that partition using sequential IDs, and need to reference those IDs from other tables.

* Lower locking requirements during compression for hypertables with foreign keys

  Advanced foreign key handling removes the need for locking referenced tables when new chunks are compressed.
  DML is no longer blocked on referenced tables while compression runs on a hypertable.

* Improved support for queries on Continuous Aggregates

  `INNER/LEFT` and `LATERAL` joins are now supported. Plus, you can now join with multiple regular tables,
  and you can have more than one equality operator on join clauses.
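As referenced in the chunk-skipping item above, a minimal sketch (hypertable and column hypothetical):

```sql
-- Start tracking min/max ranges for a non-partitioning column:
SELECT enable_chunk_skipping('orders', 'order_id');
```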

**PostgreSQL 13 support removal announcement**

Following the deprecation announcement for PostgreSQL 13 in TimescaleDB v2.13,
PostgreSQL 13 is no longer supported in TimescaleDB v2.16.

The currently supported PostgreSQL major versions are 14, 15 and 16.
2024-07-31 18:43:01 +02:00
Fabrízio de Royes Mello
a4a023e89a Rename {enable|disable}_column_stats API
For better understanding we've decided to rename the public API from
`{enable|disable}_column_stats` to `{enable|disable}_chunk_skipping`.
2024-07-26 18:28:17 -03:00
Sven Klemm
af6b4a3911 Change hypertable foreign key handling
Don't copy foreign key constraints to the individual chunks and
instead modify the lookup query to propagate to individual chunks,
mimicking how Postgres does this for partitioned tables.
This patch also removes the requirement for foreign key columns
to be segmentby columns.
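A hedged sketch of the user-visible effect (table and column names hypothetical):

```sql
-- A foreign key from a hypertable to a regular table; the referencing
-- column no longer has to be a segmentby column, and the constraint is
-- no longer copied to each chunk:
ALTER TABLE readings
    ADD CONSTRAINT readings_device_fkey
    FOREIGN KEY (device_id) REFERENCES devices (device_id);
```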
2024-07-22 14:33:00 +02:00
Sven Klemm
b8d958cb9e Block c function definitions in latest-dev.sql
Having C function references in the versioned part of the SQL
scripts introduces linking requirements in the update script,
potentially preventing version updates. To prevent this we can
have a dummy function in latest-dev.sql since it will get
overwritten as the final step of the extension update.
2024-07-21 10:49:57 +02:00
Nikhil Sontakke
50bca31130 Add support for chunk column statistics tracking
Allow users to specify that ranges (min/max values) be tracked
for a specific column using the enable_column_stats() API. We
will store such min/max ranges in a new timescaledb catalog table
_timescaledb_catalog.chunk_column_stats. As of now we support tracking
min/max ranges for smallint, int, bigint, serial, bigserial, date,
timestamp, timestamptz data types. Support for other stats, such as bloom
filters, will be added in the future.

We add an entry of the form (ht_id, invalid_chunk_id, col, -INF, +INF)
into this catalog to indicate that min/max values need to be calculated
for this column in a given hypertable for chunks. We also iterate
through existing chunks and add -INF, +INF entries for them in the
catalog. This allows for selection of these chunks by default since no
min/max values have been calculated for them.

The actual min/max start/end range is calculated later. For now, one of
the entry points is during compression. The range is stored in
start (inclusive) and end (exclusive) form. If DML happens into a
compressed chunk then as part of marking it as partial, we also mark
the corresponding catalog entries as "invalid", so partial chunks no
longer get excluded. When recompression happens, we get the new
min/max ranges from the uncompressed portion and try to reconcile the
ranges in the catalog based on these new values. This is safe to do in
case of INSERTs and UPDATEs. In case of DELETEs, since we are deleting
rows, it's possible that the min/max ranges change, but as of now we
err on the side of caution and retain the earlier values which can be
larger than the actual range.

We can thus store the min/max values for such columns in this catalog
table at the per-chunk level. Note that these min/max range values do
not participate in partitioning of the data. Such data ranges will be
used for chunk pruning if the WHERE clause of an SQL query specifies
ranges on such a column.

Note that executor startup-time chunk exclusion logic is also able to
use this metadata effectively.

A "DROP COLUMN" on a column with a statistics tracking enabled on it
ends up removing all relevant entries from the catalog tables.

A "decompress_chunk" on a compressed chunk removes its entries from the
"chunk_column_stats" catalog table since now it's available for DML.

Also a new "disable_column_stats" API has been introduced to allow
removal of min/max entries from the catalog for a specific column.
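A sketch of the API pair introduced here (hypertable and column names hypothetical; both function names come from this commit):

```sql
-- Start tracking min/max ranges for a column:
SELECT enable_column_stats('metrics', 'device_id');

-- Remove the min/max entries for that column from the catalog:
SELECT disable_column_stats('metrics', 'device_id');
```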
2024-07-12 14:43:16 +05:30
Sven Klemm
26e9eb521a TimescaleDB 2.15.3 post-release adjustments
Adjust CHANGELOG and downgrade scripts for 2.15.3
2024-07-03 13:04:38 +02:00
Pallavi Sontakke
e41b183ee5
Release 2.15.3 (#7089)
This release contains bug fixes since the 2.15.2 release.
Best practice is to upgrade at the next available opportunity.

**Migrating from self-hosted TimescaleDB v2.14.x and earlier**

After you run `ALTER EXTENSION`, you must run [this SQL
script](https://github.com/timescale/timescaledb-extras/blob/master/utils/2.15.X-fix_hypertable_foreign_keys.sql).
For more details, see the following pull request
[#6797](https://github.com/timescale/timescaledb/pull/6797).

If you are migrating from TimescaleDB v2.15.0, v2.15.1 or v2.15.2, no
changes are required.

**Bugfixes**
* #7061: Fix the handling of multiple unique indexes in a compressed
INSERT.
* #7080: Fix the `corresponding equivalence member not found` error.
* #7088: Fix the leaks in the DML functions.
* #7035: Fix the error when acquiring a tuple lock on the OSM chunks on
the replica.

**Thanks**
* @Kazmirchuk for reporting the issue about leaks with the functions in
DML.
2024-07-02 17:48:18 +05:30
Nikhil Sontakke
60c9f4d246 Fix bug in default segmentby calc. in compression
There was a typo in the query used for the calculation of default
segmentbys in the case of compression.
2024-06-27 17:50:38 +05:30
Fabrízio de Royes Mello
cdfa1560e5 Refactor code for getting time bucket function Oid
This is a small refactoring for getting the time bucket function Oid from
a view definition. It will be necessary for following PRs that
completely remove the unnecessary catalog metadata table
`continuous_aggs_bucket_function`.

Also added a new SQL function `cagg_get_bucket_function_info` to return
all `time_bucket` information based on a user view definition.
2024-06-26 10:33:23 -03:00
Nikhil Sontakke
4e6e8e7e29 Fix a bug in default orderby calc. in compression
There was a "cute" typo in the query used for the calculation of
default orderbys in the case of compression. Fixed that.
2024-06-25 19:03:39 +05:30
Fabrízio de Royes Mello
be15ae68b1 Post release 2.15.2 2024-06-07 17:07:45 -03:00
Fabrízio de Royes Mello
54d5ba1b81 Release 2.15.2
This release contains performance improvements and bug fixes since
the 2.15.0 release. Best practice is to upgrade at the next
available opportunity.

**Migrating from self-hosted TimescaleDB v2.14.x and earlier**

After you run `ALTER EXTENSION`, you must run [this SQL script](https://github.com/timescale/timescaledb-extras/blob/master/utils/2.15.X-fix_hypertable_foreign_keys.sql). For more details, see the following pull request [#6797](https://github.com/timescale/timescaledb/pull/6797).

If you are migrating from TimescaleDB v2.15.0 or v2.15.1, no changes are required.

**Bugfixes**
* #6975: Fix sort pushdown for partially compressed chunks.
* #6976: Fix removal of metadata function and update script.
* #6978: Fix segfault in compress_chunk with primary space partition.
* #6993: Disallow hash partitioning on primary column.

**Thanks**
* @gugu for reporting the issue with catalog corruption due to update.
* @srieding for reporting an issue with partially compressed chunks and ordering on joined columns.
2024-06-07 11:16:10 -03:00
Ante Kresic
5c2c80f845 Fix removal of metadata function and update script
Change the code to remove the assumption of a 1:1
mapping between chunks and chunk constraints, and include
a check for whether a chunk constraint is shared with another chunk.
In that case, skip deleting the dimension slice.
2024-06-05 13:58:57 +02:00
Fabrízio de Royes Mello
438736f6bd Post release 2.15.1 2024-05-30 14:08:38 -03:00
Fabrízio de Royes Mello
e0a21a060b Release 2.15.1
This release contains bug fixes since the 2.15.0 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6540 Segmentation fault when backfilling data with COPY into a compressed chunk
* #6858 Before update trigger not working correctly
* #6908 Fix gapfill with timezone behaviour around dst switches
* #6911 Fix dropped chunk metadata removal in update script
* #6940 Fix `pg_upgrade` failure by removing `regprocedure` from catalog table
* #6957 Fix segfault in UNION queries with ordering on compressed chunks

**Thanks**
* @DiAifU, @kiddhombre and @intermittentnrg for reporting issues with gapfill and daylight saving time
* @edgarzamora for reporting issue with update triggers
* @hongquan for reporting an issue with the update script
* @iliastsa and @SystemParadox for reporting an issue with COPY into a compressed chunk
2024-05-29 09:03:09 -03:00
Fabrízio de Royes Mello
8b994c717d Remove regprocedure oid type from catalog
In #6624 we refactored the time bucket catalog table to make it more
generic and save information for all Continuous Aggregates. Previously
it stored only variable bucket size information.

The problem is we used the `regprocedure` type to store the OID of the
given time bucket function but unfortunately it is not supported by
`pg_upgrade`.

Fixed it by changing the column to TEXT and resolving to/from the OID
using the builtin `regprocedurein` and `format_procedure_qualified` functions.

Fixes #6935
2024-05-22 11:01:56 -03:00
Sven Klemm
f9ccf1be07 Fix _timescaledb_functions.remove_dropped_chunk_metadata
The removal function would only remove chunk_constraints that are
part of dimension constraints. This patch changes it to remove all
constraints of a chunk.
2024-05-21 17:31:18 +02:00
Sven Klemm
009d970af9 Remove PG13 support
In order to keep the number of supported PG versions more manageable,
remove support for PG13.
2024-05-14 17:19:20 +02:00
Fabrízio de Royes Mello
ca125cf620 Post-release changes for 2.15.0. 2024-05-07 16:44:43 -03:00
Fabrízio de Royes Mello
defe4ef581 Release 2.15.0
This release contains performance improvements and bug fixes since
the 2.14.2 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:
* Support `time_bucket` with `origin` and/or `offset` on Continuous Aggregates (see the sketch after this list)
* Compression improvements:
  - Improve expression pushdown
  - Add minmax sparse indexes when compressing columns with btree indexes
  - Make compression use the defaults functions
  - Vectorize filters in WHERE clause that contain text equality operators and LIKE expressions
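As referenced in the first item above, a hedged sketch of a Continuous Aggregate using `time_bucket` with a custom origin (schema hypothetical):

```sql
CREATE MATERIALIZED VIEW metrics_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 day', time, origin => '2000-01-01'::timestamptz) AS bucket,
       avg(value)
FROM metrics
GROUP BY bucket;
```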

**Deprecation warning**
* Starting with this release, it is no longer possible to create Continuous Aggregates using `time_bucket_ng`, and it will be completely removed in upcoming releases.
* We recommend that users [migrate their old Continuous Aggregate format to the new one](https://docs.timescale.com/use-timescale/latest/continuous-aggregates/migrate/), because support for it will be completely removed in upcoming releases, which would prevent them from migrating.
* This is the last release supporting PostgreSQL 13.

**For on-premise users and this release only**, you will need to run [this SQL script](https://github.com/timescale/timescaledb-extras/blob/master/utils/2.15.X-fix_hypertable_foreign_keys.sql) after running `ALTER EXTENSION`. More details can be found in the pull request [#6797](https://github.com/timescale/timescaledb/pull/6797).

**Features**
* #6382 Support for time_bucket with origin and offset in CAggs
* #6696 Improve defaults for compression segment_by and order_by
* #6705 Add sparse minmax indexes for compressed columns that have uncompressed btree indexes
* #6754 Allow DROP CONSTRAINT on compressed hypertables
* #6767 Add metadata table `_timescaledb_internal.bgw_job_stat_history` for tracking job execution history
* #6798 Prevent usage of deprecated time_bucket_ng in CAgg definition
* #6810 Add telemetry for access methods
* #6811 Remove no longer relevant timescaledb.allow_install_without_preload GUC
* #6837 Add migration path for CAggs using time_bucket_ng
* #6865 Update the watermark when truncating a CAgg

**Bugfixes**
* #6617 Fix error in show_chunks
* #6621 Remove metadata when dropping chunks
* #6677 Fix snapshot usage in CAgg invalidation scanner
* #6698 Define meaning of 0 retries for jobs as no retries
* #6717 Fix handling of compressed tables with primary or unique index in COPY path
* #6726 Fix constify cagg_watermark using window function when querying a CAgg
* #6729 Fix NULL start value handling in CAgg refresh
* #6732 Fix CAgg migration with custom timezone / date format settings
* #6752 Remove custom autovacuum setting from compressed chunks
* #6770 Fix plantime chunk exclusion for OSM chunk
* #6789 Fix deletes with subqueries and compression
* #6796 Fix a crash involving a view on a hypertable
* #6797 Fix foreign key constraint handling on compressed hypertables
* #6816 Fix handling of chunks with no constraints
* #6820 Fix a crash when the ts_hypertable_insert_blocker was called directly
* #6849 Use non-orderby compressed metadata in compressed DML
* #6867 Clean up compression settings when deleting compressed cagg
* #6869 Fix compressed DML with constraints of form value OP column
* #6870 Fix bool expression pushdown for queries on compressed chunks

**Thanks**
* @brasic for reporting a crash when the ts_hypertable_insert_blocker was called directly
* @bvanelli for reporting an issue with the jobs retry count
* @djzurawsk For reporting error when dropping chunks
* @Dzuzepppe for reporting an issue with DELETEs using subquery on compressed chunk working incorrectly.
* @hongquan For reporting a 'timestamp out of range' error during CAgg migrations
* @kevcenteno for reporting an issue with the show_chunks API showing incorrect output when 'created_before/created_after' was used with time-partitioned columns.
* @mahipv For starting working on the job history PR
* @rovo89 For reporting constify cagg_watermark not working using window function when querying a CAgg
2024-05-06 12:40:40 -03:00
Sven Klemm
e298ecd532 Don't reuse job id
We shouldn't reuse job IDs, to make it easy to recognize the job
log entries for a job. We also need to keep the old job around
to not break loading dumps from older versions.
2024-05-03 09:05:57 +02:00
Jan Nidzwetzki
f88899171f Add migration for CAggs using time_bucket_ng
The function time_bucket_ng is deprecated. This PR adds a migration path
for existing CAggs. Since time_bucket and time_bucket_ng use different
origin values, a custom origin is set if needed to let time_bucket
create the same buckets as created by time_bucket_ng so far.
2024-04-25 16:08:48 +02:00
Jan Nidzwetzki
3ad948163c Remove MN leftover create distributed hypertable
This commit removes an unused function that was used in MN setups to
create distributed hypertables.
2024-04-25 10:55:27 +02:00