483 Commits

Sven Klemm
d0e07a404f Release 2.14.2
This release contains bug fixes since the 2.14.1 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6655 Fix segfault in cagg_validate_query
* #6660 Fix refresh on empty CAgg with variable bucket
* #6670 Don't try to compress osm chunks

**Thanks**
* @kav23alex for reporting a segfault in cagg_validate_query
2024-02-20 07:39:51 +01:00
Jan Nidzwetzki
b01c8e7377 Unify handling of CAgg bucket_origin
So far, bucket_origin was defined as a Timestamp but used as a
TimestampTz in many places. This commit changes that and unifies the
usage of the variable.
2024-02-16 18:28:21 +01:00
Jan Nidzwetzki
ab7a09e876 Make CAgg time_bucket catalog table more generic
The catalog table continuous_aggs_bucket_function is currently only used
for variable bucket sizes. Information about fixed-size buckets is
stored only in the table continuous_agg. This causes some problems
(e.g., we have redundant fields for the bucket_size, fixed-size buckets
with offsets are not supported, ...).

This commit is the first in a series of commits that refactor the catalog
for the CAgg time_bucket function. The goals are:

* Remove the CAgg redundant attributes in the catalog
* Create an entry in continuous_aggs_bucket_function for all CAggs
  that use time_bucket

This first commit refactors the continuous_aggs_bucket_function table
and prepares it for more generic use. Not all attributes are used yet,
but this will change in follow-up PRs.
2024-02-16 15:39:49 +01:00
Fabrízio de Royes Mello
5a359ac660 Remove metadata when dropping chunk
Historically we have preserved chunk metadata because the old format of
the Continuous Aggregate has the `chunk_id` column in the materialization
hypertable, so to avoid leaving dangling chunk ids there we just marked
chunks as dropped when dropping them.

In #4269 we introduced a new Continuous Aggregate format that doesn't
store the `chunk_id` in the materialization hypertable anymore, so it's
safe to also remove the metadata when dropping a chunk, provided all
associated Continuous Aggregates are in the new format.

Also added a post-update SQL script to clean up unnecessary dropped chunk
metadata in our catalog.

Closes #6570
2024-02-16 10:45:04 -03:00
Sven Klemm
8d8f158302 2.14.1 post release
Adjust update tests to include new version.
2024-02-15 06:15:59 +01:00
Sven Klemm
59f50f2daa Release 2.14.1
This release contains bug fixes since the 2.14.0 release.
We recommend that you upgrade at the next available opportunity.

**Features**
* #6630 Add views for per chunk compression settings

**Bugfixes**
* #6636 Fixes extension update of compressed hypertables with dropped columns
* #6637 Reset sequence numbers on non-rollup compression
* #6639 Disable default indexscan for compression
* #6651 Fix DecompressChunk path generation with per chunk settings

**Thanks**
* @anajavi for reporting an issue with extension update of compressed hypertables
2024-02-14 17:19:04 +01:00
Sven Klemm
87430168b5 Fix downgrade script to make backports easier
While the downgrade script doesn't combine multiple versions into a
single script, since we only create the script for the previous version,
fixing this will make backporting in the release branch easier.
2024-02-14 14:45:06 +01:00
Sven Klemm
fc0e41cc13 Fix DecompressChunk path generation with per chunk settings
Adjust DecompressChunk path generation to use the per chunk settings
and not the hypertable settings when building compression info.
This patch also fixes the missing chunk configuration generation
in the update script which was masked by this bug.
2024-02-14 14:18:04 +01:00
Sven Klemm
ea6d826c12 Add compression settings informational view
This patch adds 2 new views, hypertable_compression_settings and
chunk_compression_settings, to query the per-chunk compression
settings.
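
As a sketch, both views can be queried like the other informational views
(assuming they are exposed in the `timescaledb_information` schema;
`metrics` is a hypothetical hypertable name):

```sql
-- Hypertable-level compression settings
SELECT * FROM timescaledb_information.hypertable_compression_settings
WHERE hypertable = 'metrics'::regclass;

-- Settings in effect for each individual compressed chunk
SELECT * FROM timescaledb_information.chunk_compression_settings
WHERE hypertable = 'metrics'::regclass;
```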
2024-02-13 07:33:37 +01:00
Sven Klemm
fa5c0a9b22 Fix update script for tables with dropped columns 2024-02-12 20:58:20 +01:00
Ante Kresic
ba3ccc46db Post-release fixes for 2.14.0
Bumping the previous version and adding tests for 2.14.0.
2024-02-12 09:32:40 +01:00
Jan Nidzwetzki
1765c82c50 Remove distributed CAgg leftovers
Version 2.14.0 removes the multi-node code. However, there were a few
leftovers for the handling of distributed CAggs. This commit cleans up
the CAgg code and removes the no longer needed functions:

invalidation_cagg_log_add_entry(integer,bigint,bigint);
invalidation_hyper_log_add_entry(integer,bigint,bigint);
materialization_invalidation_log_delete(integer);
invalidation_process_cagg_log(integer,integer,regtype,bigint,
     bigint,integer[],bigint[],bigint[]);
invalidation_process_cagg_log(integer,integer,regtype,bigint,
     bigint,integer[],bigint[],bigint[],text[]);
invalidation_process_hypertable_log(integer,integer,regtype,
     integer[],bigint[],bigint[]);
invalidation_process_hypertable_log(integer,integer,regtype,
     integer[],bigint[],bigint[],text[]);
hypertable_invalidation_log_delete(integer);
2024-02-08 16:02:19 +01:00
Ante Kresic
505b427a04 Release 2.14.0
This release contains performance improvements and bug fixes since
the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time.
New compression settings take effect on any new chunks that are compressed after the change.
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations to avoid decompressing a lot of tuples and causing storage issues (100k limit, configurable)
* Helper functions for determining compression settings

**For this release only**, you will need to restart the database before running `ALTER EXTENSION`

**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13,
Multi-node is no longer supported starting with TimescaleDB 2.14.

TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which, starting with TimescaleDB 2.14,
works on both uncompressed and partially compressed chunks.
The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress
old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter).

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk

**Bugfixes**
* #6541 Fix inefficient join plans on compressed hypertables
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Fix suboptimal query plans when using time_bucket with query parameters
* #6507 Fix time_bucket_gapfill with timezones not handling daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of
CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
2024-02-08 13:57:06 +01:00
Sven Klemm
101e4c57ef Add recompress optional argument to compress_chunk
This patch deprecates the recompress_chunk procedure as all that
functionality is covered by compress_chunk now. This patch also adds a
new optional boolean argument to compress_chunk to force applying
changed compression settings to existing compressed chunks.
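
A minimal usage sketch (assuming a hypothetical `metrics` hypertable and
passing the new argument by name):

```sql
-- Re-apply the current compression settings to already compressed chunks
SELECT compress_chunk(c, recompress => true)
FROM show_chunks('metrics') AS c;
```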
2024-02-07 12:19:13 +01:00
Nikhil Sontakke
2b8f98c616 Support approximate hypertable size
If a lot of chunks are involved, the current pl/pgsql function
that computes the size of each chunk via a nested loop is pretty slow.
Additionally, the current functionality makes a system call to get the
file size on disk for each chunk every time this function is called,
which slows things down further. We now have an approximate function,
implemented in C, that avoids the issues in the pl/pgsql function.
Additionally, this function uses per-backend caching via the
smgr layer to compute the approximate size cheaply. The PG cache
invalidation clears the cached size for a chunk when DML happens
on it, so the size cache picks up the latest size within a matter of
minutes. Also, due to the backend caching, any long-running
session will only fetch fresh data for new or modified chunks and can
use the cached data (which is calculated afresh the first time around)
effectively for older chunks.
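
A usage sketch, assuming the C implementation is exposed following the
naming of the existing size functions (`metrics` is a hypothetical
hypertable):

```sql
-- Approximate total size in bytes, served from the per-backend cache
SELECT hypertable_approximate_size('metrics');

-- Approximate breakdown into table, index, and toast sizes
SELECT * FROM hypertable_approximate_detailed_size('metrics');
```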
2024-02-01 13:25:41 +05:30
Nikhil Sontakke
c715d96aa4 Don't dump unnecessary extension tables
Logging- and caching-related tables from the timescaledb extension
should not be dumped using pg_dump. Our scripts specify a few such
unwanted tables. Apart from being unnecessary, the "job_errors" table had
some restricted permissions, causing additional problems in pg_dump.

We now exclude such tables from dumping.

Fixes #5449
2024-01-25 12:01:11 +05:30
Sven Klemm
0b23bab466 Include _timescaledb_catalog.metadata in dumps
This patch changes the dump configuration for
_timescaledb_catalog.metadata to include all entries. To allow loading
logical dumps with this configuration, an insert trigger is added that
turns uniqueness conflicts into updates so as not to block the restore.
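
Conceptually, the trigger behaves like the following sketch (the function
and trigger names here are hypothetical; the actual objects live in the
extension scripts):

```sql
-- Sketch: absorb duplicate-key inserts into updates so that restoring a
-- logical dump of _timescaledb_catalog.metadata does not fail
CREATE FUNCTION metadata_insert_guard() RETURNS trigger AS $$
BEGIN
    UPDATE _timescaledb_catalog.metadata
       SET value = NEW.value
     WHERE key = NEW.key;
    IF FOUND THEN
        RETURN NULL; -- conflict turned into an update; skip the insert
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER metadata_insert_trigger
    BEFORE INSERT ON _timescaledb_catalog.metadata
    FOR EACH ROW EXECUTE FUNCTION metadata_insert_guard();
```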
2024-01-23 12:53:48 +01:00
Matvey Arye
e89bc24af2 Add functions for determining compression defaults
Add functions to help determine defaults for segment_by and order_by.
2024-01-22 08:10:23 -05:00
Sven Klemm
754f77e083 Remove chunks_in function
This function was used to propagate chunk exclusion decisions from
an access node to data nodes and is no longer needed with the removal
of multinode.
2024-01-22 09:18:26 +01:00
Sven Klemm
f57d584dd2 Make compression settings per chunk
This patch implements changes to the compressed hypertable to allow per
chunk configuration. To enable this the compressed hypertable can no
longer be in an inheritance tree as the schema of the compressed chunk
is determined by the compression settings. While this patch implements
all the underlying infrastructure changes, the restrictions for changing
compression settings remain intact and will be lifted in a follow-up patch.
2024-01-17 12:53:07 +01:00
Sven Klemm
75cc4f7d7b Improve multinode detection in update script
Previously we would only check whether data nodes were defined or
distributed hypertables were present, which might not be true on data
nodes themselves. We now prevent the update on any installation that has
dist_uuid defined, which is also true on data nodes.
2024-01-12 11:18:35 +01:00
Mats Kindahl
662fcc1b1b Make extension state available through function
The extension state is not easily accessible in release builds, which
makes debugging issues with the loader very difficult. This commit
introduces a new schema `_timescaledb_debug` and makes the function
`ts_extension_get_state` available also in release builds as
`_timescaledb_debug.extension_state`.

See #1682
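
A quick check (a sketch; the function takes no arguments and reports the
current extension state as text):

```sql
SELECT _timescaledb_debug.extension_state();
```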
2024-01-11 10:52:35 +01:00
Jan Nidzwetzki
df7a8fed6f Post-release fixes for 2.13.1
Bumping the previous version and adding tests for 2.13.1
2024-01-09 16:31:07 +01:00
Jan Nidzwetzki
a69c4682ce Release 2.13.1
This release contains bug fixes since the 2.13.0 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6365 Use numrows_pre_compression in approximate row count
* #6377 Use processed group clauses in PG16
* #6384 Change bgw_log_level to use PGC_SUSET
* #6393 Disable vectorized sum for expressions.
* #6408 Fix groupby pathkeys for gapfill in PG16
* #6428 Fix index matching during DML decompression
* #6439 Fix compressed chunk permission handling on PG16
* #6443 Fix lost concurrent CAgg updates
* #6454 Fix unique expression indexes on compressed chunks
* #6465 Fix use of freed path in decompression sort logic

**Thanks**
* @MA-MacDonald for reporting an issue with gapfill in PG16
* @aarondglover for reporting an issue with unique expression indexes on compressed chunks
* @adriangb for reporting an issue with security barrier views on pg16
2024-01-04 10:04:10 +01:00
Sven Klemm
8f73f95c2a Remove replication_factor field from _timescaledb_catalog.hypertable 2023-12-18 10:53:27 +01:00
Sven Klemm
11dd9af847 Remove multinode catalog objects
This patch removes the following objects:

tables:
- _timescaledb_catalog.chunk_data_node
- _timescaledb_catalog.dimension_partition
- _timescaledb_catalog.hypertable_data_node
- _timescaledb_catalog.remote_txn

views:
- timescaledb_information.data_nodes

functions:
- _timescaledb_functions.hypertable_remote_size
- _timescaledb_functions.chunks_remote_size
- _timescaledb_functions.indexes_remote_size
- _timescaledb_functions.compressed_chunk_remote_stats
2023-12-18 10:53:27 +01:00
Sven Klemm
ce90fde526 Alter pre-update handling during downgrade script generation
Before this change, during downgrade script generation we would
always fetch the pre-update script from the previous version and
prepend it to the generated scripts. This limits what can be
referenced in the pre-update script and also what is possible
within the downgrade itself.
This patch splits the pre-update script into a generic part that
is used for both update and downgrade and an update-specific part. We
could later also add a downgrade-specific part, but currently it is not
needed. This change is necessary because we reference a timescaledb
view in the pre-update script, which prevents changes to that view.
2023-12-18 10:13:03 +01:00
Sven Klemm
6395b249a9 Remove remote connection handling code
Remove the code used by multinode to handle remote connections.
This patch completely removes tsl/src/remote and any remaining
distributed hypertable checks.
2023-12-15 19:13:08 +01:00
Sven Klemm
06867af966 Remove multinode functions from crossmodule struct
This commit removes the multinode-specific entries from the cross-module
function struct. It also removes the function
set_chunk_default_data_node.
2023-12-14 21:32:14 +01:00
Sven Klemm
11df1dd648 Remove experimental multinode functions
This commit removes the following functions:
- timescaledb_experimental.block_new_chunks
- timescaledb_experimental.allow_new_chunks
- timescaledb_experimental.subscription_exec
- timescaledb_experimental.move_chunk
- timescaledb_experimental.copy_chunk
- timescaledb_experimental.cleanup_copy_chunk_operation
2023-12-13 23:38:32 +01:00
Sven Klemm
8a2029f569 Remove rxid type and distributed size util functions 2023-12-13 23:38:32 +01:00
Sven Klemm
19f1395191 Remove internal multinode ddl functions
This commit removes the following functions:
- _timescaledb_functions.create_chunk_replica_table
- _timescaledb_functions.chunk_drop_replica
- _timescaledb_functions.wait_subscription_sync
- _timescaledb_functions.health
- _timescaledb_functions.drop_stale_chunks
2023-12-13 23:38:32 +01:00
Fabrízio de Royes Mello
78490c47b7 Remove support for creating CAggs with old format
Timescale 2.7 released a new version of Continuous Aggregate (#4269)
that stores the final aggregation state instead of the byte array of
the partial aggregate state, offering multiple optimization
opportunities as well as a more compact form.

In 2.10.0, released in February 2023, deprecation of the old Continuous
Aggregate format was announced.

With this PR the ability to create Continuous Aggregates in the old
format is removed, but we still support migrating from the old to the
new format by running the `cagg_migrate` procedure.

This is the continuation of the PR #5977 started by @pdipesh02.

References:
https://docs.timescale.com/api/latest/continuous-aggregates/cagg_migrate/
https://github.com/timescale/timescaledb/releases/tag/2.10.0
https://github.com/timescale/timescaledb/releases/tag/2.7.0
https://github.com/timescale/timescaledb/pull/5977
2023-12-13 18:48:31 -03:00
Sven Klemm
c914d19fac Remove the timescaledb_fdw foreign data wrapper
This is the fdw implementation that was used for communication
between multinode instances.
2023-12-13 09:48:03 +01:00
Sven Klemm
36c36564a8 Refactor compression setting storage
This patch drops the catalog table _timescaledb_catalog.hypertable_compression
and stores those settings in _timescaledb_catalog.compression_settings instead.
The storage format is changed: the new table has 1 entry per relation
instead of 1 entry per column and has no dependency on hypertables.
All other aspects of compression remain the same. This refactoring is
to enable per-chunk compression settings in a follow-up patch.
2023-12-12 21:45:33 +01:00
Sven Klemm
bc935ab2ca Remove multinode public API
This patch removes the following functions/procedures:
- add_data_node
- alter_data_node
- attach_data_node
- create_distributed_hypertable
- create_distributed_restore_point
- delete_data_node
- detach_data_node
- distributed_exec
- set_replication_factor
- _timescaledb_functions.ping_data_node
- _timescaledb_functions.remote_txn_heal_data_node
- _timescaledb_functions.set_dist_id
- _timescaledb_functions.set_peer_dist_id
- _timescaledb_functions.show_connection_cache
- _timescaledb_functions.validate_as_data_node
- _timescaledb_internal.ping_data_node
- _timescaledb_internal.remote_txn_heal_data_node
- _timescaledb_internal.set_dist_id
- _timescaledb_internal.set_peer_dist_id
- _timescaledb_internal.show_connection_cache
- _timescaledb_internal.validate_as_data_node
2023-12-12 20:37:35 +01:00
Jan Nidzwetzki
65f681537d Fix makeaclitem function creation 2023-11-29 21:49:17 +01:00
Jan Nidzwetzki
3b59a8a774 Post-release fixes for 2.13.0
Bumping the previous version and adding tests for 2.13.0.
2023-11-29 21:49:17 +01:00
Jan Nidzwetzki
337adb63fc Release 2.13.0
This release contains performance improvements, an improved hypertable DDL API
and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Full PostgreSQL 16 support for all existing features
* Vectorized aggregation execution for sum()
* Track chunk creation time used in retention/compression policies

**Deprecation notice: Multi-node support**
TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node
support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it
[here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**PostgreSQL 13 deprecation announcement**
We will continue supporting PostgreSQL 13 until April 2024. Closer to that time, we will
announce the specific version of TimescaleDB in which PostgreSQL 13 support will no longer
be included.

**Starting from TimescaleDB 2.13.0**
* No Amazon Machine Images (AMI) are published. If you previously used AMI, please
use another [installation method](https://docs.timescale.com/self-hosted/latest/install/)
* Continuous Aggregates are materialized only (non-realtime) by default

**Features**
* #5575 Add chunk-wise sorted paths for compressed chunks
* #5761 Simplify hypertable DDL API
* #5890 Reduce WAL activity by freezing compressed tuples immediately
* #6050 Vectorized aggregation execution for sum()
* #6062 Add metadata for chunk creation time
* #6077 Make Continuous Aggregates materialized only (non-realtime) by default
* #6177 Change show_chunks/drop_chunks using chunk creation time
* #6178 Show batches/tuples decompressed during DML operations in EXPLAIN output
* #6185 Keep track of catalog version
* #6227 Use creation time in retention/compression policy
* #6307 Add SQL function cagg_validate_query

**Bugfixes**
* #6188 Add GUC for setting background worker log level
* #6222 Allow enabling compression on hypertable with unique expression index
* #6240 Check if worker registration succeeded
* #6254 Fix exception detail passing in compression_policy_execute
* #6264 Fix missing bms_del_member result assignment
* #6275 Fix negative bitmapset member not allowed in compression
* #6280 Fix potential data loss when compressing a table with a partial index that matches compression order
* #6289 Add support for startup chunk exclusion with aggs
* #6290 Repair relacl on upgrade
* #6297 Fix segfault when creating a cagg using a NULL width in time bucket function
* #6305 Make timescaledb_functions.makeaclitem strict
* #6332 Fix typmod and collation for segmentby columns
* #6339 Fix tablespace with constraints
* #6343 Enable segmentwise recompression in compression policy

**Thanks**
* @fetchezar for reporting an issue with compression policy error messages
* @jflambert for reporting the background worker log level issue
* @torazem for reporting an issue with compression and large oids
* @fetchezar for reporting an issue in the compression policy
* @lyp-bobi for reporting an issue with tablespace with constraints
* @pdipesh02 for contributing to the implementation of the metadata for chunk creation time,
             the generalized hypertable API, and show_chunks/drop_chunks using chunk creation time
* @lkshminarayanan for all his work on PG16 support
2023-11-27 16:13:51 +01:00
Nikhil Sontakke
51d92a3638 Fix non-default tablespaces with constraints
If a hypertable uses a non-default tablespace for its primary or
unique constraints with additional DEFERRABLE or INITIALLY DEFERRED
characteristics, then any chunk creation will fail with a syntax error. We
now set the tablespace via a separate command for such constraints on
the chunks.

Fixes #6338
2023-11-23 19:09:48 +05:30
Sven Klemm
d1c8cb0673 Fix segmentby typmod and collation of compressed chunks
We added a workaround for segmentby columns with incorrect typmod
and collation in c73c5a74b9 but did not adjust pre-existing relations.
This patch fixes any existing relations where the segmentby columns
of compressed chunks have incorrect typmod and collation, and removes
the code workaround.
2023-11-21 14:50:58 +01:00
Nikhil Sontakke
44817252b5 Use creation time in retention/compression policy
The retention and compression policies can now use drop_created_before
and compress_created_before arguments respectively to specify chunk
selection using their creation times.

We don't support creation times for CAggs yet.
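
A usage sketch with the new arguments passed by name (`metrics` is a
hypothetical hypertable):

```sql
-- Drop chunks created more than six months ago
SELECT add_retention_policy('metrics',
    drop_created_before => INTERVAL '6 months');

-- Compress chunks created more than seven days ago
SELECT add_compression_policy('metrics',
    compress_created_before => INTERVAL '7 days');
```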
2023-11-16 20:17:17 +05:30
Fabrízio de Royes Mello
3e08d21ace Add SQL function cagg_validate_query
With this function it is possible to execute the Continuous Aggregate
query validation over an arbitrary query string, without the need to
actually create the Continuous Aggregate.

It can be used, for example, to check the most frequent queries (perhaps
using `pg_stat_statements`), validate them, and check whether any of them
could potentially be turned into a Continuous Aggregate.
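
A usage sketch, assuming the function is installed in the internal
`_timescaledb_functions` schema and returns the validation outcome as a
result set (`metrics` is a hypothetical hypertable):

```sql
-- Validate a candidate query without creating the Continuous Aggregate
SELECT * FROM _timescaledb_functions.cagg_validate_query(
    $$ SELECT time_bucket('1 hour', time) AS bucket, avg(value)
       FROM metrics GROUP BY bucket $$);
```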
2023-11-14 08:29:26 -03:00
Mats Kindahl
f2310216a8 Repair relacl on extension upgrade
If users have accidentally been removed from `pg_authid` as a result of
bugs where dropping a user did not revoke privileges from all tables
where they had privileges, it will not be possible to create new chunks,
since these require the user to be found when copying the privileges
for the parent table (either a compressed hypertable or a normal
hypertable).

To fix the situation, we repair the `pg_class` table when updating the
extension by modifying the `relacl` for relations and removing any user
that does not have an entry in `pg_authid`.

A repair function `_timescaledb_functions.repair_relation_acls` is
added that will perform the job. A variant of PG16's `makeaclitem` that
accepts a comma-separated list of privileges is also added as
`_timescaledb_functions.makeaclitem` and used as part of the repair.
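
A sketch of calling the added function (the exact argument types are an
assumption based on PG16's makeaclitem, with the privilege argument
extended to accept a comma-separated list):

```sql
-- Build an aclitem granting joe SELECT and INSERT, granted by admin
SELECT _timescaledb_functions.makeaclitem(
    'joe'::regrole, 'admin'::regrole, 'SELECT,INSERT', false);
```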
2023-11-09 11:35:27 +01:00
Nikhil Sontakke
844807a374 Change show_chunks/drop_chunks using creation time
- Updated the show_chunks and drop_chunks APIs to determine the affected
chunks using chunk creation time metadata when a "date/time/interval"-like
boundary is specified for INTEGER time columns (see the sketch below).

- We honor the "integer_now" function if it's specified, so as to keep
backwards compatibility with the existing behavior.
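
A usage sketch (hypothetical `metrics` hypertable; the creation-time
boundary is passed by name):

```sql
-- Show chunks created before a point in time, independent of the
-- type of the hypertable's time column
SELECT show_chunks('metrics', created_before => now() - INTERVAL '7 days');

-- Drop the same set of chunks
SELECT drop_chunks('metrics', created_before => now() - INTERVAL '7 days');
```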

Co-authored-by: Dipesh Pandit <dipesh@timescale.com>
2023-11-02 18:37:09 +05:30
Jan Nidzwetzki
8767de658b Reduce WAL activity by freezing tuples immediately
When we compress a chunk, we create a new compressed chunk for storing
the compressed data. So far, the tuples were just inserted into the
compressed chunk and frozen by a later vacuum run.

However, freezing tuples causes WAL activity; this can be avoided
because the compressed chunk is created in the same transaction as the
tuples. This patch reduces the WAL activity by storing these tuples
directly as frozen, preventing a freeze operation in the future. This
approach is similar to PostgreSQL's COPY FREEZE.
2023-10-25 13:27:07 +02:00
Sven Klemm
1126b08567 2.12.2 Post-release
Add 2.12.2 to update test scripts and add update downgrade metadata.
2023-10-23 18:31:36 +02:00
Sven Klemm
21a3ebd77c Release 2.12.2
This release contains bug fixes since the 2.12.1 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6155 Align gapfill bucket generation with time_bucket
* #6181 Ensure fixed_schedule field is populated
* #6210 Fix EXPLAIN ANALYZE for compressed DML
2023-10-19 14:37:28 +02:00
Fabrízio de Royes Mello
a409065285 PG16: Prohibit use of multi-node
Since multi-node is not supported on PG16, add errors to multi-node
functions when run on this PostgreSQL version.
2023-10-18 11:45:06 -03:00
Sven Klemm
a664e685cd Keep track of catalog version
This patch stores the current catalog version in an internal
table to allow us to verify that catalog and code versions match.
When doing dump/restore, people occasionally report very unusual
errors, and during investigation it is discovered that they loaded
a dump from an older version and ran it with a later code version.
This allows detecting mismatches between the installed code version
and the loaded dump version. The version number in the metadata table
will be kept updated in upgrade and downgrade scripts.
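
The recorded version can then be compared with the loaded code version;
a sketch of such a check (the exact metadata key name is an assumption):

```sql
-- Inspect version entries recorded in the catalog metadata table
SELECT key, value FROM _timescaledb_catalog.metadata
WHERE key LIKE '%version%';
```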
2023-10-14 22:28:21 +02:00