Release 2.14.0

This release contains performance improvements and bug fixes since
the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time.
New compression settings take effect on any new chunks that are compressed after the change (see the example below).
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations, to avoid decompressing large numbers of tuples and causing storage issues (100,000-tuple limit by default, configurable)
* Helper functions for determining compression settings
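
As a quick sketch of the first item (the hypertable and column names here are hypothetical), compression settings can now be changed even when compressed chunks already exist:

```sql
-- Change compression settings on a hypertable that already has
-- compressed chunks; existing chunks keep their old settings until
-- they are recompressed, new chunks use the new settings.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'time DESC'
);
```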

**For this release only**, you will need to restart the database before running `ALTER EXTENSION`.
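
A minimal sketch of the update step itself (the target version string is this release; per the note in the changelog, use `psql -X` so `.psqlrc` does not load the previous version):

```sql
-- Run in a fresh session after the database restart required by this release.
ALTER EXTENSION timescaledb UPDATE TO '2.14.0';
```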

**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13,
Multi-node is no longer supported starting with TimescaleDB 2.14.

TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).
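
Before upgrading, you can check whether a deployment still contains multi-node objects. The queries below mirror the guards in the `2.13.1--2.14.0` update script included in this commit; any rows returned indicate objects that must be migrated or removed first:

```sql
-- Distributed hypertables still present?
SELECT schema_name, table_name
FROM _timescaledb_catalog.hypertable
WHERE replication_factor > 0;

-- Data nodes still attached?
SELECT srv.srvname
FROM pg_foreign_server srv
JOIN pg_foreign_data_wrapper fdw ON srv.srvfdw = fdw.oid
WHERE fdw.fdwname = 'timescaledb_fdw';

-- Is this node itself part of a multi-node installation?
SELECT * FROM _timescaledb_catalog.metadata WHERE key = 'dist_uuid';
```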

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which, starting with TimescaleDB 2.14,
works on both uncompressed and partially compressed chunks.
Going forward, use compress_chunk to fully compress all types of chunks, or to recompress
previously compressed chunks with new compression settings (through the newly introduced optional recompress parameter).
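
The new signature is visible in the update script further down in this commit (`compress_chunk(uncompressed_chunk REGCLASS, if_not_compressed BOOLEAN = true, recompress BOOLEAN = false)`). A sketch of recompressing an already-compressed chunk under new settings, using a hypothetical chunk name:

```sql
-- Fully recompress one compressed chunk with the current settings;
-- the chunk name below is only an example.
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk', recompress => true);
```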

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size (usage sketch after this list)
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk
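
For #6463, the update script below adds `hypertable_approximate_size(hypertable REGCLASS)` and `hypertable_approximate_detailed_size(relation REGCLASS)`. A usage sketch with a hypothetical hypertable name:

```sql
-- Approximate total bytes for a hypertable (table + indexes),
-- and the per-category breakdown.
SELECT hypertable_approximate_size('metrics');
SELECT * FROM hypertable_approximate_detailed_size('metrics');
```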

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of
CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries

Commit 505b427a04 (parent e2d55cd9e8), authored by Ante Kresic on 2024-02-07 11:58:06 +01:00 and committed by Ante Kresic.
32 changed files with 529 additions and 491 deletions.


@ -1 +0,0 @@
Fixes: #6491 Enable now() plantime constification with BETWEEN


@ -1,3 +0,0 @@
Implements: #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
Fixes: #5688


@ -1,2 +0,0 @@
Implements: #6325 Add plan-time chunk exclusion for real-time CAggs
Thanks: @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries


@ -1,2 +0,0 @@
Fixes: #6498 Suboptimal query plans when using time_bucket with query parameters
Thanks: @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters


@ -1,2 +0,0 @@
Fixes: #6424, #6536 Inefficient join plans on compressed hypertables.
Thanks: @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.


@ -1 +0,0 @@
Fixes: #6494 Fix create_hypertable referenced by fk succeeds


@ -1,2 +0,0 @@
Fixes: #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
Thanks: @JerkoNikolic for reporting the issue with gapfill and DST


@ -1 +0,0 @@
Fixes: #6509 Make extension state available through function


@ -1 +0,0 @@
Fixes: #6512 Log extension state changes


@ -1,2 +0,0 @@
Fixes: #6522 Disallow triggers on CAggs
Thanks: @HollowMan6 for reporting this issue


@ -1 +0,0 @@
Fixes: #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code


@ -1 +0,0 @@
Fixes: #6575 Fix compressed chunk not found during upserts


@ -1 +0,0 @@
Fixes: #6610 Ensure qsort comparison function is transitive


@ -1,3 +0,0 @@
Implements: #6360 Remove support for creating Continuous Aggregates with old format
Thanks: @pdipesh02 for working on removing the old Continuous Aggregate format


@ -1 +0,0 @@
Implements: #6386 Add functions for determining compression defaults


@ -1 +0,0 @@
Implements: #6410 Remove multinode public API


@ -1 +0,0 @@
Implements: #6440 Allow SQLValueFunction pushdown into compressed scan


@ -1 +0,0 @@
Implements: #6463 Support approximate hypertable size


@ -1 +0,0 @@
Implements: #6513 Make compression settings per chunk


@ -1 +0,0 @@
Implements: #6529 Remove reindex_relation from recompression


@ -1 +0,0 @@
Implements: #6545 Remove restrictions for changing compression settings


@ -1 +0,0 @@
Fixes: #6523 Reduce locking level on compressed chunk index during segmentwise recompression


@ -1 +0,0 @@
Implements: #6566 Limit tuple decompression during DML operations


@ -1 +0,0 @@
Implements: #6579 Change compress_chunk and decompress_chunk to idempotent version by default


@ -1 +0,0 @@
Fixes: #6592 Fix recompression policy ignoring partially compressed chunks


@ -1 +0,0 @@
Implements: #6608 Add LWLock for OSM usage in loader


@ -1,2 +0,0 @@
Implements: #6609 Deprecate recompress_chunk
Implements: #6609 Add optional recompress argument to compress_chunk


@ -4,6 +4,80 @@
`psql` with the `-X` flag to prevent any `.psqlrc` commands from
accidentally triggering the load of a previous DB version.**
## 2.14.0 (2024-02-08)
## 2.13.1 (2024-01-09)
This release contains bug fixes since the 2.13.0 release.


@ -42,7 +42,8 @@ set(MOD_FILES
updates/2.12.0--2.12.1.sql
updates/2.12.1--2.12.2.sql
updates/2.12.2--2.13.0.sql
updates/2.13.0--2.13.1.sql)
updates/2.13.0--2.13.1.sql
updates/2.13.1--2.14.0.sql)
# The downgrade file to generate a downgrade script for the current version, as
# specified in version.config


@ -0,0 +1,451 @@
-- ERROR if trying to update the extension while multinode is present
DO $$
DECLARE
data_nodes TEXT;
dist_hypertables TEXT;
BEGIN
SELECT string_agg(format('%I.%I', schema_name, table_name), ', ')
INTO dist_hypertables
FROM _timescaledb_catalog.hypertable
WHERE replication_factor > 0;
IF dist_hypertables IS NOT NULL THEN
RAISE USING
ERRCODE = 'feature_not_supported',
MESSAGE = 'cannot upgrade because multi-node has been removed in 2.14.0',
DETAIL = 'The following distributed hypertables should be migrated to regular: '||dist_hypertables;
END IF;
SELECT string_agg(format('%I', srv.srvname), ', ')
INTO data_nodes
FROM pg_foreign_server srv
JOIN pg_foreign_data_wrapper fdw ON srv.srvfdw = fdw.oid AND fdw.fdwname = 'timescaledb_fdw';
IF data_nodes IS NOT NULL THEN
RAISE USING
ERRCODE = 'feature_not_supported',
MESSAGE = 'cannot upgrade because multi-node has been removed in 2.14.0',
DETAIL = 'The following data nodes should be removed: '||data_nodes;
END IF;
IF EXISTS(SELECT FROM _timescaledb_catalog.metadata WHERE key = 'dist_uuid') THEN
RAISE USING
ERRCODE = 'feature_not_supported',
MESSAGE = 'cannot upgrade because multi-node has been removed in 2.14.0',
DETAIL = 'This node appears to be part of a multi-node installation';
END IF;
END $$;
DROP FUNCTION IF EXISTS _timescaledb_functions.ping_data_node;
DROP FUNCTION IF EXISTS _timescaledb_internal.ping_data_node;
DROP FUNCTION IF EXISTS _timescaledb_functions.remote_txn_heal_data_node;
DROP FUNCTION IF EXISTS _timescaledb_internal.remote_txn_heal_data_node;
DROP FUNCTION IF EXISTS _timescaledb_functions.set_dist_id;
DROP FUNCTION IF EXISTS _timescaledb_internal.set_dist_id;
DROP FUNCTION IF EXISTS _timescaledb_functions.set_peer_dist_id;
DROP FUNCTION IF EXISTS _timescaledb_internal.set_peer_dist_id;
DROP FUNCTION IF EXISTS _timescaledb_functions.validate_as_data_node;
DROP FUNCTION IF EXISTS _timescaledb_internal.validate_as_data_node;
DROP FUNCTION IF EXISTS _timescaledb_functions.show_connection_cache;
DROP FUNCTION IF EXISTS _timescaledb_internal.show_connection_cache;
DROP FUNCTION IF EXISTS @extschema@.create_hypertable(relation REGCLASS, time_column_name NAME, partitioning_column NAME, number_partitions INTEGER, associated_schema_name NAME, associated_table_prefix NAME, chunk_time_interval ANYELEMENT, create_default_indexes BOOLEAN, if_not_exists BOOLEAN, partitioning_func REGPROC, migrate_data BOOLEAN, chunk_target_size TEXT, chunk_sizing_func REGPROC, time_partitioning_func REGPROC, replication_factor INTEGER, data_nodes NAME[], distributed BOOLEAN);
CREATE FUNCTION @extschema@.create_hypertable(
relation REGCLASS,
time_column_name NAME,
partitioning_column NAME = NULL,
number_partitions INTEGER = NULL,
associated_schema_name NAME = NULL,
associated_table_prefix NAME = NULL,
chunk_time_interval ANYELEMENT = NULL::bigint,
create_default_indexes BOOLEAN = TRUE,
if_not_exists BOOLEAN = FALSE,
partitioning_func REGPROC = NULL,
migrate_data BOOLEAN = FALSE,
chunk_target_size TEXT = NULL,
chunk_sizing_func REGPROC = '_timescaledb_functions.calculate_chunk_interval'::regproc,
time_partitioning_func REGPROC = NULL
) RETURNS TABLE(hypertable_id INT, schema_name NAME, table_name NAME, created BOOL) AS '@MODULE_PATHNAME@', 'ts_hypertable_create' LANGUAGE C VOLATILE;
DROP FUNCTION IF EXISTS @extschema@.create_distributed_hypertable;
DROP FUNCTION IF EXISTS @extschema@.add_data_node;
DROP FUNCTION IF EXISTS @extschema@.delete_data_node;
DROP FUNCTION IF EXISTS @extschema@.attach_data_node;
DROP FUNCTION IF EXISTS @extschema@.detach_data_node;
DROP FUNCTION IF EXISTS @extschema@.alter_data_node;
DROP PROCEDURE IF EXISTS @extschema@.distributed_exec;
DROP FUNCTION IF EXISTS @extschema@.create_distributed_restore_point;
DROP FUNCTION IF EXISTS @extschema@.set_replication_factor;
CREATE TABLE _timescaledb_catalog.compression_settings (
relid regclass NOT NULL,
segmentby text[],
orderby text[],
orderby_desc bool[],
orderby_nullsfirst bool[],
CONSTRAINT compression_settings_pkey PRIMARY KEY (relid),
CONSTRAINT compression_settings_check_segmentby CHECK (array_ndims(segmentby) = 1),
CONSTRAINT compression_settings_check_orderby_null CHECK ( (orderby IS NULL AND orderby_desc IS NULL AND orderby_nullsfirst IS NULL) OR (orderby IS NOT NULL AND orderby_desc IS NOT NULL AND orderby_nullsfirst IS NOT NULL) ),
CONSTRAINT compression_settings_check_orderby_cardinality CHECK (array_ndims(orderby) = 1 AND array_ndims(orderby_desc) = 1 AND array_ndims(orderby_nullsfirst) = 1 AND cardinality(orderby) = cardinality(orderby_desc) AND cardinality(orderby) = cardinality(orderby_nullsfirst))
);
INSERT INTO _timescaledb_catalog.compression_settings(relid, segmentby, orderby, orderby_desc, orderby_nullsfirst)
SELECT
format('%I.%I', ht.schema_name, ht.table_name)::regclass,
array_agg(attname ORDER BY segmentby_column_index) FILTER(WHERE segmentby_column_index >= 1) AS compress_segmentby,
array_agg(attname ORDER BY orderby_column_index) FILTER(WHERE orderby_column_index >= 1) AS compress_orderby,
array_agg(NOT orderby_asc ORDER BY orderby_column_index) FILTER(WHERE orderby_column_index >= 1) AS compress_orderby_desc,
array_agg(orderby_nullsfirst ORDER BY orderby_column_index) FILTER(WHERE orderby_column_index >= 1) AS compress_orderby_nullsfirst
FROM _timescaledb_catalog.hypertable_compression hc
INNER JOIN _timescaledb_catalog.hypertable ht ON ht.id = hc.hypertable_id
GROUP BY hypertable_id, ht.schema_name, ht.table_name;
GRANT SELECT ON _timescaledb_catalog.compression_settings TO PUBLIC;
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.compression_settings', '');
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.hypertable_compression;
DROP VIEW IF EXISTS timescaledb_information.compression_settings;
DROP TABLE _timescaledb_catalog.hypertable_compression;
DROP FOREIGN DATA WRAPPER IF EXISTS timescaledb_fdw;
DROP FUNCTION IF EXISTS @extschema@.timescaledb_fdw_handler();
DROP FUNCTION IF EXISTS @extschema@.timescaledb_fdw_validator(text[], oid);
DROP FUNCTION IF EXISTS _timescaledb_functions.create_chunk_replica_table;
DROP FUNCTION IF EXISTS _timescaledb_functions.chunk_drop_replica;
DROP PROCEDURE IF EXISTS _timescaledb_functions.wait_subscription_sync;
DROP FUNCTION IF EXISTS _timescaledb_functions.health;
DROP FUNCTION IF EXISTS _timescaledb_functions.drop_stale_chunks;
DROP FUNCTION IF EXISTS _timescaledb_internal.create_chunk_replica_table;
DROP FUNCTION IF EXISTS _timescaledb_internal.chunk_drop_replica;
DROP PROCEDURE IF EXISTS _timescaledb_internal.wait_subscription_sync;
DROP FUNCTION IF EXISTS _timescaledb_internal.health;
DROP FUNCTION IF EXISTS _timescaledb_internal.drop_stale_chunks;
ALTER TABLE _timescaledb_catalog.remote_txn DROP CONSTRAINT remote_txn_remote_transaction_id_check;
DROP TYPE IF EXISTS @extschema@.rxid CASCADE;
DROP FUNCTION IF EXISTS _timescaledb_functions.rxid_in;
DROP FUNCTION IF EXISTS _timescaledb_functions.rxid_out;
DROP FUNCTION IF EXISTS _timescaledb_functions.data_node_hypertable_info;
DROP FUNCTION IF EXISTS _timescaledb_functions.data_node_chunk_info;
DROP FUNCTION IF EXISTS _timescaledb_functions.data_node_compressed_chunk_stats;
DROP FUNCTION IF EXISTS _timescaledb_functions.data_node_index_size;
DROP FUNCTION IF EXISTS _timescaledb_internal.data_node_hypertable_info;
DROP FUNCTION IF EXISTS _timescaledb_internal.data_node_chunk_info;
DROP FUNCTION IF EXISTS _timescaledb_internal.data_node_compressed_chunk_stats;
DROP FUNCTION IF EXISTS _timescaledb_internal.data_node_index_size;
DROP FUNCTION IF EXISTS timescaledb_experimental.block_new_chunks;
DROP FUNCTION IF EXISTS timescaledb_experimental.allow_new_chunks;
DROP FUNCTION IF EXISTS timescaledb_experimental.subscription_exec;
DROP PROCEDURE IF EXISTS timescaledb_experimental.move_chunk;
DROP PROCEDURE IF EXISTS timescaledb_experimental.copy_chunk;
DROP PROCEDURE IF EXISTS timescaledb_experimental.cleanup_copy_chunk_operation;
DROP FUNCTION IF EXISTS _timescaledb_functions.set_chunk_default_data_node;
DROP FUNCTION IF EXISTS _timescaledb_internal.set_chunk_default_data_node;
DROP FUNCTION IF EXISTS _timescaledb_functions.drop_dist_ht_invalidation_trigger;
DROP FUNCTION IF EXISTS _timescaledb_internal.drop_dist_ht_invalidation_trigger;
-- remove multinode catalog tables
DROP VIEW IF EXISTS timescaledb_information.chunks;
DROP VIEW IF EXISTS timescaledb_information.data_nodes;
DROP VIEW IF EXISTS timescaledb_information.hypertables;
DROP VIEW IF EXISTS timescaledb_experimental.chunk_replication_status;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.remote_txn;
DROP TABLE _timescaledb_catalog.remote_txn;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.hypertable_data_node;
DROP TABLE _timescaledb_catalog.hypertable_data_node;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.chunk_data_node;
DROP TABLE _timescaledb_catalog.chunk_data_node;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.chunk_copy_operation;
DROP TABLE _timescaledb_catalog.chunk_copy_operation;
ALTER EXTENSION timescaledb DROP SEQUENCE _timescaledb_catalog.chunk_copy_operation_id_seq;
DROP SEQUENCE _timescaledb_catalog.chunk_copy_operation_id_seq;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.dimension_partition;
DROP TABLE _timescaledb_catalog.dimension_partition;
DROP FUNCTION IF EXISTS _timescaledb_functions.hypertable_remote_size;
DROP FUNCTION IF EXISTS _timescaledb_internal.hypertable_remote_size;
DROP FUNCTION IF EXISTS _timescaledb_functions.chunks_remote_size;
DROP FUNCTION IF EXISTS _timescaledb_internal.chunks_remote_size;
DROP FUNCTION IF EXISTS _timescaledb_functions.indexes_remote_size;
DROP FUNCTION IF EXISTS _timescaledb_internal.indexes_remote_size;
DROP FUNCTION IF EXISTS _timescaledb_functions.compressed_chunk_remote_stats;
DROP FUNCTION IF EXISTS _timescaledb_internal.compressed_chunk_remote_stats;
-- rebuild _timescaledb_catalog.hypertable
ALTER TABLE _timescaledb_config.bgw_job
DROP CONSTRAINT bgw_job_hypertable_id_fkey;
ALTER TABLE _timescaledb_catalog.chunk
DROP CONSTRAINT chunk_hypertable_id_fkey;
ALTER TABLE _timescaledb_catalog.chunk_index
DROP CONSTRAINT chunk_index_hypertable_id_fkey;
ALTER TABLE _timescaledb_catalog.continuous_agg
DROP CONSTRAINT continuous_agg_mat_hypertable_id_fkey,
DROP CONSTRAINT continuous_agg_raw_hypertable_id_fkey;
ALTER TABLE _timescaledb_catalog.continuous_aggs_bucket_function
DROP CONSTRAINT continuous_aggs_bucket_function_mat_hypertable_id_fkey;
ALTER TABLE _timescaledb_catalog.continuous_aggs_invalidation_threshold
DROP CONSTRAINT continuous_aggs_invalidation_threshold_hypertable_id_fkey;
ALTER TABLE _timescaledb_catalog.dimension
DROP CONSTRAINT dimension_hypertable_id_fkey;
ALTER TABLE _timescaledb_catalog.hypertable
DROP CONSTRAINT hypertable_compressed_hypertable_id_fkey;
ALTER TABLE _timescaledb_catalog.tablespace
DROP CONSTRAINT tablespace_hypertable_id_fkey;
DROP VIEW IF EXISTS timescaledb_information.hypertables;
DROP VIEW IF EXISTS timescaledb_information.job_stats;
DROP VIEW IF EXISTS timescaledb_information.jobs;
DROP VIEW IF EXISTS timescaledb_information.continuous_aggregates;
DROP VIEW IF EXISTS timescaledb_information.chunks;
DROP VIEW IF EXISTS timescaledb_information.dimensions;
DROP VIEW IF EXISTS timescaledb_information.compression_settings;
DROP VIEW IF EXISTS _timescaledb_internal.hypertable_chunk_local_size;
DROP VIEW IF EXISTS _timescaledb_internal.compressed_chunk_stats;
DROP VIEW IF EXISTS timescaledb_experimental.chunk_replication_status;
DROP VIEW IF EXISTS timescaledb_experimental.policies;
-- recreate table
CREATE TABLE _timescaledb_catalog.hypertable_tmp AS SELECT * FROM _timescaledb_catalog.hypertable;
CREATE TABLE _timescaledb_catalog.tmp_hypertable_seq_value AS SELECT last_value, is_called FROM _timescaledb_catalog.hypertable_id_seq;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.hypertable;
ALTER EXTENSION timescaledb DROP SEQUENCE _timescaledb_catalog.hypertable_id_seq;
SET timescaledb.restoring = on; -- must disable the hooks otherwise we can't do anything without the table _timescaledb_catalog.hypertable
DROP TABLE _timescaledb_catalog.hypertable;
CREATE SEQUENCE _timescaledb_catalog.hypertable_id_seq MINVALUE 1;
SELECT setval('_timescaledb_catalog.hypertable_id_seq', last_value, is_called) FROM _timescaledb_catalog.tmp_hypertable_seq_value;
DROP TABLE _timescaledb_catalog.tmp_hypertable_seq_value;
CREATE TABLE _timescaledb_catalog.hypertable (
id INTEGER PRIMARY KEY NOT NULL DEFAULT nextval('_timescaledb_catalog.hypertable_id_seq'),
schema_name name NOT NULL,
table_name name NOT NULL,
associated_schema_name name NOT NULL,
associated_table_prefix name NOT NULL,
num_dimensions smallint NOT NULL,
chunk_sizing_func_schema name NOT NULL,
chunk_sizing_func_name name NOT NULL,
chunk_target_size bigint NOT NULL, -- size in bytes
compression_state smallint NOT NULL DEFAULT 0,
compressed_hypertable_id integer,
status integer NOT NULL DEFAULT 0
);
SET timescaledb.restoring = off;
INSERT INTO _timescaledb_catalog.hypertable (
id,
schema_name,
table_name,
associated_schema_name,
associated_table_prefix,
num_dimensions,
chunk_sizing_func_schema,
chunk_sizing_func_name,
chunk_target_size,
compression_state,
compressed_hypertable_id
)
SELECT
id,
schema_name,
table_name,
associated_schema_name,
associated_table_prefix,
num_dimensions,
chunk_sizing_func_schema,
chunk_sizing_func_name,
chunk_target_size,
compression_state,
compressed_hypertable_id
FROM
_timescaledb_catalog.hypertable_tmp
ORDER BY id;
UPDATE _timescaledb_catalog.hypertable h
SET status = 3
WHERE EXISTS (
SELECT FROM _timescaledb_catalog.chunk c WHERE c.osm_chunk AND c.hypertable_id = h.id
);
ALTER SEQUENCE _timescaledb_catalog.hypertable_id_seq OWNED BY _timescaledb_catalog.hypertable.id;
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.hypertable_id_seq', '');
GRANT SELECT ON _timescaledb_catalog.hypertable TO PUBLIC;
GRANT SELECT ON _timescaledb_catalog.hypertable_id_seq TO PUBLIC;
DROP TABLE _timescaledb_catalog.hypertable_tmp;
-- now add any constraints
ALTER TABLE _timescaledb_catalog.hypertable
ADD CONSTRAINT hypertable_associated_schema_name_associated_table_prefix_key UNIQUE (associated_schema_name, associated_table_prefix),
ADD CONSTRAINT hypertable_table_name_schema_name_key UNIQUE (table_name, schema_name),
ADD CONSTRAINT hypertable_schema_name_check CHECK (schema_name != '_timescaledb_catalog'),
ADD CONSTRAINT hypertable_dim_compress_check CHECK (num_dimensions > 0 OR compression_state = 2),
ADD CONSTRAINT hypertable_chunk_target_size_check CHECK (chunk_target_size >= 0),
ADD CONSTRAINT hypertable_compress_check CHECK ( (compression_state = 0 OR compression_state = 1 ) OR (compression_state = 2 AND compressed_hypertable_id IS NULL)),
ADD CONSTRAINT hypertable_compressed_hypertable_id_fkey FOREIGN KEY (compressed_hypertable_id) REFERENCES _timescaledb_catalog.hypertable (id);
GRANT SELECT ON TABLE _timescaledb_catalog.hypertable TO PUBLIC;
-- 3. reestablish constraints on other tables
ALTER TABLE _timescaledb_config.bgw_job
ADD CONSTRAINT bgw_job_hypertable_id_fkey FOREIGN KEY (hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE;
ALTER TABLE _timescaledb_catalog.chunk
ADD CONSTRAINT chunk_hypertable_id_fkey FOREIGN KEY (hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id);
ALTER TABLE _timescaledb_catalog.chunk_index
ADD CONSTRAINT chunk_index_hypertable_id_fkey FOREIGN KEY (hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE;
ALTER TABLE _timescaledb_catalog.continuous_agg
ADD CONSTRAINT continuous_agg_mat_hypertable_id_fkey FOREIGN KEY (mat_hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE,
ADD CONSTRAINT continuous_agg_raw_hypertable_id_fkey FOREIGN KEY (raw_hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE;
ALTER TABLE _timescaledb_catalog.continuous_aggs_bucket_function
ADD CONSTRAINT continuous_aggs_bucket_function_mat_hypertable_id_fkey FOREIGN KEY (mat_hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE;
ALTER TABLE _timescaledb_catalog.continuous_aggs_invalidation_threshold
ADD CONSTRAINT continuous_aggs_invalidation_threshold_hypertable_id_fkey FOREIGN KEY (hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE;
ALTER TABLE _timescaledb_catalog.dimension
ADD CONSTRAINT dimension_hypertable_id_fkey FOREIGN KEY (hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE;
ALTER TABLE _timescaledb_catalog.tablespace
ADD CONSTRAINT tablespace_hypertable_id_fkey FOREIGN KEY (hypertable_id) REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE;
CREATE SCHEMA _timescaledb_debug;
-- Migrate existing compressed hypertables to new internal format
DO $$
DECLARE
chunk regclass;
hypertable regclass;
ht_id integer;
index regclass;
column_name name;
cmd text;
BEGIN
SET timescaledb.restoring TO ON;
-- Detach compressed chunks from their parent hypertables
FOR chunk, hypertable, ht_id IN
SELECT
format('%I.%I',ch.schema_name,ch.table_name)::regclass chunk,
format('%I.%I',ht.schema_name,ht.table_name)::regclass hypertable,
ht.id
FROM _timescaledb_catalog.chunk ch
INNER JOIN _timescaledb_catalog.hypertable ht_uncomp
ON ch.hypertable_id = ht_uncomp.compressed_hypertable_id
INNER JOIN _timescaledb_catalog.hypertable ht
ON ht.id = ht_uncomp.compressed_hypertable_id
LOOP
cmd := format('ALTER TABLE %s NO INHERIT %s', chunk, hypertable);
EXECUTE cmd;
-- remove references to indexes from the compressed hypertable
DELETE FROM _timescaledb_catalog.chunk_index WHERE hypertable_id = ht_id;
END LOOP;
FOR hypertable IN
SELECT
format('%I.%I',ht.schema_name,ht.table_name)::regclass hypertable
FROM _timescaledb_catalog.hypertable ht_uncomp
INNER JOIN _timescaledb_catalog.hypertable ht
ON ht.id = ht_uncomp.compressed_hypertable_id
LOOP
-- remove indexes from the compressed hypertable (but not chunks)
FOR index IN
SELECT indexrelid::regclass FROM pg_index WHERE indrelid = hypertable
LOOP
cmd := format('DROP INDEX %s', index);
EXECUTE cmd;
END LOOP;
-- remove columns from the compressed hypertable (but not chunks)
FOR column_name IN
SELECT attname FROM pg_attribute WHERE attrelid = hypertable AND attnum > 0
LOOP
cmd := format('ALTER TABLE %s DROP COLUMN %I', hypertable, column_name);
EXECUTE cmd;
END LOOP;
END LOOP;
SET timescaledb.restoring TO OFF;
END $$;
DROP FUNCTION IF EXISTS _timescaledb_internal.hypertable_constraint_add_table_fk_constraint;
DROP FUNCTION IF EXISTS _timescaledb_functions.hypertable_constraint_add_table_fk_constraint;
-- only define stub here, actual code will be filled in at end of update script
CREATE FUNCTION _timescaledb_functions.constraint_clone(constraint_oid OID,target_oid REGCLASS) RETURNS VOID LANGUAGE PLPGSQL AS $$BEGIN END$$ SET search_path TO pg_catalog, pg_temp;
DROP FUNCTION IF EXISTS _timescaledb_functions.chunks_in;
DROP FUNCTION IF EXISTS _timescaledb_internal.chunks_in;
CREATE FUNCTION _timescaledb_functions.metadata_insert_trigger() RETURNS TRIGGER LANGUAGE PLPGSQL
AS $$
BEGIN
IF EXISTS (SELECT FROM _timescaledb_catalog.metadata WHERE key = NEW.key) THEN
UPDATE _timescaledb_catalog.metadata SET value = NEW.value WHERE key = NEW.key;
RETURN NULL;
END IF;
RETURN NEW;
END
$$ SET search_path TO pg_catalog, pg_temp;
CREATE TRIGGER metadata_insert_trigger BEFORE INSERT ON _timescaledb_catalog.metadata FOR EACH ROW EXECUTE PROCEDURE _timescaledb_functions.metadata_insert_trigger();
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.metadata', $$ WHERE key <> 'uuid' $$);
-- Remove unwanted entries from extconfig and extcondition in pg_extension
-- We use ALTER EXTENSION DROP TABLE to remove these entries.
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_cache.cache_inval_hypertable;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_cache.cache_inval_extension;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_cache.cache_inval_bgw_job;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_internal.job_errors;
-- Associate the above tables back to keep the dependencies safe
ALTER EXTENSION timescaledb ADD TABLE _timescaledb_cache.cache_inval_hypertable;
ALTER EXTENSION timescaledb ADD TABLE _timescaledb_cache.cache_inval_extension;
ALTER EXTENSION timescaledb ADD TABLE _timescaledb_cache.cache_inval_bgw_job;
ALTER EXTENSION timescaledb ADD TABLE _timescaledb_internal.job_errors;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.hypertable;
ALTER EXTENSION timescaledb ADD TABLE _timescaledb_catalog.hypertable;
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.hypertable', 'WHERE id >= 1');
CREATE FUNCTION _timescaledb_functions.relation_approximate_size(relation REGCLASS)
RETURNS TABLE (total_size BIGINT, heap_size BIGINT, index_size BIGINT, toast_size BIGINT)
AS '@MODULE_PATHNAME@', 'ts_relation_approximate_size' LANGUAGE C STRICT VOLATILE;
CREATE FUNCTION @extschema@.hypertable_approximate_detailed_size(relation REGCLASS)
RETURNS TABLE (table_bytes BIGINT, index_bytes BIGINT, toast_bytes BIGINT, total_bytes BIGINT)
AS '@MODULE_PATHNAME@', 'ts_hypertable_approximate_size' LANGUAGE C VOLATILE;
--- returns approximate total-bytes for a hypertable (includes table + index)
CREATE FUNCTION @extschema@.hypertable_approximate_size(
hypertable REGCLASS)
RETURNS BIGINT
LANGUAGE SQL VOLATILE STRICT AS
$BODY$
SELECT sum(total_bytes)::bigint
FROM @extschema@.hypertable_approximate_detailed_size(hypertable);
$BODY$ SET search_path TO pg_catalog, pg_temp;
DROP FUNCTION IF EXISTS @extschema@.compress_chunk;
CREATE FUNCTION @extschema@.compress_chunk(uncompressed_chunk REGCLASS, if_not_compressed BOOLEAN = true, recompress BOOLEAN = false) RETURNS REGCLASS AS '' LANGUAGE SQL SET search_path TO pg_catalog, pg_temp;


@ -1,3 +1,3 @@
version = 2.14.0-dev
update_from_version = 2.13.1
version = 2.15.0-dev
update_from_version = 2.14.0
downgrade_to_version = 2.13.1