Release 2.0.0-rc1

This release adds major new features and bugfixes since the 1.7.4 release.
We deem it moderate priority for upgrading.

This release adds the long-awaited support for distributed hypertables to
TimescaleDB. With 2.0, users can create distributed hypertables across
multiple instances of TimescaleDB, configured so that one instance serves
as an access node and multiple others as data nodes. All queries for a
distributed hypertable are issued to the access node, but inserted data
and queries are pushed down across data nodes for greater scale and
performance.
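
For illustration, a minimal sketch of the multi-node workflow follows; the `add_data_node` and `create_distributed_hypertable` calls, host names, and table used here are assumptions for this example rather than a complete reference:

```sql
-- On the access node: register the data nodes (host names are placeholders).
SELECT add_data_node('dn1', host => 'dn1.example.com', database => 'metrics');
SELECT add_data_node('dn2', host => 'dn2.example.com', database => 'metrics');

-- Create a hypertable whose chunks are spread across the data nodes.
CREATE TABLE conditions (time timestamptz NOT NULL, device int, temp float);
SELECT create_distributed_hypertable('conditions', 'time', 'device');

-- Inserts and queries go to the access node and are pushed down to the data nodes.
INSERT INTO conditions VALUES (now(), 1, 20.5);
SELECT device, avg(temp) FROM conditions GROUP BY device;
```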

This release also adds support for user-defined actions, allowing users to define
their own actions that are run by the TimescaleDB automation framework.
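
As a sketch of what this enables, a user-defined action is a procedure registered with the job scheduler; the procedure body, schedule, and the `add_job` call below are illustrative rather than a definitive reference:

```sql
-- A custom procedure; the scheduler passes the job id and its JSONB config.
CREATE OR REPLACE PROCEDURE my_custom_action(job_id int, config jsonb)
LANGUAGE plpgsql AS $$
BEGIN
  RAISE NOTICE 'Executing job % with config %', job_id, config;
END
$$;

-- Register it with the automation framework to run once per hour.
SELECT add_job('my_custom_action', '1h', config => '{"note": "example"}');
```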

In addition to these major new features, the 2.0 branch introduces _breaking_ changes
to APIs and existing features, such as continuous aggregates. These changes are not
backward compatible and might require updates to clients and/or scripts that rely on
the previous APIs. Please review our updated documentation and test thoroughly to
ensure compatibility with your existing applications.

The most notable breaking API changes are:
- Redefined functions for policies
- A continuous aggregate is now created with `CREATE MATERIALIZED VIEW`
  instead of `CREATE VIEW`, and automated refreshing requires adding a policy
  via `add_continuous_aggregate_policy` (see the sketch after this list)
- Redesign of informational views, including new (and more general) views for
  information about policies and user-defined actions
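
For example, under the new API a continuous aggregate and its refresh policy could be defined roughly as follows (the table, columns, and offsets are illustrative):

```sql
-- New-style continuous aggregate definition.
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device,
       avg(temp) AS avg_temp
FROM conditions
GROUP BY bucket, device;

-- Refreshing is no longer automatic; attach a policy explicitly.
SELECT add_continuous_aggregate_policy('conditions_hourly',
  start_offset      => INTERVAL '1 month',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```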

This release candidate is upgradable: if you are on a previous release (e.g., 1.7.4),
you can upgrade to the release candidate now and expect to be able to upgrade to the
final 2.0 release later. However, please carefully consider your compatibility
requirements _before_ upgrading.
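
The upgrade itself uses the standard PostgreSQL extension update path; a minimal sketch, assuming the extension is already installed in the target database (run it from `psql` started with `-X` so that `.psqlrc` commands do not load a previous version first):

```sql
-- Run as the first command in a fresh session on each database using TimescaleDB.
ALTER EXTENSION timescaledb UPDATE TO '2.0.0-rc1';
-- Verify the installed version afterwards.
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';
```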

**Major Features**
* #1923 Add support for distributed hypertables
* #2006 Add support for user-defined actions
* #2435 Move enterprise features to community
* #2437 Update Timescale License

**Minor Features**
* #2011 Constify TIMESTAMPTZ OP INTERVAL in constraints
* #2105 Support moving compressed chunks

**Bugfixes**
* #1843 Improve handling of "dropped" chunks
* #1886 Change ChunkAppend leader to use worker subplan
* #2116 Propagate privileges from hypertables to chunks
* #2263 Fix timestamp overflow in time_bucket optimization
* #2270 Fix handling of non-reference counted TupleDescs in gapfill
* #2325 Fix rename constraint/rename index
* #2370 Fix detection of hypertables in subqueries
* #2376 Fix caggs width expression handling on int based hypertables
* #2416 Check insert privileges to create chunk
* #2428 Allow owner change of continuous aggregate
* #2436 Propagate grants in continuous aggregates
Sven Klemm 2020-10-01 16:41:40 +02:00 committed by Sven Klemm
parent 3f5872ec61
commit 46f7914e19
5 changed files with 484 additions and 425 deletions


@@ -4,6 +4,64 @@
`psql` with the `-X` flag to prevent any `.psqlrc` commands from
accidentally triggering the load of a previous DB version.**
## 2.0.0-rc1 (2020-10-06)
## 2.0.0-beta6 (2020-09-14)
**For beta releases**, upgrading from an earlier version of the


@@ -96,6 +96,7 @@ set(MOD_FILES
updates/1.7.1--1.7.2.sql
updates/1.7.2--1.7.3.sql
updates/1.7.3--1.7.4.sql
updates/1.7.4--2.0.0-rc1.sql
)
set(MODULE_PATHNAME "$libdir/timescaledb-${PROJECT_VERSION_MOD}")


@@ -0,0 +1,423 @@
--Drop functions in size_utils and dependencies, ordering matters.
-- Do not reorder
DROP VIEW IF EXISTS timescaledb_information.hypertable;
DROP FUNCTION IF EXISTS hypertable_relation_size_pretty;
DROP FUNCTION IF EXISTS hypertable_relation_size;
DROP FUNCTION IF EXISTS chunk_relation_size_pretty;
DROP FUNCTION IF EXISTS chunk_relation_size;
DROP FUNCTION IF EXISTS indexes_relation_size_pretty;
DROP FUNCTION IF EXISTS indexes_relation_size;
DROP FUNCTION IF EXISTS _timescaledb_internal.partitioning_column_to_pretty;
DROP FUNCTION IF EXISTS _timescaledb_internal.range_value_to_pretty;
-- end of do not reorder
DROP FUNCTION IF EXISTS hypertable_approximate_row_count;
DROP VIEW IF EXISTS timescaledb_information.compressed_chunk_stats;
DROP VIEW IF EXISTS timescaledb_information.compressed_hypertable_stats;
DROP VIEW IF EXISTS timescaledb_information.license;
-- Add new function definitions, columns and tables for distributed hypertables
DROP FUNCTION IF EXISTS create_hypertable(regclass,name,name,integer,name,name,anyelement,boolean,boolean,regproc,boolean,text,regproc,regproc);
DROP FUNCTION IF EXISTS add_drop_chunks_policy;
DROP FUNCTION IF EXISTS remove_drop_chunks_policy;
DROP FUNCTION IF EXISTS drop_chunks;
DROP FUNCTION IF EXISTS show_chunks;
DROP FUNCTION IF EXISTS add_compress_chunks_policy;
DROP FUNCTION IF EXISTS remove_compress_chunks_policy;
DROP FUNCTION IF EXISTS alter_job_schedule;
DROP FUNCTION IF EXISTS set_chunk_time_interval;
DROP FUNCTION IF EXISTS set_number_partitions;
DROP FUNCTION IF EXISTS add_dimension;
DROP FUNCTION IF EXISTS _timescaledb_internal.enterprise_enabled;
DROP FUNCTION IF EXISTS _timescaledb_internal.current_license_key;
DROP FUNCTION IF EXISTS _timescaledb_internal.license_expiration_time;
DROP FUNCTION IF EXISTS _timescaledb_internal.print_license_expiration_info;
DROP FUNCTION IF EXISTS _timescaledb_internal.license_edition;
DROP FUNCTION IF EXISTS _timescaledb_internal.current_db_set_license_key;
DROP VIEW IF EXISTS timescaledb_information.policy_stats;
DROP VIEW IF EXISTS timescaledb_information.drop_chunks_policies;
DROP VIEW IF EXISTS timescaledb_information.reorder_policies;
ALTER TABLE _timescaledb_catalog.hypertable ADD COLUMN replication_factor SMALLINT NULL CHECK (replication_factor > 0);
-- Table for hypertable -> node mappings
CREATE TABLE IF NOT EXISTS _timescaledb_catalog.hypertable_data_node (
hypertable_id INTEGER NOT NULL REFERENCES _timescaledb_catalog.hypertable(id),
node_hypertable_id INTEGER NULL,
node_name NAME NOT NULL,
block_chunks BOOLEAN NOT NULL,
UNIQUE(node_hypertable_id, node_name),
UNIQUE(hypertable_id, node_name)
);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.hypertable_data_node', '');
GRANT SELECT ON _timescaledb_catalog.hypertable_data_node TO PUBLIC;
-- Table for chunk -> nodes mappings
CREATE TABLE IF NOT EXISTS _timescaledb_catalog.chunk_data_node (
chunk_id INTEGER NOT NULL REFERENCES _timescaledb_catalog.chunk(id),
node_chunk_id INTEGER NOT NULL,
node_name NAME NOT NULL,
UNIQUE(node_chunk_id, node_name),
UNIQUE(chunk_id, node_name)
);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.chunk_data_node', '');
GRANT SELECT ON _timescaledb_catalog.chunk_data_node TO PUBLIC;
--placeholder to allow creation of functions below
CREATE TYPE rxid;
CREATE OR REPLACE FUNCTION _timescaledb_internal.rxid_in(cstring) RETURNS rxid
AS '@MODULE_PATHNAME@', 'ts_remote_txn_id_in' LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;
CREATE OR REPLACE FUNCTION _timescaledb_internal.rxid_out(rxid) RETURNS cstring
AS '@MODULE_PATHNAME@', 'ts_remote_txn_id_out' LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;
CREATE TYPE rxid (
internallength = 16,
input = _timescaledb_internal.rxid_in,
output = _timescaledb_internal.rxid_out
);
CREATE TABLE _timescaledb_catalog.remote_txn (
data_node_name NAME, --this is really only to allow us to clean up stuff on a per-node basis.
remote_transaction_id TEXT CHECK (remote_transaction_id::rxid is not null),
PRIMARY KEY (remote_transaction_id)
);
CREATE INDEX IF NOT EXISTS remote_txn_data_node_name_idx
ON _timescaledb_catalog.remote_txn(data_node_name);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.remote_txn', '');
GRANT SELECT ON _timescaledb_catalog.remote_txn TO PUBLIC;
DROP VIEW IF EXISTS timescaledb_information.compressed_hypertable_stats;
DROP VIEW IF EXISTS timescaledb_information.compressed_chunk_stats;
-- all existing compressed chunks have NULL value for the new columns
ALTER TABLE IF EXISTS _timescaledb_catalog.compression_chunk_size ADD COLUMN IF NOT EXISTS numrows_pre_compression BIGINT;
ALTER TABLE IF EXISTS _timescaledb_catalog.compression_chunk_size ADD COLUMN IF NOT EXISTS numrows_post_compression BIGINT;
--rewrite catalog table to not break catalog scans on tables with missingval optimization
CLUSTER _timescaledb_catalog.compression_chunk_size USING compression_chunk_size_pkey;
ALTER TABLE _timescaledb_catalog.compression_chunk_size SET WITHOUT CLUSTER;
---Clean up constraints on hypertable catalog table ---
ALTER TABLE _timescaledb_catalog.hypertable ADD CONSTRAINT hypertable_table_name_schema_name_key UNIQUE(table_name, schema_name);
ALTER TABLE _timescaledb_catalog.hypertable DROP CONSTRAINT hypertable_schema_name_table_name_key;
ALTER TABLE _timescaledb_catalog.hypertable DROP CONSTRAINT hypertable_id_schema_name_key;
-- add fields for custom jobs/generic configuration to bgw_job table
ALTER TABLE _timescaledb_config.bgw_job ADD COLUMN proc_schema NAME NOT NULL DEFAULT '';
ALTER TABLE _timescaledb_config.bgw_job ADD COLUMN proc_name NAME NOT NULL DEFAULT '';
ALTER TABLE _timescaledb_config.bgw_job ADD COLUMN owner NAME NOT NULL DEFAULT CURRENT_ROLE;
ALTER TABLE _timescaledb_config.bgw_job ADD COLUMN scheduled BOOL NOT NULL DEFAULT true;
ALTER TABLE _timescaledb_config.bgw_job ADD COLUMN hypertable_id INTEGER REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE;
ALTER TABLE _timescaledb_config.bgw_job ADD COLUMN config JSONB;
ALTER TABLE _timescaledb_config.bgw_job DROP CONSTRAINT valid_job_type;
ALTER TABLE _timescaledb_config.bgw_job ADD CONSTRAINT valid_job_type CHECK (job_type IN ('telemetry_and_version_check_if_enabled', 'reorder', 'drop_chunks', 'continuous_aggregate', 'compress_chunks', 'custom'));
-- migrate telemetry jobs
UPDATE
_timescaledb_config.bgw_job job
SET
application_name = format('%s [%s]', application_name, id),
proc_schema = '_timescaledb_internal',
proc_name = 'policy_telemetry',
owner = CURRENT_ROLE
WHERE job_type = 'telemetry_and_version_check_if_enabled';
-- migrate reorder jobs
UPDATE
_timescaledb_config.bgw_job job
SET
application_name = format('%s [%s]', 'Reorder Policy', id),
proc_schema = '_timescaledb_internal',
proc_name = 'policy_reorder',
config = jsonb_build_object('hypertable_id', reorder.hypertable_id, 'index_name', reorder.hypertable_index_name),
hypertable_id = reorder.hypertable_id,
owner = (
SELECT relowner::regrole::text
FROM _timescaledb_catalog.hypertable ht,
pg_class cl
WHERE ht.id = reorder.hypertable_id
AND cl.oid = format('%I.%I', schema_name, table_name)::regclass)
FROM _timescaledb_config.bgw_policy_reorder reorder
WHERE job_type = 'reorder'
AND job.id = reorder.job_id;
-- migrate compression jobs
UPDATE
_timescaledb_config.bgw_job job
SET
application_name = format('%s [%s]', 'Compression Policy', id),
proc_schema = '_timescaledb_internal',
proc_name = 'policy_compression',
config = jsonb_build_object('hypertable_id', c.hypertable_id, 'compress_after', CASE WHEN (older_than).is_time_interval THEN
(older_than).time_interval::text
ELSE
(older_than).integer_interval::text
END),
hypertable_id = c.hypertable_id,
owner = (
SELECT relowner::regrole::text
FROM _timescaledb_catalog.hypertable ht,
pg_class cl
WHERE ht.id = c.hypertable_id
AND cl.oid = format('%I.%I', schema_name, table_name)::regclass)
FROM _timescaledb_config.bgw_policy_compress_chunks c
WHERE job_type = 'compress_chunks'
AND job.id = c.job_id;
-- migrate retention jobs
UPDATE
_timescaledb_config.bgw_job job
SET
application_name = format('%s [%s]', 'Retention Policy', id),
proc_schema = '_timescaledb_internal',
proc_name = 'policy_retention',
config = jsonb_build_object('hypertable_id', c.hypertable_id, 'drop_after', CASE WHEN (older_than).is_time_interval THEN
(older_than).time_interval::text
ELSE
(older_than).integer_interval::text
END),
hypertable_id = c.hypertable_id,
owner = (
SELECT relowner::regrole::text
FROM _timescaledb_catalog.hypertable ht,
pg_class cl
WHERE ht.id = c.hypertable_id
AND cl.oid = format('%I.%I', schema_name, table_name)::regclass)
FROM _timescaledb_config.bgw_policy_drop_chunks c
WHERE job_type = 'drop_chunks'
AND job.id = c.job_id;
-- migrate cagg jobs
--- timescale functions cannot be invoked in latest-dev.sql
--- this is a mapping for get_time_type
CREATE FUNCTION ts_tmp_get_time_type(htid integer)
RETURNS OID LANGUAGE SQL AS
$BODY$
SELECT dim.column_type
FROM _timescaledb_catalog.dimension dim
WHERE dim.hypertable_id = htid and dim.num_slices is null
and dim.interval_length is not null;
$BODY$;
--- this is a mapping for _timescaledb_internal.to_interval
CREATE FUNCTION ts_tmp_get_interval( intval bigint)
RETURNS INTERVAL LANGUAGE SQL AS
$BODY$
SELECT format('%sd %ss', intval/86400000000, (intval%86400000000)/1E6)::interval;
$BODY$;
UPDATE
_timescaledb_config.bgw_job job
SET
application_name = format('%s [%s]', 'Refresh Continuous Aggregate Policy', id),
proc_schema = '_timescaledb_internal',
proc_name = 'policy_refresh_continuous_aggregate',
job_type = 'custom',
config =
CASE WHEN ts_tmp_get_time_type( cagg.raw_hypertable_id ) IN ('TIMESTAMP'::regtype, 'DATE'::regtype, 'TIMESTAMPTZ'::regtype)
THEN
jsonb_build_object('mat_hypertable_id', cagg.mat_hypertable_id, 'start_offset',
CASE WHEN cagg.ignore_invalidation_older_than IS NULL OR cagg.ignore_invalidation_older_than = 9223372036854775807
THEN NULL
ELSE ts_tmp_get_interval(cagg.ignore_invalidation_older_than)::TEXT
END
, 'end_offset', ts_tmp_get_interval(cagg.refresh_lag)::TEXT)
ELSE
jsonb_build_object('mat_hypertable_id', cagg.mat_hypertable_id, 'start_offset',
CASE WHEN cagg.ignore_invalidation_older_than IS NULL OR cagg.ignore_invalidation_older_than = 9223372036854775807
THEN NULL
ELSE cagg.ignore_invalidation_older_than::BIGINT
END
, 'end_offset', cagg.refresh_lag::BIGINT)
END,
hypertable_id = cagg.mat_hypertable_id,
owner = (
SELECT relowner::regrole::text
FROM _timescaledb_catalog.hypertable ht,
pg_class cl
WHERE ht.id = cagg.mat_hypertable_id
AND cl.oid = format('%I.%I', schema_name, table_name)::regclass)
FROM _timescaledb_catalog.continuous_agg cagg
WHERE job_type = 'continuous_aggregate'
AND job.id = cagg.job_id ;
--drop tmp functions created for cont agg job migration
DROP FUNCTION ts_tmp_get_time_type;
DROP FUNCTION ts_tmp_get_interval;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_config.bgw_policy_reorder;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_config.bgw_policy_compress_chunks;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_config.bgw_policy_drop_chunks;
DROP TABLE IF EXISTS _timescaledb_config.bgw_policy_reorder CASCADE;
DROP TABLE IF EXISTS _timescaledb_config.bgw_policy_compress_chunks;
DROP TABLE IF EXISTS _timescaledb_config.bgw_policy_drop_chunks;
DROP FUNCTION IF EXISTS _timescaledb_internal.valid_ts_interval;
DROP TYPE IF EXISTS _timescaledb_catalog.ts_interval;
DROP VIEW IF EXISTS timescaledb_information.continuous_aggregates;
DROP VIEW IF EXISTS timescaledb_information.continuous_aggregate_stats;
ALTER TABLE IF EXISTS _timescaledb_catalog.continuous_agg DROP COLUMN IF EXISTS job_id;
-- rebuild bgw_job table
CREATE TABLE _timescaledb_config.bgw_job_tmp AS SELECT * FROM _timescaledb_config.bgw_job;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_config.bgw_job;
ALTER EXTENSION timescaledb DROP SEQUENCE _timescaledb_config.bgw_job_id_seq;
ALTER TABLE _timescaledb_internal.bgw_job_stat DROP CONSTRAINT IF EXISTS bgw_job_stat_job_id_fkey;
ALTER TABLE _timescaledb_internal.bgw_policy_chunk_stats DROP CONSTRAINT IF EXISTS bgw_policy_chunk_stats_job_id_fkey;
-- remember sequence values so they can be restored in new sequence
CREATE TABLE tmp_bgw_job_seq_value AS SELECT last_value, is_called FROM _timescaledb_config.bgw_job_id_seq;
DROP TABLE _timescaledb_config.bgw_job;
CREATE SEQUENCE IF NOT EXISTS _timescaledb_config.bgw_job_id_seq MINVALUE 1000;
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_config.bgw_job_id_seq', '');
SELECT setval('_timescaledb_config.bgw_job_id_seq', last_value, is_called) FROM tmp_bgw_job_seq_value;
DROP TABLE tmp_bgw_job_seq_value;
CREATE TABLE IF NOT EXISTS _timescaledb_config.bgw_job (
id INTEGER PRIMARY KEY DEFAULT nextval('_timescaledb_config.bgw_job_id_seq'),
application_name NAME NOT NULL,
schedule_interval INTERVAL NOT NULL,
max_runtime INTERVAL NOT NULL,
max_retries INTEGER NOT NULL,
retry_period INTERVAL NOT NULL,
proc_schema NAME NOT NULL,
proc_name NAME NOT NULL,
owner NAME NOT NULL DEFAULT CURRENT_ROLE,
scheduled BOOL NOT NULL DEFAULT true,
hypertable_id INTEGER REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE,
config JSONB
);
ALTER SEQUENCE _timescaledb_config.bgw_job_id_seq OWNED BY _timescaledb_config.bgw_job.id;
CREATE INDEX IF NOT EXISTS bgw_job_proc_hypertable_id_idx ON _timescaledb_config.bgw_job(proc_schema,proc_name,hypertable_id);
INSERT INTO _timescaledb_config.bgw_job SELECT id, application_name, schedule_interval, max_runtime, max_retries, retry_period, proc_schema, proc_name, owner, scheduled, hypertable_id, config FROM _timescaledb_config.bgw_job_tmp ORDER BY id;
DROP TABLE _timescaledb_config.bgw_job_tmp;
ALTER TABLE _timescaledb_internal.bgw_job_stat ADD CONSTRAINT bgw_job_stat_job_id_fkey FOREIGN KEY(job_id) REFERENCES _timescaledb_config.bgw_job(id) ON DELETE CASCADE;
ALTER TABLE _timescaledb_internal.bgw_policy_chunk_stats ADD CONSTRAINT bgw_policy_chunk_stats_job_id_fkey FOREIGN KEY(job_id) REFERENCES _timescaledb_config.bgw_job(id) ON DELETE CASCADE;
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_config.bgw_job', 'WHERE id >= 1000');
GRANT SELECT ON _timescaledb_config.bgw_job TO PUBLIC;
GRANT SELECT ON _timescaledb_config.bgw_job_id_seq TO PUBLIC;
-- Add entry to materialization invalidation log to indicate that [watermark, +infinity) is invalid
INSERT INTO _timescaledb_catalog.continuous_aggs_materialization_invalidation_log
SELECT materialization_id, BIGINT '-9223372036854775808', watermark, 9223372036854775807
FROM _timescaledb_catalog.continuous_aggs_completed_threshold;
-- Also handle continuous aggs that have never been run
INSERT INTO _timescaledb_catalog.continuous_aggs_materialization_invalidation_log
SELECT unrun_cagg.id, -9223372036854775808, -9223372036854775808, 9223372036854775807 FROM
(SELECT mat_hypertable_id id FROM _timescaledb_catalog.continuous_agg EXCEPT SELECT materialization_id FROM _timescaledb_catalog.continuous_aggs_completed_threshold) as unrun_cagg;
-- Also add an invalidation from [-infinity, now() - ignore_invalidation_older_than] to cover any missed invalidations
-- For NULL or infinite ignore_invalidation_older_than, use Julian date 0 for consistency with 2.0 (for int tables, use INT_MIN - 1)
DO $$
DECLARE
cagg _timescaledb_catalog.continuous_agg%ROWTYPE;
dimrow _timescaledb_catalog.dimension%ROWTYPE;
end_val bigint;
getendval text;
BEGIN
FOR cagg in SELECT * FROM _timescaledb_catalog.continuous_agg
LOOP
SELECT * INTO dimrow
FROM _timescaledb_catalog.dimension dim
WHERE dim.hypertable_id = cagg.raw_hypertable_id AND dim.num_slices IS NULL AND dim.interval_length IS NOT NULL;
IF dimrow.column_type IN ('TIMESTAMP'::regtype, 'DATE'::regtype, 'TIMESTAMPTZ'::regtype)
THEN
IF cagg.ignore_invalidation_older_than IS NULL OR cagg.ignore_invalidation_older_than = 9223372036854775807
THEN
end_val := -210866803200000001;
ELSE
end_val := (extract(epoch from now()) * 1000000 - cagg.ignore_invalidation_older_than)::int8;
END IF;
ELSE
IF cagg.ignore_invalidation_older_than IS NULL OR cagg.ignore_invalidation_older_than = 9223372036854775807
THEN
end_val := -2147483649;
ELSE
getendval := format('SELECT %s.%s() - %s', dimrow.integer_now_func_schema, dimrow.integer_now_func, cagg.ignore_invalidation_older_than);
EXECUTE getendval INTO end_val;
END IF;
END IF;
INSERT INTO _timescaledb_catalog.continuous_aggs_materialization_invalidation_log
VALUES (cagg.mat_hypertable_id, -9223372036854775808, -9223372036854775808, end_val);
END LOOP;
END $$;
-- drop completed_threshold table, which is no longer used
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.continuous_aggs_completed_threshold;
DROP TABLE IF EXISTS _timescaledb_catalog.continuous_aggs_completed_threshold;
-- rebuild continuous aggregate table
CREATE TABLE _timescaledb_catalog.continuous_agg_tmp AS SELECT * FROM _timescaledb_catalog.continuous_agg;
ALTER TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log DROP CONSTRAINT continuous_aggs_materialization_invalid_materialization_id_fkey;
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.continuous_agg;
DROP TABLE _timescaledb_catalog.continuous_agg;
CREATE TABLE IF NOT EXISTS _timescaledb_catalog.continuous_agg (
mat_hypertable_id INTEGER PRIMARY KEY REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE,
raw_hypertable_id INTEGER NOT NULL REFERENCES _timescaledb_catalog.hypertable(id) ON DELETE CASCADE,
user_view_schema NAME NOT NULL,
user_view_name NAME NOT NULL,
partial_view_schema NAME NOT NULL,
partial_view_name NAME NOT NULL,
bucket_width BIGINT NOT NULL,
direct_view_schema NAME NOT NULL,
direct_view_name NAME NOT NULL,
materialized_only BOOL NOT NULL DEFAULT false,
UNIQUE(user_view_schema, user_view_name),
UNIQUE(partial_view_schema, partial_view_name)
);
CREATE INDEX IF NOT EXISTS continuous_agg_raw_hypertable_id_idx
ON _timescaledb_catalog.continuous_agg(raw_hypertable_id);
SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_agg', '');
GRANT SELECT ON _timescaledb_catalog.continuous_agg TO PUBLIC;
INSERT INTO _timescaledb_catalog.continuous_agg SELECT mat_hypertable_id,raw_hypertable_id,user_view_schema,user_view_name,partial_view_schema,partial_view_name,bucket_width,direct_view_schema,direct_view_name,materialized_only FROM _timescaledb_catalog.continuous_agg_tmp;
DROP TABLE _timescaledb_catalog.continuous_agg_tmp;
ALTER TABLE _timescaledb_catalog.continuous_aggs_materialization_invalidation_log ADD CONSTRAINT continuous_aggs_materialization_invalid_materialization_id_fkey FOREIGN KEY(materialization_id) REFERENCES _timescaledb_catalog.continuous_agg(mat_hypertable_id);
-- disable autovacuum for compressed chunks
DO $$
DECLARE
chunk regclass;
BEGIN
FOR chunk IN
SELECT format('%I.%I', schema_name, table_name)::regclass
FROM _timescaledb_catalog.chunk WHERE compressed_chunk_id IS NOT NULL
LOOP
EXECUTE format('ALTER TABLE %s SET (autovacuum_enabled=false);', chunk::text);
END LOOP;
END
$$;
CREATE OR REPLACE FUNCTION timescaledb_fdw_handler()
RETURNS fdw_handler
AS '@MODULE_PATHNAME@', 'ts_timescaledb_fdw_handler'
LANGUAGE C STRICT;
CREATE OR REPLACE FUNCTION timescaledb_fdw_validator(text[], oid)
RETURNS void
AS '@MODULE_PATHNAME@', 'ts_timescaledb_fdw_validator'
LANGUAGE C STRICT;
CREATE FOREIGN DATA WRAPPER timescaledb_fdw
HANDLER timescaledb_fdw_handler
VALIDATOR timescaledb_fdw_validator;


@@ -1,2 +1,2 @@
version = 2.0.0-dev
update_from_version = 1.7.4
version = 2.1.0-dev
update_from_version = 2.0.0-rc1