670 Commits

Author SHA1 Message Date
Alexander Kuzmenkov
5c0110cbbf Mark partialize_agg as parallel safe
Postgres knows whether a given aggregate is parallel-safe, and creates
parallel aggregation plans based on that. The `partialize_agg` is a
wrapper we use to perform partial aggregation on data nodes. It is a
pure function that produces serialized aggregation state as a result.
Being pure, it doesn't influence parallel safety. This means we don't
need to mark it parallel-unsafe to artificially disable the parallel
plans for partial aggregation. They will be chosen as usual based on
the parallel-safety of the underlying aggregate function.
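
For example (hypertable and column names hypothetical), partial
aggregation through the wrapper can now use a parallel plan whenever
the underlying aggregate is parallel safe:

```sql
SELECT _timescaledb_internal.partialize_agg(sum(value))
FROM metrics
GROUP BY time_bucket('1 hour', time);
```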
2022-05-31 14:53:58 +05:30
Sven Klemm
cf9b626794 Post-Release 2.7.0
Adjust version.config and add 2.7.0 to update/downgrade test.
2022-05-24 12:54:06 +02:00
Sven Klemm
0a68209563 Release 2.7.0
This release adds major new features since the 2.6.1 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* Optimize continuous aggregate query performance and storage
* The following query clauses and functions can now be used in a continuous
  aggregate: FILTER, DISTINCT, ORDER BY as well as [Ordered-Set Aggregate](https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-ORDEREDSET-TABLE)
  and [Hypothetical-Set Aggregate](https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-HYPOTHETICAL-TABLE)
* Optimize now() query planning time
* Improve COPY insert performance
* Improve performance of UPDATE/DELETE on PG14 by excluding chunks

This release also includes several bug fixes.

If you are upgrading from a previous version and were using compression
with a non-default collation on a segmentby-column, you should recompress
those hypertables, as sketched below.
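
A minimal recompression sketch, assuming an affected hypertable named
`conditions` (hypothetical name):

```sql
-- decompress every compressed chunk, then compress it again
SELECT decompress_chunk(c, if_compressed => true)
  FROM show_chunks('conditions') c;
SELECT compress_chunk(c, if_not_compressed => true)
  FROM show_chunks('conditions') c;
```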

**Features**
* #4045 Custom origin support in CAGGs
* #4120 Add logging for retention policy
* #4158 Allow ANALYZE command on a data node directly
* #4169 Add support for chunk exclusion on DELETE to PG14
* #4209 Add support for chunk exclusion on UPDATE to PG14
* #4269 Continuous Aggregates finalized form
* #4301 Add support for bulk inserts in COPY operator
* #4311 Support non-superuser move chunk operations
* #4330 Add GUC "bgw_launcher_poll_time"
* #4340 Enable now() usage in plan-time chunk exclusion

**Bugfixes**
* #3899 Fix segfault in Continuous Aggregates
* #4225 Fix TRUNCATE error as non-owner on hypertable
* #4236 Fix potential wrong order of results for compressed hypertable with a non-default collation
* #4249 Fix option "timescaledb.create_group_indexes"
* #4251 Fix INSERT into compressed chunks with dropped columns
* #4255 Fix option "timescaledb.create_group_indexes"
* #4259 Fix logic bug in extension update script
* #4269 Fix bad Continuous Aggregate view definition reported in #4233
* #4289 Support moving compressed chunks between data nodes
* #4300 Fix refresh window cap for cagg refresh policy
* #4315 Fix memory leak in scheduler
* #4323 Remove printouts from signal handlers
* #4342 Fix move chunk cleanup logic
* #4349 Fix crashes in functions using AlterTableInternal
* #4358 Fix crash and other issues in telemetry reporter

**Thanks**
* @abrownsword for reporting a bug in the telemetry reporter and testing the fix
* @jsoref for fixing various misspellings in code, comments and documentation
* @yalon for reporting an error with ALTER TABLE RENAME on distributed hypertables
* @zhuizhuhaomeng for reporting and fixing a memory leak in our scheduler
2022-05-23 17:58:20 +02:00
Sven Klemm
26da1b5035 Show warning for caggs with numeric
PG14 changes the internal state format for numeric aggregates,
which we materialize in caggs. This invalidates the affected
columns when upgrading from PG13 to PG14. This patch adds a
warning to the update script when we encounter this
configuration.
2022-05-23 16:40:53 +02:00
Dmitry Simonenko
f1575bb4c3 Support moving compressed chunks between data nodes
This change allows copying or moving compressed chunks
between data nodes by including the compressed chunk in the
chunk copy command stages.
2022-05-18 22:14:50 +03:00
Nikhil Sontakke
ddd02922c9 Support non-superuser move chunk operations
The non-superuser needs to have at least REPLICATION privileges. A
new function "subscription_cmd" has been added to allow running
subscription-related commands on data nodes. This function implicitly
upgrades to the bootstrapped superuser and then performs subscription
creation/alteration/deletion commands. It only accepts
subscription-related commands and errors out otherwise.
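
A sketch of the resulting workflow (role, node, and chunk names
hypothetical):

```sql
-- the non-superuser needs at least REPLICATION privileges
ALTER ROLE chunk_mover REPLICATION;
-- run as chunk_mover:
CALL timescaledb_experimental.move_chunk(
    chunk => '_timescaledb_internal._dist_hyper_1_1_chunk',
    source_node => 'data_node_1',
    destination_node => 'data_node_2');
```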
2022-05-18 16:56:31 +05:30
Nikhil Sontakke
92f7e5d361 Support op_id in copy/move chunk API
Allow users to specify an explicit "operation_id" while carrying out
a copy/move operation. If it's specified then that is used as the
identifier for the copy/move operation. Otherwise, an implicit id is
created and used as before.
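
A sketch with an explicit operation id, assuming the experimental
copy_chunk procedure accepts the new argument (other names
hypothetical):

```sql
CALL timescaledb_experimental.copy_chunk(
    chunk => '_timescaledb_internal._dist_hyper_1_1_chunk',
    source_node => 'data_node_1',
    destination_node => 'data_node_2',
    operation_id => 'copy_op_1');
```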
2022-05-13 14:20:38 +05:30
gayyappan
5d56b1cdbc Add api _timescaledb_internal.drop_chunk
Add an internal API to drop a single chunk.
This function drops the storage and metadata
associated with the chunk.
Note that chunk dependencies are not affected;
e.g., continuous aggs are not updated when this
chunk is dropped.
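
A usage sketch (chunk name hypothetical):

```sql
SELECT _timescaledb_internal.drop_chunk(
    '_timescaledb_internal._hyper_1_1_chunk');
```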
2022-05-11 15:10:38 -04:00
gayyappan
9f4dcea301 Add _timescaledb_internal.freeze_chunk API
This is an internal function to freeze a chunk
for PG14 and later.

This function sets a chunk status to frozen.
Operations that modify the chunk data
(like insert, update, delete) are not
supported. Frozen chunks can be dropped.

Additionally, chunk status is cached as part of
classify_relation.
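
A usage sketch on PG14 or later (chunk name hypothetical):

```sql
SELECT _timescaledb_internal.freeze_chunk(
    '_timescaledb_internal._hyper_1_1_chunk');
```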
2022-05-10 14:00:32 -04:00
Fabrízio de Royes Mello
1e8d37b54e Remove chunk_id from materialization hypertable
First step to remove the re-aggregation for Continuous Aggregates
is to remove the `chunk_id` from the materialization hypertable.

Also added a new metadata column named `finalized` to the
`continuous_agg` catalog table in order to store information about
the new finalized version of Continuous Aggregates, which will not
need the partials anymore. This flag is important to maintain
backward compatibility with the previous Continuous Aggregate
implementation, which requires the `chunk_id` to refresh data
properly.
2022-05-06 14:30:00 -03:00
Sven Klemm
a4081516ca Append pg_temp to search_path
Postgres will prepend pg_temp to the effective search_path if it
is not present in the search_path. While pg_temp will never be
used to look up functions or operators unless explicitly requested,
it will be used to look up relations. Appending pg_temp to the
search_path makes sure objects in pg_temp are considered last and
pg_temp cannot be used to mask existing objects.
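
As a minimal sketch of the resulting setting:

```sql
-- pg_temp is appended explicitly so it is searched last rather than
-- implicitly prepended by Postgres.
SET search_path TO pg_catalog, pg_temp;
```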
2022-05-03 07:55:43 +02:00
Sven Klemm
4f67ed0656 Fix unqualified type reference in time_bucket
This was not caught earlier and is not currently caught by CI
because the check for unqualified casts is currently only in the
main branch of pgspot and not yet in the tagged version we use as
part of PR checks.
2022-04-29 10:29:02 +02:00
Alexander Kuzmenkov
a1e76d2e84 Follow-up for compressed chunk collation #4236
Add a changelog message and add a check for broken chunks to the update
script.
2022-04-25 18:09:40 +05:30
Sven Klemm
d137c1aa83 Fix unsafe procedure creation in update script
post_update_cagg_try_repair was created with `CREATE OR REPLACE`
instead of CREATE. Additionally the procedure was created in the
public schema. This patch adjusts the procedure to be created
with `CREATE` and in a timescaledb internal schema.
Found by pgspot.
2022-04-22 12:12:48 +02:00
Sven Klemm
a2c39b9afe Fix logic bug in init_privs query
The query to get the list of saved privileges during extension
upgrade had a bug: it only applied the classoid restriction to
a subset of the entries when it should have been applied to all
returned rows. This led to a failure during extension update when
init privileges for other classoids existed on any of the
relevant objects.
2022-04-21 10:52:13 +02:00
Markos Fountoulakis
fab16f3798 Fix segfault in Continuous Aggregates
Add the missing variables to the finalization view of Continuous
Aggregates and the corresponding columns to the materialization table.
Cover the case of targets that contain Aggref nodes and Var nodes
that are outside of the Aggref nodes at the same time.

Stop rebuilding the Continuous Aggregate view with ALTER MATERIALIZED
VIEW. Attempt to repair the view at post-update time instead, and fail
gracefully if it is not possible to do so without raw hypertable schema
or data modifications.

Stop rebuilding the Continuous Aggregate view when switching realtime
aggregation on and off. Instead, manipulate the User View by either:
  1. removing the UNION ALL right-hand side and the WHERE clause when
     disabling realtime aggregation
  2. adding the Direct View to the right of a UNION ALL operator and
     defining WHERE clauses with the relevant watermark checks when
     enabling realtime aggregation

Fixes #3898
2022-04-18 12:54:20 +03:00
Sven Klemm
bdaa4607d4 Post release 2.6.1
Add 2.6.1 to update and downgrade tests.
2022-04-12 11:52:31 +02:00
Rafia Sabih
4d47271ad3 2.6.1 (2022-04-11)
This release is a patch release. We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #3974 Fix remote EXPLAIN with parameterized queries
* #4122 Fix segfault on INSERT into distributed hypertable
* #4142 Ignore invalid relid when deleting hypertable
* #4159 Fix ADD COLUMN IF NOT EXISTS error on compressed hypertable
* #4161 Fix memory handling during scans
* #4186 Fix owner change for distributed hypertable
* #4192 Abort sessions after extension reload
* #4193 Fix relcache callback handling causing crashes

**Thanks**
* @abrownsword for reporting a crash in the telemetry reporter
* @daydayup863 for reporting issue with remote explain
2022-04-08 18:15:35 +02:00
Konstantina Skovola
915bd032bd Fix spelling errors and omissions 2022-03-30 05:21:54 -07:00
Erik Nordström
c1cf067c4f Improve restriction scanning during hypertable expansion
Improve the performance of metadata scanning during hypertable
expansion.

When a hypertable is expanded to include all children chunks, only the
chunks that match the query restrictions are included. To find the
matching chunks, the planner first scans for all matching dimension
slices. The chunks that reference those slices are the chunks to
include in the expansion.

This change optimizes the scanning for slices by avoiding repeated
open/close of the dimension slice metadata table and index.

At the same time, related dimension slice scanning functions have been
refactored along the same lines.

An index on the chunk constraint metadata table is also changed to
allow scanning on dimension_slice_id. Previously, dimension_slice_id
was the second key in the index, which made scans on this key less
efficient.
2022-03-21 15:18:44 +01:00
Fabrízio de Royes Mello
33bbdccdcd Refactor function hypertable_local_size
Reorganize the code and fix a minor bug where the sizes
of the FSM, VM and INIT forks of the parent hypertable were not
computed.

Fix the bug by exposing the `ts_relation_size` function at the SQL
level to encapsulate the logic to compute `heap`, `indexes` and `toast`
sizes.
2022-03-07 16:38:40 -03:00
Mats Kindahl
15d33f0624 Add option to compile without telemetry
Add option `USE_TELEMETRY` that can be used to exclude telemetry from
the compile.

Telemetry-specific SQL is moved so that it is only included when the
extension is compiled with telemetry, and the notice is changed so
that the message about telemetry is not printed when telemetry is not
compiled in.

The following code is not compiled in when telemetry is not used:
- Cross-module functions for telemetry.
- Checks for telemetry job in job execution.
- GUC variables `telemetry_level` and `telemetry_cloud`.

Telemetry subsystem is not included when compiling without telemetry,
which requires some functions to be moved out of the telemetry
subsystem:
- Metadata handling is moved out of the telemetry module since it is
  used not only with telemetry.
- UUID functions are moved into a separate module instead of being
  part of the telemetry subsystem.
- Telemetry functions are either added or removed when updating from a
  previous version.

Tests are updated to:
- Not use telemetry functions to get UUID or metadata and instead use
  the moved UUID and metadata functions.
- Not include telemetry information in tests that do not require it.
- Not set telemetry variables in configuration files when telemetry is
  not compiled in.
- Replace usage of telemetry functions in non-telemetry tests with
  other sources of the same information.

Fixes #3931
2022-03-03 12:21:07 +01:00
Sven Klemm
45cf7f1fa1 Document requirements for statements in sql files
Since we now lock down search_path during update/downgrade there
are some additional requirements for writing sql files.
2022-02-18 14:15:11 +01:00
Alexander Kuzmenkov
f29340281e Post release 2.6.0 2022-02-18 14:18:27 +03:00
Alexander Kuzmenkov
9e7fbf7f69 Release 2.6.0
This release is medium priority for upgrade. We recommend that you
upgrade at the next available opportunity.

This release adds major new features since the 2.5.2 release,
including:

* Compression in continuous aggregates
* Experimental support for timezones in continuous aggregates
* Experimental support for monthly buckets in continuous aggregates

It also includes several bug fixes. Telemetry reports are switched to
a new format, and now include more detailed statistics on compression,
distributed hypertables and indexes.

**Features**

* #3768 Allow ALTER TABLE ADD COLUMN with DEFAULT on compressed hypertable
* #3769 Allow ALTER TABLE DROP COLUMN on compressed hypertable
* #3943 Optimize first/last
* #3945 Add support for ALTER SCHEMA on multi-node
* #3949 Add support for DROP SCHEMA on multi-node

**Bugfixes**

* #3808 Properly handle max_retries option
* #3863 Fix remote transaction heal logic
* #3869 Fix ALTER SET/DROP NULL constraint on distributed hypertable
* #3944 Fix segfault in add_compression_policy
* #3961 Fix crash in EXPLAIN VERBOSE on distributed hypertable
* #4015 Eliminate float rounding instabilities in interpolate
* #4019 Update ts_extension_oid in transitioning state
* #4073 Fix buffer overflow in partition scheme

**Improvements**

Query planning performance is improved for hypertables with a large
number of chunks.

**Thanks**

* @fvannee for reporting a first/last memory leak
* @mmouterde for reporting an issue with floats and interpolate
2022-02-17 19:22:46 +03:00
Sven Klemm
a956faaa16 Set search_path again after COMMIT
SET LOCAL is only active until the end of the transaction, so we set
search_path again after COMMIT in functions that do transaction
control. While we could use SET at the start of the function, we do
not want to bleed out search_path to the caller.
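
A minimal sketch of the pattern (procedure name hypothetical):

```sql
CREATE PROCEDURE repair_step() LANGUAGE plpgsql AS $$
BEGIN
    SET LOCAL search_path TO pg_catalog, pg_temp;
    -- ... first batch of work ...
    COMMIT;  -- ends the transaction and discards the SET LOCAL
    SET LOCAL search_path TO pg_catalog, pg_temp;
    -- ... second batch of work ...
END
$$;
```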
2022-02-16 06:53:15 +01:00
Sven Klemm
72d03e6f7d Remove search_path modifications from reverse-dev
Resetting search_path in reverse-dev was necessary before the
release of 2.5.2 as the previous timescaledb version scripts
didn't handle a locked-down search_path. Setting search_path can
also be removed, since the downgrade script includes pre-update.sql,
which locks down search_path.
2022-02-14 09:53:28 +01:00
Sven Klemm
531f7ed8b1 Don't use pg_temp in update scripts
Scripts run under pgextwlist cannot use pg_temp so we replace
pg_temp usage in update scripts with _timescaledb_internal.
2022-02-11 00:03:33 +01:00
Sven Klemm
c07fe1f883 Post Release 2.5.2
Add 2.5.2 to update/downgrade scripts
2022-02-10 12:13:01 +01:00
Sven Klemm
f0d163603c Release 2.5.2
This release contains bug fixes since the 2.5.1 release.
This release is high priority for upgrade. We strongly recommend that you
upgrade as soon as possible.

**Bugfixes**
* #3900 Improve custom scan node registration
* #3911 Fix role type deparsing for GRANT command
* #3918 Fix DataNodeScan plans with one-time filter
* #3921 Fix segfault on insert into internal compressed table
* #3938 Fix subtract_integer_from_now on 32-bit platforms and improve error handling
* #3939 Fix projection handling in time_bucket_gapfill
* #3948 Avoid double PGclear() in data fetchers
* #3979 Fix deparsing of index predicates
* #4015 Eliminate float rounding instabilities in interpolate
* #4020 Fix ALTER TABLE EventTrigger initialization
* #4024 Fix premature cache release call
* #4037 Fix status for dropped chunks that have catalog entries
* #4069 Fix riinfo NULL handling in ANY construct
* #4071 Fix extension installation privilege escalation
* #4073 Fix buffer overflow in partition scheme

**Thanks**
* @carlocperez for reporting crash with NULL handling in ANY construct
* @erikhh for reporting an issue with time_bucket_gapfill
* @fvannee for reporting a first/last memory leak
* @kancsuki for reporting drop column and partial index creation not working
* @mmouterde for reporting an issue with floats and interpolate
* Pedro Gallegos for reporting a possible privilege escalation during extension installation

Security: CVE-2022-24128
2022-02-09 20:05:50 +01:00
Sven Klemm
6dddfaa54e Lock down search_path in install scripts
This patch locks down search_path in extension install and update
scripts to only contain pg_catalog; this requires that any reference
in those scripts is fully qualified. Additionally we add explicit
create commands to all update scripts for objects added to the
public schema. This change will make install and update scripts fail
if a function with an identical signature already exists, instead of
reusing the existing object.
2022-02-09 17:53:20 +01:00
Sven Klemm
c8b8516e46 Fix extension installation privilege escalation
TimescaleDB was vulnerable to a privilege escalation attack in
the extension installation script. An attacker could precreate
objects normally owned by the extension and get those objects
used in the installation script since the script would only try
to create them if they did not already exist. Thanks to Pedro
Gallegos for reporting the problem.

This patch changes the schema, table and function creation to fail
and abort the installation when the object already exists instead
of using the existing object.

Security: CVE-2022-24128
2022-02-09 17:53:20 +01:00
Erik Nordström
e56b95daec Add telemetry stats based on type of relation
Refactor the telemetry function and format to include stats broken
down on common relation types. The types include:

- Tables
- Partitioned tables
- Hypertables
- Distributed hypertables
- Continuous aggregates
- Materialized views
- Views

and for each of these types report (when applicable):

- Total number of relations
- Total number of children/chunks
- Total data volume (broken into heap, toast, and indexes).
- Compression stats
- PG stats, like reltuples

The telemetry function has also been refactored to return `jsonb`
instead of `text`. This makes it easier to query and manipulate the
resulting JSON format, and also gives cleaner output.
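
For example, the report can now be inspected with standard JSON
tooling (a sketch):

```sql
SELECT jsonb_pretty(get_telemetry_report());
```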

Closes #3932
2022-02-08 09:44:55 +01:00
Sven Klemm
69b267071a Bump copyright year in license descriptions
Bump year in copyright information to 2022 and adjust some scripts
to reference NOTICE that didn't have the reference yet.
This patch also removes the orphaned test/expected/utils.out.
2022-01-14 18:35:46 +01:00
Markos Fountoulakis
4762908bc8 Clean up dead code 2022-01-03 16:23:13 +02:00
gayyappan
d8d392914a Support for compression on continuous aggregates
Enable ALTER MATERIALIZED VIEW (timescaledb.compress).
This enables compression on the underlying materialized
hypertable. The segmentby and orderby columns for
compression are based on the GROUP BY clause and time_bucket
clause used while setting up the continuous aggregate.

The timescaledb_information.continuous_aggregates view
definition is changed accordingly.

Add support for a compression policy on continuous
aggregates.

Move code from job.c to policy_utils.c and add support
functions to check compression policy validity for
continuous aggregates.
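
A minimal usage sketch, assuming a continuous aggregate named
`conditions_summary` (hypothetical):

```sql
ALTER MATERIALIZED VIEW conditions_summary
    SET (timescaledb.compress = true);
SELECT add_compression_policy('conditions_summary',
                              compress_after => INTERVAL '30 days');
```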
2021-12-17 10:51:33 -05:00
Erik Nordström
e0f02c8c1a Add option to drop database when deleting data node
When deleting a data node, it is often convenient to be able to also
drop the database on the data node so that the node can be added again
using the same database name. However, dropping the database is
optional since it should be possible to delete a data node even if it
is no longer responding.

With the new functionality, a data node's database can be dropped as
follows:

```sql
SELECT delete_data_node('dn1', drop_database=>true);
```

Note that the default behavior is still to not drop the database in
order to be compatible with the old behavior. Enabling the option also
makes the function non-transactional, since dropping a database is not
transactional. Therefore, it is not possible to use this option in a
transaction block.

Closes #3876
2021-12-16 15:59:50 +01:00
Aleksander Alekseev
91f3edf609 Refactoring: get rid of max_bucket_width
Our code occasionally mentions max_bucket_width. However, in practice, there is
no such thing. For fixed-sized buckets, bucket_width and max_bucket_width are
always the same, while for variable-sized buckets bucket_width is not used at
all (except the fact that it equals -1 to indicate that the bucket size is
variable).

This patch removes any use of max_bucket_width, except for arguments of:

- _timescaledb_internal.invalidation_process_hypertable_log()
- _timescaledb_internal.invalidation_process_cagg_log()

The signatures of these functions were not changed for backward compatibility
between access and data nodes, which can run different versions of TimescaleDB.
2021-12-16 16:33:20 +03:00
Aleksander Alekseev
958040699c Monthly buckets support in CAGGs
This patch allows using time_bucket_ng("N month", ...) in CAGGs. Users can also
specify years, or months AND years. CAGGs on top of distributed hypertables
are supported as well.
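
A sketch of what this enables (table and column names hypothetical):

```sql
CREATE MATERIALIZED VIEW conditions_monthly
WITH (timescaledb.continuous) AS
SELECT timescaledb_experimental.time_bucket_ng('1 month', time) AS bucket,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket;
```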
2021-12-13 22:21:17 +03:00
Mats Kindahl
aae19319c0 Rewrite recompress_chunk as procedure
When executing `recompress_chunk` and a query at the same time, a
deadlock can be generated because the chunk relation and the chunk
index and the compressed and uncompressed chunks are locked in different
orders. In particular, when `recompress_chunk` is executing, it will
first decompress the chunk and as part of that lock the uncompressed
chunk index in AccessExclusive mode and when trying to compress the
chunk again it will try to lock the uncompressed chunk in
AccessExclusive as part of truncating it.

Note that `decompress_chunk` and `compress_chunk` lock the relations in
the same order and the issue arises because the procedures are combined
into a single transaction.

To avoid the deadlock, this commit rewrites the `recompress_chunk` to
be a procedure and adds a commit between the decompression and
compression. Committing the transaction after the decompress will allow
reads and inserts to proceed by working on the uncompressed chunk, and
the compression part of the procedure will take the necessary locks in
strict order, thereby avoiding a deadlock.

In addition, the isolation test is rewritten so that instead of adding
a waitpoint in the PL/SQL function, we implement the isolation test by
taking a lock on the compressed table after the decompression.
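
Since `recompress_chunk` is now a procedure, it is invoked with CALL
(chunk name hypothetical):

```sql
CALL recompress_chunk('_timescaledb_internal._hyper_1_1_chunk');
```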

Fixes #3846
2021-12-09 19:42:12 +01:00
Duncan Moore
1bc1993c61 Post release 2.5.1 2021-12-03 13:51:11 -05:00
Duncan Moore
94dc837394 Release 2.5.1 2021-12-02 15:11:36 -05:00
Mats Kindahl
112107546f Eliminate deadlock in recompress chunk policy
When executing the recompress chunk policy concurrently with queries, a
deadlock can be generated because the chunk relation, the chunk index,
the uncompressed chunk, and the compressed chunk are locked in
different orders. In particular, when the recompress chunk policy is
executing, it will first decompress the chunk and, as part of that,
lock the compressed chunk in `AccessExclusive` mode when dropping it;
when trying to compress the chunk again, it will try to lock the
uncompressed chunk in `AccessExclusive` mode as part of truncating it.

To avoid the deadlock, this commit updates the recompress policy to do
the compression and the decompression steps in separate transactions,
which will avoid the deadlock since each phase (decompress and compress
chunk) locks indexes and compressed/uncompressed chunks in the same
order.

Note that this fixes the policy only, and not the `recompress_chunk`
function, which still is prone to deadlocks.

Partial-Bug: #3846
2021-11-30 18:04:30 +01:00
Erik Nordström
b78b25d317 Fail size utility functions when data nodes do not respond
Size utility functions, such as `hypertable_size()`, excluded
non-responding data nodes from size calculations, which led to the
functions succeeding but returning the wrong size information. To
avoid reporting confusing numbers, it is better to fail.

This change updates the SQL queries for the relevant functions to no
longer exclude non-responding data nodes and also adds a TAP test to
illustrate the error when data nodes are not responding.

Fixes #3713
2021-11-15 19:50:33 +01:00
Fabrízio de Royes Mello
7e3e771d9f Fix compression policy on tables using INTEGER
Commit fffd6c2350f5b3237486f3d49d7167105e72a55b fixes a problem related
to PortalContext by using a PL/pgSQL procedure to execute the policy.
Unfortunately this new implementation introduced a problem when we use
INTEGER and not BIGINT for the time dimension.

Fixed it by dealing correctly with the integer types: SMALLINT, INTEGER
and BIGINT.

Also refactored the policy compression procedure, replacing the two
procedures `policy_compression_{interval|integer}` with a single
`policy_compression_execute` that casts the dimension type dynamically.

Fixes #3773
2021-11-05 14:55:23 -03:00
Fabrízio de Royes Mello
df7f058ad1 Post release 2.5.0
Add 2.5.0 to update test scripts for PG12 and PG13 and create update
test script for PG14.

Fix missing CHANGELOG thanks for external contributors.
2021-10-28 10:09:19 -03:00
Fabrízio de Royes Mello
8925dd8e15 Release 2.5.0
This release adds major new features since the 2.4.2 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* Continuous Aggregates for Distributed Hypertables
* Support for PostgreSQL 14
* Experimental: Support for timezones in `time_bucket_ng()`, including the `origin` argument

This release also includes several bug fixes.

**Features**
* #3034 Add support for PostgreSQL 14
* #3435 Add continuous aggregates for distributed hypertables
* #3505 Add support for timezones in `time_bucket_ng()`

**Bugfixes**
* #3580 Fix memory context bug executing TRUNCATE
* #3592 Allow alter column type on distributed hypertable
* #3598 Improve evaluation of stable functions such as now() on access node
* #3618 Fix execution of refresh_caggs from user actions
* #3625 Add shared dependencies when creating chunk
* #3626 Fix memory context bug executing TRUNCATE
* #3627 Schema qualify UDTs in multi-node
* #3638 Allow owner change of a data node
* #3654 Fix index attnum mapping in reorder_chunk
* #3661 Fix SkipScan path generation with constant DISTINCT column
* #3667 Fix compress_policy for multi txn handling
* #3673 Fix distributed hypertable DROP within a procedure
* #3701 Allow anyone to use size utilities on distributed hypertables
* #3708 Fix crash in get_aggsplit
* #3709 Fix ordered append pathkey check
* #3712 Fix GRANT/REVOKE ALL IN SCHEMA handling
* #3717 Support transparent decompression on individual chunks
* #3724 Fix inserts into compressed chunks on hypertables with caggs
* #3727 Fix DirectFunctionCall crash in distributed_exec
* #3728 Fix SkipScan with varchar column
* #3733 Fix ANALYZE crash with custom statistics for custom types
* #3747 Always reset expr context in DecompressChunk

**Thanks**
* @binakot and @sebvett for reporting an issue with DISTINCT queries
* @hardikm10, @DavidPavlicek and @pafiti for reporting bugs on TRUNCATE
* @mjf for reporting an issue with ordered append and JOINs
* @phemmer for reporting the issues on multinode with aggregate queries and evaluation of now()
* @abolognino for reporting an issue with INSERTs into compressed hypertables that have cagg
* @tanglebones for reporting the ANALYZE crash with custom types on multinode
2021-10-27 17:28:26 -03:00
Erik Nordström
e02f48c19d Add missing downgrade script files
Some of the downgrade script files were only added to the release
branch and no downgrade path existed for downgrading from the 2.4.2
release to the 2.4.1 release.

Add the missing downgrade files for 2.4.x releases to the main
development branch.

Also, avoid generating downgrade scripts for old versions, so that
only the downgrade script for the current version is generated by
default. A new CMake option `GENERATE_OLD_DOWNGRADE_SCRIPTS` is added
to control this behavior. Note that `GENERATE_DOWNGRADE_SCRIPT` needs
to be `ON` for the new option to work.

The reason we want to keep the ability to generate old downgrade
scripts is to be able to fix bugs in old downgrade scripts.
2021-10-27 18:23:57 +02:00
Markos Fountoulakis
221437e8ef Continuous aggregates for distributed hypertables
Add support for continuous aggregates for distributed hypertables by
allowing a continuous aggregate to read from a distributed hypertable
so that the continuous aggregate is on the access node while the
hypertable data is on the data nodes.

For distributed hypertables, both the hypertable and continuous
aggregate invalidation log are kept on the data nodes and the refresh
window is computed at refresh time on each data node. Since the
continuous aggregate materialization hypertable is not present on the
data nodes, the invalidation log was extended to allow using a
non-local hypertable id on the data nodes. This means that you cannot
create continuous aggregates on the data nodes since those could clash
with continuous aggregates on the access node.

Some utility statements added entries to the invalidation logs
directly (truncating chunks and hypertables, as well as dropping
individual chunks), so to handle this case, internal functions were
added to allow logging invalidation on the data nodes from the access
node.

The commit also includes some fixes to memory context usage that
caused crashes for invalidation triggers, and also disables per-data-node
queries during refresh since those would otherwise generate an
exception.

Fixes #3435

Co-authored-by: Mats Kindahl <mats@timescale.com>
2021-10-25 18:20:11 +03:00
gayyappan
fffd6c2350 Use plpgsql procedure for executing compression policy
This PR removes the C code that executes the compression
policy. Instead we use a PL/pgSQL procedure to execute
the policy.

PG13.4 and PG12.8 introduced some changes
that require PortalContexts while executing transactions.
The compression policy procedure compresses chunks in
multiple transactions. We have seen some issues with snapshots
and portal management in the policy code (due to the
PG13.4 code changes). The SPI API has transaction-portal management
code. However, the compression policy code does not use SPI
interfaces. But it is fairly easy to just convert this into
a PL/pgSQL procedure (which calls SPI) rather than replicating
portal management code in C to manage multiple txns in the
compression policy.

This PR also disallows decompress_chunk, compress_chunk and
recompress_chunk in txn read only mode.

Fixes #3656
2021-10-13 09:11:59 -04:00