640 Commits

Author SHA1 Message Date
Sven Klemm
6dddfaa54e Lock down search_path in install scripts
This patch locks down search_path in extension install and update
scripts to only contain pg_catalog, this requires that any reference
in those scripts is fully qualified. Additionally we add explicit
create commands to all update scripts for objects added to the
public schema. This change will make update scripts fail if a
function with identical signature already exists when installing
or upgrading, instead of reusing the existing object.
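
For illustration, a minimal sketch of the approach (not the literal script
contents):

```sql
-- Pin down search_path so that only pg_catalog is resolved implicitly;
-- every other reference in the script must then be schema-qualified.
SET search_path TO pg_catalog;
SELECT pg_catalog.now();
```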
2022-02-09 17:53:20 +01:00
Sven Klemm
c8b8516e46 Fix extension installation privilege escalation
TimescaleDB was vulnerable to a privilege escalation attack in
the extension installation script. An attacker could precreate
objects normally owned by the extension and get those objects
used in the installation script since the script would only try
to create them if they did not already exist. Thanks to Pedro
Gallegos for reporting the problem.

This patch changes the schema, table and function creation to fail
and abort the installation when the object already exists instead
of using the existing object.
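
A minimal sketch of the idea, with a hypothetical schema name:

```sql
-- Before (vulnerable): an attacker-precreated object would be silently reused.
-- CREATE SCHEMA IF NOT EXISTS extension_schema;

-- After: installation aborts if the object already exists.
CREATE SCHEMA extension_schema;
```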

Security: CVE-2022-24128
2022-02-09 17:53:20 +01:00
Erik Nordström
e56b95daec Add telemetry stats based on type of relation
Refactor the telemetry function and format to include stats broken
down on common relation types. The types include:

- Tables
- Partitioned tables
- Hypertables
- Distributed hypertables
- Continuous aggregates
- Materialized views
- Views

and for each of these types report (when applicable):

- Total number of relations
- Total number of children/chunks
- Total data volume (broken into heap, toast, and indexes).
- Compression stats
- PG stats, like reltuples

The telemetry function has also been refactored to return `jsonb`
instead of `text`. This makes it easier to query and manipulate the
resulting JSON format, and also gives cleaner output.
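
For example, the report can now be inspected with ordinary `jsonb` operators
(a sketch; the internal helper name and the `relations` key are assumptions):

```sql
SELECT jsonb_pretty(_timescaledb_internal.get_telemetry_report() -> 'relations');
```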

Closes #3932
2022-02-08 09:44:55 +01:00
Sven Klemm
69b267071a Bump copyright year in license descriptions
Bump year in copyright information to 2022 and adjust some scripts
to reference NOTICE that didn't have the reference yet.
This patch also removes orphaned test/expected/utils.out.
2022-01-14 18:35:46 +01:00
Markos Fountoulakis
4762908bc8 Clean up dead code 2022-01-03 16:23:13 +02:00
gayyappan
d8d392914a Support for compression on continuous aggregates
Enable ALTER MATERIALIZED VIEW (timescaledb.compress)
This enables compression on the underlying materialized
hypertable. The segmentby and orderby columns for
compression are based on the GROUP BY clause and time_bucket
clause used while setting up the continuous aggregate.

Change the timescaledb_information.continuous_aggregate view
definition.

Add support for compression policy on continuous
aggregates

Move code from job.c to policy_utils.c
Add support functions to check compression
policy validity for continuous aggregates.
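
For example (continuous aggregate name is hypothetical):

```sql
-- Enable compression on the materialized hypertable behind the continuous aggregate.
ALTER MATERIALIZED VIEW conditions_summary_daily SET (timescaledb.compress = true);

-- Add a compression policy on the continuous aggregate.
SELECT add_compression_policy('conditions_summary_daily',
                              compress_after => INTERVAL '45 days');
```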
2021-12-17 10:51:33 -05:00
Erik Nordström
e0f02c8c1a Add option to drop database when deleting data node
When deleting a data node, it is often convenient to be able to also
drop the database on the data node so that the node can be added again
using the same database name. However, dropping the database is
optional since it should be possible to delete a data node even if it
is no longer responding.

With the new functionality, a data node's database can be dropped as
follows:

```sql
SELECT delete_data_node('dn1', drop_database=>true);
```

Note that the default behavior is still to not drop the database in
order to be compatible with the old behavior. Enabling the option also
makes the function non-transactional, since dropping a database is not
transactional. Therefore, it is not possible to use this option in a
transaction block.

Closes #3876
2021-12-16 15:59:50 +01:00
Aleksander Alekseev
91f3edf609 Refactoring: get rid of max_bucket_width
Our code occasionally mentions max_bucket_width. However, in practice, there is
no such thing. For fixed-sized buckets, bucket_width and max_bucket_width are
always the same, while for variable-sized buckets bucket_width is not used at
all (except the fact that it equals -1 to indicate that the bucket size is
variable).

This patch removes any use of max_bucket_width, except for arguments of:

- _timescaledb_internal.invalidation_process_hypertable_log()
- _timescaledb_internal.invalidation_process_cagg_log()

The signatures of these functions were not changed for backward compatibility
between access and data nodes, which can run different versions of TimescaleDB.
2021-12-16 16:33:20 +03:00
Aleksander Alekseev
958040699c Monthly buckets support in CAGGs
This patch allows using time_bucket_ng("N month", ...) in CAGGs. Users can also
specify years, or months AND years. CAGGs on top of distributed hypertables
are supported as well.
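
A sketch, assuming a hypothetical hypertable `conditions(day date, temperature float)`:

```sql
CREATE MATERIALIZED VIEW conditions_monthly
WITH (timescaledb.continuous) AS
SELECT timescaledb_experimental.time_bucket_ng('1 month', day) AS bucket,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket;
```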
2021-12-13 22:21:17 +03:00
Mats Kindahl
aae19319c0 Rewrite recompress_chunk as procedure
When executing `recompress_chunk` and a query at the same time, a
deadlock can be generated because the chunk relation, the chunk index,
and the compressed and uncompressed chunks are locked in different
orders. In particular, when `recompress_chunk` is executing, it will
first decompress the chunk and as part of that lock the uncompressed
chunk index in AccessExclusive mode and when trying to compress the
chunk again it will try to lock the uncompressed chunk in
AccessExclusive as part of truncating it.

Note that `decompress_chunk` and `compress_chunk` lock the relations in
the same order and the issue arises because the procedures are combined
into a single transaction.

To avoid the deadlock, this commit rewrites the `recompress_chunk` to
be a procedure and adds a commit between the decompression and
compression. Committing the transaction after the decompress will allow
reads and inserts to proceed by working on the uncompressed chunk, and
the compression part of the procedure will take the necessary locks in
strict order, thereby avoiding a deadlock.
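
Since it is now a procedure that commits internally, `recompress_chunk` is
invoked with `CALL` and cannot be used inside an explicit transaction block
(the chunk name below is hypothetical):

```sql
CALL recompress_chunk('_timescaledb_internal._hyper_1_2_chunk');
```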

In addition, the isolation test is rewritten so that instead of adding
a waitpoint in the PL/SQL function, we implement the isolation test by
taking a lock on the compressed table after the decompression.

Fixes #3846
2021-12-09 19:42:12 +01:00
Duncan Moore
1bc1993c61 Post release 2.5.1 2021-12-03 13:51:11 -05:00
Duncan Moore
94dc837394 Release 2.5.1 2021-12-02 15:11:36 -05:00
Mats Kindahl
112107546f Eliminate deadlock in recompress chunk policy
When executing the recompress chunk policy concurrently with queries, a
deadlock can be generated because the chunk relation, the chunk index,
the uncompressed chunk, and the compressed chunk are locked in
different orders. In particular, when the recompress chunk policy is
executing, it will first decompress the chunk and as part of that lock
the compressed chunk in `AccessExclusive` mode when dropping it and when
trying to compress the chunk again it will try to lock the uncompressed
chunk in `AccessExclusive` mode as part of truncating it.

To avoid the deadlock, this commit updates the recompress policy to do
the compression and the decompression steps in separate transactions,
which will avoid the deadlock since each phase (decompress and compress
chunk) locks indexes and compressed/uncompressed chunks in the same
order.

Note that this fixes the policy only, and not the `recompress_chunk`
function, which still is prone to deadlocks.

Partial-Bug: #3846
2021-11-30 18:04:30 +01:00
Erik Nordström
b78b25d317 Fail size utility functions when data nodes do not respond
Size utility functions, such as `hypertable_size()`, excluded
non-responding data nodes from size calculations, which led to the
functions succeeding but returning the wrong size information. To
avoid reporting confusing numbers, it is better to fail.

This change updates the SQL queries for the relevant functions to no
longer exclude non-responding data nodes and also adds a TAP test to
illustrate the error when data nodes are not responding.

Fixes #3713
2021-11-15 19:50:33 +01:00
Fabrízio de Royes Mello
7e3e771d9f Fix compression policy on tables using INTEGER
Commit fffd6c2350f5b3237486f3d49d7167105e72a55b fixed a problem related
to PortalContext by using a PL/pgSQL procedure to execute the policy.
Unfortunately this new implementation introduced a problem when we use
INTEGER and not BIGINT for the time dimension.

Fixed it by dealing correctly with the integer types: SMALLINT, INTEGER
and BIGINT.

Also refactored the policy compression procedure, replacing the two
procedures `policy_compression_{interval|integer}` with a simple
`policy_compression_execute` that casts the dimension type dynamically.
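
Usage is unchanged; a sketch for a hypothetical hypertable `events` partitioned
on an INTEGER column (assumes an integer-now function has already been set with
`set_integer_now_func`):

```sql
ALTER TABLE events SET (timescaledb.compress);
SELECT add_compression_policy('events', compress_after => 100000);
```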

Fixes #3773
2021-11-05 14:55:23 -03:00
Fabrízio de Royes Mello
df7f058ad1 Post release 2.5.0
Add 2.5.0 to update test scripts for PG12 and PG13 and create update
test script for PG14.

Add missing CHANGELOG thanks for external contributors.
2021-10-28 10:09:19 -03:00
Fabrízio de Royes Mello
8925dd8e15 Release 2.5.0
This release adds major new features since the 2.4.2 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* Continuous Aggregates for Distributed Hypertables
* Support for PostgreSQL 14
* Experimental: Support for timezones in `time_bucket_ng()`, including
the `origin` argument

This release also includes several bug fixes.

**Features**
* #3034 Add support for PostgreSQL 14
* #3435 Add continuous aggregates for distributed hypertables
* #3505 Add support for timezones in `time_bucket_ng()`

**Bugfixes**
* #3580 Fix memory context bug executing TRUNCATE
* #3592 Allow alter column type on distributed hypertable
* #3598 Improve evaluation of stable functions such as now() on access
node
* #3618 Fix execution of refresh_caggs from user actions
* #3625 Add shared dependencies when creating chunk
* #3626 Fix memory context bug executing TRUNCATE
* #3627 Schema qualify UDTs in multi-node
* #3638 Allow owner change of a data node
* #3654 Fix index attnum mapping in reorder_chunk
* #3661 Fix SkipScan path generation with constant DISTINCT column
* #3667 Fix compress_policy for multi txn handling
* #3673 Fix distributed hypertable DROP within a procedure
* #3701 Allow anyone to use size utilities on distributed hypertables
* #3708 Fix crash in get_aggsplit
* #3709 Fix ordered append pathkey check
* #3712 Fix GRANT/REVOKE ALL IN SCHEMA handling
* #3717 Support transparent decompression on individual chunks
* #3724 Fix inserts into compressed chunks on hypertables with caggs
* #3727 Fix DirectFunctionCall crash in distributed_exec
* #3728 Fix SkipScan with varchar column
* #3733 Fix ANALYZE crash with custom statistics for custom types
* #3747 Always reset expr context in DecompressChunk

**Thanks**
* @binakot and @sebvett for reporting an issue with DISTINCT queries
* @hardikm10, @DavidPavlicek and @pafiti for reporting bugs on TRUNCATE
* @mjf for reporting an issue with ordered append and JOINs
* @phemmer for reporting the issues on multinode with aggregate queries and evaluation of now()
* @abolognino for reporting an issue with INSERTs into compressed hypertables that have caggs
* @tanglebones for reporting the ANALYZE crash with custom types on multinode
2021-10-27 17:28:26 -03:00
Erik Nordström
e02f48c19d Add missing downgrade script files
Some of the downgrade script files were only added to the release
branch and no downgrade path existed for downgrading from the 2.4.2
release to the 2.4.1 release.

Add the missing downgrade files for 2.4.x releases to the main
development branch.

Also, avoid generating downgrade scripts for old versions, so that
only the downgrade script for the current version is generated by
default. A new CMake option `GENERATE_OLD_DOWNGRADE_SCRIPTS` is added
to control this behavior. Note that `GENERATE_DOWNGRADE_SCRIPT` needs
to be `ON` for the new option to work.

The reason we want to keep the ability to generate old downgrade
scripts is to be able to fix bugs in old downgrade scripts.
2021-10-27 18:23:57 +02:00
Markos Fountoulakis
221437e8ef Continuous aggregates for distributed hypertables
Add support for continuous aggregates for distributed hypertables by
allowing a continuous aggregate to read from a distributed hypertable
so that the continuous aggregate is on the access node while the
hypertable data is on the data nodes.

For distributed hypertables, both the hypertable and continuous
aggregate invalidation log are kept on the data nodes and the refresh
window is computed at refresh time on each data node. Since the
continuous aggregate materialization hypertable is not present on the
data nodes, the invalidation log was extended to allow using a
non-local hypertable id on the data nodes. This means that you cannot
create continuous aggregates on the data nodes since those could clash
with continuous aggregates on the access node.

Some utility statements added entries to the invalidation logs
directly (truncating chunks and hypertables, as well as dropping
individual chunks), so to handle this case, internal functions were
added to allow logging invalidation on the data nodes from the access
node.

The commit also includes some fixes to memory context usage that
caused crashes for invalidation triggers, and also disables per data
node queries during refresh since that would otherwise generate an
exception.
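
From a user's perspective, the continuous aggregate is created on the access
node just as for a regular hypertable (names below are hypothetical):

```sql
CREATE MATERIALIZED VIEW conditions_summary_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 hour', time) AS bucket,
       avg(temperature) AS avg_temp
FROM conditions  -- a distributed hypertable
GROUP BY bucket;

CALL refresh_continuous_aggregate('conditions_summary_hourly', NULL, NULL);
```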

Fixes #3435

Co-authored-by: Mats Kindahl <mats@timescale.com>
2021-10-25 18:20:11 +03:00
gayyappan
fffd6c2350 Use plpgsql procedure for executing compression policy
This PR removes the C code that executes the compression
policy. Instead we use a PL/pgSQL procedure to execute
the policy.

PG13.4 and PG12.8 introduced some changes
that require PortalContexts while executing transactions.
The compression policy procedure compresses chunks in
multiple transactions. We have seen some issues with snapshots
and portal management in the policy code (due to the
PG13.4 code changes). SPI API has transaction-portal management
code. However, the compression policy code does not use SPI
interfaces. But it is fairly easy to just convert this into
a PL/pgSQL procedure (which calls SPI) rather than replicating
portal management code in C to manage multiple txns in the
compression policy.

This PR also disallows decompress_chunk, compress_chunk and
recompress_chunk in read-only transaction mode.

Fixes #3656
2021-10-13 09:11:59 -04:00
gayyappan
c55cbb9350 Expose subtract_integer_from_now as SQL function
Move subtract_integer_from_now to src directory
and create a SQL function for it.
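
A sketch of a call (the exact signature is an assumption; the second argument
is the integer lag to subtract from the hypertable's integer "now"):

```sql
SELECT _timescaledb_internal.subtract_integer_from_now('events'::regclass, 1000);
```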
2021-10-13 09:11:59 -04:00
Fabrízio de Royes Mello
609b5ea34a Refactor SQL function approximate_row_count
Simplify the CTE to recursively inspect all partitions of a relation
and calculate the sum of `pg_class.reltuples`, taking into account the
differences introduced by PG14.
2021-10-01 14:20:17 -03:00
Sven Klemm
6838fcf906 Post 2.4.2 release
Update version numbers and add 2.4.2 to update tests.

We have to put the DROP FUNCTION back in latest-dev because
2.4.2 did not include the commit which removed the function
definitions.
2021-09-22 18:20:30 +02:00
Sven Klemm
3f944bee82 Release 2.4.2
This release contains bug fixes since the 2.4.1 release.
We deem it high priority to upgrade.

**Bugfixes**
* #3437 Rename on all continuous aggregate objects
* #3469 Use signal-safe functions in signal handler
* #3520 Modify compression job processing logic
* #3527 Fix time_bucket_ng behaviour with origin argument
* #3532 Fix bootstrap with regresschecks disabled
* #3574 Fix failure on job execution by background worker
* #3590 Call cleanup functions on backend exit

**Thanks**
* @jankatins for reporting a crash with background workers
* @LutzWeischerFujitsu for reporting an issue with bootstrap
2021-09-20 19:22:13 +02:00
Mats Kindahl
592e0bd46e Rename on all continuous aggregate objects
When renaming a column on a continuous aggregate, only the user view
column was renamed. This commit changes this by applying the rename on
the materialized table, the user view, the direct view, and the partial
view, as well as the column name in the dimension table.
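
For example (continuous aggregate and column names are hypothetical):

```sql
ALTER MATERIALIZED VIEW conditions_summary_daily RENAME COLUMN bucket TO day;
```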

Since this also changes some of the table definitions, we need to
perform the same rename in the update scripts as well, which is done by
comparing the direct view and the user view to decide what columns that
require a rename and then executing that rename on the direct view,
partial view, and materialization table, as well as updating the column
name in the dimension table.

When renaming columns in tables with indexes, the column in the table
is renamed but not the column in the index, which keeps the same name.
When restoring from a dump, however, the column name of the table is
used, which creates a diff in the update test. For that reason, we
change the update tests to not list index definitions as part of the
comparison. The existence of the indexes is still tracked and compared
since the indexes for a hypertable are listed as part of the output.

If a downgrade does not revert some changes, this will cause a diff in
the downgrade test. Since the rename is benign and not easy to revert,
this will cause test failure. Instead, we add a file to do extra
actions during a clean-rerun to prevent these diffs. In this case,
applying the same renames as the update script.

Fixes #3405
2021-09-20 12:18:52 +02:00
Sven Klemm
43bb5ba7d1 Optimize approximate_row_count
Rewrite approximate_row_count to SQL instead of PL/pgSQL and remove
superfluous JOINs against pg_namespace. Adjust the tuple calculation
for PG14, since in PG14 reltuples for partitioned tables is the sum
of its children, so we need to exclude those from the calculation to
avoid double counting.
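
Usage stays the same (the table name is hypothetical):

```sql
SELECT approximate_row_count('conditions');
```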
2021-09-16 14:57:47 +02:00
Aleksander Alekseev
ab71c4a1c1 Mark time_bucket_ng() as IMMUTABLE
This patch marks time_bucket_ng() as IMMUTABLE. Two exceptions are:

- time_bucket_ng(interval, timestamptz) timestamptz
- time_bucket_ng(interval, timestamptz, timestamptz) timestamptz

... due to their implementation, see the comments. These two overloaded
versions were introduced only for backward compatibility with time_bucket()
and are not needed for building continuous aggregates.
2021-09-07 20:00:31 +03:00
Aleksander Alekseev
dc67eb75d6 time_bucket_ng() version with origin argument
This patch adds a version of time_bucket_ng() with 'origin' argument.
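
A sketch of a call, with the third argument being the origin (the dates are
arbitrary examples):

```sql
SELECT timescaledb_experimental.time_bucket_ng('1 month', date '2021-08-15', date '2021-06-01');
```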

It doesn't address any other known issues. E.g. volatility of the function
will be changed in another patch. The error messages are going to be improved
when the feature gets a little more stable.
2021-09-03 17:36:29 +03:00
gayyappan
dfc63fe063 Remove legacy functions
Remove _timescaledb_internal.time_col_name_for_chunk and
_timescaledb_internal.time_col_type_for_chunk functions.

Fixes #3539
2021-09-02 15:25:59 -04:00
Aleksander Alekseev
22e77a77ad Support timezones in time_bucket_ng()
This patch adds support of timezones in time_bucket_ng(). The 'origin'
argument can't be used with timezones yet. This will be implemented in
a separate pull request.
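
A sketch of a call with the new timezone argument (values are arbitrary
examples):

```sql
SELECT timescaledb_experimental.time_bucket_ng('1 day',
       timestamptz '2021-08-31 23:30:00+00', 'Europe/Moscow');
```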
2021-09-01 11:23:56 +03:00
Mats Kindahl
86cd8f6532 Post 2.4.1 release
Updates `version.config` and adds new version to update tests.
2021-08-24 10:42:29 +02:00
Mats Kindahl
4e0954e87f Release 2.4.1
This release contains bug fixes since the 2.4.0 release.  We deem it
high priority to upgrade.

The release fixes continuous aggregate refresh for PostgreSQL 12.8 and
13.4, a crash with ALTER TABLE commands and a crash with continuous
aggregates with HAVING clause.

**Bugfixes**
* #3430 Fix havingqual processing for continuous aggregates
* #3468 Disable tests by default if tools are not found
* #3462 Fix crash while tracking alter table commands
* #3489 Fix continuous agg bgw job failure for PG 12.8 and 13.4
* #3494 Improve error message when adding data nodes

**Thanks**
* @brianbenns for reporting a segfault with continuous aggregates
* @usego for reporting an issue with continuous aggregate refresh on PG 13.4
2021-08-19 19:16:20 +02:00
gayyappan
9ea77fb97f Post release 2.4.0
Add tests to update from 2.4.0
Fix up version.config
Fix view creation stmt in views_experimental.sql that
causes extension update failures.
2021-07-30 17:51:30 -04:00
gayyappan
63f2bdfc9e Release 2.4.0
This release adds new experimental features since the 2.3.1 release.

The experimental features in this release are:
* APIs for chunk manipulation across data nodes in a distributed
hypertable setup. This includes the ability to add a data node and move
chunks to the new data node for cluster rebalancing.
* The `time_bucket_ng` function, a newer version of `time_bucket`. This
function supports years, months, days, hours, minutes, and seconds.

We’re committed to developing these experiments, giving the community
 a chance to provide early feedback and influence the direction of
TimescaleDB’s development. We’ll travel faster with your input!

Please create your feedback as a GitHub issue (using the
experimental-schema label), describe what you found, and tell us the
steps or share the code snippet to recreate it.

This release also includes several bug fixes.

PostgreSQL 11 deprecation announcement
Timescale is working hard on our next exciting features. To make that
possible, we require functionality that is available in Postgres 12 and
above. Postgres 11 is not supported with TimescaleDB 2.4.

**Experimental Features**
* #3293 Add timescaledb_experimental schema
* #3302 Add block_new_chunks and allow_new_chunks API to experimental
schema. Add chunk based refresh_continuous_aggregate.
* #3211 Introduce experimental time_bucket_ng function
* #3366 Allow use of experimental time_bucket_ng function in continuous aggregates
* #3408 Support for seconds, minutes and hours in time_bucket_ng
* #3446 Implement cleanup for chunk copy/move.

**Bugfixes**
* #3401 Fix segfault for RelOptInfo without fdw_private
* #3411 Verify compressed chunk validity for compressed path
* #3416 Fix targetlist names for continuous aggregate views
* #3434 Remove extension check from relcache invalidation callback
* #3440 Fix remote_tx_heal_data_node to work with only current database

**Thanks**
* @fvannee for reporting an issue with hypertable expansion in functions
* @amalek215 for reporting an issue with cache invalidation during pg_class vacuum full
* @hardikm10 for reporting an issue with inserting into compressed chunks
* @dberardo-com and @iancmcc for reporting an issue with extension updates after renaming columns of continuous aggregates.
2021-07-29 13:59:32 -04:00
gayyappan
c9adab63b3 Revert cagg changes from reverse-dev.sql
Cagg changes that were removed from reverse-dev.sql
by PR 3443 were inadvertently added back in by PR 3446.
2021-07-29 13:59:32 -04:00
Nikhil
2ffa1bf436 Implement cleanup for chunk copy/move
A chunk copy/move operation is carried out in stages and it can
fail in any of them. We track the last completed stage in the
"chunk_copy_operation" catalog table. In case of failure, a
"chunk_copy_cleanup" function can be invoked to bring the chunk back
to its original state on the source data node, and all transient objects
like replication slot, publication, subscription, empty chunk, metadata
updates, etc. are cleaned up.

Includes test case changes for failures induced at each and every stage.

To avoid confusion between chunk copy activity and chunk copy operation,
this patch also consistently uses "operation" everywhere now instead of
"activity".
2021-07-29 16:53:12 +03:00
Erik Nordström
352dc9baec Remove copy_chunk_data from downgrade script
The internal function `copy_chunk_data` was removed as part of
refactoring and is no longer necessary to remove in the downgrade
script since the function was never part of a release.
2021-07-29 16:53:12 +03:00
Erik Nordström
b4710501dd Add experimental chunk replication view
A new view in the experimental schema shows information related to
chunk replication. The view can be used to learn the replication
status of a chunk while also providing a way to easily find nodes to
move or copy chunks between in order to ensure a fully replicated
multi-node cluster.
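
A sketch of how the view might be queried (the view name
`chunk_replication_status` is an assumption):

```sql
SELECT * FROM timescaledb_experimental.chunk_replication_status;
```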

Tests have been added to illustrate the potential usage.
2021-07-29 16:53:12 +03:00
Dmitry Simonenko
38c1781748 Copy/move chunk refactoring
Remove copy_chunk_data() function and code needed to support it,
such as the 'transactional' argument.

Rework copy chunk logic using separate stages.

Introduce the copy_chunk() API function as an internal wrapper for
move_chunk().
2021-07-29 16:53:12 +03:00
Nikhil
f6b0250557 Implement wrapper API for copy/move chunk
The building blocks required for implementing end-to-end copy/move
chunk functionality have now been wrapped in a procedure.

A procedure is required because multiple transactions are needed to
carry out the activity across the access node and the involved two data
nodes.
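
A sketch of the wrapper call (procedure and parameter names are assumptions
based on the experimental schema):

```sql
CALL timescaledb_experimental.move_chunk(
    chunk => '_timescaledb_internal._dist_hyper_1_1_chunk',
    source_node => 'data_node_1',
    destination_node => 'data_node_2');
```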

The following steps are encapsulated in this procedure

1) Create an empty chunk table on the destination data node

2) Copy the data from the src data node chunk to this newly created
destination node chunk. This is done via inbuilt PostgreSQL logical
replication functionality

3) Attach this chunk to the hypertable on the dst data node

4) Remove this chunk from the src data node to complete the move if
requested

A new catalog table "chunk_copy_activity" has been added to track
the progress of the above stages. A unique id gets assigned to each
activity and it is updated with the completed stages as things
progress.
2021-07-29 16:53:12 +03:00
Dmitry Simonenko
2c66c1fd64 Introduce function to copy chunk data between data nodes
Add internal copy_chunk_data() function which implements a way
to copy chunk data between data nodes using logical
replication.

This patch prepared together with @nikkhils.
2021-07-29 16:53:12 +03:00
Erik Nordström
b8ff780c50 Add ability to create chunk from existing table
The `create_chunk` API has been extended to allow creating a chunk
from an existing relational table. The table is turned into a chunk by
attaching it to the root hypertable via inheritance.

The purpose of this functionality is to allow copying a chunk to
another node. First, the chunk table and data is copied. After that,
the `create_chunk` can be executed to make the new table part of the
hypertable.

Currently, the relational table used to create the chunk has to match
the hypertable in terms of constraints, triggers, etc. PostgreSQL
itself enforces the existence of same-named CHECK constraints, but no
enforcement currently exists for other objects, including triggers and
UNIQUE, PRIMARY KEY, or FOREIGN KEY constraints. Such enforcement can
be implemented in the future, if deemed necessary. Another option is
to automatically add all the required objects (triggers, constraints)
based on the hypertable equivalents. However, that might also lead to
duplicate objects in case some of them exist on the table prior to
creating the chunk.
2021-07-29 16:53:12 +03:00
Nikhil
762053431e Implement drop_chunk_replica API
This function drops a chunk on a specified data node. It then removes
the metadata about the association between the data node and the chunk
on the access node.

This function is meant for internal use as part of the "move chunk"
functionality.

If only one chunk replica remains then this function refuses to drop it
to avoid data loss.
2021-07-29 16:53:12 +03:00
Ruslan Fomkin
404f1cdbad Create chunk table from access node
Creates a table for a chunk replica on the given data node. The table
gets the same schema and name as the chunk. The created chunk replica
table is not added into metadata on the access node or data node.

The primary goal is to use it during copy/move chunk.
2021-07-29 16:53:12 +03:00
Ruslan Fomkin
34e99a1c23 Return error for NULL input to create_chunk_table
Gives errors if any argument of create_chunk_table is NULL, instead of
making the function STRICT. Utilizes newly added macros for this.
2021-07-29 16:53:12 +03:00
Ruslan Fomkin
28ccecbe7c Create an empty chunk table
Adds an internal API function to create an empty chunk table according
to the given hypertable for the given chunk table name and dimension
slices. This function creates a chunk table inheriting from the
hypertable, so it guarantees the same schema. No TimescaleDB
metadata is updated.

To be able to create the chunk table in a tablespace attached to the
hypertable, this commit allows calculating the tablespace id without
the dimension slice existing in the catalog.

If there is already a chunk, which collides on dimension slices, the
function fails to create the chunk table.

The function will be used internally in multi-node to be able to
replicate a chunk from one data node to another.
2021-07-29 16:53:12 +03:00
Sven Klemm
ded8e82ebb Fix downgrade files for 2.3.1
The cagg rebuild was required for a fix introduced in 2.3.1 but
was not removed from reverse-dev.sql when that version got released.
2021-07-28 17:08:13 +02:00
Aleksander Alekseev
99f7a2122f Support seconds, minutes, and hours in time_bucket_ng()
As a future replacement for time_bucket(), time_bucket_ng()
should support seconds, minutes, and hours. This patch adds
this support. The implementation is the same as for
time_bucket(). Timezones are not yet supported.
2021-07-20 12:34:57 +03:00
Mats Kindahl
b5ffc71071 Post-release steps for release 2.3.1
Add 2.3.1 to the update tests and update the downgrade target for the
downgrade version. This commit also fixes two issues that were fixed in
the release branch:

1. Drop `_timescaledb_internal.refresh_continuous_aggregate`
2. Update the changelog to match the release branch.
2021-07-08 14:31:01 +02:00
Mats Kindahl
06433f6228 Release 2.3.1
**Bugfixes**
* #3279 Add some more randomness to chunk assignment
* #3288 Fix failed update with parallel workers
* #3300 Improve trigger handling on distributed hypertables
* #3304 Remove paths that reference parent relids for compressed chunks
* #3305 Fix pull_varnos miscomputation of relids set
* #3310 Generate downgrade script
* #3314 Fix heap buffer overflow in hypertable expansion
* #3317 Fix heap buffer overflow in remote connection cache.
* #3327 Make aggregates in caggs fully qualified
* #3336 Fix pg_init_privs objsubid handling
* #3345 Fix SkipScan distinct column identification
* #3355 Fix heap buffer overflow when renaming compressed hypertable columns.
* #3367 Improve DecompressChunk qual pushdown
* #3377 Fix bad use of repalloc

**Thanks**
* @db-adrian for reporting an issue when accessing cagg view through postgres_fdw
* @fncaldas and @pgwhalen for reporting an issue accessing caggs when public is not in search_path
* @fvannee, @mglonnro and @ebreijo for reporting an issue with the upgrade script
* @fvannee for reporting a performance regression with SkipScan
2021-07-02 09:12:38 +02:00