603 Commits

Erik Nordström
b4710501dd Add experimental chunk replication view
A new view in the experimental schema shows information related to
chunk replication. The view can be used to learn the replication
status of a chunk while also providing a way to easily find nodes to
move or copy chunks between in order to ensure a fully replicated
multi-node cluster.

Tests have been added to illustrate the potential usage.
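
A minimal sketch of how the view might be queried; the view name and
column names here are assumptions for illustration, not confirmed by
this commit:

    SELECT chunk_name, replica_nodes, non_replica_nodes
    FROM timescaledb_experimental.chunk_replication_status;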
2021-07-29 16:53:12 +03:00
Dmitry Simonenko
38c1781748 Copy/move chunk refactoring
Remove copy_chunk_data() function and code needed to support it,
such as the 'transactional' argument.

Rework copy chunk logic using separate stages.

Introduce copy_chunk() API function as an internal wrapper for
move_chunk().
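
A usage sketch, assuming the procedure lives in the experimental
schema and takes chunk and node-name arguments:

    CALL timescaledb_experimental.copy_chunk(
        chunk => '_timescaledb_internal._dist_hyper_1_1_chunk',
        source_node => 'data_node_1',
        destination_node => 'data_node_2');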
2021-07-29 16:53:12 +03:00
Nikhil
f6b0250557 Implement wrapper API for copy/move chunk
The building blocks required for implementing end-to-end copy/move
chunk functionality have now been wrapped in a procedure.

A procedure is required because multiple transactions are needed to
carry out the activity across the access node and the two involved
data nodes.

The following steps are encapsulated in this procedure:

1) Create an empty chunk table on the destination data node

2) Copy the data from the src data node chunk to this newly created
destination node chunk. This is done via built-in PostgreSQL logical
replication functionality

3) Attach this chunk to the hypertable on the dst data node

4) Remove this chunk from the src data node to complete the move if
requested

A new catalog table "chunk_copy_activity" has been added to track
the progress of the above stages. A unique id gets assigned to each
activity and it is updated with the completed stages as things
progress.
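
Progress could then be inspected directly from the catalog; the
column names below are assumptions for illustration:

    SELECT id, completed_stage
    FROM _timescaledb_catalog.chunk_copy_activity;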
2021-07-29 16:53:12 +03:00
Dmitry Simonenko
2c66c1fd64 Introduce function to copy chunk data between data nodes
Add internal copy_chunk_data() function which implements a way
to copy chunk data between data nodes using logical
replication.

This patch was prepared together with @nikkhils.
2021-07-29 16:53:12 +03:00
Erik Nordström
b8ff780c50 Add ability to create chunk from existing table
The `create_chunk` API has been extended to allow creating a chunk
from an existing relational table. The table is turned into a chunk by
attaching it to the root hypertable via inheritance.

The purpose of this functionality is to allow copying a chunk to
another node. First, the chunk table and data are copied. After that,
`create_chunk` can be executed to make the new table part of the
hypertable.

Currently, the relational table used to create the chunk has to match
the hypertable in terms of constraints, triggers, etc. PostgreSQL
itself enforces the existence of same-named CHECK constraints, but no
enforcement currently exists for other objects, including triggers and
UNIQUE, PRIMARY KEY, or FOREIGN KEY constraints. Such enforcement can
be implemented in the future, if deemed necessary. Another option is
to automatically add all the required objects (triggers, constraints)
based on the hypertable equivalents. However, that might also lead to
duplicate objects in case some of them exist on the table prior to
creating the chunk.
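
A sketch of the intended flow; the chunk_table argument and the slice
boundaries shown are assumptions for illustration:

    -- Table created to match the hypertable and populated with the
    -- chunk's data
    CREATE TABLE conditions_chunk (LIKE conditions INCLUDING ALL);
    -- Attach it to the hypertable as a chunk
    SELECT _timescaledb_internal.create_chunk(
        'conditions',
        '{"time": [1514419200000000, 1515024000000000]}',
        chunk_table => 'conditions_chunk');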
2021-07-29 16:53:12 +03:00
Nikhil
762053431e Implement drop_chunk_replica API
This function drops a chunk on a specified data node. It then removes
the metadata about the data node/chunk association on the access node.

This function is meant for internal use as part of the "move chunk"
functionality.

If only one chunk replica remains then this function refuses to drop it
to avoid data loss.
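
A hypothetical call, with the function name taken from the commit
title and the signature assumed:

    SELECT _timescaledb_internal.drop_chunk_replica(
        '_timescaledb_internal._dist_hyper_1_1_chunk', 'data_node_3');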
2021-07-29 16:53:12 +03:00
Ruslan Fomkin
404f1cdbad Create chunk table from access node
Creates a table for a chunk replica on the given data node. The table
gets the same schema and name as the chunk. The created chunk replica
table is not added into metadata on the access node or data node.

The primary goal is to use it during copy/move chunk.
2021-07-29 16:53:12 +03:00
Ruslan Fomkin
34e99a1c23 Return error for NULL input to create_chunk_table
Gives errors if any argument of create_chunk_table is NULL, instead of
declaring the function STRICT. Utilizes newly added macros for this.
2021-07-29 16:53:12 +03:00
Ruslan Fomkin
28ccecbe7c Create an empty chunk table
Adds an internal API function to create an empty chunk table according
to the given hypertable for the given chunk table name and dimension
slices. This function creates a chunk table inheriting from the
hypertable, so it guarantees the same schema. No TimescaleDB
metadata is updated.

To be able to create the chunk table in a tablespace attached to the
hypertable, this commit allows calculating the tablespace id without
the dimension slice existing in the catalog.

If there is already a chunk that collides on dimension slices, the
function fails to create the chunk table.

The function will be used internally in multi-node to be able to
replicate a chunk from one data node to another.
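
A usage sketch; the argument order (hypertable, dimension slices,
chunk schema, chunk name) is an assumption:

    SELECT _timescaledb_internal.create_chunk_table(
        'conditions',
        '{"time": [1514419200000000, 1515024000000000]}',
        '_timescaledb_internal',
        '_hyper_1_1_chunk');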
2021-07-29 16:53:12 +03:00
Sven Klemm
ded8e82ebb Fix downgrade files for 2.3.1
The cagg rebuild was required for a fix introduced in 2.3.1 but
was not removed from reverse-dev.sql when that version got released.
2021-07-28 17:08:13 +02:00
Aleksander Alekseev
99f7a2122f Support seconds, minutes, and hours in time_bucket_ng()
As a future replacement for time_bucket(), time_bucket_ng()
should support seconds, minutes, and hours. This patch adds
this support. The implementation is the same as for
time_bucket(). Timezones are not yet supported.
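
For example, bucketing by a 15-minute interval:

    SELECT timescaledb_experimental.time_bucket_ng('15 minutes',
        TIMESTAMP '2021-07-20 12:34:57');
    -- 2021-07-20 12:30:00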
2021-07-20 12:34:57 +03:00
Mats Kindahl
b5ffc71071 Post-release steps for release 2.3.1
Add 2.3.1 to the update tests and update the downgrade target for the
downgrade version. This commit also fixes two issues that were fixed in
the release branch:

1. Drop `_timescaledb_internal.refresh_continuous_aggregate`
2. Update the changelog to match the release branch.
2021-07-08 14:31:01 +02:00
Mats Kindahl
06433f6228 Release 2.3.1
**Bugfixes**
* #3279 Add some more randomness to chunk assignment
* #3288 Fix failed update with parallel workers
* #3300 Improve trigger handling on distributed hypertables
* #3304 Remove paths that reference parent relids for compressed chunks
* #3305 Fix pull_varnos miscomputation of relids set
* #3310 Generate downgrade script
* #3314 Fix heap buffer overflow in hypertable expansion
* #3317 Fix heap buffer overflow in remote connection cache
* #3327 Make aggregates in caggs fully qualified
* #3336 Fix pg_init_privs objsubid handling
* #3345 Fix SkipScan distinct column identification
* #3355 Fix heap buffer overflow when renaming compressed hypertable columns
* #3367 Improve DecompressChunk qual pushdown
* #3377 Fix bad use of repalloc

**Thanks**
* @db-adrian for reporting an issue when accessing cagg view through postgres_fdw
* @fncaldas and @pgwhalen for reporting an issue accessing caggs when public is not in search_path
* @fvannee, @mglonnro and @ebreijo for reporting an issue with the upgrade script
* @fvannee for reporting a performance regression with SkipScan
2021-07-02 09:12:38 +02:00
Mats Kindahl
a58ebdb3b4 Split update and downgrade version
During an update, it is not possible to run the downgrade scripts until
the release has been tagged, but the update scripts can be run. This
means that we need to split the previous version into two different
fields: one for running the update tests and one for running the
downgrade tests.
2021-07-01 16:05:31 +02:00
davidkohn88
bfd92ab822 Use CREATE OR REPLACE AGGREGATE
From PG12 on, CREATE OR REPLACE is supported for aggregates.
Therefore, since we have dropped support for PG11, we can avoid
going through the rigamarole of having our aggregates in a separate
file from the functions we define to support them. Nor do we need to
handle aggregates separately from other functions as their creation
is now idempotent.
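
The PG12+ syntax this relies on, shown with a purely illustrative
aggregate built on PostgreSQL's built-in float8pl function:

    CREATE OR REPLACE AGGREGATE my_sum(double precision) (
        SFUNC = float8pl,          -- float8 + float8 transition function
        STYPE = double precision,
        INITCOND = '0'
    );
    -- Re-running the same statement is now idempotent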
2021-07-01 07:40:46 +02:00
Sven Klemm
ff5d7e42bb Adjust code to PG14 reltuples changes
PG14 changes the initial value of pg_class.reltuples to -1 to allow
differentiating between an empty relation and a relation where
ANALYZE has not yet run.

https://github.com/postgres/postgres/commit/3d351d916b
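
For example, never-analyzed relations can now be identified directly:

    -- On PG14, -1 means ANALYZE has not run; 0 means analyzed and empty
    SELECT relname FROM pg_class WHERE reltuples = -1;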
2021-06-29 16:35:35 +02:00
Aleksander Alekseev
e57155fdc0 time_bucket_ng() may be IMMUTABLE depending on arguments.
This function is IMMUTABLE when it doesn't accept timestamptz arguments,
and STABLE otherwise. See the comments in sql/time_bucket_ng.sql for
more details.
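
The declared volatility of each overload can be checked in the catalog:

    SELECT pg_get_function_arguments(oid) AS args, provolatile
    FROM pg_proc
    WHERE proname = 'time_bucket_ng';
    -- 'i' = IMMUTABLE, 's' = STABLE (the timestamptz overloads)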
2021-06-25 12:36:14 +03:00
Mats Kindahl
15b46818ea Generate downgrade script
This commit adds functions and code to generate a downgrade script from
the current version to the previous version. This requires execution
from a Git repository since it retrieves the prolog and epilog for the
"downgrade" file from the version given by `update_from_version` in the
`version.config` file.

The commit adds several CMake functions that simplifies the composition
of script files, but these are not used to generate the update scripts.
A potential improvement is to use the scripts to also generate the
update scripts.

This commit supports generating a downgrade script from the
current version to the previous version. Other versions are handled
using a variable containing file names of reverse update
scripts; the source and target versions are extracted from the file
names, which are assumed to be of the form
`<source-version>--<target-version>.sql`.

In addition to adding support for generating downgrade scripts, the
commit adds a downgrade test file that tests a release in a similar way
to the update script and adds it as a workflow.
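
A generated downgrade is applied the same way as an update; for
example, assuming 2.3.1 is the previous version:

    ALTER EXTENSION timescaledb UPDATE TO '2.3.1';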
2021-06-24 11:10:25 +02:00
Aleksander Alekseev
33dfdcf5ea Introduce experimental time_bucket_ng() function
This patch adds time_bucket_ng() function to the experimental
schema. The "ng" part stands for "next generation". Unlike the
current time_bucket() implementation, the _ng version will support
months, years, and timezones.

The current implementation doesn't claim to be complete. For instance,
it doesn't support timezones yet. The reasons to commit it in its
current state are 1) to shorten the feedback loop, 2) to start
experimenting with monthly buckets as soon as possible,
3) to reduce the unnecessary work of rebasing and resolving
conflicts, and 4) to make the work easier for the reviewers.
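
For example, monthly buckets already work:

    SELECT timescaledb_experimental.time_bucket_ng('1 month', DATE '2021-06-21');
    -- 2021-06-01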
2021-06-21 12:00:18 +03:00
Nikhil
8aaef4ae14 Fix update tests to handle sequences
The post-update script handled preserving initprivs for newly
added catalog tables and views. However, newly added catalog sequences
need separate handling, otherwise update tests start failing. We also
now grant privileges for all future sequences in the update tests.

In passing, default the PG_VERSION in the update tests to 12 since we
don't work with PG11 anymore.
2021-06-21 13:22:08 +05:30
Mats Kindahl
71e8f13871 Add workflow and CMake support for formatting
Add a workflow to check that CMake files are correctly formatted as
well as a custom target to format CMake files in the repository. This
commit also runs the formatting on all CMake files in the repository.
2021-06-17 22:52:29 +02:00
Sven Klemm
c4c0c3de4f Harden pg_init_privs query
Since we are only interested in entries with classoid pg_class,
our queries should reflect that. Without these restrictions
objects that have entries for multiple classoids can cause the
extension update to fail.
2021-06-16 10:15:53 +02:00
Markos Fountoulakis
6e7679e222 Fix failed update with parallel workers
Executing "ALTER EXTENSION timescaledb UPDATE TO ..." will fail if
parallel workers are spawned for the update itself. Disable parallel
execution during the update.
2021-06-15 17:50:22 +03:00
Sven Klemm
6d6172b027 Fix pg_init_privs objsubid handling
pg_init_privs can have multiple entries per relation if the relation
has per column privileges. An objsubid different from 0 means that
the entry is for a column privilege. Since we do not currently
restore column privileges, we have to ignore those rows; otherwise
the update script will fail when tables with column privileges are
present.
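
The resulting restriction looks roughly like this sketch:

    -- Only whole-relation entries for pg_class; rows with objsubid <> 0
    -- are per-column privileges, which we do not restore
    SELECT objoid::regclass, initprivs
    FROM pg_init_privs
    WHERE classoid = 'pg_class'::regclass AND objsubid = 0;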
2021-06-15 10:15:51 +02:00
Sven Klemm
f1ae6468d8 Make aggregate in caggs fully qualified
When querying continuous aggregate views with a search_path not
including public, the query will fail because the function reference
in the finalize call is not fully qualified.

This can surface when querying caggs through postgres_fdw which
resets search_path to contain only pg_catalog.
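
A repro sketch with a hypothetical cagg, mimicking the search_path
that postgres_fdw sets:

    SET search_path TO pg_catalog;
    SELECT * FROM public.conditions_summary;  -- failed before this fix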

Fixes #1919
Fixes #3326
2021-06-14 18:21:27 +02:00
Erik Nordström
264b77eb20 Move internal API functions to experimental schema
Move the "block new chunks" functions and the chunk-based continuous
aggregate refresh function to the new experimental schema.
2021-06-09 14:47:16 +02:00
Mats Kindahl
10e339f591 Add experimental schema
This commit adds an experimental schema where experimental features can
be added.
2021-06-04 08:28:27 +02:00
Sven Klemm
dd3e422964 Remove update files for PG11
Remove update files that were only needed on PG11 since those
TimescaleDB versions do not support PostgreSQL past PG11.
2021-06-01 20:21:06 +02:00
Juanito Fatas
9fe90ebcea Fix broken links in TimescaleDB NOTICE
Link to the telemetry page, which explains more than just how to
disable it.

Co-authored-by: Mike Freedman <mfreed@cs.princeton.edu>
2021-05-31 06:46:44 +02:00
Erik Nordström
77a3d5a033 Release 2.3.0
This release adds major new features since the 2.2.1 release. We deem
it moderate priority for upgrading.

This release adds support for inserting data into compressed chunks
and improves performance when inserting data into distributed
hypertables. Distributed hypertables now also support triggers and
compression policies.

The bug fixes in this release address issues related to the handling
of privileges on compressed hypertables, locking, and triggers with
transition tables.

**Features**
* #3116 Add distributed hypertable compression policies
* #3162 Use COPY when executing distributed INSERTs
* #3199 Add GENERATED column support on distributed hypertables
* #3210 Add trigger support on distributed hypertables
* #3230 Support for inserts into compressed chunks

**Bugfixes**
* #3213 Propagate grants to compressed hypertables
* #3229 Use correct lock mode when updating chunk
* #3243 Fix assertion failure in decompress_chunk_plan_create
* #3250 Fix constraint triggers on hypertables
* #3251 Fix segmentation fault due to incorrect call to chunk_scan_internal
* #3252 Fix blocking triggers with transition tables

**Thanks**
* @yyjdelete for reporting a crash with decompress_chunk and identifying the bug in the code
* @fabriziomello for documenting the prerequisites when compiling against PostgreSQL 13
2021-05-25 15:11:55 +02:00
Sven Klemm
fe872cb684 Add policy_recompression procedure
This patch adds a recompress procedure that may be used as a custom
job when compression and recompression should run as separate
background jobs.
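
A sketch of registering it as a custom job; the config keys shown are
assumptions for illustration:

    SELECT add_job('_timescaledb_internal.policy_recompression', '1 day',
        config => '{"hypertable_id": 1, "recompress_after": "7 days"}');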
2021-05-24 18:03:47 -04:00
gayyappan
4f865f7870 Add recompress_chunk function
After inserts go into a compressed chunk, the chunk is marked as
unordered. This PR adds a new function recompress_chunk that
compresses the data and sets the status back to compressed. Further
optimizations for this function are planned but not part of this PR.

This function can be invoked by calling
SELECT recompress_chunk(<chunk_name>).

The recompress_chunk function is automatically invoked by the compression
policy job when it sees that a chunk is in the unordered state.
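
For example, after out-of-order inserts into a compressed chunk:

    SELECT recompress_chunk('_timescaledb_internal._hyper_1_1_chunk');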
2021-05-24 18:03:47 -04:00
Sven Klemm
fc5f10d454 Remove chunk_dml_blocker trigger
Remove the chunk_dml_blocker trigger which was used to prevent
INSERTs into compressed chunks.
2021-05-24 18:03:47 -04:00
gayyappan
45462c775e Fix hypertable_chunk_local_size view
The view uses cached information from compression_chunk_size to
report the size of compressed chunks. Since compressed chunks
can be modified, we call pg_relation_size on the compressed chunk
while reporting the size.

The view also incorrectly used the hypertable's reltoastrelid to
calculate toast bytes. It has been changed to use the chunk's
reltoastrelid.
2021-05-24 11:52:03 -04:00
Sven Klemm
3e28f10600 Fix constraint trigger on hypertables
When creating a constraint trigger on a hypertable, the command
succeeds and the constraint trigger works correctly for existing
chunks, but any chunk creation after that would fail with an
error, because the constraint trigger was treated like a normal
constraint. Since pg_get_constraintdef does not return a SQL
command for constraint triggers, the corresponding command that was
created would throw an error.

Fixes #3235
2021-05-20 21:26:47 +02:00
Markos Fountoulakis
adde40f548 Use chunk status for is_compressed field in view
The timescaledb_information.chunks view used to return NULL in the
is_compressed field for distributed chunks. This changes the view to
always return true or false, depending on the chunk status flags in the
Access Node. This is a hint and cannot be used as a source of truth for
the compression state of the actual chunks in the Data Nodes.
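
For example:

    SELECT chunk_name, is_compressed
    FROM timescaledb_information.chunks
    WHERE hypertable_name = 'conditions';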
2021-05-19 10:57:24 +03:00
Mats Kindahl
b788168e59 Propagate grants to compressed hypertable
Grants and revokes were not propagated to compressed hypertables, if
the hypertable had a compressed hypertable, meaning that `pg_dump` and
`pg_restore` would not be able to dump or restore the compressed part
of a hypertable unless the user was the owner and/or a superuser.

This commit fixes this by propagating grants and revokes on a
hypertable to the associated compressed hypertable, if one exists,
which will then include the compressed hypertable and the associated
chunks in the grant and revoke execution.

It also adds code to fix the permissions of compressed hypertables and
all associated chunks in an update and adds an update test to check
that the permissions match.
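
With this fix, a grant on the hypertable (the names here are
illustrative) also reaches the internal compressed hypertable and its
chunks:

    GRANT SELECT ON conditions TO readonly_user;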

Fixes #3209
2021-05-13 08:43:07 +02:00
Markos Fountoulakis
bc740a32fb Add distributed hypertable compression policies
Add support for compression policies on Access Nodes. Extend the
compress_chunk() function to maintain compression state per chunk
on the Access Node.
2021-05-07 16:50:12 +03:00
Mats Kindahl
e7daf74d40 Release 2.2.1
This maintenance release contains bugfixes since the 2.2.0 release. We
deem it high priority for upgrading.

This release extends Skip Scan to multinode by enabling the pushdown
of `DISTINCT` to data nodes. It also fixes a number of bugs in the
implementation of Skip Scan, in distributed hypertables, in creation
of indexes, in compression, and in policies.

**Features**
* #3113 Pushdown "SELECT DISTINCT" in multi-node to allow use of Skip
  Scan

**Bugfixes**
* #3101 Use commit date in `get_git_commit()`
* #3102 Fix `REINDEX TABLE` for distributed hypertables
* #3104 Fix use after free in `add_reorder_policy`
* #3106 Fix use after free in `chunk_api_get_chunk_stats`
* #3109 Copy recreated object permissions on update
* #3111 Fix `CMAKE_BUILD_TYPE` check
* #3112 Use `%u` to format Oid instead of `%d`
* #3118 Fix use after free in cache
* #3123 Fix crash while using `REINDEX TABLE CONCURRENTLY`
* #3135 Fix SkipScan path generation in `DISTINCT` queries
  with expressions
* #3146 Fix SkipScan for IndexPaths without pathkeys
* #3147 Skip ChunkAppend if AppendPath has no children
* #3148 Make `SELECT DISTINCT` handle non-var targetlists
* #3151 Fix `fdw_relinfo_get` assertion failure on `DELETE`
* #3155 Inherit `CFLAGS` from PostgreSQL
* #3169 Fix incorrect type cast in compression policy
* #3183 Fix segfault in calculate_chunk_interval
* #3185 Fix wrong datatype in integer based retention policy

**Thanks**
* @Dead2, @dv8472 and @einsibjarni for reporting an issue with
  multinode queries and views
* @hperez75 for reporting an issue with Skip Scan
* @nathanloisel for reporting an issue with compression on hypertables
  with integer-based timestamps
* @xin-hedera for fixing an issue with compression on hypertables with
  integer-based timestamps
2021-05-05 08:50:39 +02:00
gayyappan
e0bff859e3 Add chunk_status column to catalog chunk table
Add a new column chunk_status to _timescaledb_catalog.chunk.
2021-04-26 16:26:47 -04:00
Mats Kindahl
aeb107659b Copy recreated object permissions on update
Tables, indexes, and sequences that are recreated as part of an update
do not propagate permissions to the recreated object. This commit
fixes that by saving away the permissions in `pg_class` temporarily and
then copying them back into the `pg_class` table.

If internal objects are created or re-created, they get the wrong
initial privileges, which result in privileges not being dumped when
using `pg_dump`. Save away the privileges before starting the update
and restore them afterwards to make sure that the privileges are
maintained over the update.

For new objects, we use the initial privileges of the `chunk` metadata
table, which should always have correct initial privileges.

Fixes #3078
2021-04-26 08:36:57 +02:00
Ruslan Fomkin
b1143de795 Release 2.2.0
This release adds major new features since the 2.1.1 release.
We deem it moderate priority for upgrading.

This release adds the Skip Scan optimization, which significantly
improves the performance of queries with DISTINCT ON. This
optimization is not yet available for queries on distributed
hypertables.

This release also adds a function to create a distributed
restore point, which allows performing a consistent restore of a
multi-node cluster from a backup.

The bug fixes in this release address issues with size and stats
functions, high memory usage in distributed inserts, slow distributed
ORDER BY queries, indexes involving INCLUDE, and single chunk query
planning.

**PostgreSQL 11 deprecation announcement**

Timescale is working hard on our next exciting features. To make that
possible, we require functionality that is unfortunately absent on
PostgreSQL 11. For this reason, we will continue supporting PostgreSQL
11 until mid-June 2021. Closer to that time, we will announce the
specific version of TimescaleDB in which PostgreSQL 11 support will
not be included going forward.

**Major Features**
* #2843 Add distributed restore point functionality
* #3000 SkipScan to speed up SELECT DISTINCT

**Bugfixes**
* #2989 Refactor and harden size and stats functions
* #3058 Reduce memory usage for distributed inserts
* #3067 Fix extremely slow multi-node order by queries
* #3082 Fix chunk index column name mapping
* #3083 Keep Append pathkeys in ChunkAppend

**Thanks**
* @BowenGG for reporting an issue with indexes with INCLUDE
* @fvannee for reporting an issue with ChunkAppend pathkeys
* @pedrokost and @RobAtticus for reporting an issue with size
  functions on empty hypertables
* @phemmer and @ryanbooz for reporting issues with slow
  multi-node order by queries
* @stephane-moreau for reporting an issue with high memory usage during
  single-transaction inserts on a distributed hypertable.
2021-04-13 09:41:03 +02:00
Ruslan Fomkin
2cba4b1d81 Release 2.1.1
This maintenance release contains bugfixes since the 2.1.0 release. We
deem it high priority for upgrading.

The bug fixes in this release address issues with CREATE INDEX and
UPSERT for hypertables, custom jobs, and gapfill queries.

This release marks TimescaleDB as a trusted extension in PG13, so that
superuser privileges are not required anymore to install the extension.

**Minor features**
* #2998 Mark timescaledb as trusted extension

**Bugfixes**
* #2948 Fix off by 4 error in histogram deserialize
* #2974 Fix index creation for hypertables with dropped columns
* #2990 Fix segfault in job_config_check for cagg
* #2987 Fix crash due to txns in emit_log_hook_callback
* #3042 Commit end transaction for CREATE INDEX
* #3053 Fix gapfill/hashagg planner interaction
* #3059 Fix UPSERT on hypertables with columns with defaults

**Thanks**
* @eloyekunle and @kitwestneat for reporting an issue with UPSERT
* @jocrau for reporting an issue with index creation
* @kev009 for fixing a compilation issue
* @majozv and @pehlert for reporting an issue with time_bucket_gapfill
2021-03-29 20:08:50 +02:00
Erik Nordström
931da9a656 Refactor and harden size and stats functions
Fix a number of issues with size and stats functions:

* Return `0` size instead of `NULL` in several functions when
  hypertables have no chunks (e.g., `hypertable_size`,
  `hypertable_detailed_size`).
* Return `NULL` when functions are called on non-hypertables instead
  of simply failing with generic error `query returned no rows`.
* Include size of "root" hypertable, which can have non-zero size
  indexes and other objects even if the root table holds no data.
* Make `hypertable_detailed_size` include one additional row for
  storage size of objects on the access node. While the access node
  stores no data, the empty hypertable may still take up some disk
  space.
* Improve test coverage for all size utility functions. In particular,
  add tests on regular tables as well as empty and compressed
  hypertables.
* Several size utility functions that were defined as `PL/pgSQL`
  functions have been converted to simple `SQL` functions since they
  ran only a single SQL query.

The `dist_util` test is moved to the solo test group because,
otherwise, it gives different size output when run in parallel vs. in
isolation.
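
For example, on an empty hypertable:

    SELECT hypertable_size('conditions');            -- 0, not NULL
    SELECT * FROM hypertable_detailed_size('conditions');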

Fixes #2871
2021-03-23 16:23:56 +01:00
Dmitry Simonenko
6a1c81b63e Add distributed restore point functionality
This change adds the create_distributed_restore_point() function,
which allows creating a recovery restore point across data
nodes.
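
For example, creating a cluster-wide restore point before maintenance:

    SELECT * FROM create_distributed_restore_point('before_upgrade');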

Fix #2846
2021-02-25 15:39:50 +03:00
Sven Klemm
06593a1c24 Release 2.1.0
This release adds major new features since the 2.0.2 release.
We deem it moderate priority for upgrading.

This release adds the long-awaited support for PostgreSQL 13 to TimescaleDB.
The minimum required PostgreSQL 13 version is 13.2 due to a security vulnerability
affecting TimescaleDB functionality present in earlier versions of PostgreSQL 13.

This release also relaxes some restrictions for compressed hypertables;
namely, TimescaleDB now supports adding columns to compressed hypertables
and renaming columns of compressed hypertables.

**Major Features**
* #2779 Add support for PostgreSQL 13

**Minor features**
* #2736 Support adding columns to hypertables with compression enabled
* #2909 Support renaming columns of hypertables with compression enabled
2021-02-22 12:04:22 +01:00
Erik Nordström
9ddf375fd5 Release 2.0.2
This maintenance release contains bugfixes since the 2.0.1 release. We
deem it high priority for upgrading.

The bug fixes in this release address issues with joins, the status of
background jobs, and disabling compression. It also includes
enhancements to continuous aggregates, including improved validation
of policies and optimizations for faster refreshes when there are a
lot of invalidations.

**Minor features**
* #2926 Optimize cagg refresh for small invalidations

**Bugfixes**
* #2850 Set status for backend in background jobs
* #2883 Fix join qual propagation for nested joins
* #2884 Add GUC to control join qual propagation
* #2885 Fix compressed chunk check when disabling compression
* #2908 Fix changing column type of clustered hypertables
* #2942 Validate continuous aggregate policy

**Thanks**
* @zeeshanshabbir93 for reporting the issue with full outer joins
* @Antiarchitect for reporting the issue with slow refreshes of
  continuous aggregates
* @diego-hermida for reporting the issue about being unable to disable
  compression
* @mtin for reporting the issue about wrong job status
2021-02-19 16:39:42 +01:00
Ruslan Fomkin
6d679a14d0 Release 1.7.5
This maintenance release contains bugfixes since the 1.7.4 release.
Most of these fixes were backported from the 2.0.0 and 2.0.1 releases.
We deem it high priority for upgrading for users on TimescaleDB 1.7.4
or previous versions.

In particular the fixes contained in this maintenance release address
issues in continuous aggregates, compression, JOINs with hypertables,
and when upgrading from previous versions.

**Bugfixes**
* #2502 Replace check function when updating
* #2558 Repair dimension slice table on update
* #2619 Fix segfault in decompress_chunk for chunks with dropped
  columns
* #2664 Fix support for complex aggregate expression
* #2800 Lock dimension slices when creating new chunk
* #2860 Fix projection in ChunkAppend nodes
* #2865 Apply volatile function quals at decompresschunk
* #2851 Fix nested loop joins that involve compressed chunks
* #2868 Fix corruption in gapfill plan
* #2883 Fix join qual propagation for nested joins
* #2885 Fix compressed chunk check when disabling compression
* #2920 Fix repair in update scripts

**Thanks**
* @akamensky for reporting several issues including segfaults after
  version update
* @alex88 for reporting an issue with joined hypertables
* @dhodyn for reporting an issue when joining compressed chunks
* @diego-hermida for reporting an issue with disabling compression
* @Netskeh for reporting bug on time_bucket problem in continuous
  aggregates
* @WarriorOfWire for reporting the bug with gapfill queries not being
  able to find pathkey item to sort
* @zeeshanshabbir93 for reporting an issue with joins
2021-02-12 16:07:11 +01:00
Sven Klemm
e3c6725ad1 Fix compressed chunk check when disabling compression
The check for existence of compressed chunks when disabling
compression would not ignore dropped chunks, making it impossible
to disable compression on hypertables with continuous aggregates
that had dropped chunks.
This patch ignores dropped chunks in this check and also sets
compressed_chunk_id to NULL in the metadata for deleted chunks.
2021-02-02 14:33:53 +01:00
Sven Klemm
636c8bbdae Release 2.0.1
This maintenance release contains bugfixes since the 2.0.0 release. We deem it
high priority for upgrading.

In particular the fixes contained in this maintenance release address issues
in continuous aggregates, compression, JOINs with hypertables and when
upgrading from previous versions.

**Bugfixes**
* #2772 Always validate existing database and extension
* #2780 Fix config enum entries for remote data fetcher
* #2806 Add check for dropped chunk on update
* #2828 Improve cagg watermark caching
* #2842 Do not mark job as started when setting next_start field
* #2845 Fix continuous aggregate privileges during upgrade
* #2851 Fix nested loop joins that involve compressed chunks
* #2860 Fix projection in ChunkAppend nodes
* #2861 Remove compression stat update from update script
* #2865 Apply volatile function quals at decompresschunk node
* #2866 Avoid partitionwise planning of partialize_agg
* #2868 Fix corruption in gapfill plan
* #2874 Fix partitionwise agg crash due to uninitialized memory

**Thanks**
* @alex88 for reporting an issue with joined hypertables
* @brian-from-quantrocket for reporting an issue with extension update and dropped chunks
* @dhodyn for reporting an issue when joining compressed chunks
* @markatosi for reporting a segfault with partitionwise aggregates enabled
* @PhilippJust for reporting an issue with add_job and initial_start
* @sgorsh for reporting an issue when using pgAdmin on windows
* @WarriorOfWire for reporting the bug with gapfill queries not being
  able to find pathkey item to sort
2021-01-28 17:28:05 +01:00