899 Commits

Nikhil Sontakke
c715d96aa4 Don't dump unnecessary extension tables
Logging- and caching-related tables from the timescaledb extension
should not be dumped using pg_dump. Our scripts specify a few such
unwanted tables. Apart from being unnecessary, the "job_errors" table
had restricted permissions, causing additional problems in pg_dump.

We now exclude such tables from dumping.

Fixes #5449
2024-01-25 12:01:11 +05:30
Konstantina Skovola
929ad41fa7 Fix now() plantime constification with BETWEEN
Previously, when BETWEEN ... AND was used together with additional
constraints in a WHERE clause, the BETWEEN was not handled correctly
because it was wrapped in a BoolExpr node, which prevented plan-time
exclusion. The flattening of such expressions happens in
`eval_const_expressions`, which is called after our constify_now code.
This commit fixes the handling of this case to allow chunk exclusion to
take place at planning time.
It also makes sure we use our mock timestamp everywhere in tests.
Previously we used a mix of current_timestamp_mock and now(),
which returned unexpected/incorrect results.
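A query shape that benefits from this fix, using a hypothetical
hypertable `conditions`:

    -- With the fix, the BETWEEN bounds involving now() are constified at
    -- planning time, so non-matching chunks are excluded from the plan
    -- even though the BETWEEN sits inside a BoolExpr with other quals.
    SELECT * FROM conditions
    WHERE time BETWEEN now() - INTERVAL '1 day' AND now()
      AND device_id = 1;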
2024-01-24 18:25:39 +02:00
Sven Klemm
0b23bab466 Include _timescaledb_catalog.metadata in dumps
This patch changes the dump configuration for
_timescaledb_catalog.metadata to include all entries. To allow loading
logical dumps with this configuration, an insert trigger is added that
turns uniqueness conflicts into updates so they do not block the restore.
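A minimal sketch of the trigger idea, assuming the metadata table's
key/value columns; the function and trigger names here are hypothetical:

    -- Hypothetical sketch: turn a conflicting INSERT into an UPDATE.
    CREATE FUNCTION metadata_insert_as_update() RETURNS trigger AS $$
    BEGIN
        UPDATE _timescaledb_catalog.metadata SET value = NEW.value
        WHERE key = NEW.key;
        IF FOUND THEN
            RETURN NULL;  -- row updated; suppress the conflicting insert
        END IF;
        RETURN NEW;       -- no conflict; proceed with the insert
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER metadata_restore_guard
        BEFORE INSERT ON _timescaledb_catalog.metadata
        FOR EACH ROW EXECUTE FUNCTION metadata_insert_as_update();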
2024-01-23 12:53:48 +01:00
Matvey Arye
e89bc24af2 Add functions for determining compression defaults
Add functions to help determine defaults for segment_by and order_by.
2024-01-22 08:10:23 -05:00
Sven Klemm
754f77e083 Remove chunks_in function
This function was used to propagate chunk exclusion decisions from
an access node to data nodes and is no longer needed with the removal
of multinode.
2024-01-22 09:18:26 +01:00
Sven Klemm
f57d584dd2 Make compression settings per chunk
This patch implements changes to the compressed hypertable to allow
per-chunk configuration. To enable this, the compressed hypertable can no
longer be in an inheritance tree, as the schema of a compressed chunk
is determined by the compression settings. While this patch implements
all the underlying infrastructure changes, the restrictions on changing
compression settings remain intact and will be lifted in a follow-up patch.
2024-01-17 12:53:07 +01:00
Sven Klemm
75cc4f7d7b Improve multinode detection in update script
Previously we only checked for data nodes being defined or for
distributed hypertables being present, which might not be true on data
nodes themselves. We now prevent the update on any installation that
has dist_uuid defined, which is set on data nodes as well.
2024-01-12 11:18:35 +01:00
Mats Kindahl
662fcc1b1b Make extension state available through function
The extension state is not easily accessible in release builds, which
makes debugging issues with the loader very difficult. This commit
introduces a new schema `_timescaledb_debug` and makes the function
`ts_extension_get_state` available also in release builds as
`_timescaledb_debug.extension_state`.
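A usage sketch based on the name introduced above:

    -- Inspect the loader's view of the extension state in a release build.
    SELECT _timescaledb_debug.extension_state();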

See #1682
2024-01-11 10:52:35 +01:00
Jan Nidzwetzki
df7a8fed6f Post-release fixes for 2.13.1
Bumping the previous version and adding tests for 2.13.1
2024-01-09 16:31:07 +01:00
Jan Nidzwetzki
a69c4682ce Release 2.13.1
This release contains bug fixes since the 2.13.0 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6365 Use numrows_pre_compression in approximate row count
* #6377 Use processed group clauses in PG16
* #6384 Change bgw_log_level to use PGC_SUSET
* #6393 Disable vectorized sum for expressions.
* #6408 Fix groupby pathkeys for gapfill in PG16
* #6428 Fix index matching during DML decompression
* #6439 Fix compressed chunk permission handling on PG16
* #6443 Fix lost concurrent CAgg updates
* #6454 Fix unique expression indexes on compressed chunks
* #6465 Fix use of freed path in decompression sort logic

**Thanks**
* @MA-MacDonald for reporting an issue with gapfill in PG16
* @aarondglover for reporting an issue with unique expression indexes on compressed chunks
* @adriangb for reporting an issue with security barrier views on pg16
2024-01-04 10:04:10 +01:00
Alexander Kuzmenkov
bb00de4db8 Display the welcome message as NOTICE
Previously we displayed it as a WARNING, which made it harder to grep
the logs for failures such as broken memory contexts or tupdesc
reference leaks, which are also reported as warnings.
2023-12-19 19:55:55 +01:00
Sven Klemm
8f73f95c2a Remove replication_factor field from _timescaledb_catalog.hypertable
2023-12-18 10:53:27 +01:00
Sven Klemm
11dd9af847 Remove multinode catalog objects
This patch removes the following objects:

tables:
- _timescaledb_catalog.chunk_data_node
- _timescaledb_catalog.dimension_partition
- _timescaledb_catalog.hypertable_data_node
- _timescaledb_catalog.remote_txn

views:
- timescaledb_information.data_nodes

functions:
- _timescaledb_functions.hypertable_remote_size
- _timescaledb_functions.chunks_remote_size
- _timescaledb_functions.indexes_remote_size
- _timescaledb_functions.compressed_chunk_remote_stats
2023-12-18 10:53:27 +01:00
Sven Klemm
ce90fde526 Alter pre-update handling during downgrade script generation
Before this change, during downgrade script generation we would
always fetch the pre-update script from the previous version and
prepend it to the generated scripts. This limited what could be
referenced in the pre-update script and also what was possible
within the downgrade itself.
This patch splits the pre-update script into a generic part that
is used for both update and downgrade and an update-specific part. We
could later also add a downgrade-specific part, but currently it is not
needed. This change is necessary because we reference a timescaledb
view in the pre-update script, which prevents changes to that view.
2023-12-18 10:13:03 +01:00
Sven Klemm
6395b249a9 Remove remote connection handling code
Remove the code used by multinode to handle remote connections.
This patch completely removes tsl/src/remote and any remaining
distributed hypertable checks.
2023-12-15 19:13:08 +01:00
Sven Klemm
06867af966 Remove multinode functions from crossmodule struct
This commit removes the multinode-specific entries from the cross-module
function struct. It also removes the function
set_chunk_default_data_node.
2023-12-14 21:32:14 +01:00
Sven Klemm
11df1dd648 Remove experimental multinode functions
This commit removes the following functions:
- timescaledb_experimental.block_new_chunks
- timescaledb_experimental.allow_new_chunks
- timescaledb_experimental.subscription_exec
- timescaledb_experimental.move_chunk
- timescaledb_experimental.copy_chunk
- timescaledb_experimental.cleanup_copy_chunk_operation
2023-12-13 23:38:32 +01:00
Sven Klemm
8a2029f569 Remove rxid type and distributed size util functions
2023-12-13 23:38:32 +01:00
Sven Klemm
19f1395191 Remove internal multinode ddl functions
This commit removes the following functions:
- _timescaledb_functions.create_chunk_replica_table
- _timescaledb_functions.chunk_drop_replica
- _timescaledb_functions.wait_subscription_sync
- _timescaledb_functions.health
- _timescaledb_functions.drop_stale_chunks
2023-12-13 23:38:32 +01:00
Fabrízio de Royes Mello
78490c47b7 Remove support for creating CAggs with old format
Timescale 2.7 released a new version of Continuous Aggregate (#4269)
that stores the final aggregation state instead of the byte array of
the partial aggregate state, offering multiple optimization
opportunities as well as a more compact form.

In 2.10.0, released in February 2023, the deprecation of the old
Continuous Aggregate format was announced.

With this PR the ability to create Continuous Aggregates in the old
format is removed, but we still support migrating from the old to the
new format by running the `cagg_migrate` procedure.

This is the continuation of PR #5977 started by @pdipesh02.

References:
https://docs.timescale.com/api/latest/continuous-aggregates/cagg_migrate/
https://github.com/timescale/timescaledb/releases/tag/2.10.0
https://github.com/timescale/timescaledb/releases/tag/2.7.0
https://github.com/timescale/timescaledb/pull/5977
2023-12-13 18:48:31 -03:00
Sven Klemm
c914d19fac Remove the timescaledb_fdw foreign data wrapper
This is the fdw implementation that was used for communication
between multinode instances.
2023-12-13 09:48:03 +01:00
Sven Klemm
36c36564a8 Refactor compression setting storage
This patch drops the catalog table _timescaledb_catalog.hypertable_compression
and stores those settings in _timescaledb_catalog.compression_settings instead.
The storage format is changed: the new table has one entry per relation
instead of one entry per column and has no dependency on hypertables.
All other aspects of compression remain the same. This refactoring is
done to enable per-chunk compression settings in a follow-up patch.
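The new catalog table can be inspected directly; a simple example
(column layout not shown here, hence the `*`):

    -- List the stored compression settings, one row per relation.
    SELECT * FROM _timescaledb_catalog.compression_settings;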
2023-12-12 21:45:33 +01:00
Sven Klemm
bc935ab2ca Remove multinode public API
This patch removes the following functions/procedures:
- add_data_node
- alter_data_node
- attach_data_node
- create_distributed_hypertable
- create_distributed_restore_point
- delete_data_node
- detach_data_node
- distributed_exec
- set_replication_factor
- _timescaledb_functions.ping_data_node
- _timescaledb_functions.remote_txn_heal_data_node
- _timescaledb_functions.set_dist_id
- _timescaledb_functions.set_peer_dist_id
- _timescaledb_functions.show_connection_cache
- _timescaledb_functions.validate_as_data_node
- _timescaledb_internal.ping_data_node
- _timescaledb_internal.remote_txn_heal_data_node
- _timescaledb_internal.set_dist_id
- _timescaledb_internal.set_peer_dist_id
- _timescaledb_internal.show_connection_cache
- _timescaledb_internal.validate_as_data_node
2023-12-12 20:37:35 +01:00
Nikhil Sontakke
293104add2 Use numrows_pre_compression in approx row count
The approximate_row_count function was using the reltuples from
compressed chunks and multiplying that by 1000, the default batch
size. This led to a huge skew between the actual row count and the
approximate one. We now use the numrows_pre_compression value from
the timescaledb catalog, which accurately represents the number of
rows before compression.
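A usage example, with a hypothetical hypertable named `conditions`:

    -- Now derives its estimate from numrows_pre_compression for
    -- compressed chunks instead of reltuples * batch size.
    SELECT approximate_row_count('conditions');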
2023-12-04 22:26:56 +05:30
Jan Nidzwetzki
65f681537d Fix makeaclitem function creation
2023-11-29 21:49:17 +01:00
Jan Nidzwetzki
3b59a8a774 Post-release fixes for 2.13.0
Bumping the previous version and adding tests for 2.13.0.
2023-11-29 21:49:17 +01:00
Jan Nidzwetzki
337adb63fc Release 2.13.0
This release contains performance improvements, an improved hypertable DDL API
and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Full PostgreSQL 16 support for all existing features
* Vectorized aggregation execution for sum()
* Track chunk creation time used in retention/compression policies

**Deprecation notice: Multi-node support**
TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node
support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it
[here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**PostgreSQL 13 deprecation announcement**
We will continue supporting PostgreSQL 13 until April 2024. Closer to that
time, we will announce the specific version of TimescaleDB in which
PostgreSQL 13 support will no longer be included.

**Starting from TimescaleDB 2.13.0**
* No Amazon Machine Images (AMI) are published. If you previously used AMI, please
use another [installation method](https://docs.timescale.com/self-hosted/latest/install/)
* Continuous Aggregates are materialized only (non-realtime) by default

**Features**
* #5575 Add chunk-wise sorted paths for compressed chunks
* #5761 Simplify hypertable DDL API
* #5890 Reduce WAL activity by freezing compressed tuples immediately
* #6050 Vectorized aggregation execution for sum()
* #6062 Add metadata for chunk creation time
* #6077 Make Continuous Aggregates materialized only (non-realtime) by default
* #6177 Change show_chunks/drop_chunks using chunk creation time
* #6178 Show batches/tuples decompressed during DML operations in EXPLAIN output
* #6185 Keep track of catalog version
* #6227 Use creation time in retention/compression policy
* #6307 Add SQL function cagg_validate_query

**Bugfixes**
* #6188 Add GUC for setting background worker log level
* #6222 Allow enabling compression on hypertable with unique expression index
* #6240 Check if worker registration succeeded
* #6254 Fix exception detail passing in compression_policy_execute
* #6264 Fix missing bms_del_member result assignment
* #6275 Fix negative bitmapset member not allowed in compression
* #6280 Fix potential data loss when compressing a table with a partial index that matches compression order
* #6289 Add support for startup chunk exclusion with aggs
* #6290 Repair relacl on upgrade
* #6297 Fix segfault when creating a cagg using a NULL width in time bucket function
* #6305 Make timescaledb_functions.makeaclitem strict
* #6332 Fix typmod and collation for segmentby columns
* #6339 Fix tablespace with constraints
* #6343 Enable segmentwise recompression in compression policy

**Thanks**
* @fetchezar for reporting an issue with compression policy error messages
* @jflambert for reporting the background worker log level issue
* @torazem for reporting an issue with compression and large oids
* @fetchezar for reporting an issue in the compression policy
* @lyp-bobi for reporting an issue with tablespace with constraints
* @pdipesh02 for contributing to the implementation of the metadata for chunk creation time,
             the generalized hypertable API, and show_chunks/drop_chunks using chunk creation time
* @lkshminarayanan for all his work on PG16 support
2023-11-27 16:13:51 +01:00
Nikhil Sontakke
51d92a3638 Fix non-default tablespaces with constraints
If a hypertable uses a non-default tablespace for its primary or
unique constraints with additional DEFERRABLE or INITIALLY DEFERRED
characteristics, then any chunk creation fails with a syntax error. We
now set the tablespace via a separate command for such constraints on
the chunks.
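A sketch of the approach, with hypothetical chunk, constraint, and
tablespace names; the key point is that the tablespace is applied with a
separate ALTER INDEX command rather than inline in the constraint
definition:

    -- Create the deferrable constraint without an inline tablespace clause...
    ALTER TABLE _timescaledb_internal._hyper_1_1_chunk
        ADD CONSTRAINT "1_1_conditions_pkey" PRIMARY KEY ("time") DEFERRABLE;
    -- ...then move the backing index to the non-default tablespace.
    ALTER INDEX _timescaledb_internal."1_1_conditions_pkey"
        SET TABLESPACE mytablespace;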

Fixes #6338
2023-11-23 19:09:48 +05:30
Ante Kresic
8745063bd4 Use segmentwise recompression in compress policy
Change the compression policy to use segmentwise recompression when
possible to increase performance. Segmentwise recompression
decompresses rows into memory, reducing IO load during
recompression and making it much faster for bigger chunks.
2023-11-23 13:13:41 +01:00
Sven Klemm
d1c8cb0673 Fix segmentby typmod and collation of compressed chunks
We added a workaround for segmentby columns with incorrect typmod
and collation in c73c5a74b9 but did not adjust pre-existing relations.
This patch fixes any existing relations where the segmentby columns
of compressed chunks have incorrect typmod and collation, and removes
the code workaround.
2023-11-21 14:50:58 +01:00
Nikhil Sontakke
44817252b5 Use creation time in retention/compression policy
The retention and compression policies can now use drop_created_before
and compress_created_before arguments respectively to specify chunk
selection using their creation times.

We don't support creation times for CAggs, yet.
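Hedged usage examples with a hypothetical hypertable `conditions`; the
argument names are the ones introduced by this commit:

    -- Drop chunks created more than six months ago.
    SELECT add_retention_policy('conditions',
        drop_created_before => INTERVAL '6 months');
    -- Compress chunks created more than seven days ago.
    SELECT add_compression_policy('conditions',
        compress_created_before => INTERVAL '7 days');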
2023-11-16 20:17:17 +05:30
Fabrízio de Royes Mello
0254abd9c2 Remove useless table lock
In ae21ee96 we fixed a race condition when running a query to get the
hypertable sizes while one or more chunks were dropped in a concurrent
session, which led to an exception because the chunks no longer exist.

In fact the table lock introduced there is useless, because we also
added proper joins with the Postgres catalog tables to ensure that the
relation exists in the database when calculating the sizes. Even worse,
with this table lock, dropping chunks now waits for the functions that
calculate the hypertable sizes.

Fix it by removing the useless table lock, and add isolation tests to
make sure we don't end up with race conditions again.
2023-11-15 12:17:33 -03:00
Fabrízio de Royes Mello
3e08d21ace Add SQL function cagg_validate_query
With this function it is possible to execute the Continuous Aggregate
query validation over an arbitrary query string, without the need to
actually create the Continuous Aggregate.

It can be used, for example, to check the most frequent queries
(perhaps using `pg_stat_statements`), validate them, and check whether
they could potentially be turned into a Continuous Aggregate.
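A hedged usage sketch; the schema qualification and the result shape of
cagg_validate_query are assumptions, and `conditions` is a hypothetical
hypertable:

    -- Validate a candidate query without creating the Continuous Aggregate.
    SELECT * FROM _timescaledb_functions.cagg_validate_query(
        $$ SELECT time_bucket('1 hour', time) AS bucket, avg(temperature)
           FROM conditions GROUP BY bucket $$);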
2023-11-14 08:29:26 -03:00
Mats Kindahl
483ddcd4de Make _timescaledb_functions.makeaclitem strict
Function `_timescaledb_functions.makeaclitem` needs to be strict:
if it is called with NULL values, it should return NULL.
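A quick illustration of the strict behavior, assuming the four-argument
signature mirroring PG16's makeaclitem:

    -- With a strict function, any NULL argument yields a NULL result
    -- instead of an error or a bogus aclitem.
    SELECT _timescaledb_functions.makeaclitem(NULL, NULL, NULL, NULL);
    -- returns NULL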
2023-11-10 14:08:09 +01:00
Mats Kindahl
f2310216a8 Repair relacl on extension upgrade
If users have accidentally been removed from `pg_authid` as a result of
bugs where dropping a user did not revoke privileges from all tables
where they had privileges, it will not be possible to create new chunks,
since these require the user to be found when copying the privileges
for the parent table (either a compressed hypertable or a normal
hypertable).

To fix the situation, we repair the `pg_class` table when updating the
extension by modifying the `relacl` for relations and removing any user
that does not have an entry in `pg_authid`.

A repair function `_timescaledb_functions.repair_relation_acls` is
added that will perform the job. A variant of `makeaclitem` from PG16
that accepts a comma-separated list of privileges, used as part of the
repair, is also added as `_timescaledb_functions.makeaclitem`.
2023-11-09 11:35:27 +01:00
Nikhil Sontakke
844807a374 Change show_chunks/drop_chunks using creation time
- Updated the show_chunks and drop_chunks APIs to select the affected
  chunks using chunk creation time metadata, based on the
  "date/time/interval"-like boundary specified, even for INTEGER
  columns.

- We honor the "integer_now" function if it is specified, so as to keep
  backwards compatibility with the existing behavior. Hedged examples
  follow below.
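Examples with a hypothetical hypertable `conditions`; the
`created_before` argument name is an assumption based on this commit:

    -- Show chunks created more than a week ago...
    SELECT show_chunks('conditions',
        created_before => now() - INTERVAL '7 days');
    -- ...and drop chunks created more than three months ago.
    SELECT drop_chunks('conditions',
        created_before => now() - INTERVAL '3 months');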

Co-authored-by: Dipesh Pandit <dipesh@timescale.com>
2023-11-02 18:37:09 +05:30
Sven Klemm
22b556d069 Fix exception detail passing in compression_policy_execute
In two instances of the exception handling in
compression_policy_execute, _message and _detail were not properly set,
leading to an empty message and detail being reported.
2023-11-01 19:45:29 +01:00
Jan Nidzwetzki
8767de658b Reduce WAL activity by freezing tuples immediately
When we compress a chunk, we create a new compressed chunk for storing
the compressed data. So far, the tuples were just inserted into the
compressed chunk and frozen by a later vacuum run.

However, the WAL activity caused by freezing these tuples can be
avoided because the compressed chunk is created in the same transaction
as the tuples. This patch reduces the WAL activity by storing the
tuples as frozen from the start, preventing a future freeze operation.
This approach is similar to PostgreSQL's COPY FREEZE.
2023-10-25 13:27:07 +02:00
Sven Klemm
1126b08567 2.12.2 Post-release
Add 2.12.2 to update test scripts and add update downgrade metadata.
2023-10-23 18:31:36 +02:00
Sven Klemm
21a3ebd77c Release 2.12.2
This release contains bug fixes since the 2.12.1 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6155 Align gapfill bucket generation with time_bucket
* #6181 Ensure fixed_schedule field is populated
* #6210 Fix EXPLAIN ANALYZE for compressed DML
2023-10-19 14:37:28 +02:00
Fabrízio de Royes Mello
a409065285 PG16: Prohibit use of multi-node
Since multi-node is not supported on PG16, add errors to multi-node
functions when run on this PostgreSQL version.
2023-10-18 11:45:06 -03:00
Sven Klemm
a664e685cd Keep track of catalog version
This patch stores the current catalog version in an internal
table to allow us to verify that the catalog and code versions match.
When doing dump/restore, people occasionally report very unusual
errors, and during investigation it is discovered that they loaded
a dump from an older version and ran it with a later code version.
This allows us to detect mismatches between the installed code version
and the loaded dump version. The version number in the metadata table
will be kept updated in upgrade and downgrade scripts.
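A hedged example of inspecting the stored version; the exact table and
key name are assumptions:

    -- Compare the version recorded at dump time with the installed
    -- code version.
    SELECT key, value FROM _timescaledb_catalog.metadata
    WHERE key LIKE '%version%';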
2023-10-14 22:28:21 +02:00
Konstantina Skovola
5752c33b0a Post-release fixes for 2.12.1
Bumping the previous version and adding tests for 2.12.1.
Also adjust tagging date in changelog.
2023-10-13 10:55:51 +03:00
Sven Klemm
ecd88f8c8e Remove code related to pre 1.0 triggers
This code is no longer needed, as no valid source timescaledb version
in the upgrade path can still have those legacy triggers.
2023-10-12 19:49:37 +02:00
Konstantina Skovola
6af0cb00c9 Release 2.12.1
This release contains bug fixes since the 2.12.0 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6113 Fix planner distributed table count
* #6117 Avoid decompressing batches using an empty slot
* #6123 Fix concurrency errors in OSM API
* #6142 Do not throw an error when deprecation GUC cannot be read

**Thanks**
* @symbx for reporting a crash when selecting from empty hypertables
2023-10-10 12:33:48 +03:00
Fabrízio de Royes Mello
586b247c2b Fix cagg trigger on compat schema layer
The trigger `continuous_agg_invalidation_trigger` receives the hypertable
id as a parameter, as in the following example:

Triggers:
    ts_cagg_invalidation_trigger AFTER INSERT OR DELETE OR UPDATE ON
       _timescaledb_internal._hyper_3_59_chunk
    FOR EACH ROW EXECUTE FUNCTION
       _timescaledb_functions.continuous_agg_invalidation_trigger('3')

The problem is that in the PL/pgSQL compatibility layer there is no way
to pass down the parameter from the wrapper trigger function to the
underlying trigger function in another schema.

To solve this we simply create a new function in the deprecated
`_timescaledb_internal` schema pointing directly to the trigger
function, and inside the C code we emit the WARNING message if the
function is called from the deprecated schema.
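A hedged sketch of what such a compatibility alias might look like; the
module path placeholder and C symbol name are assumptions:

    -- Hypothetical shape of the deprecated-schema alias: a C-language
    -- function definition pointing at the same trigger implementation.
    CREATE FUNCTION _timescaledb_internal.continuous_agg_invalidation_trigger()
        RETURNS trigger
        AS '@MODULE_PATHNAME@', 'ts_continuous_agg_invalidation_trigger'
        LANGUAGE C;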
2023-10-07 14:39:35 -03:00
Dipesh Pandit
0b87a069e7 Add metadata for chunk creation time
- Added a creation_time attribute to the timescaledb catalog table
  "chunk". Also updated the corresponding view
  timescaledb_information.chunks to include a chunk_creation_time
  attribute.
- A newly created chunk is assigned the creation time during chunk
  creation, when handling a new partition range for a given dimension
  (Time/SERIAL/BIGSERIAL/INT/...).
- In the case of an already existing chunk, the creation time is
  updated as part of running the upgrade script. The current timestamp
  (now()) at the time of the upgrade is assigned as the chunk creation
  time.
- Similarly, the downgrade script is updated to drop the attribute
  creation_time from the catalog table "chunk".
- All relevant queries/views/test output have been updated accordingly.

Co-authored-by: Nikhil Sontakke <nikhil@timescale.com>
2023-10-04 14:49:05 +05:30
Feike Steenbergen
27f4382d27 Do not throw an error on missing GUC value
When connected to TimescaleDB with the timescaledb.disable_load=on GUC
setting, calling these functions causes an error to be thrown.

By specifying missing_ok=true, we prevent this situation from causing an
error for the user.
2023-10-03 17:12:27 +02:00
Dipesh Pandit
6019775ec5 Simplify hypertable DDL APIs
The current hypertable creation interface is heavily focused on a time
column, but since hypertables support partitioning on more than just
time columns, we introduce a more generic API that supports different
types of partitioning keys.

The new interface introduces new versions of create_hypertable and
add_dimension, and a replacement function `set_partitioning_interval`
that replaces `set_chunk_time_interval`. The new functions accept an
instance of dimension_info that can be constructed using the
constructor functions `by_range` and `by_hash`, allowing a more
versatile and future-proof API.

For example:

    SELECT create_hypertable('conditions', by_range('time'));
    SELECT add_dimension('conditions', by_hash('device'));

The old API remains, but will eventually be deprecated.
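The replacement for `set_chunk_time_interval` follows the same pattern;
a hedged example with a hypothetical hypertable:

    -- Adjust the partitioning interval through the new generic function.
    SELECT set_partitioning_interval('conditions', INTERVAL '1 day');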
2023-09-28 08:14:30 +02:00
Sven Klemm
b339131c68 2.12.0 Post-release adjustments
2023-09-27 09:38:11 +02:00