4224 Commits

Konstantina Skovola
6af0cb00c9 Release 2.12.1
This release contains bug fixes since the 2.12.0 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6113 Fix planner distributed table count
* #6117 Avoid decompressing batches using an empty slot
* #6123 Fix concurrency errors in OSM API
* #6142 Do not throw an error when deprecation GUC cannot be read

**Thanks**
* @symbx for reporting a crash when selecting from empty hypertables
2023-10-10 12:33:48 +03:00
Fabrízio de Royes Mello
8d8f957c82 PG16: Fix regression tests
It seems there were some leftovers from #6139 to be fixed.
2023-10-10 02:39:43 -03:00
Fabrízio de Royes Mello
6f038ec5db PG16: Regression Tests
Added missing template output tests and also converted others to
template due to output changes.
2023-10-09 09:10:04 -03:00
Jan Nidzwetzki
f746398a46 Make agg_partials_pushdown test more predictable
So far, the parallel query plans of the agg_partials_pushdown test were
not deterministic since multiple workers were used and the parallel
session leader also participated in query processing. This makes it
impossible to predict which worker would process how many tuples. With
this patch, the number of parallel workers for the agg_partials_pushdown
test has been reduced to 1 and the parallel_leader_participation has
been disabled to ensure deterministic query plans.
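
A minimal sketch of the settings involved, assuming the test controls
parallelism through the standard PostgreSQL GUCs:

    -- Allow at most one parallel worker and keep the leader out of the
    -- scan so per-worker tuple counts stay deterministic.
    SET max_parallel_workers_per_gather = 1;
    SET parallel_leader_participation = off;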
2023-10-09 07:53:31 +02:00
Sven Klemm
8476adc484 Move explain utility functions to src/import
This moves the EXPLAIN utility functions into the Apache-licensed part of
the codebase to make them available to all custom nodes. Since these are
copies of static PostgreSQL functions, there is no reason for them to be
TSL-licensed.
2023-10-08 17:37:59 +02:00
Jan Nidzwetzki
7dc14ac2ed Handle non existing ht in cache_inval_entry_init
When the continuous_agg_invalidation trigger is called with an invalid
hypertable id (e.g., caused by an inconsistent backup restore), the
hypertable lookup returns NULL. Previously, this case was not handled
properly. This patch adds the necessary checks to the function.
2023-10-08 17:19:33 +02:00
Fabrízio de Royes Mello
051ae51011 PG16: Reset PlannerInfo->placeholdersFrozen
Reset the flag to control PlaceHolderInfo creation because we're copying
the entire content of the (PlannerInfo *)root data structure when
building the first/last path.

https://github.com/postgres/postgres/commit/b3ff6c74
2023-10-08 12:15:08 -03:00
Sven Klemm
6304c7a06f Reduce loops in scheduled sqlsmith runs
The scheduled sqlsmith run seems to hit its timeout quite often, so
reduce the number of loops a bit.
2023-10-07 21:00:23 +02:00
Fabrízio de Royes Mello
586b247c2b Fix cagg trigger on compat schema layer
The trigger `continuous_agg_invalidation_trigger` receives the hypertable
id as a parameter, as in the following example:

Triggers:
    ts_cagg_invalidation_trigger AFTER INSERT OR DELETE OR UPDATE ON
       _timescaledb_internal._hyper_3_59_chunk
    FOR EACH ROW EXECUTE FUNCTION
       _timescaledb_functions.continuous_agg_invalidation_trigger('3')

The problem is that in the compatibility layer, which uses PL/pgSQL code,
there is no way to pass the parameter down from the generated wrapper
trigger function to the underlying trigger function in another schema.

To solve this we simply create a new function in the deprecated
`_timescaledb_internal` schema that points to the trigger function, and
inside the C code we emit a WARNING message if the function is called
from the deprecated schema.
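
A rough sketch of what such a compatibility function could look like; the
C symbol name below is an assumption, not necessarily the one used in the
extension:

    -- Hypothetical wrapper in the deprecated schema, pointing at the same
    -- C trigger function so the trigger arguments are passed through.
    CREATE FUNCTION _timescaledb_internal.continuous_agg_invalidation_trigger()
    RETURNS trigger
    AS '$libdir/timescaledb', 'ts_continuous_agg_invalidation_trigger' -- assumed symbol
    LANGUAGE C;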
2023-10-07 14:39:35 -03:00
Jan Nidzwetzki
b514d05c2d Improve arg handling in continuous_agg_trigfn
When continuous_agg_invalidation_trigger is called without any argument,
fcinfo->context in continuous_agg_trigfn is NULL and it should not be
dereferenced.
2023-10-06 18:42:36 +02:00
Sven Klemm
f8738c838c Remove multinode tests from regresscheck-shared
The multinode tests in regresscheck-shared were already disabled
by default and removing them allows us to skip setting up the
multinode environment in regresscheck-shared. This database
is also used for sqlsmith, which will make sqlsmith runs more
targeted. Additionally, this is a step towards running
regresscheck-shared unmodified against our cloud.
2023-10-05 15:44:18 +02:00
Sven Klemm
fc0b905d3d PG16 Fix transparent decompression
The initial commit adjusting RELOPT_DEADREL for PG16 also freed the
RelOptInfo. Even though we don't want these internal RelOptInfos to be
referenced again, we must not free them, because the lower plan nodes
still reference them, leading to segfaults in the executor when those
RelOptInfos are accessed. With this change most of the compression tests
pass on PG16.
This patch also gets rid of RELOPT_DEADREL for earlier PG versions
and always removes the entry from simple_rel_array.
2023-10-05 15:35:01 +02:00
Konstantina Skovola
7a5cecf786 Fix errors and add isolation tests for OSM API
This commit fixes two issues with the osm range update API:

1. Deadlock between two concurrent range updates.
Two transactions concurrently attempting to update the range
of the OSM chunk by calling hypertable_osm_range_update would
deadlock because the dimension slice tuple was first locked with
a FOR KEY SHARE lock, then a FOR UPDATE lock was requested before
proceeding with the dimension slice tuple update.
This commit fixes the deadlock by taking FOR UPDATE lock on the
tuple from the start, before proceeding to update it.

2. Tuple concurrently updated error for hypertable tuple.
When one session tries to update the range of the OSM chunk and another
enables compression on the hypertable, the update fails with a "tuple
concurrently updated" error. This commit fixes this by first locking the
hypertable tuple with a FOR UPDATE lock before proceeding to UPDATE it.

Isolation tests for OSM range API are also added.
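
For illustration only, the lock-upgrade pattern that can deadlock when two
sessions run it concurrently on the same row (the actual fix lives in the
C code, which now takes the stronger lock up front; the slice id is made up):

    -- Session A and session B both run this on the same dimension slice:
    BEGIN;
    SELECT * FROM _timescaledb_catalog.dimension_slice
        WHERE id = 10 FOR KEY SHARE;
    -- ... both later try to upgrade the lock ...
    SELECT * FROM _timescaledb_catalog.dimension_slice
        WHERE id = 10 FOR UPDATE;  -- can deadlock
    COMMIT;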
2023-10-05 11:45:36 +03:00
Pallavi Sontakke
0d6f5f2634
Modify interpolate() call in gapfill test (#6146)
This makes it work correctly in dev-cloud tests, in the presence of other extensions.

Disable-check: force-changelog-file
2023-10-04 16:44:16 +05:30
Dipesh Pandit
0b87a069e7 Add metadata for chunk creation time
- Added creation_time attribute to timescaledb catalog table "chunk".
  Also, updated corresponding view timescaledb_information.chunks to
  include chunk_creation_time attribute.
- A newly created chunk is assigned the creation time during chunk
  creation to handle a new partition range for the given dimension (Time/
  SERIAL/BIGSERIAL/INT/...).
- In case of an already existing chunk, the creation time is updated as
  part of running the upgrade script. The current timestamp (now()) at the
  time of upgrade is assigned as the chunk creation time.
- Similarly, the downgrade script is updated to drop the attribute
  creation_time from catalog table "chunk".
- All relevant queries/views/test output have been updated accordingly.
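
A quick way to inspect the new attribute through the view once upgraded
(hypertable name hypothetical):

    SELECT chunk_name, range_start, range_end, chunk_creation_time
    FROM timescaledb_information.chunks
    WHERE hypertable_name = 'conditions'
    ORDER BY chunk_creation_time;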

Co-authored-by: Nikhil Sontakke <nikhil@timescale.com>
2023-10-04 14:49:05 +05:30
Feike Steenbergen
27f4382d27 Do not throw an error on missing GUC value
When connected to TimescaleDB with the timescaledb.disable_load=on GUC
setting, calling these functions causes an error to be thrown.

By specifying missing_ok=true, we prevent this situation from causing an
error for the user.
2023-10-03 17:12:27 +02:00
Sven Klemm
e7026a97a4 PG16: adjust tests to use debug_parallel_query
PG16 renamed force_parallel_mode to debug_parallel_query.

https://github.com/postgres/postgres/commit/5352ca22e
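
For reference, the renamed setting:

    -- PG15 and earlier
    SET force_parallel_mode = on;
    -- PG16
    SET debug_parallel_query = on;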
2023-10-02 20:59:42 +02:00
Konstantina Skovola
c1b500f0be Fix pylint errors 2023-10-02 20:59:08 +03:00
Konstantina Skovola
f505bd2207 Add CI check for incorrect catalog updates
There are certain modifications/commands that should not be allowed
in our update/downgrade scripts. For example, when adding columns to or
dropping columns from timescaledb catalog tables, the right way is to
drop and recreate the table with the desired definition instead of doing
ALTER TABLE ... ADD/DROP COLUMN. This is required to ensure consistent
attribute numbers across versions.
This workflow detects this and some other incorrect catalog table
modifications and fails with an error in that case.

Fixes #6049
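
A simplified sketch of the drop-and-recreate pattern, ignoring constraints,
indexes, privileges and the other bookkeeping a real update script has to
handle:

    -- Not allowed in update scripts:
    --   ALTER TABLE _timescaledb_catalog.chunk ADD COLUMN creation_time TIMESTAMPTZ;

    -- Instead, recreate the table with the desired definition:
    CREATE TABLE _timescaledb_catalog.chunk_tmp (
        LIKE _timescaledb_catalog.chunk INCLUDING ALL,
        creation_time TIMESTAMPTZ
    );
    INSERT INTO _timescaledb_catalog.chunk_tmp
        SELECT *, now() FROM _timescaledb_catalog.chunk;
    DROP TABLE _timescaledb_catalog.chunk;
    ALTER TABLE _timescaledb_catalog.chunk_tmp RENAME TO chunk;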
2023-10-02 12:15:59 +03:00
Ante Kresic
646950fabb NULL check column name in by_range/by_hash funcs
Users can pass a NULL value as the column name and cause a segfault
that would crash the instance. This change checks for NULL
values and errors out with an appropriate message.
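
For illustration (hypertable name hypothetical), the kind of call that
previously crashed the backend and now raises an error instead:

    -- Previously a segfault; now an error about the NULL column name.
    SELECT create_hypertable('conditions', by_range(NULL));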
2023-10-02 11:07:23 +02:00
Sven Klemm
91c0def6b2 Fix segfaults in generic hypertable api
This patch fixes segfaults in create_hypertable and add_dimension
when passing NULL parameters.
2023-10-02 10:46:48 +02:00
Ante Kresic
6e4d174a6a Fix bgw_custom flakiness by stopping bgws
When dropping a table, we get notices if that causes background jobs to
be canceled, which in turn fails tests. By stopping the workers
before dropping the table, we should reduce this flakiness.
2023-09-30 18:53:27 +02:00
Mats Kindahl
6bef59a21e Fix flaky cagg_insert
Isolation test `cagg_insert` is flaky because refresh steps can
complete in any order. This adds constraints so that completion is
reported in the same order in all test runs.

Fixes #5331
2023-09-28 17:59:09 +02:00
Lakshmi Narayanan Sreethar
642970f014 PG16: Update query relation permission checking
PG16 moves the permission checking information out of the range table
entries into a new struct - RTEPermissionInfo. This commit updates
timescaledb to use this new mechanism when compiling with PG16.

postgres/postgres@a61b1f74
postgres/postgres@b803b7d1
postgres/postgres@47bb9db7
2023-09-28 10:35:03 -03:00
Lakshmi Narayanan Sreethar
f40d3fdc94 PG16: New 'missing_ok' argument to build_attrmap_by_name
postgres/postgres@ad86d159
2023-09-28 10:35:03 -03:00
Dipesh Pandit
6019775ec5 Simplify hypertable DDL APIs
The current hypertable creation interface is heavily focused on a time
column, but since hypertables are about partitioning on more than just
time columns, we introduce a more generic API that supports different
types of partitioning keys.

The new interface introduces new versions of create_hypertable,
add_dimension, and a new function `set_partitioning_interval`
that replaces `set_chunk_time_interval`. The new functions accept an
instance of dimension_info that can be constructed using constructor
functions `by_range` and `by_hash`, allowing a more versatile and
future-proof API.

For example:

    SELECT create_hypertable('conditions', by_range('time'));
    SELECT add_dimension('conditions', by_hash('device'));

The old API remains, but will eventually be deprecated.
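
The exact signature is not spelled out here, but the intended usage of the
replacement function is along these lines (hypertable name hypothetical):

    -- Replaces set_chunk_time_interval in the new API.
    SELECT set_partitioning_interval('conditions', INTERVAL '1 week');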
2023-09-28 08:14:30 +02:00
Sven Klemm
d07cb914d2 Adjust APT package test
We now build APT packages for Debian 10, 11 and 12, and Ubuntu packages
for 20.04 and 22.04, for both amd64 and arm64.
We also build Ubuntu packages for 23.04 for amd64 but not for arm64,
because pgdg packages for postgres are not available for arm64 on
Ubuntu 23.04.
2023-09-27 10:33:30 +02:00
noctarius aka Christoph Engelbert
05b66accfa Fix location (and name) of .perltidyrc
Update the location and name of .perltidyrc in the README

After the recent change (244b3e637c)
the location and name of .perltidyrc have changed.

Signed-off-by: noctarius aka Christoph Engelbert <me@noctarius.com>
2023-09-27 09:40:48 +02:00
Sven Klemm
b339131c68 2.12.0 Post-release adjustments 2023-09-27 09:38:11 +02:00
Ante Kresic
1932c02fc9 Avoid decompressing batches using an empty slot
When running a COPY command into a compressed hypertable, we
could end up using an empty slot for filtering compressed batches.
This happens when a previously created copy buffer for a chunk
does not contain any new tuples to insert. The fix is to
verify the slots before attempting to do anything else.
2023-09-27 09:08:59 +02:00
Fabrízio de Royes Mello
32a695e18f Make CAggs materialized only by default
Historically, creating a Continuous Aggregate made it realtime by default,
but this confused users, especially when using the `WITH NO DATA` option.
It is also well known that realtime Continuous Aggregates can potentially
lead to issues with Hierarchical Continuous Aggregates and Data Tiering.

This improves the UX by making Continuous Aggregates non-realtime by default.
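
A sketch of the resulting behavior (object and column names hypothetical);
`timescaledb.materialized_only` is the option controlling realtime reads:

    -- Created as materialized-only by default now:
    CREATE MATERIALIZED VIEW conditions_daily
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 day', time) AS day, avg(temperature)
    FROM conditions
    GROUP BY 1
    WITH NO DATA;

    -- Opt back into realtime aggregation explicitly:
    ALTER MATERIALIZED VIEW conditions_daily
        SET (timescaledb.materialized_only = false);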
2023-09-26 15:53:26 -03:00
Fabrízio de Royes Mello
0ae6f95646 Use DROP DATABASE ... WITH (FORCE) on tests
PG13 introduced an option to the DROP DATABASE statement to terminate all
existing connections to the target database. Now that our minimum
supported version is PG13, it makes sense to use it in regression tests in
order to avoid potentially flaky tests.
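
For example (database name hypothetical):

    DROP DATABASE test_db WITH (FORCE);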
2023-09-26 14:35:23 -03:00
Sven Klemm
727a0278fd Fix planner distributed table count
The check for a distributed hypertable was done after ht had been
changed to the compressed hypertable, potentially leading to a miscount
or even a segfault when the cache lookup for the compressed hypertable
returned NULL.
2023-09-26 18:53:48 +02:00
Jan Nidzwetzki
d9e4a71af3 Ensure database is unused in bgw_launcher test
The bgw_launcher test changes the tablespace for a database. However,
this requires that the database is unused and that no BGWs are accessing
the database. This PR ensures that the background workers are stopped before
the tablespace is changed.
2023-09-26 09:58:44 +02:00
Jan Nidzwetzki
3f19e6aa97 Remove outdated windows CI information
The CONTRIBUTING.md file states that our CI does not run regression
checks on Windows, which is outdated information. In fact, we do execute
regression checks on Windows in our current CI. This pull request
removes the outdated section.
2023-09-26 09:58:26 +02:00
Ante Kresic
5cdd414094 Fix bgw_custom test flakiness
One of the test cases changes the chunk status in order to make the
background job fail, ensuring that jobs continue to be
scheduled even after such a scenario. The problem lies in
the race condition between the compression policy and the status
update, which both update the catalog and thus cause random
test failures. The solution is to add the compression policy
after updating the status, eliminating the possibility of a
race condition.
2023-09-25 14:04:11 +02:00
Jan Nidzwetzki
42c3750481 Adjust multi-node test output
In 683e2bcf189f478ef51d2bc9b409eabf483df18a the default
schedule_interval of the compression policy was changed. This PR adjusts
the test output in the optional multi-node tests.
2023-09-25 12:51:21 +02:00
Jan Nidzwetzki
092cb6dfd6 Make compression_bgw executable multiple times
The test compression_bgw cannot be executed multiple times because a
role was created and not properly deleted. This PR adds the needed
teardown step to the test.
2023-09-25 12:51:05 +02:00
Sven Klemm
bdcca067b3 Fix assert in dimension slice lookup
Calling ts_scan_iterator_next will advance internal scanner data
structures. It is therefore not safe to use the iterator after
ts_scan_iterator_next returned NULL and expect to be able to access
the previous tuple. Under certain circumstances this may point the
iterator to tuples not normally visible.
2023-09-25 10:34:33 +02:00
gayyappan
887f538cd7 Fix error in osmcallback hook
The callback hook should use the old pointer instead of the new one;
using the new pointer causes a segfault when it is NULL.
2023-09-23 08:36:26 -04:00
Jan Nidzwetzki
683e2bcf18 Schedule compression policy more often
By default, the compression policy is scheduled to run every
chunk_time_interval / 2 in the current implementation, which equals three
days and twelve hours with our default settings. This schedule interval
was sufficient for previous versions of TimescaleDB. However, with the
introduction of features like mutable compression and ON CONFLICT .. DO
UPDATE queries, regular DML operations decompress data. To ensure that
modified data is compressed earlier, this patch reduces the schedule
interval of the compression policy to run at least every 12 hours.
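
Existing policies can be adjusted through `alter_job`, for example
(hypertable name hypothetical):

    SELECT alter_job(job_id, schedule_interval => INTERVAL '12 hours')
    FROM timescaledb_information.jobs
    WHERE proc_name = 'policy_compression'
      AND hypertable_name = 'conditions';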
2023-09-22 12:30:57 +02:00
Jan Nidzwetzki
3589d379b7 Run scheduled windows test
Our nightly windows tests have not been running for several months,
causing the status badge in our readme to remain in a failing state
without ever turning green. This pull request enables the daily
scheduled windows builds once again.
2023-09-21 21:53:19 +02:00
Jan Nidzwetzki
14e275a514 Make sure BGW are stopped in test teardown
This patch stops the background worker in the bgw_launcher test teardown
before the database is dropped. The current version of the test is flaky
because sometimes the database cannot be dropped due to BGW activity.
2023-09-21 15:34:22 +02:00
Jan Nidzwetzki
ebaee6c46f Fix race condition in bgw_db_scheduler
The regression test bgw_db_scheduler tests if the BGW scheduler starts a
background job after it is created. However, this only happens if the
previous instance of the BGW scheduler is no longer active. If the old
BGW scheduler instance is still running, it will pick up the job and
prevent it from starting in the current test.
2023-09-21 12:03:15 +02:00
Jan Nidzwetzki
abf79f47fd Support for multiple BGW in scheduler mock
The current version of the BGW scheduler mock does not support multiple
background workers. If a second worker is registered, the reference to
the previously registered worker is lost. So, the implementation of
the mock scheduler only waits for the latest registered worker before
updating the mocked current time, which causes a race condition. If the
previous workers are still active and logging messages after this point,
the messages are logged with the updated mocked time. This leads to a
non-deterministic behavior and test failures. For example:

  msg_no | mock_time |    application_name     |    msg
---------+-----------+-------------------------+----------------------
-      1 |         0 | Retention Policy [1002] | job 1002 [...]
[...]
+      1 |   1000000 | Retention Policy [1002] | job 1002 [...]
2023-09-21 08:33:42 +02:00
Jan Nidzwetzki
1327289a77 Remove outdated URL from install message
With the current version of our docs, the first and third URL in our
extension installation message pointed to the same page. This PR removes
one of the duplicates.
2023-09-20 20:03:11 +02:00
Sven Klemm
8c41757358 Release 2.12.0
This release contains performance improvements for compressed hypertables
and continuous aggregates and bug fixes since the 2.11.2 release.
We recommend that you upgrade at the next available opportunity.

This release moves all internal functions from the _timescaledb_internal
schema into the _timescaledb_functions schema. This separates code from
internal data objects and improves security by allowing more restrictive
permissions for the code schema. If you are calling any of those internal
functions you should adjust your code as soon as possible. This version
also includes a compatibility layer that allows calling them in the old
location but that layer will be removed in 2.14.0.
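
For reference, the split between the two schemas can be inspected directly
in the catalog, e.g. to see how many functions live in each:

    SELECT n.nspname AS schema, count(*) AS functions
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname IN ('_timescaledb_internal', '_timescaledb_functions')
    GROUP BY 1;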

**PostgreSQL 12 support removal announcement**
Following the deprecation announcement for PostgreSQL 12 in TimescaleDB 2.10,
PostgreSQL 12 is not supported starting with TimescaleDB 2.12.
Currently supported PostgreSQL major versions are 13, 14 and 15.
PostgreSQL 16 support will be added with a following TimescaleDB release.

**Features**
* #5137 Insert into index during chunk compression
* #5150 MERGE support on hypertables
* #5515 Make hypertables support replica identity
* #5586 Index scan support during UPDATE/DELETE on compressed hypertables
* #5596 Support for partial aggregations at chunk level
* #5599 Enable ChunkAppend for partially compressed chunks
* #5655 Improve the number of parallel workers for decompression
* #5758 Enable altering job schedule type through `alter_job`
* #5805 Make logrepl markers for (partial) decompressions
* #5809 Relax invalidation threshold table-level lock to row-level when refreshing a Continuous Aggregate
* #5839 Support CAgg names in chunk_detailed_size
* #5852 Make set_chunk_time_interval CAggs aware
* #5868 Allow ALTER TABLE ... REPLICA IDENTITY (FULL|INDEX) on materialized hypertables (continuous aggregates)
* #5875 Add job exit status and runtime to log
* #5909 CREATE INDEX ONLY ON hypertable creates index on chunks

**Bugfixes**
* #5860 Fix interval calculation for hierarchical CAggs
* #5894 Check unique indexes when enabling compression
* #5951 _timescaledb_internal.create_compressed_chunk doesn't account for existing uncompressed rows
* #5988 Move functions to _timescaledb_functions schema
* #5788 Chunk_create must add an existing table or fail
* #5872 Fix duplicates on partially compressed chunk reads
* #5918 Fix crash in COPY from program returning error
* #5990 Place data in first/last function in correct mctx
* #5991 Call eq_func correctly in time_bucket_gapfill
* #6015 Correct row count in EXPLAIN ANALYZE INSERT .. ON CONFLICT output
* #6035 Fix server crash on UPDATE of compressed chunk
* #6044 Fix server crash when using duplicate segmentby column
* #6045 Fix segfault in set_integer_now_func
* #6053 Fix approximate_row_count for CAggs
* #6081 Improve compressed DML datatype handling
* #6084 Propagate parameter changes to decompress child nodes

**Thanks**
* @ajcanterbury for reporting a problem with lateral joins on compressed chunks
* @alexanderlaw for reporting multiple server crashes
* @lukaskirner for reporting a bug with monthly continuous aggregates
* @mrksngl for reporting a bug with unusual user names
* @willsbit for reporting a crash in time_bucket_gapfill
2023-09-20 11:01:44 +02:00
Sven Klemm
8c7843b7a7 Ignore deleted files in changelog_check
Only check files that were added, copied, modified or renamed in the
changelog check.
2023-09-20 11:01:44 +02:00
Sven Klemm
9dc699a59b Fix ignored workflow logic
The paths filter in GitHub workflows will trigger when at least
one of the paths matches, unless everything else has been explicitly
excluded. For the ignored workflows we want them to trigger only
when the explicitly specified files are changed and nothing
else.
2023-09-20 09:12:23 +02:00
gayyappan
6e5d687184 Add drop_chunks hook for OSM
Introduce version number for OsmCallbacks struct

Add a callback for cascading drop chunks to OSM and
integrate with drop_chunks

Add backward compatibility for OsmCallbacks
2023-09-19 15:35:14 -04:00