757 Commits

Author SHA1 Message Date
Mats Kindahl
3947c01124 Support sending telemetry event reports
Add a table `_timescaledb_catalog.telemetry_event` containing
events that should be sent out with telemetry reports. The table will
be truncated after the report is generated.
2023-05-12 16:03:05 +02:00
Dmitry Simonenko
8ca17e704c Fix ALTER TABLE SET with normal tables
Running ALTER TABLE SET with multiple SET clauses on a regular PostgreSQL table
produces an irrelevant error when the timescaledb extension is installed.

Fix #5641
2023-05-04 16:32:25 +03:00
Nikhil Sontakke
ed8ca318c0 Quote username identifier appropriately
Need to use quote_ident() on the user roles. Otherwise the
extension scripts will fail.

Co-authored-by: Mats Kindahl <mats@timescale.com>
2023-04-28 16:53:43 +05:30
Fabrízio de Royes Mello
b16bf3b100 Fix post repair tests
Commit 3f9cb3c2 introduced new repair tests for broken Continuous
Aggregates with a JOIN clause, but post.repair.sql was not properly
calling post.repair.cagg_joins.sql because of incorrect usage of the
psql `\if` statement.
2023-04-17 14:47:06 -03:00
Fabrízio de Royes Mello
a3d778f7a0 Add CI check for missing gitignore entries
Whenever we create a template SQL file (*.sql.in) we should add the
respective .gitignore entry for the generated test files.

So add a CI check for missing .gitignore entries for generated
test files.
2023-04-14 12:46:20 -03:00
Rafia Sabih
3f9cb3c27a Pass join related structs to the cagg rte
In the case of joins in continuous aggregates, pass the required
structs to the newly created RTE. These values are required by the
planner to query the materialized view.

Fixes #5433
2023-04-13 04:57:33 +02:00
Mats Kindahl
777c599a34 Do not segfault on large histogram() parameters
There is a bug in `width_bucket()` causing an overflow and a subsequent
NaN value as a result of dividing by `+inf`. The NaN value is
interpreted as an integer and hence generates an index out of range for
the buckets.

This commit fixes this by generating an error rather than
segfaulting for bucket indexes that are out of range.
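The failure mode can be sketched in Python (an assumed simplification, not the actual C code):

```python
import math

# Assumed simplification of the bug described above: an intermediate
# value overflows to +inf in IEEE-754 double arithmetic, and a division
# involving +inf yields NaN. Reinterpreted as an integer bucket index,
# NaN falls outside the valid range of the buckets array.
span = 1e308 * 10            # overflows to +inf
bad = span / span            # inf / inf -> nan

def checked_bucket(idx, nbuckets):
    # The fix described above: report an error for out-of-range indexes
    # instead of indexing (and segfaulting) past the buckets array.
    if math.isnan(idx) or not 0 <= idx < nbuckets:
        raise ValueError("bucket index out of range")
    return int(idx)
```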
2023-03-28 12:47:02 +02:00
Konstantina Skovola
8cccc375fb Add license information to extension description
Fixes #5436
2023-03-20 13:27:41 -03:00
Zoltan Haindrich
790b322b24 Fix DEFAULT value handling in decompress_chunk
The SQL function decompress_chunk did not fill in
default values during its operation.

Fixes #5412
2023-03-16 09:16:50 +01:00
Bharathy
c13ed17fbc Fix DELETE command tag
DELETE on hypertables always reports 0 as affected rows.
This patch fixes this issue.
2023-03-07 20:45:12 +05:30
Sven Klemm
f680b99529 Fix assertion in calculate_chunk_interval for negative target size
When called with a negative chunk_target_size_bytes,
calculate_chunk_interval triggers an assertion failure. This patch adds
error handling for this condition. Found by sqlsmith.
2023-03-07 14:50:57 +01:00
Mats Kindahl
a6ff7ba6cc Rename columns in old-style continuous aggregates
For continuous aggregates with the old-style partial aggregates
renaming columns that are not in the group-by clause will generate an
error when upgrading to a later version. The reason is that it is
implicitly assumed that the name of the column is the same as for the
direct view. This holds true for new-style continuous aggregates, but is
not always true for old-style continuous aggregates. In particular,
columns that are not part of the `GROUP BY` clause can have an
internally generated name.

This commit fixes that by extracting the name of the column from the
partial view and using it when renaming the partial view column and the
materialized table column.
2023-03-03 14:02:37 +01:00
Alexander Kuzmenkov
fd66f5936a Warn about mismatched chunk cache sizes
Just noticed abysmal INSERT performance when experimenting with one of
our customers' data set, and turns out my cache sizes were
misconfigured, leading to constant hypertable chunk cache thrashing.
Show a warning to detect this misconfiguration. Also use more generous
defaults, we're not supposed to run on a microwave (unlike Postgres).
2023-02-14 19:32:41 +04:00
Bharathy
9a2cbe30a1 Fix ChunkAppend, ConstraintAwareAppend child subplan
When a TidRangeScan is a child of a ChunkAppend or ConstraintAwareAppend node, an
error is reported: "invalid child of chunk append: Node (26)". This patch
fixes the issue by recognising TidRangeScan as a valid child.

Fixes: #4872
2023-01-18 18:06:30 +05:30
Sven Klemm
08bb21f7e6 2.9.0 Post-release adjustments
Add 2.9.0 to update test scripts and adjust downgrade scripts for
2.9.0. Additionally adjust CHANGELOG to sync with the actual release
CHANGELOG and add PG15 to CI.
2022-12-19 19:10:24 +01:00
Alexander Kuzmenkov
27310470be Allow AsyncAppend under IncrementalSort
We forgot to add a case for it.
2022-12-19 21:16:14 +04:00
Bharathy
dd65a6b436 Fix segfault after second ANALYZE
The issue occurs in extended query protocol mode only, where every
query goes through PREPARE and EXECUTE phases. The first time ANALYZE
is executed, a list of relations to be vacuumed is extracted and
saved in a list. This list is referenced in the parsetree node. Once
execution of ANALYZE is complete, this list is cleaned up; however,
the reference to it in the parsetree node is not. When ANALYZE is
executed a second time, a segfault happens as we access an
invalid memory location.

Fixed the issue by restoring the saved value in the parsetree node
once ANALYZE completes its execution.

Fixes #4857
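A minimal sketch of the save/restore pattern, with hypothetical names
(the actual fix lives in the C code handling the ANALYZE parse tree):

```python
# Hypothetical sketch of the save/restore fix described above: the
# relation list built for one execution must not remain referenced from
# the cached parse tree, or the next EXECUTE accesses freed memory.
class AnalyzeStmt:
    def __init__(self, rels):
        self.rels = rels

def exec_analyze(stmt, expand, vacuum):
    saved = stmt.rels                 # value from the cached parse tree
    stmt.rels = expand(stmt.rels)     # per-execution list, freed afterwards
    try:
        vacuum(stmt.rels)
    finally:
        stmt.rels = saved             # restore so re-execution stays valid
```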
2022-12-12 17:34:41 +05:30
Fabrízio de Royes Mello
35fa891013 Add missing gitignore entry
Pull request #4998 introduced a new template SQL test file but missed
adding the proper `.gitignore` entry to ignore generated test files.
2022-11-23 05:08:05 -03:00
Lakshmi Narayanan Sreethar
7bc6e56cb7 Fix plan_hashagg test failure in PG15
Updated the expected output of plan_hashagg to reflect changes introduced
by postgres/postgres@4b160492.
2022-11-22 22:36:22 +05:30
Fabrízio de Royes Mello
a4356f342f Remove trailing whitespaces from test code 2022-11-18 16:31:47 -03:00
Sachin
1e3200be7d USE C function for time_bucket() offset
Instead of using SQL UDF for handling offset parameter
added ts_timestamp/tz/date_offset_bucket() which will
handle offset
2022-11-17 13:08:19 +00:00
Alexander Kuzmenkov
1b65297ff7 Fix memory leak with INSERT into compressed hypertable
We used to allocate some temporary data in the ExecutorContext.
2022-11-16 13:58:52 +04:00
Alexander Kuzmenkov
676d1fb1f1 Fix const null clauses in runtime chunk exclusion
The code we inherited from postgres expects that if we have a const null
or false clause, it's going to be the single one, but that's not true
for runtime chunk exclusion because we don't try to fold such
restrictinfos after evaluating the mutable functions. Fix it to also
work for multiple restrictinfos.
2022-11-15 21:49:39 +04:00
Markos Fountoulakis
e2b7c76c9c Disable MERGE when using hypertables
Fixes #4930

Co-authored-by: Lakshmi Narayanan Sreethar <lakshmi@timescale.com>
2022-11-14 13:57:17 +02:00
Bharathy
2a64450651 Add new tests to gitignore list
Since new tests specific to PG15 were added, the generated .sql files for these tests need to be added to .gitignore.
2022-11-07 22:14:39 +05:30
Bharathy
3a9688cc97 Extra Result node on top of CustomScan on PG15
On PG15 a CustomScan is by default not projection capable, so PG15 wraps
the node in a Result node. This change in PG15 causes test result files
that contain EXPLAIN output to fail. This patch fixes the plan outputs.

Fixes #4833
2022-11-07 21:20:08 +05:30
Mats Kindahl
b95576550c Add printout for multiple jobs with same job_id
We have a rare condition where a debug build asserts on more than one
job with the same job id. Since it is hard to create a reproduction,
this commit adds a printout for those conditions and print out all the
jobs with that job id in the postgres log.

Part of #4863
2022-11-07 14:17:38 +01:00
Bharathy
c06b647680 pg_dump on PG15 does not log messages with log level set to PG_LOG_INFO.
The version 15 pg_dump program does not log any messages with log level <
PG_LOG_WARNING to stdout, whereas this check is not present in version
14, so we see the corresponding tests fail with missing log information.
This patch fixes this by suppressing that log output, so that the tests
pass on all versions of PostgreSQL.

Fixes #4832
2022-11-01 20:13:17 +05:30
Alexander Kuzmenkov
1cc8c15cad Do not clobber the baserel cache on UDF error
The baserel cache should only be allocated and freed by the top-level
query.
2022-11-01 18:01:26 +04:00
Alexander Kuzmenkov
39c9921947 Fix flaky copy_memory_usage tests
The changes from e555eea led to flakiness. They are a leftover of an
earlier version and probably not needed anymore.

The original version is also still flaky on Windows, so use linear
regression to tell if the memory usage is increasing.

Verified to still fail on 2.7.x
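The regression check can be sketched as a least-squares slope over the
per-iteration memory readings (illustrative only, not the test's actual code):

```python
# Sketch of the approach described above: fit a least-squares slope to
# the memory readings and flag a leak only when usage trends upward,
# which tolerates noisy individual samples better than a fixed threshold.
def slope(ys):
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def memory_increasing(readings):
    return slope(readings) > 0
```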
2022-10-21 21:20:40 +04:00
Jan Nidzwetzki
e555eea9db Fix performance regressions in the copy code
In 8375b9aa536a619a5ac2644e0dae3c25880a4ead, a patch was added to handle
chunk closes during an ongoing copy operation. However, that patch
introduced a performance regression: all MultiInsertBuffers were deleted
after they were flushed. In this PR, the performance regression is fixed.
The most commonly used MultiInsertBuffers survive flushing.

The 51259b31c4c62b87228b059af0bbf28caa143eb3 commit changed the way the
per-tuple context is used. Since this commit, more objects are stored in
this context. The size of the context was used to track the tuple size on
PG < 14. The extra objects in the context led to wrong (very large)
results and caused a flush after almost every tuple read.

The cache synchronization introduced in
296601b1d7aba7f23aea3d47c617e2d6df81de3e is reverted. With the current
implementation, `MAX_PARTITION_BUFFERS` buffers survive the flush. If
`timescaledb.max_open_chunks_per_insert` were lower than
`MAX_PARTITION_BUFFERS`, a buffer flush would be performed after each
tuple read.
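The retention policy can be sketched as follows (a hypothetical
simplification, not the actual C data structures):

```python
# Hypothetical sketch of the policy described above: after a flush, keep
# the most heavily used chunk buffers alive instead of deleting every
# MultiInsertBuffer, so buffers for hot chunks are not rebuilt after
# every flush.
def survivors_after_flush(buffers, keep):
    # buffers maps chunk id -> number of tuples inserted via that buffer
    ranked = sorted(buffers.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:keep])
```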
2022-10-21 09:02:03 +02:00
Konstantina Skovola
54ed0d5c05 Introduce fixed schedules for background jobs
Currently, the next start of a scheduled background job is
calculated by adding the `schedule_interval` to its finish
time. This does not allow scheduling jobs to execute at fixed
times, as the next execution is "shifted" by the job duration.

This commit introduces the option to execute a job on a fixed
schedule instead. Users are expected to provide an initial_start
parameter on which subsequent job executions are aligned. The next
start is calculated by computing the next time_bucket of the finish
time with initial_start origin.
An `initial_start` parameter is added to the compression, retention,
reorder and continuous aggregate `add_policy` signatures. By passing
it upon policy creation, users indicate that the policy will execute on
a fixed schedule; if `initial_start` is not provided, the schedule
remains drifting.
To allow picking a drifting schedule when registering a UDA, an
additional parameter `fixed_schedule` is added to `add_job`; setting
it to false specifies the old behavior.

Additionally, an optional TEXT parameter, `timezone`, is added to both
add_job and add_policy signatures, to address the 1-hour shift in
execution time caused by DST switches. As internally the next start of
a fixed schedule job is calculated using time_bucket, the timezone
parameter allows using timezone-aware buckets to calculate
the next start.
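The next-start computation can be sketched as follows (assumed
semantics, ignoring the timezone handling):

```python
from datetime import datetime, timedelta

# Assumed sketch of the fixed-schedule computation described above:
# bucket the finish time by schedule_interval with initial_start as the
# origin, then advance one interval, so starts stay aligned no matter
# how long the job ran.
def next_start(finish, interval, initial_start):
    buckets = (finish - initial_start) // interval   # containing bucket
    return initial_start + (buckets + 1) * interval
```

For example, with initial_start at midnight and a one-hour
schedule_interval, a job finishing at 10:25 is next started at 11:00
rather than 11:25.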
2022-10-18 18:46:57 +03:00
Alexander Kuzmenkov
bde337e92d Fix the flaky pg_dump test
It was frequently failing on Windows. Sort by what is actually printed.
2022-10-17 21:17:48 +03:00
Fabrízio de Royes Mello
e0bbd4042a Fix missing upgrade/downgrade tests DDL validation
Recently we fixed a DDL error (#4739) after upgrading to the 2.8.0 version
that, surprisingly, the CI upgrade/downgrade tests didn't catch during
the development of the feature (#4552).

Fixed it by adding a specific query in the `post.catalog.sql` script to
make sure we check all the constraints of our internal tables and
catalog.
2022-10-07 16:40:30 -03:00
Bharathy
f6dd55a191 Hypertable FK reference to partitioned table
Consider a hypertable which has a foreign key constraint on a
referenced table that is a partitioned table. In such a case, the
foreign key constraint is duplicated for each partition of that table.
When we insert into the hypertable, we end up checking the foreign key
constraint multiple times, which obviously leads to a foreign key
constraint violation. Instead, we now only check the foreign key
constraint of the parent of the partitioned table.

Fixes #4684
2022-09-27 21:09:05 +05:30
Sven Klemm
85d0e16a98 Fix flaky pg_dump test
Use DROP DATABASE WITH(FORCE) to drop the database in pg_dump test
since occasionally there would still be connections to the database
leading to test failures. Unfortunately PG12 does not support that
syntax so we have to drop without that option on PG12.
2022-09-16 15:01:31 +02:00
Bharathy
b869f91e25 Show warnings during create_hypertable().
The schema of the base table on which a hypertable is created should
define columns with proper data types. As per the PostgreSQL best
practices Wiki (https://wiki.postgresql.org/wiki/Don't_Do_This), one
should not define columns with CHAR, VARCHAR, or VARCHAR(N); instead use
the TEXT data type. Similarly, instead of using timestamp, one should
use timestamptz. This patch reports a WARNING to the end user when
creating a hypertable if the underlying parent table has columns of the
above-mentioned data types.

Fixes #4335
2022-09-12 18:47:47 +05:30
Alexander Kuzmenkov
8e4dcddad6 Make the copy_memory_usage test less flaky
Increase the failure threshold.
2022-09-08 22:13:18 +03:00
Alexander Kuzmenkov
ae6773fca6 Fix joins in RETURNING
To make it work, it is enough to properly pass the parent of the
PlanState while initializing the projection in RETURNING clause.
2022-08-31 14:14:34 +03:00
Matvey Arye
c43307387e Add runtime exclusion for hypertables
In some cases, entire hypertables can be excluded
at runtime. Some examples:

   WHERE col @> ANY(subselect)
   if the subselect returns empty set

   WHERE col op (subselect)
   if the op is a strict operator and
   the subselect returns empty set.

When qual clauses are not on partition columns, we use
the old chunk exclusion, otherwise we try hypertable exclusion.

Hypertable exclusion is executed once per hypertable.
This is cheaper than chunk exclusion,
which runs once per chunk.
2022-08-25 13:17:21 -04:00
Sven Klemm
5d934baf1d Add timezone support to time_bucket
This patch adds a new function time_bucket(period,timestamp,timezone)
which supports bucketing for arbitrary timezones.
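As a sketch of the semantics (assumed; shown for day-sized buckets), the
bucket boundary becomes local midnight in the given timezone rather than
midnight UTC:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Assumed sketch of timezone-aware day bucketing: convert the timestamp
# to local time in the requested zone and truncate to local midnight.
def bucket_day(ts, tz):
    local = ts.astimezone(ZoneInfo(tz))
    return local.replace(hour=0, minute=0, second=0, microsecond=0)

ts = datetime(2022, 8, 25, 1, 0, tzinfo=timezone.utc)
# 01:00 UTC is still the previous evening in New York, so the bucket
# starts at local midnight of Aug 24, not Aug 25.
```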
2022-08-25 12:59:05 +02:00
Alexander Kuzmenkov
bc85fb1cf0 Fix the flaky dist_ddl test
Add an option to hide the data node names from error messages.
2022-08-24 15:51:27 +03:00
Alexander Kuzmenkov
51259b31c4 Fix OOM in large INSERTs
Do not allocate various temporary data in PortalContext, such as the
hyperspace point corresponding to the row, or the intermediate data
required for chunk lookup.
2022-08-23 19:40:51 +03:00
Sven Klemm
1c0bf4b777 Support bucketing by month in time_bucket_gapfill 2022-08-22 19:07:32 +02:00
Sven Klemm
c488fcdbc9 Allow bucketing by month, year, century in time_bucket
This patch allows bucketing by month for time_bucket with date,
timestamp or timestamptz. When bucketing by month the interval
must only contain month components. When using origin together
with bucketing by month only the year and month components are
honoured.

To bucket by month we get the year and month of a date and convert
that to the nth month since origin. This allows us to treat month
bucketing similar to int bucketing. During this process we ignore
the day component and therefore only support bucketing by full months.
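The nth-month scheme can be sketched as follows (a simplified, assumed
version of the arithmetic):

```python
# Simplified sketch of the scheme described above: express a date as the
# nth month since the origin, bucket that integer the same way as int
# bucketing, then convert back. The day component is ignored, so only
# full-month buckets are supported.
def bucket_month(year, month, origin_year, origin_month, width_months):
    n = (year - origin_year) * 12 + (month - origin_month)
    start = (n // width_months) * width_months       # floor to bucket start
    total = origin_year * 12 + (origin_month - 1) + start
    return total // 12, total % 12 + 1               # (year, month) of start
```

For example, bucketing 2022-08 by 3 months with origin 2000-01 yields
(2022, 7), the start of that quarter.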
2022-08-22 19:07:32 +02:00
Markos Fountoulakis
9c6433e6ed Handle TRUNCATE TABLE on chunks
Make truncating an uncompressed chunk drop the data also for the case
where the data resides in a corresponding compressed chunk.

Generate invalidations for Continuous Aggregates after TRUNCATE, so
as to have consistent refresh operations on the materialization
hypertable.

Fixes #4362
2022-08-17 10:23:40 +03:00
Joshua Lockerman
a3cfc091e8 Re-enable telemetry tests
They should be functioning after 2.7.2
2022-08-15 12:13:51 -04:00
Fabrízio de Royes Mello
5c129be60f Fix partitioning functions
When executing `get_partition_{hash|for_key}` inside an IMMUTABLE
function we're getting the following error:

`ERROR: unsupported expression argument node type 112`

This error occurs because the underlying `resolve_function_argtype` was
not dealing with the `T_Param` node type.

Fixed it by properly handling the `T_Param` node type, returning the
`paramtype` as the argument type.

Fixes #4575
2022-08-08 10:14:10 -03:00
Fabrízio de Royes Mello
d35ea0f997 Fix assertion in GRANT .. ON ALL TABLES IN SCHEMA
When working on a fix for #4555 we discovered that executing
`{GRANT|REVOKE} .. ON ALL TABLES IN SCHEMA` in an empty schema led to an
assertion failure, because we change the way that command is executed by
collecting all objects involved and processing them one by one.

Fixed it by executing the previous process utility hook only when the
list of target objects is not empty.

Fixes #4581
2022-08-08 09:39:30 -03:00
Erik Nordström
025bda6a81 Add stateful partition mappings
Add a new metadata table `dimension_partition` which explicitly and
statefully details how a space dimension is split into partitions, and
(in the case of multi-node) which data nodes are responsible for
storing chunks in each partition. Previously, partition and data nodes
were assigned dynamically based on the current state when creating a
chunk.

This is the first in a series of changes that will add more advanced
functionality over time. For now, the metadata table simply writes out
what was previously computed dynamically in code. Future code changes
will alter the behavior to do smarter updates to the partitions when,
e.g., adding and removing data nodes.

The idea of the `dimension_partition` table is to minimize changes in
the partition to data node mappings across various events, such as
changes in the number of data nodes, number of partitions, or the
replication factor, which affect the mappings. For example, increasing
the number of partitions from 3 to 4 currently leads to redefining all
partition ranges and data node mappings to account for the new
partition. Complete repartitioning can be disruptive to multi-node
deployments. With stateful mappings, it is possible to split an
existing partition without affecting the other partitions (similar to
partitioning using consistent hashing).

Note that the dimension partition table expresses the current state of
space partitions; i.e., the space-dimension constraints and data nodes
to be assigned to new chunks. Existing chunks are not affected by
changes in the dimension partition table, although an external job
could rewrite, move, or copy chunks as desired to comply with the
current dimension partition state. As such, the dimension partition
table represents the "desired" space partitioning state.

Part of #4125
2022-08-02 11:38:32 +02:00