mirror of https://github.com/timescale/timescaledb.git synced 2025-05-18 03:23:37 +08:00

3994 Commits

Author SHA1 Message Date
Zoltan Haindrich
f58d8c20c2 Release 2.11.0 2023-05-17 15:04:30 +02:00
Konstantina Skovola
19dd7bbd7a Fix DISTINCT query with JOIN on multiple segmentby columns
Previously when adding equivalence class members for the compressed
chunk's variables, we would only consider Vars. This led us to ignore
cases where the Var was wrapped in a RelabelType,
returning inaccurate results.

Fixed the issue by also accepting Vars wrapped in a RelabelType for
the segmentby equivalence class.

Fixes 
2023-05-17 12:56:12 +03:00
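A minimal C sketch of the pattern the commit above describes; the function name and surrounding logic are illustrative, not the actual TimescaleDB code.

```c
#include "postgres.h"
#include "nodes/primnodes.h"

/*
 * Illustrative only: accept a Var even when the planner has wrapped it in a
 * RelabelType (a binary-compatible cast), instead of considering bare Vars
 * only.
 */
static Var *
resolve_var_ignoring_relabel(Expr *expr)
{
	/* Strip the RelabelType wrapper, if any, to reach the underlying Var. */
	if (IsA(expr, RelabelType))
		expr = ((RelabelType *) expr)->arg;

	if (IsA(expr, Var))
		return (Var *) expr;

	return NULL;				/* not a (possibly relabeled) plain Var */
}
```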
Alexander Kuzmenkov
fb65086b55 Add a ubsan suppression for overflow in histogram()
The overflow is in Postgres code and doesn't lead to bugs.
2023-05-16 21:32:52 +02:00
Alexander Kuzmenkov
8ff0648fd0 Fix ubsan failure in gorilla decompression
Also add more tests
2023-05-16 21:32:52 +02:00
Alexander Kuzmenkov
936d751037 Add AddressSanitizer instrumentation for memory contexts
Use manual poison/unpoison at the existing Valgrind hooks, so that
AddressSanitizer sees palloc/pfree as well, not only the underlying
mallocs which are called much less often.

Fix some out-of-bound reads found with this instrumentation.
2023-05-16 21:32:52 +02:00
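A conceptual sketch of the instrumentation described above, assuming a build with -fsanitize=address: PostgreSQL's Valgrind client requests, issued at the palloc/pfree hooks, are mapped onto the AddressSanitizer poison/unpoison calls from <sanitizer/asan_interface.h>. This illustrates the idea rather than reproducing the actual patch.

```c
#include <sanitizer/asan_interface.h>

/*
 * Map the Valgrind memory-state requests that PostgreSQL already issues at
 * its palloc/pfree hooks onto AddressSanitizer manual poisoning, so ASan
 * flags accesses to freed or unallocated palloc chunks, not only errors in
 * the far less frequent underlying malloc'ed blocks.
 */
#define VALGRIND_MAKE_MEM_NOACCESS(addr, size) \
	ASAN_POISON_MEMORY_REGION((addr), (size))
#define VALGRIND_MAKE_MEM_UNDEFINED(addr, size) \
	ASAN_UNPOISON_MEMORY_REGION((addr), (size))
#define VALGRIND_MAKE_MEM_DEFINED(addr, size) \
	ASAN_UNPOISON_MEMORY_REGION((addr), (size))
```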
Alexander Kuzmenkov
f58500a32c Add clang-tidy changes to git blame ignore list 2023-05-16 21:32:52 +02:00
Alexander Kuzmenkov
030bfe867d Fix errors in decompression found by fuzzing
For deltadelta and gorilla codecs, add various length and consistency
checks that prevent segfaults on incorrect data.
2023-05-15 18:33:22 +02:00
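A hedged sketch of the kind of length and consistency check meant here; the macro name and the row-count cap are hypothetical, not the actual decompression code.

```c
#include "postgres.h"

#define MAX_ROWS_PER_BATCH 1000		/* hypothetical cap on rows per compressed batch */

/*
 * Hypothetical consistency-check macro: instead of trusting lengths and
 * offsets read from the compressed blob (and possibly segfaulting), report
 * corrupt data through the regular error path.
 */
#define CheckCompressedData(X)                                            \
	do {                                                                  \
		if (unlikely(!(X)))                                               \
			ereport(ERROR,                                                \
					(errcode(ERRCODE_DATA_CORRUPTED),                     \
					 errmsg("the compressed data is corrupt")));          \
	} while (0)

static void
validate_batch_header(uint32 num_elements, uint32 data_size, uint32 expected_size)
{
	/* never trust sizes taken from the on-disk representation */
	CheckCompressedData(num_elements <= MAX_ROWS_PER_BATCH);
	CheckCompressedData(data_size == expected_size);
}
```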
Alexander Kuzmenkov
a7321199a4 Enable branch-level code coverage
Helps to check the test coverage for various complex conditions in the
decompression code.
2023-05-15 18:33:22 +02:00
Alexander Kuzmenkov
62b6bc5f7f Add deltadelta int8 fuzzing corpus 2023-05-15 18:33:22 +02:00
Alexander Kuzmenkov
451b982a74 Add gorilla-float8 fuzzing corpus 2023-05-15 18:33:22 +02:00
Mats Kindahl
3947c01124 Support sending telemetry event reports
Add a `_timescaledb_catalog.telemetry_event` table containing
events that should be sent out with telemetry reports. The table will
be truncated after the report has been generated.
2023-05-12 16:03:05 +02:00
Bharathy
2d71a5bca9 Fix leak during concurrent UPDATE/DELETE
When updating and deleting the same tuple while both transactions are
running at the same time, we end up with a reference leak. This happens
because one of the queries in a transaction fails and we take the error
path, but fail to close the table.

This patch fixes the problem by closing the required tables.

Fixes 
2023-05-12 11:21:10 +05:30
Mats Kindahl
656daf45f6 Fix subtransaction resource owner
When executing a subtransaction using `BeginInternalSubTransaction` the
memory context switches from the current context to
`CurTransactionContext` and when the transaction is aborted or
committed using `ReleaseCurrentSubTransaction` or
`RollbackAndReleaseCurrentSubTransaction` respectively, it will not
restore to the previous memory context or resource owner but rather use
`TopTransactionContext`. Because of this, both the memory context and
the resource owner will be wrong when executing
`calculate_next_start_on_failure`, which causes `run_job` to generate
an error when used with the telemetry job.

This commit fixes this by saving both the resource owner and the memory
context before starting the internal subtransaction and restoring it
after finishing the internal subtransaction.

Since `ts_bgw_job_run_and_set_next_start` was reading the wrong result
from the telemetry job, this commit fixes that as well. Note that
`ts_bgw_job_run_and_set_next_start` is only used when running the
telemetry job, so it does not cause issues for other jobs.
2023-05-11 14:11:29 +02:00
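A minimal sketch of the save/restore pattern the commit describes (the same pattern PL/pgSQL uses around exception blocks); it is illustrative, not the literal TimescaleDB code.

```c
#include "postgres.h"
#include "access/xact.h"
#include "utils/memutils.h"
#include "utils/resowner.h"

/*
 * Run a callback inside an internal subtransaction, restoring the caller's
 * memory context and resource owner afterwards.
 */
static void
run_in_subtransaction(void (*callback) (void))
{
	MemoryContext saved_mcxt = CurrentMemoryContext;
	ResourceOwner saved_owner = CurrentResourceOwner;

	BeginInternalSubTransaction(NULL);

	PG_TRY();
	{
		callback();
		ReleaseCurrentSubTransaction();
	}
	PG_CATCH();
	{
		RollbackAndReleaseCurrentSubTransaction();
		/* the release above leaves us in TopTransactionContext */
		MemoryContextSwitchTo(saved_mcxt);
		CurrentResourceOwner = saved_owner;
		PG_RE_THROW();
	}
	PG_END_TRY();

	/* same restoration on the success path */
	MemoryContextSwitchTo(saved_mcxt);
	CurrentResourceOwner = saved_owner;
}
```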
Erik Nordström
abb6762450 Reduce memory usage for distributed analyze
Use a per-tuple memory context when receiving chunk statistics from
data nodes. Otherwise memory usage is proportional to the number of
chunks and columns.
2023-05-10 15:26:27 +02:00
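A sketch of the per-tuple context pattern mentioned above; process_stats_row() is a hypothetical stand-in for the real per-row work.

```c
#include "postgres.h"
#include "utils/memutils.h"

extern void process_stats_row(int row);	/* hypothetical per-row work */

/*
 * Allocations made while processing one row live in a short-lived context
 * that is reset before the next row, so memory use stays flat no matter how
 * many chunks and columns are processed.
 */
static void
process_all_rows(int nrows)
{
	MemoryContext per_tuple_ctx = AllocSetContextCreate(CurrentMemoryContext,
														 "per-tuple processing",
														 ALLOCSET_DEFAULT_SIZES);

	for (int i = 0; i < nrows; i++)
	{
		MemoryContext oldcontext = MemoryContextSwitchTo(per_tuple_ctx);

		process_stats_row(i);

		MemoryContextSwitchTo(oldcontext);
		MemoryContextReset(per_tuple_ctx);	/* free everything from this row */
	}

	MemoryContextDelete(per_tuple_ctx);
}
```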
Erik Nordström
96d2acea30 Cleanup PGresults on transaction end
Fix a regression due to a previous change in c571d54c. That change
unintentionally removed the cleanup of PGresults at the end of
transactions. Add back this functionality in order to reduce memory
usage.
2023-05-10 15:26:27 +02:00
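A hedged sketch of one way to arrange such cleanup (an assumption about its shape, not the actual change): remember each PGresult received from a data node and PQclear() it from a transaction callback.

```c
#include "postgres.h"
#include "access/xact.h"
#include "nodes/pg_list.h"
#include "utils/memutils.h"
#include <libpq-fe.h>

static List *remembered_results = NIL;

/* Keep the result so it can be cleared when the transaction ends. */
static void
remember_result(PGresult *res)
{
	MemoryContext oldcontext = MemoryContextSwitchTo(TopMemoryContext);

	remembered_results = lappend(remembered_results, res);
	MemoryContextSwitchTo(oldcontext);
}

/* Clear all remembered PGresults at commit or abort. */
static void
result_cleanup_xact_callback(XactEvent event, void *arg)
{
	if (event == XACT_EVENT_COMMIT || event == XACT_EVENT_ABORT)
	{
		ListCell   *lc;

		foreach(lc, remembered_results)
			PQclear((PGresult *) lfirst(lc));

		list_free(remembered_results);
		remembered_results = NIL;
	}
}

static void
init_result_cleanup(void)
{
	RegisterXactCallback(result_cleanup_xact_callback, NULL);
}
```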
Fabrízio de Royes Mello
f250eaa631 Remove FK from continuous_agg_migrate_plan
During `cagg_migrate` execution, if the user sets the `drop_old`
parameter to `true`, the routine drops the old Continuous Aggregate,
leading to an inconsistent state: the catalog code doesn't handle this
table as a normal catalog table, so its records are not removed when a
Continuous Aggregate is dropped. The same problem happens if you
manually drop the old Continuous Aggregate after the migration.

Fixed it by removing the useless Foreign Key and adding a new column
named `user_view_definition` to the main plan table to store the
original user view definition for troubleshooting purposes.

Fixed 
2023-05-10 09:40:03 -03:00
Ante Kresic
ab22478992 Fix DML decompression issues with bitmap heap scan
Bitmap heap scans are specific in that they store scan state
during node initialization. This means they would not pick up on
any data that might have been decompressed during a DML command
from the compressed chunk. To avoid this, we update the snapshot
on the node scan state and issue a rescan to update the internal state.
2023-05-10 12:54:20 +02:00
shhnwz
bd36afe2f3 Fixed Coverity Scan Warnings
An unused value was reported during a Coverity scan.
link: https://scan4.scan.coverity.com/reports.htm#v56957/p12995
2023-05-10 14:23:56 +05:30
Ante Kresic
8e69a9989f Ignore multinode tests from PR CI runs
dist_move_chunk, dist_param, dist_insert and remote_txn
create a lot of friction due to their flakiness.
Ignoring them until we can fix them.
2023-05-08 09:13:34 +02:00
Fabrízio de Royes Mello
3dc6824eb5 Add GUC to enable/disable DML decompression 2023-05-05 14:59:13 -03:00
Ante Kresic
6782beb150 Fix index scan handling in DML decompression
We need to use the correct qualifiers for index scans since the
generic scan qualifiers are not populated in this case.
2023-05-05 13:16:57 +02:00
Dmitry Simonenko
8ca17e704c Fix ALTER TABLE SET with normal tables
Running ALTER TABLE SET with multiple SET clauses on a regular PostgreSQL table
produces an irrelevant error when the timescaledb extension is installed.

Fix 
2023-05-04 16:32:25 +03:00
Sven Klemm
9259311275 Fix JOIN handling in UPDATE/DELETE on compressed chunks
When JOINs were present during UPDATE/DELETE on compressed chunks,
the code would decompress other hypertables that were not the target
of the UPDATE/DELETE operation and, in the case of self-JOINs,
potentially decompress chunks that did not need to be decompressed.
2023-05-04 13:52:14 +02:00
Bharathy
769f9fe609 Fix segfault when deleting from compressed chunk
During UPDATE/DELETE on compressed hypertables, we iterate over the
plan tree to collect all scan nodes. Each scan node can have filter
conditions.

Prior to this patch we collected only the first filter condition and
applied it to the first chunk, which may be wrong. With this patch,
whenever we encounter a target scan node we immediately process its
chunks.

Fixes 
2023-05-03 23:19:26 +05:30
Fabrízio de Royes Mello
90f585ed7f Fix CoverityScan deference after null check
There is no need to check `direct_query->jointree` for NULL because
we don't allow queries without a FROM clause in a Continuous Aggregate
definition.

CoverityScan link:
https://scan4.scan.coverity.com/reports.htm#v54116/p12995/fileInstanceId=131745632&defectInstanceId=14569562&mergedDefectId=384045
2023-05-02 10:44:46 -03:00
Konstantina Skovola
6e65172cd8 Fix tablespace for compressed hypertable and corresponding toast
If a hypertable uses a non-default tablespace, the compressed hypertable
and its corresponding toast table and index are still created in the
default tablespace.
This PR fixes this unexpected behavior and creates the compressed
hypertable and its toast table and index in the same tablespace as
the hypertable.

Fixes 
2023-05-02 15:28:00 +03:00
Jan Nidzwetzki
df32ad4b79 Optimize compressed chunk resorting
This patch adds an optimization to the DecompressChunk node. If the
query 'order by' and the compression 'order by' are compatible (the
query 'order by' is equal to, or a prefix of, the compression 'order
by'), the compressed batches of the segments are decompressed in
parallel and merged using a binary heap. This preserves the ordering,
so sorting the result can be avoided. LIMIT queries in particular
benefit from this optimization because only the first tuples of some
batches have to be decompressed. Previously, all segments were
completely decompressed and sorted.

Fixes: 

Co-authored-by: Sotiris Stamokostas <sotiris@timescale.com>
2023-05-02 10:46:15 +02:00
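A sketch of the k-way merge described above using PostgreSQL's binaryheap; Batch and the helper functions are hypothetical stand-ins for the DecompressChunk batch structures.

```c
#include "postgres.h"
#include "lib/binaryheap.h"

typedef struct Batch Batch;		/* hypothetical: one decompressed, sorted batch */

extern bool batch_next_tuple(Batch *batch);		/* hypothetical: advance, false at end */
extern int	compare_batches(Datum a, Datum b, void *arg);	/* hypothetical comparator
														 	 * (binaryheap keeps the
														 	 * "largest" element on top) */
extern void emit_current_tuple(Batch *batch);	/* hypothetical: output current tuple */

/*
 * Merge already-sorted batches while preserving order: the heap holds one
 * current tuple per batch, so only the winning batch has to be advanced
 * (and further decompressed) at each step. LIMIT queries can stop early.
 */
static void
merge_sorted_batches(Batch **batches, int nbatches)
{
	binaryheap *heap = binaryheap_allocate(nbatches, compare_batches, NULL);

	/* Prime the heap with the first tuple of every non-empty batch. */
	for (int i = 0; i < nbatches; i++)
	{
		if (batch_next_tuple(batches[i]))
			binaryheap_add_unordered(heap, PointerGetDatum(batches[i]));
	}
	binaryheap_build(heap);

	while (!binaryheap_empty(heap))
	{
		Batch	   *top = (Batch *) DatumGetPointer(binaryheap_first(heap));

		emit_current_tuple(top);

		/* Advance the winning batch; drop it from the heap when exhausted. */
		if (batch_next_tuple(top))
			binaryheap_replace_first(heap, PointerGetDatum(top));
		else
			binaryheap_remove_first(heap);
	}

	binaryheap_free(heap);
}
```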
Fabrízio de Royes Mello
cc9c3b3431 Post-release 2.10.3
Adjust the upgrade/downgrade scripts and add the tests.
2023-04-28 10:05:11 -03:00
Nikhil Sontakke
ed8ca318c0 Quote username identifier appropriately
Need to use quote_ident() on the user roles. Otherwise the
extension scripts will fail.

Co-authored-by: Mats Kindahl <mats@timescale.com>
2023-04-28 16:53:43 +05:30
Bharathy
2ce4bbc432 Enable continuous_aggs tests on all PG version. 2023-04-28 07:40:59 +05:30
Zoltan Haindrich
1d092560f4 Fix on-insert decompression after schema changes
On compressed hypertables 3 schema levels are in use simultaneously:
 * main - hypertable level
 * chunk - inheritance level
 * compressed chunk

In the build_scankeys method all of them appear, as the slot has its
fields laid out as a row of the main hypertable.

Accessing the slot by the attribute numbers of the chunks may lead to
indexing mismatches if there are differences between the schemas.

Fixes: 
2023-04-27 16:33:36 +02:00
Mats Kindahl
be28794384 Enable run_job() for telemetry job
Since the telemetry job has a special code path so that it can be used
both from Apache code and from TSL code, trying to execute the
telemetry job with run_job() will fail.

This code will allow run_job() to be used with the telemetry job to
trigger a send of telemetry data. You have to belong to the group that
owns the telemetry job (or be the owner of the telemetry job) to be
able to use it.

Closes 
2023-04-27 16:00:03 +02:00
Fabrízio de Royes Mello
d5a286174d Changelog for 2.10.3 2023-04-26 20:11:57 -03:00
Fabrízio de Royes Mello
8a95d1b9ee Add missing matrix in ignored 32bits regression workflow 2023-04-26 19:47:47 -03:00
Fabrízio de Royes Mello
e140cc702c Add missing matrix in ignored regression workflow 2023-04-26 17:43:40 -03:00
Fabrízio de Royes Mello
002b6e879a Fix CI ignored regression workflows
We defined some paths to skip the regression test workflows when, for
example, only the CHANGELOG.md and similar files change. But in fact it
was not happening, because we didn't define a proper name in the fake
regression workflow that tells the CI that the required status passed.

Fixed it by defining a proper regression name, as we do for the regular
regression workflow.
2023-04-26 15:58:33 -03:00
Fabrízio de Royes Mello
3c8d7cef77 Fix cagg_repair for the old CAgg format
In commit 4a6650d1 we fixed cagg_repair for broken Continuous
Aggregates with JOINs, but we accidentally removed the code path for
running against the old format (finalized=false), leaving dead code
pointed out by CoverityScan:
https://scan4.scan.coverity.com/reports.htm#v54116/p12995/fileInstanceId=131706317&defectInstanceId=14569420&mergedDefectId=384044

Fixed it by restoring the old code path so cagg_repair runs for
Continuous Aggregates in the old format (finalized=false).
2023-04-26 14:29:58 -03:00
Mats Kindahl
d3730a4f6a Add permission checks to run_job()
There were no permission checks when calling run_job(), so it was
possible to execute any job regardless of who owned it. This commit
adds such checks.
2023-04-26 11:56:56 +02:00
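A minimal sketch of the kind of ownership check described above (an assumption about its shape, not the actual code): the caller must be the job owner or a member of the owning role.

```c
#include "postgres.h"
#include "miscadmin.h"
#include "utils/acl.h"

static void
check_job_run_permission(Oid job_owner, int32 job_id)
{
	/* allow only the owner, or members of the owning role */
	if (!has_privs_of_role(GetUserId(), job_owner))
		ereport(ERROR,
				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
				 errmsg("insufficient permissions to run job %d", job_id)));
}
```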
Fabrízio de Royes Mello
4a6650d170 Fix broken CAgg with JOIN repair function
The internal `cagg_rebuild_view_definition` function was trying to cast
a pointer to `RangeTblRef` but it actually is a `RangeTblEntry`.

Fixed it by using the already existing `direct_query` data struct to
check if there are JOINs in the CAgg to be repaired.
2023-04-25 15:20:48 -03:00
Ante Kresic
910663d0be Reduce decompression during UPDATE/DELETE
When updating or deleting tuples from a compressed chunk, we first
need to decompress the matching tuples then proceed with the operation.
This optimization reduces the amount of data decompressed by using
compressed metadata to decompress only the affected segments.
2023-04-25 15:49:59 +02:00
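A simple sketch of the metadata filter idea (assuming batches carry per-column min/max metadata, as described above); names are illustrative.

```c
#include "postgres.h"

/*
 * A compressed batch that stores the min/max of a column can be skipped
 * entirely when an equality predicate's value falls outside that range;
 * only batches whose range covers the value need to be decompressed.
 */
static bool
batch_may_contain(int64 batch_min, int64 batch_max, int64 predicate_value)
{
	return predicate_value >= batch_min && predicate_value <= batch_max;
}
```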
Lakshmi Narayanan Sreethar
3bf58dac02 Update windows package link in github action 2023-04-25 11:29:37 +05:30
Bharathy
44dc042bb3 Fixed transparent decompress chunk test which seems to be flaky. 2023-04-24 19:58:02 +05:30
Jan Nidzwetzki
2f194e6109 Make compression metadata column names reusable
Move the creation of metadata column names for min/max values to
separate functions to make the code reusable.
2023-04-24 13:42:00 +02:00
Jan Nidzwetzki
c54d8bd946 Add missing order by to compression_ddl tests
Some queries in compression_ddl had no order by. Therefore the output
order was not defined, which led to flaky tests.
2023-04-21 15:54:25 +02:00
Ante Kresic
583c36e91e Refactor compression code to reduce duplication 2023-04-20 22:27:34 +02:00
Sven Klemm
744b44cc52 Fix parameterization in DecompressChunk path generation
All children of an append path are required to have the same parameterization
so we have to reparameterize when the selected path does not have the right
parameterization.
2023-04-20 17:20:04 +02:00
Ante Kresic
23b3f8d7a6 Block unique idx creation on compressed hypertable
This block was removed by accident. In order to support this, we would
need to ensure uniqueness in the compressed data, which is something we
should do in the future and which would allow removing this block again.
2023-04-20 16:22:03 +02:00
Alexander Kuzmenkov
12f3131f9e Post-release 2.10.2
Adjust the upgrade/downgrade scripts and add the tests.
2023-04-20 17:55:18 +04:00
Zoltan Haindrich
a0df8c8e6d Fix on-insert decompression for unique constraints
Inserting multiple rows into a compressed chunk could have bypassed
the constraint check when the table had segment_by columns.

Decompression is narrowed to only consider candidates matching the
actual segment_by value.
Because of caching, decompression was skipped for follow-up rows of
the same chunk.
the same Chunk.

Fixes 
2023-04-20 13:47:47 +02:00
Ante Kresic
a49fdbcffb Reduce decompression during constraint checking
When inserting into a compressed chunk with constraints present,
we need to decompress relevant tuples in order to do speculative
inserting. Usually we used segment by column values to limit the
amount of compressed segments to decompress. This change expands
on that by also using segment metadata to further filter
compressed rows that need to be decompressed.
2023-04-20 12:17:12 +02:00