5121 Commits

Author SHA1 Message Date
Alexander Kuzmenkov
98780f7d59
Release 2.17.2 -- main branch (#7421) 2024-11-07 12:30:15 +00:00
Alexander Kuzmenkov
0b5017ed03
run pr review workflow when a review is changed (#7424) 2024-11-06 17:11:02 +00:00
Alexander Kuzmenkov
6803dd87c0
Use America/Los_Angeles in tests instead of PST8PDT (#7416)
Follow the PG upstream changes in light of the upcoming timezone
updates. This should fix our regression tests on latest PG snapshots.

2b94ee58bf
2024-11-06 16:55:26 +00:00
Pallavi Sontakke
fc2805ecdf
Ignore failure in upstream test jsonb_jsonpath
Until PG 17.1 is released.

Currently shows a diff in PG 17.0 regression run as:

```
select jsonb_path_query_tz('"12:34:56"', '$.time_tz().string()');
  jsonb_path_query_tz
 ---------------------
- "12:34:56-07:00"
+ "12:34:56-08:00"
```
2024-11-06 18:17:42 +05:30
Keyur Panchal
7c78fad964
Add GUC for segmentwise recompression (#7413)
Add GUC option to enable or disable segmentwise recompression. If
disabled, then a full recompression is done instead when recompression
is attempted through `compress_chunk`. If `recompress_chunk_segmentwise`
is used when the GUC is disabled, then an error is thrown.

Closes #7381.
2024-11-04 07:24:52 -07:00
Mats Kindahl
76e3b27b69 Fix false positive in Coverity
Coverity assumes that casting a void pointer to anything else is
tainted, which gives a warning when casting the `rd_amcache` field to
anything else. Since this is what the field is used for, we mark the
cast from `void*` to `HypercoreInfo*` as a false positive.
2024-10-29 08:53:11 +01:00
Sven Klemm
647e558871 Reenable autovacuum in CI
We want to run autovacuum in CI to find bugs in our own code when a
parallel vacuum is happening.
2024-10-28 12:54:55 +01:00
Sven Klemm
a1839cc0eb Prepend postgres compiler flags to cmake flags
Prepend the postgres compiler flags instead of appending them, so that
flags used for compilation can be overridden with CFLAGS.
2024-10-28 10:29:14 +01:00
Fabrízio de Royes Mello
59871c0da6 Disable autovacuum on TSL regression tests
We disable autovacuum on Apache regression tests to avoid flaky results,
and from time to time we face the same issues on TSL regression tests as
well.

https://github.com/timescale/timescaledb/actions/runs/11470614522/job/31920227424#step:12:43
2024-10-25 13:15:32 -03:00
Ante Kresic
99d940ab18 Fix using OIDs in bitmapset
Due to a typo, we were misusing OIDs by putting them
into a bitmapset, which caused plan-time errors. Switch
to using the RTE index instead and add a test with an OID
that is greater than INT32_MAX.
2024-10-25 16:51:11 +02:00
Fabrízio de Royes Mello
57742db4e1 Update runner script to avoid flaky tests
https://github.com/timescale/timescaledb/actions/runs/11504450173/job/32024163052?pr=7386#step:13:17
2024-10-25 09:47:45 -03:00
Keyur Panchal
650d331838
Fix handling of oldtuple to match PG17 upstream (#7340)
A [previous
commit](d7513b26f6)
added support for MERGE ... RETURNING ....
While working on the Coverity fix, I realized that the code handling
oldtuple was missing; that's what this fixes.

Disable-check: force-changelog-file
2024-10-24 22:51:13 +00:00
Fabrízio de Royes Mello
04f0b47ca7 Force auto backport workflow
Currently, if a PR that is eligible for backporting touches a workflow
file, the automatic backport fails.

Now, adding the label `force-auto-backport-workflow` to a PR that
touches workflow files lets the automatic backport proceed. This is
useful because sometimes we need to fix a workflow and backport it to
the current release branch, or we are adding support for a new Postgres
major version that requires workflow changes which should be backported
to the release branch in case we create patch releases.
2024-10-24 15:30:59 -03:00
Alexander Kuzmenkov
946e9b74da
Fix use-after-free in per-batch vectorized grouping policy (#7388)
The grouping column values are references into the compressed batch, so
we can reset it only after we have returned the partial aggregation
result.
2024-10-24 17:21:28 +00:00
Fabrízio de Royes Mello
c243f1ac02 Add .gdb_history to gitignore 2024-10-23 11:36:06 -03:00
Sven Klemm
64c36719fb Remove obsolete job
policy_job_error_retention was removed in 2.15.0 but we did not
get rid of the job that called it back then. This patch removes
the defunct job definition calling that function.
2024-10-22 08:39:40 +02:00
Mats Kindahl
f754918e89 Add assert for fetch_att and store_att_byval
These two functions throw an error if you pass in a strange combination
of typbyval and typlen, so we assert on bad values to get a backtrace.
2024-10-22 07:13:52 +02:00
Keyur Panchal
3834d81b46
Add Ensure to fix coverity defect (#7338)
Coverity detected a possible null pointer dereference. It doesn't seem
like this can be triggered, so added an `Ensure` clause.

Disable-check: force-changelog-file
2024-10-21 20:31:47 +00:00
Fabrízio de Royes Mello
3c707bf28a Release 2.17.1 on main
This release contains performance improvements and bug fixes since
the 2.17.0 release. We recommend that you upgrade at the next
available opportunity.

**Features**
* #7360 Add chunk skipping GUC

**Bugfixes**
* #7335 Change log level used in compression
* #7342 Fix collation for in-memory tuple filtering

**Thanks**
* @gmilamjr for reporting an issue with the log level of compression messages
* @hackbnw for reporting an issue with collation during tuple filtering
2024-10-21 15:16:05 -03:00
Erik Nordström
d83383615a Fix flaky Hypercore join test
The join test could sometimes pick a seqscan+sort instead of an
indexscan when doing a MergeAppend+MergeJoin. Disabling seqscan should
make it deterministic.
2024-10-20 22:29:21 +02:00
Erik Nordström
132d14fe7d Fix flaky Hypercore index test
Having multiple indexes that include the same prefix of columns caused
the planner to sometimes pick a different index for one of the queries,
which led to different test output. Temporarily remove the alternative
index to make the test predictable.
2024-10-20 22:29:21 +02:00
Sven Klemm
23b736e449 Fix flaky continuous_aggs test
Add missing ORDER BY clause to continuous_aggs test to make output
deterministic.
2024-10-19 14:03:52 +02:00
Sven Klemm
2e3cf30cbd Fix flaky rowsecurity test
Check sql state code instead of error message in row security foreign
key check.
2024-10-19 13:47:43 +02:00
Sven Klemm
5945e01456 Fix approval count workflow
When queried from within the action context, .authorAssociation is not
filled in as MEMBER but as CONTRIBUTOR, so adjust the query to take that
into account.
2024-10-19 11:34:40 +02:00
Sven Klemm
694fcf428e Remove obsolete multinode comment about chunk status 2024-10-19 11:34:40 +02:00
Sven Klemm
a732b19084 Pushdown ORDER BY for realtime caggs
Previously, ordered queries on realtime caggs would always lead to a
full table scan because the query plan would have a sort with the limit
on top. With this patch, the ORDER BY can be pushed down so the query
can benefit from the ordered-append optimization and does not require a
full table scan.

Since the internal structure is different on PG 14 and 15, this
optimization will only be available on PG 16 and 17.

Fixes #4861
2024-10-18 22:06:09 +02:00
Fabrízio de Royes Mello
aa9bc607ce Use proper INVALID_{HYPERTABLE|CHUNK}_ID macros 2024-10-18 14:25:11 -03:00
Sven Klemm
de6b478208 Add workflow to check number of approvals
All PRs except trivial ones should require 2 approvals. Since this is a
global setting, we cannot allow trivial PRs to have only 1 approval from
the GitHub configuration alone. So we set the required approvals in
GitHub to 1 and make this check required, which will enforce 2 approvals
unless it is overwritten or only CI files are touched.
2024-10-18 19:01:05 +02:00
Ante Kresic
8767565e3f Add chunk skipping GUC
Add the ability to enable/disable new chunk skipping functionality
completely.
2024-10-18 17:47:22 +02:00
Sven Klemm
b65083ef69 Pin setup-wsl version to 3.1.1
Looks like version 3.1.2 does not work, so pin to the previous version
instead of the generic v3.
2024-10-18 12:45:51 +02:00
Sven Klemm
4316f2c203 Remove multinode ssl tests 2024-10-16 18:03:18 +02:00
Fabrízio de Royes Mello
c359c16c74 PG17: Enable Windows tests on CI
We're forcing the PG17 installation since the package is still under
moderation by the Chocolatey Community:

https://community.chocolatey.org/packages/postgresql17
2024-10-16 10:46:03 -03:00
Erik Nordström
ed19e29985 Add changelog entry for Hypercore TAM
The Hypercore table access method (TAM) wraps TimescaleDB's columnar
compression engine in a table access method. The TAM API enables
several features that were previously not available on compressed
data, including (but not limited to):

- Ability to build indexes on compressed data (btree, hash).

- Proper statistics, including column stats via ANALYZE

- Better support for vacuum and vacuum full

- Skip-scans on top of compressed data

- Better support for DML (copy/insert/update) directly on compressed
  chunks

- Ability to dynamically create constraints (check, unique, etc.)

- Better lock handling including via CURSORs
2024-10-16 13:13:34 +02:00
Mats Kindahl
e0a7a6f6e1 Hyperstore renamed to hypercore
This changes the names of all symbols, comments, files, and functions
to use "hypercore" rather than "hyperstore".
2024-10-16 13:13:34 +02:00
Mats Kindahl
406901d838 Rename files using "hyperstore" to use "hypercore"
Files and directories using "hyperstore" as part of the name are moved
to the new name using "hypercore".
2024-10-16 13:13:34 +02:00
Mats Kindahl
5798b9f534 Add Hypercore analyze support for PG17
PG17 changed the TAM API to use the new `ReadStream` API instead of the
previous block-oriented API. This commit ports the existing
block-oriented solution to the new `ReadStream` API by setting up
separate read streams for the two relations and using the provided read
stream as a block sampler, fetching the appropriate block from either
the non-compressed or compressed relation.
2024-10-16 13:13:34 +02:00
Erik Nordström
eb2ee0bc5c Refactor hyperstore handling in compress_chunk()
Break out any hyperstore handling in `compress_chunk()` into separate
functions. This makes the code more readable.
2024-10-16 13:13:34 +02:00
Mats Kindahl
10c78f1137 Remove memory context switch macro
The macro `TS_WITH_MEMORY_CONTEXT` was used to switch memory context
for a block of code and restore it afterwards. This is checked using
Coccinelle rules instead and the macro is removed.
2024-10-16 13:13:34 +02:00
Mats Kindahl
2ab527e9e3 Fix TRUNCATE of hyperstore tables
Truncate the compressed relation when truncating a hyperstore relation.
This can happen in two situations: either a non-transactional context
or in a transactional context.

For the transactional context, `relation_set_new_filelocator`
will be called to replace the file locator. If this happens, we need to
replace the file locator for the compressed relation as well, if there
is one.

For the non-transactional case, `relation_nontransactional_truncate`
will be called, and we will just forward the call to the compressed
relation as well, if it exists.
2024-10-16 13:13:34 +02:00
Mats Kindahl
29cb359d46 Optimize check for segmentby-only index scans
If an index scan is on segment-by columns only, the index is optimized
to only contain references to complete segments. However, deciding if a
scan is only on segment-by columns requires checking all columns used
in the index scan. Since this does not change during a scan but would
otherwise need to be checked for each tuple, we cache this information
for the duration of the scan.
2024-10-16 13:13:34 +02:00
Erik Nordström
ee0a3afee1 Fix Hyperstore index builds with null segments
The index build function didn't properly handle the case when all
rolled-up values in a compressed column were null, thus having a
null-segment. The code has been slightly refactored to handle this
case.

A test is also added for this case.
2024-10-16 13:13:34 +02:00
Erik Nordström
b5b73dc3b6 Fix handling of dropped columns in Arrow slot
Dropped columns need to be included in a tuple table slot's values
array after having called slot_getsomeattrs(). The arrow slot didn't
do this and instead skipped dropped columns, which led to assertion
errors in some cases.
2024-10-16 13:13:34 +02:00
Erik Nordström
201cfe3b94 Fix issue when recompressing Hyperstore
When recompressing a Hyperstore after changing compression settings,
the compressed chunk could be created twice, leading to a conflict
error when inserting two compression chunk size rows.

The reason this happened was that Hyperstore creates a compressed
chunk on-demand if it doesn't exist when the relation is opened. And,
the recompression code had not yet associated the compressed chunk
with the main chunk when compressing the data.

Fix this by associating the compressed chunk with the main chunk
before opening the main chunk relation to compress the data.
2024-10-16 13:13:34 +02:00
Erik Nordström
ea31d4f5c2 Refactor setting attributes in Arrow getsomeattrs()
When populating an Arrow slot's tts_values array with values in the
getsomeattrs() function, the function set_attr_value() is called. This
function requires passing in an ArrowArray which is acquired via a
compression cache lookup. However, that lookup is not necessary for
segmentby columns (which aren't compressed) and, to avoid it, a
special fast path was created for segmentby columns outside
set_attr_value(). That, unfortunately, created some code duplication.

This change moves the cache lookup into set_attr_value() instead,
where it can be performed only for the columns that need it. This
leads to cleaner code and less code duplication.
2024-10-16 13:13:34 +02:00
Mats Kindahl
e73d0ceb04 Always copy into non-compressed slot of arrow slot
When copying from a non-arrow slot to an arrow slot, we should always
copy the data into the non-compressed slot and never to the compressed
slot.

The previous check for a matching number of attributes fails when you
drop a column from the hyperstore.
2024-10-16 13:13:34 +02:00
Mats Kindahl
86fb747202 Disable hash agg for hypertable_index_btree
We disable hash aggregation in favor of group aggregation to get a
stable test. It was flaky because it could pick either group aggregate
or hash
aggregate.
2024-10-16 13:13:34 +02:00
Mats Kindahl
1a9d319d4b Fix issue when copying into arrow slot
If you set the table access method for a hypertable all new chunks will
use `ArrowTupleTableSlot` but the copy code assumes that the parent
table has a virtual tuple table slot. This causes a crash when copying
a heap tuple since the values are stored in the "main" slot and not in
either of the child tuple table slots.

Fix this issue by storing the values in the uncompressed slot when it
is empty.
2024-10-16 13:13:34 +02:00
Mats Kindahl
d28a9fc892 Raise error when using Hyperstore with plain table
If an attempt is made to use the hyperstore table access method with a
plain table during creation, throw an error instead of allowing the
table access method to be used.

The table access method currently only supports hypertables and expects
chunks to exist for the table.
2024-10-16 13:13:34 +02:00
Erik Nordström
8f311b7844 Do simple projection in columnar scan
When a columnar scan needs to return a subset of the columns in a scan
relation, it is possible to do a "simple" projection that just copies
the column values to the projection result slot. This avoids a more
costly projection done by PostgreSQL.
2024-10-16 13:13:34 +02:00
Erik Nordström
ff940170cd Always set tts_tableOid in Arrow slot
The tableOid was not set in an Arrow slot when hyperstore was
delivering the next arrow value from the same compressed child slot,
assuming that the tableOid would remain the same since delivering the
previous value.

This is not always the case, however, as the same slot can be used in
a CREATE TABLE AS or similar statement that inserts the data into
another table. In that case, the insert function of that table will
change the slot's tableOid.

To fix this, hyperstore will always set the tableOid on the slot when
delivering new values.
2024-10-16 13:13:34 +02:00