Before this change, during downgrade script generation we would
always fetch the pre-update script from the previous version and
prepend it to the generated scripts. This limited what could be
referenced in the pre-update script and also what was possible
within the downgrade itself.
This patch splits the pre-update script into a generic part that
is used for both update and downgrade and an update-specific part. We
could later also add a downgrade-specific part, but it is currently not
needed. This change is necessary because we reference a timescaledb
view in the pre-update script, which prevents changes to that view.
Remove the code used by multinode to handle remote connections.
This patch completely removes tsl/src/remote and any remaining
distributed hypertable checks.
The update test fails on PG13 on the statement
`\d+ cagg_joins_upgrade_test_with_realtime_using` with
the error message `ERROR: invalid memory alloc request size 13877216128`.
To unblock CI and allow other PRs to be merged, we temporarily
skip the offending query on PG13.
The latest OpenSSL version, 3.2.0, has known issues with Postgres, and
macOS CI runs were failing because of this. We now use macos-13 instead
of the earlier macos-11, which seems to resolve the OpenSSL issue.
With PG16, group pathkeys can include columns which are
not in the GROUP BY clause when dealing with ordered
aggregates. This means we have to exclude those columns
when checking and creating the gapfill custom scan subpath.
Fixes #6396
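As a rough illustration of the idea (not the exact code from the patch; `num_groupby_pathkeys` is assumed to be the PG16 PlannerInfo field counting the pathkeys that actually come from the GROUP BY clause):

```c
#include "postgres.h"
#include "nodes/pathnodes.h"
#include "nodes/pg_list.h"

/*
 * Illustrative sketch: keep only the group pathkeys that correspond to
 * GROUP BY columns, dropping the trailing pathkeys that PG16 may append
 * for ordered aggregates.
 */
static List *
gapfill_groupby_pathkeys(PlannerInfo *root)
{
#if PG_VERSION_NUM >= 160000
	return list_truncate(list_copy(root->group_pathkeys), root->num_groupby_pathkeys);
#else
	return root->group_pathkeys;
#endif
}
```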
Direct access to the `.data` member of `NameData` structures is
discouraged and `NameStr` should be used instead.
This also fixes one instance that was missed in #5336.
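For example (illustrative, not a specific call site from this patch):

```c
#include "postgres.h"
#include "catalog/pg_class.h"

/* Prefer the NameStr() macro over touching the .data member directly. */
static const char *
relation_name(Form_pg_class form)
{
	/* discouraged: return form->relname.data; */
	return NameStr(form->relname);
}
```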
This patch drops the catalog table _timescaledb_catalog.hypertable_compression
and stores those settings in _timescaledb_catalog.compression_settings instead.
The storage format changes: the new table has one entry per relation
instead of one entry per column and has no dependency on hypertables.
All other aspects of compression remain the same. This refactoring is done
to enable per-chunk compression settings in a follow-up patch.
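Sketched in C, the difference in shape looks roughly like this (field names are illustrative, not the exact catalog definition):

```c
#include "postgres.h"

/*
 * Illustrative sketch only: one settings record per relation, instead of
 * the old hypertable_compression layout of one row per column. The actual
 * column names and types of compression_settings may differ.
 */
typedef struct CompressionSettingsSketch
{
	Oid			relid;				/* relation these settings apply to */
	int			num_segmentby;
	char	  **segmentby;			/* segment_by column names */
	int			num_orderby;
	char	  **orderby;			/* order_by column names */
	bool	   *orderby_desc;		/* per order_by column: descending? */
	bool	   *orderby_nullsfirst; /* per order_by column: NULLS FIRST? */
} CompressionSettingsSketch;
```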
Refactor the compression code to use only the table scan API when
scanning relations, instead of a mix of the table and heap scan
APIs. The table scan API is a higher-level API and is recommended
because it works for any type of relation and uses table slots directly,
which means that in some cases a full heap tuple need not be materialized.
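A minimal sketch of the pattern, assuming a generic scan of all tuples in a relation (not code from the patch):

```c
#include "postgres.h"
#include "access/tableam.h"
#include "executor/tuptable.h"
#include "utils/rel.h"
#include "utils/snapmgr.h"

/*
 * Illustrative: iterate over a relation with the table scan API. The slot
 * can reference data directly, so a full heap tuple is not always
 * materialized.
 */
static void
scan_relation(Relation rel)
{
	TupleTableSlot *slot = table_slot_create(rel, NULL);
	TableScanDesc scan = table_beginscan(rel, GetTransactionSnapshot(), 0, NULL);

	while (table_scan_getnextslot(scan, ForwardScanDirection, slot))
	{
		/* process the current tuple through the slot */
	}

	table_endscan(scan);
	ExecDropSingleTupleTableSlot(slot);
}
```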
The previous multinode PR only removed the regression test,
isolation test, and TAP test files, but not the C files. This patch
also removes the deparse test, which was missed in the previous
PR.
This PR adds the ability for individual PRs to override the
loader-change check. Not all changes in the loader directories
require a new loader version, so the check can sometimes produce
false positives. It also adds a note to bump the loader version
for loader changes.
We used a mixture of include guards and `#pragma once` in our header
files. This patch changes our headers to always use `#pragma once`
because it is less error-prone, can be copy/pasted, doesn't require
a unique identifier, and is also shorter.
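For illustration, with a hypothetical header `ts_example.h`:

```c
/* Before: an include guard that needs a unique macro name per header. */
#ifndef TIMESCALEDB_TS_EXAMPLE_H
#define TIMESCALEDB_TS_EXAMPLE_H

extern void ts_example(void);

#endif /* TIMESCALEDB_TS_EXAMPLE_H */

/* After: the same header using pragma once. */
#pragma once

extern void ts_example(void);
```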
Removes the following functions:
hypertable_is_compressed_or_materialization()
hypertable_filter_exclude_compressed_and_materialization()
hypertable_tuple_append()
ts_hypertable_get_all()
The tuplelock argument was never used in hypertable_scan_limit_internal.
This patch removes it and also removes some other unused functions related
to locking.
The batch queue implementation, which is part of transparent
decompression, is refactored to be its own self-contained
module. Previously, the batch queue functionality consisted of
functions that operated on the DecompressChunkState node instead of
its own distinct object.
The two batch queue types, heap and fifo, are now modularized into
separate source files with a common interface exposed in
`batch_queue.h`. The state related to the two batch queue
implementations is moved from DecompressChunkState into the respective
modules depending on type. For example, the heap state and related
initialization code is moved to the heap-based batch queue.
The DecompressContext is also moved to its own header file so that
other modules need not include everything related to
DecompressChunkState.
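A sketch of what such an interface can look like (names are illustrative and not necessarily the exact declarations in `batch_queue.h`):

```c
#include "postgres.h"
#include "executor/tuptable.h"

typedef struct BatchQueue BatchQueue;

/*
 * Illustrative vtable-style interface: the heap and fifo implementations
 * provide their own callbacks behind a common struct.
 */
typedef struct BatchQueueFunctions
{
	/* Enqueue the next compressed batch taken from the given slot. */
	void		(*push) (BatchQueue *queue, TupleTableSlot *compressed_slot);
	/* Return the next decompressed tuple, or NULL when the queue is drained. */
	TupleTableSlot *(*pop) (BatchQueue *queue);
	/* Discard all queued batches, e.g. on rescan. */
	void		(*reset) (BatchQueue *queue);
	/* Release all resources held by the queue. */
	void		(*free) (BatchQueue *queue);
} BatchQueueFunctions;

struct BatchQueue
{
	const BatchQueueFunctions *funcs;	/* heap- or fifo-specific callbacks */
};
```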
We can do this if the underlying scalar predicate is vectorizable, by
running the vector predicate on each element of the array and combining
the results.
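A simplified sketch of the combining step (types and names are illustrative; OR-combining is assumed here, as for an ANY-style match, while ALL would use AND):

```c
#include "postgres.h"

/* Illustrative: a vectorized predicate fills a bitmap of matching rows for one constant. */
typedef void (*VectorPredicate) (const int64 *values, int nrows, Datum constant,
								 uint64 *result_bitmap);

static void
vector_predicate_over_array(VectorPredicate predicate, const int64 *values, int nrows,
							const Datum *array_elements, int nelements, uint64 *result)
{
	int			nwords = (nrows + 63) / 64;
	uint64	   *element_result = palloc(sizeof(uint64) * nwords);

	/* Start with no rows matching; each array element can only add matches. */
	memset(result, 0, sizeof(uint64) * nwords);

	for (int i = 0; i < nelements; i++)
	{
		/* Run the scalar predicate's vectorized form for this one element. */
		memset(element_result, 0, sizeof(uint64) * nwords);
		predicate(values, nrows, array_elements[i], element_result);

		/* Combine with the results from the previous elements. */
		for (int w = 0; w < nwords; w++)
			result[w] |= element_result[w];
	}

	pfree(element_result);
}
```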