PostgreSQL 14 introduced a new `Memoize` node that serves as a cache for
results from parameterized nodes.
We should make sure it works correctly together with our ChunkAppend
custom node over hypertables (compressed and uncompressed).
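For illustration, a hedged sketch of how a Memoize node might appear above
ChunkAppend on PG14 (table and column names are hypothetical):

```sql
-- Hypothetical schema: hypertable `metrics` joined to a small
-- `devices` lookup table. On PG14 the parameterized inner side of the
-- nested loop may be wrapped in a Memoize node above the ChunkAppend.
EXPLAIN (COSTS OFF)
SELECT d.name, m.value
FROM devices d
JOIN metrics m ON m.device_id = d.id
WHERE d.name LIKE 'sensor-%';

-- To compare plans with and without the cache:
SET enable_memoize TO off;
```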
Closes #3684
To improve remote query push down, do the following:
* Import changes to remote cost estimates from PostgreSQL 14
`postgres_fdw`. The cost estimations for distributed (remote)
queries were originally based on the `postgres_fdw` code in
PG11. However, fixes and changes have been applied in newer
PostgreSQL versions, which improve, among other things, the costing
of sorts and HAVING clauses.
* Increase the cost of transferring tuples. This penalizes doing
grouping/aggregation on the access node (AN) since it requires
transferring more tuples, leading to the planner preferring the
push-down plans.
* As a result of the above, the improved costing also makes
distributed queries behave similarly across all currently supported
PostgreSQL versions for our test cases.
* Enable `dist_query` tests on PG14 (since it now passes).
* Update the `dist_partial_agg` test to use additional ordering
columns so that there is no diff in the test output due to ordering
of input to the `first` and `last` functions.
This change removes a check for `USAGE` privileges on data nodes
required to query the data node using utility commands, such as
`hypertable_size`. Normally, PostgreSQL doesn't require `USAGE` on a
foreign server to query its remote tables. Also, size utilities like
`pg_table_size` can be used by anyone, even roles without any
privileges on a table. The behavior on distributed hypertables is now
consistent with PostgreSQL.
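A sketch of the now-consistent behavior (role and table names are
hypothetical):

```sql
-- A role with access to the distributed hypertable, but no USAGE on
-- the data nodes' foreign servers, can still use size utilities.
CREATE ROLE reader LOGIN;
GRANT SELECT ON conditions TO reader;

SET ROLE reader;
SELECT hypertable_size('conditions');  -- no longer requires USAGE on data nodes
RESET ROLE;
```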
Fixes #3698
When building on macOS with Clang 11, the build fails with:
error: unused function 'get_chunk_rri'
error: unused function 'get_hyper_rri'
[-Werror,-Wunused-function]
This PR fixes it.
The Windows compiler has problems with the macros in genbki.h,
complaining about redefinition of a variable with a different
storage class. Since those specific macros are processed by a
Perl script and are not relevant for the build process, we turn
them into no-ops for Windows.
This PR removes the C code that executes the compression
policy. Instead we use a PL/pgSQL procedure to execute
the policy.
PG13.4 and PG12.8 introduced some changes
that require PortalContexts while executing transactions.
The compression policy procedure compresses chunks in
multiple transactions. We have seen some issues with snapshots
and portal management in the policy code (due to the
PG13.4 code changes). The SPI API has transaction/portal management
code, but the compression policy code does not use SPI
interfaces. It is fairly easy to convert the policy into
a PL/pgSQL procedure (which uses SPI) rather than replicating
portal management code in C to manage multiple transactions in the
compression policy.
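For illustration, a minimal sketch of the approach (not the actual
procedure shipped in the extension; names and arguments are simplified):

```sql
CREATE OR REPLACE PROCEDURE compress_old_chunks(ht REGCLASS, older_than INTERVAL)
LANGUAGE plpgsql
AS $$
DECLARE
  chunk REGCLASS;
BEGIN
  FOR chunk IN SELECT show_chunks(ht, older_than => older_than)
  LOOP
    PERFORM compress_chunk(chunk, if_not_compressed => true);
    COMMIT;  -- one transaction per chunk; SPI handles the portal management
  END LOOP;
END;
$$;
```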
This PR also disallows decompress_chunk, compress_chunk, and
recompress_chunk in read-only transaction mode.
Fixes #3656
The plan output of the dist_partial_agg test is different on PG14,
so we need to make it PG version specific. On PG14, sorts are pushed
down more often, leading to better plans in some cases.
This also updates the dist_hypertable-14 test output, which differs
from previous PG versions due to some renumbering of relation aliases.
PG14 introduced new `ALTER TABLE` sub-commands:
* `.. ALTER COLUMN .. SET COMPRESSION`: handled properly in the
`process_utility` hook code and added related regression tests (see
the sketch after this list)
* `.. DETACH PARTITION .. {CONCURRENTLY | FINALIZE}`: handled
properly in the `process_utility` hook code, but there's no need to
add regression tests because we don't rely on native partitioning in
hypertables.
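A sketch of the new syntax now handled by the hook (table and column
names are hypothetical):

```sql
-- PG14 sub-commands handled by the process_utility hook:
ALTER TABLE metrics ALTER COLUMN payload SET COMPRESSION pglz;
ALTER TABLE measurements DETACH PARTITION measurements_2021 CONCURRENTLY;
```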
Closes #3643
Since custom types are hashable in PG14, the partition test output
differs on PG14. Since the only difference was a test checking that
creating a hypertable with a custom-type partitioning column throws
an error when no partitioning function is given, that specific test
was moved to the ddl tests, which are already PG version specific.
VACUUM VERBOSE is a source of flaky tests, and we don't gain much
by including the verbose output in the test. Additionally, removing
the verbose option saves us from having to make the vacuum tests
PG version specific, as PG14 slightly changes the formatting of the
VACUUM VERBOSE output.
With memoize enabled, the PG14 append tests produce a very different
plan compared to previous PG versions. To make comparing plans
between PG versions easier, we disable memoize on PG14.
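Concretely, the PG14 test variants can disable it with something like:

```sql
SET enable_memoize TO off;  -- PG14 only; keeps plans comparable across versions
```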
PG14 also modified how EXTRACT is shown in EXPLAIN output, so any
query using EXTRACT will have different EXPLAIN output on PG14
compared to previous versions.
The previous PR enabling tests on PG14 did not actually require the
tests to pass. With INSERT support merged and most of the tests
passing, it makes sense to require the tests to pass so as not to
introduce regressions, while explicitly exempting the currently known
failing tests.
PG14 refactors the INSERT path and removes the result relation from
the executor state, which means plan nodes don't have easy access to
the current result relation and can no longer modify it. This patch
changes the chunk tuple routing for PG14 to pull in code from
ModifyTable, which unfortunately is static, and adjusts it to allow
chunk tuple routing.
When a DISTINCT query has a WHERE clause that constifies the
DISTINCT column, the query might use an index that does not
include the DISTINCT column even though it is referenced in the
ORDER BY clause. The SkipScan path generation would error on any
path with such a configuration. This patch changes the path
generation code to skip generating a SkipScan path under these
circumstances.
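A hedged sketch of the query shape that previously triggered the error
(schema is hypothetical):

```sql
-- The WHERE clause constifies the DISTINCT column, so the planner may
-- choose an index path that does not include device_id at all.
SELECT DISTINCT device_id
FROM metrics
WHERE device_id = 1
ORDER BY device_id;
-- SkipScan path generation now skips such paths instead of erroring.
```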
Fixes #3629
Inside the `process_truncate` function, a new relation list is
created with the distributed hypertables removed, and this new list
is assigned as the current statement's relation list. The new list is
allocated in the `PortalContext`, which is destroyed at the end of
the statement execution.
The problem arises on a subsequent `TRUNCATE` call: the compiled
PL/pgSQL code is cached in another memory context, and the relation
elements inside this cache point to an invalid memory area because
the `PortalContext` has been destroyed.
Fixed it by allocating the new relation list in the same memory
context as the original list.
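A sketch of the failure mode (names are hypothetical): the second call
through cached PL/pgSQL code hit the freed list.

```sql
CREATE OR REPLACE PROCEDURE truncate_conditions()
LANGUAGE plpgsql
AS $$
BEGIN
  TRUNCATE conditions;  -- conditions is a distributed hypertable
END;
$$;

CALL truncate_conditions();  -- first call: PL/pgSQL plan gets cached
CALL truncate_conditions();  -- second call: the cached relation list
                             -- pointed into the destroyed PortalContext
```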
Fixes #3580, fixes #3622, fixes #3182
The telemetry-without-OpenSSL test did not use the correct args
supplied in the build matrix, leading to a failure on PG14, as
building against PG14 currently requires `-DEXPERIMENTAL=ON`.
When we clone an index from a chunk, we must not do attnum mapping if the
supplied index template is not on the hypertable. Ideally we would check
for the template being on the chunk but we cannot do that since when we
rebuild a chunk the new chunk has a different id.
Fixes #3651
A segmentation fault was occurring when calling the procedure
`refresh_continuous_aggregate` from a user-defined action (job).
Fixed it by adding `SPI_connect_ext`/`SPI_finish` calls during the
execution, because there are underlying SPI calls that were leading
us to an invalid SPI state (nonexistent `_SPI_current` global).
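A sketch of the failing setup (procedure and aggregate names are
hypothetical):

```sql
CREATE OR REPLACE PROCEDURE refresh_caggs(job_id INT, config JSONB)
LANGUAGE plpgsql
AS $$
BEGIN
  -- Previously segfaulted: underlying SPI calls ran without a valid
  -- SPI connection (nonexistent _SPI_current).
  CALL refresh_continuous_aggregate('conditions_summary', NULL, NULL);
END;
$$;

SELECT add_job('refresh_caggs', '1 hour');
```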
Fixes #3145
The `timescaledb.ssl_dir` and `timescaledb.passfile` GUCs are currently
only visible to superusers. This makes it impossible for non-superusers
to read the GUC values, which might be useful in some cases
(e.g., troubleshooting or building multinode automation).
Since these GUCs do not contain any sensitive information, it makes
sense to make them public.
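After the change, a non-superuser can read them directly (a sketch;
the role is hypothetical):

```sql
SET ROLE automation_user;  -- any non-superuser role
SHOW timescaledb.ssl_dir;
SHOW timescaledb.passfile;
```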
Fix #3608
"create_distributed_hypertable" fails on the datanodes if the columns
involved in the underlying non-default schema-qualified PG table are
using user defined types (UDTs) from another non-standard schema.
This happens because we explicitly set the namespace during the table
creation on the datanode, which doesn't allow us to look up other
schemas. We now unconditionally schema-qualify the UDT while sending
the SQL from access node to the datanode to avoid this.
Includes test case changes.
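A sketch of the scenario that used to fail (schema and type names are
hypothetical):

```sql
CREATE SCHEMA types;
CREATE TYPE types.temperature AS (value DOUBLE PRECISION, unit TEXT);

CREATE SCHEMA sensors;
CREATE TABLE sensors.readings (
  time TIMESTAMPTZ NOT NULL,
  temp types.temperature
);

-- Previously failed on the datanodes because the UDT was not
-- schema-qualified in the DDL sent from the access node.
SELECT create_distributed_hypertable('sensors.readings', 'time');
```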
Issue reported by and general fix provided by @phemmer
Fixes #3543
PG14 changes the ModifyTablePath struct to have a single child
subpath instead of a list of subpaths. Similarly, ModifyTableState's
mt_nplans field gets removed because ModifyTable will only have a
single child in PG14. The same patch also removes ri_junkFilter from
ResultRelInfo.
https://github.com/postgres/postgres/commit/86dc9005
Using now() in regression tests will result in flaky tests, as it
can create a different number of chunks depending on the alignment
of now() relative to chunk boundaries.
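For example, a fixed reference timestamp keeps the chunk count
deterministic (a sketch; the table is hypothetical):

```sql
-- Flaky: the number of chunks depends on when the test runs.
INSERT INTO metrics
SELECT now() - (i || ' hours')::interval, i FROM generate_series(1, 100) i;

-- Deterministic: a fixed reference time.
INSERT INTO metrics
SELECT '2021-10-01 00:00:00+00'::timestamptz - (i || ' hours')::interval, i
FROM generate_series(1, 100) i;
```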
Simplify the CTE to recursively inspect all partitions of a relation
and calculate the sum of `pg_class.reltuples`, taking into account
the differences introduced by PG14.
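A sketch of the idea (relation name hypothetical); on PG14,
`reltuples` is `-1` for never-analyzed relations, so it needs clamping:

```sql
WITH RECURSIVE relations AS (
  SELECT oid FROM pg_class WHERE oid = 'metrics'::regclass
  UNION ALL
  SELECT i.inhrelid
  FROM relations r
  JOIN pg_inherits i ON i.inhparent = r.oid
)
SELECT sum(greatest(c.reltuples, 0)) AS total_tuples
FROM relations r
JOIN pg_class c ON c.oid = r.oid;
```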
When a new chunk is created, the ACL is copied from the hypertable, but
the shared dependencies are not set at all. Since there is no shared
dependency, a `DROP OWNED BY` will not find the chunk and revoke the
privileges for the user from the chunk. When the user is later dropped,
the ACL for the chunk will contain a non-existent user.
This commit fixes this by adding shared dependencies of the hypertable
to the chunk when the chunk is created.
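A sketch of the previously broken sequence (names hypothetical):

```sql
CREATE ROLE alice;
GRANT ALL ON conditions TO alice;  -- ACL is copied to new chunks

-- ... inserts create chunks ...

DROP OWNED BY alice;  -- previously missed the chunks (no shared dependency)
DROP ROLE alice;      -- left dangling ACL entries on the chunks
```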
Fixes #3614
When a chunk is not found, we print a generic error message that does
not hint at what we are looking for, which means that it is very hard
to locate the problem.
This commit adds details to the error message, printing out the
values used for the scan key when searching for the chunk.
Related-To: #2344
Related-To: #3400
Related-To: #153
If insertion is attempted into a chunk that is compressed, the error
message is very brief. This commit adds a hint that the chunk should
be decompressed before inserting into it, and also lists the triggers
on the chunk so that it is easy to debug.
Update version numbers and add 2.4.2 to update tests.
We have to put the DROP FUNCTION back in latest-dev because
2.4.2 did not include the commit which removed the function
definitions.
The access node maintains a cache of connections to its data
nodes. Each entry in the cache is a connection for a user and remote
data node pair. Currently, a cache entry is invalidated when a foreign
server object representing a data node is changed (e.g., the port
could have been updated). The connection will remain in the cache for
the duration of the current command, but will be remade with the
updated parameters the next time it is fetched from the cache.
This change invalidates a connection cache entry if the connection's
role/user changes and drops an entry if the role is dropped. One
reason for invalidating a connection is that a role rename invalidates
the certificate the connection is using in case client certificate
authentication is used. Thus, connections that have been
authenticated with a certificate that is no longer valid will be
remade. In some cases, this extra invalidation leads to purging
connections when not strictly necessary. However, this is not a big
problem in practice since role objects don't change often.
Users often have trouble creating continuous aggregates because many
error messages are the same and quite terse, offering no guidance to
the user on how to fix the problem. This makes some of them less
terse.
Co-authored-by: David Kohn <david@timescale.com>
Co-authored-by: Mats Kindahl <mats@timescale.com>
This release contains bug fixes since the 2.4.1 release.
We deem it high priority to upgrade.
**Bugfixes**
* #3437 Rename on all continuous aggregate objects
* #3469 Use signal-safe functions in signal handler
* #3520 Modify compression job processing logic
* #3527 Fix time_bucket_ng behaviour with origin argument
* #3532 Fix bootstrap with regresschecks disabled
* #3574 Fix failure on job execution by background worker
* #3590 Call cleanup functions on backend exit
**Thanks**
* @jankatins for reporting a crash with background workers
* @LutzWeischerFujitsu for reporting an issue with bootstrap
When renaming a column on a continuous aggregate, only the user view
column was renamed. This commit changes this by applying the rename on
the materialized table, the user view, the direct view, and the partial
view, as well as the column name in the dimension table.
Since this also changes some of the table definitions, we need to
perform the same rename in the update scripts as well, which is done by
comparing the direct view and the user view to decide what columns that
require a rename and then executing that rename on the direct view,
partial view, and materialization table, as well as updating the column
name in the dimension table.
When renaming columns in tables with indexes, the column in the table
is renamed but not the column in the index, which keeps the same name.
When restoring from a dump, however, the column name of the table is
used, which creates a diff in the update test. For that reason, we
change the update tests to not list index definitions as part of the
comparison. The existence of the indexes is still tracked and compared,
since the indexes for a hypertable are listed as part of the output.
If a downgrade does not revert some changes, this will cause a diff in
the downgrade test. Since the rename is benign and not easy to
revert, it would cause a test failure. Instead, we add a file to do
extra actions during a clean re-run to prevent these diffs; in this
case, applying the same renames as the update script.
Fixes #3405
If the registered procedure has a COMMIT inside its code, the
execution of the job by the background worker was failing.
Fixed it by checking whether there is an ActivePortal; if not, we
create a Portal from scratch and execute the existing job execution
code.
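A sketch of the kind of job that was failing (procedure name and body
are hypothetical):

```sql
CREATE OR REPLACE PROCEDURE batched_work(job_id INT, config JSONB)
LANGUAGE plpgsql
AS $$
BEGIN
  -- ... some batch of work ...
  COMMIT;  -- previously failed under the background worker: no ActivePortal
END;
$$;

SELECT add_job('batched_work', '1 hour');
```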