Currently we finish the execution of some process utility statements
without executing the other hooks in the chain. For that reason, neither
`ts_stat_statements` nor `pg_stat_statements` is able to track some
utility statements, for example `COPY ... FROM`.
To make these statements trackable in `ts_stat_statements`, we introduce
callbacks that allow TimescaleDB to hook into `pgss_store` and record
information about the execution of those statements.
This PR also adds a new GUC `enable_tss_callbacks` (default: `true`) to
enable or disable the ability to hook `ts_stat_statements` from
TimescaleDB.
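A usage sketch, assuming the GUC is exposed under the usual
`timescaledb.` prefix:
```
-- Disable the ts_stat_statements hooking for the current session
-- (the GUC prefix and its scope are assumptions here).
SET timescaledb.enable_tss_callbacks TO false;
```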
The ts_hypertable_insert_blocker function was accessing data from the
trigger context before verifying that a trigger context actually
exists. This led to a crash when the function was called directly.
Fixes: #6819
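A repro sketch; the SQL-level name of the trigger function is an
assumption here:
```
-- Calling the trigger function outside a trigger context now raises an
-- error instead of crashing the backend (function name assumed).
SELECT _timescaledb_functions.insert_blocker();
```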
When catalog corruption occurs and a chunk does not contain any
dimension slices, we crash in ts_dimension_slice_cmp(). This patch adds
a proper check and errors out before that code path is reached.
Add telemetry for tracking access methods used, number of pages for
each access method, and number of instances using each access method.
Also introduces a type-based function `ts_jsonb_set_value_by_type` that
can generate correct JSONB based on the PostgreSQL type. It generates
"bare" values for numerics and strings for anything else, using the
type's output function.
To test this for string values, we update `ts_jsonb_add_interval` to
use this new function, which calls the output function for the type,
just like `ts_jsonb_set_value_by_type`.
This test file was created to handle repairing of hypertables with
broken metadata in the dimension_slice catalog table. The test probably
no longer makes sense today, given that we have more robust referential
integrity in our catalog tables, so remove it now.
The chunk append path creation logic was crashing, when a space
dimension was involved, while checking for matches in the flattened-out
child chunk lists. This has now been fixed.
Several scheduler regression tests create the plpgsql function
`wait_for_job_to_run` with the same purpose of waiting for a given job
to execute or fail, so refactor the regression tests by adding it to the
testsupport.sql library.
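A minimal sketch of what such a helper could look like, assuming it
polls `timescaledb_information.job_stats` (the actual implementation in
`testsupport.sql` may differ):
```
CREATE OR REPLACE FUNCTION wait_for_job_to_run(
    job_id integer, expected_runs integer, spins integer DEFAULT 100)
RETURNS boolean LANGUAGE plpgsql AS
$$
DECLARE
    runs integer;
BEGIN
    FOR i IN 1..spins LOOP
        SELECT total_successes INTO runs
          FROM timescaledb_information.job_stats js
         WHERE js.job_id = wait_for_job_to_run.job_id;
        IF runs >= expected_runs THEN
            RETURN true;
        END IF;
        PERFORM pg_sleep(0.1);
    END LOOP;
    RAISE NOTICE 'job % did not run % times', job_id, expected_runs;
    RETURN false;
END
$$;
```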
If "created_after/before" is used with a "time" type partitioning
column then show_chunks was not showing appropriate list due to a
mismatch in the comparison of the "creation_time" metadata (which is
stored as a timestamptz) with the internally converted epoch based
input argument value. This is now fixed by not doing the unnecessary
conversion into the internal format for cases where it's not needed.
Fixes#6611
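For example (hypothetical hypertable name):
```
-- List chunks created within the last day of a timestamp-partitioned
-- hypertable; this now compares against creation_time correctly.
SELECT show_chunks('metrics', created_after => now() - INTERVAL '1 day');
```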
The only relevant update test versions are v7 and v8; all previous
versions are no longer used in any supported update path, so we can
safely remove those files.
Currently the update test is quite inconvenient to run locally and also
inconvenient to debug, as the different update tests all run in their
own docker containers. This patch refactors the update test to no
longer require docker and makes it easier to debug, as it runs in the
local environment as determined by pg_config.
This patch also consolidates the update/downgrade and repair tests,
since they do very similar things, adds support for coredump
stacktraces to the github action, and removes some dead code from the
update tests.
Additionally, the versions to be used in the update test are now
determined from existing git tags, so the post-release patch no longer
needs to add newly released versions.
Historically we preserve chunk metadata because the old Continuous
Aggregate format has a `chunk_id` column in the materialization
hypertable, so in order not to leave dangling chunk ids there we just
mark the chunk as dropped when dropping chunks.
In #4269 we introduced a new Continuous Aggregate format that no longer
stores the `chunk_id` in the materialization hypertable, so it is safe
to also remove the metadata when dropping a chunk if all associated
Continuous Aggregates are in the new format.
Also added a post-update SQL script to clean up unnecessary dropped
chunk metadata in our catalog.
Closes #6570
Version 2.14.0 removes the multi-node code. However, there were a few
leftovers for the handling of distributed CAggs. This commit cleans up
the CAgg code and removes the no longer needed functions:
- invalidation_cagg_log_add_entry(integer, bigint, bigint)
- invalidation_hyper_log_add_entry(integer, bigint, bigint)
- materialization_invalidation_log_delete(integer)
- invalidation_process_cagg_log(integer, integer, regtype, bigint,
  bigint, integer[], bigint[], bigint[])
- invalidation_process_cagg_log(integer, integer, regtype, bigint,
  bigint, integer[], bigint[], bigint[], text[])
- invalidation_process_hypertable_log(integer, integer, regtype,
  integer[], bigint[], bigint[])
- invalidation_process_hypertable_log(integer, integer, regtype,
  integer[], bigint[], bigint[], text[])
- hypertable_invalidation_log_delete(integer)
If a lot of chunks are involved, the current pl/pgsql function that
computes the size of each chunk via a nested loop is pretty slow.
Additionally, the current functionality makes a system call to get the
on-disk file size of each chunk every time this function is called,
which slows things down further. We now have an approximate function,
implemented in C, that avoids the issues of the pl/pgsql function.
Additionally, this function uses per-backend caching via the smgr layer
to compute the approximate size cheaply. PG cache invalidation clears
the cached size for a chunk when DML happens on it, so the size cache
is able to pick up the latest size within minutes. Also, due to the
backend caching, any long-running session will only fetch the latest
data for new or modified chunks and can effectively use the cached data
(which is calculated afresh the first time around) for older chunks.
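A usage sketch; `hypertable_approximate_size` is assumed to be the
user-facing entry point, and the table name is illustrative:
```
-- Approximate hypertable size computed in C, served from the
-- per-backend smgr-based cache where possible.
SELECT hypertable_approximate_size('metrics');
```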
Foreign keys pointing to hypertables are not supported. Creating a
hypertable from a table referenced by foreign key succeeds, but it
leaves the referencing (child) table in a broken state, failing on every
insert with a `violates foreign key constraint` error.
To prevent this scenario, if a foreign key reference to the table exists
before converting it to a hypertable, the following error will be
raised:
```
cannot have FOREIGN KEY contraints to hypertable "<table_name>"
HINT: Remove all FOREIGN KEY constraints to table "<table_name>"
before making it a hypertable.
```
Fixes #6452
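A minimal repro of the now-rejected scenario (table names are
illustrative):
```
CREATE TABLE metrics (ts timestamptz NOT NULL, id int,
    PRIMARY KEY (ts, id));
CREATE TABLE readings (ts timestamptz, id int,
    FOREIGN KEY (ts, id) REFERENCES metrics (ts, id));
-- Previously succeeded and left "readings" broken; now errors out:
SELECT create_hypertable('metrics', 'ts');
```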
Logging- and caching-related tables from the timescaledb extension
should not be dumped using pg_dump. Our scripts specify a few such
unwanted tables. Apart from being unnecessary, the "job_errors" table
had some restricted permissions that caused additional problems in
pg_dump. We now exclude such tables from dumping.
Fixes #5449
This patch changes the dump configuration for
_timescaledb_catalog.metadata to include all entries. To allow loading
logical dumps with this configuration, an insert trigger is added that
turns uniqueness conflicts into updates so they do not block the
restore.
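A sketch of what such a trigger could look like, assuming a key/value
layout for `_timescaledb_catalog.metadata` (names and details are
illustrative, not the actual implementation):
```
CREATE FUNCTION metadata_insert_guard() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    -- Turn a would-be duplicate key into an update so restores
    -- of logical dumps do not fail on the unique constraint.
    UPDATE _timescaledb_catalog.metadata
       SET value = NEW.value
     WHERE key = NEW.key;
    IF FOUND THEN
        RETURN NULL; -- skip the conflicting insert
    END IF;
    RETURN NEW;
END
$$;
CREATE TRIGGER metadata_insert_guard
    BEFORE INSERT ON _timescaledb_catalog.metadata
    FOR EACH ROW EXECUTE FUNCTION metadata_insert_guard();
```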
Since the optional time_bucket arguments like offset, origin and
timezone shift the output by at most one bucket width, we can optimize
these similarly to how we optimize the other time_bucket constraints.
Fixes #4825
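For example, a qualifier like the following (illustrative schema) can
now participate in chunk exclusion:
```
-- time_bucket with an offset still allows chunk exclusion, since the
-- result differs from the plain bucket by at most one bucket width.
SELECT * FROM metrics
WHERE time_bucket('1 day', ts, '2 hours'::interval) >= '2024-01-01';
```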
When time_bucket is compared to a constant in the WHERE clause, we also
add a condition on the underlying time variable
(ts_transform_time_bucket_comparison). Unfortunately, we only did this
for plan-time constants, which prevented chunk exclusion when the
interval is given by a query parameter and a generic plan is used. This
commit also tries to apply this optimization after applying the
execution-time constants.
This PR also enables startup exclusion based on parameterized filters.
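A sketch of the previously problematic pattern (illustrative schema),
where the comparison value only becomes known at execution time:
```
PREPARE recent(timestamptz) AS
    SELECT * FROM metrics
    WHERE time_bucket('1 day', ts) >= $1;
-- Force a generic plan so $1 stays a parameter until execution time;
-- chunk exclusion can now still be applied.
SET plan_cache_mode = 'force_generic_plan';
EXECUTE recent(now() - INTERVAL '7 days');
```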
This patch removes some version checks that are now superfluous. The
oldest version our update process needs to be able to handle is 2.1.0,
as previous versions will not work with currently supported postgres
versions.
This patch implements changes to the compressed hypertable to allow
per-chunk configuration. To enable this, the compressed hypertable can
no longer be part of an inheritance tree, as the schema of the
compressed chunk is determined by the compression settings. While this
patch implements all the underlying infrastructure changes, the
restrictions for changing compression settings remain intact and will
be lifted in a follow-up patch.
The extension state is not easily accessible in release builds, which
makes debugging issues with the loader very difficult. This commit
introduces a new schema `_timescaledb_debug` and makes the function
`ts_extension_get_state` available also in release builds as
`_timescaledb_debug.extension_state`.
See #1682
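Usage (assuming the function takes no arguments):
```
-- Inspect the loader's extension state in a release build.
SELECT _timescaledb_debug.extension_state();
```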
Another try to reduce the flakiness of this test. The previously added
REVOKE CONNECT does not stop db owners or superusers from connecting.
Altering the database with allow_connections = off should stop
everybody from connecting.
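The stock PostgreSQL command used for this (database name is
illustrative):
```
-- Block all new connections, including owners and superusers.
ALTER DATABASE template_db WITH ALLOW_CONNECTIONS false;
```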
During pg_dump creation of the extra database, we terminate all
processes connected to the template database, but we don't revoke the
ability to connect to it. This is why we still see random failures when
creating the new database.
When dropping the database used in the various tests, we can encounter
an error if something is still connected to the database, making the
output unpredictable. This change should reduce the possibility of that
happening.
The update test fails on PG13 on the statement
`\d+ cagg_joins_upgrade_test_with_realtime_using` with
the error message `ERROR: invalid memory alloc request size 13877216128`.
To unblock CI and allow other PRs to get merged we temporarily
skip the offending query on PG13.
The latest version of Postgres introduced an optimization that removes
redundant grouping and DISTINCT columns. This optimization needs to be
taken into account when generating pushdown aggregation plans so that
we create valid plans with correct grouping columns.
This is a well-known flaky test where we need to create another
database as a template of TEST_DBNAME, but some background workers did
not have enough time to properly shut down and disconnect from the
database.
Fixed it by forcing other processes to terminate right after waiting
for background workers to disconnect from the database.
Closes #4766
Signed-off-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
If a hypertable uses a non-default tablespace for its primary key or
unique constraints with additional DEFERRABLE or INITIALLY DEFERRED
characteristics, then any chunk creation will fail with a syntax error.
We now set the tablespace via a separate command for such constraints
on the chunks.
Fixes #6338
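A sketch of the kind of constraint that used to break chunk creation
(names are illustrative; the tablespace must already exist):
```
CREATE TABLE metrics (
    ts timestamptz NOT NULL,
    id int,
    CONSTRAINT metrics_pk PRIMARY KEY (ts, id)
        USING INDEX TABLESPACE myspace
        DEFERRABLE INITIALLY DEFERRED
);
SELECT create_hypertable('metrics', 'ts');
-- Chunk creation now succeeds; the chunk constraint's tablespace is
-- set internally via a separate command.
INSERT INTO metrics VALUES (now(), 1);
```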
One test query of the parallel test did not contain an ORDER BY.
Therefore, the test result was not deterministic. This patch adds the
missing ORDER BY.
Tests `util` and `repair` both used the same user name, so when
executed in the same parallel suite they could conflict. Instead, use
different role names for different tests.
If users have accidentally been removed from `pg_authid` as a result of
bugs where dropping a user did not revoke privileges from all tables
where they had privileges, it will not be possible to create new
chunks, since these require the user to be found when copying the
privileges for the parent table (either a compressed hypertable or a
normal hypertable).
To fix the situation, we repair the `pg_class` table when updating the
extension by modifying the `relacl` for relations and removing any user
that does not have an entry in `pg_authid`.
A repair function `_timescaledb_functions.repair_relation_acls` is
added to perform the job. A variant of `makeaclitem` from PG16 that
accepts a comma-separated list of privileges and is used as part of the
repair is also added as `_timescaledb_functions.makeaclitem`.
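The repair can presumably also be invoked manually (sketch; the exact
signature is an assumption):
```
-- Remove relacl entries whose grantee or grantor is missing
-- from pg_authid.
SELECT _timescaledb_functions.repair_relation_acls();
```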
- Updated the show_chunks and drop_chunks APIs to get the affected
  chunks using the chunk creation time metadata, based on a
  "date/time/interval"-like boundary specified for INTEGER columns
  (see the example after this list).
- We honor the "integer_now" function if it is specified, so as to keep
  backwards compatibility with the existing behavior.
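A hedged usage sketch (table name is illustrative):
```
-- For an INTEGER partitioning column, select chunks by their creation
-- time using a date/time-like boundary instead of an integer range.
SELECT show_chunks('int_metrics',
                   created_before => now() - INTERVAL '1 month');
```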
Co-authored-by: Dipesh Pandit <dipesh@timescale.com>
The upgrade test utilizes several CAggs and defines refresh policies.
However, the test results are not deterministic due to the
unpredictable timing of the refreshes, which makes the test flaky. This
pull request adds explicit refreshes to the upgrade test to ensure that
the test results are deterministic.
PG16 removes RelOptInfo entries from root->simple_rel_array when it
considers them no longer needed in analyzejoins.c, to prevent
reprocessing them. Due to this, a relation may be present in
root->simple_rte_array but have no corresponding entry in
root->simple_rel_array. This patch adds a check for this case.
https://github.com/postgres/postgres/commit/e9a20e45
This patch stores the current catalog version in an internal table to
allow us to verify that the catalog and code versions match. When doing
dump/restore, people occasionally report very unusual errors, and
during investigation it is discovered that they loaded a dump from an
older version and ran it with a later code version. This allows us to
detect mismatches between the installed code version and the loaded
dump version. The version number in the metadata table will be kept
updated in upgrade and downgrade scripts.
- Added a creation_time attribute to the timescaledb catalog table
  "chunk". Also updated the corresponding view
  timescaledb_information.chunks to include a chunk_creation_time
  attribute (see the example after this list).
- A newly created chunk is assigned the creation time during chunk
  creation to handle a new partition range for the given dimension
  (Time/SERIAL/BIGSERIAL/INT/...).
- In case of an already existing chunk, the creation time is updated as
  part of running the upgrade script. The current timestamp (now()) at
  the time of the upgrade is assigned as the chunk creation time.
- Similarly, the downgrade script is updated to drop the creation_time
  attribute from the catalog table "chunk".
- All relevant queries/views/test output have been updated accordingly.
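A quick way to see the new attribute (hypertable name is illustrative):
```
SELECT chunk_name, chunk_creation_time
  FROM timescaledb_information.chunks
 WHERE hypertable_name = 'metrics';
```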
Co-authored-by: Nikhil Sontakke <nikhil@timescale.com>