If a hypertable uses a non-default tablespace for its primary key or
unique constraints with additional DEFERRABLE or INITIALLY DEFERRED
characteristics, any chunk creation will fail with a syntax error. We
now set the tablespace for such constraints on the chunks via a
separate command.
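A hypothetical reproduction sketch (the table, column, and tablespace
names are made up):
-- Assumes a tablespace "ts1" already exists.
CREATE TABLE conditions (
    time timestamptz NOT NULL,
    device int,
    UNIQUE (time, device) USING INDEX TABLESPACE ts1 DEFERRABLE
);
SELECT create_hypertable('conditions', 'time');
-- Before the fix, the first INSERT (which creates a chunk) failed
-- with a syntax error.
INSERT INTO conditions VALUES (now(), 1);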
Fixes #6338
One test query in the parallel test did not contain an ORDER BY
clause, so the test result was not deterministic. This patch adds the
missing ORDER BY.
Tests `util` and `repair` both used the same user name, so when
executed in the same parallel suite they could conflict. Instead, use
different role names for different tests.
If users have accidentally been removed from `pg_authid` as a result
of bugs where dropping a user did not revoke privileges from all
tables where they had privileges, it will not be possible to create
new chunks, since these require the user to be found when copying the
privileges of the parent table (either a compressed hypertable or a
normal hypertable).
To fix the situation, we repair the `pg_class` table when updating
the extension by modifying the `relacl` for relations and removing
any users that do not have an entry in `pg_authid`.
A repair function `_timescaledb_functions.repair_relation_acls` is
added that will perform the job. A version of `makeaclitem` from PG16
that accepts a comma-separated list of privileges is also added as
`_timescaledb_functions.makeaclitem` and used as part of the repair.
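A hedged usage sketch (assuming the repair function takes no
arguments; it is normally run by the update script itself):
-- Rewrite relacl entries, dropping ACL items for roles that no
-- longer exist in pg_authid.
SELECT _timescaledb_functions.repair_relation_acls();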
- Updated the show_chunks and drop_chunks APIs to select the affected
  chunks using chunk creation time metadata, based on a
  date/time/interval-like boundary specified for INTEGER columns (see
  the sketch after this list).
- We honor the "integer_now" function if it is specified, to keep
  backward compatibility with the existing behavior.
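A hedged sketch, assuming the boundary is passed via a creation-time
argument such as `created_before` (the table name is hypothetical):
-- Drop chunks created more than a week ago, even though the
-- hypertable is partitioned on an INTEGER column.
SELECT drop_chunks('sensor_data', created_before => now() - INTERVAL '7 days');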
Co-authored-by: Dipesh Pandit <dipesh@timescale.com>
The upgrade test uses several CAggs and defines refresh policies.
However, the test results are not deterministic due to the
unpredictable timing of the refreshes, which makes the test flaky.
This patch adds explicit refreshes to the upgrade test to ensure that
the test results are deterministic.
PG16 removes RelOptInfo entries from root->simple_rel_array when it
considers them no longer needed in analyzejoins.c, to prevent
reprocessing them. As a result, a relation may be present in
root->simple_rte_array but have no corresponding entry in
root->simple_rel_array. This patch adds a check for this case.
https://github.com/postgres/postgres/commit/e9a20e45
This patch stores the current catalog version in an internal table to
allow us to verify that the catalog and code versions match. When
doing dump/restore, people occasionally report very unusual errors,
and during investigation it is discovered that they loaded a dump
from an older version and ran it with a later code version. This
makes it possible to detect mismatches between the installed code
version and the loaded dump version. The version number in the
metadata table will be kept up to date in the upgrade and downgrade
scripts.
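A hedged inspection query (the exact metadata key name is an
assumption):
-- List version-related entries in the metadata table.
SELECT key, value
FROM _timescaledb_catalog.metadata
WHERE key LIKE '%version%';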
- Added a creation_time attribute to the timescaledb catalog table
  "chunk". Also updated the corresponding view
  timescaledb_information.chunks to include a chunk_creation_time
  attribute (an example query follows this list).
- A newly created chunk is assigned the creation time during chunk
  creation to handle a new partition range for a given dimension
  (Time/SERIAL/BIGSERIAL/INT/...).
- In the case of an already existing chunk, the creation time is
  updated as part of running the upgrade script. The current
  timestamp (now()) at the time of the upgrade is assigned as the
  chunk creation time.
- Similarly, the downgrade script is updated to drop the
  creation_time attribute from the catalog table "chunk".
- All relevant queries/views/test output have been updated
  accordingly.
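A short example of reading the new attribute through the information
view (the hypertable name is hypothetical):
SELECT chunk_name, chunk_creation_time
FROM timescaledb_information.chunks
WHERE hypertable_name = 'conditions';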
Co-authored-by: Nikhil Sontakke <nikhil@timescale.com>
The current hypertable creation interface is heavily focused on a
time column, but since hypertables support partitioning on more than
just time columns, we introduce a more generic API that supports
different types of partitioning keys.
The new interface introduces new versions of create_hypertable and
add_dimension, and a replacement function `set_partitioning_interval`
that replaces `set_chunk_time_interval`. The new functions accept an
instance of dimension_info that can be constructed using the
constructor functions `by_range` and `by_hash`, allowing a more
versatile and future-proof API.
For example:
SELECT create_hypertable('conditions', by_range('time'));
SELECT add_dimension('conditions', by_hash('device'));
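The partitioning interval can then be adjusted with the replacement
function (a hedged sketch; the exact argument list is an assumption):
SELECT set_partitioning_interval('conditions', INTERVAL '1 day');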
The old API remains, but will eventually be deprecated.
PG13 introduced an option to the DROP DATABASE statement to terminate
all existing connections to the target database. Now that our minimum
supported version is PG13, it makes sense to use it in regression
tests to avoid potentially flaky tests.
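For instance (the database name is hypothetical):
DROP DATABASE regress_db WITH (FORCE);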
The bgw_launcher test changes the tablespace for a database. However,
this requires that the database is not in use and that no background
workers are accessing it. This PR ensures that the background workers
are stopped before the tablespace is changed.
By default, the compression policy is scheduled to run every
chunk_time_interval / 2 in the current implementation, which equals
three days and twelve hours with our default settings. This schedule
interval was sufficient for previous versions of TimescaleDB.
However, with the introduction of features like mutable compression
and ON CONFLICT .. DO UPDATE queries, regular DML operations
decompress data. To ensure that modified data is compressed again
earlier, this patch reduces the schedule interval of the compression
policy so that it runs at least every 12 hours.
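If needed, a different schedule interval can still be set explicitly
when creating the policy (a hedged sketch; the table name and
intervals are hypothetical):
SELECT add_compression_policy('conditions',
       compress_after => INTERVAL '7 days',
       schedule_interval => INTERVAL '1 hour');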
This patch stops the background worker in the bgw_launcher test teardown
before the database is dropped. The current version of the test is flaky
because sometimes the database cannot be dropped due to BGW activity.
So far, we have created fake partitioning info for hypertables if the
PostgreSQL setting 'enable_partitionwise_aggregate' is set. This causes
PostgreSQL to push down partial aggregations to the chunk level.
However, the PostgreSQL code has some drawbacks because the query is
replanned and optimizations like ChunkAppend are lost. Since #5596 we
have implemented our own code to push down partial aggregations.
Therefore, we can ignore the PostgreSQL setting from now on.
Previously, the view was created outside the IF block, leading to an
unexpected error when attempting to drop a dependency of the view in
the extension upgrade script.
When an invalid function OID is passed to set_integer_now_func, it
detects that the OID is invalid, but before throwing the error it
calls ReleaseSysCache on an invalid tuple, causing a segfault. Fixed
by removing the invalid ReleaseSysCache call.
Fixes #6037
INSERT ... ON CONFLICT statements record a few metrics in the
ModifyTable node's instrumentation, but they get overwritten by
hypertable_modify_explain, causing wrong output in EXPLAIN ANALYZE
statements. Fix this by saving the metrics into the HypertableModify
node before replacing them.
Fixes #6014
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions (a usage example follows the
list):
- get_partition_for_key(val anyelement)
- get_partition_hash(val anyelement)
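After the move, these are invoked via the new schema, e.g.:
SELECT _timescaledb_functions.get_partition_hash(42);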
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- chunk_constraint_add_table_constraint(_timescaledb_catalog.chunk_constraint)
- chunk_drop_replica(regclass,name)
- chunk_index_clone(oid)
- chunk_index_replace(oid,oid)
- create_chunk_replica_table(regclass,name)
- drop_stale_chunks(name,integer[])
- health()
- hypertable_constraint_add_table_fk_constraint(name,name,name,integer)
- process_ddl_event()
- wait_subscription_sync(name,name,integer,numeric)
Two queries in post.continuous_aggs.v3.sql had no ORDER BY
specification. Therefore, the query output was not deterministic. This
patch adds the missing ORDER BY.
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- cagg_watermark(integer)
- cagg_watermark_materialized(integer)
- hypertable_invalidation_log_delete(integer)
- invalidation_cagg_log_add_entry(integer,bigint,bigint)
- invalidation_hyper_log_add_entry(integer,bigint,bigint)
- invalidation_process_cagg_log(integer,integer,regtype,bigint,bigint,integer[],bigint[],bigint[])
- invalidation_process_cagg_log(integer,integer,regtype,bigint,bigint,integer[],bigint[],bigint[],text[])
- invalidation_process_hypertable_log(integer,integer,regtype,integer[],bigint[],bigint[])
- invalidation_process_hypertable_log(integer,integer,regtype,integer[],bigint[],bigint[],text[])
- materialization_invalidation_log_delete(integer)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- restart_background_workers()
- stop_background_workers()
- start_background_workers()
- alter_job_set_hypertable_id(integer,regclass)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- calculate_chunk_interval(int, bigint, bigint)
- chunk_status(regclass)
- chunks_in(record, integer[])
- chunk_id_from_relid(oid)
- show_chunk(regclass)
- create_chunk(regclass, jsonb, name, name, regclass)
- set_chunk_default_data_node(regclass, name)
- get_chunk_relstats(regclass)
- get_chunk_colstats(regclass)
- create_chunk_table(regclass, jsonb, name, name)
- freeze_chunk(regclass)
- unfreeze_chunk(regclass)
- drop_chunk(regclass)
- attach_osm_table_chunk(regclass, regclass)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- generate_uuid()
- get_git_commit()
- get_os_info()
- tsl_loaded()
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- relation_size(regclass)
- data_node_hypertable_info(name, name, name)
- data_node_chunk_info(name, name, name)
- hypertable_local_size(name, name)
- hypertable_remote_size(name, name)
- chunks_local_size(name, name)
- chunks_remote_size(name, name)
- range_value_to_pretty(bigint, regtype)
- get_approx_row_count(regclass)
- data_node_compressed_chunk_stats(name, name, name)
- compressed_chunk_local_stats(name, name)
- compressed_chunk_remote_stats(name, name)
- indexes_local_size(name, name)
- data_node_index_size(name, name, name)
- indexes_remote_size(name, name, name)
So far, the ts_bookend_deserializefunc() function has allocated the
deserialized data in the current memory context. This data could be
freed before the aggregation finished. This patch moves the data into
the aggregation memory context.
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- to_unix_microseconds(timestamptz)
- to_timestamp(bigint)
- to_timestamp_without_timezone(bigint)
- to_date(bigint)
- to_interval(bigint)
- interval_to_usec(interval)
- time_to_internal(anyelement)
- subtract_integer_from_now(regclass, bigint)
The expression part of psql's \if cannot contain an actual SQL
expression and is instead much more limited.
A valid value is any unambiguous case-insensitive match for one of: true, false, 1, 0, on, off, yes, no.
See https://www.postgresql.org/docs/current/app-psql.html
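A minimal sketch of the accepted form (the variable name is
hypothetical):
\set run_extra true
\if :run_extra
    SELECT 'extra tests';
\endif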
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the get_create_command function.
The parallel version of the ChunkAppend node uses shared memory to
coordinate plan selection among the parallel workers. If the workers
perform startup exclusion individually, each worker may choose a
different subplan (e.g., due to a function that claims to be constant
but returns different results). In that case, the workers disagree
about the plans, which would lead to hard-to-debug problems and
out-of-bounds reads when pstate->next_plan is used for subplan
selection.
With this patch, startup exclusion is only performed in the parallel
leader. The leader stores this information in shared memory. The
parallel workers read the information from shared memory and don't
perform startup exclusion.
This moves the definitions of `debug_waitpoint_enable`,
`debug_waitpoint_disable`, and `debug_waitpoint_id` so that they are
always defined for debug builds, and modifies existing tests
accordingly.
This means that it is no longer necessary to generate isolation test
files from templates (in most cases), and it will be straightforward
to use these functions in debug builds.
The debug utilities can be disabled by setting the option
`ENABLE_DEBUG_UTILS` to `OFF`.
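For instance, a debug build can now use these directly in tests (a
hedged sketch; the waitpoint name is hypothetical and the argument
type is assumed to be text):
SELECT debug_waitpoint_enable('chunk_insert_before_lock');
-- ... run the statement under test in another session ...
SELECT debug_waitpoint_disable('chunk_insert_before_lock');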
Due to the postman-echo endpoint redirecting http requests to https, we
get an unexpected 301 response in the tests, leading to repeated test
failures. This commit removes these function calls.
This PR improves the way the number of parallel workers for the
DecompressChunk node is calculated. Since
1a93c2d482b50a43c105427ad99e6ecb58fcac7f, no partial paths are
generated for small relations, which could cause a fallback to a
sequential plan and a performance regression. This patch ensures that
a partial path is created again for all relations.
Since we want to be able to handle updates of unusual user names, we
add some to the update tests and create policies on them. This will
create jobs with the unusual names as owners.
Removed the PG12-specific macros and all the now-dead code. Also
updated the test cases that had workarounds in place to make them
compatible with PG12.
For continuous aggregates with a variable bucket size, the interval
was wrongly modified in the process. This is now corrected by
creating a copy of the interval structure for validation purposes and
keeping the original structure untouched.
Fixes #5734
Add support for setting replica identity on hypertables via ALTER
TABLE. The replica identity is used in logical replication to identify
rows that have changed.
Currently, replica identity can only be altered on hypertables via
the root table; changing it directly on chunks will raise an error.
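For example (the hypertable name is hypothetical):
ALTER TABLE conditions REPLICA IDENTITY FULL;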
The test was failing on the first run by leaving a database behind as
a side effect. Between two steps, the extension was dropped without
proper cleanup, and a non-existent SQL function was called during
cleanup.
This patch also removes the "debug mode"; every execution now leaves
the logs and other artifacts in the /tmp directory for further
inspection.
In the function bookend_sfunc, values are compared. If the first
processed value was NULL, it was copied into the state of the sfunc.
A subsequent comparison between the NULL value in the state and a
non-NULL value could lead to a crash.
This patch improves the handling of NULL values in bookend_sfunc.
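A hedged sketch of the kind of query that exercises this path; the
bookend aggregates are first()/last(), and the table shape is
hypothetical:
-- A NULL value may be processed first, followed by non-NULL values.
SELECT last(temp, time)
FROM (VALUES (NULL::float8, 1), (21.5, 2)) AS t(temp, time);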
This patch does the following:
1. Planner changes to create a ChunkDispatch node when the MERGE
   command has an INSERT action (see the sketch after this list).
2. Changes to map partition attributes from a tuple returned from the
   child node of ChunkDispatch against the physical targetlist, so
   that the ChunkDispatch node can read the correct value from the
   partition column.
3. Fixed issues with MERGE on compressed hypertables.
4. Added more test cases.
5. MERGE on distributed hypertables is not supported.
6. Since there is no Custom Scan (HypertableModify) node for MERGE
   with UPDATE/DELETE on compressed hypertables, we do not support
   this.
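A hedged example of a MERGE with an INSERT action on a hypertable
(the table names are hypothetical):
MERGE INTO conditions c
USING staged_readings s
   ON c.time = s.time AND c.device = s.device
WHEN NOT MATCHED THEN
     INSERT (time, device, temp) VALUES (s.time, s.device, s.temp);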
Fixes #5139