When an invalid function OID is passed to set_integer_now_func, it
detects that the OID is invalid, but before throwing the error it
calls ReleaseSysCache on an invalid tuple, causing a segfault. Fixed
by removing the invalid ReleaseSysCache call.
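A minimal SQL sketch of the fixed failure mode, assuming a hypothetical
integer-time hypertable named "conditions"; casting 0 to regproc passes
an invalid function OID without a catalog lookup, which previously
crashed the backend and now raises a proper error:

    -- 0::regproc smuggles in InvalidOid; this used to segfault.
    SELECT set_integer_now_func('conditions', 0::regproc);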
Fixes #6037
INSERT ... ON CONFLICT statements record a few metrics in the ModifyTable
node's instrumentation, but these get overwritten by
hypertable_modify_explain, causing wrong output in EXPLAIN ANALYZE
statements. Fix this by saving the metrics into the HypertableModify node
before replacing them.
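An illustrative statement of the affected shape (table and columns are
hypothetical); before this fix, the row counts reported by EXPLAIN
ANALYZE for such statements could be wrong:

    EXPLAIN (ANALYZE, COSTS OFF)
    INSERT INTO conditions (time, location, temperature)
    VALUES (now(), 'office', 21.5)
    ON CONFLICT (time, location) DO NOTHING;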
Fixes #6014
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions (a sketch of the move follows
the list):
- get_partition_for_key(val anyelement)
- get_partition_hash(val anyelement)
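A sketch of the kind of adjustment involved, assuming the dedicated
schema is _timescaledb_functions (the authoritative statements live in
the extension update scripts):

    ALTER FUNCTION _timescaledb_internal.get_partition_for_key(anyelement)
      SET SCHEMA _timescaledb_functions;
    ALTER FUNCTION _timescaledb_internal.get_partition_hash(anyelement)
      SET SCHEMA _timescaledb_functions;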
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- chunk_constraint_add_table_constraint(_timescaledb_catalog.chunk_constraint)
- chunk_drop_replica(regclass,name)
- chunk_index_clone(oid)
- chunk_index_replace(oid,oid)
- create_chunk_replica_table(regclass,name)
- drop_stale_chunks(name,integer[])
- health()
- hypertable_constraint_add_table_fk_constraint(name,name,name,integer)
- process_ddl_event()
- wait_subscription_sync(name,name,integer,numeric)
Two queries in post.continuous_aggs.v3.sql had no ORDER BY clause, so
their output was not deterministic. This patch adds the missing ORDER
BY.
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- cagg_watermark(integer)
- cagg_watermark_materialized(integer)
- hypertable_invalidation_log_delete(integer)
- invalidation_cagg_log_add_entry(integer,bigint,bigint)
- invalidation_hyper_log_add_entry(integer,bigint,bigint)
- invalidation_process_cagg_log(integer,integer,regtype,bigint,bigint,integer[],bigint[],bigint[])
- invalidation_process_cagg_log(integer,integer,regtype,bigint,bigint,integer[],bigint[],bigint[],text[])
- invalidation_process_hypertable_log(integer,integer,regtype,integer[],bigint[],bigint[])
- invalidation_process_hypertable_log(integer,integer,regtype,integer[],bigint[],bigint[],text[])
- materialization_invalidation_log_delete(integer)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- restart_background_workers()
- stop_background_workers()
- start_background_workers()
- alter_job_set_hypertable_id(integer,regclass)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- calculate_chunk_interval(int, bigint, bigint)
- chunk_status(regclass)
- chunks_in(record, integer[])
- chunk_id_from_relid(oid)
- show_chunk(regclass)
- create_chunk(regclass, jsonb, name, name, regclass)
- set_chunk_default_data_node(regclass, name)
- get_chunk_relstats(regclass)
- get_chunk_colstats(regclass)
- create_chunk_table(regclass, jsonb, name, name)
- freeze_chunk(regclass)
- unfreeze_chunk(regclass)
- drop_chunk(regclass)
- attach_osm_table_chunk(regclass, regclass)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- generate_uuid()
- get_git_commit()
- get_os_info()
- tsl_loaded()
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- relation_size(regclass)
- data_node_hypertable_info(name, name, name)
- data_node_chunk_info(name, name, name)
- hypertable_local_size(name, name)
- hypertable_remote_size(name, name)
- chunks_local_size(name, name)
- chunks_remote_size(name, name)
- range_value_to_pretty(bigint, regtype)
- get_approx_row_count(regclass)
- data_node_compressed_chunk_stats(name, name, name)
- compressed_chunk_local_stats(name, name)
- compressed_chunk_remote_stats(name, name)
- indexes_local_size(name, name)
- data_node_index_size(name, name, name)
- indexes_remote_size(name, name, name)
So far, the ts_bookend_deserializefunc() function has allocated the
deserialized data in the current memory context. This data could be
freed before the aggregation has finished. This patch moves the data
into the aggregation memory context.
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- to_unix_microseconds(timestamptz)
- to_timestamp(bigint)
- to_timestamp_without_timezone(bigint)
- to_date(bigint)
- to_interval(bigint)
- interval_to_usec(interval)
- time_to_internal(anyelement)
- subtract_integer_from_now(regclass, bigint)
The expression part of psql's \if cannot contain an actual SQL
expression and is instead much more limited.
A valid value is any unambiguous case-insensitive match for one of: true, false, 1, 0, on, off, yes, no.
See https://www.postgresql.org/docs/current/app-psql.html
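A common workaround is to evaluate the condition server-side and feed
the result to \if via \gset; psql accepts the stored 't'/'f' output as
an unambiguous prefix of true/false:

    SELECT current_setting('server_version_num')::int >= 150000 AS has_pg15 \gset
    \if :has_pg15
        \echo 'PG15-only statements go here'
    \endif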
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the get_create_command function.
The parallel version of the ChunkAppend node uses shared memory to
coordinate the plan selection for the parallel workers. If the workers
perform the startup exclusion individually, it may choose different
subplans for each worker (e.g., due to a "constant" function that claims
to be constant but returns different results). In that case, we have a
disagreement about the plans between the workers. This would lead to
hard-to-debug problems and out-of-bounds reads when pstate->next_plan is
used for subplan selection.
With this patch, startup exclusion is only performed in the parallel
leader. The leader stores this information in shared memory. The
parallel workers read the information from shared memory and don't
perform startup exclusion.
This moves the definitions of `debug_waitpoint_enable`,
`debug_waitpoint_disable`, and `debug_waitpoint_id` so that they are
always defined in debug builds, and modifies existing tests accordingly.
This means that it is no longer necessary to generate isolation test
files from templates (in most cases), and it is straightforward to
use these functions in debug builds.
The debug utilities can be disabled by setting the option
`ENABLE_DEBUG_UTILS` to `OFF`.
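A sketch of how the waitpoint functions can be used from SQL in a debug
build (the waitpoint name here is hypothetical):

    -- Session 1: arm the waitpoint; any backend reaching it will block.
    SELECT debug_waitpoint_enable('my_waitpoint');
    -- Session 2: run the statement under test; it blocks at the waitpoint.
    -- Session 1: release the waitpoint so session 2 can continue.
    SELECT debug_waitpoint_disable('my_waitpoint');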
Due to the postman-echo endpoint redirecting http requests to https, we
get an unexpected 301 response in the tests, leading to repeated test
failures. This commit removes these function calls.
This PR improves the way the number of parallel workers for the
DecompressChunk node is calculated. Since
1a93c2d482b50a43c105427ad99e6ecb58fcac7f, no partial paths are generated
for small relations, which could cause a fallback to a sequential
plan and a performance regression. This patch ensures that a partial
path is created again for all relations.
Since we want to be able to handle updates involving unusual user names,
we add some to the update tests and create policies on them. This
creates jobs with the unusual names as owners.
Removed the PG12-specific macros and all the now-dead code. Also updated
the test cases that had workarounds in place for PG12 compatibility.
For continuous aggregates with variable bucket size, the interval
was wrongly modified during validation. This is now corrected by
validating a copy of the interval structure, keeping the original
structure untouched.
Fixes #5734
Add support for setting replica identity on hypertables via ALTER
TABLE. The replica identity is used in logical replication to identify
rows that have changed.
Currently, replica identity can only be altered on hypertables via the
root table; changing it directly on chunks will raise an error.
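For example (hypertable and index names are illustrative); the identity
set on the root table is applied to the chunks:

    ALTER TABLE conditions REPLICA IDENTITY FULL;
    -- or, using a unique index:
    ALTER TABLE conditions REPLICA IDENTITY USING INDEX conditions_device_time_key;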
The test failed on its first run by leaving a database behind as a
side effect. Between two steps the extension was dropped without a
proper cleanup, and a non-existent SQL function was called during
cleanup.
This patch also removes the "debug mode"; every execution now leaves
the logs/etc in the /tmp directory for further inspection.
In the function bookend_sfunc, values are compared. If the first
processed value was NULL, it was copied into the state of the sfunc,
and a subsequent comparison between the NULL value in the state and a
non-NULL value could lead to a crash.
This patch improves the handling of NULL values in bookend_sfunc.
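The bookend functions back the first()/last() aggregates; a query like
the following, where the compared column of the first processed row is
NULL, could previously crash (names illustrative):

    -- A NULL in the first processed row no longer crashes the aggregate.
    SELECT first(value, observed_at) FROM readings;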
This patch does the following (a sketch of a now-supported MERGE
statement follows the list):
1. Planner changes to create a ChunkDispatch node when the MERGE command
has an INSERT action.
2. Changes to map partition attributes from a tuple returned from the
child node of ChunkDispatch against the physical targetlist, so that
the ChunkDispatch node can read the correct value from the partition
column.
3. Fixed issues with MERGE on compressed hypertables.
4. Added more test cases.
5. MERGE on distributed hypertables is not supported.
6. Since there is no Custom Scan (HypertableModify) node for MERGE
with UPDATE/DELETE on compressed hypertables, this is not supported
either.
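A sketch of a now-supported statement on an uncompressed hypertable
(all names are illustrative):

    MERGE INTO conditions c
    USING staged_readings s
    ON c.time = s.time AND c.device = s.device
    WHEN MATCHED THEN
      UPDATE SET temperature = s.temperature
    WHEN NOT MATCHED THEN
      INSERT (time, device, temperature)
      VALUES (s.time, s.device, s.temperature);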
Fixes #5139
Add a table `_timescaledb_catalog.telemetry_event` containing
events that should be sent out with telemetry reports. The table is
truncated after the report is generated.
Running ALTER TABLE SET with multiple SET clauses on a regular PostgreSQL
table produces an irrelevant error when the timescaledb extension is
installed.
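A statement of the affected shape (table and options are illustrative):

    ALTER TABLE plain_table
      SET (fillfactor = 70, autovacuum_enabled = false);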
Fixes #5641
Commit 3f9cb3c2 introduced new repair tests for broken Continuous
Aggregates with JOIN clauses, but post.repair.sql was not properly
calling post.repair.cagg_joins.sql because of incorrect usage of the
psql `\if` statement.
Whenever we create a template SQL file (*.sql.in) we should add the
respective .gitignore entry for the generated test files.
So this adds a CI check for missing .gitignore entries for generated
test files.
When continuous aggregates contain joins, pass the required structs
to the newly created RTE. These values are required by the planner to
finally query the materialized view.
Fixes #5433
There is a bug in `width_bucket()` causing an overflow and a subsequent
NaN value as a result of dividing by `+inf`. The NaN value is
interpreted as an integer and hence generates an out-of-range index for
the buckets.
This commit fixes this by generating an error rather than
segfaulting for bucket indexes that are out of range.
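A hedged sketch of the arithmetic behind the bug, using extreme float8
bounds (whether this exact call reaches the affected code path is an
assumption):

    -- (1e308 - (-1e308)) overflows float8 to +inf, and inf/inf is NaN,
    -- which was then used as a bucket index.
    SELECT width_bucket(9e307::float8, -1e308::float8, 1e308::float8, 10);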
When called with a negative chunk_target_size_bytes,
calculate_chunk_interval hits an assertion failure. This patch adds
error handling for this condition. Found by sqlsmith.
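A sketch of the failing call (arguments are illustrative, using the
function's internal schema at the time); it now raises an error instead
of tripping an assertion:

    SELECT _timescaledb_internal.calculate_chunk_interval(1, 0, -1);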
For continuous aggregates with the old-style partial aggregates,
renaming columns that are not in the group-by clause will generate an
error when upgrading to a later version. The reason is that it is
implicitly assumed that the name of the column is the same as in the
direct view. This holds true for new-style continuous aggregates, but is
not always true for old-style continuous aggregates. In particular,
columns that are not part of the `GROUP BY` clause can have an
internally generated name.
This commit fixes that by extracting the name of the column from the
partial view and using it when renaming the partial view column and the
materialized table column.
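The problematic scenario is a rename of a non-GROUP BY column on an
old-style continuous aggregate prior to the upgrade, e.g. (names
illustrative):

    ALTER MATERIALIZED VIEW conditions_summary
      RENAME COLUMN avg_temp TO mean_temp;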
Just noticed abysmal INSERT performance when experimenting with one of
our customers' data sets, and it turns out my cache sizes were
misconfigured, leading to constant hypertable chunk cache thrashing.
Show a warning to detect this misconfiguration. Also use more generous
defaults; we're not supposed to run on a microwave (unlike Postgres).
When a TidRangeScan is a child of a ChunkAppend or ConstraintAwareAppend
node, an error is reported: "invalid child of chunk append: Node (26)".
This patch fixes the issue by recognising TidRangeScan as a valid child.
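A query of this shape plans a TidRangeScan (PG 14+) under the append
node (table name is illustrative):

    SELECT * FROM conditions
    WHERE ctid >= '(0,1)' AND ctid < '(100,1)';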
Fixes #4872
Add 2.9.0 to update test scripts and adjust downgrade scripts for
2.9.0. Additionally adjust CHANGELOG to sync with the actual release
CHANGELOG and add PG15 to CI.
The issue occurs only in extended query protocol mode, where every
query goes through the PREPARE and EXECUTE phases. The first time
ANALYZE is executed, a list of relations to be vacuumed is extracted
and saved, and this list is referenced from the parse tree node. Once
execution of ANALYZE is complete, the list is cleaned up, but the
reference in the parse tree node is not. When ANALYZE is executed a
second time, a segfault happens as we access an invalid memory
location.
Fixed the issue by restoring the actual value in the parse tree node
once ANALYZE completes its execution.
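A hedged repro is to run the same ANALYZE twice over the extended query
protocol, e.g. with pgbench (file, table, and database names are
illustrative):

    -- analyze.sql contains a single statement:
    ANALYZE conditions;
    -- Run it twice in extended protocol mode:
    --   pgbench -n -M extended -t 2 -f analyze.sql mydb
    -- The second execution segfaulted before this fix.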
Fixes #4857