This patch stores the current catalog version in an internal
table to allow us to verify that the catalog and code versions match.
When doing dump/restore, people occasionally report very unusual
errors, and during investigation it is discovered that they loaded
a dump from an older version and ran it with a later code version.
This allows detecting mismatches between the installed code version
and the loaded dump version. The version number in the metadata table
will be kept updated in upgrade and downgrade scripts.
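As a sketch of the check this enables, assuming the version is kept as a
key/value pair in the `_timescaledb_catalog.metadata` table (the key name is
illustrative), a mismatch can be spotted by comparing it against the
installed extension version:
-- stored catalog/dump version
SELECT value FROM _timescaledb_catalog.metadata WHERE key = 'timescaledb_version';
-- installed code version
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';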
This release contains bug fixes since the 2.12.0 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #6113 Fix planner distributed table count
* #6117 Avoid decompressing batches using an empty slot
* #6123 Fix concurrency errors in OSM API
* #6142 Do not throw an error when deprecation GUC cannot be read
**Thanks**
* @symbx for reporting a crash when selecting from empty hypertables
The trigger `continuous_agg_invalidation_trigger` receives the hypertable
id as a parameter, as shown in the following example:
Triggers:
ts_cagg_invalidation_trigger AFTER INSERT OR DELETE OR UPDATE ON
_timescaledb_internal._hyper_3_59_chunk
FOR EACH ROW EXECUTE FUNCTION
_timescaledb_functions.continuous_agg_invalidation_trigger('3')
The problem is that in the compatibility layer, which uses PL/pgSQL code,
there is no way to pass down the parameter from the wrapper trigger
function to the underlying trigger function in another schema.
To solve this we simply create a new function in the deprecated
`_timescaledb_internal` schema pointing to the trigger function, and
inside the C code we emit the WARNING message if the function is called
from the deprecated schema.
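A minimal sketch of the approach, assuming the existing C entry point name
(`@MODULE_PATHNAME@` is the placeholder used in our SQL sources):
CREATE FUNCTION _timescaledb_internal.continuous_agg_invalidation_trigger()
RETURNS trigger AS '@MODULE_PATHNAME@', 'ts_continuous_agg_invalidation_trigger'
LANGUAGE C;
Since the deprecated schema gets its own C trigger function rather than a
PL/pgSQL wrapper, trigger arguments such as the hypertable id '3' above are
passed through unchanged.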
- Added the creation_time attribute to the timescaledb catalog table "chunk".
Also updated the corresponding view timescaledb_information.chunks to
include the chunk_creation_time attribute.
- A newly created chunk is assigned the creation time during chunk
creation to handle a new partition range for a given dimension (Time/
SERIAL/BIGSERIAL/INT/...).
- In case of an already existing chunk, the creation time is updated as
part of running the upgrade script. The current timestamp (now()) at the
time of the upgrade is assigned as the chunk creation time.
- Similarly, the downgrade script is updated to drop the attribute
creation_time from the catalog table "chunk".
- All relevant queries/views/test output have been updated accordingly.
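For example, the new attribute can be inspected through the updated view (the
hypertable name is illustrative):
SELECT chunk_name, chunk_creation_time
FROM timescaledb_information.chunks
WHERE hypertable_name = 'conditions';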
Co-authored-by: Nikhil Sontakke <nikhil@timescale.com>
When connected to TimescaleDB with the timescaledb.disable_load=on GUC
setting, calling these functions causes an error to be thrown.
By specifying missing_ok=true, we prevent this situation from causing an
error for the user.
The current hypertable creation interface is heavily focused on a time
column, but since hypertables support partitioning on more than just
time columns, we introduce a more generic API that supports different
types of keys for partitioning.
The new interface introduces new versions of create_hypertable,
add_dimension, and a replacement function `set_partitioning_interval`
that replaces `set_chunk_time_interval`. The new functions accept an
instance of dimension_info that can be constructed using the constructor
functions `by_range` and `by_hash`, allowing a more versatile and
future-proof API.
For example:
SELECT create_hypertable('conditions', by_range('time'));
SELECT add_dimension('conditions', by_hash('device'));
The old API remains, but will eventually be deprecated.
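A sketch of the replacement for `set_chunk_time_interval`, assuming it takes
the hypertable and the new partitioning interval:
SELECT set_partitioning_interval('conditions', INTERVAL '1 day');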
With the current version of our docs, the first and third URLs in our
extension installation message pointed to the same page. This PR removes
one of the duplicates.
This release contains performance improvements for compressed hypertables
and continuous aggregates and bug fixes since the 2.11.2 release.
We recommend that you upgrade at the next available opportunity.
This release moves all internal functions from the _timescaledb_internal
schema into the _timescaledb_functions schema. This separates code from
internal data objects and improves security by allowing more restrictive
permissions for the code schema. If you are calling any of those internal
functions you should adjust your code as soon as possible. This version
also includes a compatibility layer that allows calling them in the old
location but that layer will be removed in 2.14.0.
**PostgreSQL 12 support removal announcement**
Following the deprecation announcement for PostgreSQL 12 in TimescaleDB 2.10,
PostgreSQL 12 is not supported starting with TimescaleDB 2.12.
Currently supported PostgreSQL major versions are 13, 14 and 15.
PostgreSQL 16 support will be added in a following TimescaleDB release.
**Features**
* #5137 Insert into index during chunk compression
* #5150 MERGE support on hypertables
* #5515 Make hypertables support replica identity
* #5586 Index scan support during UPDATE/DELETE on compressed hypertables
* #5596 Support for partial aggregations at chunk level
* #5599 Enable ChunkAppend for partially compressed chunks
* #5655 Improve the number of parallel workers for decompression
* #5758 Enable altering job schedule type through `alter_job`
* #5805 Make logrepl markers for (partial) decompressions
* #5809 Relax invalidation threshold table-level lock to row-level when refreshing a Continuous Aggregate
* #5839 Support CAgg names in chunk_detailed_size
* #5852 Make set_chunk_time_interval CAggs aware
* #5868 Allow ALTER TABLE ... REPLICA IDENTITY (FULL|INDEX) on materialized hypertables (continuous aggregates)
* #5875 Add job exit status and runtime to log
* #5909 CREATE INDEX ONLY ON hypertable creates index on chunks
**Bugfixes**
* #5860 Fix interval calculation for hierarchical CAggs
* #5894 Check unique indexes when enabling compression
* #5951 _timescaledb_internal.create_compressed_chunk doesn't account for existing uncompressed rows
* #5988 Move functions to _timescaledb_functions schema
* #5788 Chunk_create must add an existing table or fail
* #5872 Fix duplicates on partially compressed chunk reads
* #5918 Fix crash in COPY from program returning error
* #5990 Place data in first/last function in correct mctx
* #5991 Call eq_func correctly in time_bucket_gapfill
* #6015 Correct row count in EXPLAIN ANALYZE INSERT .. ON CONFLICT output
* #6035 Fix server crash on UPDATE of compressed chunk
* #6044 Fix server crash when using duplicate segmentby column
* #6045 Fix segfault in set_integer_now_func
* #6053 Fix approximate_row_count for CAggs
* #6081 Improve compressed DML datatype handling
* #6084 Propagate parameter changes to decompress child nodes
**Thanks**
* @ajcanterbury for reporting a problem with lateral joins on compressed chunks
* @alexanderlaw for reporting multiple server crashes
* @lukaskirner for reporting a bug with monthly continuous aggregates
* @mrksngl for reporting a bug with unusual user names
* @willsbit for reporting a crash in time_bucket_gapfill
This commit introduces a function `hypertable_osm_range_update`
in the _timescaledb_functions schema. This function is meant to serve
as an API call for the OSM extension to update the time range
of a hypertable's OSM chunk with the min and max values present
in the contiguous time range its tiered chunks span.
If the range is not contiguous, then it must be set to the invalid
range an OSM chunk is assigned upon creation.
A new status field is also introduced in the hypertable catalog
table to keep track of whether the ranges covered by tiered and
non-tiered chunks overlap.
When no overlap is detected, it is possible to apply the
Ordered Append optimization in the presence of OSM chunks.
The initial range for the OSM chunk will be from INT64_MAX - 1 to INT64_MAX.
This range was chosen to minimize interference with tuple routing and to
occupy a range outside of potential values, as there must be no overlap
between the hypercube occupied by the OSM chunk and actual chunks.
Previously, the range covered by the OSM chunk would bucket the start
of the range, leading to a dependency on chunk_time_interval and
limiting the range of usable values in the hypertable when an OSM chunk
is present.
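Purely as an illustration (the exact signature is not reproduced here), a
call updating the OSM chunk range to the contiguous span covered by its
tiered chunks might look like:
SELECT _timescaledb_functions.hypertable_osm_range_update('metrics',
       '2020-01-01'::timestamptz, '2021-01-01'::timestamptz);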
The approximate_row_count function was executed directly on the user view
instead of the corresponding materialized hypertable, which returns 0 for
CAggs. The function is updated to fetch the materialized hypertable
information corresponding to the CAgg and then get the
approximate_row_count for the materialized hypertable.
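For example, after this fix the estimate can be taken directly on the
continuous aggregate (the CAgg name is illustrative):
SELECT approximate_row_count('conditions_summary_daily');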
Fixes #6051
With timescaledb 2.12 all the functions present in _timescaledb_internal
were moved into the _timescaledb_functions schema to improve schema
security. This patch adds a compatibility layer so external callers
of these internal functions will not break, allowing for more
flexibility when migrating.
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- finalize_agg_ffunc(internal,text,name,name,name[],bytea,anyelement)
- finalize_agg_sfunc(internal,text,name,name,name[],bytea,anyelement)
- partialize_agg(anyelement)
- finalize_agg(text,name,name,name[][],bytea,anyelement)
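For example, a caller of the partial-aggregation machinery now qualifies the
new schema (table and column are illustrative):
SELECT _timescaledb_functions.partialize_agg(avg(temperature)) FROM conditions;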
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- get_partition_for_key(val anyelement)
- get_partition_hash(val anyelement)
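For example (the input value is illustrative):
SELECT _timescaledb_functions.get_partition_hash(42);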
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- chunk_constraint_add_table_constraint(_timescaledb_catalog.chunk_constraint)
- chunk_drop_replica(regclass,name)
- chunk_index_clone(oid)
- chunk_index_replace(oid,oid)
- create_chunk_replica_table(regclass,name)
- drop_stale_chunks(name,integer[])
- health()
- hypertable_constraint_add_table_fk_constraint(name,name,name,integer)
- process_ddl_event()
- wait_subscription_sync(name,name,integer,numeric)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- cagg_watermark(integer)
- cagg_watermark_materialized(integer)
- hypertable_invalidation_log_delete(integer)
- invalidation_cagg_log_add_entry(integer,bigint,bigint)
- invalidation_hyper_log_add_entry(integer,bigint,bigint)
- invalidation_process_cagg_log(integer,integer,regtype,bigint,bigint,integer[],bigint[],bigint[])
- invalidation_process_cagg_log(integer,integer,regtype,bigint,bigint,integer[],bigint[],bigint[],text[])
- invalidation_process_hypertable_log(integer,integer,regtype,integer[],bigint[],bigint[])
- invalidation_process_hypertable_log(integer,integer,regtype,integer[],bigint[],bigint[],text[])
- materialization_invalidation_log_delete(integer)
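For example, reading a CAgg watermark now goes through the new schema (the
materialized hypertable id is illustrative):
SELECT _timescaledb_functions.cagg_watermark(3);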
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- restart_background_workers()
- stop_background_workers()
- start_background_workers()
- alter_job_set_hypertable_id(integer,regclass)
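For example, restarting the scheduler workers now uses the new schema:
SELECT _timescaledb_functions.restart_background_workers();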
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- cagg_migrate_create_plan(_timescaledb_catalog.continuous_agg,text,boolean,boolean)
- cagg_migrate_execute_copy_data(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_copy_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_create_new_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_disable_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_drop_old_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_enable_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_override_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_plan(_timescaledb_catalog.continuous_agg)
- cagg_migrate_execute_refresh_new_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_plan_exists(integer)
- cagg_migrate_pre_validation(text,text,text)
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Co-authored-by: Sven Klemm <sven@timescale.com>
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- policy_compression_check(jsonb)
- policy_compression_execute(integer,integer,anyelement,integer,boolean,boolean)
- policy_compression(integer,jsonb)
- policy_job_error_retention_check(jsonb)
- policy_job_error_retention(integer,jsonb)
- policy_recompression(integer,jsonb)
- policy_refresh_continuous_aggregate_check(jsonb)
- policy_refresh_continuous_aggregate(integer,jsonb)
- policy_reorder_check(jsonb)
- policy_reorder(integer,jsonb)
- policy_retention_check(jsonb)
- policy_retention(integer,jsonb)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- calculate_chunk_interval(int, bigint, bigint)
- chunk_status(regclass)
- chunks_in(record, integer[])
- chunk_id_from_relid(oid)
- show_chunk(regclass)
- create_chunk(regclass, jsonb, name, name, regclass)
- set_chunk_default_data_node(regclass, name)
- get_chunk_relstats(regclass)
- get_chunk_colstats(regclass)
- create_chunk_table(regclass, jsonb, name, name)
- freeze_chunk(regclass)
- unfreeze_chunk(regclass)
- drop_chunk(regclass)
- attach_osm_table_chunk(regclass, regclass)
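For example, inspecting a chunk now uses the new schema (the chunk name is
illustrative):
SELECT * FROM _timescaledb_functions.show_chunk('_timescaledb_internal._hyper_1_1_chunk');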
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- generate_uuid()
- get_git_commit()
- get_os_info()
- tsl_loaded()
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- relation_size(regclass)
- data_node_hypertable_info(name, name, name)
- data_node_chunk_info(name, name, name)
- hypertable_local_size(name, name)
- hypertable_remote_size(name, name)
- chunks_local_size(name, name)
- chunks_remote_size(name, name)
- range_value_to_pretty(bigint, regtype)
- get_approx_row_count(regclass)
- data_node_compressed_chunk_stats(name, name, name)
- compressed_chunk_local_stats(name, name)
- compressed_chunk_remote_stats(name, name)
- indexes_local_size(name, name)
- data_node_index_size(name, name, name)
- indexes_remote_size(name, name, name)
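For example (the hypertable name is illustrative):
SELECT * FROM _timescaledb_functions.relation_size('conditions');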
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- set_dist_id(uuid)
- set_peer_dist_id(uuid)
- validate_as_data_node()
- show_connection_cache()
- ping_data_node(name, interval)
- remote_txn_heal_data_node(oid)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- to_unix_microseconds(timestamptz)
- to_timestamp(bigint)
- to_timestamp_without_timezone(bigint)
- to_date(bigint)
- to_interval(bigint)
- interval_to_usec(interval)
- time_to_internal(anyelement)
- subtract_integer_from_now(regclass, bigint)
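For example (the input values are illustrative):
SELECT _timescaledb_functions.to_timestamp(1700000000000000);
SELECT _timescaledb_functions.interval_to_usec(INTERVAL '1 hour');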
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the get_create_command function.
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for our trigger functions.
This will move the definitions of `debug_waitpoint_enable`,
`debug_waitpoint_disable`, and `debug_waitpoint_id` to always be
defined for debug builds and modify existing tests accordingly.
This means that it is no longer necessary to generate isolation test
files from templates (in most cases), and it will be straightforward to
use these functions in debug builds.
The debug utilities can be disabled by setting the option
`ENABLE_DEBUG_UTILS` to `OFF`.
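For example, in a debug build a test can block and release a code path by
name (the waitpoint name is illustrative):
SELECT debug_waitpoint_enable('chunk_drop_before_lock');
SELECT debug_waitpoint_disable('chunk_drop_before_lock');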
This release contains bug fixes since the 2.11.1 release.
We recommend that you upgrade at the next available opportunity.
**Features**
* #5923 Feature flags for TimescaleDB features
**Bugfixes**
* #5680 Fix DISTINCT query with JOIN on multiple segmentby columns
* #5774 Fixed two bugs in decompression sorted merge code
* #5786 Ensure pg_config --cppflags are passed
* #5906 Fix quoting owners in sql scripts.
* #5912 Fix crash in 1-step integer policy creation
**Thanks**
* @mrksngl for submitting a PR to fix extension upgrade scripts
* @ericdevries for reporting an issue with DISTINCT queries using
segmentby columns of compressed hypertable
When referring to a role from a string type, it must be properly quoted
using pg_catalog.quote_ident before it can be cast to regrole.
Fixed this, especially in update scripts.
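For example, assuming a role with an unusual name exists:
SELECT pg_catalog.quote_ident('weird "name"')::regrole;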
This patch adds support for passing continuous aggregate names to
`chunk_detailed_size` to align it with the behavior of other functions
such as `show_chunks`, `drop_chunks`, and `hypertable_size`.
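For example (the continuous aggregate name is illustrative):
SELECT * FROM chunk_detailed_size('conditions_summary_daily');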
In #4664 we introduced fixed schedules for jobs. This was done by
introducing the additional parameters fixed_schedule, initial_start and
timezone for our add_job and add_policy APIs.
These fields were not updatable by alter_job, so it was not
possible to switch from one type of schedule to another without dropping
and recreating existing jobs and policies.
This patch adds the missing parameters to alter_job to enable switching
from one type of schedule to another.
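For example, switching an existing job to a fixed schedule (the job id,
start time and timezone are illustrative):
SELECT alter_job(1000, fixed_schedule => true,
       initial_start => '2023-10-01 00:00:00+00',
       timezone => 'Europe/Berlin');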
Fixes #5681
This release contains bug fixes since the 2.11.0 release. We recommend
that you upgrade at the next available opportunity.
**Features**
* #5679 Teach loader to load OSM extension
**Bugfixes**
* #5705 Scheduler accidentally getting killed when calling `delete_job`
* #5742 Fix Result node handling with ConstraintAwareAppend on
compressed chunks
* #5750 Ensure tlist is present in decompress chunk plan
* #5754 Fixed handling of NULL values in bookend_sfunc
* #5798 Fixed batch look ahead in compressed sorted merge
* #5804 Mark cagg_watermark function as PARALLEL RESTRICTED
* #5807 Copy job config JSONB structure into current MemoryContext
* #5824 Improve continuous aggregate query chunk exclusion
**Thanks**
* @JamieD9 for reporting an issue with a wrong result ordering
* @xvaara for reporting an issue with Result node handling in
ConstraintAwareAppend
This patch marks the function cagg_watermark as PARALLEL RESTRICTED. It
partially reverts the change of
c0f2ed18095f21ac737f96fe93e4035dbfeeaf2c. The reason is as follows: for
transaction isolation levels below REPEATABLE READ it cannot be ensured
that parallel workers read the same watermark (e.g., using the read
committed isolation level: worker A reads the watermark, the CAgg is
refreshed and the watermark changes, then worker B reads the newer
watermark). The different views on the CAgg can cause unexpected results
and crashes (e.g., the chunk exclusion excludes different chunks in
worker A and in worker B).
In addition, a correct snapshot is used when the watermark is read from
the CAgg, and a TAP test is added, which detects inconsistent watermark
reads.
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Co-authored-by: Zoltan Haindrich <zoltan@timescale.com>
Internal Server Error when loading Explorer tab (SDC #995)
This is with reference to a weird scenario where a chunk table entry exists in
the timescaledb catalog but does not exist in the PG catalog. The stale entry
blocks executing the hypertable_size function on the hypertable.
The changes in this patch are related to improvements suggested for the
hypertable_size function, which involve:
1. Locking the hypertable in ACCESS SHARE mode in the hypertable_size function
to avoid the risk of chunks being dropped by another concurrent process.
2. Joining the hypertable and inherited chunk tables with "pg_class" to make
sure that a stale table without an entry in pg_catalog is not included as part
of the hypertable size calculation (see the sketch below).
3. An additional filter (schema_name) is required on pg_class to avoid
calculating the size of multiple hypertables with the same name in different
schemas.
NOTE: With this change calling the hypertable_size function will require select
privilege on the table.
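A minimal sketch of the idea behind points 2 and 3 (not the actual
implementation):
SELECT c.schema_name, c.table_name
FROM _timescaledb_catalog.chunk c
JOIN pg_catalog.pg_class pc ON pc.relname = c.table_name
JOIN pg_catalog.pg_namespace ns
  ON ns.oid = pc.relnamespace AND ns.nspname = c.schema_name;
-- a stale chunk without a pg_class entry simply does not match and is excluded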
Disable-check: force-changelog-file
Add a table `_timescaledb_catalog.telemetry_event` containing
events that should be sent out with telemetry reports. The table will
be truncated after the report has been generated.
During the `cagg_migrate` execution, if the user set the `drop_old`
parameter to `true`, the routine would drop the old Continuous Aggregate,
leading to an inconsistent state, because the catalog code doesn't handle
this table as a normal catalog table, so the records are not removed
when dropping a Continuous Aggregate. The same problem will happen if
you manually drop the old Continuous Aggregate after the migration.
Fixed it by removing the useless Foreign Key and also adding another
column named `user_view_definition` to the main plan table, just to store
the original user view definition for troubleshooting purposes.
Fixed #5662
Instead of using a user name to register the owner of a job, we use a
regrole. This allows renames to work properly, since the underlying OID
does not change when the owner name changes.
We add a check when calling `DROP ROLE` that there is no job with that
owner, and generate an error if there is.