To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- restart_background_workers()
- stop_background_workers()
- start_background_workers()
- alter_job_set_hypertable_id(integer,regclass)
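For illustration, relocating one of the functions listed above could look roughly like the following sketch (the destination is the dedicated `_timescaledb_functions` schema introduced in this patch series; the exact upgrade-script statements may differ):

```sql
-- Sketch only: move internal functions out of the schema that also
-- holds user chunks and into the dedicated functions schema.
ALTER FUNCTION _timescaledb_internal.stop_background_workers()
  SET SCHEMA _timescaledb_functions;
ALTER FUNCTION _timescaledb_internal.alter_job_set_hypertable_id(integer, regclass)
  SET SCHEMA _timescaledb_functions;
```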
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- cagg_migrate_create_plan(_timescaledb_catalog.continuous_agg,text,boolean,boolean)
- cagg_migrate_execute_copy_data(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_copy_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_create_new_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_disable_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_drop_old_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_enable_policies(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_override_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_execute_plan(_timescaledb_catalog.continuous_agg)
- cagg_migrate_execute_refresh_new_cagg(_timescaledb_catalog.continuous_agg,_timescaledb_catalog.continuous_agg_migrate_plan_step)
- cagg_migrate_plan_exists(integer)
- cagg_migrate_pre_validation(text,text,text)
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Co-authored-by: Sven Klemm <sven@timescale.com>
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- policy_compression_check(jsonb)
- policy_compression_execute(integer,integer,anyelement,integer,boolean,boolean)
- policy_compression(integer,jsonb)
- policy_job_error_retention_check(jsonb)
- policy_job_error_retention(integer,jsonb)
- policy_recompression(integer,jsonb)
- policy_refresh_continuous_aggregate_check(jsonb)
- policy_refresh_continuous_aggregate(integer,jsonb)
- policy_reorder_check(jsonb)
- policy_reorder(integer,jsonb)
- policy_retention_check(jsonb)
- policy_retention(integer,jsonb)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- calculate_chunk_interval(int, bigint, bigint)
- chunk_status(regclass)
- chunks_in(record, integer[])
- chunk_id_from_relid(oid)
- show_chunk(regclass)
- create_chunk(regclass, jsonb, name, name, regclass)
- set_chunk_default_data_node(regclass, name)
- get_chunk_relstats(regclass)
- get_chunk_colstats(regclass)
- create_chunk_table(regclass, jsonb, name, name)
- freeze_chunk(regclass)
- unfreeze_chunk(regclass)
- drop_chunk(regclass)
- attach_osm_table_chunk(regclass, regclass)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- generate_uuid()
- get_git_commit()
- get_os_info()
- tsl_loaded()
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- relation_size(regclass)
- data_node_hypertable_info(name, name, name)
- data_node_chunk_info(name, name, name)
- hypertable_local_size(name, name)
- hypertable_remote_size(name, name)
- chunks_local_size(name, name)
- chunks_remote_size(name, name)
- range_value_to_pretty(bigint, regtype)
- get_approx_row_count(regclass)
- data_node_compressed_chunk_stats(name, name, name)
- compressed_chunk_local_stats(name, name)
- compressed_chunk_remote_stats(name, name)
- indexes_local_size(name, name)
- data_node_index_size(name, name, name)
- indexes_remote_size(name, name, name)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- set_dist_id(uuid)
- set_peer_dist_id(uuid)
- validate_as_data_node()
- show_connection_cache()
- ping_data_node(name, interval)
- remote_txn_heal_data_node(oid)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- to_unix_microseconds(timestamptz)
- to_timestamp(bigint)
- to_timestamp_without_timezone(bigint)
- to_date(bigint)
- to_interval(bigint)
- interval_to_usec(interval)
- time_to_internal(anyelement)
- subtract_integer_from_now(regclass, bigint)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the get_create_command function.
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for our trigger functions.
This makes the definitions of `debug_waitpoint_enable`,
`debug_waitpoint_disable`, and `debug_waitpoint_id` always available
in debug builds and modifies existing tests accordingly.
This means that it is no longer necessary to generate isolation test
files from templates (in most cases), and it will be straightforward to
use these functions in debug builds.
The debug utilities can be disabled by setting the option
`ENABLE_DEBUG_UTILS` to `OFF`.
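A minimal usage sketch in a debug build, assuming the waitpoint functions take the waitpoint name as text (the waitpoint name below is hypothetical):

```sql
-- Enable a waitpoint so a concurrent session blocks when it reaches it,
-- inspect its id, then release it. Exact signatures may differ.
SELECT debug_waitpoint_enable('chunk_insert_before_lock');
SELECT debug_waitpoint_id('chunk_insert_before_lock');
-- ... run the statement under test in another session; it waits here ...
SELECT debug_waitpoint_disable('chunk_insert_before_lock');
```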
This release contains bug fixes since the 2.11.1 release.
We recommend that you upgrade at the next available opportunity.
**Features**
* #5923 Feature flags for TimescaleDB features
**Bugfixes**
* #5680 Fix DISTINCT query with JOIN on multiple segmentby columns
* #5774 Fixed two bugs in decompression sorted merge code
* #5786 Ensure pg_config --cppflags are passed
* #5906 Fix quoting owners in sql scripts.
* #5912 Fix crash in 1-step integer policy creation
**Thanks**
* @mrksngl for submitting a PR to fix extension upgrade scripts
* @ericdevries for reporting an issue with DISTINCT queries using
segmentby columns of compressed hypertable
When referring to a role from a string type, it must be properly quoted
using pg_catalog.quote_ident before it can be cast to regrole.
Fixed this, especially in update scripts.
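The reason quoting matters is that casting text to regrole parses the string as an identifier, so mixed-case or special-character role names must be quoted first. A minimal sketch (the role name is hypothetical and assumed to exist):

```sql
-- quote_ident turns the raw name into a valid identifier ("My-Role")
-- before the cast; without it the cast would fail or resolve wrongly.
SELECT pg_catalog.quote_ident('My-Role')::pg_catalog.regrole;
```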
This patch adds support for passing continuous aggregate names to
`chunk_detailed_size` to align its behavior with other functions
such as `show_chunks`, `drop_chunks`, and `hypertable_size`.
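A hedged usage sketch (the continuous aggregate name is hypothetical); the function resolves the underlying materialization hypertable of the continuous aggregate:

```sql
-- Per-chunk sizes for the materialization hypertable behind the cagg.
SELECT * FROM chunk_detailed_size('conditions_summary_hourly');
```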
In #4664 we introduced fixed schedules for jobs. This was done by
introducing additional parameters fixed_schedule, initial_start and
timezone for our add_job and add_policy APIs.
These fields were not updatable by alter_job so it was not
possible to switch from one type of schedule to another without dropping
and recreating existing jobs and policies.
This patch adds the missing parameters to alter_job to enable switching
from one type of schedule to another.
Fixes #5681
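A hedged sketch of the new capability (the job id and values are hypothetical; the parameter names are the same ones accepted by add_job/add_policy):

```sql
-- Switch an existing job from a drifting schedule to a fixed schedule
-- anchored at a given start time and timezone.
SELECT alter_job(
  1015,
  fixed_schedule => true,
  initial_start  => '2024-01-01 00:00:00+00',
  timezone       => 'Europe/Berlin'
);
```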
This release contains bug fixes since the 2.11.0 release. We recommend
that you upgrade at the next available opportunity.
**Features**
* #5679 Teach loader to load OSM extension
**Bugfixes**
* #5705 Scheduler accidentally getting killed when calling `delete_job`
* #5742 Fix Result node handling with ConstraintAwareAppend on
compressed chunks
* #5750 Ensure tlist is present in decompress chunk plan
* #5754 Fixed handling of NULL values in bookend_sfunc
* #5798 Fixed batch look ahead in compressed sorted merge
* #5804 Mark cagg_watermark function as PARALLEL RESTRICTED
* #5807 Copy job config JSONB structure into current MemoryContext
* #5824 Improve continuous aggregate query chunk exclusion
**Thanks**
* @JamieD9 for reporting an issue with a wrong result ordering
* @xvaara for reporting an issue with Result node handling in
ConstraintAwareAppend
This patch marks the function cagg_watermark as PARALLEL RESTRICTED. It
partially reverts the change of
c0f2ed18095f21ac737f96fe93e4035dbfeeaf2c. The reason is as follows: for
transaction isolation levels below REPEATABLE READ it cannot be ensured
that parallel workers read the same watermark (e.g., using the read
committed isolation level: worker A reads the watermark, the CAGG is
refreshed and the watermark changes, then worker B reads the newer
watermark). The different views on the CAGG can cause unexpected results
and crashes (e.g., the chunk exclusion excludes different chunks in
worker A and in worker B).
In addition, a correct snapshot is now used when the watermark is read
from the CAGG, and a TAP test is added that detects inconsistent
watermark reads.
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Co-authored-by: Zoltan Haindrich <zoltan@timescale.com>
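At the SQL level the change boils down to something like the following sketch (the schema and the integer materialization-hypertable-id signature are assumptions):

```sql
-- Keep the function out of parallel workers so all workers in a query
-- see one consistent watermark computed by the leader.
ALTER FUNCTION _timescaledb_internal.cagg_watermark(integer)
  PARALLEL RESTRICTED;
```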
Internal Server Error when loading Explorer tab (SDC #995)
This refers to a weird scenario where a chunk table entry exists in the
timescaledb catalog but does not exist in the PG catalog. The stale entry
blocks executing the hypertable_size function on the hypertable.
The changes in this patch are related to improvements suggested for the
hypertable_size function, which involve:
1. Locking the hypertable in ACCESS SHARE mode in the hypertable_size
function to avoid the risk of chunks being dropped by another concurrent
process.
2. Joining the hypertable and inherited chunk tables with "pg_class" to make
sure that a stale table without an entry in pg_catalog is not included as
part of the hypertable size calculation.
3. An additional filter (schema_name) on pg_class to avoid calculating the
size of multiple hypertables with the same name in different schemas.
NOTE: With this change, calling the hypertable_size function will require
SELECT privilege on the table.
Disable-check: force-changelog-file
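A rough sketch of the idea behind points 2 and 3 above (not the actual implementation; the catalog column names and the hypertable id are assumptions used for illustration):

```sql
-- Only chunks that still have a live pg_class entry in the expected
-- schema contribute to the reported size; stale catalog entries drop out.
SELECT sum(pg_total_relation_size(cl.oid)) AS chunks_total_bytes
  FROM _timescaledb_catalog.chunk ch
  JOIN pg_class cl ON cl.relname = ch.table_name
  JOIN pg_namespace ns ON ns.oid = cl.relnamespace
                      AND ns.nspname = ch.schema_name
 WHERE ch.hypertable_id = 1
   AND NOT ch.dropped;
```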
Add a `_timescaledb_catalog.telemetry_event` table containing
events that should be sent out with telemetry reports. The table will
be truncated after the report has been generated.
During the `cagg_migrate` execution, if the user sets the `drop_old`
parameter to `true`, the routine will drop the old Continuous Aggregate,
leading to an inconsistent state because the catalog code doesn't handle
this table as a normal catalog table, so the records are not removed
when dropping a Continuous Aggregate. The same problem will happen if
you manually drop the old Continuous Aggregate after the migration.
Fixed it by removing the useless Foreign Key and also adding another
column named `user_view_definition` to the main plan table, just to store
the original user view definition for troubleshooting purposes.
Fixed #5662
Instead of using a user name to register the owner of a job, we use
regrole. This allows renames to work properly since the underlying OID
does not change when the owner name changes.
We add a check when calling `DROP ROLE` that there is no job with that
owner and generate an error if there is.
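A hedged sketch of the kind of lookup such a check implies (the catalog and column names, as well as the role, are assumptions for illustration only):

```sql
-- Jobs still owned by the role would block DROP ROLE. Because the owner
-- is stored as a regrole (an OID), renaming the role needs no catalog fixup.
SELECT id, application_name
  FROM _timescaledb_config.bgw_job
 WHERE owner = 'bob'::regrole;
```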
In case of joins in continuous aggregates, pass the required
structs to the newly created RTE. These values are required by the
planner to finally query the materialized view.
Fixes #5433
Commit 16fdb6ca5e introduced the `timescaledb_experimental.policies` view
to expose the Continuous Aggregate policies, but the current JOINs over
our catalog are not accurate.
Fixed it by properly JOINing the underlying catalog tables to expose the
correct information about the Continuous Aggregate policies without
duplicates.
Fixes #5492
This patch drops the following internal SQL functions which were
unused:
_timescaledb_internal.is_main_table(regclass);
_timescaledb_internal.is_main_table(text, text);
_timescaledb_internal.hypertable_from_main_table(regclass);
_timescaledb_internal.main_table_from_hypertable(integer);
_timescaledb_internal.time_literal_sql(bigint, regtype);
Commit 8afdddc2da added the first step for deprecating the old format
of Continuous Aggregates, but only for PostgreSQL 15 and later versions.
During the extension update we emit a message about the deprecation, but
this has been emitted even if the user is using PostgreSQL versions
before 15.
Fixed it by emitting the WARNING only when the PostgreSQL version is
greater than or equal to 15.
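A sketch of how such version gating can be expressed in an update script (not the actual script wording):

```sql
DO $$
BEGIN
  -- Emit the deprecation warning only on PostgreSQL 15 or later.
  IF current_setting('server_version_num')::int >= 150000 THEN
    RAISE WARNING 'the old Continuous Aggregate format is deprecated';
  END IF;
END
$$;
```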
This patch moves the support functions for histogram, first and last
into the _timescaledb_functions schema. Since we alter the schema
of the existing functions in upgrade scripts and do not change the
aggregates this should work completely transparently for any user
objects using those aggregates.
Currently, user objects like chunks and our internal functions
live in the same schema, making it hard to lock down that schema.
This patch adds a new schema, _timescaledb_functions, that is meant
to be the schema used for timescaledb internal functions, allowing
separation of code from chunks and other user objects.
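A minimal sketch of what establishing the dedicated schema looks like (exact ownership and privileges in the real upgrade scripts may differ):

```sql
-- Internal functions move here; user chunks stay in _timescaledb_internal,
-- so that schema can be locked down independently.
CREATE SCHEMA _timescaledb_functions;
REVOKE ALL ON SCHEMA _timescaledb_functions FROM PUBLIC;
GRANT USAGE ON SCHEMA _timescaledb_functions TO PUBLIC;
```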
During chunk creation, the chunk's dimensional CHECK constraints are
created via an "upcall" to PL/pgSQL code. However, creating
dimensional constraints in PL/pgSQL code sometimes fails, especially
during high-concurrency inserts, because PL/pgSQL code scans metadata
using a snapshot that might not see the same metadata as the C
code. As a result, chunk creation sometimes fails during constraint
creation.
To fix this issue, implement dimensional CHECK-constraint creation in
C code. Other constraints (FK, PK, etc.) are still created via an
upcall, but should probably also be rewritten in C. However, since
these constraints don't depend on recently updated metadata, this is
left to a future change.
Fixes #5456
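For reference, the kind of dimensional CHECK constraint involved looks roughly like this (chunk, constraint name, and bounds are hypothetical):

```sql
-- A chunk is bounded along its time dimension by a CHECK constraint,
-- which this patch now creates directly in C instead of via PL/pgSQL.
ALTER TABLE _timescaledb_internal._hyper_1_1_chunk
  ADD CONSTRAINT constraint_1
  CHECK ("time" >= '2023-01-01 00:00:00+00' AND "time" < '2023-01-08 00:00:00+00');
```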
This patch introduces a C function to perform the recompression at
a finer granularity instead of decompressing and subsequently
compressing the entire chunk.
This improves performance for the following reasons:
- it needs to sort less data at a time and
- it avoids recreating the decompressed chunk and the heap
inserts associated with that by decompressing each segment
into a tuplesort instead.
If no segmentby is specified when enabling compression or if an
index does not exist on the compressed chunk then the operation is
performed as before, decompressing and subsequently
compressing the entire chunk.
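Since segment-wise recompression relies on a segmentby configuration (plus an index on the compressed chunk), here is a hedged sketch of enabling compression with a segmentby column (table and column names are hypothetical):

```sql
-- With a segmentby column, recompression can proceed segment by segment
-- instead of decompressing and recompressing the whole chunk.
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);
```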
When calling the `cagg_watermark` function to get the watermark of a
Continuous Aggregate we execute a `SELECT MAX(time_dimension)` query
on the underlying materialization hypertable.
The problem is that a `SELECT MAX(time_dimension)` query can be
expensive because it will scan all hypertable chunks, increasing the
planning time for Realtime Continuous Aggregates.
Improved it by creating a new catalog table to serve as a cache
for the current Continuous Aggregate watermark in the following
situations:
- Create CAgg: store the minimum value of the hypertable time dimension
data type;
- Refresh CAgg: store the last value of the time dimension materialized
in the underlying materialization hypertable (or the minimum value of the
materialization hypertable time dimension data type if there's no
data materialized);
- Drop CAgg Chunks: the same as refresh CAgg.
Closes #4699, #5307
During compression, autovacuum used to be disabled for the uncompressed
chunk and re-enabled after decompression. This leads to Postgres
maintenance issues. Let's not disable autovacuum for the uncompressed
chunk anymore and let Postgres take care of the stats in its natural way.
Fixes #309
Make `partialize_agg()` support parallel query execution. To make this
work, the finalize node needs to combine the individual partials from each
parallel worker, but the final step that turns the resulting partial
into the finished aggregate should not happen. Thus, in the case of
distributed hypertables, each data node can run a parallel query to
compute a partial, and the access node can later combine and finalize
these partials into the final aggregate. Essentially, there will be
one combine step (minus final) on each data node, and then another one
plus final on the access node.
To implement this, the finalize aggregate plan is simply modified to
elide the final step, and to reserialize the partial. It is only
possible to do this at the plan stage; if done at the path stage, the
PostgreSQL planner will hit assertions that assume that the node has
certain values (e.g., it doesn't expect combine Paths to skip the
final step).
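A hedged usage sketch of `partialize_agg()` (table and column names are hypothetical, and the internal schema qualification is an assumption):

```sql
-- Each parallel worker (or data node) can now produce these partials;
-- they are combined, and finalized elsewhere, in a later step.
SELECT time_bucket('1 hour', time) AS bucket,
       _timescaledb_internal.partialize_agg(avg(value)) AS avg_partial
  FROM metrics
 GROUP BY bucket;
```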
This release contains bug fixes since the 2.10.0 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #5159 Support Continuous Aggregates names in hypertable_(detailed_)size
* #5226 Fix concurrent locking with chunk_data_node table
* #5317 Fix some incorrect memory handling
* #5336 Use NameData and namestrcpy for names
* #5343 Set PortalContext when starting job
* #5360 Fix uninitialized bucket_info variable
* #5362 Make copy fetcher more async
* #5364 Fix num_chunks inconsistency in hypertables view
* #5367 Fix column name handling in old-style continuous aggregates
* #5378 Fix multinode DML HA performance regression
* #5384 Fix Hierarchical Continuous Aggregates chunk_interval_size
**Thanks**
* @justinozavala for reporting an issue with PL/Python procedures in the background worker
* @Medvecrab for discovering an issue with copying NameData when forming heap tuples.
* @pushpeepkmonroe for discovering an issue in upgrading old-style
continuous aggregates with renamed columns
For continuous aggregates with the old-style partial aggregates
renaming columns that are not in the group-by clause will generate an
error when upgrading to a later version. The reason is that it is
implicitly assumed that the name of the column is the same as for the
direct view. This holds true for new-style continuous aggregates, but is
not always true for old-style continuous aggregates. In particular,
columns that are not part of the `GROUP BY` clause can have an
internally generated name.
This commit fixes that by extracting the name of the column from the
partial view and using that when renaming the partial view column and the
materialized table column.
Different num_chunks values were reported by
timescaledb_information.hypertables and
timescaledb_information.chunks.
The view definition of hypertables was
not filtering dropped and OSM chunks.
Fixes #5338
This small patch adds support for continuous aggregates to the
`hypertable_detailed_size` (and with that `hypertable_size`).
It adds an additional check to see if a continuous aggregate exists
if a hypertable with the given regclass name isn't found.
This PR introduces a timeout argument and new logic to the
timescale_internal.ping_data_node() function, which allows
handling of I/O timeouts for unresponsive nodes.
Fix #5312
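A hedged usage sketch (the data node name is hypothetical and the schema qualification is an assumption; the (name, interval) signature matches the listing earlier in this section):

```sql
-- Fail the ping after the given timeout instead of hanging on an
-- unresponsive data node.
SELECT _timescaledb_internal.ping_data_node('data_node_1', interval '10 seconds');
```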