This release contains bug fixes since the 2.13.0 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #6365 Use numrows_pre_compression in approximate row count
* #6377 Use processed group clauses in PG16
* #6384 Change bgw_log_level to use PGC_SUSET
* #6393 Disable vectorized sum for expressions
* #6408 Fix groupby pathkeys for gapfill in PG16
* #6428 Fix index matching during DML decompression
* #6439 Fix compressed chunk permission handling on PG16
* #6443 Fix lost concurrent CAgg updates
* #6454 Fix unique expression indexes on compressed chunks
* #6465 Fix use of freed path in decompression sort logic
**Thanks**
* @MA-MacDonald for reporting an issue with gapfill in PG16
* @aarondglover for reporting an issue with unique expression indexes on compressed chunks
* @adriangb for reporting an issue with security barrier views on PG16
Before this change, during downgrade script generation we would
always fetch the pre-update script from the previous version and
prepend it to the generated scripts. This limits what can be
referenced in the pre-update script, and also what is possible
within the downgrade itself.
This patch splits the pre-update script into a generic part that
is used for both update and downgrade, and an update-specific part. We
could later also add a downgrade-specific part, but currently it is
not needed. This change is necessary because we reference a
timescaledb view in the pre-update script, which prevents changes to
that view.
This commit introduces a function `hypertable_osm_range_update`
in the _timescaledb_functions schema. This function is meant to serve
as an API call for the OSM extension to update the time range
of a hypertable's OSM chunk with the min and max values present
in the contiguous time range its tiered chunks span.
If the range is not contiguous, then it must be set to the invalid
range an OSM chunk is assigned upon creation.
A new status field is also introduced in the hypertable catalog
table to keep track of whether the ranges covered by tiered and
non-tiered chunks overlap.
When no overlap is detected, it is possible to apply the
Ordered Append optimization in the presence of OSM chunks.
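As a rough sketch, a call from the OSM extension could look like the
following; the argument names, types, and the `metrics` hypertable are
assumptions for illustration, not the definitive API.

```sql
-- Hypothetical sketch: update the OSM chunk's range to the contiguous
-- span covered by the tiered chunks (argument list is assumed).
SELECT _timescaledb_functions.hypertable_osm_range_update(
    'metrics',                     -- hypertable with an OSM chunk
    '2023-01-01'::timestamptz,     -- min of the contiguous tiered range
    '2023-06-01'::timestamptz      -- max of the contiguous tiered range
);
```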
With timescaledb 2.12, all the functions present in _timescaledb_internal
were moved into the _timescaledb_functions schema to improve schema
security. This patch adds a compatibility layer so external callers
of these internal functions will not break, allowing for more
flexibility when migrating.
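A minimal sketch of what such a shim can look like (the real
compatibility layer is generated at build time and may differ, e.g. by
also emitting a deprecation warning; `to_unix_microseconds` is just one
of the moved functions):

```sql
-- Illustrative shape of a compatibility shim in the old schema that
-- forwards to the relocated function.
CREATE OR REPLACE FUNCTION _timescaledb_internal.to_unix_microseconds(ts TIMESTAMPTZ)
RETURNS BIGINT LANGUAGE SQL STABLE
AS $$
  SELECT _timescaledb_functions.to_unix_microseconds(ts);
$$;
```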
This change makes the definitions of `debug_waitpoint_enable`,
`debug_waitpoint_disable`, and `debug_waitpoint_id` always available in
debug builds and modifies existing tests accordingly.
This means that it is no longer necessary to generate isolation test
files from templates (in most cases), and it will be straightforward to
use these functions in debug builds.
The debug utilities can be disabled by setting the option
`ENABLE_DEBUG_UTILS` to `OFF`.
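A minimal sketch of how a test session might use them (debug builds
only; the waitpoint name is illustrative):

```sql
-- Enable a waitpoint so a concurrent session blocks when reaching it.
SELECT debug_waitpoint_enable('my_waitpoint');
-- The advisory-lock id backing the waitpoint, useful in isolation specs.
SELECT debug_waitpoint_id('my_waitpoint');
-- ... run the operation that should block in another session ...
SELECT debug_waitpoint_disable('my_waitpoint');
```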
This patch drops the following internal SQL functions, which were
unused:
- `_timescaledb_internal.is_main_table(regclass)`
- `_timescaledb_internal.is_main_table(text, text)`
- `_timescaledb_internal.hypertable_from_main_table(regclass)`
- `_timescaledb_internal.main_table_from_hypertable(integer)`
- `_timescaledb_internal.time_literal_sql(bigint, regtype)`
This commit gives more visibility into job failures by making the
information regarding a job runtime error available in an extension
table (`job_errors`) that users can directly query.
This commit also adds an informational view on top of the table for
convenience.
To prevent the `job_errors` table from growing too large,
a retention job is also set up with a default retention interval
of 1 month. The retention job is registered with a custom check
function that requires that a valid "drop_after" interval be provided
in the config field of the job.
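For example, recent failures can be inspected through the view
(assuming it is exposed as `timescaledb_information.job_errors` with
the columns below):

```sql
-- Sketch: list the ten most recent job failures.
SELECT job_id, proc_schema, proc_name, sqlerrcode, err_message
FROM timescaledb_information.job_errors
ORDER BY finish_time DESC
LIMIT 10;
```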
Timescale 2.7 released a new version of Continuous Aggregate (#4269)
that stores the final aggregation state instead of the byte array of
the partial aggregate state, offering multiple opportunities for
optimization as well as a more compact form.
When upgrading to Timescale 2.7, newly created Continuous Aggregates
use the new format, but existing Continuous Aggregates keep
using the format they were defined with.
This change adds a procedure to upgrade existing Continuous Aggregates
from the old format to the new format with a simple call:
test=# CALL cagg_migrate('conditions_summary_daily');
Closes #4424
Add a parameter `drop_remote_data` to `detach_data_node()` which
allows dropping the hypertable on the data node when detaching
it. This is useful when detaching a data node and then immediately
attaching it again. If the data remains on the data node, the
re-attach will fail with an error complaining that the hypertable
already exists.
The new parameter is analogous to the `drop_database` parameter of
`delete_data_node`. It is `false` by default for backward
compatibility and ensures that a data node can be detached without
requiring communication with the data node (e.g., if the data node is
not responding due to a failure).
Closes #4414
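A sketch of the call (the node and hypertable names are placeholders):

```sql
-- Detach data node "dn1" from hypertable "conditions" and drop the
-- hypertable's data on the node so it can be re-attached cleanly.
SELECT detach_data_node('dn1',
                        hypertable => 'conditions',
                        drop_remote_data => true);
```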
Add option `USE_TELEMETRY` that can be used to exclude telemetry from
the build.
Telemetry-specific SQL is split out and only included when the
extension is compiled with telemetry, and the startup notice is
changed so that the message about telemetry is not printed when
telemetry is not compiled in.
The following code is not compiled in when telemetry is not used:
- Cross-module functions for telemetry.
- Checks for telemetry job in job execution.
- GUC variables `telemetry_level` and `telemetry_cloud`.
The telemetry subsystem is not included when compiling without
telemetry, which requires moving some functions out of the telemetry
subsystem:
- Metadata handling is moved out of the telemetry module since it is
used not only by telemetry.
- UUID functions are moved into a separate module instead of being
part of the telemetry subsystem.
- Telemetry functions are either added or removed when updating from a
previous version.
Tests are updated to:
- Use the moved UUID and metadata functions instead of the telemetry
functions to get the UUID or metadata.
- Not include telemetry information in tests that do not require it.
- Not set telemetry variables in configuration files when telemetry is
not compiled in.
- Replace usage of telemetry functions in non-telemetry tests with
other sources of the same information.
Fixes #3931
On non-debug builds we might end up with an empty list of tests
when generating the schedule. On CMake versions older than 3.14,
trying to sort an empty list produces an error, so we check for an
empty list here.
TimescaleDB was vulnerable to a privilege escalation attack in
the extension installation script. An attacker could precreate
objects normally owned by the extension and get those objects
used in the installation script since the script would only try
to create them if they did not already exist. Thanks to Pedro
Gallegos for reporting the problem.
This patch changes the schema, table and function creation to fail
and abort the installation when the object already exists instead
of using the existing object.
Security: CVE-2022-24128
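Illustratively, the change replaces the adopt-if-exists pattern with a
plain create that aborts installation when the object already exists:

```sql
-- Before (vulnerable): silently adopts a pre-created, possibly
-- attacker-owned schema.
CREATE SCHEMA IF NOT EXISTS _timescaledb_internal;
-- After (fixed): installation fails if the schema already exists.
CREATE SCHEMA _timescaledb_internal;
```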
Refactor the telemetry function and format to include stats broken
down by common relation types. The types include:
- Tables
- Partitioned tables
- Hypertables
- Distributed hypertables
- Continuous aggregates
- Materialized views
- Views
and for each of these types report (when applicable):
- Total number of relations
- Total number of children/chunks
- Total data volume (broken into heap, toast, and indexes)
- Compression stats
- PG stats, like reltuples
The telemetry function has also been refactored to return `jsonb`
instead of `text`. This makes it easier to query and manipulate the
resulting JSON format, and also gives cleaner output.
Closes #3932
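For example, pieces of the report can now be extracted with the usual
`jsonb` operators (the `relations` and `hypertables` keys below are
assumptions about the report layout):

```sql
-- Sketch: pretty-print the hypertable stats from the telemetry report.
SELECT jsonb_pretty(get_telemetry_report() -> 'relations' -> 'hypertables');
```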
To support tests with different configuration options, we split the
tests into *test configurations*. Each test configuration `NAME` will have:
- A configuration template file `NAME.conf.in` that is used to run the
suite of tests.
- A variable `TEST_FILES_<NAME>` listing the test files available for
that test suite.
- A variable `SOLO_TESTS_<NAME>` that lists the tests that need to be
run as solo tests.
The code to generate test schedules is then factored out into a
separate file and used for each configuration.
Add support for continuous aggregates for distributed hypertables by
allowing a continuous aggregate to read from a distributed hypertable
so that the continuous aggregate is on the access node while the
hypertable data is on the data nodes.
For distributed hypertables, both the hypertable and continuous
aggregate invalidation log are kept on the data nodes and the refresh
window is computed at refresh time on each data node. Since the
continuous aggregate materialization hypertable is not present on the
data nodes, the invalidation log was extended to allow using a
non-local hypertable id on the data nodes. This means that you cannot
create continuous aggregates on the data nodes since those could clash
with continuous aggregates on the access node.
Some utility statements added entries to the invalidation logs
directly (truncating chunks and hypertables, as well as dropping
individual chunks), so to handle this case, internal functions were
added to allow logging invalidation on the data nodes from the access
node.
The commit also includes some fixes to memory context usage that
caused crashes for invalidation triggers, and also disables per-data
node queries during refresh since those would otherwise generate an
exception.
Fixes #3435
Co-authored-by: Mats Kindahl <mats@timescale.com>
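A sketch of the user-facing result: the continuous aggregate is
created on the access node with the standard syntax, while the
underlying data stays on the data nodes (the `conditions` distributed
hypertable and its columns are placeholders).

```sql
-- Continuous aggregate over a distributed hypertable, created on the
-- access node; refresh computes invalidations on each data node.
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;
```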
A new view in the experimental schema shows information related to
chunk replication. The view can be used to learn the replication
status of a chunk while also providing a way to easily find nodes to
move or copy chunks between in order to ensure a fully replicated
multi-node cluster.
Tests have been added to illustrate the potential usage.
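A sketch of the intended usage (the view and column names below are
assumptions matching the experimental schema's chunk replication
status view):

```sql
-- Find under-replicated chunks and candidate nodes to copy them to.
SELECT chunk_name, num_replicas, desired_num_replicas,
       replica_nodes, non_replica_nodes
FROM timescaledb_experimental.chunk_replication_status
WHERE num_replicas < desired_num_replicas;
```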
This commit adds functions and code to generate a downgrade script from
the current version to the previous version. This requires execution
from a Git repository since it retrieves the prolog and epilog for the
"downgrade" file from the version given by `update_from_version` in the
`version.config` file.
The commit adds several CMake functions that simplify the composition
of script files, but these are not used to generate the update
scripts. A potential improvement is to use these functions to also
generate the update scripts.
This commit supports generating a downgrade script from the
current version to the previous version. Other versions are handled
using a variable containing the file names of reverse update
scripts; the source and target versions are extracted from the file
names, which are assumed to be of the form
`<source-version>--<target-version>.sql`.
In addition to adding support for generating downgrade scripts, the
commit adds a downgrade test file that tests a release in a similar way
to the update script and adds it as a workflow.