Currently the update test is quite inconvenient to run locally and also
inconvenient to debug, as the different update tests all run in their own
Docker container. This patch refactors the update test to no longer
require Docker and makes it easier to debug, as it will run in the
local environment as determined by pg_config.
This patch also consolidates the update/downgrade and repair tests since
they do very similar things, adds support for coredump stack traces to
the GitHub action, and removes some dead code from the update tests.
Additionally, the versions to be used in the update test are now
determined from existing git tags, so the post-release patch no longer
needs to add newly released versions.
Adjust DecompressChunk path generation to use the per-chunk settings
and not the hypertable settings when building compression info.
This patch also fixes the missing chunk configuration generation
in the update script, which was masked by this bug.
SQLSmith finds many internal program errors (`elog`, code `XX000`).
Normally these errors shouldn't be triggered by user actions and they
indicate a bug in the program (like `variable not found in subplan
targetlist`). We don't have the capacity to fix all of them currently,
especially since some of them seem to be upstream ones. This commit
adds logging for these errors so that we can at least study the current
situation.
Fix missing alignment handling as well as buffer size problems. We were
not properly reporting the failures in the compression_algo tests, so we
didn't notice these problems. Add part of the tests from the "vector text
predicates" PR to improve coverage.
Since #6505, the changelog script tries to access
secrets.ORG_AUTOMATION_TOKEN. However, accessing secrets is not possible
for PRs. This PR changes the token to the default access token, which
is available in PRs and provides read access to the issue API.
Currently, multi-node (MN) is not supported on PG16. Therefore, the
creation of distributed restore points fails on PG16. This patch disables
the CI test for this PG version.
When the regression test output differs between PG versions, we create a
template test file (*.sql.in) and add several different output files for
each supported version.
Over time we add support for new PG versions and deprecate older ones; for
example, nowadays we're working on PG16 support and have removed PG12, so
we need to check if we still need the template files.
- Added a creation_time attribute to the timescaledb catalog table "chunk".
Also, updated the corresponding view timescaledb_information.chunks to
include the chunk_creation_time attribute.
- A newly created chunk is assigned its creation time during chunk
creation to handle a new partition range for the given dimension (Time/
SERIAL/BIGSERIAL/INT/...).
- In the case of an already existing chunk, the creation time is set as
part of running the upgrade script; the current timestamp (now()) at the
time of the upgrade is assigned as the chunk creation time (see the
sketch below).
- Similarly, the downgrade script is updated to drop the attribute
creation_time from the catalog table "chunk".
- All relevant queries/views/test output have been updated accordingly.
Co-authored-by: Nikhil Sontakke <nikhil@timescale.com>
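A minimal sketch of the upgrade backfill and the updated view described
above, assuming the catalog table is _timescaledb_catalog.chunk and that
creation_time holds a timestamptz; the exact statements in the released
scripts may differ:

```sql
-- Backfill existing chunks with the current timestamp at upgrade time.
UPDATE _timescaledb_catalog.chunk
   SET creation_time = now();

-- The information view then exposes the new attribute per chunk.
SELECT chunk_schema, chunk_name, chunk_creation_time
  FROM timescaledb_information.chunks;
```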
There are certain modifications/commands that should not be allowed
in our update/downgrade scripts. For example, when adding columns to or
dropping columns from timescaledb catalog tables, the right way to do this is to
drop and recreate the table with the desired definition instead of doing
ALTER TABLE ... ADD/DROP COLUMN. This is required to ensure consistent
attribute numbers across versions.
This workflow detects this and some other incorrect catalog table
modifications and fails with an error in that case.
Fixes #6049
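For illustration, a drop-and-recreate pattern along these lines keeps the
attribute numbers consistent; the table and column names here are
hypothetical and not taken from an actual update script:

```sql
-- Hypothetical catalog change: add new_col without ALTER TABLE ... ADD COLUMN.
CREATE TABLE _timescaledb_catalog.example_tmp (
    id      integer NOT NULL,
    name    name    NOT NULL,
    new_col integer
);

-- Copy the existing rows into the new definition.
INSERT INTO _timescaledb_catalog.example_tmp (id, name)
SELECT id, name FROM _timescaledb_catalog.example;

-- Swap the tables so the catalog table keeps its original name.
DROP TABLE _timescaledb_catalog.example;
ALTER TABLE _timescaledb_catalog.example_tmp RENAME TO example;
```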
By default, the compression policy is scheduled to run every
chunk_time_interval / 2 in the current implementation, which equals three
days and twelve hours with our default settings. This schedule interval
was sufficient for previous versions of TimescaleDB. However, with the
introduction of features like mutable compression and ON CONFLICT .. DO
UPDATE queries, regular DML operations decompress data. To ensure that
modified data is compressed earlier, this patch reduces the schedule
interval of the compression policy so that it runs at least every 12 hours.
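On an existing installation, the schedule interval of a compression policy
job can also be inspected and adjusted manually. A sketch using the jobs
view and alter_job, where the proc_name filter is an assumption about how
the policy job is registered:

```sql
-- Inspect the current schedule interval of compression policy jobs.
SELECT job_id, schedule_interval
  FROM timescaledb_information.jobs
 WHERE proc_name = 'policy_compression';

-- Adjust a policy to run every 12 hours.
SELECT alter_job(job_id, schedule_interval => INTERVAL '12 hours')
  FROM timescaledb_information.jobs
 WHERE proc_name = 'policy_compression';
```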
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- relation_size(regclass)
- data_node_hypertable_info(name, name, name)
- data_node_chunk_info(name, name, name)
- hypertable_local_size(name, name)
- hypertable_remote_size(name, name)
- chunks_local_size(name, name)
- chunks_remote_size(name, name)
- range_value_to_pretty(bigint, regtype)
- get_approx_row_count(regclass)
- data_node_compressed_chunk_stats(name, name, name)
- compressed_chunk_local_stats(name, name)
- compressed_chunk_remote_stats(name, name)
- indexes_local_size(name, name)
- data_node_index_size(name, name, name)
- indexes_remote_size(name, name, name)
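Callers of these functions have to reference the new schema. A sketch,
assuming the dedicated schema is named _timescaledb_functions (the commit
only states that it is a separate, dedicated schema):

```sql
-- Before the move:
-- SELECT * FROM _timescaledb_internal.relation_size('my_hypertable'::regclass);

-- After the move (schema name assumed for illustration):
SELECT * FROM _timescaledb_functions.relation_size('my_hypertable'::regclass);
```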
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- to_unix_microseconds(timestamptz)
- to_timestamp(bigint)
- to_timestamp_without_timezone(bigint)
- to_date(bigint)
- to_interval(bigint)
- interval_to_usec(interval)
- time_to_internal(anyelement)
- subtract_integer_from_now(regclass, bigint)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the get_create_command function.
This moves the definitions of `debug_waitpoint_enable`,
`debug_waitpoint_disable`, and `debug_waitpoint_id` so that they are
always defined for debug builds, and modifies existing tests accordingly.
This means that it is no longer necessary to generate isolation test
files from templates (in most cases), and it will be straightforward to
use these functions in debug builds.
The debug utilities can be disabled by setting the option
`ENABLE_DEBUG_UTILS` to `OFF`.
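Typical usage of the waitpoint functions in a debug build might look like
the following; the waitpoint name is hypothetical:

```sql
-- Session 1: enable a waitpoint so a concurrent session blocks on it.
SELECT debug_waitpoint_enable('chunk_insert_before_lock');

-- Session 2: run the statement under test; it blocks at the waitpoint.

-- Session 1: disable the waitpoint to let the blocked session continue.
SELECT debug_waitpoint_disable('chunk_insert_before_lock');
```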
Removed the PG12-specific macros and all the now-dead code. Also updated
the test cases which had workarounds in place to make them compatible
with PG12.
The backport script for PRs does not have the permission to backport
PRs which include workflow changes, so these PRs are excluded from
being automatically backported.
Failed CI run:
https://github.com/timescale/timescaledb/actions/runs/5387338161/
jobs/9780701395
> refusing to allow a GitHub App to create or update workflow
> `.github/workflows/xxx.yaml` without `workflows` permission)
The CHANGELOG.md file contains the sections features, bugfixes, and
thanks. This patch adjusts the script merge_changelogs.sh to produce
the sections in the same order.
The test was failing on the first run by leaving a database behind as a
side effect.
Between two steps the extension was dropped without a proper cleanup.
A non-existent SQL function was called during cleanup.
This patch also removes the "debug mode"; every execution will now leave
the logs etc. in the /tmp directory for further inspection.
Adds a simple check to ensure that the PR number is present at least once
in the added changelog file.
Also fixes an earlier PR which introduced a typo.
The changes in this commit:
1. A workflow action to check whether the PR has its own changelog file in
the ".unreleased/" folder.
2. A script to merge the individual changelog entries that can be copied
into the CHANGELOG.md file.
3. A script to check the format of the lines in the changelog file.
4. A script to delete the individual changelogs post-release.