Currently, multi-node (MN) is not supported on PG16, so the creation of
distributed restore points fails on PG16. This patch disables the CI
test for this PG version.
When the regression test output differs between PG versions, we create
a template test file (*.sql.in) and add a separate expected output file
for each supported version.
Over time we add support for new PG versions and deprecate older ones;
for example, we are currently working on PG16 support and have removed
PG12, so we need to check whether we still need the existing template
files.
- Added a creation_time attribute to the timescaledb catalog table
"chunk" and updated the corresponding view timescaledb_information.chunks
to include a chunk_creation_time attribute (see the example query after
this list).
- A newly created chunk is assigned its creation time at chunk creation,
when handling a new partition range for the given dimension (Time/
SERIAL/BIGSERIAL/INT/...).
- For already existing chunks, the creation time is set as part of
running the upgrade script: the current timestamp (now()) at the time
of the upgrade is assigned as the chunk creation time.
- Similarly, the downgrade script is updated to drop the creation_time
attribute from the catalog table "chunk".
- All relevant queries/views/test output have been updated accordingly.
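For example, the new attribute can be inspected through the updated
view; 'conditions' is a placeholder hypertable name:

    SELECT chunk_name, chunk_creation_time
      FROM timescaledb_information.chunks
     WHERE hypertable_name = 'conditions'
     ORDER BY chunk_creation_time;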
Co-authored-by: Nikhil Sontakke <nikhil@timescale.com>
There are certain modifications/commands that should not be allowed
in our update/downgrade scripts. For example, when adding columns to or
dropping columns from timescaledb catalog tables, the right way is to
drop and recreate the table with the desired definition instead of
using ALTER TABLE ... ADD/DROP COLUMN. This is required to ensure
consistent attribute numbers across versions.
This workflow detects this and some other incorrect catalog table
modifications and fails with an error in that case.
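As an illustration, an update-script fragment that adds a column could
look roughly like the following (the table and columns are made up, and
real scripts also have to restore constraints, privileges, and
extension membership):

    CREATE TABLE _timescaledb_catalog.example_tmp AS
        SELECT id, name FROM _timescaledb_catalog.example;
    DROP TABLE _timescaledb_catalog.example;
    CREATE TABLE _timescaledb_catalog.example (
        id integer NOT NULL,
        name name NOT NULL,
        new_col integer,  -- the column being added
        CONSTRAINT example_pkey PRIMARY KEY (id)
    );
    INSERT INTO _timescaledb_catalog.example (id, name)
        SELECT id, name FROM _timescaledb_catalog.example_tmp;
    DROP TABLE _timescaledb_catalog.example_tmp;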
Fixes #6049
By default, the compression policy is scheduled to run every
chunk_time_interval / 2 in the current implementation, which equals
three days and twelve hours with our default settings. This schedule
interval was sufficient for previous versions of TimescaleDB. However,
with the introduction of features like mutable compression and
ON CONFLICT .. DO UPDATE queries, regular DML operations now decompress
data. To ensure that modified data is compressed again earlier, this
patch reduces the schedule interval of the compression policy so that
it runs at least every 12 hours.
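For reference, the schedule interval of an existing compression policy
can be inspected and overridden per job; the job id below (1000) is a
placeholder:

    SELECT job_id, schedule_interval
      FROM timescaledb_information.jobs
     WHERE proc_name = 'policy_compression';

    SELECT alter_job(1000, schedule_interval => INTERVAL '12 hours');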
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions (see the sketch after the list):
- relation_size(regclass)
- data_node_hypertable_info(name, name, name)
- data_node_chunk_info(name, name, name)
- hypertable_local_size(name, name)
- hypertable_remote_size(name, name)
- chunks_local_size(name, name)
- chunks_remote_size(name, name)
- range_value_to_pretty(bigint, regtype)
- get_approx_row_count(regclass)
- data_node_compressed_chunk_stats(name, name, name)
- compressed_chunk_local_stats(name, name)
- compressed_chunk_remote_stats(name, name)
- indexes_local_size(name, name)
- data_node_index_size(name, name, name)
- indexes_remote_size(name, name, name)
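Schematically, the update script moves each function along these lines
(one example shown; the dedicated schema is assumed to be
_timescaledb_functions here):

    ALTER FUNCTION _timescaledb_internal.relation_size(regclass)
        SET SCHEMA _timescaledb_functions;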
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the following functions:
- to_unix_microseconds(timestamptz)
- to_timestamp(bigint)
- to_timestamp_without_timezone(bigint)
- to_date(bigint)
- to_interval(bigint)
- interval_to_usec(interval)
- time_to_internal(anyelement)
- subtract_integer_from_now(regclass, bigint)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema, our internal functions should live in
a different, dedicated schema. This patch makes the necessary
adjustments for the get_create_command function.
This moves the definitions of `debug_waitpoint_enable`,
`debug_waitpoint_disable`, and `debug_waitpoint_id` so that they are
always defined for debug builds, and modifies existing tests
accordingly.
This means that it is no longer necessary to generate isolation test
files from templates (in most cases), and it becomes straightforward to
use these functions in debug builds.
The debug utilities can be disabled by setting the option
`ENABLE_DEBUG_UTILS` to `OFF`.
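A typical (hypothetical) isolation-test usage looks like this; the
waitpoint name is made up and the functions are only available in debug
builds:

    SELECT debug_waitpoint_enable('chunk_insert_before_lock');
    -- run the concurrent step that should block on the waitpoint
    SELECT debug_waitpoint_disable('chunk_insert_before_lock');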
Removed the PG12-specific macros and all the now-dead code. Also
updated the test cases that previously had workarounds in place for
PG12 compatibility.
The backport script for PRs does not have permission to backport PRs
that include workflow changes, so these PRs are excluded from being
automatically backported.
Failed CI run:
https://github.com/timescale/timescaledb/actions/runs/5387338161/jobs/9780701395
> refusing to allow a GitHub App to create or update workflow
> `.github/workflows/xxx.yaml` without `workflows` permission)
The CHANGELOG.md file contains the sections features, bugfixes, and
thanks. This patch adjusts the merge_changelogs.sh script to produce
the sections in the same order.
The test was failing on the first run by leaving a database behind as
a side effect. Between two steps the extension was dropped without a
proper cleanup, and a non-existent SQL function was called during
cleanup.
This patch also removes the "debug mode"; every execution now leaves
the logs etc. in the /tmp directory for further inspection.
Adds a simple check to ensure that the PR number is present at least
once in the added changelog file.
Also fixes a typo introduced by an earlier PR.
The changes in this commit:
1. A workflow action to check if the PR has its own changelog file in
the ".unreleased/" folder.
2. A script to merge the individual changelog entries so that they can
be copied into the CHANGELOG.md file.
3. A script to check the format of the lines in the changelog file.
4. A script to delete the individual changelogs post release.
Our cost model should be self-consistent, and the relative values for
the remote tuple and startup costs should reflect their real cost,
relative to costs of other operations like CPU tuple cost.
For example, the remote tuple and startup costs are currently set even
lower than the parallel tuple and startup costs. Contrary to that,
their real-world cost is an order of magnitude higher or more, because
parallel tuples are sent through shared memory while remote tuples are
sent over the network.
Increasing these costs leads to query plan improvements, e.g. we start
to favor the GROUP BY pushdown in some cases.
Whenever we create a template SQL file (*.sql.in), we should add the
respective .gitignore entry for the generated test files.
This patch adds a CI check for missing .gitignore entries for generated
test files.
In the case of joins in continuous aggregates, pass the required
structs to the newly created RTE. These values are required by the
planner to eventually query the materialized view.
Fixes #5433
NameData is a fixed-size type of 64 bytes. Using strlcpy to copy data
into a NameData struct can cause problems because any data that follows
the initial null-terminated string will also be part of the data.
The upgrade and downgrade tests run PostgreSQL in Docker containers.
The function 'wait_for_pg' is used to determine whether PostgreSQL is
ready to accept connections. In contrast to the upgrade tests, the
downgrade tests use more relaxed timeout values. The upgrade tests
sometimes fail because PostgreSQL cannot accept connections within the
configured timeout. This patch applies the more relaxed timeout values
to the upgrade script as well.
Using `strlcpy` to copy variables holding PostgreSQL names can cause
issues since names are fixed-size types of length 64. This means that
any data that follows the initial null-terminated string will also be
part of the data.
Instead of using `const char*` for PostgreSQL names, use the `NameData`
type and copy such values with `namestrcpy` rather than `strlcpy`.