When building timescaledb on Ubuntu 20.04, mangle_path is present as
an exported symbol in binaries, making the export prefix check fail.
This patch changes the export prefix check to ignore mangle_path.
Adds an error detail clarifying that only one continuous aggregate
policy can be created per continuous aggregate. It also reports the
job id of the already existing continuous aggregate policy.
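For illustration, a minimal sketch of the new behavior, assuming a
continuous aggregate named `conditions_hourly` (name and intervals
are hypothetical):

    -- first policy succeeds and returns its job id
    SELECT add_continuous_aggregate_policy('conditions_hourly',
           start_offset => INTERVAL '2 days',
           end_offset => INTERVAL '1 hour',
           schedule_interval => INTERVAL '1 hour');

    -- a second policy on the same continuous aggregate fails; the
    -- new error detail reports the job id of the existing policy
    SELECT add_continuous_aggregate_policy('conditions_hourly',
           start_offset => INTERVAL '3 days',
           end_offset => INTERVAL '1 hour',
           schedule_interval => INTERVAL '1 hour');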
In order to implement repair tests, changes are made to the
`constraint_check` table to simulate a broken dependency, which
requires the constraints on that table to be dropped. This means that
the repair runs without constraints, and a bug in the update tests
could potentially go uncaught.
This commit fixes this by factoring out the repair tests from the
update tests and running them as a separate pass. This means that the
constraints are not dropped in the update tests and bugs there will
be caught.
In addition, some bash functions are factored out into a separate file
to avoid duplication.
Placing declarations before code has been informally supported and we
have several examples in the code. This commit adds that guideline to
the coding guidelines.
Due to a bug involving interval validation and the current daylight
savings time switch in the 1.7 releases, the update tests that
include 1.7 versions are currently failing. This patch changes CI to
run in GMT instead.
This patch makes TEST_PGPORT available in the environment that
pg_regress runs in and changes pg_regress to drop the test
tablespaces at the end of the run to get a clean state for
installchecklocal runs.
Change the telemetry_distributed test to not use hardcoded database
names but instead derive them from the test name. This patch also
changes the test to drop any databases created by the test itself.
If an unsupported CMake build type is used, it is accepted but
generates strange builds for the compiler. This commit adds a check
that only one of the standard build types is used.
This patch removes hardcoded database names from regresscheck-t
and replaces them with database names derived from the test name.
This allows more of the regresscheck-t tests to be run in parallel
leading to a roughly 15% speedup of regresscheck-t.
This patch is also prerequisite for allowing repeatable
installchecklocal runs.
PG13 adds the trusted attribute to extensions. Extensions marked
as trusted do not require superuser privileges, but can be installed
by a non-superuser with CREATE privilege in the database.
The objects within the extension will be owned by the bootstrap
superuser, but the extension itself will be owned by the calling user.
https://github.com/postgres/postgres/commit/50fc694e
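As a hedged illustration of the PG13 behavior (database, role, and
extension names are examples; pgcrypto is among the contrib
extensions PG13 marks as trusted in its control file):

    -- as superuser: grant CREATE on the database to a regular user
    GRANT CREATE ON DATABASE mydb TO alice;

    -- as alice (non-superuser): installing a trusted extension
    -- works; its objects are owned by the bootstrap superuser, the
    -- extension itself by alice
    CREATE EXTENSION pgcrypto;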
In order to support smoke-testing with a single server, the update
tests are refactored to not require a `postgres` user with full
privileges.
To support both running smoke tests and update tests, the following
changes were made:
- Database creation code was factored out of tests and is only executed
for update tests.
- Since the default for `docker_pgscript` was to use the `postgres`
database and the database creation code also switched the database to
`single` as part of the execution, the default of `docker_pgscript`
is now changed to `single`.
- Parts of tests that change roles during execution were removed
since they are more suitable for a regression test. Some `GRANT`
statements
were kept for the update tests since they test that non-superuser
privileges are propagated correctly.
- Operations that require escalated privileges are factored out into
support functions that execute the original code for update tests and
are no-ops for smoke tests.
- A dedicated `test_update_smoke` script was added that can run a smoke
test against a single server.
This commit is part of a longer sequence to refactor the update tests
for use with smoke testing.
The multinode parts of the tests are broken out into a new version to
make the logic simpler.
- All multinode tests are now added to a new version, `v7`, which
allows us to create multinode objects only for versions that support
multinode.
- Test scripts `test_updates_pg11` and `test_updates_pg12` are
changed to use `v6` for updates from versions preceding 2.0 and `v7`
for versions 2.0 and later.
- Setup file `post.update.sql` is not needed any more since it was
used to "fix" a pre-2.0 updated version to add data nodes so that it
matched the clean setup. This is no longer necessary since v6 no
longer adds data nodes for some versions but not for others.
A weird crash was showing up while running the bgw_db_scheduler test
with log_min_messages set to DEBUG3. What happens here is that we
register a callback via RegisterXactCallback which gets called at
post-commit time. This remote_connection_xact_end callback was
issuing a DEBUG3-level message. As part of its processing, the
emit_log_hook_callback function was trying to write this message to a
catalog table via a transaction. Obviously, trying to start a
transaction at this post-commit stage errors out and leads to weird
behavior and crashes. In general it's not a good idea to have
emit_log_hook callbacks start transactions because messages can be
emitted at any point of the transaction cycle, but we have to live
with the current code for now.
The fix is to reset the emit_log_hook in these xact_end callback
functions. Additionally, the emit_log_hook_callback function is only
interested in elog/ereport messages with elevel LOG or above, so that
check is added in the emit_log_hook_callback as well.
In order to support smoke-testing with a single server, the update
tests are refactored. This is part of a longer sequence of changes to
support both running smoke tests and update tests and contains the
following changes:
- All timestamp columns now have `not null` added to avoid spamming
the output file and to help find the real issue.
- The extension of dump files is changed from `sql` to `dump` since
they use the custom format and are not SQL files.
Since we require that commit messages follow the seven rules of
writing good commit messages and include GitHub issue numbers, this
commit clarifies these requirements in the contribution guidelines.
The above issue was encountered while testing bgw_db_scheduler.sql.
A backend is allowed to only do one dsm_attach call to attach to
any given dynamic shared memory segment (it needs to detach before
calling attach again). The code in params_open_wrapper took care of
calling it once. However the code in params_close_wrapper was doing an
unconditional dsm_detach everytime which can cause the dsm refcnt to go
down unnecessarily and was causing weird crashes
The fix is to track when we should be calling params_close_wrapper
instead of doing an unconditional detach always.
The index creation code would take the IndexInfo from the hypertable
and adjust it for the chunk, but would not reset the IndexInfo for
each individual chunk, leading to errors when the hypertable has
dropped columns.
Fixes #2504
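A minimal SQL sketch of the scenario that used to fail (table and
column names are hypothetical):

    CREATE TABLE metrics(time TIMESTAMPTZ NOT NULL, junk INT,
                         device INT, value FLOAT);
    SELECT create_hypertable('metrics', 'time',
           chunk_time_interval => INTERVAL '1 day');
    INSERT INTO metrics
    SELECT t, 0, 1, 0.5
    FROM generate_series('2021-01-01'::timestamptz,
                         '2021-01-05'::timestamptz, '1 hour') t;

    -- the dropped column leaves a hole in the tuple descriptor
    ALTER TABLE metrics DROP COLUMN junk;

    -- index creation recurses to each chunk; without resetting the
    -- IndexInfo per chunk this errored out
    CREATE INDEX ON metrics (device, time);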
In the histogram deserialization function the StringInfoData's pointer
is set to VARDATA(...) which does not include the varlen header, while
the length is set to VARSIZE(...) which does include the header's
length.
Fixes #2684
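The deserialization path is exercised when partial aggregate states
are combined, for example in a parallel plan. A hedged sketch (table
and settings are illustrative):

    -- encourage a parallel plan so that partial histogram states are
    -- serialized by the workers and deserialized by the leader
    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0;
    SET max_parallel_workers_per_gather = 2;

    SELECT histogram(value, 0, 100, 10) FROM metrics;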
Our current support is limited to "ON CONFLICT DO NOTHING" without
specifying any inference index columns. While we do error out in the
other cases, the error messages do not convey things clearly. So rework
them.
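A sketch of the supported and unsupported forms on a hypertable
(names are hypothetical):

    -- supported: DO NOTHING without inference index columns
    INSERT INTO metrics VALUES ('2021-01-01', 1, 0.5)
    ON CONFLICT DO NOTHING;

    -- unsupported on hypertables: inference index columns or
    -- DO UPDATE; these now produce clearer error messages
    INSERT INTO metrics VALUES ('2021-01-01', 1, 0.5)
    ON CONFLICT (time, device) DO NOTHING;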
Notifying a Slack channel on regression test failures fails in a
fork, which is common for PRs. The failed Slack notification is
confusing, since it distracts from checking the actual regression
test failure. Furthermore, notifying Slack is not needed in the case
of PRs, as developers rely on GitHub's GUI around PRs. This commit
fixes the GH Actions workflows to not notify Slack on PRs.
This release adds major new features since the 2.0.2 release.
We deem it moderate priority for upgrading.
This release adds the long-awaited support for PostgreSQL 13 to TimescaleDB.
The minimum required PostgreSQL 13 version is 13.2 due to a security vulnerability
affecting TimescaleDB functionality present in earlier versions of PostgreSQL 13.
This release also relaxes some restrictions for compressed hypertables;
namely, TimescaleDB now supports adding columns to compressed hypertables
and renaming columns of compressed hypertables.
**Major Features**
* #2779 Add support for PostgreSQL 13
**Minor features**
* #2736 Support adding columns to hypertables with compression enabled
* #2909 Support renaming columns of hypertables with compression enabled
This maintenance release contains bugfixes since the 2.0.1 release. We
deem it high priority for upgrading.
The bug fixes in this release address issues with joins, the status of
background jobs, and disabling compression. It also includes
enhancements to continuous aggregates, including improved validation
of policies and optimizations for faster refreshes when there are a
lot of invalidations.
**Minor features**
* #2926 Optimize cagg refresh for small invalidations
**Bugfixes**
* #2850 Set status for backend in background jobs
* #2883 Fix join qual propagation for nested joins
* #2884 Add GUC to control join qual propagation
* #2885 Fix compressed chunk check when disabling compression
* #2908 Fix changing column type of clustered hypertables
* #2942 Validate continuous aggregate policy
**Thanks**
* @zeeshanshabbir93 for reporting the issue with full outer joins
* @Antiarchitect for reporting the issue with slow refreshes of
continuous aggregates
* @diego-hermida for reporting the issue about being unable to disable
compression
* @mtin for reporting the issue about wrong job status
ALTER TABLE <hypertable> RENAME <column_name> TO <new_column_name>
is now supported for hypertables that have compression enabled.
Note: Column renaming is not supported for distributed hypertables,
so it will not work on distributed hypertables that have compression
enabled.
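For illustration, assuming a hypertable `conditions` with compression
enabled (names are hypothetical):

    ALTER TABLE conditions SET (timescaledb.compress,
        timescaledb.compress_segmentby = 'device');

    -- renaming a column now works despite compression being enabled
    ALTER TABLE conditions RENAME COLUMN temperature TO temp_celsius;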
When there are many small (e.g., single timestamp) invalidations that
cannot be merged despite expanding invalidations to full buckets
(e.g., invalidations are spread across every second bucket in the
worst case), it might no longer be beneficial to materialize every
invalidation separately.
Instead, this change adds a threshold for the number of invalidations
used by the refresh (currently 10 by default) above which
invalidations are merged into one range based on the lowest and
greatest invalidated time values.
The limit can be controlled by an anonymous session variable for
debugging and tweaking purposes. It might be considered for promotion
to an official GUC in the future.
Fixes #2867
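A hedged sketch of tweaking the threshold; the session-variable name
below is an assumption based on the test suite, not a documented GUC:

    -- assumed variable name; treat as illustrative only
    SET timescaledb.materializations_per_refresh_window = 25;
    CALL refresh_continuous_aggregate('conditions_hourly',
         '2021-01-01', '2021-02-01');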
The refreshing of a continuous aggregate is slow when many small
invalidations are generated by frequent single row insert
backfills. This change adds an optimization that merges small
invalidations by first expanding invalidations to full bucket
boundaries. There is really no reason to maintain invalidations that
don't cover full buckets since refresh windows are already aligned to
buckets anyway.
Fixes #2867
This change adds validation of the settings for a continuous aggregate
policy when the policy is created. Previously it was possible to
create policies that would either fail at runtime or never refresh
anything due to bad configuration.
In particular, the refresh window (start and end offsets for
refreshing) must now be at least two buckets in size or an error is
generated when the policy is created. The policy must cover at least
two buckets to ensure that at least one bucket is refreshed when the
policy runs, since it is unlikely that the policy runs at a time that
is perfectly aligned with the beginning of a bucket.
Note that it is still possible to create policies that might not
refresh anything depending on the time when it runs. For instance, if
the "current" time is close to the minimum allowed time value, the
refresh window can lag enough to fall outside the valid time range
(e.g., the end offset is big enough to push the window outside the
valid time range). As time moves on, the window would eventually move
into the valid time range, however.
Fixes #2929
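For illustration, assuming an hourly continuous aggregate
`conditions_hourly` (names and intervals are hypothetical), a policy
whose refresh window spans less than two buckets is now rejected at
creation time:

    -- window = start_offset - end_offset = 90 minutes, i.e. less
    -- than two 1-hour buckets: this now fails validation instead of
    -- misbehaving at runtime
    SELECT add_continuous_aggregate_policy('conditions_hourly',
           start_offset => INTERVAL '2 hours',
           end_offset => INTERVAL '30 minutes',
           schedule_interval => INTERVAL '30 minutes');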
When refreshing a continuous aggregate, we only materialize the
buckets that are fully enclosed by the refresh window. Therefore, we
should generate an error if the window is smaller than one bucket.
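Correspondingly, a manual refresh over a window smaller than one
bucket errors out; a sketch with the same hypothetical hourly
aggregate:

    -- the window encloses no full 1-hour bucket, so nothing could be
    -- materialized; this now raises an error
    CALL refresh_continuous_aggregate('conditions_hourly',
         '2021-01-01 10:15', '2021-01-01 10:45');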
The sanitizer test is running on PG11.1 where the cluster test is
expected to fail. This patch changes the sanitizer test to ignore
the test result of the cluster test.
The minimum required PG13 version is 13.2 because we require a
bugfix only present in that version; otherwise, decompression might
not work properly for certain queries.
When checking for -Wno-stringop-truncation, gcc reports success even
though it doesn't support the flag. This patch changes the check to
test for the actual flag (-Wstringop-truncation) instead of the flag
that turns it off, which produces the correct result.
PG13 has introduced parallel vacuum functionality: if a table has two
or more indexes (each larger than 512kB) and enough parallel worker
processes are available, the vacuum of those indexes can be done in
parallel.
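A minimal illustration on PG13 (table name hypothetical; the index
size threshold is governed by min_parallel_index_scan_size, 512kB by
default):

    -- vacuum the indexes of "metrics" with up to 2 parallel workers
    VACUUM (PARALLEL 2) metrics;

    -- PARALLEL 0 explicitly disables parallel vacuum
    VACUUM (PARALLEL 0) metrics;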