This patch adds a helper macro to define cross-module wrapper
functions, reducing code repetition. It also changes the
crossmodule struct names to match the function names where this
wasn't already the case.
On macOS the path used depends on the runner version, leading to cache
failures when the runner version changes or differs from the one used to
build the cache. This patch extracts the runner version and appends it as
a suffix to the cache key on macOS.
The chunk_api test sometimes fails because of inconsistent result set
ordering in one of the queries. This patch adds the missing ORDER BY
clause to that query.
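A minimal sketch of the kind of fix, with hypothetical table and
column names:

```sql
-- Without ORDER BY the result set order depends on the chosen plan
-- and may differ between runs; an explicit ORDER BY makes the test
-- output deterministic.
SELECT chunk_id, node_name
FROM chunk_placements   -- hypothetical table
ORDER BY chunk_id, node_name;
```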
This maintenance release contains bugfixes since the 1.7.1 release. We deem it medium
priority for upgrading.
In particular, the fixes contained in this maintenance release address bugs in continuous
aggregates, drop_chunks and compression.
**Features**
* #1877 Add support for fast pruning of inlined functions
**Bugfixes**
* #1908 Fix drop_chunks with unique constraints when cascade_to_materializations is false
* #1915 Check for database in extension_current_state
* #1918 Unify chunk index creation
* #1932 Change compression locking order
* #1938 Fix gapfill locf treat_null_as_missing
* #1982 Check for disabled telemetry earlier
* #1984 Fix compression bit array left shift count
* #1997 Add checks for read-only transactions
* #2002 Reset restoring gucs rather than explicitly setting 'off'
* #2028 Fix locking in drop_chunks
* #2031 Enable compression for tables with compound foreign key
* #2039 Fix segfault in create_trigger_handler
* #2043 Fix segfault in cagg_update_view_definition
* #2046 Use index tablespace during chunk creation
* #2047 Better handling of chunk insert state destruction
* #2049 Fix handling of PlaceHolderVar in DecompressChunk
* #2051 Fix tuple concurrently deleted error with multiple continuous aggregates
**Thanks**
* @akamensky for reporting an issue with telemetry and an issue with drop_chunks
* @darko408 for reporting an issue with decompression
* @dmitri191 for reporting an issue with failing background workers
* @eduardotsj for reporting an issue with indexes not inheriting tablespace settings
* @fourseventy for reporting an issue with multiple continuous aggregates
* @fvannee for contributing optimizations for pruning inlined functions
* @jflambert for reporting an issue with failing telemetry jobs
* @nbouscal for reporting an issue with compression jobs locking referenced tables
* @nicolai6120 for reporting an issue with locf and treat_null_as_missing
* @nomanor for reporting an issue with expression index with table references
* @olernov for contributing a fix for compressing tables with compound foreign keys
* @werjo for reporting an issue with drop_chunks and unique constraints
When we have multiple continuous aggregates defined on
the same hypertable, they could try to delete the hypertable
invalidation logs concurrently. Resolve this by serializing
invalidation log deletes by raw hypertable id.
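Conceptually, the fix is equivalent to taking a per-hypertable lock
before the delete. A rough SQL analogy (the real serialization happens
in C code; the hypertable id and the use of an advisory lock here are
purely illustrative):

```sql
BEGIN;
-- Serialize concurrent deleters on the same raw hypertable
-- (42 stands in for the raw hypertable id).
SELECT pg_advisory_xact_lock(42);
DELETE FROM _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log
 WHERE hypertable_id = 42;
COMMIT;
```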
Fixes #1940
This workflow will install our rpm packages, try to enable
timescaledb in the database, and check the version installed
from the package against the expected version.
If a new chunk is created as part of an insert and drop_chunks runs
concurrently with the insert, there is a risk of a race. This commit
adds a test for this case.
Add locks for dimension slice tuples
If a dimension slice tuple is found while adding new chunk constraints
as part of a chunk creation, it is not locked prior to adding the chunk
constraint. Hence a concurrently executing `drop_chunks` can find the
dimension slice unused (because there is no chunk constraint that
references it) and subsequently remove it. The insert will then continue
to add the chunk constraint with a reference to a now non-existent
dimension slice.
This commit fixes this by locking the dimension slice tuple with a
share lock when creating chunks and locking the dimension slice with an
exclusive lock prior to scanning for existing chunk constraints.
The commit also contains a script that repairs the `dimension_slice`
table if it is broken, by extracting information about dimension slices
that are mentioned in the `chunk_constraint` table but not present in
the `dimension_slice` table and re-creating the rows from the
constraints on the chunks.
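The core of the repair can be sketched in SQL as follows (a simplified
illustration of the idea, not the actual repair script):

```sql
-- Find dimension slice ids that are referenced from chunk_constraint
-- but missing from dimension_slice.
SELECT DISTINCT cc.dimension_slice_id
  FROM _timescaledb_catalog.chunk_constraint cc
  LEFT JOIN _timescaledb_catalog.dimension_slice ds
         ON ds.id = cc.dimension_slice_id
 WHERE cc.dimension_slice_id IS NOT NULL
   AND ds.id IS NULL;
-- The actual script then reconstructs each missing row from the
-- corresponding constraint on the chunk.
```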
If a tablespace is provided for an index on a hypertable, it will
also be used for the index on new chunks. This is done when constraints
are created on a new chunk from the hypertable constraints.
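For example (table, index and tablespace names are hypothetical):

```sql
CREATE TABLE measurements(time timestamptz NOT NULL, device int, val float);
SELECT create_hypertable('measurements', 'time');
-- The index tablespace is now also used for the corresponding
-- indexes on newly created chunks.
CREATE INDEX measurements_device_idx ON measurements (device)
    TABLESPACE fast_ssd;
```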
Fixes #903
When the relation targetlist of the uncompressed chunk contained
PlaceHolderVars, the construction of the relation targetlist of
the compressed chunk would fail with an error. This patch changes
the behaviour to recurse into those PlaceHolderVars.
When enabling compression on a hypertable, the existing
constraints are cloned to the new compressed hypertable.
During validation of the existing constraints, a loop
through the conkey array is performed and the constraint name
is erroneously added to the list multiple times. This fix
moves the addition to the list outside the conkey loop.
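A minimal reproduction sketch (all names are hypothetical): a
hypertable with a compound foreign key previously failed when enabling
compression.

```sql
CREATE TABLE devices(vendor int, serial int, PRIMARY KEY (vendor, serial));
CREATE TABLE readings(
    time timestamptz NOT NULL,
    vendor int,
    serial int,
    val float,
    FOREIGN KEY (vendor, serial) REFERENCES devices(vendor, serial)
);
SELECT create_hypertable('readings', 'time');
-- Enabling compression validates the existing constraints; with a
-- compound foreign key the constraint name was previously collected
-- once per conkey entry instead of once per constraint.
ALTER TABLE readings SET (timescaledb.compress);
```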
Fixes #2000
If a unique constraint is created on a hypertable, it could crash under
some circumstances. This commit adds a test for this situation even
though it was already fixed (it was reported on the 1.7 branch).
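The scenario being tested looks roughly like this (hypothetical names;
the unique constraint must include the partitioning column):

```sql
CREATE TABLE events(time timestamptz NOT NULL, id int);
SELECT create_hypertable('events', 'time');
-- Adding a unique constraint after hypertable creation is the case
-- that previously could crash.
ALTER TABLE events ADD CONSTRAINT events_time_id_key UNIQUE (time, id);
```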
Since pg_regress.sh and pg_isolation_regress.sh were almost
identical, this patch combines them into a single script.
This patch also changes the dynamic schedule generation
so that `make installcheck TESTS='foo'` is supported again.
This was broken by a previous refactoring, which required
specifying the exact suite a test was in when using TESTS.
This workflow will install our apt package, try to enable
timescaledb in the database, and check the version installed
from the package against the expected version.
This change copies the chunk object into the distributed copy's
memory context before caching it in the ChunkConnectionList. This
resolves an issue where the chunk was being modified after being
stored, which resulted in rows being sent to the incorrect
data node.
This fixes GitHub issue #2037
Since relation_close will decrease the reference counter, this
might lead to the relation being freed while we are still using
the view query. This patch changes cagg_update_view_definition
to release the relations later and also keeps the locks until
the end of the transaction.
Remove functions that are no longer used due to refactorings.
Removes the following functions:
- hypertable_tuple_match_name
- ts_hypertable_get_all_by_name
- ts_hypertable_has_tuples
When either TESTS, SKIPS or IGNORES is set for a regression check
run, we would generate a new temporary schedule based on those
variables without any parallelism. This patch changes the behaviour
to keep the original test schedule when only IGNORES is specified
and just prepends the ignore lines to a copy of the original schedule.
The TriggerDesc from rel->trigdesc seems to be modified during
iterations of the loop and sometimes gets reallocated, leading
to a situation where the local variable trigdesc no longer matches
rel->trigdesc, causing a segfault in create_trigger_handler.
Fixes #2013
This patch adds coredump collection to the regression workflow.
In addition to the binary, all shared libraries used by the
coredump are collected as well.
Check for coredumps and only execute the stacktrace step if there
actually are coredumps. This patch also changes the log handling to
always collect logs and upload them, because they might contain useful
information even when all steps succeed. Additionally, a list
of all failed tests is shown before the regression diff.
This patch also disables fail-fast so a single failed job does
not cancel other jobs still in progress.
It is incorrect to forward relcache invalidations as syscache
invalidations, e.g. with cacheid = InvalidOid, which is not a
possible condition at the moment. Allow syscache invalidations only
for FOREIGNSERVEROID.
In our normal regression tests fsync is already disabled because
the cluster is initialized by pg_regress, which turns fsync off,
but for all tests using regresschecklocal the setting will be missing
because the cluster is initialized outside of pg_regress.
This PR reduces the dataset size in the transparent_decompression
test to make it finish in a more reasonable time. It also splits
the test and modifies queries that used now(). Due to the change
in dataset size the resulting diff is rather large, but it consists
mostly of row count changes in the plans; the actual plan shapes
don't change. In addition, extra constraints have been added to
some LATERAL queries to reduce the number of loops.
Constify expressions of the following form in WHERE clause:
column OP timestamptz - interval
column OP timestamptz + interval
column OP interval + timestamptz
Iff the interval has no month component.
Since the operators for timestamptz OP interval are marked
as stable, they will not be constified during planning.
However, intervals without a month component can be safely
constified during planning, as the result of those calculations
does not depend on the timezone setting.
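For example, with a hypothetical `metrics` hypertable, the first
predicate below can now be reduced to a constant at plan time
(enabling chunk pruning during planning), while the second cannot:

```sql
-- No month component: the result is timezone-independent, so the
-- expression can be constified during planning.
SELECT * FROM metrics
WHERE time > timestamptz '2020-01-01 00:00:00+00' - interval '7 days';

-- Month component: the result depends on the timezone setting, so it
-- must still be evaluated at execution time.
SELECT * FROM metrics
WHERE time > timestamptz '2020-01-01 00:00:00+00' - interval '1 month';
```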
testsupport.sql had a reference to TSL code, which will fail in
Apache-only builds since this file is included in every test run
for every build configuration.
Since artifacts share a namespace within a workflow, we need to make
sure artifacts have a unique name within the workflow so other
runs in the same workflow don't overwrite them.
The test runner used to use a lockfile to decide whether
the initial cluster setup had to be done. Unfortunately,
checking for existence and creating the lockfile are two distinct
operations, leading to race conditions. This patch changes the
runner to use a directory instead, because creating a directory
is an atomic operation.
Setting the `timescaledb.restoring` GUC explicitly to 'off'
for the db meant that the setting got exported in `pg_dumpall`
and some other cases where that setting would then conflict
with the setting set by the pre_restore function, causing it to
be overridden and causing errors on restore. This changes it to
`RESET` so that it will instead take the system default and not
be dumped separately as an override.
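The change corresponds to the difference between the following two
statements (the database name is illustrative):

```sql
-- Before: the explicit override is exported by pg_dumpall and can
-- conflict with the setting applied by the pre_restore function.
ALTER DATABASE mydb SET timescaledb.restoring TO 'off';

-- After: RESET removes the database-level override entirely, so the
-- system default applies and nothing is dumped.
ALTER DATABASE mydb RESET timescaledb.restoring;
```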
This change ensures that API functions and DDL operations
which modify data respect the read-only transaction state
set by the default_transaction_read_only option.
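For example, a modifying API call is now rejected when the transaction
is read-only (table and column names are illustrative):

```sql
SET default_transaction_read_only TO on;
-- This now fails with a read-only transaction error instead of
-- silently modifying the catalog.
SELECT create_hypertable('conditions', 'time');
```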
This patch makes TSL_MODULE_PATHNAME available when executing
testsupport.sql in the regression test runner. This fixes an
error in the test runner that was suppressed because it happened
before the actual test run.
The SQL tests still had version checks and would run EXPLAIN
with different parameters depending on the postgres version. Since
all supported postgres versions now support all the parameters
we use, we can safely remove the version checks.
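Since all supported versions accept the same options, a single
invocation such as the following (illustrative query) now works
everywhere:

```sql
EXPLAIN (analyze, costs off, timing off, summary off)
SELECT * FROM metrics WHERE time > now() - interval '1 day';
```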
When fail-fast is true, a single failing job cancels
the entire workflow, which is not desirable for scheduled runs
and the code style tests. This patch changes fail-fast
to false for the code style tests and the scheduled i386 tests.
It also changes the different code style check steps to always
run.
The telemetry code still had code to handle the version format used
by postgres before PG10, which is dead code now that PG11 is the
minimum version. This patch removes that code path.