A function to look up the name of a chunk constraint returned a pointer
to a string without first copying the string into a safe memory
context. This probably worked by chance because everything in the scan
function ran in the current memory context, including the deforming of
the tuple. However, returning pointers to data in deformed tuples can
easily cause memory corruption with the introduction of other changes
(such as improved memory management).
This memory issue is fixed by explicitly reallocating the string in
the memory context that should be used for any returned data.
Changes are also made to avoid unnecessarily deforming tuples multiple
times in the same scan function.
This change makes the Scanner code agnostic to the underlying storage
implementation of the tables it scans. This also fixes a bug that made
it impossible to use non-heap table access methods on a
hypertable. The bug existed because a check for existing data is made
before a table is turned into a hypertable. Since this check reads
data from the table using the Scanner, the Scanner must be able to read the
data irrespective of the underlying storage.
As a result of the more generic scan interface, resource management is
also improved by delivering tuples in reference-counted tuple table
slots. A backwards-compatibility layer is used for PG11, which maps
all table access functions to the heap equivalents.
Whenever chunks are created, no privileges are added to the chunks.
For accesses that go through the hypertable, permission checks are
ignored, so reads and writes will succeed anyway. However, for direct
accesses to the chunks, permission checks are done, which creates
problems for, e.g., `pg_dump`.
This commit fixes this by propagating `GRANT` and `REVOKE` statements
to the chunks when executed on the hypertable, and whenever new chunks
are created, privileges are copied from the hypertable.
This commit does not propagate privileges for distributed hypertables;
that is handled in a separate commit.
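A minimal sketch of the new behaviour (assuming a hypertable `metrics` and an existing role `readonly_user`; the chunk name is illustrative):

```sql
-- Propagated to all existing chunks of the hypertable:
GRANT SELECT ON metrics TO readonly_user;

-- Newly created chunks copy the hypertable's privileges, so direct
-- chunk access (e.g., by pg_dump) succeeds:
SELECT has_table_privilege('readonly_user',
                           '_timescaledb_internal._hyper_1_1_chunk',
                           'SELECT');
```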
This patch adds a proc_name, proc_schema, hypertable_id index to
bgw_job. Three functions using the new index are added as well:
- ts_bgw_job_find_by_proc
- ts_bgw_job_find_by_hypertable_id
- ts_bgw_job_find_by_proc_and_hypertable_id
These functions are required for migrating the existing policies
to store their configuration in bgw_job directly.
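A catalog query roughly equivalent to what the new lookup functions retrieve, served by the new index (a sketch; the filter values are illustrative):

```sql
-- Roughly what ts_bgw_job_find_by_proc_and_hypertable_id looks up:
SELECT *
FROM _timescaledb_config.bgw_job
WHERE proc_schema = '_timescaledb_internal'
  AND proc_name = 'policy_compression'
  AND hypertable_id = 1;
```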
This patch makes the scheduler ignore all jobs where scheduled = false.
The `scheduled` flag can be used to disable jobs without deleting the job
entry.
This patch also changes the job retrieval to use an index scan to
satisfy the scheduler's assumption that it receives a list ordered
by job_id.
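A sketch of how the flag can be used (shown as a direct catalog update purely for illustration; the job id is made up):

```sql
-- The scheduler now skips this job instead of requiring its deletion:
UPDATE _timescaledb_config.bgw_job
SET scheduled = false
WHERE id = 1000;
```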
The timescaledb_information.chunks view shows metadata
related to chunks.
The timescaledb_information.dimensions view shows metadata
related to a hypertable's dimensions.
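For example (column names assume the new view definitions):

```sql
-- Chunk metadata, including each chunk's dimension ranges:
SELECT hypertable_name, chunk_name, range_start, range_end
FROM timescaledb_information.chunks;

-- Dimension metadata for all hypertables:
SELECT hypertable_name, column_name, column_type, num_partitions
FROM timescaledb_information.dimensions;
```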
Allow move_chunk() to work with an uncompressed chunk and
automatically move the associated compressed chunk to the specified
tablespace.
Block move_chunk() execution for compressed chunks.
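For example (chunk and tablespace names are illustrative):

```sql
-- Moves the uncompressed chunk and, if present, its compressed
-- counterpart to tablespace2:
SELECT move_chunk(
  chunk => '_timescaledb_internal._hyper_1_1_chunk',
  destination_tablespace => 'tablespace2');
```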
Issue: #2067
The `pg_config.h` file in the PostgreSQL include directory does not
contain any include guards, so including it will generate errors when
building from PGDG-distributed packages.
Since `pg_config.h` is included from `c.h` (which has include guards),
which in turn is included from `postgres.h` (which also has include
guards), we remove direct inclusion of `pg_config.h` from all places
where `postgres.h` is already included and replace it with `postgres.h`
where it is not.
This change fixes the stats collecting code to also return the slot
collation fields for PG12. This fixes a bug (#2093) where running an
ANALYZE in PG12 would break queries on distributed tables.
Test that drop_chunks works correctly on a distributed hypertable.
Tests of the different arguments are assumed to have been done
previously on a regular hypertable.
The DML blocker that blocks UPDATEs and DELETEs on compressed hypertables
would trigger if the UPDATE or DELETE referenced any hypertable with
compressed chunks. This patch changes the logic to block only if the
target of the UPDATE or DELETE is a compressed chunk.
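A sketch of the new behaviour (assuming a compressed hypertable `metrics` and a plain table `devices`):

```sql
-- No longer blocked: the compressed hypertable is only referenced,
-- not the target of the statement:
UPDATE devices d
SET last_seen = m.time
FROM metrics m
WHERE m.device_id = d.id;

-- Still blocked: the statement targets compressed chunks:
DELETE FROM metrics WHERE time < '2020-01-01';
```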
When creating an index with IF NOT EXISTS, we still tried to
create indexes for all the chunks. But since no index was created
on the parent table, the metadata did not have the object id of
the main table index, leading to an error when trying to open
the main table index. This patch adjusts the logic to check
for IF NOT EXISTS and return early when no index was
created on the parent table.
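A minimal reproduction sketch:

```sql
CREATE TABLE conditions(time timestamptz NOT NULL, device int);
SELECT create_hypertable('conditions', 'time');

CREATE INDEX IF NOT EXISTS conditions_device_idx ON conditions(device);
-- Previously errored when trying to open the (never created) main table
-- index; now a no-op:
CREATE INDEX IF NOT EXISTS conditions_device_idx ON conditions(device);
```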
Migrate the Travis Coverity config to GitHub Actions. This new workflow
depends on the cached PostgreSQL build of the main regression workflow,
so we don't have to duplicate the build matrix in this workflow. Since
the main regression workflow runs at least once a day, there should
always be a cache hit; in case of a cache miss, this workflow will abort.
Since there is a quota on Coverity runs, this workflow is configured to
run once a week, every Saturday. Additional runs can be triggered by
pushing to the coverity_scan branch.
Allow ALTER SET TABLESPACE on an uncompressed chunk and
automatically execute it on the associated compressed chunk,
if any. Block SET TABLESPACE command for compressed chunks.
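For example (chunk and tablespace names are illustrative):

```sql
-- Also moves the associated compressed chunk, if one exists:
ALTER TABLE _timescaledb_internal._hyper_1_1_chunk
  SET TABLESPACE tablespace2;
```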
Issue #2068
The update test checks that default jobs are the same before and after
an upgrade, but does not order the jobs in any specific way. This means
that if the database order is different before and after the update, it
will result in a false negative even if the jobs are identical.
This commit fixes this by ordering the jobs by id.
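A sketch of the order-stable query the test can use (column list is illustrative):

```sql
SELECT id, application_name
FROM _timescaledb_config.bgw_job
ORDER BY id;
```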
Inferring the start and stop parameters used to work only for
top-level constraints. However, even when constraints are at the
top level in the query, they might not end up at the top level
of the jointree, depending on the plan shape. This patch
changes the gapfill code to traverse the jointree to find
valid start and stop arguments.
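For example, in a query like the following (table names are illustrative), the time constraints can end up attached to a join node rather than the top of the jointree, yet start and stop are now still inferred:

```sql
SELECT time_bucket_gapfill('1 day', m.time) AS day, avg(m.value)
FROM metrics m
JOIN devices d ON d.id = m.device_id
WHERE m.time >= '2020-01-01' AND m.time < '2020-02-01'
GROUP BY day;
```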
This patch adds a helper macro to define cross-module
wrapper functions to reduce code repetition. It also
changes the cross-module struct names to match the function
names where this wasn't already the case.
On macOS, the cache path depends on the runner version, leading to cache
failures when the runner version changes or differs from the one used to
build the cache. This patch extracts the runner version and adds it as a
suffix to the cache key on macOS.
The chunk_api test fails sometimes because of inconsistent result set
ordering in one of the queries. This patch adds the missing ORDER BY
clause to that query.
This maintenance release contains bugfixes since the 1.7.1 release. We deem it
medium priority for upgrading.
In particular, the fixes contained in this maintenance release address bugs in
continuous aggregates, drop_chunks and compression.
**Features**
* #1877 Add support for fast pruning of inlined functions
**Bugfixes**
* #1908 Fix drop_chunks with unique constraints when cascade_to_materializations is false
* #1915 Check for database in extension_current_state
* #1918 Unify chunk index creation
* #1932 Change compression locking order
* #1938 Fix gapfill locf treat_null_as_missing
* #1982 Check for disabled telemetry earlier
* #1984 Fix compression bit array left shift count
* #1997 Add checks for read-only transactions
* #2002 Reset restoring gucs rather than explicitly setting 'off'
* #2028 Fix locking in drop_chunks
* #2031 Enable compression for tables with compound foreign key
* #2039 Fix segfault in create_trigger_handler
* #2043 Fix segfault in cagg_update_view_definition
* #2046 Use index tablespace during chunk creation
* #2047 Better handling of chunk insert state destruction
* #2049 Fix handling of PlaceHolderVar in DecompressChunk
* #2051 Fix tuple concurrently deleted error with multiple continuous aggregates
**Thanks**
* @akamensky for reporting an issue with telemetry and an issue with drop_chunks
* @darko408 for reporting an issue with decompression
* @dmitri191 for reporting an issue with failing background workers
* @eduardotsj for reporting an issue with indexes not inheriting tablespace settings
* @fourseventy for reporting an issue with multiple continuous aggregates
* @fvannee for contributing optimizations for pruning inlined functions
* @jflambert for reporting an issue with failing telemetry jobs
* @nbouscal for reporting an issue with compression jobs locking referenced tables
* @nicolai6120 for reporting an issue with locf and treat_null_as_missing
* @nomanor for reporting an issue with expression index with table references
* @olernov for contributing a fix for compressing tables with compound foreign keys
* @werjo for reporting an issue with drop_chunks and unique constraints
When we have multiple continuous aggregates defined on
the same hypertable, they could try to delete the hypertable
invalidation logs concurrently. Resolve this by serializing
invalidation log deletes by raw hypertable id.
Fixes #1940
This workflow will install our rpm packages and then try to enable
timescaledb in the database and also check the version installed
from the package against the expected version.
If a new chunk is created as part of an insert and drop_chunks runs
concurrently with the insert, there is a risk of a race. This commit
adds a test for this case.
Add locks for dimension slice tuples
If a dimension slice tuple is found while adding new chunk constraints
as part of a chunk creation, it is not locked prior to adding the chunk
constraint. Hence a concurrently executing `drop_chunks` can find the
dimension slice unused (because there is no chunk constraint that
references it) and subsequently remove it. The insert will then continue
to add the chunk constraint with a reference to a now non-existent
dimension slice.
This commit fixes this by locking the dimension slice tuple with a
share lock when creating chunks and locking the dimension slice with an
exclusive lock prior to scanning for existing chunk constraints.
The commit also contains a script that repairs the `dimension_slice`
table if it is broken: it extracts information about dimension slices
that are mentioned in the `chunk_constraint` table but not present in
the `dimension_slice` table and re-creates the rows from the constraints
on the chunks.
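A diagnostic sketch of the inconsistency the script repairs (not the shipped script itself):

```sql
-- Chunk constraints referencing a dimension slice that no longer exists:
SELECT cc.chunk_id, cc.dimension_slice_id
FROM _timescaledb_catalog.chunk_constraint cc
LEFT JOIN _timescaledb_catalog.dimension_slice ds
       ON ds.id = cc.dimension_slice_id
WHERE cc.dimension_slice_id IS NOT NULL
  AND ds.id IS NULL;
```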
If a tablespace is provided for an index on a hypertable, it will
also be used for the index on new chunks. This is done when constraints
are created on a new chunk from the hypertable constraints.
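For example (assuming a tablespace `tablespace2` exists):

```sql
CREATE TABLE conditions(
  time timestamptz NOT NULL,
  device int,
  UNIQUE (time, device) USING INDEX TABLESPACE tablespace2
);
SELECT create_hypertable('conditions', 'time');
-- Indexes backing the unique constraint on newly created chunks are now
-- placed in tablespace2 as well.
```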
Fixes #903
When the relation targetlist of the uncompressed chunk contained
PlaceHolderVars, the construction of the relation targetlist of
the compressed chunk would fail with an error. This patch changes
the behaviour to recurse into those PlaceHolderVars.
When enabling compression on a hypertable, the existing
constraints are cloned to the new compressed hypertable.
During validation of the existing constraints, a loop
through the conkey array is performed, and the constraint name
is erroneously added to the list multiple times. This fix
moves the addition to the list outside the conkey loop.
Fixes #2000
If a unique constraint is created on a hypertable, it could crash under
some circumstances. This commit adds a test for this situation even
though it was already fixed (the issue was reported on the 1.7 branch).
Since pg_regress.sh and pg_isolation_regress.sh were almost
identical, this patch combines them into a single script.
This patch also changes the dynamic schedule generation
so `make installcheck TESTS='foo'` is supported again. This
was broken by a previous refactoring, after which you needed to
specify the exact suite your test was in if you wanted to use TESTS.
This workflow will install our apt package and then try to enable
timescaledb in the database and also check the version installed
from the package against the expected version.
This change copies the chunk object into the distributed copy's
memory context before caching it in the ChunkConnectionList. This
resolves an issue where the chunk was being modified after being
stored which was resulting in rows being sent to the incorrect
data node.
This fixes github issue #2037
Since relation_close will decrease the reference counter, this
might lead to the relation being freed while we are still using
the view query. This patch changes cagg_update_view_definition
to release the relations later and also keeps the locks until
the end of the transaction.
Remove functions that are no longer used due to refactorings.
Removes the following functions:
- hypertable_tuple_match_name
- ts_hypertable_get_all_by_name
- ts_hypertable_has_tuples