This patch removes the restrictions that prevented changing compression
settings when compressed chunks exist. Compression settings can now
be changed at any time; the change does not affect existing compressed
chunks, but any subsequent compress_chunk will apply the new settings.
A decompress_chunk/compress_chunk cycle will therefore recompress the
chunk with the new settings.
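A minimal sketch of the new workflow, assuming a hypothetical compressed
hypertable `metrics` (table and chunk names are illustrative):

    -- Change the settings; existing compressed chunks are unaffected.
    ALTER TABLE metrics SET (timescaledb.compress_segmentby = 'device_id');

    -- Recompressing a chunk applies the new settings.
    SELECT decompress_chunk('_timescaledb_internal._hyper_1_1_chunk');
    SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');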
Logging and caching related tables from the timescaledb extension
should not be dumped using pg_dump. Our scripts now exclude a few such
unwanted tables. Apart from being unnecessary, the "job_errors" table
had restricted permissions, causing additional problems in pg_dump.
We no longer include such tables in dumps.
Fixes #5449
Previously, when using BETWEEN ... AND together with additional
constraints in a WHERE clause, the BETWEEN was not handled correctly
because it was wrapped in a BoolExpr node, which prevented plan-time
exclusion. The flattening of such expressions happens in
`eval_const_expressions`, which gets called after our constify_now code.
This commit fixes the handling of this case to allow chunk exclusion to
take place at planning time.
It also makes sure we use our mock timestamp in all places in tests.
Previously we were using a mix of current_timestamp_mock and now(),
which returned unexpected/incorrect results.
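A hedged example of a query shape that now qualifies for plan-time chunk
exclusion (the hypertable `metrics` and its columns are hypothetical):

    SELECT *
    FROM metrics
    WHERE time BETWEEN now() - INTERVAL '1 day' AND now()
      AND device_id = 17;  -- the extra constraint wraps BETWEEN in a BoolExpr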
Commit #6513 removed some restrictions on chunk operations, which made
it possible to add constraints directly to OSM chunks. This operation
must remain blocked on OSM chunks, so the present commit ensures that
adding a constraint directly on an OSM chunk is blocked again.
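An illustration of the operation that is blocked again; the chunk name
is hypothetical:

    -- Rejected on OSM chunks: constraints must not be added directly.
    ALTER TABLE _timescaledb_internal._hyper_1_2_chunk
        ADD CONSTRAINT device_id_positive CHECK (device_id > 0);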
Segmentwise recompression grabbed an AccessExclusiveLock on
the compressed chunk index, which blocked all read operations
on the chunk that involved said index. Reducing the lock to
ExclusiveLock allows reads, unblocking other ongoing operations.
This patch changes the dump configuration for
_timescaledb_catalog.metadata to include all entries. To allow loading
logical dumps with this configuration, an insert trigger is added that
turns uniqueness conflicts into updates so they do not block the restore.
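A minimal sketch of such a trigger on a simplified key/value table; the
names and layout are illustrative, not the actual catalog definition:

    CREATE TABLE metadata (key text PRIMARY KEY, value text);

    CREATE FUNCTION metadata_insert_guard() RETURNS trigger AS $$
    BEGIN
        -- Turn a conflicting INSERT into an UPDATE of the existing row.
        UPDATE metadata SET value = NEW.value WHERE key = NEW.key;
        IF FOUND THEN
            RETURN NULL;  -- suppress the original INSERT
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER metadata_upsert
        BEFORE INSERT ON metadata
        FOR EACH ROW EXECUTE FUNCTION metadata_insert_guard();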
Since #6505, the changelog script tries to access
secrets.ORG_AUTOMATION_TOKEN. However, accessing secrets is not possible
for PRs. This PR changes the token to the default access token, which
is available in PRs and provides read access to the issue API.
We couldn't use a parameterized index scan by the segmentby column on a
compressed chunk, effectively making joins on segmentby columns
unusable. We missed this when bulk-updating test references for PG16.
The underlying reason is the incorrect creation of equivalence members
for segmentby columns of compressed chunk tables. This commit fixes it.
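A hedged example of a join that benefits, assuming a hypothetical
hypertable `metrics` compressed with device_id as the segmentby column:

    -- With correct equivalence members, the planner can use a
    -- parameterized index scan on device_id for each outer row.
    SELECT d.name, m.value
    FROM devices d
    JOIN metrics m ON m.device_id = d.id
    WHERE d.location = 'lab42';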
Since the optional time_bucket arguments like offset, origin, and
timezone shift the output by at most one bucket width, we can optimize
these similarly to how we optimize the other time_bucket constraints.
Fixes #4825
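An illustrative query now eligible for this optimization (table and
column names are hypothetical):

    -- The timezone argument shifts the bucket by at most one bucket
    -- width, so chunk exclusion constraints can still be derived.
    SELECT *
    FROM metrics
    WHERE time_bucket('1 day', time, 'Europe/Berlin') >= '2024-01-01';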
When time_bucket is compared to a constant in a WHERE clause, we also
add a condition on the underlying time variable
(ts_transform_time_bucket_comparison). Unfortunately, we only did this
for plan-time constants, which prevented chunk exclusion when the
interval is given by a query parameter and a generic plan is used. This
commit also tries to apply this optimization after applying the
execution-time constants.
This PR also enables startup exclusion based on parameterized filters.
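A hedged illustration with a generic plan and a parameterized interval
(the `metrics` hypertable is hypothetical):

    SET plan_cache_mode = force_generic_plan;

    PREPARE recent(interval) AS
        SELECT count(*) FROM metrics
        WHERE time_bucket('1 hour', time) >= now() - $1;

    -- Chunk exclusion can now use the execution-time value of $1.
    EXECUTE recent('1 day');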
In the binary heap, address smaller dedicated structures that hold only
the data necessary for comparison, instead of the entire CompressedBatch
structures. This is important when we have a large number of batches.
This patch removes some version checks that are now superfluous.
The oldest version our update process needs to be able to handle is
2.1.0 as previous versions will not work with currently supported
postgres versions.
The changes for per-chunk compression settings got rid of some
locking that previously prevented compressing different chunks
of the same hypertable concurrently. This patch just adds an isolation
test for that functionality.
This patch implements changes to the compressed hypertable to allow
per-chunk configuration. To enable this, the compressed hypertable can
no longer be part of an inheritance tree, as the schema of the
compressed chunk is determined by the compression settings. While this
patch implements all the underlying infrastructure changes, the
restrictions on changing compression settings remain intact and will be
lifted in a follow-up patch.
We enabled scans using indexes for a chunk that is about to be
compressed. The theory was that avoiding a tuplesort will be a win if
there is an index matching the compression settings. However, a few
customers have reported very slow compression timings with lots of disk
usage. It's important to know which scan is being used for the
compression in such cases to help debug the issue. There's an existing
GUC parameter for this which was available in debug builds only until
now; make it available in release builds as well.
We used to reindex the relation when compressing chunks. Recently
we moved to inserting into the indexes on compressed chunks in
order to reduce the locks necessary for the operation. Since
recompression uses RowCompressor, it also started inserting
tuples into indexes, but we never removed the relation reindexing.
This change removes the unnecessary reindex call.
One of the merge join queries is known to switch inner and
outer join relations because the costs seem to be the same.
Adding an additional filter on one of the relations should
change the costs enough so they are not interchangeable.
Previously we would only check whether data nodes were defined or
distributed hypertables were present, which might not be true on data
nodes themselves. We now prevent the update on any installation that
has dist_uuid defined, which is also true on data nodes.
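A minimal sketch of such a check, assuming dist_uuid lives in the
metadata catalog under that key (an assumption for illustration):

    -- True on installations that were part of a multi-node setup.
    SELECT EXISTS (
        SELECT 1
        FROM _timescaledb_catalog.metadata
        WHERE key = 'dist_uuid'
    ) AS has_dist_uuid;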
The extension state is not easily accessible in release builds, which
makes debugging issues with the loader very difficult. This commit
introduces a new schema `_timescaledb_debug` and makes the function
`ts_extension_get_state` available in release builds as well, as
`_timescaledb_debug.extension_state`.
See #1682
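The new function can be called directly:

    -- Returns the loader's view of the extension state.
    SELECT _timescaledb_debug.extension_state();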
This change makes gapfill calculate timezone offsets the same way we do
in time_bucket when a timezone is supplied. Without this, the timezones
would not align and we would get timestamp mismatches, causing double
entries for a single bucket.
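A hedged example of a gapfill query with a supplied timezone (table and
columns are hypothetical):

    SELECT time_bucket_gapfill('1 day', time, 'Europe/Berlin') AS day,
           avg(value) AS avg_value
    FROM metrics
    WHERE time >= '2024-01-01' AND time < '2024-02-01'
    GROUP BY day
    ORDER BY day;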
This patch adds a function `ts_chunk_get_by_hypertable_id` to return
a List of all chunks belonging to a hypertable. The returned chunk
objects will not have any constraints or dimension information filled
in.
So far, we have also checked the changelog files on our backport branch.
However, the test always failed since the backported PR numbers did not
match the changelog PR numbers. This PR disables the changelog check on
the backport branch.
The OS X build currently fails with a brew link error for Python. This
PR fixes the problem by allowing the overwriting of existing files in
the Python install step.