Previously when adding equivalence class members for the compressed
chunk's variables, we would only consider Vars. This led us to ignore
cases where the Var was wrapped in a RelabelType,
returning inaccurate results.
Fix the issue by also accepting Vars wrapped in a RelabelType when
building the segmentby equivalence class.
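A minimal sketch of the check, assuming a helper that inspects an
equivalence-class member expression (the function name is illustrative,
not the actual TimescaleDB code):

    #include "postgres.h"
    #include "nodes/primnodes.h"

    /* Look through RelabelType wrappers (binary-compatible casts) so a
     * relabeled segmentby column is still recognized as a plain Var;
     * returns NULL if the expression is not a Var underneath. */
    static Var *
    segmentby_var_from_expr(Expr *expr)
    {
        while (IsA(expr, RelabelType))
            expr = ((RelabelType *) expr)->arg;

        if (IsA(expr, Var))
            return (Var *) expr;

        return NULL;
    }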
Fixes #5585
Use manual poison/unpoison at the existing Valgrind hooks, so that
AddressSanitizer sees palloc/pfree as well, not only the underlying
mallocs, which are called much less often.
Fix some out-of-bounds reads found with this instrumentation.
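A sketch of what the hooks do, using the standard ASan interface header
(the wrapper function names are illustrative, not the actual allocator
hooks):

    #include <stddef.h>
    #include <sanitizer/asan_interface.h>

    /* Called where the allocator already issues the Valgrind NOACCESS
     * request: reads/writes of a pfree'd region now trigger an ASan
     * report even though the underlying malloc chunk is still owned by
     * the memory context. */
    static inline void
    mark_region_freed(void *ptr, size_t size)
    {
        ASAN_POISON_MEMORY_REGION(ptr, size);
    }

    /* Called where the allocator marks memory as handed out by palloc:
     * accesses to the region are legitimate again. */
    static inline void
    mark_region_allocated(void *ptr, size_t size)
    {
        ASAN_UNPOISON_MEMORY_REGION(ptr, size);
    }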
Add a new table `_timescaledb_catalog.telemetry_event` containing
events that should be sent out with telemetry reports. The table is
truncated after the report is generated.
When two concurrent transactions update and delete the same tuple, we
end up with a reference leak. This is because one of the queries in a
transaction fails and we take the error path, but fail to close the
table.
This patch fixes the problem by closing the required tables.
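A minimal sketch of the pattern, assuming the tuple operation reports
failure before erroring out (`relid`, the lock mode, and the error
condition are illustrative):

    #include "postgres.h"
    #include "access/table.h"

    /* Close the relation before taking the error path so the relcache
     * reference is not leaked when a query in the transaction fails. */
    static void
    update_tuple_checked(Oid relid, bool update_succeeded)
    {
        Relation rel = table_open(relid, RowExclusiveLock);

        /* ... perform the tuple update/delete here ... */

        if (!update_succeeded)
        {
            table_close(rel, RowExclusiveLock);
            ereport(ERROR,
                    (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
                     errmsg("tuple concurrently modified")));
        }

        table_close(rel, RowExclusiveLock);
    }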
Fixes #5674
When executing a subtransaction using `BeginInternalSubTransaction`, the
memory context switches from the current context to
`CurTransactionContext`, and when the subtransaction is committed or
aborted using `ReleaseCurrentSubTransaction` or
`RollbackAndReleaseCurrentSubTransaction` respectively, neither the
previous memory context nor the previous resource owner is restored;
execution continues in `TopTransactionContext`. Because of this, both
the memory context and the resource owner are wrong when executing
`calculate_next_start_on_failure`, which causes `run_job` to generate
an error when used with the telemetry job.
This commit fixes this by saving both the resource owner and the memory
context before starting the internal subtransaction and restoring them
after finishing the internal subtransaction.
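A minimal sketch of the save/restore pattern around an internal
subtransaction (the wrapper function and its callback are illustrative):

    #include "postgres.h"
    #include "access/xact.h"
    #include "utils/memutils.h"
    #include "utils/resowner.h"

    /* Run a callback inside an internal subtransaction, then restore the
     * caller's memory context and resource owner, which the
     * subtransaction machinery itself does not restore. */
    static void
    run_in_subtransaction(void (*callback) (void))
    {
        MemoryContext saved_ctx = CurrentMemoryContext;
        ResourceOwner saved_owner = CurrentResourceOwner;

        BeginInternalSubTransaction(NULL);

        PG_TRY();
        {
            callback();
            ReleaseCurrentSubTransaction();
        }
        PG_CATCH();
        {
            RollbackAndReleaseCurrentSubTransaction();
            PG_RE_THROW();
        }
        PG_END_TRY();

        /* Back to where the caller was before the subtransaction */
        MemoryContextSwitchTo(saved_ctx);
        CurrentResourceOwner = saved_owner;
    }

This is the same pattern PL/pgSQL uses around exception blocks.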
Since `ts_bgw_job_run_and_set_next_start` was reading the wrong result
from the telemetry job, this commit fixes that as well. Note that
`ts_bgw_job_run_and_set_next_start` is only used when running the
telemetry job, so it does not cause issues for other jobs.
Use a per-tuple memory context when receiving chunk statistics from
data nodes. Otherwise memory usage is proportional to the number of
chunks and columns.
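A sketch of the per-tuple context pattern, assuming the statistics
arrive as a libpq result set (`process_stats_row` is an illustrative
per-row handler):

    #include "postgres.h"
    #include "utils/memutils.h"
    #include <libpq-fe.h>

    extern void process_stats_row(PGresult *res, int row); /* hypothetical */

    /* Parse every row of the result in a context that is reset between
     * rows, so memory stays bounded no matter how many chunks and
     * columns the data node reports. */
    static void
    process_chunk_stats(PGresult *res)
    {
        MemoryContext per_tuple_ctx =
            AllocSetContextCreate(CurrentMemoryContext,
                                  "chunk stats per-tuple",
                                  ALLOCSET_DEFAULT_SIZES);

        for (int row = 0; row < PQntuples(res); row++)
        {
            MemoryContext oldctx;

            MemoryContextReset(per_tuple_ctx);
            oldctx = MemoryContextSwitchTo(per_tuple_ctx);

            process_stats_row(res, row);

            MemoryContextSwitchTo(oldctx);
        }

        MemoryContextDelete(per_tuple_ctx);
    }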
Fix a regression due to a previous change in c571d54c. That change
unintentionally removed the cleanup of PGresults at the end of
transactions. Add back this functionality in order to reduce memory
usage.
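A sketch of one way to restore the cleanup, assuming results are tracked
in a list and released via a transaction callback (the list and the
registration function are illustrative, not the actual TimescaleDB data
structures):

    #include "postgres.h"
    #include "access/xact.h"
    #include "nodes/pg_list.h"
    #include <libpq-fe.h>

    /* Illustrative list of PGresult pointers handed out during the
     * current transaction; the real code tracks results per connection. */
    static List *remote_results = NIL;

    /* Clear all remaining PGresults when the transaction ends so libpq
     * memory does not accumulate across statements. */
    static void
    result_cleanup_xact_callback(XactEvent event, void *arg)
    {
        if (event == XACT_EVENT_COMMIT || event == XACT_EVENT_ABORT)
        {
            ListCell   *lc;

            foreach(lc, remote_results)
                PQclear((PGresult *) lfirst(lc));

            list_free(remote_results);
            remote_results = NIL;
        }
    }

    /* Register the callback once, e.g. at extension load time */
    void
    register_result_cleanup(void)
    {
        RegisterXactCallback(result_cleanup_xact_callback, NULL);
    }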
During `cagg_migrate` execution, if the user sets the `drop_old`
parameter to `true`, the routine drops the old Continuous Aggregate,
leading to an inconsistent state because the catalog code doesn't handle
this table as a normal catalog table, so its records are not removed
when the Continuous Aggregate is dropped. The same problem happens if
you manually drop the old Continuous Aggregate after the migration.
Fix it by removing the useless foreign key and also adding a new column
named `user_view_definition` to the main plan table to store the
original user view definition for troubleshooting purposes.
Fixed #5662
Bitmap heap scans are special in that they store scan state
during node initialization. This means they would not pick up on
any data that might have been decompressed from the compressed chunk
during a DML command. To avoid this, we update the snapshot
on the node scan state and issue a rescan to update the internal state.
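A sketch of the mechanism using the stock executor and snapshot APIs
(the exact hook point in TimescaleDB may differ):

    #include "postgres.h"
    #include "executor/executor.h"
    #include "nodes/execnodes.h"
    #include "utils/snapmgr.h"

    /* After tuples have been decompressed into the chunk mid-statement,
     * give the scan an up-to-date snapshot and force a rescan so state
     * built at node initialization (such as the TID bitmap) is rebuilt. */
    static void
    refresh_scan_after_decompression(ScanState *scanstate)
    {
        EState *estate = scanstate->ps.state;

        UnregisterSnapshot(estate->es_snapshot);
        estate->es_snapshot = RegisterSnapshot(GetLatestSnapshot());

        ExecReScan(&scanstate->ps);
    }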
Running ALTER TABLE SET with multiple SET clauses on a regular PostgreSQL
table produces an irrelevant error when the timescaledb extension is
installed.
Fix #5641
When JOINs were present during UPDATE/DELETE on compressed chunks,
the code would decompress other hypertables that were not the
target of the UPDATE/DELETE operation and, in the case of self-JOINs,
potentially decompress chunks that did not need to be decompressed.
During UPDATE/DELETE on compressed hypertables, we iterate over the plan
tree to collect all scan nodes. Each scan node can have its own filter
conditions.
Prior to this patch we collected only the first filter condition and
used it for the first chunk, which may be wrong. With this patch, as
soon as we encounter a target scan node, we immediately process its
chunks.
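A sketch of collecting the target scan nodes with a tree walker (the
actual TimescaleDB code may walk the Plan tree at a different stage;
`target_rtindexes` and `decompress_target_chunk` are illustrative):

    #include "postgres.h"
    #include "nodes/execnodes.h"
    #include "nodes/nodeFuncs.h"

    typedef struct CollectScanContext
    {
        Bitmapset  *target_rtindexes;   /* RT indexes of target chunks */
    } CollectScanContext;

    /* hypothetical: decompress the chunk behind this scan using the
     * node's own quals */
    extern void decompress_target_chunk(Scan *scan, List *quals);

    /* Process each target scan node as soon as it is encountered, using
     * the filter conditions attached to that node rather than reusing
     * the first filter found in the tree. */
    static bool
    collect_scans_walker(PlanState *ps, void *context)
    {
        CollectScanContext *cxt = (CollectScanContext *) context;

        if (ps == NULL)
            return false;

        if (IsA(ps, SeqScanState) || IsA(ps, IndexScanState) ||
            IsA(ps, BitmapHeapScanState))
        {
            Scan *scan = (Scan *) ps->plan;

            if (bms_is_member(scan->scanrelid, cxt->target_rtindexes))
                decompress_target_chunk(scan, scan->plan.qual);

            return false;
        }

        return planstate_tree_walker(ps, collect_scans_walker, context);
    }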
Fixes #5640
If a hypertable uses a non-default tablespace, the compressed hypertable
and its corresponding toast table and index are still created in the
default tablespace.
This PR fixes this unexpected behavior and creates the compressed
hypertable and its toast table and index in the same tablespace as
the hypertable.
Fixes #5520
This patch adds an optimization to the DecompressChunk node. If the
query 'order by' and the compression 'order by' are compatible (the
query 'order by' is equal to or a prefix of the compression 'order by'),
the compressed batches of the segments are decompressed in parallel and
merged using a binary heap. This preserves the ordering, so sorting
the result can be avoided. LIMIT queries in particular benefit from
this optimization because only the first tuples of some batches have to
be decompressed. Previously, all segments were completely decompressed
and sorted.
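A sketch of the k-way merge, assuming each decompressed batch is a
sorted stream of tuples (`Batch`, `batch_current_tuple`, `batch_advance`,
`compare_tuples` and `emit_tuple` are illustrative):

    #include "postgres.h"
    #include "executor/tuptable.h"
    #include "lib/binaryheap.h"

    typedef struct Batch Batch;     /* one decompressed segment */

    extern TupleTableSlot *batch_current_tuple(Batch *batch);
    extern bool batch_advance(Batch *batch);
    extern int  compare_tuples(TupleTableSlot *a, TupleTableSlot *b, void *arg);
    extern void emit_tuple(TupleTableSlot *slot);

    /* binaryheap is a max-heap, so the comparison is inverted to make
     * the heap yield the smallest current tuple first, as in
     * nodeMergeAppend.c. */
    static int
    heap_compare_batches(Datum a, Datum b, void *arg)
    {
        return -compare_tuples(
            batch_current_tuple((Batch *) DatumGetPointer(a)),
            batch_current_tuple((Batch *) DatumGetPointer(b)),
            arg);
    }

    /* Merge the sorted batches; a LIMIT can stop consuming early, so
     * most of each batch never needs to be decompressed. */
    static void
    merge_batches(Batch **batches, int nbatches, void *arg)
    {
        binaryheap *heap = binaryheap_allocate(nbatches,
                                               heap_compare_batches, arg);

        for (int i = 0; i < nbatches; i++)
            binaryheap_add_unordered(heap, PointerGetDatum(batches[i]));
        binaryheap_build(heap);

        while (!binaryheap_empty(heap))
        {
            Batch *top = (Batch *) DatumGetPointer(binaryheap_first(heap));

            emit_tuple(batch_current_tuple(top));

            if (batch_advance(top))
                binaryheap_replace_first(heap, PointerGetDatum(top));
            else
                (void) binaryheap_remove_first(heap);
        }

        binaryheap_free(heap);
    }

This mirrors the approach of PostgreSQL's own MergeAppend node, which
also merges pre-sorted child streams with a binary heap.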
Fixes: #4223
Co-authored-by: Sotiris Stamokostas <sotiris@timescale.com>
On compressed hypertables 3 schema levels are in use simultaneously:
* main - hypertable level
* chunk - inheritance level
* compressed chunk
In the build_scankeys method all of them appear, as the slot has its
fields laid out as a row of the main hypertable.
Accessing the slot by the attribute numbers of the chunks may lead to
indexing mismatches if there are differences between the schemas.
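A sketch of mapping attribute numbers by name instead of assuming
identical column ordering (`ht_relid` and `chunk_relid` are illustrative
parameters):

    #include "postgres.h"
    #include "access/attnum.h"
    #include "utils/lsyscache.h"

    /* Translate a hypertable attribute number into the corresponding
     * attribute number of a (compressed) chunk by looking the column up
     * by name, so that dropped or reordered columns in either relation
     * do not cause mismatches. */
    static AttrNumber
    map_attno_by_name(Oid ht_relid, Oid chunk_relid, AttrNumber ht_attno)
    {
        char       *attname = get_attname(ht_relid, ht_attno, false);
        AttrNumber  chunk_attno = get_attnum(chunk_relid, attname);

        if (chunk_attno == InvalidAttrNumber)
            elog(ERROR, "column \"%s\" not found in chunk", attname);

        return chunk_attno;
    }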
Fixes: #5577
Since the telemetry job has a special code path so that it can be used
both from Apache code and from TSL code, trying to execute the telemetry
job with run_job() would fail.
This code allows run_job() to be used with the telemetry job to
trigger a send of telemetry data. You have to belong to the group that
owns the telemetry job (or be the owner of the telemetry job) to be
able to use it.
Closes #5605
We defined some paths to skip the regression test workflows when, for
example, only the CHANGELOG.md and similar files change. In fact this
was not happening, because we didn't define a proper name in the fake
regression workflow that signals to the CI that the required status
checks passed.
Fixed it by defining a proper regression name, like we do for the
regular regression workflow.
There were no permission checks when calling run_job(), so it was
possible to execute any job regardless of who owned it. This commit
adds such checks.
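A minimal sketch of such a check using the standard role-privilege
helper (`job_owner` would come from the jobs catalog entry; the function
name is illustrative):

    #include "postgres.h"
    #include "miscadmin.h"
    #include "utils/acl.h"

    /* Refuse to run a job unless the caller has the privileges of the
     * role that owns the job. */
    static void
    check_job_run_permissions(Oid job_owner, int32 job_id)
    {
        if (!has_privs_of_role(GetUserId(), job_owner))
            ereport(ERROR,
                    (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                     errmsg("insufficient permissions to run job %d",
                            job_id)));
    }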
The internal `cagg_rebuild_view_definition` function was trying to cast
a pointer to `RangeTblRef`, but it actually is a `RangeTblEntry`.
Fix it by using the already existing `direct_query` data structure to
check if there are JOINs in the CAgg to be repaired.
When updating or deleting tuples from a compressed chunk, we first
need to decompress the matching tuples and then proceed with the
operation.
This optimization reduces the amount of data decompressed by using
compressed metadata to decompress only the affected segments.
All children of an append path are required to have the same
parameterization, so we have to reparameterize when the selected path
does not have the right parameterization.
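A sketch of the adjustment using the planner's reparameterization helper
(the wrapper function is illustrative):

    #include "postgres.h"
    #include "nodes/pathnodes.h"
    #include "optimizer/pathnode.h"

    /* Return a version of the child path that carries the
     * parameterization the append parent requires; reparameterize_path()
     * returns NULL when the path type cannot be adjusted, so the caller
     * must fall back to another path. */
    static Path *
    child_path_with_param(PlannerInfo *root, Path *path,
                          Relids required_outer)
    {
        if (bms_equal(PATH_REQ_OUTER(path), required_outer))
            return path;

        return reparameterize_path(root, path, required_outer, 1.0);
    }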
This block was removed by accident. In order to support this, we
need to ensure uniqueness in the compressed data, which is
something we should do in the future, thus removing this block.
Inserting multiple rows into a compressed chunk could have bypassed
the constraint check in case the table had segment_by columns:
decompression is narrowed to only consider candidates matching the
actual segment_by value, and because of caching, decompression was
skipped for follow-up rows of the same chunk.
Fixes #5553
When inserting into a compressed chunk with constraints present,
we need to decompress relevant tuples in order to do speculative
insertion. Usually we used segmentby column values to limit the
number of compressed segments to decompress. This change expands
on that by also using segment metadata to further filter
compressed rows that need to be decompressed.
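A sketch of the metadata filter, assuming each compressed segment stores
min/max metadata for the constrained column (names and parameters are
illustrative; comparisons go through the column's btree comparator):

    #include "postgres.h"
    #include "fmgr.h"

    /* Decide whether a compressed segment could contain a row
     * conflicting with the value being inserted; if not, the segment
     * does not need to be decompressed for the speculative-insertion
     * check. */
    static bool
    segment_may_contain(Datum value, Datum seg_min, Datum seg_max,
                        FmgrInfo *cmp, Oid collation)
    {
        if (DatumGetInt32(FunctionCall2Coll(cmp, collation,
                                            value, seg_min)) < 0)
            return false;   /* value sorts before the segment minimum */

        if (DatumGetInt32(FunctionCall2Coll(cmp, collation,
                                            value, seg_max)) > 0)
            return false;   /* value sorts after the segment maximum */

        return true;        /* may contain a match, must decompress */
    }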