Recent refactorings in the INSERT into compressed chunk code
path allowed us to support this feature, but the check that prevented
users from using it was not removed as part
of that patch. This patch removes the blocker code and adds a
minimal test case.
As part of inserting into a compressed table, the tuple is
materialized, which computes the data size for the tuple using
`heap_compute_data_size`. When computing the data size of the tuple,
columns that are null are ignored. Dropped columns, however, are not
explicitly checked; instead, `heap_compute_data_size` relies on them
being set to null.
When reading tuples from a compressed table for insert, the null vector
is cleared, meaning that every column is non-null by default. Since
dropped columns are not explicitly processed, they are expected to have
a defined value, which they do not, causing a crash when an attempt is
made to dereference them.
This commit fixes the issue by setting the null vector to all null; the
code that follows overwrites the entries for existing columns with the
proper null bits, while the dropped columns remain null.
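A minimal sketch of the idea (not the actual TimescaleDB code; the
helper name and structure are illustrative), using the standard
PostgreSQL slot API:

```c
#include "postgres.h"
#include "access/tupdesc.h"
#include "executor/tuptable.h"

/*
 * Sketch only: mark every column of the slot as null before filling it
 * from a compressed batch.  Dropped columns are never touched afterwards,
 * so they stay null and heap_compute_data_size() skips them instead of
 * dereferencing an undefined value.
 */
static void
init_decompressed_slot(TupleTableSlot *slot)
{
	TupleDesc	tupdesc = slot->tts_tupleDescriptor;

	/* Default everything to null; real columns are overwritten below. */
	memset(slot->tts_isnull, true, sizeof(bool) * tupdesc->natts);

	for (int attno = 0; attno < tupdesc->natts; attno++)
	{
		Form_pg_attribute attr = TupleDescAttr(tupdesc, attno);

		if (attr->attisdropped)
			continue;			/* remains null */

		/* ... decompress the column into slot->tts_values[attno] and
		 * clear slot->tts_isnull[attno] for non-null values ... */
	}
}
```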
Fixes #4251
AFTER ROW triggers are not supported on compressed chunks.
Directly call the continuous aggregate trigger function for copies.
This fix is similar to PR 3764, which handles cagg triggers for
inserts into compressed chunks.
When trying to insert into the internal compressed hypertable,
timescaledb would segfault. This patch blocks direct inserts into
the internal compressed hypertable through our tuple routing.
Internally we don't use this code path for compression as we
create chunks explicitly and insert directly into those chunks
in compress_chunk.
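A hedged sketch of the kind of check this adds in tuple routing (the
function name, error text, and how the flag is determined are
illustrative, not the actual TimescaleDB code):

```c
#include "postgres.h"

/*
 * Sketch only: reject tuple routing into the internal compressed
 * hypertable.  Determining whether the target is the internal compressed
 * hypertable is TimescaleDB-specific and omitted here; compress_chunk
 * inserts into the compressed chunks directly and never takes this path.
 */
static void
reject_internal_compressed_insert(bool is_internal_compressed_hypertable)
{
	if (is_internal_compressed_hypertable)
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("cannot insert into the internal compressed hypertable directly")));
}
```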
Fixes #3920
AFTER ROW triggers do not fire when we insert into a compressed chunk.
This causes a problem for caggs as invalidations are not recorded.
Explicitly call the function to record invalidations when we
insert into a compressed chunk (if the hypertable has caggs
defined on it).
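A hedged sketch of the shape of the fix (all names below are
hypothetical stand-ins; the real invalidation-logging entry point is
internal to TimescaleDB):

```c
#include "postgres.h"

/* Hypothetical declarations standing in for TimescaleDB internals. */
extern bool hypertable_has_caggs(Oid hypertable_relid);
extern void invalidation_log_add_entry(Oid hypertable_relid, int64 start, int64 end);

/*
 * Sketch only: since the AFTER ROW trigger does not fire for inserts into
 * a compressed chunk, record the continuous aggregate invalidation
 * explicitly for the inserted tuple's time value.
 */
static void
record_cagg_invalidation(Oid hypertable_relid, int64 tuple_time)
{
	if (!hypertable_has_caggs(hypertable_relid))
		return;					/* no caggs defined, nothing to record */

	invalidation_log_add_entry(hypertable_relid, tuple_time, tuple_time);
}
```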
Fixes#3410.
When a chunk is not found, we print a generic error message that does
not hint at what we are looking for, which means that it is very hard
to locate the problem.
This commit adds details to the error message, printing out the values
used for the scan key when searching for the chunk.
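For example, a hedged sketch of such an error using PostgreSQL's
ereport machinery (the wording and the variable holding the formatted
scan key values are illustrative):

```c
#include "postgres.h"

/*
 * Sketch only: report the scan key values used when looking up the
 * chunk, so a "chunk not found" error can actually be traced.
 */
static void
report_chunk_not_found(const char *scankey_values)
{
	ereport(ERROR,
			(errcode(ERRCODE_INTERNAL_ERROR),
			 errmsg("chunk not found"),
			 errdetail("Chunk was searched for with scan key values (%s).",
					   scankey_values)));
}
```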
Related-To: #2344
Related-To: #3400
Related-To: #153
If insertion is attempted into a chunk that is compressed, the error
message is very brief. This commit adds a hint that the chunk should be
decompressed before inserting into it and also lists the triggers on
the chunk to make debugging easier.
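A hedged sketch of the improved error (message, detail, and hint text
are illustrative, not the exact strings used):

```c
#include "postgres.h"

/*
 * Sketch only: point the user at decompressing the chunk and show which
 * triggers exist on it, since triggers are a common cause of the error.
 */
static void
report_insert_into_compressed_chunk(const char *chunk_name, const char *trigger_list)
{
	ereport(ERROR,
			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
			 errmsg("insert into compressed chunk \"%s\" is not supported", chunk_name),
			 errdetail("Triggers on the chunk: %s.", trigger_list),
			 errhint("Decompress the chunk before inserting into it.")));
}
```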
After inserts go into a compressed chunk, the chunk is marked as
unordered. This PR adds a new function recompress_chunk that
compresses the data and sets the status back to compressed. Further
optimizations for this function are planned but not part of this PR.
This function can be invoked by calling
SELECT recompress_chunk(<chunk_name>).
The recompress_chunk function is automatically invoked by the
compression policy job when it sees that a chunk is in the unordered
state.
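A hedged sketch of the policy decision (the status flags and helper
names are hypothetical, not TimescaleDB's actual chunk status API):

```c
#include "postgres.h"

/* Hypothetical stand-ins for TimescaleDB's chunk status handling. */
#define CHUNK_STATUS_COMPRESSED				1
#define CHUNK_STATUS_COMPRESSED_UNORDERED	2

extern void policy_compress_chunk(Oid chunk_relid);
extern void policy_recompress_chunk(Oid chunk_relid);

/*
 * Sketch only: the compression policy recompresses a chunk whose batches
 * have become unordered due to inserts, and compresses it normally when
 * it has not been compressed at all yet.
 */
static void
policy_handle_chunk(Oid chunk_relid, int32 status)
{
	if (status & CHUNK_STATUS_COMPRESSED_UNORDERED)
		policy_recompress_chunk(chunk_relid);
	else if (!(status & CHUNK_STATUS_COMPRESSED))
		policy_compress_chunk(chunk_relid);
}
```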
Compressed chunks that have received inserts after being compressed
have batches that are not ordered according to compress_orderby. For
those chunks we cannot set pathkeys on the DecompressChunk node, and we
need an extra sort step if we require ordered output from those chunks.
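A hedged sketch of the planner-side consequence (the helper and
parameter names are illustrative, not TimescaleDB's actual planner
code):

```c
#include "postgres.h"
#include "nodes/pathnodes.h"

/* Hypothetical stand-in for checking the chunk's unordered status. */
extern bool chunk_is_unordered(Oid chunk_relid);

/*
 * Sketch only: only advertise the compress_orderby ordering on the
 * DecompressChunk path when the chunk's batches are still ordered;
 * otherwise leave pathkeys empty so the planner adds an explicit Sort
 * when ordered output is required.
 */
static void
set_decompress_pathkeys(Path *path, Oid chunk_relid, List *orderby_pathkeys)
{
	if (!chunk_is_unordered(chunk_relid))
		path->pathkeys = orderby_pathkeys;
	else
		path->pathkeys = NIL;
}
```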