This is mostly a cosmetic change. When only one child is present, there
is no need for ordered append. We might still benefit from a ChunkAppend
node in that situation due to runtime chunk exclusion when we have
non-immutable constraints, so we still add the ChunkAppend node even
with only one child.
This patch drops the following internal SQL functions which were
unused:
_timescaledb_internal.is_main_table(regclass);
_timescaledb_internal.is_main_table(text, text);
_timescaledb_internal.hypertable_from_main_table(regclass);
_timescaledb_internal.main_table_from_hypertable(integer);
_timescaledb_internal.time_literal_sql(bigint, regtype);
While executing compression operations in parallel with inserts into
chunks (both operations can potentially change the chunk status), we
could get into situations where the chunk status ends up inconsistent.
This change re-reads the chunk status after locking the chunk to make
sure data is decompressed correctly when handling ON CONFLICT inserts.
Verify that insertions into compressed chunks do not block each other
if the chunk is already partially compressed. Also check that using
the RETURNING clause works the same way.
These tests verify that changing the physical layout of chunks (either
compressed or uncompressed) yields consistent results. They also verify
that index mapping on compressed chunks is handled correctly.
This patch moves the support functions for histogram, first and last
into the _timescaledb_functions schema. Since we alter the schema
of the existing functions in the upgrade scripts and do not change the
aggregates, this should work completely transparently for any user
objects using those aggregates.
Now we look them up again at execution time, which adds up for tables
with a large number of chunks.
This gives about a 15% speedup (roughly 100 µs) on a small query on a
table from the tests with 50 chunks:
`select id, ts, value from metric_compressed order by id, ts limit 100;`
Decompression produces records that have all the decompressed data
set, but it also retains the fields that are used internally during
decompression.
These didn't cause any problems unless an operation was performed on
the whole row, in which case all the fields that ended up being
non-null could be a potential source of segfaults.
Fixes #5458, #5411
This patch does the following (a usage sketch follows the list):
1. Executor changes to parse the qual ExprState to check if a SEGMENTBY
column is specified in the WHERE clause.
2. Based on step 1, we build scan keys.
3. Executor changes to do a heap scan on the compressed chunk based on
the scan keys and move only those rows which match the WHERE clause
to the staging area, i.e., the uncompressed chunk.
4. Mark affected chunk as partially compressed.
5. Perform regular UPDATE/DELETE operations on staging area.
6. Since there is no Custom Scan (HypertableModify) node for
UPDATE/DELETE operations on PG versions < 14, we don't support this
feature on PG12 and PG13.
Refactor the code path that handles remote distributed COPY. The
main changes include:
* Use a hash table to look up data node connections instead of a list.
* Refactor the per-data node buffer code that accumulates rows into
bigger CopyData messages.
* Reduce the default number of rows in a CopyData message to 100. This
seems to improve throughput, probably striking a better balance
between message overhead and latency.
* The number of rows to send in each CopyData message can now be
changed via a new foreign data wrapper option.
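As a sketch only: the option name below is hypothetical and not taken from this changelog (the real option name may differ), but it illustrates how such a per-server foreign data wrapper option would typically be set:
```sql
-- Hypothetical option name used for illustration; check the actual
-- foreign data wrapper option introduced by this change.
ALTER SERVER data_node_1
    OPTIONS (ADD copy_rows_per_message '200');
```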
Add an isolation test case to check that the chunk object created during
a chunk copy/move operation on the destination data node always has
superuser credentials until the end of the operation.
The joins could be between a continuous aggregate and a hypertable,
a continuous aggregate and a regular Postgres table,
or a continuous aggregate and a regular Postgres view.
During chunk creation, the chunk's dimensional CHECK constraints are
created via an "upcall" to PL/pgSQL code. However, creating
dimensional constraints in PL/pgSQL code sometimes fails, especially
during high-concurrency inserts, because PL/pgSQL code scans metadata
using a snapshot that might not see the same metadata as the C
code. As a result, chunk creation sometimes fails during constraint
creation.
To fix this issue, implement dimensional CHECK-constraint creation in
C code. Other constraints (FK, PK, etc.) are still created via an
upcall, but should probably also be rewritten in C. However, since
these constraints don't depend on recently updated metadata, this is
left to a future change.
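For context, a chunk's dimensional CHECK constraint is roughly of the following shape; this is what is now generated in C instead of via the PL/pgSQL upcall. The chunk name, constraint name, and bounds below are illustrative:
```sql
-- Sketch of a dimensional CHECK constraint on a chunk of a hypertable
-- partitioned on "time" (names and bounds are made up for illustration).
ALTER TABLE _timescaledb_internal._hyper_1_1_chunk
    ADD CONSTRAINT constraint_1
    CHECK ("time" >= '2023-01-01 00:00:00+00'
       AND "time" <  '2023-01-08 00:00:00+00');
```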
Fixes #5456
This patch introduces a C-function to perform the recompression at
a finer granularity instead of decompressing and subsequently
compressing the entire chunk.
This improves performance for the following reasons:
- it needs to sort less data at a time and
- it avoids recreating the decompressed chunk and the heap
inserts associated with that by decompressing each segment
into a tuplesort instead.
If no segmentby is specified when enabling compression, or if an
index does not exist on the compressed chunk, the operation is
performed as before: decompressing and subsequently compressing
the entire chunk.
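For reference, a sketch of triggering recompression on a partially compressed chunk, assuming the existing `recompress_chunk` procedure; the chunk name is illustrative:
```sql
-- Recompress a chunk that contains both compressed and uncompressed data;
-- the chunk name below is illustrative.
CALL recompress_chunk('_timescaledb_internal._hyper_1_2_chunk');
```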
There is a security loophole in current core Postgres, due to which
it's possible for a non-superuser to gain superuser access by attaching
dependencies like expression indexes, triggers, etc. before logical
replication commences.
To avoid this, we now ensure that the chunk objects that get created
for the subscription are created as a superuser. This prevents malicious
dependencies by regular users.
When calling the `cagg_watermark` function to get the watermark of a
Continuous Aggregate we execute a `SELECT MAX(time_dimension)` query
in the underlying materialization hypertable.
The problem is that a `SELECT MAX(time_dimension)` query can be
expensive because it will scan all hypertable chunks, increasing the
planning time for Realtime Continuous Aggregates.
Improved it by creating a new catalog table to serve as a cache table
to store the current Continuous Aggregate watermark in the following
situations (a sketch of the query being avoided follows the list):
- Create CAgg: store the minimum value of hypertable time dimension
data type;
- Refresh CAgg: store the last value of the time dimension materialized
in the underlying materialization hypertable (or the minimum value of
materialization hypertable time dimension data type if there's no
data materialized);
- Drop CAgg Chunks: the same as refresh cagg.
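For illustration, this is roughly the kind of query previously run against the materialization hypertable to compute the watermark; the table and column names below are illustrative, and with this change the value is read from the new catalog cache table instead:
```sql
-- Illustrative names only: the watermark computation that the cache avoids,
-- which scans all chunks of the materialization hypertable.
SELECT COALESCE(MAX(bucket), '-infinity'::timestamptz)
FROM _timescaledb_internal._materialized_hypertable_2;
```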
Closes #4699, #5307
During compression, autovacuum used to be disabled for the uncompressed
chunk and re-enabled after decompression. This leads to Postgres
maintenance issues. Let's not disable autovacuum for the uncompressed
chunk anymore and let Postgres take care of the stats in its natural way.
Fixes #309
No functional changes, mostly just reshuffles the code to prepare for
batch decompression.
Also removes unneeded repeated column value stores and ExecStoreTuple,
to save 3-5% execution time on some queries.
The functions `PQconndefaults` and `PQmakeEmptyPGresult` call
`malloc` and can return NULL if they fail to allocate memory for the
defaults and the empty result. This is checked with an `Assert`, but
asserts are removed in production builds.
Replace the `Assert` with checks that generate an error in production
builds rather than trying to dereference the pointer and causing a
crash.
This patch allows unique constraints on compressed chunks. When
trying to INSERT into compressed chunks with unique constraints
any potentially conflicting compressed batches will be decompressed
to let postgres do constraint checking on the INSERT.
With this patch only INSERT ON CONFLICT DO NOTHING will be supported.
For decompression, only segmentby information is considered to
determine conflicting batches. This will be enhanced in a follow-up
patch to also include orderby metadata so that fewer batches need
to be decompressed.
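A minimal usage sketch, assuming a hypothetical hypertable `readings` with a unique constraint and compression segmented by `sensor_id`; a conflicting insert only needs to decompress batches whose segmentby values could conflict:
```sql
-- Hypothetical table with a unique constraint, compressed by sensor_id.
CREATE TABLE readings(time timestamptz NOT NULL, sensor_id int, value float,
    UNIQUE (time, sensor_id));
SELECT create_hypertable('readings', 'time');
ALTER TABLE readings SET (timescaledb.compress,
    timescaledb.compress_segmentby = 'sensor_id');
SELECT compress_chunk(c) FROM show_chunks('readings') c;

-- Only ON CONFLICT DO NOTHING is supported on compressed chunks for now;
-- potentially conflicting batches for sensor_id = 1 are decompressed first
-- so Postgres can do the constraint checking.
INSERT INTO readings VALUES ('2023-01-01 00:00:00+00', 1, 3.14)
ON CONFLICT DO NOTHING;
```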
Reindexing a relation requires AccessExclusiveLock which prevents
queries on that chunk. This patch changes decompress_chunk to update
the index during decompression instead of reindexing. This patch
does not change the required locks as there are locking adjustments
needed in other places to make it safe to weaken that lock.
This patch changes the way user-defined FDW options (e.g., startup
costs, per-tuple costs) are handled. So far, these values were retrieved
in apply_fdw_and_server_options() but reset to default values afterward.
Problem:
When the GUC timescaledb.license = 'timescale' is set in the conf file
and a SIGHUP is sent to the postgres process, a reload of the tsl
module is triggered.
This reload happens in 2 phases:
1. tsl_module_load is called, which loads the module only if it is not
already loaded.
2. ts_module_init is called for every ts_license_guc_assign_hook,
irrespective of whether it is a new load. This initialization function
also registers an on_proc_exit function to be called on exit.
The list of on_proc_exit methods is maintained in a fixed array
on_proc_exit_list of size MAX_ON_EXITS (20), which fills up on
repeated SIGHUPs and hence causes an error.
Fix:
The fix is to make ts_module_init() register the on_proc_exit
callback only when the module is reloaded, and not on every init
call.
Closes #5233
The commit 96574a7 changes the handling of the file_trailer_received
flag. It is now only used in asserts and not in any other kind of logic.
This patch encapsulates the file_trailer_received in a
USE_ASSERT_CHECKING macro.
The copy fetcher fetches tuples in batches. When the last element in the
batch is the file trailer, the trailer was not handled correctly. The
existing logic did not perform a PQgetCopyData in that case. Therefore
the state of the fetcher was not set to EOF and the copy operation was
not correctly finished at this point.
Fixes: #5323
Non-equality operators in a WHERE clause on a SEGMENTBY column of type
text/bytea were not pushed down to the Seq Scan node of the compressed
chunk. This patch fixes this issue.
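For example (the table and column names are illustrative), a filter like the following on a text segmentby column is now pushed down to the Seq Scan on the compressed chunk:
```sql
-- "device" is an illustrative text SEGMENTBY column; the non-equality
-- predicate below is now evaluated in the Seq Scan on the compressed chunk
-- instead of above it.
EXPLAIN (costs off)
SELECT * FROM metrics_compressed
WHERE device > 'device_5';
```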
Fixes #5286
Make `partialize_agg()` support parallel query execution. To make this
work, the finalize node needs to combine the individual partials from
each parallel worker, but the final step that turns the resulting
partial into the finished aggregate should not happen. Thus, in the
case of distributed hypertables, each data node can run a parallel
query to compute a partial, and the access node can later combine and
finalize these partials into the final aggregate. Essentially, there
will be one combine step (minus final) on each data node, and then
another one plus final on the access node.
To implement this, the finalize aggregate plan is simply modified to
elide the final step, and to reserialize the partial. It is only
possible to do this at the plan stage; if done at the path stage, the
PostgreSQL planner will hit assertions that assume that the node has
certain values (e.g., it doesn't expect combine Paths to skip the
final step).
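A usage sketch of the kind of partial aggregation this affects; the table `conditions` is illustrative, and `partialize_agg` wraps an aggregate so that only the partial is computed:
```sql
-- Compute partials only; they can later be combined and finalized with
-- finalize_agg. The table "conditions" is illustrative.
SELECT time_bucket('1 hour', time) AS bucket,
       _timescaledb_internal.partialize_agg(avg(temperature)) AS avg_partial
FROM conditions
GROUP BY bucket;
```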
Previously we used `date_part('epoch', interval)` and integer division
internally to determine whether the top cagg's interval is a
multiple of its parent's.
This led to precision loss and wrong results
in the case of intervals with sub-second components.
Fixed by using the `ts_interval_value_to_internal` function to convert
intervals to appropriate integer representation for division.
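A small illustration of the precision loss in plain PostgreSQL (values illustrative): `date_part` returns fractional seconds, and converting that to an integer drops the sub-second part before the division:
```sql
-- For sub-second intervals the epoch value is fractional; converting it to
-- an integer loses the sub-second part, so the multiple-of check goes wrong.
SELECT date_part('epoch', interval '250 milliseconds');          -- 0.25
SELECT date_part('epoch', interval '250 milliseconds')::bigint;  -- 0
```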
Fixes #5277
Concurrent inserts into a dist hypertable after a data node was marked
as unavailable would produce a 'tuple concurrently deleted' error.
The problem occurs because of missing tuple-level locking during the
scan and a concurrent delete from the chunk_data_node table afterwards,
which should be treated as a `SELECT … FOR UPDATE` case instead.
Based on the fix by @erimatnor.
Fix #5153
When a Continuous Aggregate is created, the `chunk_interval_size` is
defined by the `chunk_interval_size` of the original hypertable
multiplied by a fixed factor of 10.
The problem is that currently, when we create a Hierarchical Continuous
Aggregate, the same factor is applied again, leading to an exponential
`chunk_interval_size`.
Fixed it by just copying the `chunk_interval_size` from the base
Continuous Aggregate for a Hierarchical Continuous Aggregate.
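For illustration (numbers made up): with a hypertable `chunk_interval_size` of 1 day, the Continuous Aggregate gets 10 days; before this fix, a Hierarchical Continuous Aggregate on top of it would get 100 days, whereas now it simply copies the 10 days.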
Fixes #5382
We added checks via #4846 to handle DML HA when replication factor is
greater than 1 and a datanode is down. Since each insert can go to a
different chunk with a different set of datanodes, we added checks
on every insert to check if DNs are unavailable. This increased CPU
consumption on the AN leading to a performance regression for RF > 1
code paths.
This patch fixes this regression. We now track if any DN is marked as
unavailable at the start of the transaction and use that information to
reduce unnecessary checks for each inserted row.
Make the copy fetcher more asynchronous by separating the sending of
the request for data from the receiving of the response. By doing
that, the async append node can send the request to each data node
before it starts reading the first response. This can massively
improve the performance because the response isn't returned until the
remote node has finished executing the query and is ready to return
the first tuple.
Renamed:
tsl/test/sql/size_utils.sql
tsl/test/expected/size_utils.out
To:
tsl/test/sql/size_utils_tsl.sql
tsl/test/expected/size_utils_tsl.out
because they conflicted with test/sql/size_utils.sql.
At the moment, the MERGE command is not supported on distributed
hypertables. This patch ensures that the join pushdown code ignores the
invocation by the MERGE command.
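For reference, a sketch of the kind of statement affected; the table names are illustrative, and on a distributed hypertable such a MERGE is simply not handled by the join pushdown path:
```sql
-- Illustrative MERGE; the join pushdown code now ignores this invocation
-- on distributed hypertables instead of trying to push the join down.
MERGE INTO metrics_dist AS m
USING staged_updates AS s
    ON m.time = s.time AND m.device_id = s.device_id
WHEN MATCHED THEN
    UPDATE SET value = s.value
WHEN NOT MATCHED THEN
    INSERT (time, device_id, value) VALUES (s.time, s.device_id, s.value);
```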
Different num_chunks values were reported by
timescaledb_information.hypertables and
timescaledb_information.chunks.
The view definition of hypertables was not filtering out dropped
and OSM chunks.
Fixes #5338
The `bucket_info` variable is initialized by `caggtimebucketinfo_init`
function called inside the following branch:
`if (rte->relkind == RELKIND_RELATION || rte->relkind == RELKIND_VIEW)`
If for some reason we don't enter this branch, then `bucket_info`
will not be initialized, leading to an uninitialized variable being
returned at the end of the `cagg_validate_query` function.
Fixed it by zero-initializing the `bucket_info` variable when
declaring it.
Found by coverity scan.