When calling the `cagg_watermark` function to get the watermark of a
Continuous Aggregate we execute a `SELECT MAX(time_dimension)` query
on the underlying materialization hypertable.
The problem is that a `SELECT MAX(time_dimension)` query can be
expensive because it scans all hypertable chunks, increasing the
planning time for Realtime Continuous Aggregates.
Improved this by creating a new catalog table to serve as a cache for
the current Continuous Aggregate watermark, updated in the following
situations:
- Create CAgg: store the minimum value of the hypertable time dimension
data type;
- Refresh CAgg: store the last value of the time dimension materialized
in the underlying materialization hypertable (or the minimum value of
the materialization hypertable time dimension data type if there's no
data materialized);
- Drop CAgg chunks: the same as Refresh CAgg.
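A minimal sketch of the lookup that now reads the cached value instead of scanning the materialization hypertable (the continuous aggregate name `readings_hourly` is just an illustrative placeholder):

```sql
-- The watermark returned here is now served from the new catalog cache table
-- instead of running SELECT MAX(time_dimension) over all chunks of the
-- materialization hypertable.
SELECT _timescaledb_internal.cagg_watermark(mat_hypertable_id)
FROM _timescaledb_catalog.continuous_agg
WHERE user_view_name = 'readings_hourly';
```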
Closes#4699, #5307
During compression, autovacuum used to be disabled for the uncompressed
chunk and re-enabled after decompression. This leads to postgres
maintenance issues. Let's not disable autovacuum for the uncompressed
chunk anymore, and let postgres take care of the stats in its natural way.
Fixes#309
No functional changes, mostly just reshuffles the code to prepare for
batch decompression.
Also removes unneeded repeated column value stores and ExecStoreTuple,
to save 3-5% execution time on some queries.
Invalidate the catalog snapshot in the scanner to ensure that any
lookups into `pg_catalog` use a snapshot that is consistent with the
snapshot used to scan TimescaleDB metadata.
This fixes an issue where a chunk could be looked up without having a
proper relid filled in, causing an assertion failure
(`ASSERT_IS_VALID_CHUNK`). When a chunk is scanned and found (in
`chunk_tuple_found()`), the Oid of the chunk table is filled in using
`get_relname_relid()`, which could return InvalidOid due to use of a
different snapshot when scanning `pg_class`. Calling
`InvalidateCatalogSnapshot()` before starting the metadata scan in
`Scanner` ensures the pg_catalog snapshot used is refreshed.
Due to the difficulty of reproducing this MVCC issue, no regression or
isolation test is provided, but it is easy to hit this bug when doing
highly concurrent COPY operations into a distributed hypertable.
The functions `PQconndefaults` and `PQmakeEmptyPGresult` call
`malloc` and can return NULL if they fail to allocate memory for the
defaults and the empty result. This is checked with an `Assert`, but
asserts are removed in production builds.
Replace the `Assert` with checks that generate an error in production
builds rather than trying to dereference the pointer and causing a
crash.
When adding new status values, we must make sure to add special
handling for these values to the downgrade script, as previous
versions will not know how to deal with them.
This patch allows unique constraints on compressed chunks. When
trying to INSERT into compressed chunks with unique constraints,
any potentially conflicting compressed batches will be decompressed
to let postgres do constraint checking on the INSERT.
With this patch only INSERT ON CONFLICT DO NOTHING will be supported.
For decompression only segmentby information is considered to
determine conflicting batches. This will be enhanced in a follow-up
patch to also include orderby metadata, to require decompressing
fewer batches.
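A minimal sketch of the supported path, using hypothetical table and column names:

```sql
-- Hypothetical schema: a hypertable with a unique constraint and compression
-- segmented by device_id.
CREATE TABLE readings (
    time      timestamptz NOT NULL,
    device_id int         NOT NULL,
    value     float,
    UNIQUE (device_id, time)
);
SELECT create_hypertable('readings', 'time');
ALTER TABLE readings SET (timescaledb.compress,
                          timescaledb.compress_segmentby = 'device_id');

-- After compress_chunk() has run, an insert that might conflict with a
-- compressed batch decompresses only the batches with a matching segmentby
-- value before postgres performs the constraint check.
INSERT INTO readings VALUES ('2023-01-01 00:00:00+00', 1, 0.5)
ON CONFLICT DO NOTHING;
```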
Reindexing a relation requires an AccessExclusiveLock, which prevents
queries on that chunk. This patch changes decompress_chunk to update
the index during decompression instead of reindexing. This patch
does not change the required locks, as there are locking adjustments
needed in other places to make it safe to weaken that lock.
This patch changes the way user-defined FDW options (e.g., startup
costs, per-tuple costs) are handled. So far, these values were retrieved
in apply_fdw_and_server_options() but reset to default values afterward.
Problem:
When the GUC timescaledb.license = 'timescale' is set in the conf file
and a SIGHUP is sent to the postgres process, a reload of the tsl
module is triggered.
This reload happens in 2 phases: 1. tsl_module_load is called, which
will load the module only if not already loaded, and 2. ts_module_init
is called for every ts_license_guc_assign_hook irrespective of whether
it is a new load. This ts_module_init initialization function also
registers an on_proc_exit function to be called on exit.
The list of on_proc_exit methods is maintained in a fixed array
on_proc_exit_list of size MAX_ON_EXITS (20), which fills up on
repeated SIGHUPs and hence causes an error.
Fix:
The fix is to make ts_module_init() register the on_proc_exit
callback only in case the module is reloaded, and not in every init
call.
Closes#5233
Commit 96574a7 changed the handling of the file_trailer_received
flag. It is now only used in asserts and not in any other kind of logic.
This patch guards file_trailer_received with the
USE_ASSERT_CHECKING macro.
Use explicit version checks to decide whether to define a backported
RelationGetSmgr function or rely on the function being available.
This simplifies the cmake code a bit and makes the backporting similar
to how we handle this for other functions.
The copy fetcher fetches tuples in batches. When the last element in the
batch was the file trailer, it was not handled correctly: the existing
logic did not perform a PQgetCopyData in that case. Therefore,
the state of the fetcher was not set to EOF and the copy operation was
not correctly finished at this point.
Fixes: #5323
WHERE clauses with non-equality operators on SEGMENTBY columns of type
text/bytea were not pushed down to the Seq Scan node of the compressed
chunk. This patch fixes this issue.
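A minimal sketch of a query that benefits, assuming a compressed hypertable `readings` segmented by a text column `device`:

```sql
-- The non-equality predicate on the segmentby column is now pushed down to
-- the Seq Scan on the compressed chunk instead of being evaluated only after
-- decompression.
SELECT time, device, value
FROM readings
WHERE device <> 'sensor-0001';
```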
Fixes#5286
Make `partialize_agg()` support parallel query execution. To make this
work, the finalize node needs to combine the individual partials from
each parallel worker, but the final step that turns the resulting
partial into the finished aggregate should not happen. Thus, in the
case of distributed hypertables, each data node can run a parallel
query to compute a partial, and the access node can later combine and
finalize these partials into the final aggregate. Essentially, there
will be one combine step (minus final) on each data node, and then
another one plus final on the access node.
To implement this, the finalize aggregate plan is simply modified to
elide the final step, and to reserialize the partial. It is only
possible to do this at the plan stage; if done at the path stage, the
PostgreSQL planner will hit assertions that assume that the node has
certain values (e.g., it doesn't expect combine Paths to skip the
final step).
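A rough sketch of the data node side of this, using hypothetical table and column names:

```sql
-- On a data node this query can now run with parallel workers: the partials
-- produced by the workers are combined but not finalized, so the access node
-- can combine them across data nodes and apply the final step itself.
SELECT time_bucket('1 hour', time) AS bucket,
       _timescaledb_internal.partialize_agg(avg(value)) AS avg_partial
FROM readings
GROUP BY bucket;
```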
When called with a negative chunk_target_size_bytes,
calculate_chunk_interval will hit an assertion failure. This patch adds
error handling for this condition. Found by sqlsmith.
Previously we used date_part("epoch", interval) and integer division
internally to determine whether the top cagg's interval is a
multiple of its parent's.
This led to precision loss and wrong results
in the case of intervals with sub-second components.
Fixed by using the `ts_interval_value_to_internal` function to convert
intervals to an appropriate integer representation for division.
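A minimal sketch of a case that previously failed the multiple-of check, using hypothetical view names:

```sql
-- With sub-second buckets, 1.5 s is a valid multiple of 0.5 s; the check now
-- compares the intervals as integers in the internal representation instead
-- of truncating date_part('epoch', ...) with integer division.
CREATE MATERIALIZED VIEW readings_500ms WITH (timescaledb.continuous) AS
  SELECT time_bucket(INTERVAL '0.5 second', time) AS bucket, count(*) AS n
  FROM readings
  GROUP BY bucket;

CREATE MATERIALIZED VIEW readings_1500ms WITH (timescaledb.continuous) AS
  SELECT time_bucket(INTERVAL '1.5 second', bucket) AS bucket, sum(n) AS n
  FROM readings_500ms
  GROUP BY 1;
```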
Fixes#5277
We don't use the `ts_catalog_delete[_only]` functions anywhere; instead
we rely on the `ts_catalog_delete_tid[_only]` functions, so remove them
from our code base.
This release contains bug fixes since the 2.10.0 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #5159 Support Continuous Aggregates names in hypertable_(detailed_)size
* #5226 Fix concurrent locking with chunk_data_node table
* #5317 Fix some incorrect memory handling
* #5336 Use NameData and namestrcpy for names
* #5343 Set PortalContext when starting job
* #5360 Fix uninitialized bucket_info variable
* #5362 Make copy fetcher more async
* #5364 Fix num_chunks inconsistency in hypertables view
* #5367 Fix column name handling in old-style continuous aggregates
* #5378 Fix multinode DML HA performance regression
* #5384 Fix Hierarchical Continuous Aggregates chunk_interval_size
**Thanks**
* @justinozavala for reporting an issue with PL/Python procedures in the background worker
* @Medvecrab for discovering an issue with copying NameData when forming heap tuples.
* @pushpeepkmonroe for discovering an issue in upgrading old-style
continuous aggregates with renamed columns
Chocolatey has all the postgres versions we need available, so we
can reenable previously disabled tests. But the recent packages
seem to have a different versioning scheme without a suffix.
Concurrent inserts into a dist hypertable after a data node was marked
as unavailable would produce a `tuple concurrently deleted` error.
The problem occurs because of missing tuple-level locking during the
scan and a concurrent delete from the chunk_data_node table afterwards,
which should be treated as a `SELECT … FOR UPDATE` case instead.
Based on the fix by @erimatnor.
Fixes#5153
When a Continuous Aggregate is created, its `chunk_interval_size` is
defined by the `chunk_interval_size` of the original hypertable
multiplied by a fixed factor of 10.
The problem is that currently, when we create a Hierarchical Continuous
Aggregate, the same factor is applied again, which leads to an
exponentially growing `chunk_interval_size`.
Fixed it by just copying the `chunk_interval_size` from the base
Continuous Aggregate for a Hierarchical Continuous Aggregate.
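A minimal sketch with hypothetical names:

```sql
-- The hourly cagg gets chunk_interval_size = 10 x the hypertable's
-- chunk_time_interval; the daily cagg on top of it now simply copies the
-- hourly cagg's chunk_interval_size instead of multiplying by 10 again.
CREATE MATERIALIZED VIEW readings_hourly WITH (timescaledb.continuous) AS
  SELECT time_bucket('1 hour', time) AS bucket, avg(value) AS avg_value
  FROM readings
  GROUP BY bucket;

CREATE MATERIALIZED VIEW readings_daily WITH (timescaledb.continuous) AS
  SELECT time_bucket('1 day', bucket) AS bucket, max(avg_value) AS max_avg
  FROM readings_hourly
  GROUP BY 1;
```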
Fixes#5382
We added checks via #4846 to handle DML HA when replication factor is
greater than 1 and a datanode is down. Since each insert can go to a
different chunk with a different set of datanodes, we added checks
on every insert to see if any DNs are unavailable. This increased CPU
consumption on the AN, leading to a performance regression for RF > 1
code paths.
This patch fixes this regression. We now track if any DN is marked as
unavailable at the start of the transaction and use that information to
reduce unnecessary checks for each inserted row.
For continuous aggregates with old-style partial aggregates,
renaming columns that are not in the group-by clause will generate an
error when upgrading to a later version. The reason is that it is
implicitly assumed that the name of the column is the same as for the
direct view. This holds true for new-style continuous aggregates, but is
not always true for old-style continuous aggregates. In particular,
columns that are not part of the `GROUP BY` clause can have an
internally generated name.
This commit fixes that by extracting the name of the column from the
partial view and using that when renaming the partial view column and
the materialized table column.
Make the copy fetcher more asynchronous by separating the sending of
the request for data from the receiving of the response. By doing
that, the async append node can send the request to each data node
before it starts reading the first response. This can massively
improve the performance because the response isn't returned until the
remote node has finished executing the query and is ready to return
the first tuple.
Renamed:
tsl/test/sql/size_utils.sql
tsl/test/expected/size_utils.out
To:
tsl/test/sql/size_utils_tsl.sql
tsl/test/expected/size_utils_tsl.out
because they conflicted with test/sql/size_utils.sql
At the moment, the MERGE command is not supported on distributed
hypertables. This patch ensures that the join pushdown code ignores the
invocation by the MERGE command.
Different num_chunks values were reported by
timescaledb_information.hypertables and
timescaledb_information.chunks.
The view definition of hypertables was
not filtering out dropped and OSM chunks.
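A quick way to see the two views agreeing again (the hypertable name is a placeholder):

```sql
-- Both should now report the same number for a given hypertable, since
-- dropped and OSM chunks are filtered out of the hypertables view as well.
SELECT num_chunks
FROM timescaledb_information.hypertables
WHERE hypertable_name = 'readings';

SELECT count(*)
FROM timescaledb_information.chunks
WHERE hypertable_name = 'readings';
```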
Fixes#5338
This patch backports the following:
1. Refactor ExecInsert/Delete/Update
Backported commit 25e777cf8e547d7423d2e1e9da71f98b9414d59e
2. Backport all MERGE-related interfaces and their implementations.
Backported commit 7103ebb7aae8ab8076b7e85f335ceb8fe799097c
The `bucket_info` variable is initialized by the `caggtimebucketinfo_init`
function called inside the following branch:
`if (rte->relkind == RELKIND_RELATION || rte->relkind == RELKIND_VIEW)`
If for some reason we don't enter this branch, then `bucket_info`
will not be initialized, leading to an uninitialized variable when
returning `bucket_info` at the end of the `cagg_validate_query`
function.
Fixed it by zero-initializing the `bucket_info` variable when
declaring it.
Found by coverity scan.