The functions `PQconndefaults` and `PQmakeEmptyPGresult` call
`malloc` and can return NULL if they fail to allocate memory for the
defaults and the empty result, respectively. The return value is checked
with an `Assert`, but asserts are removed in production builds.
Replace the `Assert` with checks that generate an error in production
builds rather than dereferencing the NULL pointer and causing a
crash.
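A minimal sketch of the pattern, assuming the result is consumed by the caller; the function name, error message, and call site are illustrative, not the actual TimescaleDB code:

```c
#include "postgres.h"
#include "libpq-fe.h"

/*
 * Illustrative sketch: replace an Assert on the malloc-backed result of
 * PQconndefaults() with a check that also fires in production builds.
 */
static PQconninfoOption *
get_connection_defaults(void)
{
	PQconninfoOption *options = PQconndefaults();

	/* Before: Assert(options != NULL); -- compiled out without assertions */
	if (options == NULL)
		ereport(ERROR,
				(errcode(ERRCODE_OUT_OF_MEMORY),
				 errmsg("out of memory"),
				 errdetail("Failed to allocate connection defaults.")));

	return options;
}
```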
When adding new status values, we must make sure to add special
handling for these values to the downgrade script, as previous
versions will not know how to deal with them.
This patch allows unique constraints on compressed chunks. When
trying to INSERT into compressed chunks with unique constraints,
any potentially conflicting compressed batches will be decompressed
to let PostgreSQL do constraint checking on the INSERT.
With this patch, only INSERT ON CONFLICT DO NOTHING will be supported.
For decompression, only segmentby information is considered to
determine conflicting batches. This will be enhanced in a follow-up
patch to also include orderby metadata so that fewer batches need to
be decompressed.
Reindexing a relation requires AccessExclusiveLock which prevents
queries on that chunk. This patch changes decompress_chunk to update
the index during decompression instead of reindexing. This patch
does not change the required locks as there are locking adjustments
needed in other places to make it safe to weaken that lock.
This patch changes the way user-defined FDW options (e.g., startup
costs, per-tuple costs) are handled. Previously, these values were
retrieved in apply_fdw_and_server_options() but then reset to default
values afterward.
Problem:
When the GUC timescaledb.license = 'timescale' is set in the configuration
file and a SIGHUP is sent to the postgres process, a reload of the tsl
module is triggered.
This reload happens in two phases: 1. tsl_module_load is called, which
loads the module only if it is not already loaded, and 2. ts_module_init
is called from every ts_license_guc_assign_hook invocation, regardless of
whether this is a new load. The ts_module_init initialization function
also registers an on_proc_exit callback to be called on exit.
The list of on_proc_exit callbacks is kept in a fixed-size array
on_proc_exit_list of size MAX_ON_EXITS (20), which fills up on
repeated SIGHUPs and eventually causes an error.
Fix:
The fix is to make ts_module_init() register the on_proc_exit
callback only when the module is actually loaded, not on every init
call.
Closes #5233
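A minimal sketch of the guard, assuming a module-level flag; the names are illustrative, not the actual TimescaleDB symbols:

```c
#include "postgres.h"
#include "storage/ipc.h"		/* on_proc_exit */

/* Illustrative flag: true once the callback has been registered */
static bool proc_exit_callback_registered = false;

static void
module_proc_exit_callback(int code, Datum arg)
{
	/* cleanup work that must run at backend exit */
}

static void
module_init(void)
{
	/*
	 * Register the exit callback only once.  Without the guard, every
	 * SIGHUP-triggered reinitialization would add another entry to the
	 * fixed-size on_proc_exit_list (MAX_ON_EXITS slots) until it overflows.
	 */
	if (!proc_exit_callback_registered)
	{
		on_proc_exit(module_proc_exit_callback, (Datum) 0);
		proc_exit_callback_registered = true;
	}
}
```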
Commit 96574a7 changed the handling of the file_trailer_received
flag. It is now only used in asserts and not in any other kind of logic.
This patch wraps file_trailer_received in the
USE_ASSERT_CHECKING macro.
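A sketch of the pattern; the struct and field layout are examples, not the actual copy-fetcher definitions:

```c
#include "postgres.h"

/*
 * Illustrative sketch: state that is only consulted by Assert() can be
 * compiled out of production builds entirely.
 */
typedef struct CopyFetcherState
{
	bool		eof;
#ifdef USE_ASSERT_CHECKING
	/* Only tracked for assertion checking; absent in release builds */
	bool		file_trailer_received;
#endif
} CopyFetcherState;

static void
handle_file_trailer(CopyFetcherState *state)
{
#ifdef USE_ASSERT_CHECKING
	Assert(!state->file_trailer_received);
	state->file_trailer_received = true;
#endif
}
```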
Use explicit version checks to decide whether to define the backported
RelationGetSmgr function or rely on the function being available.
This simplifies the cmake code a bit and makes the backporting similar
to how we handle this for other functions.
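A sketch of the version-gating pattern; the body mirrors the upstream PostgreSQL definition, but the exact PG_VERSION_NUM cutoff is an assumption, since upstream also backported the function into some minor releases and the real check may differ:

```c
#include "postgres.h"
#include "utils/rel.h"
#include "storage/smgr.h"

/*
 * Define RelationGetSmgr ourselves only on PostgreSQL versions that do not
 * ship it; the cutoff below is illustrative.
 */
#if PG_VERSION_NUM < 150000
static inline SMgrRelation
RelationGetSmgr(Relation rel)
{
	if (unlikely(rel->rd_smgr == NULL))
		smgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_node, rel->rd_backend));
	return rel->rd_smgr;
}
#endif
```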
The copy fetcher fetches tuples in batches. When the last element in the
batch is the file trailer, the trailer was not handled correctly. The
existing logic did not perform a PQgetCopyData in that case. Therefore
the state of the fetcher was not set to EOF and the copy operation was
not correctly finished at this point.
Fixes: #5323
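A sketch of the underlying libpq pattern, assuming a simplified synchronous fetch loop; the real copy fetcher is more involved:

```c
#include <stdio.h>
#include "libpq-fe.h"

/*
 * Drain a COPY TO STDOUT (binary) stream.  Even after the binary file
 * trailer has been seen, PQgetCopyData() must be called again so that it
 * can return -1 (end of copy), letting the caller switch to its EOF state
 * and read the final command status.
 */
static void
drain_copy(PGconn *conn)
{
	char	   *buf;
	int			len;

	while ((len = PQgetCopyData(conn, &buf, /* async = */ 0)) > 0)
	{
		/* process the row (or the file trailer) in buf[0..len-1] */
		PQfreemem(buf);
	}

	if (len == -1)
	{
		/* end of COPY: fetch the final result to finish the operation */
		PGresult   *res = PQgetResult(conn);

		PQclear(res);
	}
	else if (len == -2)
		fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
}
```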
In a WHERE clause, non-equality operators on a SEGMENTBY column of type
text/bytea were not pushed down to the Seq Scan node of the compressed
chunk. This patch fixes this issue.
Fixes #5286
Make `partialize_agg()` support parallel query execution. To make this
work, the finalize node needs to combine the individual partials from each
parallel worker, but the final step that turns the resulting partial
into the finished aggregate should not happen. Thus, in the case of
distributed hypertables, each data node can run a parallel query to
compute a partial, and the access node can later combine and finalize
these partials into the final aggregate. Essentially, there will be
one combine step (minus final) on each data node, and then another one
plus final on the access node.
To implement this, the finalize aggregate plan is simply modified to
elide the final step, and to reserialize the partial. It is only
possible to do this at the plan stage; if done at the path stage, the
PostgreSQL planner will hit assertions that assume that the node has
certain values (e.g., it doesn't expect combine Paths to skip the
final step).
When called with a negative chunk_target_size_bytes,
calculate_chunk_interval will trigger an assertion failure. This patch
adds error handling for this condition. Found by sqlsmith.
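A minimal sketch of the kind of check this adds; the function name, signature, and message are illustrative:

```c
#include "postgres.h"

/* Illustrative argument validation replacing an assertion-only check */
static int64
calculate_chunk_interval_checked(int64 chunk_target_size_bytes)
{
	if (chunk_target_size_bytes < 0)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("chunk target size must not be negative")));

	/* ... normal interval calculation would follow here ... */
	return 0;
}
```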
Previously we used date_part('epoch', interval) and integer division
internally to determine whether the top cagg's interval is a
multiple of its parent's.
This led to precision loss and wrong results
in the case of intervals with sub-second components.
Fixed by using the `ts_interval_value_to_internal` function to convert
intervals to an appropriate integer representation for division.
Fixes #5277
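A sketch of the idea, restricted to intervals without month/day components for brevity; the real code converts via the extension's internal routine and handles those fields too. Working on the integer microsecond count avoids the float rounding of `date_part('epoch', ...)`:

```c
#include "postgres.h"
#include "datatype/timestamp.h"	/* Interval */

/* Illustrative divisibility check on integer microseconds */
static bool
interval_is_multiple_of(const Interval *interval, const Interval *base)
{
	int64		interval_us = interval->time;
	int64		base_us = base->time;

	if (base_us == 0)
		return false;

	/* Exact integer arithmetic: '1.5 s' is 1500000 us, not a rounded 1 s */
	return (interval_us % base_us) == 0;
}
```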
We don't use the `ts_catalog_delete[_only]` functions anywhere and instead
rely on the `ts_catalog_delete_tid[_only]` functions, so remove them from
our code base.
This release contains bug fixes since the 2.10.0 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #5159 Support Continuous Aggregates names in hypertable_(detailed_)size
* #5226 Fix concurrent locking with chunk_data_node table
* #5317 Fix some incorrect memory handling
* #5336 Use NameData and namestrcpy for names
* #5343 Set PortalContext when starting job
* #5360 Fix uninitialized bucket_info variable
* #5362 Make copy fetcher more async
* #5364 Fix num_chunks inconsistency in hypertables view
* #5367 Fix column name handling in old-style continuous aggregates
* #5378 Fix multinode DML HA performance regression
* #5384 Fix Hierarchical Continuous Aggregates chunk_interval_size
**Thanks**
* @justinozavala for reporting an issue with PL/Python procedures in the background worker
* @Medvecrab for discovering an issue with copying NameData when forming heap tuples.
* @pushpeepkmonroe for discovering an issue in upgrading old-style
continuous aggregates with renamed columns
Chocolatey has all the PostgreSQL versions we need available, so we
can re-enable previously disabled tests. But the recent packages
seem to use a different versioning scheme without a suffix.
Concurrent inserts into a distributed hypertable after a data node has
been marked as unavailable would produce a `tuple concurrently deleted`
error.
The problem occurs because of missing tuple-level locking during the
scan of the chunk_data_node table followed by a concurrent delete;
the scan should be treated as a `SELECT … FOR UPDATE` case instead.
Based on the fix by @erimatnor.
Fixes #5153
When a Continuous Aggregate is created, the `chunk_interval_size` is
defined by the `chunk_interval_size` of the original hypertable
multiplied by a fixed factor of 10.
The problem is that when we create a Hierarchical Continuous
Aggregate, the same factor is applied again, leading to exponential
growth of the `chunk_interval_size` (for example, a 1 day base interval
becomes 10 days for the first level and 100 days for the next).
Fixed it by just copying the `chunk_interval_size` from the base
Continuous Aggregate for a Hierarchical Continuous Aggregate.
Fixes #5382
We added checks via #4846 to handle DML HA when replication factor is
greater than 1 and a datanode is down. Since each insert can go to a
different chunk with a different set of datanodes, we added a check
on every insert to see if any DNs are unavailable. This increased CPU
consumption on the AN, leading to a performance regression for RF > 1
code paths.
This patch fixes this regression. We now track if any DN is marked as
unavailable at the start of the transaction and use that information to
reduce unnecessary checks for each inserted row.
For continuous aggregates with the old-style partial aggregates,
renaming columns that are not in the group-by clause will generate an
error when upgrading to a later version. The reason is that it is
implicitly assumed that the name of the column is the same as for the
direct view. This holds true for new-style continuous aggregates, but is
not always true for old-style continuous aggregates. In particular,
columns that are not part of the `GROUP BY` clause can have an
internally generated name.
This commit fixes that by extracting the name of the column from the
partial view and using that when renaming the partial view column and the
materialized table column.
Make the copy fetcher more asynchronous by separating the sending of
the request for data from the receiving of the response. By doing
that, the async append node can send the request to each data node
before it starts reading the first response. This can massively
improve the performance because the response isn't returned until the
remote node has finished executing the query and is ready to return
the first tuple.
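A sketch of the libpq pattern this relies on, assuming one already-open connection per data node; the real fetcher streams COPY data, so this is simplified:

```c
#include <stdio.h>
#include "libpq-fe.h"

/*
 * First dispatch the query to every data node without waiting, then read
 * the responses.  Each node starts executing immediately instead of only
 * after the previous node has returned its result.
 */
static void
fetch_from_all_nodes(PGconn **conns, int nconns, const char *query)
{
	int			i;

	/* Phase 1: send the request to each data node */
	for (i = 0; i < nconns; i++)
	{
		if (PQsendQuery(conns[i], query) == 0)
			fprintf(stderr, "send failed: %s", PQerrorMessage(conns[i]));
	}

	/* Phase 2: only now start reading the first response */
	for (i = 0; i < nconns; i++)
	{
		PGresult   *res;

		while ((res = PQgetResult(conns[i])) != NULL)
		{
			/* consume tuples from this node */
			PQclear(res);
		}
	}
}
```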
Renamed:
tsl/test/sql/size_utils.sql
tsl/test/expected/size_utils.out
To:
tsl/test/sql/size_utils_tsl.sql
tsl/test/expected/size_utils_tsl.out
because they conflicted with test/sql/size_utils.sql
At the moment, the MERGE command is not supported on distributed
hypertables. This patch ensures that the join pushdown code ignores the
invocation by the MERGE command.
Different num_chunks values were reported by
timescaledb_information.hypertables and
timescaledb_information.chunks.
The view definition of hypertables was
not filtering out dropped chunks and OSM chunks.
Fixes #5338
This patch backports the following:
1. Refactor ExecInsert/Delete/Update
Backported commit 25e777cf8e547d7423d2e1e9da71f98b9414d59e
2. All MERGE-related interfaces and their implementations.
Backported commit 7103ebb7aae8ab8076b7e85f335ceb8fe799097c
The `bucket_info` variable is initialized by the `caggtimebucketinfo_init`
function called inside the following branch:
`if (rte->relkind == RELKIND_RELATION || rte->relkind == RELKIND_VIEW)`
If for some reason we don't enter this branch, then `bucket_info`
will not be initialized, leading to an uninitialized variable being
returned at the end of the `cagg_validate_query`
function.
Fixed it by zero-initializing the `bucket_info` variable when
declaring it.
Found by Coverity scan.
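A minimal sketch of the zero-initialization; the struct layout and function name are illustrative stand-ins for the real continuous aggregate code:

```c
#include "postgres.h"

/* Illustrative struct standing in for the real time-bucket info type */
typedef struct CaggTimebucketInfo
{
	int32		htid;
	int64		bucket_width;
	/* ... further fields ... */
} CaggTimebucketInfo;

static CaggTimebucketInfo
cagg_validate_query_example(void)
{
	/* {0} guarantees a defined value even if no branch fills the struct in */
	CaggTimebucketInfo bucket_info = {0};

	/* ... validation may or may not call the init function ... */
	return bucket_info;
}
```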
This small patch adds support for continuous aggregates to the
`hypertable_detailed_size` (and with that `hypertable_size`).
It adds an additional check to see if a continuous aggregate exists
if a hypertable with the given regclass name isn't found.
This patch adds the functionality that is needed to perform distributed,
parallel joins on reference tables on access nodes. This code allows the
pushdown of a join if:
* (1) The setting "ts_guc_enable_per_data_node_queries" is enabled
* (2) The outer relation is a distributed hypertable
* (3) The inner relation is marked as a reference table
* (4) The join is a left join or an inner join
This PR introduces a timeout argument and new logic to the
timescale_internal.ping_data_node() function, which allows
handling IO timeouts for unresponsive nodes.
Fixes #5312
NameData is a fixed-size type of 64 bytes. Using strlcpy to copy data
into a NameData struct can cause problems because any bytes that follow
the initial null-terminated string are left untouched and become part of
the stored value.
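A sketch of the difference; namestrcpy zero-pads the full 64-byte buffer, so no stale bytes leak into the tuple (the wrapper function is illustrative):

```c
#include "postgres.h"
#include "utils/builtins.h"	/* namestrcpy */

/* Illustrative: fill a NameData column value before forming a heap tuple */
static void
set_name_column(NameData *dst, const char *src)
{
	/*
	 * Before: strlcpy(NameStr(*dst), src, NAMEDATALEN);
	 * strlcpy writes only the string and its terminator, leaving whatever
	 * was previously in the remaining bytes of the 64-byte buffer.
	 *
	 * namestrcpy truncates to NAMEDATALEN - 1 and zero-fills the rest.
	 */
	namestrcpy(dst, src);
}
```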
While running TimescaleDB under valgrind I found
two cases of incorrect memory handling.
Case 1: When creating the timescaledb extension, during the insertion of
metadata there is some junk in memory that is not zeroed
before being written.
Changes in metadata.c fix this.
Case 2: When executing GRANT smth ON ALL TABLES IN SCHEMA some_schema
and deconstructing this statement into grants on individual tables,
the process of copying the names of those tables is wrong.
Currently, the data itself is not copied, only an address pointing to
data on a page in some buffer. The problem is that when the page in this
buffer changes, the copied address leads to wrong data.
Changes in process_utility.c fix this by allocating memory and then
copying the needed relname there.
Fixes #5311
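A sketch of the fix for case 2, assuming the relation name is read through the syscache; the function name is illustrative:

```c
#include "postgres.h"
#include "access/htup_details.h"
#include "catalog/pg_class.h"
#include "utils/syscache.h"

/*
 * Illustrative: the tuple returned by the syscache points into a cache
 * entry, so the relname must be copied before the tuple is released;
 * otherwise the saved pointer may later read different data.
 */
static char *
copy_relname_from_cache(Oid relid)
{
	HeapTuple	tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
	char	   *relname_copy = NULL;

	if (HeapTupleIsValid(tuple))
	{
		Form_pg_class classform = (Form_pg_class) GETSTRUCT(tuple);

		/* the fix: make a private copy in the current memory context */
		relname_copy = pstrdup(NameStr(classform->relname));
		ReleaseSysCache(tuple);
	}

	return relname_copy;
}
```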
The upgrade and downgrade tests are running PostgreSQL in Docker
containers. The function 'wait_for_pg' is used to determine if
PostgreSQL is ready to accept connections. In contrast to the upgrade
tests, the downgrade tests use more relaxed timeout values. The upgrade
tests sometimes fail because PostgreSQL cannot accept connections within
the configured time range. This patch applies the more relaxed timeout
values also to the upgrade script.