If a hypertable uses a non-default tablespace, the compressed hypertable
and its corresponding toast table and index are still created in the
default tablespace.
This PR fixes this unexpected behavior and creates the compressed
hypertable and its toast table and index in the same tablespace as
the hypertable.
Fixes #5520
This patch adds an optimization to the DecompressChunk node. If the
query 'order by' and the compression 'order by' are compatible (the query
'order by' is equal to or a prefix of the compression 'order by'), the
compressed batches of the segments are decompressed in parallel and
merged using a binary heap. This preserves the ordering, so sorting the
result can be avoided. LIMIT queries especially benefit from this
optimization because only the first tuples of some batches have to
be decompressed. Previously, all segments were completely decompressed
and sorted.
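For illustration, a query shape that can benefit from this optimization (the table, column, and compression settings below are hypothetical):

```sql
-- Compression 'order by' is "time DESC"; the query 'order by' is a prefix of
-- it, so batches are merged with a binary heap instead of sorting everything.
ALTER TABLE metrics SET (timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby = 'time DESC');

SELECT * FROM metrics
ORDER BY time DESC
LIMIT 10;
```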
Fixes: #4223
Co-authored-by: Sotiris Stamokostas <sotiris@timescale.com>
On compressed hypertables, three schema levels are in use simultaneously:
* main - hypertable level
* chunk - inheritance level
* compressed chunk
In the build_scankeys method all of them appear: the slot has its
fields represented as a row of the main hypertable.
Accessing the slot by the attribute numbers of the chunks may lead to
indexing mismatches if there are differences between the schemas.
Fixes: #5577
Since the telemetry job has a special code path so that it can be used
both from Apache code and from TSL code, trying to execute the telemetry
job with run_job() will fail.
This code will allow run_job() to be used with the telemetry job to
trigger a send of telemetry data. You have to belong to the group that
owns the telemetry job (or be the owner of the telemetry job) to be
able to use it.
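For illustration, assuming the telemetry job has job id 1, a telemetry send can now be triggered with:

```sql
-- Works only if the caller owns the telemetry job or belongs to the
-- group that owns it.
CALL run_job(1);
```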
Closes #5605
There were no permission checks when calling run_job(), so it was
possible to execute any job regardless of who owned it. This commit
adds such checks.
The internal `cagg_rebuild_view_definition` function was trying to cast
a pointer to `RangeTblRef` but it actually is a `RangeTblEntry`.
Fixed it by using the already existing `direct_query` data struct to
check if there are JOINs in the CAgg to be repaired.
All children of an append path are required to have the same parameterization
so we have to reparameterize when the selected path does not have the right
parameterization.
Inserting multiple rows into a compressed chunk could have bypassed
the constraint check when the table had segment_by columns.
Decompression is narrowed to only consider candidates by the actual
segment_by value.
Because of caching, decompression was skipped for follow-up rows of
the same chunk.
Fixes #5553
When inserting into a compressed chunk with constraints present,
we need to decompress relevant tuples in order to do speculative
inserting. Usually we used segment by column values to limit the
amount of compressed segments to decompress. This change expands
on that by also using segment metadata to further filter
compressed rows that need to be decompressed.
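A sketch of the setup this applies to (schema and settings are hypothetical): only compressed batches whose `device_id` matches the inserted row and whose `time` metadata range covers the inserted value need to be decompressed.

```sql
CREATE TABLE readings (
    time timestamptz NOT NULL,
    device_id int,
    val float,
    UNIQUE (device_id, time)
);
SELECT create_hypertable('readings', 'time');
ALTER TABLE readings SET (timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby = 'time');

-- Candidate batches are narrowed by device_id = 7 (segment by value) and by
-- the min/max metadata on time before any decompression happens.
INSERT INTO readings VALUES ('2023-04-01 00:00:00+00', 7, 1.0);
```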
## 2.10.2 (2023-04-20)
**Bugfixes**
* #5410 Fix file trailer handling in the COPY fetcher
* #5446 Add checks for malloc failure in libpq calls
* #5233 Out of on_proc_exit slots on guc license change
* #5428 Use consistent snapshots when scanning metadata
* #5499 Do not segfault on large histogram() parameters
* #5470 Ensure superuser perms during copy/move chunk
* #5500 Fix when no FROM clause in continuous aggregate definition
* #5433 Fix join rte in CAggs with joins
* #5556 Fix duplicated entries on timescaledb_experimental.policies view
* #5462 Fix segfault after column drop on compressed table
* #5543 Copy scheduled_jobs list before sorting it
* #5497 Allow named time_bucket arguments in Cagg definition
* #5544 Fix refresh from beginning of Continuous Aggregate with variable time bucket
* #5558 Use regrole for job owner
* #5542 Enable indexscan on uncompressed part of partially compressed chunks
**Thanks**
* @nikolaps for reporting an issue with the COPY fetcher
* @S-imo-n for reporting the issue with the Background Worker Scheduler crash
* @geezhu for reporting an issue with a segfault in histogram()
* @mwahlhuetter for reporting the issue with joins in CAggs
* @mwahlhuetter for reporting the issue with duplicated entries in the timescaledb_experimental.policies view
* @H25E for reporting an error when refreshing from the beginning of a Continuous Aggregate with a variable time bucket
This was previously disabled as no data resided on the
uncompressed chunk once it was compressed, but this is not
the case anymore with partially compressed chunks, so we
enable indexscan for the uncompressed chunk again.
Fixes #5432
Co-authored-by: Ante Kresic <ante.kresic@gmail.com>
Instead of using a user name to register the owner of a job, we use
regrole. This allows renames to work properly since the underlying OID
does not change when the owner name changes.
We add a check when calling `DROP ROLE` that there is no job with that
owner and generate an error if there is.
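For example (role and job are illustrative), renaming a role that owns a job now works, while dropping it is rejected:

```sql
-- The job references its owner by OID (regrole), so a rename is transparent.
ALTER ROLE job_owner RENAME TO job_owner_renamed;

-- Dropping a role that still owns a job now raises an error.
DROP ROLE job_owner_renamed;
```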
The start_scheduled_jobs function mistakenly sorts the scheduled_jobs
list in-place. As a result, when the ts_update_scheduled_jobs_list
function compares the updated list of scheduled jobs with the existing
scheduled jobs list, it is comparing a list that is sorted by job_id to
one that is sorted by next_start time. Fix that by properly copying the
scheduled_jobs list into a new list and using that for sorting.
Fixes #5537
With recent changes, we enabled analyze on uncompressed chunk tables
for compressed chunks. This change includes analyzing the compressed
chunk's table when analyzing the hypertable and its chunks,
enabling us to remove the stats generation step when compressing chunks.
In case of joins in continuous aggregates, pass the required
structs to the newly created RTE. These values are required by the
planner to finally query the materialized view.
Fixes #5433
Commit 16fdb6ca5e introduced the `timescaledb_experimental.policies` view
to expose the Continuous Aggregate policies, but the current JOINs over
our catalog are not accurate.
Fixed it by properly JOINing the underlying catalog tables to expose the
correct information about the Continuous Aggregate policies without
duplicates.
Fixes #5492
When refreshing from the beginning (window_start=NULL) of a
Continuous Aggregate with variable time bucket we were getting a
`timestamp out of range` error.
Fixed it by setting `-Infinity` when passing `window_start=NULL` when
refreshing a Continuous Aggregate with variable time bucket.
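For example (the aggregate name is hypothetical), refreshing from the beginning now works for variable-width buckets:

```sql
-- window_start = NULL is now mapped to -Infinity instead of overflowing.
CALL refresh_continuous_aggregate('conditions_monthly', NULL, '2023-01-01');
```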
Fixes #5474, #5534
This is mostly a cosmetic change. When only 1 child is present there
is no need for ordered append. In this situation we might still
benefit from a ChunkAppend node here due to runtime chunk exclusion
when we have non-immutable constraints, so we still add the ChunkAppend
node in that situation even with only 1 child.
Decompression produces records which have all the decompressed data
set, but it also retains the fields which are used internally during
decompression.
These didn't cause any problems unless an operation was done on the
whole row, in which case all the fields that ended up being non-null
became a potential segfault source.
Fixes #5458, #5411
This patch does the following (see the sketch after the list):
1. Executor changes to parse the qual ExprState to check if a SEGMENTBY
column is specified in the WHERE clause.
2. Based on step 1, we build scan keys.
3. Executor changes to do heapscan on compressed chunk based on
scan keys and move only those rows which match the WHERE clause
to staging area aka uncompressed chunk.
4. Mark affected chunk as partially compressed.
5. Perform regular UPDATE/DELETE operations on staging area.
6. Since there is no Custom Scan (HypertableModify) node for
UPDATE/DELETE operations on PG versions < 14, we don't support this
feature on PG12 and PG13.
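A sketch of the resulting flow for a hypothetical table `metrics` compressed with `device_id` as the SEGMENTBY column:

```sql
-- Steps 1-3: the WHERE clause is parsed for SEGMENTBY columns and turned into
-- scan keys, so only batches with device_id = 3 are decompressed into the
-- uncompressed (staging) part of the chunk.
-- Steps 4-5: the chunk is marked partially compressed and the regular UPDATE
-- runs against the staged rows.
UPDATE metrics SET val = 0
WHERE device_id = 3 AND time < '2023-01-01';
```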
Refactor the code path that handles remote distributed COPY. The
main changes include:
* Use a hash table to lookup data node connections instead of a list.
* Refactor the per-data node buffer code that accumulates rows into
bigger CopyData messages.
* Reduce the default number of rows in a CopyData message to 100. This
seems to improve throughput, probably striking a better balance
between message overhead and latency.
* The number of rows to send in each CopyData message can now be
changed via a new foreign data wrapper option.
The joins can be between a continuous aggregate and a hypertable, a
continuous aggregate and a regular Postgres table, or a continuous
aggregate and a regular Postgres view.
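For instance (the schema is hypothetical), a continuous aggregate joining a hypertable with a regular Postgres table:

```sql
CREATE MATERIALIZED VIEW device_summary
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', m.time) AS bucket,
       d.name,
       avg(m.val) AS avg_val
FROM metrics m
JOIN devices d ON d.id = m.device_id
GROUP BY bucket, d.name;
```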
There is a bug in `width_bucket()` causing an overflow and a subsequent
NaN value as a result of dividing by `+inf`. The NaN value is
interpreted as an integer and hence generates an index out of range for
the buckets.
This commit fixes this by generating an error rather than
segfaulting for bucket indexes that are out of range.
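For illustration (the values are arbitrary and may not reproduce the original report exactly), a call with very large bounds like the one below now raises an error instead of crashing:

```sql
-- The bucket width computed from these bounds can overflow to +inf; an
-- out-of-range bucket index now results in an error rather than a segfault.
SELECT histogram(val, -1e308, 1e308, 10) FROM metrics;
```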
During chunk creation, the chunk's dimensional CHECK constraints are
created via an "upcall" to PL/pgSQL code. However, creating
dimensional constraints in PL/pgSQL code sometimes fails, especially
during high-concurrency inserts, because PL/pgSQL code scans metadata
using a snapshot that might not see the same metadata as the C
code. As a result, chunk creation sometimes fails during constraint
creation.
To fix this issue, implement dimensional CHECK-constraint creation in
C code. Other constraints (FK, PK, etc.) are still created via an
upcall, but should probably also be rewritten in C. However, since
these constraints don't depend on recently updated metadata, this is
left to a future change.
Fixes #5456
There is a security loophole in current core Postgres, due to which
it's possible for a non-superuser to gain superuser access by attaching
dependencies like expression indexes, triggers, etc. before logical
replication commences.
To avoid this, we now ensure that the chunk objects that get created
for the subscription are done so as a superuser. This avoids malicious
dependencies by regular users.
When calling the `cagg_watermark` function to get the watermark of a
Continuous Aggregate we execute a `SELECT MAX(time_dimension)` query
in the underlying materialization hypertable.
The problem is that a `SELECT MAX(time_dimension)` query can be
expensive because it will scan all hypertable chunks, increasing the
planning time for Realtime Continuous Aggregates.
Improved it by creating a new catalog table to serve as a cache table
to store the current Continuous Aggregate watermark in the following
situations:
- Create CAgg: store the minimum value of hypertable time dimension
data type;
- Refresh CAgg: store the last value of the time dimension materialized
in the underlying materialization hypertable (or the minimum value of
materialization hypertable time dimension data type if there's no
data materialized);
- Drop CAgg Chunks: the same as refresh cagg.
Closes #4699, #5307
During compression, autovacuum used to be disabled for the uncompressed
chunk and enabled again after decompression. This leads to Postgres
maintenance issues. Let's not disable autovacuum for the uncompressed
chunk anymore and let Postgres take care of the stats in its natural way.
Fixes #309
Invalidate the catalog snapshot in the scanner to ensure that any
lookups into `pg_catalog` use a snapshot that is consistent with the
snapshot used to scan TimescaleDB metadata.
This fixes an issue where a chunk could be looked up without having a
proper relid filled in, causing an assertion failure
(`ASSERT_IS_VALID_CHUNK`). When a chunk is scanned and found (in
`chunk_tuple_found()`), the Oid of the chunk table is filled in using
`get_relname_relid()`, which could return InvalidOid due to use of a
different snapshot when scanning `pg_class`. Calling
`InvalidateCatalogSnapshot()` before starting the metadata scan in
`Scanner` ensures the pg_catalog snapshot used is refreshed.
Due to the difficulty of reproducing this MVCC issue, no regression or
isolation test is provided, but it is easy to hit this bug when doing
highly concurrent COPYs into a distributed hypertable.
The functions `PQconndefaults` and `PQmakeEmptyPGresult` call
`malloc` and can return NULL if they fail to allocate memory for the
defaults and the empty result. This is checked with an `Assert`, but the
assertion is removed in production builds.
Replace the `Assert` with checks that generate an error in production
builds rather than trying to dereference the pointer and causing a
crash.
This patch allows unique constraints on compressed chunks. When
trying to INSERT into compressed chunks with unique constraints
any potentially conflicting compressed batches will be decompressed
to let postgres do constraint checking on the INSERT.
With this patch only INSERT ON CONFLICT DO NOTHING will be supported.
For decompression only segment by information is considered to
determine conflicting batches. This will be enhanced in a follow-up
patch to also include orderby metadata to require decompressing
fewer batches.
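A minimal sketch of what this enables (the schema is hypothetical):

```sql
CREATE TABLE events (
    time timestamptz NOT NULL,
    device_id int,
    val float,
    UNIQUE (device_id, time)
);
SELECT create_hypertable('events', 'time');
ALTER TABLE events SET (timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id');

-- Potentially conflicting batches (same device_id) are decompressed so that
-- postgres can perform the uniqueness check; DO NOTHING is the supported action.
INSERT INTO events VALUES ('2023-01-01 00:00:00+00', 1, 1.0)
ON CONFLICT DO NOTHING;
```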
This patch changes the way user-defined FDW options (e.g., startup
costs, per-tuple costs) are handled. So far, these values were retrieved
in apply_fdw_and_server_options() but reset to default values afterward.
Problem:
When the GUC timescaledb.license = 'timescale' is set in the conf file
and a SIGHUP is sent to the postgres process, a reload of the TSL
module is triggered.
This reload happens in two phases: 1. tsl_module_load is called, which
loads the module only if it is not already loaded, and 2. ts_module_init
is called for every ts_license_guc_assign_hook, irrespective of whether
it is a new load. This ts_module_init initialization function also
registers an on_proc_exit function to be called on exit.
The list of on_proc_exit methods is maintained in a fixed array,
on_proc_exit_list, of size MAX_ON_EXITS (20), which fills up on
repeated SIGHUPs and eventually causes an error.
Fix:
The fix is to make ts_module_init() register the on_proc_exit
callback only when the module is reloaded, and not on every init
call.
Closes #5233
The copy fetcher fetches tuples in batches. When the last element in the
batch is the file trailer, the trailer was not handled correctly. The
existing logic did not perform a PQgetCopyData in that case. Therefore
the state of the fetcher was not set to EOF and the copy operation was
not correctly finished at this point.
Fixes: #5323
Non-equality operators in a WHERE clause on a SEGMENTBY column of type
text/bytea were not pushed down to the Seq Scan node of the compressed
chunk. This patch fixes this issue.
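For example (table and column names are hypothetical), a predicate like the following on a text SEGMENTBY column is now pushed down:

```sql
-- With device as a text SEGMENTBY column, the non-equality comparison is now
-- evaluated by the Seq Scan on the compressed chunk.
SELECT * FROM metrics WHERE device > 'sensor-100';
```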
Fixes #5286
Make `partialize_agg()` support parallel query execution. To make this
work, the finalize node needs to combine the individual partials from each
parallel worker, but the final step that turns the resulting partial
into the finished aggregate should not happen. Thus, in the case of
distributed hypertables, each data node can run a parallel query to
compute a partial, and the access node can later combine and finalize
these partials into the final aggregate. Essentially, there will be
one combine step (minus final) on each data node, and then another one
plus final on the access node.
To implement this, the finalize aggregate plan is simply modified to
elide the final step, and to reserialize the partial. It is only
possible to do this at the plan stage; if done at the path stage, the
PostgreSQL planner will hit assertions that assume that the node has
certain values (e.g., it doesn't expect combine Paths to skip the
final step).
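For reference, a partial can be requested explicitly with `partialize_agg`; the sketch below assumes the function is exposed in the `_timescaledb_internal` schema and that `metrics` is a distributed hypertable:

```sql
-- Each data node can now compute this partial using a parallel plan; the
-- combine step runs without the final step, so the partial stays serialized
-- and can be finalized later on the access node.
SELECT _timescaledb_internal.partialize_agg(avg(val)) FROM metrics;
```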
Previously we used date_part('epoch', interval) and integer division
internally to determine whether the top cagg's interval is a
multiple of its parent's.
This led to precision loss and wrong results
in the case of intervals with sub-second components.
Fixed by using the `ts_interval_value_to_internal` function to convert
intervals to an appropriate integer representation for the division.
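For example (bucket widths are illustrative), a hierarchy with sub-second buckets such as the following is now validated without precision loss:

```sql
-- 1500 ms is an exact multiple of 500 ms; both intervals are converted to the
-- internal integer representation before the divisibility check.
CREATE MATERIALIZED VIEW agg_500ms
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '500 milliseconds', time) AS bucket, avg(val) AS avg_val
FROM metrics
GROUP BY bucket;

CREATE MATERIALIZED VIEW agg_1500ms
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1500 milliseconds', bucket) AS bucket, avg(avg_val) AS avg_val
FROM agg_500ms
GROUP BY 1;
```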
Fixes #5277
This release contains bug fixes since the 2.10.0 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #5159 Support Continuous Aggregate names in hypertable_(detailed_)size
* #5226 Fix concurrent locking with chunk_data_node table
* #5317 Fix some incorrect memory handling
* #5336 Use NameData and namestrcpy for names
* #5343 Set PortalContext when starting job
* #5360 Fix uninitialized bucket_info variable
* #5362 Make copy fetcher more async
* #5364 Fix num_chunks inconsistency in hypertables view
* #5367 Fix column name handling in old-style continuous aggregates
* #5378 Fix multinode DML HA performance regression
* #5384 Fix Hierarchical Continuous Aggregates chunk_interval_size
**Thanks**
* @justinozavala for reporting an issue with PL/Python procedures in the background worker
* @Medvecrab for discovering an issue with copying NameData when forming heap tuples.
* @pushpeepkmonroe for discovering an issue in upgrading old-style continuous aggregates with renamed columns
Concurrent insert into a dist hypertable after a data node was marked as
unavailable would produce a `tuple concurrently deleted` error.
The problem occurs because of missing tuple-level locking during the
scan and a concurrent delete from the chunk_data_node table afterwards,
which should be treated as a `SELECT … FOR UPDATE` case instead.
Based on the fix by @erimatnor.
Fixes #5153
When a Continuous Aggregate is created, the `chunk_interval_size` is
defined by the `chunk_interval_size` of the original hypertable
multiplied by a fixed factor of 10.
The problem is that currently, when we create a Hierarchical Continuous
Aggregate, the same factor is applied again, leading to an exponential
`chunk_interval_size`.
Fixed it by just copying the `chunk_interval_size` from the base
Continuous Aggregate for a Hierarchical Continuous Aggregate.
Fixes #5382
For continuous aggregates with the old-style partial aggregates
renaming columns that are not in the group-by clause will generate an
error when upgrading to a later version. The reason is that it is
implicitly assumed that the name of the column is the same as for the
direct view. This holds true for new-style continuous aggregates, but is
not always true for old-style continuous aggregates. In particular,
columns that are not part of the `GROUP BY` clause can have an
internally generated name.
This commit fixes that by extracting the name of the column from the
partial view and using that when renaming the partial view column and the
materialized table column.
Make the copy fetcher more asynchronous by separating the sending of
the request for data from the receiving of the response. By doing
that, the async append node can send the request to each data node
before it starts reading the first response. This can massively
improve the performance because the response isn't returned until the
remote node has finished executing the query and is ready to return
the first tuple.
Different num_chunks values were reported by
timescaledb_information.hypertables and
timescaledb_information.chunks.
The view definition of hypertables was
not filtering dropped and OSM chunks.
Fixes #5338
This patch adds the functionality that is needed to perform distributed,
parallel joins on reference tables on access nodes. This code allows the
pushdown of a join (see the sketch after the list of conditions) if:
* (1) The setting "ts_guc_enable_per_data_node_queries" is enabled
* (2) The outer relation is a distributed hypertable
* (3) The inner relation is marked as a reference table
* (4) The join is a left join or an inner join
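A sketch of a query shape that can now be pushed down (names are hypothetical; `metrics` is a distributed hypertable and `devices` a reference table; the GUC is assumed to be exposed as `timescaledb.enable_per_data_node_queries`):

```sql
SET timescaledb.enable_per_data_node_queries = on;  -- condition (1)

-- Outer relation: distributed hypertable; inner relation: reference table;
-- left join -> the join can be executed on the data nodes.
SELECT d.name, avg(m.val)
FROM metrics m
LEFT JOIN devices d ON d.id = m.device_id
GROUP BY d.name;
```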