1609 Commits

Author SHA1 Message Date
Nikhil Sontakke
7e43f45ccb Ensure superuser perms during copy/move chunk
There is a security loophole in current core Postgres, due to which
it's possible for a non-superuser to gain superuser access by attaching
dependencies like expression indexes, triggers, etc. before logical
replication commences.

To avoid this, we now ensure that the chunk objects that get created
for the subscription are created as a superuser. This prevents regular
users from attaching malicious dependencies.
2023-03-23 13:26:47 +05:30
Fabrízio de Royes Mello
38fcd1b76b Improve Realtime Continuous Aggregate performance
When calling the `cagg_watermark` function to get the watermark of a
Continuous Aggregate we execute a `SELECT MAX(time_dimension)` query
in the underlying materialization hypertable.

The problem is that a `SELECT MAX(time_dimension)` query can be
expensive because it scans all hypertable chunks, increasing the
planning time for Realtime Continuous Aggregates.

Improve this by creating a new catalog table that serves as a cache
for the current Continuous Aggregate watermark (a sketch follows the
list below), updated in the following situations:
- Create CAgg: store the minimum value of hypertable time dimension
  data type;
- Refresh CAgg: store the last value of the time dimension materialized
  in the underlying materialization hypertable (or the minimum value of
  materialization hypertable time dimension data type if there's no
  data materialized);
- Drop CAgg Chunks: the same as refresh cagg.
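
For illustration only, a rough sketch of the two access patterns (the
materialization hypertable name, column, and id below are hypothetical,
and the internal schema may differ between versions):

-- before: a MAX() scan over every chunk of the materialization hypertable
SELECT max(bucket) FROM _timescaledb_internal._materialized_hypertable_2;

-- after: the same watermark is answered from the new catalog cache
SELECT _timescaledb_internal.cagg_watermark(2);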

Closes #4699, #5307
2023-03-22 16:35:23 -03:00
shhnwz
699fcf48aa Stats improvement for Uncompressed Chunks
During compression, autovacuum used to be disabled for the uncompressed
chunk and re-enabled only after decompression. This leads to a Postgres
maintenance issue. Do not disable autovacuum for the uncompressed chunk
anymore; let Postgres take care of the stats in its natural way.

Fixes #309
2023-03-22 23:51:13 +05:30
Alexander Kuzmenkov
5c07a57a02 Simplify control flow in decompress_chunk_exec
No functional changes, mostly just reshuffles the code to prepare for
batch decompression.

Also removes unneeded repeated column value stores and ExecStoreTuple,
to save 3-5% execution time on some queries.
2023-03-22 13:08:22 +04:00
Fabrízio de Royes Mello
7d6cf90ee7 Add missing gitignore entry
Pull request #4827 introduced a new template SQL test file but missed
adding the proper `.gitignore` entry to ignore generated test files.
2023-03-20 14:43:05 -03:00
Bharathy
cc51e20e87 Add support for ON CONFLICT DO UPDATE for compressed hypertables
This patch fixes execution of INSERT with ON CONFLICT DO UPDATE by
removing the error and allowing the UPDATE to happen on the given
compressed hypertable.
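
A minimal sketch of the now-supported statement shape (the table,
columns, and unique constraint are hypothetical):

INSERT INTO metrics (time, device, value)
VALUES ('2023-03-20 00:00:00+00', 'dev1', 1.0)
ON CONFLICT (time, device)
DO UPDATE SET value = EXCLUDED.value;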
2023-03-20 22:55:27 +05:30
Mats Kindahl
67ff84e8f2 Add check for malloc failure in libpq calls
The functions `PQconndefaults` and `PQmakeEmptyPGresult` call
`malloc` and can return NULL if they fail to allocate memory for the
defaults and the empty result. This is checked with an `Assert`, but
asserts are removed in production builds.

Replace the `Assert` with checks that generate an error in production
builds rather than de-referencing the pointer and causing a crash.
2023-03-16 14:20:54 +01:00
Zoltan Haindrich
790b322b24 Fix DEFAULT value handling in decompress_chunk
The SQL function decompress_chunk did not fill in
default values during its operation.
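
A hedged sketch of the scenario this fixes (the table and column names
are hypothetical):

-- add a column with a DEFAULT after some chunks were already compressed
ALTER TABLE metrics ADD COLUMN unit text DEFAULT 'celsius';

-- decompressing should now fill in the default instead of leaving NULLs
SELECT decompress_chunk(c, true) FROM show_chunks('metrics') c;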

Fixes #5412
2023-03-16 09:16:50 +01:00
Alexander Kuzmenkov
827684f3e2 Use prepared statements for parameterized data node scans
This allows us to avoid replanning the inner query on each new loop,
speeding up the joins.
2023-03-15 18:22:01 +04:00
Dmitry Simonenko
f8022eb332 Add additional tests for compression with HA
Make sure inserts into compressed chunks work when a DN is down

Fix #5039
2023-03-13 17:43:48 +02:00
Sven Klemm
65562f02e8 Support unique constraints on compressed chunks
This patch allows unique constraints on compressed chunks. When
trying to INSERT into compressed chunks with unique constraints,
any potentially conflicting compressed batches will be decompressed
to let Postgres do constraint checking on the INSERT.
With this patch, only INSERT ON CONFLICT DO NOTHING is supported.
For decompression, only segmentby information is considered to
determine conflicting batches. This will be enhanced in a follow-up
patch to also include orderby metadata so that fewer batches need to
be decompressed.
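
A minimal sketch of the supported flow (the hypertable, columns, and
constraint are hypothetical):

ALTER TABLE metrics ADD CONSTRAINT metrics_unique UNIQUE (time, device);
ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_segmentby = 'device');
SELECT compress_chunk(c) FROM show_chunks('metrics') c;

-- conflicting batches are decompressed so Postgres can run the constraint check
INSERT INTO metrics (time, device, value)
VALUES ('2023-03-13 00:00:00+00', 'dev1', 1.0)
ON CONFLICT DO NOTHING;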
2023-03-13 12:04:38 +01:00
Sven Klemm
c02cb76b38 Don't reindex relation during decompress_chunk
Reindexing a relation requires AccessExclusiveLock which prevents
queries on that chunk. This patch changes decompress_chunk to update
the index during decompression instead of reindexing. This patch
does not change the required locks as there are locking adjustments
needed in other places to make it safe to weaken that lock.
2023-03-13 10:58:26 +01:00
Jan Nidzwetzki
356a20777c Handle user-defined FDW options properly
This patch changes the way user-defined FDW options (e.g., startup
costs, per-tuple costs) are handled. So far, these values were retrieved
in apply_fdw_and_server_options() but reset to default values afterward.
2023-03-13 10:39:52 +01:00
Maheedhar PV
5e0391392a Out of on_proc_exit slots on guc license change
Problem:

When the GUC timescaledb.license = 'timescale' is set in the conf file
and a SIGHUP is sent to the postgres process, a reload of the tsl
module is triggered.

This reload happens in two phases: 1. tsl_module_load is called, which
loads the module only if it is not already loaded, and 2. ts_module_init
is called for every ts_license_guc_assign_hook invocation, irrespective
of whether it is a new load. This ts_module_init initialization
function also registers an on_proc_exit callback to be called on exit.

The list of on_proc_exit callbacks is maintained in a fixed array
on_proc_exit_list of size MAX_ON_EXITS (20), which fills up on
repeated SIGHUPs and eventually raises an error.

Fix:

The fix is to make ts_module_init() register the on_proc_exit
callback only when the module is reloaded, not on every init call.

Closes #5233
2023-03-13 06:24:01 +05:30
Alexander Kuzmenkov
e92d5ba748 Add more tests for compression
Unit tests for different data sequences, and SQL test for float4.
2023-03-10 20:34:17 +04:00
Jan Nidzwetzki
f5db023152 Track file trailer only in debug builds
The commit 96574a7 changes the handling of the file_trailer_received
flag. It is now only used in asserts and not in any other kind of logic.
This patch guards the file_trailer_received flag with the
USE_ASSERT_CHECKING macro.
2023-03-10 10:44:53 +01:00
Jan Nidzwetzki
7b8177aa74 Fix file trailer handling in the COPY fetcher
The copy fetcher fetches tuples in batches. When the last element in the
batch was the file trailer, it was not handled correctly: the existing
logic did not perform a PQgetCopyData call in that case. Therefore the
state of the fetcher was not set to EOF and the copy operation was not
correctly finished at that point.

Fixes: #5323
2023-03-09 14:29:06 +01:00
Bharathy
f54dd7b05d Fix SEGMENTBY columns predicates to be pushed down
WHERE clause predicates on SEGMENTBY columns of type text/bytea
using non-equality operators were not pushed down to the Seq Scan
node of the compressed chunk. This patch fixes this issue.
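
For illustration, the kind of predicate that is now pushed down (a
hypothetical hypertable compressed with compress_segmentby = 'device',
where device is a text column):

-- non-equality operator on a text SEGMENTBY column
SELECT * FROM metrics WHERE device > 'dev5';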

Fixes #5286
2023-03-08 19:17:43 +05:30
Erik Nordström
c76a0cff68 Add parallel support for partialize_agg()
Make `partialize_agg()` support parallel query execution. To make this
work, the finalize node needs to combine the individual partials from each
parallel worker, but the final step that turns the resulting partial
into the finished aggregate should not happen. Thus, in the case of
distributed hypertables, each data node can run a parallel query to
compute a partial, and the access node can later combine and finalize
these partials into the final aggregate. Essentially, there will be
one combine step (minus final) on each data node, and then another one
plus final on the access node.

To implement this, the finalize aggregate plan is simply modified to
elide the final step, and to reserialize the partial. It is only
possible to do this at the plan stage; if done at the path stage, the
PostgreSQL planner will hit assertions that assume that the node has
certain values (e.g., it doesn't expect combine Paths to skip the
final step).
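
A hedged usage sketch (the table and columns are hypothetical; the
internal schema name may differ between versions):

-- the partial computed here can now be produced by parallel workers
SELECT time_bucket('1 hour', time) AS bucket,
       _timescaledb_internal.partialize_agg(avg(value)) AS avg_partial
FROM metrics
GROUP BY 1;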
2023-03-08 14:14:25 +01:00
Konstantina Skovola
5a3cacd06f Fix sub-second intervals in hierarchical caggs
Previously we used date_part('epoch', interval) and integer division
internally to determine whether the top cagg's interval is a
multiple of its parent's. This led to precision loss and wrong results
in the case of intervals with sub-second components.

Fixed by using the `ts_interval_value_to_internal` function to convert
intervals to an appropriate integer representation for the division.

Fixes #5277
2023-03-07 13:25:49 +02:00
Dmitry Simonenko
830c37b5b0 Fix concurrent locking with chunk_data_node table
Concurrent inserts into a dist hypertable after a data node was marked
as unavailable would produce a `tuple concurrently deleted` error.

The problem occurs because of missing tuple-level locking during the
scan and the concurrent delete from the chunk_data_node table afterwards,
which should be treated as a `SELECT … FOR UPDATE` case instead.

Based on the fix by @erimatnor.

Fix #5153
2023-03-06 18:40:59 +02:00
Ildar Musin
4c0075010d Add hooks for hypertable drops
To properly clean up the OSM catalog we need a way to reliably track
hypertable deletion (including internal hypertables for CAGGS).
2023-03-06 15:10:49 +01:00
Fabrízio de Royes Mello
32046832d3 Fix Hierarchical CAgg chunk_interval_size
When a Continuous Aggregate is created, the `chunk_interval_size` is
defined as the `chunk_interval_size` of the original hypertable
multiplied by a fixed factor of 10.

The problem is that currently, when we create a Hierarchical Continuous
Aggregate, the same factor is applied again, which leads to an
exponentially growing `chunk_interval_size`.

Fixed it by just copying the `chunk_interval_size` from the base
Continuous Aggregate for a Hierarchical Continuous Aggregate.
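
For context, a hedged sketch of a Hierarchical Continuous Aggregate
(all names are hypothetical); with this fix the daily cagg simply
inherits the hourly cagg's `chunk_interval_size`:

CREATE MATERIALIZED VIEW metrics_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', bucket) AS bucket, sum(total) AS total
FROM metrics_hourly  -- itself a continuous aggregate
GROUP BY 1;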

Fixes #5382
2023-03-03 12:31:24 -03:00
Nikhil Sontakke
1423b55d18 Fix perf regression due to DML HA
We added checks via #4846 to handle DML HA when the replication factor
is greater than 1 and a data node is down. Since each insert can go to a
different chunk with a different set of data nodes, we added a check
on every insert to see whether any DNs are unavailable. This increased
CPU consumption on the AN, leading to a performance regression for
RF > 1 code paths.

This patch fixes this regression. We now track if any DN is marked as
unavailable at the start of the transaction and use that information to
reduce unnecessary checks for each inserted row.
2023-03-03 18:34:05 +05:30
Pallavi Sontakke
6be14423d5
Flag test space_constraint.sql.in for release run (#5380)
It was incorrectly flagged as requiring a debug build.

Disable-check: force-changelog-changed
2023-03-03 15:52:34 +05:30
Erik Nordström
386d31bc6e Make copy fetcher more async
Make the copy fetcher more asynchronous by separating the sending of
the request for data from the receiving of the response. By doing
that, the async append node can send the request to each data node
before it starts reading the first response. This can massively
improve the performance because the response isn't returned until the
remote node has finished executing the query and is ready to return
the first tuple.
2023-03-02 15:07:23 +01:00
Sotiris Stamokostas
750e69ede1 Renamed size_utils.sql
Renamed:
tsl/test/sql/size_utils.sql
tsl/test/expected/size_utils.out
To:
tsl/test/sql/size_utils_tsl.sql
tsl/test/expected/size_utils_tsl.out
because they conflicted with test/sql/size_utils.sql
2023-03-02 13:20:08 +02:00
Jan Nidzwetzki
7887576afa Handle MERGE command in reference join pushdown
At the moment, the MERGE command is not supported on distributed
hypertables. This patch ensures that the join pushdown code ignores the
invocation by the MERGE command.
2023-02-28 15:49:19 +01:00
shhnwz
e6f6eb3ab8 Fix for inconsistent num_chunks
Different num_chunks values were reported by
timescaledb_information.hypertables and
timescaledb_information.chunks.
The view definition of hypertables was
not filtering out dropped and OSM chunks.

Fixes #5338
2023-02-28 16:32:03 +05:30
gayyappan
2f7e0433a9 Create index fails if hypertable has foreign table chunk
We cannot create indexes on foreign tables. This PR modifies
process_index_chunk to skip OSM chunks.
2023-02-27 12:56:52 -05:00
Fabrízio de Royes Mello
152ef02d74 Fix uninitialized bucket_info variable
The `bucket_info` variable is initialized by `caggtimebucketinfo_init`
function called inside the following branch:

`if (rte->relkind == RELKIND_RELATION || rte->relkind == RELKIND_VIEW)`

If for some reason we don't enter this branch, then `bucket_info`
will not be initialized, leading to an uninitialized variable being
returned as `bucket_info` at the end of the `cagg_validate_query`
function.

Fixed it by zero-initializing the `bucket_info` variable when
declaring it.

Found by coverity scan.
2023-02-24 15:54:53 -03:00
noctarius aka Christoph Engelbert
0118e6b952 Support CAGG names in hypertable_(detailed_)size
This small patch adds support for continuous aggregates to
`hypertable_detailed_size` (and with that `hypertable_size`).
It adds an additional check to see whether a continuous aggregate
exists when a hypertable with the given regclass name isn't found.
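
A minimal usage sketch (the continuous aggregate name is hypothetical):

-- both now also accept the name of a continuous aggregate
SELECT hypertable_size('metrics_hourly');
SELECT * FROM hypertable_detailed_size('metrics_hourly');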
2023-02-24 10:48:31 -03:00
Jan Nidzwetzki
e0be9eaa28 Allow pushdown of reference table joins
This patch adds the functionality that is needed to perform distributed,
parallel joins on reference tables on access nodes. This code allows the
pushdown of a join (illustrated after the list below) if:

 * (1) The setting "ts_guc_enable_per_data_node_queries" is enabled
 * (2) The outer relation is a distributed hypertable
 * (3) The inner relation is marked as a reference table
 * (4) The join is a left join or an inner join
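
For illustration, a join of the shape that can now be pushed down to the
data nodes (both tables are hypothetical; `devices` is assumed to be
configured as a reference table):

SELECT m.device, d.location, avg(m.value)
FROM metrics m        -- distributed hypertable (outer relation)
LEFT JOIN devices d   -- reference table (inner relation)
  ON m.device = d.device
GROUP BY m.device, d.location;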
2023-02-23 14:32:12 +01:00
Dmitry Simonenko
f12a361ef7 Add timeout argument to the ping_data_node()
This PR introduces a timeout argument and new logic to the
_timescaledb_internal.ping_data_node() function, which makes it
possible to handle I/O timeouts for unresponsive nodes.
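
A hedged invocation sketch (the data node name is hypothetical; the
timeout is passed as the new second argument):

SELECT _timescaledb_internal.ping_data_node('data_node_1', interval '10 seconds');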

Fix #5312
2023-02-21 19:52:03 +02:00
Mats Kindahl
0cbd7407a6 Get PortalContext when starting job
When executing functions, SPI assumes that `TopTransactionContext` is
used for atomic execution contexts and `PortalContext` is used for
non-atomic contexts. Since jobs need to be able to commit and start
transactions, they execute in a non-atomic context, hence
`PortalContext` will be used, but `PortalContext` is not set when
starting the job. This is not a problem for the PL/pgSQL executor, but
for other executors (such as PL/Python) it would be.

This commit fixes the issue by setting the `PortalContext` variable to
the portal context created for the portal and restoring it (to NULL)
after execution.
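
A hedged sketch of the kind of job where this matters (assumes the
plpython3u extension; the procedure, table, and schedule are hypothetical):

CREATE PROCEDURE my_py_job(job_id int, config jsonb)
LANGUAGE plpython3u AS $$
plpy.execute("UPDATE jobs_audit SET last_run = now()")
plpy.commit()  # transaction control relies on the non-atomic PortalContext
$$;

SELECT add_job('my_py_job', schedule_interval => interval '1 hour');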

Fixes #5326
2023-02-20 10:54:05 +01:00
Fabrízio de Royes Mello
c7f46393e7 Change usage of term nested to hierarchical
To avoid confusing developers, the right name for Continuous
Aggregates on top of other Continuous Aggregates is `Hierarchical
Continuous Aggregates`, so change the usage of the term `nested` to
`hierarchical`.
2023-02-18 11:04:51 -03:00
Nikhil Sontakke
d50de8a72d Fix uninitialized bucket_info.htpartcolno warning
Found by coverity.
2023-02-17 16:42:20 +01:00
Jacob Champion
20e468f40c Fix use of TextDatumGetCString()
TextDatumGetCString() was made typesafe in upstream HEAD (16devel), so
now the compiler catches this. As Tom puts it in ac50f84866:

    "TextDatumGetCString(PG_GETARG_TEXT_P(x))" is formally wrong: a text*
    is not a Datum.  Although this coding will accidentally fail to fail on
    all known platforms, it risks leaking memory if a detoast step is needed,
    unlike "TextDatumGetCString(PG_GETARG_DATUM(x))" which is what's used
    elsewhere.
2023-02-14 07:54:16 -08:00
Alexander Kuzmenkov
fd66f5936a Warn about mismatched chunk cache sizes
Just noticed abysmal INSERT performance when experimenting with one of
our customers' data sets, and it turns out my cache sizes were
misconfigured, leading to constant hypertable chunk cache thrashing.
Show a warning to detect this misconfiguration. Also use more generous
defaults; we're not supposed to run on a microwave (unlike Postgres).
2023-02-14 19:32:41 +04:00
Zoltan Haindrich
9d3866a50e Accept all compression options on caggs
Enable proper handling of the 'compress_segmentby' and 'compress_orderby'
compression options on continuous aggregates.

ALTER MATERIALIZED VIEW test_table_cagg SET (
  timescaledb.compress = true,
  timescaledb.compress_segmentby = 'device_id'
);
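
Both options are now accepted; a hedged variant also setting the order
(the orderby column name here is hypothetical):

ALTER MATERIALIZED VIEW test_table_cagg SET (
  timescaledb.compress = true,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby = 'time_bucket'
);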

Fixes #5161
2023-02-13 22:21:18 +01:00
Sven Klemm
ef25fb9ec7 Add dist_ref_table_join generated test files to .gitignore 2023-02-11 11:31:26 +01:00
Rafia Sabih
ece15d66a4 Enable real time aggregation for caggs with joins 2023-02-10 22:12:29 +05:30
Konstantina Skovola
348796f9d9 Fix next_start calculation for fixed schedules
This patch fixes several issues with next_start calculation.

- Previously, the offset was added twice in some cases.
This is fixed by this patch.

- Additionally, schedule intervals with month components
were not handled correctly.
Internally, time_bucket with origin is used to calculate
the next start. However, in the case of month intervals, the
timestamp calculated for a bucket is always aligned on the first
day of the month, regardless of origin.
Therefore, previously the result was aligned with origin by adding
the difference between origin and its respective time bucket.
This difference was computed as a fixed-length interval in terms
of days and time. That computation occasionally led to an incorrect
next start, for example when a job should be executed on
the last day of a month.
That is fixed by adding an appropriate interval of months to
initial_start and letting Postgres handle this computation properly.

Fixes #5216
2023-02-09 17:57:17 +02:00
Sven Klemm
756ef68d0a Fix compression_hypertable ordering reliance
The hypertable_compression test had an implicit reliance on the
ordering of tuples when querying the materialized results.
This patch makes the ordering explicit in this test.
2023-02-09 15:23:07 +01:00
Alexander Kuzmenkov
063a9dae29 Improve cost model for data node scans
1) Simplify the path generation for the parameterized data node scans.
2) Adjust the data node scan cost if it's an index scan, instead of always
   treating it as a sequential scan.
3) Hard-code the grouping estimation for distributed hypertables, instead
   of using the totally bogus per-column ndistinct value.
4) Add a GUC to disable parameterized data node scans (see the sketch
   below).
5) Add more tests.
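
A hedged sketch of toggling the new GUC (the exact GUC name is an
assumption here):

-- assumed GUC name; disables parameterized data node scans for this session
SET timescaledb.enable_parameterized_data_node_scan = false;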
2023-02-08 16:12:01 +04:00
Zoltan Haindrich
cad2440b58 Compression can't be enabled on caggs
The continuous aggregate creation failed when segmentby/orderby
columns needed quoting.
2023-02-07 21:01:56 +01:00
Rafia Sabih
4cb76bc053 Cosmetic changes to create.c 2023-02-06 22:39:57 +05:30
Sven Klemm
8132908c97 Refactor chunk decompression functions
Restructure the code inside decompress_chunk slightly to make the core
loop reusable by other functions.
2023-02-06 14:52:06 +01:00
Erik Nordström
206056ca12 Fix dist_hypertable test
A previous change accidentally broke the dist_hypertable test so that
it prematurely exited. This change restores the test so that it
executes properly.
2023-02-03 13:35:36 +01:00
Erik Nordström
b81033b835 Make data node command execution interruptible
The function to execute remote commands on data nodes used a blocking
libpq API that doesn't integrate with PostgreSQL interrupt handling,
making it impossible for a user or statement timeout to cancel a
remote command.

Refactor the remote command execution function to use a non-blocking
API and integrate with PostgreSQL signal handling via WaitEventSets.

Partial fix for #4958.

Refactor remote command execution function
2023-02-03 13:15:28 +01:00