Calling the `ts_dist_cmd_invoke_on_data_nodes_using_search_path()` function
without an active transaction allows a connection invalidation event to
happen between applying `search_path` and the actual command
execution, which leads to an error.
This change introduces a way to ignore connection cache invalidations
using the `remote_connection_cache_invalidation_ignore()` function.
This work is based on @nikkhils original fix and problem research.
Fixes #4022
This commit gives more visibility into job failures by making the
information regarding a job runtime error available in an extension
table (`job_errors`) that users can directly query.
This commit also adds an informational view on top of the table for
convenience.
To prevent the `job_errors` table from growing too large,
a retention job is also set up with a default retention interval
of 1 month. The retention job is registered with a custom check
function that requires that a valid "drop_after" interval be provided
in the config field of the job.
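A sketch of how a user might inspect failures, assuming the informational view is exposed as `timescaledb_information.job_errors` (the view name and column names are assumptions; only the underlying `job_errors` table is named above):

```sql
-- Inspect the most recent job failures via the informational view
SELECT job_id, pid, start_time, finish_time, error_data
FROM timescaledb_information.job_errors
ORDER BY finish_time DESC
LIMIT 10;
```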
The constify code that constifies TIMESTAMPTZ expressions when doing
chunk exclusion did not account for daylight saving time switches,
leading to different calculation outcomes when the timezone changes.
This patch adds a 4 hour safety buffer to any such calculations.
The code added to support VIEWs did not account for the fact that
varno could be from a different nesting level and therefore not
be present in the current range table.
Allow planner chunk exclusion in subqueries. When we decide
whether a query may benefit from constifying now() and encounter a
subquery, peek into the subquery and check whether the constraint
references a hypertable partitioning column.
Fixes #4524
The schema of the base table on which hypertables are created should define
columns with proper data types. As per the PostgreSQL best practices wiki
(https://wiki.postgresql.org/wiki/Don't_Do_This), one should not define
columns as CHAR, VARCHAR, or VARCHAR(N); use the TEXT data type instead.
Similarly, instead of timestamp, one should use timestamptz.
This patch reports a WARNING to the end user when creating hypertables
if the underlying parent table has columns of the above-mentioned data types.
Fixes #4335
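A minimal sketch of a table definition likely to draw such a warning, next to the recommended alternative (table and column names are illustrative):

```sql
-- Likely to draw a WARNING: varchar(N) and timestamp (without time zone)
CREATE TABLE conditions_bad (
    time   timestamp   NOT NULL,  -- prefer timestamptz
    device varchar(64) NOT NULL,  -- prefer text
    value  double precision
);

-- Recommended column types
CREATE TABLE conditions (
    time   timestamptz NOT NULL,
    device text        NOT NULL,
    value  double precision
);
SELECT create_hypertable('conditions', 'time');
```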
This patch adjusts the operator logic for valid space dimension
constraints to no longer look for an exact match on both sides
of the operator but instead allow mismatched datatypes.
Previously a constraint like `col = value` required `col`
and `value` to have matching datatypes. With this change, `col` and
`value` can have different datatypes as long as they have an equality
operator in a btree operator family.
Mismatched datatypes commonly occur when using int8 columns
and comparing them with integer literals. Integer literals default
to int4, so the datatypes would not match unless special care had
been taken in writing the constraints, and the optimization
would therefore never apply in those cases.
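A sketch of the kind of constraint that now qualifies (the table and column names are illustrative):

```sql
-- device_id is int8, while the literal 1 defaults to int4.
-- Previously the datatype mismatch prevented the space-dimension
-- optimization; now the cross-type btree equality operator suffices.
SELECT * FROM metrics WHERE device_id = 1;
```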
Since we do not use our own hypertable expansion for SELECT FOR UPDATE
queries, we need to make sure to add the extra information necessary to
get hashed space partitions working with the native PostgreSQL
inheritance expansion.
This patch adds a new time_bucket_gapfill function that
allows bucketing in a specific timezone.
You can gapfill with an explicit timezone like so:
`SELECT time_bucket_gapfill('1 day', time, 'Europe/Berlin') ...`
Unfortunately this introduces an ambiguity with some previous
call variations when an untyped start/finish argument was passed
to the function. Some queries might need to be adjusted and either
explicitly name the positional argument or resolve the type ambiguity
by casting to the intended type.
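A hedged sketch of how such an ambiguous call could be adjusted (the table and column names are illustrative):

```sql
-- Before: untyped start/finish arguments are now ambiguous
-- SELECT time_bucket_gapfill('1 day', time, '2022-01-01', '2022-02-01') ...

-- Resolve the ambiguity by casting to the intended type
SELECT time_bucket_gapfill('1 day', time,
                           '2022-01-01'::timestamptz,
                           '2022-02-01'::timestamptz) AS day,
       avg(value)
FROM metrics
WHERE time >= '2022-01-01' AND time < '2022-02-01'
GROUP BY day;
```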
This PR introduces a new `distributed` argument to the
create_hypertable() function as well as two new GUCs to
control its default behaviour: `timescaledb.hypertable_distributed_default`
and `timescaledb.hypertable_replication_factor_default`.
The main idea of this change is to allow automatic creation
of the distributed hypertables by default.
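A sketch of how the new argument and GUCs could interact (the GUC values shown are assumptions):

```sql
-- Make hypertables distributed by default
SET timescaledb.hypertable_distributed_default = 'auto';
SET timescaledb.hypertable_replication_factor_default = 1;

-- Or override per call via the new argument
SELECT create_hypertable('conditions', 'time', distributed => true);
```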
Timescale 2.7 released a new version of Continuous Aggregates (#4269)
that stores the final aggregation state instead of the byte array of
the partial aggregate state, offering multiple optimization
opportunities as well as a more compact form.
When upgrading to Timescale 2.7, newly created Continuous Aggregates
use the new format, but existing Continuous Aggregates keep
using the format they were defined with.
This adds a procedure to upgrade existing Continuous Aggregates from
the old format to the new format with a simple call:
`test=# CALL cagg_migrate('conditions_summary_daily');`
Closes #4424
In some cases, entire hypertables can be excluded
at runtime. Some examples:
- `WHERE col @> ANY(subselect)` if the subselect returns an empty set
- `WHERE col op (subselect)` if op is a strict operator and the
  subselect returns an empty set
When qual clauses are not on partition columns, we use
the old chunk exclusion; otherwise we try hypertable exclusion.
Hypertable exclusion is executed once per hypertable,
which is cheaper than the chunk exclusion
that runs once per chunk.
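A sketch of a query shape that can now exclude the whole hypertable at runtime (table names are illustrative):

```sql
-- ">" is a strict operator: if the scalar subselect returns an
-- empty set (i.e. NULL), the qual can never be true, so every
-- chunk of the hypertable is excluded at runtime in one step.
SELECT * FROM metrics
WHERE time > (SELECT max(updated_at) FROM invalidation_log);
```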
Previously users had no way to update the check function
registered with add_job. This commit adds a parameter check_config
to alter_job to allow updating the check function field.
Also, previously the signature expected from a check function was of
the form (job_id, config), and there was no validation
that the given check function had the correct signature.
This commit removes the job_id argument, as it is not required, and
also checks that the check function has the correct signature
when it is registered with add_job, preventing an error from being
thrown at job runtime.
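A hedged sketch of the updated flow (the check function body, job id, and config key are hypothetical; `check_config` is the parameter named above, and the check now receives only the config):

```sql
-- Check function: receives only the job config, no job_id
CREATE FUNCTION my_check(config jsonb) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    IF config IS NULL OR NOT config ? 'drop_after' THEN
        RAISE EXCEPTION 'config must contain drop_after';
    END IF;
END;
$$;

-- Swap the check function on an existing job
SELECT alter_job(1000, check_config => 'my_check(jsonb)'::regprocedure);
```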
The old patch was using old validation functions, but there are already
validation functions that both read and validate the policy, so use
those. Also remove the old `job_config_check` function, since it is
no longer used, and instead add a `job_config_check` that calls the
checking function with the configuration.
At the time of adding or updating policies, it is
checked whether the policies are compatible with each
other and with those already on the CAgg.
These checks are:
- refresh and compression policies should not overlap
- refresh and retention policies should not overlap
- compression and retention policies should not overlap
Co-authored-by: Markos Fountoulakis <markos@timescale.com>
-Add infinity for refresh window range
Now, to create an open-ended refresh policy,
use +/- infinity for end_offset and start_offset
respectively for the refresh policy.
-Add remove_all_policies function
This will remove all the policies on a given
CAgg.
-Remove parameter refresh_schedule_interval
-Fix downgrade scripts
-Fix IF EXISTS case
Co-authored-by: Markos Fountoulakis <markos@timescale.com>
This simplifies the process of adding policies
for CAggs. Now, with a single SQL statement,
all the policies can be added for a given CAgg.
Similarly, all the policies can be removed or modified
via a single SQL statement.
This also adds a new function as well as a view to show all
the policies on a continuous aggregate.
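A hedged sketch of the combined-policy workflow, assuming the new function and view live in an experimental schema (only `remove_all_policies` is named above; `add_policies`, the `policies` view, and all parameter names are assumptions):

```sql
-- Add refresh, compression, and retention policies in one statement
SELECT timescaledb_experimental.add_policies('conditions_summary_daily',
    refresh_start_offset => '1 month'::interval,
    refresh_end_offset   => '1 hour'::interval,
    compress_after       => '2 months'::interval,
    drop_after           => '3 months'::interval);

-- Inspect all policies on the CAgg via the new view
SELECT * FROM timescaledb_experimental.policies
WHERE relation_name = 'conditions_summary_daily';

-- Remove them all with a single call
SELECT timescaledb_experimental.remove_all_policies('conditions_summary_daily');
```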
This patch changes get_git_commit to always return the full hash.
Since different git versions do not agree on the length of the
abbreviated hash, the length was flaky. To make it consistent,
change it to always be the full hash.
When a query has multiple distributed hypertables, the row-by-row
fetcher cannot be used. This patch changes the fetcher selection
logic to throw a better error message in those situations.
Previously the following error would be produced in those situations:
unexpected PQresult status 7 when starting COPY mode
The gapfill mechanism to detect an aggregation group change was
using datumIsEqual to compare the group values. datumIsEqual does
not detoast values, so when one value is toasted and the other
is not, it will not return the correct result. This patch changes
the gapfill code to use the proper equality operator for the type
of the group column instead of datumIsEqual.
This patch fixes the param handling in prepared statements for generic
plans in ChunkAppend making those params usable in chunk exclusion.
Previously those params would not be resolved and therefore not used
for chunk exclusion.
Fixes #3719
When executing multinode queries that initialize the row-by-row fetcher
but never execute it, the node cleanup code would hit an assertion
checking the state of the fetcher. Found by sqlsmith.
A chunk in the frozen state cannot be dropped.
drop_chunks will skip over frozen chunks without erroring.
The internal API drop_chunk will error if you attempt to drop
a chunk without unfreezing it.
This PR also adds a new internal API to unfreeze a chunk.
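A hedged sketch of the resulting behavior (the internal freeze/unfreeze function names and chunk name are assumptions; only `drop_chunks` and `drop_chunk` are named above):

```sql
-- Freeze a chunk (internal API; name assumed)
SELECT _timescaledb_internal.freeze_chunk('_timescaledb_internal._hyper_1_1_chunk');

-- drop_chunks silently skips the frozen chunk
SELECT drop_chunks('conditions', older_than => '1 month'::interval);

-- The internal drop_chunk errors on a frozen chunk;
-- unfreeze first, then drop
SELECT _timescaledb_internal.unfreeze_chunk('_timescaledb_internal._hyper_1_1_chunk');
SELECT _timescaledb_internal.drop_chunk('_timescaledb_internal._hyper_1_1_chunk');
```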
This PR introduces a new SQL function to associate a
hypertable or continuous agg with a custom job. If
this dependency is set up, the job is automatically
deleted when the hypertable/cagg is dropped.
Add _timescaledb_internal.attach_osm_table_chunk.
This treats a pre-existing foreign table as a
hypertable chunk by adding dummy metadata to the
catalog tables.
The "empty" bytea value in a column of a distributed table was
being returned as "null" when selected. The actual value on the
data nodes was being stored appropriately, but the return code path
was converting it into "null" on the AN. This is now handled via
the PQgetisnull() function.
Fixes #3455
Add a parameter `drop_remote_data` to `detach_data_node()` which
allows dropping the hypertable on the data node when detaching
it. This is useful when detaching a data node and then immediately
attaching it again. If the data remains on the data node, the
re-attach will fail with an error complaining that the hypertable
already exists.
The new parameter is analogous to the `drop_database` parameter of
`delete_data_node`. The new parameter is `false` by default for
compatibility and ensures that a data node can be detached without
requiring communication with the data node (e.g., if the data node is
not responding due to a failure).
Closes #4414
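A sketch of the detach-and-reattach cycle this enables (the hypertable and node names are illustrative; `drop_remote_data` is the parameter named above):

```sql
-- Drop the hypertable's data on the data node while detaching,
-- so an immediate re-attach does not fail with "already exists"
SELECT detach_data_node('dn1', 'conditions', drop_remote_data => true);
SELECT attach_data_node('dn1', 'conditions');
```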
This patch transforms constraints on hash-based space partitions to make
them usable by postgres constraint exclusion.
If we have an equality condition on a space partitioning column, we add
a corresponding condition on get_partition_hash on this column. These
conditions match the constraints on chunks, so postgres' constraint
exclusion is able to use them and exclude the chunks.
The following transformations are done:
`device_id = 1`
becomes
`((device_id = 1) AND (_timescaledb_internal.get_partition_hash(device_id) = 242423622))`
`s1 = ANY ('{s1_2,s1_2}'::text[])`
becomes
`((s1 = ANY ('{s1_2,s1_2}'::text[])) AND
(_timescaledb_internal.get_partition_hash(s1) = ANY ('{1583420735,1583420735}'::integer[])))`
These transformations are not visible in EXPLAIN output as we remove
them again after hypertable expansion is done.
Commit dcb7dcc5 removed the constified intermediate values used
during hypertable expansion but only did so completely for PG14.
For PG12 and PG13 some constraints remained in the plan.
Add a parameter `schedule_interval` to retention and
compression policies to allow users to define the schedule
interval. Fall back to previous default if no value is
specified.
Fixes #3806
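A sketch of the new parameter (the hypertable name and intervals are illustrative; `schedule_interval` is the parameter named above):

```sql
-- Run the retention policy every 12 hours instead of the default
SELECT add_retention_policy('conditions', '30 days'::interval,
                            schedule_interval => '12 hours'::interval);

-- Same for compression
SELECT add_compression_policy('conditions', '7 days'::interval,
                              schedule_interval => '12 hours'::interval);
```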
For certain inserts on a distributed hypertable, e.g., involving CTEs
and upserts, plans can be generated that weren't properly handled by
the DataNodeCopy and DataNodeDispatch execution nodes. In particular,
the nodes expect ChunkDispatch as a child node, but PostgreSQL can
sometimes insert a Result node above ChunkDispatch, causing the crash.
Further, behavioral changes in PG14 also caused the DataNodeCopy node
to sometimes wrongly believe a RETURNING clause was present. The check
for returning clauses has been updated to fix this issue.
Fixes #4339
Postgres knows whether a given aggregate is parallel-safe, and creates
parallel aggregation plans based on that. The `partialize_agg` is a
wrapper we use to perform partial aggregation on data nodes. It is a
pure function that produces serialized aggregation state as a result.
Being pure, it doesn't influence parallel safety. This means we don't
need to mark it parallel-unsafe to artificially disable the parallel
plans for partial aggregation. They will be chosen as usual based on
the parallel-safety of the underlying aggregate function.
When dealing with intervals with a month component, timezone changes
can result in multiple-day differences in the outcome of these
calculations due to different month lengths. When dealing with
months we add a 7 day safety buffer.
For all these calculations it is fine if we exclude fewer chunks
than strictly required for the operation; additional exclusion
with exact values will happen in the executor. But under no
circumstances must we exclude too much, because there would be
no way for the executor to get those chunks back.
The initial patch to use now() expressions during planner hypertable
expansion only supported intervals with no day or month component.
This patch adds support for intervals with day component.
If the interval has a day component, the calculation needs
to take into account daylight saving time switches, so a
day is not always exactly 24 hours. We mitigate this by
adding a safety buffer to account for these DST switches when
dealing with intervals with a day component. These calculations
will be repeated with exact values during execution.
Since DST switches seem to range between -1 and 2 hours, we set
the safety buffer to 4 hours.
This patch also refactors the tests since the previous tests
made it hard to tell the feature was working after the constified
values have been removed from the plans.
Commit 35ea80ff added an optimization to enable expressions with
now() to be used during plan-time chunk exclusion by constifying
the now() expression. The added constified constraints were left
in the plan even though they were only required during
hypertable expansion. This patch marks those constified constraints
and removes them once they are no longer required.
Multinode queries use _timescaledb_internal.chunks_in to specify
the chunks from which to select data. The chunk id in
regresscheck-shared is not stable and may differ depending on
execution order leading to flaky tests.
The table metrics_dist1 was only used by a single test and therefore
should not be part of shared_setup but instead be created in the
test that actually uses it. This reduces the execution time of
regresscheck-shared when that test is not run.
This patch fixes the constify_now optimization to ignore Vars of
a different level. Previously this could potentially lead to an
assertion failure, because the varno of such a Var might be bigger
than the number of entries in the range table. Found by sqlsmith.
While we do filter out chunk ids and hypertable ids from the test
output, the output was still unstable when those ids switch between
single and double digit as that changes the length of the query
decorator in EXPLAIN output. This patch removes this decorator
entirely from all shared test output.
The non-superuser needs to have at least REPLICATION privileges. A
new function "subscription_cmd" has been added to allow running
subscription-related commands on data nodes. This function implicitly
upgrades to the bootstrapped superuser and then performs subscription
creation/alteration/deletion commands. It only accepts subscription-
related commands and errors out otherwise.
This implements an optimization to allow now() expression to be
used during plan time chunk exclusions. Since now() is stable it
would not normally be considered for plan time chunk exclusion.
To enable this behaviour we convert `column > now()` expressions
into `column > const AND column > now()`. Assuming that time
always moves forward this is safe even for prepared statements.
This optimization works for SELECT, UPDATE and DELETE.
On hypertables with many chunks this can lead to a considerable
speedup for certain queries.
The following expressions are supported:
- column > now()
- column >= now()
- column > now() - Interval
- column > now() + Interval
- column >= now() - Interval
- column >= now() + Interval
Interval must not have a day or month component as those depend
on timezone settings.
Some microbenchmarks to show the improvements; best of five runs
for each of the queries.
-- hypertable with 1k chunks
-- with optimization
select * from metrics1k where time > now() - '5m'::interval;
Time: 3.090 ms
-- without optimization
select * from metrics1k where time > now() - '5m'::interval;
Time: 145.640 ms
-- hypertable with 5k chunks
-- with optimization
select * from metrics5k where time > now() - '5m'::interval;
Time: 4.317 ms
-- without optimization
select * from metrics5k where time > now() - '5m'::interval;
Time: 775.259 ms
-- hypertable with 10k chunks
-- with optimization
select * from metrics10k where time > now() - '5m'::interval;
Time: 4.853 ms
-- without optimization
select * from metrics10k where time > now() - '5m'::interval;
Time: 1766.319 ms (00:01.766)
-- hypertable with 20k chunks
-- with optimization
select * from metrics20k where time > now() - '5m'::interval;
Time: 6.141 ms
-- without optimization
select * from metrics20k where time > now() - '5m'::interval;
Time: 3321.968 ms (00:03.322)
Speedup with 1k chunks: 47x
Speedup with 5k chunks: 179x
Speedup with 10k chunks: 363x
Speedup with 20k chunks: 540x