This resolves an issue when a single hypertable has both constraints
with index tablespaces AND constraints that have no indexes.
chunk_constraint_add_table_constraint() was attempting to add
`USING INDEX TABLESPACE` to all constraints whenever a given hypertable
had any constraint configured to use a tablespace, resulting in a
syntax error and blocking the creation of new chunks.
The solution here is to limit index tablespace lookups to only the
constraint types that use indexes, i.e., primary key and unique, so
that only those constraints will have `USING INDEX TABLESPACE`
appended when necessary.
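As an illustrative sketch (table, constraint, and tablespace names
hypothetical), only index-backed constraints accept the clause:
```
-- Index-backed constraints (primary key, unique) may get the clause:
ALTER TABLE _timescaledb_internal._hyper_1_1_chunk
  ADD CONSTRAINT chunk_pkey PRIMARY KEY ("time", device)
  USING INDEX TABLESPACE history_space;

-- Non-index constraints (e.g., CHECK) must not; appending
-- USING INDEX TABLESPACE here would be a syntax error:
ALTER TABLE _timescaledb_internal._hyper_1_1_chunk
  ADD CONSTRAINT temp_positive CHECK (temp > 0);
```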
Change database names to be unique over the test suite by adding the
test database name in front of the created database names in the test.
This will allow the test to be executed in parallel with other tests
since it will not have conflicting databases in the same cluster.
Previously, a few directories were created for tablespaces, but this
commit changes that to create one directory per test where tablespaces
can be put. This is done by using a directory prefix for each
tablespace; each test should then create a subdirectory under that
prefix for the tablespace. The commit keeps variables for the old
tablespace paths around so that old tests keep working while
transitioning to the new system.
In #2514 a race condition between inserts and `drop_chunks` was fixed,
and this commit repairs the dimension slice table by re-constructing
missing dimension slices from the corresponding constraint
expressions.
Closes #1986
When upgrading from 1.7, it's possible to have retention policies
which overlap with continuous aggregates. These make use of the
cascade_to_materializations parameter to avoid invalidating the
aggregate.
In 2.0 there is no equivalent behavior to prevent the retention from
disrupting the aggregate. So during the 2.0 upgrade, check for any
running retention policies that are dropping chunks still used by a
continuous aggregate and suspend them (scheduled=>false). This will
also print a notice informing the user of what happened and how to
resume the retention policy if that's what they truly want.
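For instance, a suspended policy can later be resumed with `alter_job`
(the job id below is hypothetical):
```
-- Re-enable a retention policy that the upgrade suspended
SELECT alter_job(1015, scheduled => true);
```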
Fixes #2530
Fix a check for a compatible chunk time interval type when creating a
hypertable with a custom time type.
Previously, the check allowed `Interval` type intervals for any
dimension type that is not an integer type, including custom time
types. The check is now changed so that it only accepts an `Interval`
for timestamp and date type dimensions.
A number of related error messages are also cleaned up so that they
are more consistent and conform to the error style guide.
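A sketch of the accepted interval types (schemas hypothetical):
```
-- timestamp and date dimensions accept an INTERVAL
SELECT create_hypertable('metrics', 'time',
       chunk_time_interval => INTERVAL '1 day');
-- integer and custom time dimensions require an integer interval
SELECT create_hypertable('events', 'id',
       chunk_time_interval => 86400);
```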
The database must know the valid time range of a custom time type,
similar to how it knows the time ranges of officially supported time
types. However, the only way to "know" the valid time range of a
custom time type is to assume it is the same as the one of a supported
time type.
A previous commit tried to make such assumptions by finding an
appropriate cast from the custom time type to a supported time
type. However, this fails in case there are multiple casts available
that each could return a different type and range.
This change restricts the choice of valid time ranges to only that of
the bigint time type.
Fixes #2523
Removes the unrelated column schedule_interval from the
timescaledb_information.continuous_aggregates view and simplifies it.
Renames the argument cagg in refresh_continuous_aggregate to
continuous_aggregate, matching add_continuous_aggregate_policy.
Part of #2521
Since the handling of telemetry differs between TimescaleDB versions,
we ignore the scheduled flag in the post-update diff when telemetry is
disabled via an environment variable.
Rename the parameter `hypertable_or_cagg` in the functions
`drop_chunks` and `show_chunks` to `relation`, and change the
parameter name `main_table` to `hypertable` or `relation` depending on
context.
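With the rename, calls look like this (table name hypothetical):
```
SELECT show_chunks('conditions', older_than => INTERVAL '3 months');
SELECT drop_chunks('conditions', older_than => INTERVAL '3 months');
```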
This change will add an invalidation to the
materialization_invalidation_log for any region earlier than the
ignore_invalidation_older_than parameter when updating a continuous
aggregate to 2.0. This is needed as we do not record invalidations
in this region prior to 2.0 and there is no way to ensure the
aggregate is up to date within this range.
Fixes #2450
This patch removes enterprise license support and moves the
move_chunk() function under the community license (TSL).
The license validation code has been reworked and simplified.
The previously used timescaledb.license_key GUC has been renamed to
timescaledb.license.
This change also makes the testing code stricter about the license in
use. The Apache test suite can now test only Apache-licensed
functions.
Fixes #2359
This patch changes the update test to use the same checks for the
clean/updated install and the dumped/restored install.
Previously, only a small subset of the checks would run against the
updated instance, and most of the tests would only run against the
dumped and restored container.
As part of the 2.0 continuous aggregate changes, we are removing the
continuous_aggs_completed_threshold table. However, this may result
in currently running aggregates being considered complete even if
their completed threshold hadn't reached the invalidation threshold.
This change fixes this by adding an entry to the invalidation log
for any such aggregates.
Fixes #2314
This patch fixes the format strings used to construct object names
in tests. The format strings used in those tests would break when
object names are involved that require quoting.
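A minimal illustration of the quoting issue, using PostgreSQL's
format() (table name hypothetical): `%I` quotes identifiers when
needed, `%s` does not.
```
SELECT format('DROP TABLE %I', 'my table');  -- DROP TABLE "my table"
SELECT format('DROP TABLE %s', 'my table');  -- DROP TABLE my table (broken)
```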
The recently added test for hypertable detection used compression,
which is not available in ApacheOnly tests, so we move that test to
regresscheck-t. Additionally, we move the other test in
plan_hypertable_cache to plan_expand_hypertable to reduce the number
of tests.
Rename the `refresh_interval` field in the
`timescaledb_information.continuous_aggregates` view to match the
parameter name in `add_continuous_aggregate_policy`.
Removes the options refresh_lag, max_interval_per_job and
ignore_invalidation_older_than from continuous aggregate creation with
CREATE MATERIALIZED VIEW, since they no longer apply to this
statement. They have been replaced with the corresponding options in
add_continuous_aggregate_policy.
This commit removes only the options; they are still stored in the
catalog and need to be removed from there in a separate PR.
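For example, what used to be view options is now supplied via the
policy call (view name and values hypothetical):
```
SELECT add_continuous_aggregate_policy('conditions_hourly',
       start_offset      => INTERVAL '1 month',
       end_offset        => INTERVAL '1 hour',
       schedule_interval => INTERVAL '30 minutes');
```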
When a hypertable referenced in a subquery was not already in our
hypertable cache, we would fail to detect it as a hypertable, leading
to transparent decompression not working for that hypertable.
When the extension is updated to 2.0, we need to migrate
existing ignore_invalidation_older_than settings to the new
continuous aggregate policy framework.
The ignore_invalidation_older_than setting is mapped to the
start_interval of the refresh policy. If the default value is used, it
is mapped to a NULL start_interval; otherwise it is converted to an
interval value.
When a constraint is backed by an index, as with a unique constraint
or a primary key constraint, the constraint can be renamed by either
ALTER TABLE RENAME CONSTRAINT or ALTER INDEX RENAME. Depending on
which command was used for renaming, different internal metadata
tables would be adjusted, leading to corrupt metadata. This patch
makes ALTER TABLE RENAME CONSTRAINT and ALTER INDEX RENAME adjust the
same metadata tables.
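A minimal sketch (names hypothetical) of the two rename paths that now
keep the metadata consistent:
```
-- Either command can be used to rename an index-backed constraint
ALTER TABLE conditions RENAME CONSTRAINT conditions_pkey TO conditions_pk;
ALTER INDEX conditions_pk RENAME TO conditions_pkey;
```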
This change simplifies the name of the functions for adding and
removing a continuous aggregate policy. The functions are renamed
from:
- `add_refresh_continuous_aggregate_policy`
- `remove_refresh_continuous_aggregate_policy`
to
- `add_continuous_aggregate_policy`
- `remove_continuous_aggregate_policy`
Fixes #2320
This commit adds support for `WITH NO DATA` when creating a continuous
aggregate: the continuous aggregate is refreshed on creation unless
`WITH NO DATA` is provided.
All test cases are updated to use `WITH NO DATA`, and an additional
test case is added to verify that both `WITH DATA` and `WITH NO DATA`
work as expected.
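A sketch of the two variants (schema hypothetical):
```
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
  SELECT time_bucket('1 hour', time) AS bucket, avg(temp) AS avg_temp
  FROM conditions
  GROUP BY bucket
WITH NO DATA;  -- skip the initial refresh; WITH DATA (the default) refreshes
```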
Closes #2341
With the new continuous aggregate API, some of
the parameters used to create a continuous agg are
now obsolete. Remove refresh_lag, max_interval_per_job
and ignore_invalidation_older_than information from
timescaledb_information.continuous_aggregates.
Previously, attaching a tablespace to a hypertable did not set the
tablespace of the hypertable, but setting the tablespace also attached
it. A similar asymmetry occurred when tablespaces were detached.
This means that if a hypertable is created with a tablespace and then
all tablespaces are detached, the chunks will still be put in the
tablespace of the hypertable.
With this commit, attaching a tablespace to a hypertable will set the
tablespace of the hypertable if it does not already have one. Detaching
a tablespace from a hypertable will set the tablespace to the default
tablespace if the tablespace being detached is the tablespace for the
hypertable.
If `detach_tablespace` is called with only a tablespace name, it will
be detached from all tables it is attached to. This commit ensures that
the tablespace for the hypertable is set to the default tablespace if
it was set to the tablespace being detached.
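An illustrative sequence (tablespace and table names hypothetical):
```
SELECT attach_tablespace('disk1', 'conditions');  -- also sets the tablespace
SELECT detach_tablespace('disk1');  -- detaches everywhere; the hypertable
                                    -- falls back to the default tablespace
```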
Fixes #2299
This patch changes the scheduler to ignore telemetry jobs when
telemetry is disabled. With this change telemetry jobs will no
longer use background worker resources when telemetry is disabled.
Time types, like date and timestamps, have limits that aren't the same
as the underlying storage type. For instance, while a timestamp is
stored as an `int64` internally, its max supported time value is not
`INT64_MAX`. Instead, `INT64_MAX` represents `+Infinity` and the
actual largest possible timestamp is close to `INT64_MAX` (but not
`INT64_MAX-1` either). The same applies to min values.
Unfortunately, the time-handling code does not check for these
boundaries; in most cases, overflow when, e.g., bucketing is checked
against the max integer values instead of the type-specific
boundaries. In other cases, overflows simply throw errors instead of
clamping to the boundary values, which would make more sense in many
situations.
Using integer time suffers from similar issues. To take one example,
simply inserting a valid `smallint` value close to the max into a
table with a `smallint` time column fails:
```
INSERT INTO smallint_table VALUES ('32765', 1, 2.0);
ERROR: value "32770" is out of range for type smallint
```
This happens because the code that adds dimensional constraints always
checks for overflow against `INT64_MAX` instead of the type-specific
max value. Therefore, it tries to create a chunk constraint that ends
at `32770`, which is outside the allowed range of `smallint`.
To resolve these issues, several time-related utility functions have
been implemented that, e.g., return type-specific range boundaries and
perform saturating addition and subtraction while clamping to the
supported boundaries.
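With the clamping in place, the failing example from above should work
(schema hypothetical):
```
CREATE TABLE smallint_table(time smallint NOT NULL, device int, value float);
SELECT create_hypertable('smallint_table', 'time', chunk_time_interval => 10);
INSERT INTO smallint_table VALUES (32765, 1, 2.0);  -- no longer overflows
```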
Fixes #2292
Tablespaces are created cluster-wide, which means that tests that
create tablespaces cannot run together with other tests that create
the same tablespaces. This commit makes those tests into solo tests to
avoid collisions with other tablespace-creating tests and also fixes a
test.
This change renames the function to approximate_row_count() and adds
support for regular tables. The function returns a row count estimate
for a single table instead of a table list.
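Usage sketch (table names hypothetical):
```
SELECT approximate_row_count('conditions');   -- hypertable
SELECT approximate_row_count('plain_table');  -- regular table
```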
Support functions for adding and removing a continuous aggregate
policy, and integrate policy execution with the refresh API for
continuous aggregates.
The old API automatically added a job for a continuous aggregate; with
the new API this is an explicit step, so that functionality is
removed.
Refactor some of the utility functions so that the code can be shared
by multiple policies.
The ddl_single test was almost exactly the same as the ddl test except
for 5 statements not part of the ddl_single test. So the ddl_single test
can safely be removed.
An optimization for `time_bucket` transforms expressions of the form
`time_bucket(10, time) < 100` to `time < 100 + 10` in order to do
chunk exclusion and make better use of indexes on the time
column. However, since one bucket is added to the timestamp when doing
this transformation, the timestamp can overflow.
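Conceptually, the planner rewrite looks like this (schema
hypothetical):
```
-- The qual
--   time_bucket(10, time) < 100
-- is transformed for chunk exclusion into
--   time < 100 + 10
SELECT * FROM metrics WHERE time_bucket(10, time) < 100;
```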
While a check for such overflows already exists, it uses `+Infinity`
(INT64_MAX/DT_NOEND) as the upper bound instead of the actual end of
the valid timestamp range. A further complication arises because
TimescaleDB internally converts timestamps to UNIX epoch time, thus
losing a little bit of the valid timestamp range in the process. Dates
are further restricted by the fact that they are internally first
converted to timestamps (thus limited by the timestamp range) and then
converted to UNIX epoch.
This change fixes the overflow issue by only applying the
transformation if the resulting timestamps or dates stay within the
valid (TimescaleDB-specific) ranges.
A test has also been added to show the valid timestamp and date
ranges, both PostgreSQL and TimescaleDB-specific ones.
We change the syntax for defining continuous aggregates to use `CREATE
MATERIALIZED VIEW` rather than `CREATE VIEW`. The command still
creates a view internally, even though a plain `CREATE MATERIALIZED
VIEW` normally creates a table. An error is raised if `CREATE VIEW` is
used to create a continuous aggregate, redirecting the user to `CREATE
MATERIALIZED VIEW`.
In a similar vein, `DROP MATERIALIZED VIEW` is used for continuous
aggregates and continuous aggregates cannot be dropped with `DROP
VIEW`.
Continuous aggregates are altered using `ALTER MATERIALIZED VIEW`
rather than `ALTER VIEW`, so we ensure that it works for `ALTER
MATERIALIZED VIEW` and gives an error if you try to use `ALTER VIEW` to
change a continuous aggregate.
Note that we allow `ALTER VIEW ... SET SCHEMA` to be used with the
partial view as well as with the direct view, so this is handled as a
special case.
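For example (view name hypothetical):
```
ALTER MATERIALIZED VIEW conditions_hourly
  SET (timescaledb.materialized_only = true);
DROP MATERIALIZED VIEW conditions_hourly;
```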
Fixes #2233
Co-authored-by: Erik Nordström <erik@timescale.com>
Co-authored-by: Mats Kindahl <mats@timescale.com>
In the function `ts_hypercube_from_constraints` a hypercube is built
from constraints that reference dimension slices in `dimension_slice`.
During a run of `drop_chunks`, or when a chunk is explicitly dropped
as part of other operations, dimension slices can be removed from this
table, which makes the hypercube reference non-existent dimension
slices and subsequently causes a crash.
This commit fixes this by adding a tuple lock on the dimension slices
that are used to build the hypercube.
If two `drop_chunks` are running concurrently, there can be a race if
dimension slices are removed as a result of removing a chunk. We treat
this case in the same way as if the dimension slice was updated: we
report an error that another session locked the tuple.
Fixes #1986