574 Commits

Tyler Fontaine
1bccedf3f8 Limit Index Tablespace Lookup To p and u Constraints
This resolves an issue when using both constraints with index
tablespaces AND constraints that have no indexes on a single hypertable.

chunk_constraint_add_table_constraint() was attempting to add
`USING INDEX TABLESPACE` to all constraints when a given hypertable had
any constraint configured to use a tablespace, resulting in a SYNTAX
error and blocking the creation of new chunks.

The solution here is to limit index tablespace lookups to only the
constraint types which use indexes, primary key and unique, so that only
those constraints will have `USING INDEX TABLESPACE` prepended when
necessary.
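
A minimal sketch of the failing scenario, assuming hypothetical table,
constraint, and tablespace names (the tablespace is assumed to exist):

```
-- One index-backed constraint (unique, with an index tablespace) plus one
-- index-less constraint (check) on the same hypertable. Before this fix,
-- creating a chunk emitted USING INDEX TABLESPACE for the check constraint
-- too, producing a syntax error.
CREATE TABLE readings (
    "time"    timestamptz NOT NULL,
    device_id int         NOT NULL,
    value     float8,
    CONSTRAINT readings_time_device_key UNIQUE ("time", device_id)
        USING INDEX TABLESPACE tblspc_idx,
    CONSTRAINT readings_value_positive CHECK (value >= 0)
);
SELECT create_hypertable('readings', 'time');
INSERT INTO readings VALUES (now(), 1, 1.0);  -- chunk creation previously failed here
```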
2020-11-04 08:39:44 -06:00
Mats Kindahl
2cf3af1eb6 Fix dist_hypertable test for parallel execution
Change database names to be unique over the test suite by adding the
test database name in front of the created database names in the test.
This will allow the test to be executed in parallel with other tests
since it will not have conflicting databases in the same cluster.

Previously, there were a few directories created for tablespaces, but
this commit changes that to create one directory for each test where
the tablespace can be put. This is done by using a directory prefix for
each tablespace and each test should then create a subdirectory under
that prefix for the tablespace. The commit keeps variables for the old
tablespace paths around so that old tests work while transitioning to
the new system.
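
A hedged psql sketch of the layout described above; the variable name and
paths are illustrative, not necessarily what the test suite actually uses:

```
-- Each test builds its tablespace in a subdirectory under a per-test prefix.
\set test_tablespace_dir :TEST_TABLESPACE_PREFIX '/dist_hypertable'
CREATE TABLESPACE tsp_dist_hypertable LOCATION :'test_tablespace_dir';
```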
2020-10-21 15:28:58 +02:00
Mats Kindahl
03f2fbcf32 Repair dimension slice table on update
In #2514 a race condition between inserts and `drop_chunks` was fixed,
and this commit repairs the dimension slice table by re-constructing
missing dimension slices from the corresponding constraint expressions.

Closes #1986
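
A hedged query illustrating the corruption being repaired: chunk constraints
that reference dimension slices no longer present in the catalog (catalog
layout as of the 2.0 series is assumed):

```
SELECT cc.chunk_id, cc.constraint_name, cc.dimension_slice_id
FROM _timescaledb_catalog.chunk_constraint cc
LEFT JOIN _timescaledb_catalog.dimension_slice ds
       ON ds.id = cc.dimension_slice_id
WHERE cc.dimension_slice_id IS NOT NULL
  AND ds.id IS NULL;
```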
2020-10-19 11:41:11 +02:00
Sven Klemm
5642ccaa1d Fix index creation on hypertables with dropped columns
Fix creating expression indexes on hypertables with dropped columns.
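
A minimal repro sketch of the fixed case, with hypothetical table and column
names:

```
CREATE TABLE metrics ("time" timestamptz NOT NULL, to_drop int, value float8);
SELECT create_hypertable('metrics', 'time');
ALTER TABLE metrics DROP COLUMN to_drop;
-- Expression index on a hypertable with a dropped column; previously failed.
CREATE INDEX metrics_value_sq_idx ON metrics ((value * value));
```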
2020-10-17 20:14:49 +02:00
Brian Rowe
3f23cb64e8 Suspend retention policies with caggs conflicts
When upgrading from 1.7, it's possible to have retention policies
which overlap with continuous aggregates.  These make use of the
cascade_to_materializations parameter to avoid invalidating the
aggregate.

In 2.0 there is no equivalent behavior to prevent the retention from
disrupting the aggregate.  So during the 2.0 upgrade, check for any
running retention policies that are dropping chunks still used by a
continuous aggregate and suspend them (scheduled=>false).  This will
also print a notice informing the user of what happened and how to
resume the retention policy if that's what they truly want.

Fixes #2530
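
A sketch of how a suspended policy can be resumed afterwards; the job id is
hypothetical and would be taken from the notice or from
timescaledb_information.jobs:

```
SELECT alter_job(1001, scheduled => true);
```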
2020-10-16 14:27:03 -07:00
gayyappan
ff560a903c Fix outer join qual propagation
time_bucket_annotate_walker passes an incorrect outer join status
to the function that checks qual eligibility for propagation.

Fixes #2500
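
A rough, hypothetical illustration of the kind of query involved: a
`time_bucket` qual on the outer side of an outer join, whose eligibility for
propagation is what the walker reports:

```
SELECT time_bucket('1 hour', m."time") AS bucket, count(*)
FROM metrics m
LEFT JOIN devices d ON d.id = m.device_id
WHERE time_bucket('1 hour', m."time") > now() - interval '1 day'
GROUP BY 1;
```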
2020-10-16 08:55:30 -04:00
Erik Nordström
ce6387aa90 Allow only integer intervals for custom time types
Fix a check for a compatible chunk time interval type when creating a
hypertable with a custom time type.

Previously, the check allowed `Interval` type intervals for any
dimension type that is not an integer type, including custom time
types. The check is now changed so that it only accepts an `Interval`
for timestamp and date type dimensions.

A number of related error messages are also cleaned up so that they
are more consistent and conform to the error style guide.
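
A sketch of the tightened check, assuming a hypothetical custom time type
`customtime` (with the required casts) backing the dimension:

```
CREATE TABLE events ("time" customtime NOT NULL, value float8);
-- Integer chunk time intervals are accepted for a custom time type...
SELECT create_hypertable('events', 'time', chunk_time_interval => 86400000000);
-- ...while an INTERVAL now raises an error:
-- SELECT create_hypertable('events', 'time', chunk_time_interval => INTERVAL '1 day');
```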
2020-10-15 18:58:01 +02:00
Erik Nordström
c4a91e5ae8 Assume custom time type range is same as bigint
The database must know the valid time range of a custom time type,
similar to how it knows the time ranges of officially supported time
types. However, the only way to "know" the valid time range of a
custom time type is to assume it is the same as the one of a supported
time type.

A previous commit tried to make such assumptions by finding an
appropriate cast from the custom time type to a supported time
type. However, this fails in case there are multiple casts available
that each could return a different type and range.

This change restricts the choice of valid time ranges to only that of
the bigint time type.

Fixes #2523
2020-10-15 18:58:01 +02:00
Sven Klemm
4b4db04c1e Clean up update tests
Refactor update test files and remove obsolete test files.
2020-10-13 18:06:28 +02:00
Sven Klemm
336d8f9c47 Check function linkage in update test
This patch adds a check that all C functions link to the correct
library after an update.
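
A hedged sketch of the kind of check this adds: listing the extension's C
functions together with the shared library they link to, so a stale `probin`
after an update stands out:

```
SELECT p.proname, p.probin
FROM pg_proc p
JOIN pg_depend d
  ON d.classid = 'pg_proc'::regclass
 AND d.objid = p.oid
 AND d.refclassid = 'pg_extension'::regclass
JOIN pg_extension e
  ON e.oid = d.refobjid
 AND e.extname = 'timescaledb'
WHERE p.prolang = (SELECT oid FROM pg_language WHERE lanname = 'c')
ORDER BY p.probin, p.proname;
```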
2020-10-13 18:06:28 +02:00
Ruslan Fomkin
85095b6eef Cleanup public API
Removes the unrelated column schedule_interval from the
timescaledb_information.continuous_aggregates view and simplifies it.
Renames the argument cagg in refresh_continuous_aggregate to
continuous_aggregate, as in add_continuous_aggregate_policy.

Part of #2521
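
The renamed argument in use, with a hypothetical continuous aggregate and
refresh window (the window parameter names follow the released 2.0 API):

```
CALL refresh_continuous_aggregate(
     continuous_aggregate => 'conditions_hourly',
     window_start         => now() - interval '1 week',
     window_end           => now());
```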
2020-10-13 09:41:12 +02:00
Sven Klemm
967a10afcb Fix flaky update test
Disable background workers during update tests to prevent deadlocks
in continuous aggregates on 1.7.x
2020-10-12 16:15:29 +02:00
Sven Klemm
641eb4e86b Ignore details of telemetry job in update test
Since the handling of telemetry differs between timescaledb versions
when telemetry is disabled via environment variable, we ignore the
scheduled flag in the post-update diff.
2020-10-06 01:50:53 +02:00
Erik Nordström
4623db14ad Use consistent column names in views
Make all views that reference hypertables use `hypertable_schema` and
`hypertable_name`.
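
An illustrative query against one such view, assuming the 2.0
`timescaledb_information.chunks` layout:

```
SELECT hypertable_schema, hypertable_name, chunk_schema, chunk_name
FROM timescaledb_information.chunks;
```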
2020-10-05 15:18:47 +02:00
Mats Kindahl
da97ce6e8b Make function parameter names consistent
Renaming the parameter `hypertable_or_cagg` in functions `drop_chunks`
and `show_chunks` to `relation` and changing parameter name from
`main_table` to `hypertable` or `relation` depending on context.
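
The renamed parameters in use; the hypertable name and interval are
illustrative:

```
SELECT drop_chunks(relation => 'conditions', older_than => interval '3 months');
SELECT show_chunks(relation => 'conditions');
```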
2020-10-02 08:52:20 +02:00
Brian Rowe
0703822a83 Create low end invalidation when updating caggs
This change will add an invalidation to the
materialization_invalidation_log for any region earlier than the
ignore_invalidation_older_than parameter when updating a continuous
aggregate to 2.0. This is needed as we do not record invalidations
in this region prior to 2.0 and there is no way to ensure the
aggregate is up to date within this range.

Fixes #2450
2020-10-01 10:39:41 -07:00
gayyappan
ef7f21df6d Modify job_stats and continuous_aggregates view
Use hypertable_schema and hypertable_name instead
of regclass hypertable in job_stats and
continuous_aggregates views.
2020-10-01 11:39:10 -04:00
Dmitry Simonenko
a51aa6d04b Move enterprise features to community
This patch removes enterprise license support and moves the
move_chunk() function under the community license (TSL).

The licensing validation code has been reworked and simplified.
The previously used timescaledb.license_key GUC has been renamed to
timescaledb.license.

This change also makes the testing code stricter about the license in
use. The Apache test suite can now test only apache-licensed
functions.

Fixes #2359
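
The renamed GUC as it would appear in configuration; the value shown is
illustrative:

```
-- postgresql.conf
-- timescaledb.license = 'timescale'
SHOW timescaledb.license;
```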
2020-09-30 15:14:17 +03:00
Sven Klemm
db0e210b8f Block REFRESH MATERIALIZED VIEW on caggs 2020-09-29 16:36:16 +02:00
Sven Klemm
6cc9871be8 Make update test consistent
This patch changes the update test to use the same checks
between clean / updated install and dumped/restored install.
Previously only a small subset of the checks would be run against
the updated instance and most of the tests would only run against
the dumped and restored container.
2020-09-28 17:19:04 +02:00
Sven Klemm
dbb9988eee Fix result ordering in tests
This patch fixes the result sorting in tests that had no ORDER BY
clause or where the ORDER BY clause did not result in a fixed ordering.
2020-09-28 12:15:42 +02:00
Brian Rowe
e79308218a Add invalidations for incomplete aggregates
As part of the 2.0 continuous aggregate changes, we are removing the
continuous_aggs_completed_threshold table. However, this may result
in currently running aggregates being considered complete even if
their completed threshold hadn't reached the invalidation threshold.
This change fixes this by adding an entry to the invalidation log
for any such aggregates.

Fixes #2314
2020-09-25 09:17:53 -07:00
Sven Klemm
eb30e54d92 Fix format strings used in tests
This patch fixes the format strings used to construct object names
in tests. The format strings used in those tests would break when
object names are involved that require quoting.
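
A small illustration of the underlying quoting issue, using a hypothetical
object name:

```
SELECT format('CREATE TABLE %s (t timestamptz)', 'weird name');  -- broken statement
SELECT format('CREATE TABLE %I (t timestamptz)', 'weird name');  -- quotes to "weird name"
```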
2020-09-24 12:54:45 +02:00
Sven Klemm
17cc6f6bd7 Fix ApacheOnly regression test
The recently added test for hypertable detection used compression,
which is not available in ApacheOnly tests, so we move that test
to regresscheck-t. Additionally, we move the other test in
plan_hypertable_cache to plan_expand_hypertable to reduce the number
of tests.
2020-09-18 20:27:56 +02:00
Mats Kindahl
7abe65d87e Fix field name in continuous_aggregate view
Rename the `refresh_interval` field in
`timescaledb_information.continuous_aggregate` view to match the
parameter name in `add_continuous_aggregate_policy`.
2020-09-18 10:13:23 +02:00
gayyappan
c21839ddb9 Add test for tablespaces with views
Add test for chunks and hypertables view that shows
tablespaces.
2020-09-17 16:35:25 -04:00
Ruslan Fomkin
6f1a0bd24a Remove options from continuous aggregate
Removes the options refresh_lag, max_interval_per_job and
ignore_invalidation_older_than from continuous aggregate creation with
CREATE MATERIALIZED VIEW, since they are no longer related to this
statement. They are already replaced with the corresponding options in
add_continuous_aggregate_policy.

This commit removes only options, while the options are still stored
in the catalog and need to be removed from there in a separate PR.
2020-09-14 19:14:04 +02:00
Sven Klemm
c9539b95ea Fix segfault in ALTER VIEW SET
Fix a segfault in ALTER VIEW when trying to SET view options
on normal postgres views.
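
A minimal repro sketch for the fixed crash, using a plain PostgreSQL view:

```
CREATE VIEW plain_view AS SELECT 1 AS x;
ALTER VIEW plain_view SET (security_barrier = true);  -- previously segfaulted
```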
2020-09-14 10:22:50 +02:00
Sven Klemm
e28ce96141 Check sequence values don't get reset in update test 2020-09-13 14:30:43 +02:00
Sven Klemm
b245360502 Fix detection of hypertables in subqueries
When a hypertable was referenced in a subquery and was not already
in our hypertable cache, we would fail to detect it as a hypertable,
leading to transparent decompression not working for that hypertable.
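
A rough, hypothetical shape of an affected query, where the (compressed)
hypertable only appears inside a subquery:

```
SELECT *
FROM (SELECT "time", device_id, value FROM metrics) sub
WHERE sub."time" > now() - interval '1 day';
```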
2020-09-13 14:11:59 +02:00
gayyappan
802524ec20 Migrate ignore_invalidation_older_than for continuous aggregates
When the extension is updated to 2.0, we need to migrate
existing ignore_invalidation_older_than settings to the new
continuous aggregate policy framework.

The ignore_invalidation_older_than setting is mapped to the
start_interval of the refresh policy. If the default value is used,
it is mapped to a NULL start_interval; otherwise it is converted to an
interval value.
2020-09-11 12:51:19 -04:00
Sven Klemm
6d7edb99ba Fix rename constraint/rename index
When a constraint is backed by an index, like a unique constraint
or a primary key constraint, the constraint can be renamed by either
ALTER TABLE RENAME CONSTRAINT or by ALTER INDEX RENAME. Depending
on the command used to rename, different internal metadata tables
would be adjusted, leading to corrupt metadata. This patch makes
ALTER TABLE RENAME CONSTRAINT and ALTER INDEX RENAME adjust the
same metadata tables.
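
The two rename paths that now adjust the same metadata; table, constraint,
and index names are hypothetical:

```
ALTER TABLE conditions RENAME CONSTRAINT conditions_time_key TO conditions_time_unique;
ALTER INDEX conditions_time_unique RENAME TO conditions_time_key;
```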
2020-09-11 16:51:14 +02:00
Erik Nordström
07ebd5c9b2 Rename continuous aggregate policy API
This change simplifies the name of the functions for adding and
removing a continuous aggregate policy. The functions are renamed
from:

- `add_refresh_continuous_aggregate_policy`
- `remove_refresh_continuous_aggregate_policy`

to

- `add_continuous_aggregate_policy`
- `remove_continuous_aggregate_policy`

Fixes #2320
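
Calls using the shortened names; the continuous aggregate name and intervals
are illustrative, and the argument names follow the released 2.0 API:

```
SELECT add_continuous_aggregate_policy('conditions_hourly',
       start_offset      => interval '1 month',
       end_offset        => interval '1 hour',
       schedule_interval => interval '1 hour');
SELECT remove_continuous_aggregate_policy('conditions_hourly');
```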
2020-09-11 15:22:54 +02:00
Mats Kindahl
9565cbd0f7 Continuous aggregates support WITH NO DATA
This commit will add support for `WITH NO DATA` when creating a
continuous aggregate and will refresh the continuous aggregate when
creating it unless `WITH NO DATA` is provided.

All test cases are also updated to use `WITH NO DATA`, and an
additional test case is added to verify that both `WITH DATA` and
`WITH NO DATA` work as expected.

Closes #2341
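
A sketch of a continuous aggregate created with `WITH NO DATA` (hypertable
and view names are hypothetical); omitting it, or using `WITH DATA`,
refreshes the aggregate on creation:

```
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', "time") AS bucket, avg(temperature) AS avg_temp
FROM conditions
GROUP BY time_bucket('1 hour', "time")
WITH NO DATA;
```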
2020-09-11 14:02:41 +02:00
gayyappan
3f7c5d22c7 Continuous aggregate view changes
With the new continuous aggregate API, some of
the parameters used to create a continuous agg are
now obsolete. Remove refresh_lag, max_interval_per_job
and ignore_invalidation_older_than information from
timescaledb_information.continuous_aggregates.
2020-09-09 14:45:17 -04:00
Mats Kindahl
4f32439362 Update tablespace of table on attach and detach
If a tablespace is attached to a hypertable, the tablespace of the
hypertable is not set, but if the tablespace is set, it is also
attached. A similar situation occurs if tablespaces are detached.
This means that if a hypertable is created with a tablespace and then
all tablespaces are detached, the chunks will still be put in the
tablespace of the hypertable.

With this commit, attaching a tablespace to a hypertable will set the
tablespace of the hypertable if it does not already have one. Detaching
a tablespace from a hypertable will set the tablespace to the default
tablespace if the tablespace being detached is the tablespace for the
hypertable.

If `detach_tablespace` is called with only a tablespace name, it will
be detached from all tables it is attached to. This commit ensures that
the tablespace for the hypertable is set to the default tablespace if
it was set to the tablespace being detached.

Fixes #2299
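
The calls whose side effects on the hypertable's own tablespace change with
this commit; tablespace and hypertable names are hypothetical:

```
SELECT attach_tablespace('tsp_slow', 'conditions');  -- now also sets the hypertable's tablespace if unset
SELECT detach_tablespace('tsp_slow');                -- detaches everywhere; resets the tablespace if it was in use
```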
2020-09-09 09:30:07 +02:00
Sven Klemm
eb5420e485 Fix telemetry handling in background worker scheduler
This patch changes the scheduler to ignore telemetry jobs when
telemetry is disabled. With this change telemetry jobs will no
longer use background worker resources when telemetry is disabled.
2020-09-07 18:44:50 +02:00
Erik Nordström
417b66e974 Fix boundary handling in time types and constraints
Time types, like date and timestamps, have limits that aren't the same
as the underlying storage type. For instance, while a timestamp is
stored as an `int64` internally, its max supported time value is not
`INT64_MAX`. Instead, `INT64_MAX` represents `+Infinity` and the
actual largest possible timestamp is close to `INT64_MAX` (but not
`INT64_MAX-1` either). The same applies to min values.

Unfortunately, the time handling code does not check for these boundaries;
in most cases, overflows when, e.g., bucketing are checked
against the max integer values instead of the type-specific boundaries. In
other cases, overflows simply throw errors instead of clamping to the
boundary values, which makes more sense in many situations.

Using integer time suffers from similar issues. To take one example,
simply inserting a valid `smallint` value close to the max into a
table with a `smallint` time column fails:

```
INSERT INTO smallint_table VALUES ('32765', 1, 2.0);
ERROR:  value "32770" is out of range for type smallint
```

This happens because the code that adds dimensional constraints always
checks for overflow against `INT64_MAX` instead of the type-specific
max value. Therefore, it tries to create a chunk constraint that ends
at `32770`, which is outside the allowed range of `smallint`.

To resolve these issues, several time-related utility functions have
been implemented that, e.g., return type-specific range boundaries,
and perform saturated addition and subtraction while clamping to
supported boundaries.

Fixes #2292
2020-09-04 23:27:22 +02:00
Mats Kindahl
74dabc4c77 Make tests with tablespaces solo tests
Tablespaces are created cluster-wide, which means that tests that
create tablespaces cannot run together with other tests that create the
same tablespaces. This commit makes those tests into solo tests to avoid
collisions with other tablespace-creating tests and also fixes a test.
2020-09-04 08:02:59 +02:00
Dmitry Simonenko
e10b437712 Make hypertable_approximate_row_count return row count only
This change renames the function to approximate_row_count() and adds
support for regular tables. It returns a row count estimate for a
table instead of a table list.
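
The renamed function in use, with a hypothetical table name:

```
SELECT approximate_row_count('conditions');  -- works for hypertables and regular tables
```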
2020-09-02 12:18:34 +03:00
gayyappan
97b4d1cae2 Support refresh continuous aggregate policy
Support add and remove continuous aggregate policy functions and
integrate policy execution with the refresh API for continuous
aggregates. The old API for continuous aggregates adds a job
automatically for a continuous aggregate; this is an explicit step with
the new API, so that functionality is removed. Refactor some of the
utility functions so that the code can be shared by multiple policies.
2020-09-01 21:41:00 -04:00
Sven Klemm
4bc88cb694 Merge index and reindex test
Combine index and reindex test to reduce number of test cases.
2020-09-01 23:55:58 +02:00
Sven Klemm
7f0ec49fa2 Merge ddl and ddl_alter_column test 2020-09-01 15:07:17 +02:00
Sven Klemm
3859b5a6d2 Change ddl test to not depend on PG version 2020-09-01 15:07:17 +02:00
Sven Klemm
d4240becda Remove ddl_single test
The ddl_single test was almost exactly the same as the ddl test except
for 5 statements not part of the ddl_single test. So the ddl_single test
can safely be removed.
2020-09-01 15:07:17 +02:00
Sven Klemm
4397e57497 Remove job_type from bgw_job table
Due to recent refactoring, all policies now use the columns added
with the generic job support, so the job_type column is no longer
needed.
2020-09-01 14:49:30 +02:00
Sven Klemm
d19f93e191 Merge telemetry_community and telemetry_compression 2020-08-31 18:48:33 +02:00
Erik Nordström
c5a202476e Fix timestamp overflow in time_bucket optimization
An optimization for `time_bucket` transforms expressions of the form
`time_bucket(10, time) < 100` to `time < 100 + 10` in order to do
chunk exclusion and make better use of indexes on the time
column. However, since one bucket is added to the timestamp when doing
this transformation, the timestamp can overflow.

While a check for such overflows already exists, it uses `+Infinity`
(INT64_MAX/DT_NOEND) as the upper bound instead of the actual end of
the valid timestamp range. A further complication arises because
TimescaleDB internally converts timestamps to UNIX epoch time, thus
losing a little bit of the valid timestamp range in the process. Dates
are further restricted by the fact that they are internally first
converted to timestamps (thus limited by the timestamp range) and then
converted to UNIX epoch.

This change fixes the overflow issue by only applying the
transformation if the resulting timestamps or dates stay within the
valid (TimescaleDB-specific) ranges.

A test has also been added to show the valid timestamp and date
ranges, both PostgreSQL and TimescaleDB-specific ones.
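
An illustrative query of the form the optimization targets (table name
hypothetical); with this change the rewrite is only applied when adding one
bucket cannot overflow the valid range:

```
SELECT * FROM metrics
WHERE time_bucket(interval '10 minutes', "time") < timestamptz '2020-01-01';
-- is planned roughly as:
-- WHERE "time" < timestamptz '2020-01-01' + interval '10 minutes'
```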
2020-08-27 19:16:24 +02:00
Mats Kindahl
c054b381c6 Change syntax for continuous aggregates
We change the syntax for defining continuous aggregates to use `CREATE
MATERIALIZED VIEW` rather than `CREATE VIEW`. The command still creates
a view, even though a plain `CREATE MATERIALIZED VIEW` normally creates
a table. Raise an error if `CREATE VIEW` is used to create a continuous
aggregate and redirect to `CREATE MATERIALIZED VIEW`.

In a similar vein, `DROP MATERIALIZED VIEW` is used for continuous
aggregates and continuous aggregates cannot be dropped with `DROP
VIEW`.

Continuous aggregates are altered using `ALTER MATERIALIZED VIEW`
rather than `ALTER VIEW`, so we ensure that it works for `ALTER
MATERIALIZED VIEW` and gives an error if you try to use `ALTER VIEW` to
change a continuous aggregate.

Note that we allow `ALTER VIEW ... SET SCHEMA` to be used with the
partial view as well as with the direct view, so this is handled as a
special case.

Fixes #2233

Co-authored-by: Erik Nordström <erik@timescale.com>
Co-authored-by: Mats Kindahl <mats@timescale.com>
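
The statements that now apply to continuous aggregates, with a hypothetical
view name (the SET option shown is illustrative):

```
ALTER MATERIALIZED VIEW conditions_hourly SET (timescaledb.materialized_only = true);
DROP MATERIALIZED VIEW conditions_hourly;
```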
2020-08-27 17:16:10 +02:00
Mats Kindahl
769bc31dc2 Lock dimension slice tuple when scanning
In the function `ts_hypercube_from_constraints` a hypercube is built
from constraints which reference dimension slices in `dimension_slice`.
As part of a run of `drop_chunks`, or when a chunk is explicitly dropped
as part of other operations, dimension slices can be removed from this
table, which makes the hypercube reference non-existent dimension slices
and subsequently causes a crash.

This commit fixes this by adding a tuple lock on the dimension slices
that are used to build the hypercube.

If two `drop_chunks` are running concurrently, there can be a race if
dimension slices are removed as a result of removing a chunk. We treat
this case in the same way as if the dimension slice was updated: report
an error that another session locked the tuple.

Fixes #1986
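
A rough SQL-level analogue of the added protection (the actual fix takes the
tuple lock from C via the scanner; the slice id is hypothetical):

```
BEGIN;
SELECT *
FROM _timescaledb_catalog.dimension_slice
WHERE id = 42
FOR KEY SHARE;   -- blocks concurrent deletion of the slice
-- ... build the hypercube from the locked slices ...
COMMIT;
```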
2020-08-26 09:44:20 +02:00