34 Commits

Author SHA1 Message Date
Rafia Sabih
a584263179 Fix alter column for compressed table
Enables adding a boolean column with a default value to a compressed
table. This was previously rejected because of the internal
representation of default boolean values such as 'True' or 'False';
additional checks are added to handle these representations.

Fixes #4486
2022-07-27 17:19:01 +02:00
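With this fix, statements like the following sketch (table and column names hypothetical) should succeed on a compressed hypertable:

```sql
-- 'metrics' is a hypothetical compressed hypertable
ALTER TABLE metrics ADD COLUMN is_valid boolean DEFAULT true;
ALTER TABLE metrics ADD COLUMN is_flagged boolean NOT NULL DEFAULT false;
```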
Erik Nordström
9b91665162 Fix crashes in functions using AlterTableInternal
A number of TimescaleDB functions internally call `AlterTableInternal`
to modify tables or indexes. For instance, `compress_chunk` and
`attach_tablespace` act as DDL commands to modify
hypertables. However, crashes occur when these functions are called
via `SELECT * INTO FROM <function_name>` or the equivalent `CREATE
TABLE AS` statement. The crashes happen because these statements are
considered process utility commands and therefore set up an event
trigger context for collecting commands. However, the event trigger
context is not properly set up to record alter table statements in
this code path, thus causing the crashes.

To prevent crashes, wrap `AlterTableInternal` with the event trigger
functions to properly initialize the event trigger context.
2022-05-19 17:37:09 +02:00
Mats Kindahl
f5fd06cabb Ignore invalid relid when deleting hypertable
When running `performDeletion` it is necessary to have a valid relation
id, but when doing a lookup using `ts_hypertable_get_by_id` this might
actually return a hypertable entry pointing to a table that does not
exist because it has been deleted previously. In this case, only the
catalog entry should be removed, but it is not necessary to delete the
actual table.

This scenario can occur if both the hypertable and a compressed table
are deleted as part of running a `sql_drop` event, for example, if a
compressed hypertable is defined inside an extension. In this case, the
compressed hypertable (indeed all tables) will be deleted first, and
the lookup of the compressed hypertable will find it in the metadata
but a lookup of the actual table will fail since the table does not
exist.

Fixes #4140
2022-03-14 14:03:49 +01:00
gayyappan
264540610e Fix tablespace for compressed chunk's index
When a hypertable uses a non-default tablespace (based on
attach_tablespace settings), the compressed chunk's
index was still created in the default tablespace.
This PR fixes this behavior and creates the compressed
chunk and its indexes in the same tablespace.

When move_chunk is executed on a compressed chunk,
move the indexes to the specified destination tablespace.

Fixes #4000
2022-02-14 11:06:10 -05:00
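As a sketch of the scenario (tablespace and table names hypothetical), the fix ensures the compressed chunk's index follows the chunk's tablespace:

```sql
-- attach a non-default tablespace to a hypothetical hypertable
SELECT attach_tablespace('tbsp_history', 'metrics');
-- after compression, the compressed chunk AND its index should
-- now reside in the attached tablespace, not in pg_default
SELECT compress_chunk(c)
  FROM show_chunks('metrics', older_than => INTERVAL '7 days') AS c;
```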
Sven Klemm
93ffec7c10 Allow ALTER TABLE ADD COLUMN with DEFAULT on compressed hypertable
When adding a new column with a default, PostgreSQL doesn't materialize
the default expression but just stores it in the catalog with a flag
signaling that it is missing from the physical row.
Any expressions used as default that do not require materialization
can be allowed on compressed hypertables as well.

The following statements will work on compressed hypertables with
this patch:
ALTER TABLE t ADD COLUMN c1 int DEFAULT 42;
ALTER TABLE t ADD COLUMN c2 text NOT NULL DEFAULT 'abc';
2021-11-03 11:10:54 +01:00
gayyappan
5be6a3e4e9 Support column rename for hypertables with compression enabled
ALTER TABLE <hypertable> RENAME <column_name> TO <new_column_name>
is now supported for hypertables that have compression enabled.

Note: Column renaming is not supported for distributed hypertables.
So this will not work on distributed hypertables that have
compression enabled.
2021-02-19 10:21:50 -05:00
gayyappan
f649736f2f Support ADD COLUMN for compressed hypertables
Support ALTER TABLE .. ADD COLUMN <colname> <typname>
for hypertables with compressed chunks.
2021-01-14 09:32:50 -05:00
Erik Nordström
202692f1ef Make tests use the new continuous aggregate API
Tests are updated to no longer use continuous aggregate options that
will be removed, such as `refresh_lag`, `max_interval_per_job` and
`ignore_invalidation_older_than`. `REFRESH MATERIALIZED VIEW` has also
been replaced with `CALL refresh_continuous_aggregate()` using ranges
that try to replicate the previous refresh behavior.

The materializer test (`continuous_aggregate_materialize`) has been
removed, since this tested the "old" materializer code, which is no
longer used without `REFRESH MATERIALIZED VIEW`. The new API using
`refresh_continuous_aggregate` already allows manual materialization
and there are two previously added tests (`continuous_aggs_refresh`
and `continuous_aggs_invalidate`) that cover the new refresh path in
similar ways.

When updated to use the new refresh API, some of the concurrency
tests, like `continuous_aggs_insert` and `continuous_aggs_multi`, have
slightly different concurrency behavior. This is explained by
different and sometimes more conservative locking. For instance, the
first transaction of a refresh serializes around an exclusive lock on
the invalidation threshold table, even if no new threshold is
written. The previous code took the heavier lock only once, and only
if a new threshold was written. This new, stricter locking means that
insert processes that read the invalidation threshold will block for a
short time when there are concurrent refreshes. However, since this
blocking only occurs during the first transaction of the refresh
(which is quite short), it probably doesn't matter too much in
practice. The relaxing of locks to improve concurrency and performance
can be implemented in the future.
2020-09-11 16:07:21 +02:00
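The shape of the change in the tests can be sketched as follows (view name and window bounds hypothetical):

```sql
-- old API, no longer used in the tests:
--   REFRESH MATERIALIZED VIEW conditions_summary;
-- new API, refreshing an explicit window that approximates
-- the previous refresh behavior:
CALL refresh_continuous_aggregate('conditions_summary',
                                  '2020-01-01', '2020-02-01');
```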
Erik Nordström
07ebd5c9b2 Rename continuous aggregate policy API
This change simplifies the name of the functions for adding and
removing a continuous aggregate policy. The functions are renamed
from:

- `add_refresh_continuous_aggregate_policy`
- `remove_refresh_continuous_aggregate_policy`

to

- `add_continuous_aggregate_policy`
- `remove_continuous_aggregate_policy`

Fixes #2320
2020-09-11 15:22:54 +02:00
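Under the renamed API, adding and removing a policy looks like this sketch (aggregate name and offsets hypothetical):

```sql
-- formerly add_refresh_continuous_aggregate_policy
SELECT add_continuous_aggregate_policy('conditions_summary',
    start_offset      => INTERVAL '1 month',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');

-- formerly remove_refresh_continuous_aggregate_policy
SELECT remove_continuous_aggregate_policy('conditions_summary');
```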
Mats Kindahl
9565cbd0f7 Continuous aggregates support WITH NO DATA
This commit will add support for `WITH NO DATA` when creating a
continuous aggregate and will refresh the continuous aggregate when
creating it unless `WITH NO DATA` is provided.

All test cases are also updated to use `WITH NO DATA`, and an additional
test case is added to verify that both `WITH DATA` and `WITH NO DATA` work
as expected.

Closes #2341
2020-09-11 14:02:41 +02:00
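A minimal sketch of the new option (view, table, and column names hypothetical):

```sql
CREATE MATERIALIZED VIEW conditions_summary
WITH (timescaledb.continuous) AS
  SELECT time_bucket('1 day', time) AS day, avg(temperature)
  FROM conditions
  GROUP BY day
WITH NO DATA;  -- skip the refresh that would otherwise run on creation
```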
gayyappan
97b4d1cae2 Support refresh continuous aggregate policy
Support add and remove continuous aggregate policy functions, and
integrate policy execution with the refresh API for continuous
aggregates.
The old API for continuous aggregates adds a job automatically
for a continuous aggregate. This is an explicit step with the
new API, so that functionality is removed.
Some utility functions are also refactored so that the code can be
shared by multiple policies.
2020-09-01 21:41:00 -04:00
Sven Klemm
4397e57497 Remove job_type from bgw_job table
Due to recent refactoring all policies now use the columns added
with the generic job support so the job_type column is no longer
needed.
2020-09-01 14:49:30 +02:00
Mats Kindahl
c054b381c6 Change syntax for continuous aggregates
We change the syntax for defining continuous aggregates to use `CREATE
MATERIALIZED VIEW` rather than `CREATE VIEW`. The command still creates
a view, even though `CREATE MATERIALIZED VIEW` normally creates a table.  Raise an
error if `CREATE VIEW` is used to create a continuous aggregate and
redirect to `CREATE MATERIALIZED VIEW`.

In a similar vein, `DROP MATERIALIZED VIEW` is used for continuous
aggregates and continuous aggregates cannot be dropped with `DROP
VIEW`.

Continuous aggregates are altered using `ALTER MATERIALIZED VIEW`
rather than `ALTER VIEW`, so we ensure that it works for `ALTER
MATERIALIZED VIEW` and gives an error if you try to use `ALTER VIEW` to
change a continuous aggregate.

Note that we allow `ALTER VIEW ... SET SCHEMA` to be used with the
partial view as well as with the direct view, so this is handled as a
special case.

Fixes #2233

Co-authored-by: Erik Nordström <erik@timescale.com>
Co-authored-by: Mats Kindahl <mats@timescale.com>
2020-08-27 17:16:10 +02:00
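The syntax change can be sketched as follows (view name hypothetical):

```sql
-- continuous aggregates now use the MATERIALIZED VIEW commands:
ALTER MATERIALIZED VIEW conditions_summary
    SET (timescaledb.materialized_only = true);
DROP MATERIALIZED VIEW conditions_summary;
-- ALTER VIEW / DROP VIEW on a continuous aggregate now raise an error,
-- except ALTER VIEW ... SET SCHEMA on the partial and direct views
```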
Sven Klemm
d547d61516 Refactor continuous aggregate policy
This patch modifies the continuous aggregate policy to store its
configuration in the jobs table.
2020-08-11 22:57:02 +02:00
Sven Klemm
0d5f1ffc83 Refactor compress chunk policy
This patch changes the compression policy to store its configuration
in the bgw_job table and removes the bgw_policy_compress_chunks table.
2020-07-30 19:58:37 +02:00
Dmitry Simonenko
fca7e36898 Support moving compressed chunks
Allow move_chunk() to work with an uncompressed chunk and
automatically move the associated compressed chunk to the specified
tablespace.

Block move_chunk() execution for compressed chunks.

Issue: #2067
2020-07-24 19:26:15 +03:00
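A sketch of the supported call (chunk and tablespace names hypothetical); calling move_chunk() directly on the compressed chunk is blocked:

```sql
-- moving an uncompressed chunk also moves its compressed counterpart
SELECT move_chunk(
    chunk => '_timescaledb_internal._hyper_1_4_chunk',
    destination_tablespace => 'tbsp_history');
```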
Dmitry Simonenko
add97fbadf Apply SET TABLESPACE for compressed chunks
Allow ALTER SET TABLESPACE on an uncompressed chunk and
automatically execute it on the associated compressed chunk,
if any. Block SET TABLESPACE command for compressed chunks.

Issue #2068
2020-07-15 15:14:25 +03:00
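As a sketch (chunk and tablespace names hypothetical):

```sql
-- SET TABLESPACE on an uncompressed chunk is propagated to the
-- associated compressed chunk, if one exists
ALTER TABLE _timescaledb_internal._hyper_1_2_chunk
    SET TABLESPACE tbsp_history;
```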
Mats Kindahl
a089843ffd Make table mandatory for drop_chunks
The `drop_chunks` function is refactored to make table name mandatory
for the function. As a result, the function was also refactored to
accept the `regclass` type instead of table name plus schema name and
the parameters were reordered to match the order for `show_chunks`.

The commit also refactors the code to pass the hypertable structure
between internal functions rather than the hypertable relid, and moves
error checks to the PostgreSQL function.  This allows the internal
functions to avoid some lookups and use the information in the
structure directly, and also reports errors earlier instead of first
dropping chunks and then erroring out and rolling back the transaction.
2020-06-17 06:56:50 +02:00
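After the refactor, the call mirrors `show_chunks` (hypertable name hypothetical):

```sql
-- the relation is now a mandatory regclass argument, passed first,
-- matching the parameter order of show_chunks
SELECT show_chunks('conditions', older_than => INTERVAL '3 months');
SELECT drop_chunks('conditions', older_than => INTERVAL '3 months');
```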
Sven Klemm
663463771b Use EXECUTE FUNCTION instead of EXECUTE PROCEDURE
Replace EXECUTE PROCEDURE with EXECUTE FUNCTION because the former
is deprecated in PG11+. Unfortunately some test output will still
have EXECUTE PROCEDURE because pg_get_triggerdef in PG11 still
generates a definition with EXECUTE PROCEDURE.
2020-06-02 17:33:05 +02:00
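The replacement in trigger definitions looks like this sketch (trigger, table, and function names hypothetical):

```sql
CREATE TRIGGER my_trigger
    BEFORE INSERT ON conditions
    FOR EACH ROW
    -- was: EXECUTE PROCEDURE my_trigger_fn()  (deprecated in PG11+)
    EXECUTE FUNCTION my_trigger_fn();
```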
Mats Kindahl
92b6c03e43 Remove cascade option from drop_chunks
This commit removes the `cascade` option from the functions
`drop_chunks` and `add_drop_chunk_policy`, which will now never cascade
drops to dependent objects.  The tests are fixed accordingly and
verbosity turned up to ensure that the dependent objects are printed in
the error details.
2020-06-02 16:08:51 +02:00
Sven Klemm
2ae4592930 Add real-time support to continuous aggregates
This PR adds a new mode for continuous aggregates that we name
real-time aggregates. Unlike the original mode, this new mode
combines materialized data with new data received after the last
refresh has happened. This new mode is the default behaviour
for newly created continuous aggregates.

To upgrade existing continuous aggregates to the new behaviour
the following command needs to be run for all continuous aggregates

ALTER VIEW continuous_view_name SET (timescaledb.materialized_only=false);

To disable this behaviour for newly created continuous aggregates
and get the old behaviour the following command can be run

ALTER VIEW continuous_view_name SET (timescaledb.materialized_only=true);
2020-03-31 22:09:42 +02:00
Matvey Arye
d52b48e0c3 Delete compression policy when drop hypertable
Previously we could have a dangling policy and job referring
to a now-dropped hypertable.

We also block changing the compression options if a policy exists.

Fixes #1570
2020-01-02 16:40:59 -05:00
Matvey Arye
2f7d69f93b Make continuous agg relative to now()
Previously, refresh_lag in continuous aggs was calculated
relative to the maximum timestamp in the table. Change the
semantics so that it is relative to now(). This is more
intuitive.

Requires an integer_now function applied to hypertables
with integer-based time dimensions.
2019-11-21 14:17:37 -05:00
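For integer-based time dimensions, the required `integer_now` function can be sketched as follows (hypertable and function names hypothetical):

```sql
-- returns "now" in the same integer units as the time column
CREATE OR REPLACE FUNCTION unix_now() RETURNS bigint
    LANGUAGE SQL STABLE AS
    $$ SELECT extract(epoch FROM now())::bigint $$;

SELECT set_integer_now_func('events', 'unix_now');
```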
Joshua Lockerman
e9e7c5f38e Add missing tests discovered by Codecov 3
Tests for continuous aggregates over compressed data, which also tests
selecting tableoids from compressed tables.
2019-10-29 19:02:58 -04:00
gayyappan
940d5aa3ac Compression related ddl tests
Trigger tests with compress/decompress_chunk
2019-10-29 19:02:58 -04:00
Matvey Arye
85d30e404d Add ability to turn off compression
Since enabling compression creates limits on the hypertable
(e.g. types of constraints allowed) even if there are no
compressed chunks, we add the ability to turn off compression.
This is only possible if there are no compressed chunks.
2019-10-29 19:02:58 -04:00
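Disabling compression is done through the same table option used to enable it, as in this sketch (hypertable name hypothetical):

```sql
-- allowed only while the hypertable has no compressed chunks
ALTER TABLE metrics SET (timescaledb.compress = false);
```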
Matvey Arye
7380efa0fe Add tests that constraint adding is blocked
Adding constraints to tables that have compression enabled should
be blocked for now.
2019-10-29 19:02:58 -04:00
Matvey Arye
b8a98c1f18 Make compressed chunks use same tablespace as uncompressed
For tablespaces with compressed chunks the semantics are the following:
 - compressed chunks are put into the same tablespace as the
   uncompressed chunk on compression.
 - set tablespace on an uncompressed hypertable cascades to the compressed hypertable and its chunks
 - set tablespace on all chunks is blocked (same as without compression)
 - move chunks on an uncompressed chunk errors
 - move chunks on a compressed chunk works

In the future we will:
 - add a tablespace option to the compress_chunk function and policy (this will override the
   setting of the uncompressed chunk). This will allow changing tablespaces upon compression
 - Note: The current plan is to never honor the setting on the compressed hypertable. In fact,
   we will block setting tablespace on compressed hypertables
2019-10-29 19:02:58 -04:00
Matvey Arye
4d65a01a57 Handle change owner on ht with compression
Pass down the change owner command to the compressed hypertable.
2019-10-29 19:02:58 -04:00
Matvey Arye
c5d4ce7f90 Handle set tablespace on ht with compression
Pass down the set tablespace command to the compressed hypertable.
2019-10-29 19:02:58 -04:00
Matvey Arye
a399a57af9 Block most DDL on hypertable with compression
Block most DDL commands on hypertables with compression enabled.
This restriction will be relaxed over time.

The only alter table commands currently allowed are:
Add/Remove index, Set storage options, clustering index,
set statistics and set tablespace.

We also disallow resetting compression options.
2019-10-29 19:02:58 -04:00
Matvey Arye
8250714a29 Add fixes for Windows
- Fix declaration of functions wrt TSDLLEXPORT consistency
- Empty structs need to be created with '{ 0 }' syntax.
- Alignment sentinels have to use uint64 instead of a struct
  with a 0-size member
- Add some more ORDER BY clauses in the tests to constrain
  the order of results
- Add ANALYZE after running compression in
  transparent-decompression test
2019-10-29 19:02:58 -04:00
Matvey Arye
df4c444551 Delete related rows for compression
This fixes deletion of related rows when we have compressed
hypertables. Namely, we delete rows from:

- compression_chunk_size
- hypertable_compression

We also fix hypertable_compression to handle NULLS correctly.

We add a stub for tests with continuous aggs as well as compression.
But, that's broken for now so it's commented out. Will be fixed
in another PR.
2019-10-29 19:02:58 -04:00
Matvey Arye
0db50e7ffc Handle drops of compressed chunks/hypertables
This commit adds handling for dropping of chunks and hypertables
in the presence of associated compressed objects. If the uncompressed
chunk/hypertable is dropped, then the associated compressed object is
dropped using DROP_RESTRICT unless cascading is explicitly enabled.

Also add a compressed_chunk_id index on compressed tables for
figuring out whether a chunk is compressed or not.

Change a bunch of APIs to use DropBehavior instead of a cascade bool
to be more explicit.

Also test the drop chunks policy.
2019-10-29 19:02:58 -04:00