Enables adding a boolean column with a default value to a compressed
table. This was previously failing because of how default boolean
values such as 'True' or 'False' are represented internally, so
additional checks are added to handle them.
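For illustration, a statement of this shape now works when `metrics`
is a compressed hypertable (the table and column names are
hypothetical):
ALTER TABLE metrics ADD COLUMN is_valid boolean DEFAULT false;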
Fixes #4486
A number of TimescaleDB functions internally call `AlterTableInternal`
to modify tables or indexes. For instance, `compress_chunk` and
`attach_tablespace` act as DDL commands to modify
hypertables. However, crashes occur when these functions are called
via `SELECT * INTO <table> FROM <function_name>` or the equivalent
`CREATE TABLE AS` statement. The crashes happen because these
statements are considered process utility commands and therefore set
up an event trigger context for collecting commands. However, the
event trigger context is not properly set up to record alter table
statements in this code path, which causes the crashes.
To prevent crashes, wrap `AlterTableInternal` with the event trigger
functions to properly initialize the event trigger context.
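For example, a statement of this shape previously crashed (the chunk
name is hypothetical):
SELECT * INTO saved_chunk
FROM compress_chunk('_timescaledb_internal._hyper_1_1_chunk');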
When running `performDeletion` it is necessary to have a valid relation
id, but when doing a lookup using `ts_hypertable_get_by_id` this might
actually return a hypertable entry pointing to a table that does not
exist because it has been deleted previously. In this case, only the
catalog entry should be removed, but it is not necessary to delete the
actual table.
This scenario can occur if both the hypertable and a compressed table
are deleted as part of running a `sql_drop` event, for example, if a
compressed hypertable is defined inside an extension. In this case, the
compressed hypertable (indeed all tables) will be deleted first, and
the lookup of the compressed hypertable will find it in the metadata
but a lookup of the actual table will fail since the table does not
exist.
Fixes #4140
When a hypertable uses a non-default tablespace, based
on attach_tablespace settings, the compressed chunk's
index is still created in the default tablespace.
This PR fixes this behavior and creates the compressed
chunk and its indexes in the same tablespace.
When move_chunk is executed on a compressed chunk,
move the indexes to the specified destination tablespace.
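As a sketch (the chunk and tablespace names are hypothetical), the
chunk's indexes now end up in the destination tablespace as well:
SELECT move_chunk(chunk => '_timescaledb_internal._hyper_1_1_chunk',
                  destination_tablespace => 'tablespace2');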
Fixes #4000
When adding a new column with a default, PostgreSQL doesn't
materialize the default expression but just stores it in the catalog
with a flag signaling that it is missing from the physical row.
Any expressions used as default that do not require materialization
can be allowed on compressed hypertables as well.
The following statements will work on compressed hypertables with
this patch:
ALTER TABLE t ADD COLUMN c1 int DEFAULT 42;
ALTER TABLE t ADD COLUMN c2 text NOT NULL DEFAULT 'abc';
ALTER TABLE <hypertable> RENAME <column_name> TO <new_column_name>
is now supported for hypertables that have compression enabled.
Note: Column renaming is not supported for distributed hypertables,
so this will not work on distributed hypertables that have
compression enabled.
Tests are updated to no longer use continuous aggregate options that
will be removed, such as `refresh_lag`, `max_interval_per_job` and
`ignore_invalidation_older_than`. `REFRESH MATERIALIZED VIEW` has also
been replaced with `CALL refresh_continuous_aggregate()` using ranges
that try to replicate the previous refresh behavior.
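For instance, a refresh that previously used `REFRESH MATERIALIZED
VIEW` is now written along these lines (the view name and range are
hypothetical):
CALL refresh_continuous_aggregate('conditions_summary', '2020-01-01', '2020-02-01');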
The materializer test (`continuous_aggregate_materialize`) has been
removed, since this tested the "old" materializer code, which is no
longer used without `REFRESH MATERIALIZED VIEW`. The new API using
`refresh_continuous_aggregate` already allows manual materialization
and there are two previously added tests (`continuous_aggs_refresh`
and `continuous_aggs_invalidate`) that cover the new refresh path in
similar ways.
When updated to use the new refresh API, some of the concurrency
tests, like `continuous_aggs_insert` and `continuous_aggs_multi`, have
slightly different concurrency behavior. This is explained by
different and sometimes more conservative locking. For instance, the
first transaction of a refresh serializes around an exclusive lock on
the invalidation threshold table, even if no new threshold is
written. The previous code only took the heavier lock once, and only
if a new threshold was written. This new, stricter locking means that
insert processes that read the invalidation threshold will block for a
short time when there are concurrent refreshes. However, since this
blocking only occurs during the first transaction of the refresh
(which is quite short), it probably doesn't matter too much in
practice. The relaxing of locks to improve concurrency and performance
can be implemented in the future.
This change simplifies the name of the functions for adding and
removing a continuous aggregate policy. The functions are renamed
from:
- `add_refresh_continuous_aggregate_policy`
- `remove_refresh_continuous_aggregate_policy`
to
- `add_continuous_aggregate_policy`
- `remove_continuous_aggregate_policy`
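Under the new names, a policy is added and removed along these lines
(the continuous aggregate name and intervals are hypothetical):
SELECT add_continuous_aggregate_policy('conditions_summary',
    start_offset => INTERVAL '1 month',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');
SELECT remove_continuous_aggregate_policy('conditions_summary');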
Fixes #2320
This commit will add support for `WITH NO DATA` when creating a
continuous aggregate and will refresh the continuous aggregate when
creating it unless `WITH NO DATA` is provided.
All test cases are also updated to use `WITH NO DATA`, and an
additional test case is added to verify that both `WITH DATA` and
`WITH NO DATA` work as expected.
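A minimal sketch of the new behavior (the table and view names are
hypothetical):
CREATE MATERIALIZED VIEW conditions_summary
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS bucket, avg(temperature)
FROM conditions
GROUP BY 1
WITH NO DATA;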
Closes #2341
Support add and remove continuous aggregate policy functions.
Integrate policy execution with the refresh API for continuous
aggregates.
The old API for continuous aggregates automatically adds a job for a
continuous aggregate. With the new API this is an explicit step, so
this functionality is removed.
Refactor some of the utility functions so that the code can be shared
by multiple policies.
We change the syntax for defining continuous aggregates to use `CREATE
MATERIALIZED VIEW` rather than `CREATE VIEW`. The command still creates
a view, even though `CREATE MATERIALIZED VIEW` normally creates a
table. Raise an error if `CREATE VIEW` is used to create a continuous
aggregate and redirect to `CREATE MATERIALIZED VIEW`.
In a similar vein, `DROP MATERIALIZED VIEW` is used for continuous
aggregates and continuous aggregates cannot be dropped with `DROP
VIEW`.
Continuous aggregates are altered using `ALTER MATERIALIZED VIEW`
rather than `ALTER VIEW`, so we ensure that it works for `ALTER
MATERIALIZED VIEW` and gives an error if you try to use `ALTER VIEW` to
change a continuous aggregate.
Note that we allow `ALTER VIEW ... SET SCHEMA` to be used with the
partial view as well as with the direct view, so this is handled as a
special case.
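For instance, altering and dropping a continuous aggregate now take
the following form (the view name is hypothetical):
ALTER MATERIALIZED VIEW conditions_summary
  SET (timescaledb.materialized_only = true);
DROP MATERIALIZED VIEW conditions_summary;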
Fixes #2233
Co-authored-by: Erik Nordström <erik@timescale.com>
Co-authored-by: Mats Kindahl <mats@timescale.com>
Allow move_chunk() to work with an uncompressed chunk and
automatically move the associated compressed chunk to the specified
tablespace.
Block move_chunk() execution for compressed chunks.
Issue: #2067
Allow ALTER SET TABLESPACE on an uncompressed chunk and
automatically execute it on the associated compressed chunk,
if any. Block SET TABLESPACE command for compressed chunks.
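As a sketch (the chunk and tablespace names are hypothetical), the
tablespace change on the uncompressed chunk now also moves its
compressed counterpart:
ALTER TABLE _timescaledb_internal._hyper_1_1_chunk
  SET TABLESPACE tablespace2;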
Issue #2068
The `drop_chunks` function is refactored to make the table name
mandatory for the function. As a result, the function was also
refactored to accept the `regclass` type instead of table name plus
schema name, and the parameters were reordered to match the order for
`show_chunks`.
The commit also refactors the code to pass the hypertable structure
between internal functions rather than the hypertable relid, and moves
error checks to the PostgreSQL function. This allows the internal
functions to avoid some lookups and use the information in the
structure directly, and also gives errors earlier instead of first
dropping chunks and then raising an error and rolling back the
transaction.
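With the new signature, a call looks along these lines (the
hypertable name and interval are hypothetical):
SELECT drop_chunks('conditions', older_than => INTERVAL '3 months');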
Replace EXECUTE PROCEDURE with EXECUTE FUNCTION because the former
is deprecated in PG11+. Unfortunately some test output will still
have EXECUTE PROCEDURE because pg_get_triggerdef in PG11 still
generates a definition with EXECUTE PROCEDURE.
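For example, trigger definitions in the tests now take the form (the
trigger, table and function names are hypothetical):
CREATE TRIGGER my_trigger BEFORE INSERT ON conditions
  FOR EACH ROW EXECUTE FUNCTION my_trigger_fn();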
This commit removes the `cascade` option from the functions
`drop_chunks` and `add_drop_chunk_policy`, which will now never cascade
drops to dependent objects. The tests are fixed accordingly and
verbosity is turned up to ensure that the dependent objects are printed
in the error details.
This PR adds a new mode for continuous aggregates that we name
real-time aggregates. Unlike the original, this new mode will
combine materialized data with new data received after the last
refresh has happened. This new mode will be the default behaviour
for newly created continuous aggregates.
To upgrade existing continuous aggregates to the new behaviour,
the following command needs to be run for all continuous aggregates:
ALTER VIEW continuous_view_name SET (timescaledb.materialized_only=false);
To disable this behaviour for newly created continuous aggregates
and get the old behaviour, the following command can be run:
ALTER VIEW continuous_view_name SET (timescaledb.materialized_only=true);
Previously we could have a dangling policy and job referring
to a now-dropped hypertable.
We also block changing the compression options if a policy exists.
Fixes #1570
Previously, refresh_lag in continuous aggs was calculated
relative to the maximum timestamp in the table. Change the
semantics so that it is relative to now(). This is more
intuitive.
Requires an integer_now function applied to hypertables
with integer-based time dimensions.
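A minimal sketch of setting up such a function (the function and
hypertable names are hypothetical):
CREATE FUNCTION unix_now() RETURNS bigint
  LANGUAGE SQL STABLE AS $$ SELECT extract(epoch FROM now())::bigint $$;
SELECT set_integer_now_func('conditions', 'unix_now');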
Since enabling compression creates limits on the hypertable
(e.g. types of constraints allowed) even if there are no
compressed chunks, we add the ability to turn off compression.
This is only possible if there are no compressed chunks.
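Assuming a hypertable named `conditions` with no compressed chunks,
compression can be turned off along these lines:
ALTER TABLE conditions SET (timescaledb.compress = false);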
For tablespaces with compressed chunks the semantics are the following:
- compressed chunks get put into the same tablespace as the
uncompressed chunk on compression.
- set tablespace on uncompressed hypertable cascades to compressed hypertable+chunks
- set tablespace on all chunks is blocked (same as w/o compression)
- move chunks on an uncompressed chunk errors
- move chunks on a compressed chunk works
In the future we will:
- add tablespace option to compress_chunk function and policy (this will override the setting
of the uncompressed chunk). This will allow changing tablespaces upon compression
- Note: The current plan is to never listen to the setting on compressed hypertable. In fact,
we will block setting tablespace on compressed hypertables
Block most DDL commands on hypertables with compression enabled.
This restriction will be relaxed over time.
The only alter table commands currently allowed are:
add/remove index, set storage options, clustering index,
set statistics and set tablespace.
We also disallow resetting compression options.
- Fix declaration of functions wrt TSDLLEXPORT consistency
- Empty structs need to be created with '{ 0 }' syntax.
- Alignment sentinels have to use uint64 instead of a struct
with a 0-size member
- Add some more ORDER BY clauses in the tests to constrain
the order of results
- Add ANALYZE after running compression in
transparent-decompression test
This fixes deletion of related rows when we have compressed
hypertables. Namely, we delete rows from:
- compression_chunk_size
- hypertable_compression
We also fix hypertable_compression to handle NULLs correctly.
We add a stub for tests with continuous aggs as well as compression.
But that's broken for now, so it's commented out. It will be fixed
in another PR.
This commit adds handling for dropping of chunks and hypertables
in the presence of associated compressed objects. If the uncompressed
chunk/hypertable is dropped, then the associated compressed object is
dropped using DROP_RESTRICT unless cascading is explicitly enabled.
Also add a compressed_chunk_id index on compressed tables for
figuring out whether a chunk is compressed or not.
Change a bunch of APIs to use DropBehavior instead of a cascade bool
to be more explicit.
Also test the drop chunks policy.