To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- restart_background_workers()
- stop_background_workers()
- start_background_workers()
- alter_job_set_hypertable_id(integer,regclass)
To increase schema security we do not want to mix our own internal
objects with user objects. Since chunks are created in the
_timescaledb_internal schema our internal functions should live in
a different dedicated schema. This patch makes the necessary
adjustments for the following functions:
- generate_uuid()
- get_git_commit()
- get_os_info()
- tsl_loaded()
Removed the PG12-specific macros and all the now-dead code. Also updated
the testcases which had workarounds in place to make them compatible
with PG12.
Version 15 of the `pg_dump` program does not log any messages with a
log level below PG_LOG_WARNING to stdout. This check is not present in
version 14, so we see the corresponding tests fail with missing log
information. This patch fixes the issue by suppressing that log output,
so that the tests pass on all versions of PostgreSQL.
Fixes #4832
Timescale 2.7 released a new version of Continuous Aggregate (#4269)
that allows users to effectively create and use indexes on the
materialization hypertable. The inconvenient part is that users have
to discover the associated materialization hypertable in order to
issue a `CREATE INDEX` statement.
This is improved by allowing users to easily create indexes on the
materialization hypertable by simply executing a `CREATE INDEX`
directly on the Continuous Aggregate.
Example:
`CREATE INDEX name_of_the_index ON continuous_aggregate (column);`
Triggers on distributed hypertables can now be renamed, since the
rename statement is now forwarded to data nodes. This also applies to
other objects on such tables, like constraints and indexes, since they
share the same DDL "rename" machinery. Tests are added for these
cases.
For convenience, trigger functions on distributed hypertables will now
also be created automatically on the data nodes. In other words, it is
no longer necessary to pre-create the trigger function on all data
nodes.
This change also fixes an issue with statement-level triggers, which
weren't properly propagated to data nodes during `CREATE TRIGGER`.
Fixes #3238
Change database names to be unique over the test suite by adding the
test database name in front of the created database names in the test.
This will allow the test to be executed in parallel with other tests
since it will not have conflicting databases in the same cluster.
Previously, there were a few directories created for tablespaces, but
this commit changes that to create one directory for each test where
the tablespace can be put. This is done by using a directory prefix for
each tablespace and each test should then create a subdirectory under
that prefix for the tablespace. The commit keeps variables for the old
tablespace paths around so that old tests work while transitioning to
the new system.
testsupport.sql had a reference to TSL code, which fails in
apache-only builds since this file is included in every test run
for every build configuration.
In case there are no stats (number of tuples/pages), we can use two
approaches to estimate relation size: interpolate the relation size
using stats from previous chunks (if they exist), or estimate using
the shared buffer size (the shared buffer size should align with the
chunk size).
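The two fallback strategies above can be sketched as follows. This is
a minimal Python illustration; the function name and the use of a
simple average for interpolation are assumptions, not the actual
planner code:

```python
def estimate_chunk_size(prev_chunk_sizes, shared_buffers_bytes):
    # Hypothetical sketch: when a chunk has no stats, either
    # interpolate from previous chunks or fall back to the shared
    # buffer size, since chunk size should align with it.
    if prev_chunk_sizes:
        # Interpolate: assume the new chunk resembles earlier ones.
        return sum(prev_chunk_sizes) / len(prev_chunk_sizes)
    # No history: assume the chunk roughly fills the shared buffers.
    return shared_buffers_bytes
```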
This refactors the `hypertable_distributed` test to make better use of
the `remote_exec` utility function. The refactoring also makes sure we
actually use space partitioning when testing distributed hypertables.
Deparse a table into a set of SQL commands that can be used to
reconstruct it. Together with the column definitions, it deparses
constraints, indexes, triggers and rules as well. There are some table
types that are not supported: temporary, partitioned, foreign,
inherited and a table that uses options. Row security is also not
supported.
PG12 allows users to add a WHERE clause when copying from a
file into a table. This change adds support for such clauses on
hypertables. It also fixes an issue that would have arisen in cases
where a table being copied into had a trigger that caused a row to
be skipped.
The `pg_dump` command has slightly different informational output
across PostgreSQL versions, which causes problems for tests. This
change makes sure that all tests that use `pg_dump` use the
appropriate wrapper script where we can better control the output to
make it the same across PostgreSQL versions.
Note that the main `pg_dump` test still fails for other reasons that
will be fixed separately.
PostgreSQL 12 changed the log level in client tools, such as
`pg_dump`, which makes some of our tests fail due to different log
level labels.
This change filters and modifies the log level output of `pg_dump` in
earlier PostgreSQL versions to adopt the new PostgreSQL 12 format.
To make tests more stable and to remove some repeated code in the
tests this PR changes the test runner to stop background workers.
Individual tests that need background workers can still start them
and this PR only stops background workers for the test's initial
database; the behaviour for additional databases created during the
tests will not change.
When using `COPY TO` on a hypertable (which copies from the hypertable
to some other destination), nothing will be printed and nothing will be
copied. Since this can be potentially confusing for users, this commit
prints a notice when an attempt is made to copy from a hypertable
directly (not using a query) to some other destination.
When restoring a database, people would encounter errors if
the restore happened after telemetry has run. This is because
an 'exported_uuid' field would then exist and people would encounter
a "duplicate key value" when the restore tried to overwrite it.
We fix this by moving this metadata to a different key
in pre_restore and trying to move it back in post_restore.
If the restore creates an exported_uuid, the restored
value is used and the moved version is simply deleted.
We also remove the error redirection in restore so that errors
will show up in tests in the future.
Fixes #1409.
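The key-moving dance described above can be sketched as follows, a
hedged Python illustration treating the metadata table as a dict (the
backup key name is an assumption, not the actual key used):

```python
def pre_restore(metadata):
    # Move the telemetry UUID to a backup key so the restore's own
    # exported_uuid row does not hit a "duplicate key value" error.
    if "exported_uuid" in metadata:
        metadata["exported_uuid_bak"] = metadata.pop("exported_uuid")

def post_restore(metadata):
    # If the restore created its own exported_uuid, keep it and drop
    # the moved copy; otherwise move the backup back into place.
    if "exported_uuid" in metadata:
        metadata.pop("exported_uuid_bak", None)
    elif "exported_uuid_bak" in metadata:
        metadata["exported_uuid"] = metadata.pop("exported_uuid_bak")
```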
We currently check and throw an error if the version loaded in the
client is different from the installed extension version, however
there is no way to recover from this state in a backend. (There is
no way to load the new version as we cannot unload the old and no
commands can be effectively run). Now, we instead throw a
FATAL error which will cause the client to reconnect so it can load
the proper extension version.
The "Nullable" column in the output of `show_columns` actually shows
NOT NULL constraints on a column, which is the inverse of what the
column name suggests. This changes "Nullable" to "NotNull" to avoid
confusion.
Currently CREATE INDEX creates the indices for all chunks in a single
transaction, which holds a lock on the root hypertable and all chunks. This
means that during CREATE INDEX no new inserts can occur, even if we're not
currently building an index on the table being inserted to.
This commit adds the option to create indices using a separate
transaction for each chunk. This option, used like
CREATE INDEX ON <table> WITH (timescaledb.transaction_per_chunk);
should cause less contention than a regular CREATE INDEX, in exchange
for the possibility that the index will be created on only some, or none,
of the chunks, if the command fails partway through. The command holds a lock on
the root index used as a template throughout, and each per-chunk transaction
additionally locks only the chunk being indexed. This means that chunks which
are not currently being indexed can be inserted into, and new chunks can be
created while the CREATE INDEX command is in progress.
To enable detection of failed transaction_per_chunk CREATE INDEX commands, the
hypertable's index is marked as invalid while the CREATE INDEX is in progress;
if the command fails partway through, the index will remain invalid. If such an
invalid index is discovered, it can be dropped and recreated to ensure that all
chunks have a copy of the index. In the future, we may add a command to create
indexes on only those chunks which are missing them. Note that even though the
hypertable's index is marked as invalid, new chunks will have a copy of the
index built as normal.
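The per-chunk transaction behaviour and the invalid-index marking can
be sketched as a toy Python model (not the C implementation; class and
function names are illustrative):

```python
class Hypertable:
    def __init__(self, chunks):
        self.chunks = chunks
        self.index_valid = False  # root index starts invalid during build
        self.indexed = []         # chunks whose index has been committed

def create_index_per_chunk(ht, fail_on=None):
    # Each chunk's index is built in its own "transaction"; a failure
    # partway through keeps the already-built chunk indexes but leaves
    # the hypertable's root index marked invalid.
    for chunk in ht.chunks:
        if chunk == fail_on:
            return ht  # command failed; root index stays invalid
        ht.indexed.append(chunk)  # per-chunk transaction commits
    ht.index_valid = True  # every chunk indexed: mark root index valid
    return ht
```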
As part of the refactoring to make this command work, normal index creation was
slightly modified. Instead of getting the column names an index uses
one at a time, we get them all at once at the beginning of index creation;
this allows us to close the hypertable's root table once we've determined all
of them, while we create the index info for each chunk. Secondly, it changes
our function to look up a tablespace, ts_hypertable_get_tablespace_at_offset_from,
to only take a hypertable id instead of the hypertable's entire cache entry;
this function only ever used the id, so this allows us to release the
hypertable cache earlier.
Replace hardcoded database name from regression tests with :TEST_DBNAME
Remove creation of database single_2 from test runner and add it to
bgw_launcher and loader test since no other tests used those
Use SQL comments in test scripts
Fixes get_create_command to produce a valid create_hypertable
call when the column name is a keyword
Add test.execute_sql helper function to test support functions
We've decided to adopt the ts_ prefix on all exported C functions in
order to avoid having symbol conflicts with future postgres functions.
We've already started using this prefix on new functions and this commit
adds the prefix to the old functions.
When timescaledb is installed in template1 and a user with only createdb
privileges creates a database, the user won't be able to dump the
database because of lacking permissions. This patch grants the missing
permissions to PUBLIC for pg_dump to succeed.
We need to grant SELECT to PUBLIC for all tables even those not
marked as being dumped because pg_dump will try to access all
tables initially to detect inheritance chains and then decide
which objects actually need to be dumped.
Suppress NOTICE in tests, either by setting client_min_messages=error or by redirecting pg_restore output (because pg_restore will clobber any client_min_messages value). Also added installation_metadata test back into CMakeList.
Using shell scripts in utils/ along with environment variables we
know to be set up for the test, we can make the dump/restore test
work on remote platforms.
Previously, the pg_dump test was broken because it is not possible to reference psql variables
from inside bash commands run through psql. This is fixed by hardcoding the username passed to
the bash commands inside the test.
Also, we changed the insert-blocking trigger preventing inserts into a hypertable to a non-internal
trigger, because internal triggers are not dumped by pg_dump. We need to dump the trigger so that
it is already in place after a pg_restore, to prevent users from accidentally inserting rows into
a hypertable while timescaledb_restoring=on.
When configuring adaptive chunking and estimating the
chunk_target_size, we should use shared_buffers as an indication of
cache memory instead of trying to estimate based on
effective_cache_memory, system memory or other settings. This is
because shared_buffers should already have been set by the user based
on system memory, and also accurately reflects the cache memory
available to PostgreSQL.
We use a fraction (currently 0.9) of this cache memory as the
chunk_target_size. This assumes no concurrency with other hypertables.
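In short, the estimate is a fixed fraction of shared_buffers; a
one-line sketch (the function name is illustrative):

```python
CACHE_FRACTION = 0.9  # fraction of cache memory used as the target

def estimate_chunk_target_size(shared_buffers_bytes):
    # shared_buffers stands in for available cache memory; take a
    # fixed fraction of it, assuming no concurrent hypertables.
    return int(shared_buffers_bytes * CACHE_FRACTION)
```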
Users can now (optionally) set a target chunk size and TimescaleDB
will try to adapt the interval length of the first open ("time")
dimension in order to reach that target chunk size. If a hypertable
has more than one open dimension, only the first one will have a
dynamically adapting interval.
Users can optionally specify their own function that calculates the
new dimension interval. They can also set a target size of 0 in order
to estimate a suitable target size for a chunk based on available
memory.
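One plausible shape for such an interval-calculation function is a
proportional adjustment; this sketch is an assumption for
illustration, not the exact TimescaleDB algorithm:

```python
def adapt_interval(current_interval, current_chunk_size, target_size):
    # Scale the time interval in proportion to how far the last
    # chunk's size is from the target (hypothetical rule).
    if current_chunk_size <= 0:
        return current_interval  # nothing measured yet; keep as-is
    return current_interval * (target_size / current_chunk_size)
```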
A hypertable's root table should never have any tuples, but it can
acquire tuples by accident if the TimescaleDB extension is not
preloaded or `timescaledb.restoring` is set to ON.
To avoid the above issue, a hypertable's root table now has a
(internal) trigger that generates an error when tuples are
inserted. This preserves the integrity of the hypertable even when
restoring or the extension is not preloaded.
An internal trigger has the advantage of being mostly transparent to
users (e.g., it doesn't show up with \d) and it is not inherited by
chunks, so it needs no special handling to avoid adding it to chunks.
The blocking trigger is added in the update script, and if, in the
process, it is detected that the hypertable's root table has data, the
update will fail with an error and instructions on how to fix it.
This fixes the show_indexes test support function to properly show the
columns of the indexes instead of the table. The function now also
shows the expressions of expression indexes.
Currently, chunk indexes are always created in the tablespace of the
index on the main table (which could be none/default one), even if the
chunks themselves are created in different tablespaces. This is
problematic in a multi-disk setting where each disk is a separate
tablespace where chunks are placed. The chunk indexes might exhaust
the space on the common (often default tablespace) which might not
have a lot of disk space. This also prevents the database, including
index storage, from growing by adding new tablespaces.
Instead, chunk indexes are now created in the "next" tablespace after
that of their chunks to both spread indexes across tablespaces and
avoid colocating indexes with their chunks (for I/O throughput
reasons). To optionally avoid this spreading, one can pin chunk
indexes to a specific tablespace by setting an explicit tablespace on
a main table index.
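The tablespace selection can be sketched as a simple offset rule
(illustrative Python, not the actual C code):

```python
def chunk_index_tablespace(tablespaces, chunk_ts_index, pinned=None):
    # If the main-table index has an explicit tablespace, pin chunk
    # indexes to it; otherwise pick the "next" tablespace after the
    # chunk's own, wrapping around, to spread index I/O.
    if pinned is not None:
        return pinned
    return tablespaces[(chunk_ts_index + 1) % len(tablespaces)]
```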
The extension now works with PostgreSQL 10, while
retaining compatibility with version 9.6.
PostgreSQL 10 has numerous internal changes to functions and
APIs, which necessitates various glue code and compatibility
wrappers to seamlessly retain backwards compatibility with older
versions.
Test output might also differ between versions. In particular,
the psql client generates version-specific output with `\d` and
EXPLAINs might differ due to new query optimizations. The test
suite has been modified as follows to handle these issues. First,
tests now use version-independent functions to query system
catalogs instead of using `\d`. Second, changes have been made to
the test suite to be able to verify some test outputs against
version-dependent reference files.