This commit removes the `cascade` option from the `drop_chunks` and
`add_drop_chunk_policy` functions, which will now never cascade drops
to dependent objects. The tests are updated accordingly, and verbosity
is turned up to ensure that the dependent objects are printed in the
error details.
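For illustration ('conditions' is a placeholder hypertable name):

```sql
-- Drops chunks older than three months; since there is no cascade
-- option anymore, the call errors out if dependent objects exist and
-- lists them in the error details
SELECT drop_chunks(interval '3 months', 'conditions');
```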
This change fixes a couple of issues with the num_dist_tables column of
the timescaledb_information.data_node view. The first fix allows the
column to correctly report 0 when no tables have been created yet
(previously, a NULL table was counted as 1 in this case). The second
fix addresses a bug in the dist_util_remote_hypertable_info function
that caused the code to see only the first hypertable returned. This
second bug also produced incorrect results in many of our usage
reporting views and utilities when there are multiple distributed
hypertables.
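To check the fix (only num_dist_tables is named above; the node_name column is an assumption):

```sql
-- A data node with no distributed tables should now report 0, not 1
SELECT node_name, num_dist_tables
FROM timescaledb_information.data_node;
```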
The timescale clustering code so far has been written referring to the
remote databases as 'servers'. This terminology is a bit overloaded,
and in particular we don't enforce any network topology limitations
that the term 'server' would suggest. In light of this, we've decided
to use the term 'node' instead when referring to the different
databases in a distributed database. Specifically, we refer to the
frontend as an 'access node' and to the backends as 'data nodes',
though we may omit the 'access' or 'data' qualifier where it's
unambiguous. Since the vast bulk of the code so far has been written
for the case of a single access node, almost all instances of 'server'
were references to data nodes. This change updates the code to rename
those instances.
This change adds a new postgres utility function,
`server_hypertable_info`. This function will contact a provided node
and pull down the space information for all the distributed hypertables
on that node.
Additionally, a new view `distributed_server_info` has been added to
timescaledb_information. This view leverages the new
remote_hypertable_data function to display a list of nodes, along with
counts of tables, chunks, and total bytes used by distributed data.
Finally, this change also adds a `hypertable_server_relation_size`
function, which, given the name of a distributed hypertable, will print
the space information for that hypertable on each node of the
distributed database.
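Taken together, these might be used as follows ('conditions' is a placeholder hypertable name):

```sql
-- Per-node overview: tables, chunks, and total bytes of distributed data
SELECT * FROM timescaledb_information.distributed_server_info;

-- Space used by one distributed hypertable, broken down per node
SELECT * FROM hypertable_server_relation_size('conditions');
```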
Try connecting to a server and running `SELECT 1`. It returns true on
success and false on failure. Failure can have many causes: no valid
UserMapping, the server being down, or the `SELECT 1` query failing.
More information about the failure is written to the server log.
The `timescaledb_information.server` view is updated to show server status.
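For example (the status column name here is an assumption):

```sql
-- false indicates the connection or `SELECT 1` check failed;
-- details are written to the server log
SELECT server_name, server_up
FROM timescaledb_information.server;
```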
This adds a skeleton TimescaleDB foreign data wrapper (FDW) for
scale-out clustering. It currently works as a no-op FDW that can be
used for testing, although the intention is to develop it into a
full-blown implementation.
Servers for a scale-out clustering setup can now be added and deleted
with `add_server()` and `delete_server()`, providing a convenience API
for server management.
While similar functionality can be achieved using the standard
PostgreSQL `CREATE SERVER` and `CREATE USER MAPPING` commands, this
new API makes it easier to add clustering servers and user mappings
consistent with the needs of TimescaleDB's particular clustering setup.
The API currently works with the `postgres_fdw` foreign data
wrapper. It will be updated to use our own foreign data wrapper once
it is available.
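A minimal sketch of the API (the parameter names are assumptions; host and database values are placeholders):

```sql
-- Create the foreign server and a matching user mapping in one call
SELECT add_server('node_1', host => 'node1.example.com', database => 'postgres');

-- Remove the server (and its user mapping) again
SELECT delete_server('node_1');
```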
The last_run_success value is reset when a job is started, so we mask
the value while a job's status is 'running'; otherwise the view would
show an incorrect state.
Fixes #1781
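The masking amounts to a CASE over the job stats, roughly like this (a sketch; the view and column names here are assumptions, not the actual definition):

```sql
-- Hide last_run_success while the job runs, since the stored value
-- was already reset at job start and would read as a failure
SELECT job_id,
       CASE WHEN job_status = 'Running' THEN NULL
            ELSE last_run_success
       END AS last_run_success
FROM timescaledb_information.policy_stats;
```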
If `ignore_invalidation_older_than` is undefined, it is set to the
maximum value of the `BIGINT` type. This case was not handled in the
`continuous_aggregates` information view, so the column showed up as a
very strange value. This commit fixes this by checking whether
`ignore_invalidation_older_than` is set to the maximum and, if so,
using `NULL` in the view, which shows up as empty.
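The view-side handling boils down to mapping the `BIGINT` maximum to `NULL`, e.g.:

```sql
-- 9223372036854775807 (BIGINT max) stands in for "undefined";
-- NULLIF turns it into NULL, which displays as an empty column
SELECT NULLIF(9223372036854775807::bigint, 9223372036854775807)
       AS ignore_invalidation_older_than;
```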
This commit adds a `cascade_to_materializations` flag to the scheduled
version of drop_chunks that behaves much like the one in manual
`drop_chunks`: if a hypertable that has a continuous aggregate tries to
drop chunks and this flag is not set, the chunks will not be dropped.
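For illustration, assuming the policy function is `add_drop_chunks_policy` and 'conditions' is a hypertable with a continuous aggregate:

```sql
-- Without cascade_to_materializations => true, chunks still needed by
-- the continuous aggregate are not dropped by the scheduled policy
SELECT add_drop_chunks_policy('conditions', interval '6 months',
                              cascade_to_materializations => true);
```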
This commit switches the remaining JOIN in the continuous_aggs_stats
view to LEFT JOIN. This way we'll still see info from the other columns
even when the background worker has not run yet.
This commit also switches the time fields to output text in the correct
format for the underlying time type.
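Schematically (illustrative table and column names, not the actual view definition):

```sql
-- LEFT JOIN keeps one row per continuous aggregate even when the
-- background worker has not yet written a stats row; the stats
-- columns simply come back NULL
SELECT ca.view_name, js.last_run_started_at
FROM continuous_agg AS ca
LEFT JOIN bgw_job_stats AS js ON js.job_id = ca.job_id;
```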
Add the query definition to
timescaledb_information.continuous_aggregates.
The user query (specified in the CREATE VIEW statement of a continuous
aggregate) is transformed in the process of creating a continuous
aggregate, and this modified query is saved in the pg_rewrite catalog
tables. In order to display the original query, we create an internal
view which is a replica of the user query. This is used to display the
definition in timescaledb_information.continuous_aggregates.
As an alternative, we could save the original user query in our
internal catalogs, but this approach involves replicating a lot of
PostgreSQL code and causes portability problems.
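The definition can then be read back from the view (the definition column name is an assumption):

```sql
-- Original user-facing query of each continuous aggregate
SELECT view_name, view_definition
FROM timescaledb_information.continuous_aggregates;
```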
Add views so that users can see the parameters of the policies they
have created, and a separate view so that they can see which policies
have been created and scheduled on hypertables.
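For example (the view names below are hypothetical placeholders; the commit text does not name them):

```sql
-- Parameters of the policies the user has created
SELECT * FROM timescaledb_information.policies;

-- Policies created and scheduled on hypertables
SELECT * FROM timescaledb_information.policy_stats;
```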
Currently the view displays the current edition, expiry date, and
whether the license is expired. We're not displaying the license key
itself in the view, as it can get rather long; it can be read via SHOW
instead. We also do not display the license's ID, since that is for
internal use.
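For example (assuming the view lives at `timescaledb_information.license`):

```sql
-- Edition, expiry date, and whether the license has expired
SELECT * FROM timescaledb_information.license;

-- The key itself is read via SHOW rather than from the view
SHOW timescaledb.license_key;
```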