119 Commits

Nikhil
762053431e Implement drop_chunk_replica API
This function drops a chunk on a specified data node. It then removes
the metadata linking the chunk to that data node on the access node.

This function is meant for internal use as part of the "move chunk"
functionality.

If only one chunk replica remains then this function refuses to drop it
to avoid data loss.
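
For illustration, a hedged sketch of a call (assuming the function is
exposed in the `_timescaledb_internal` schema; chunk and node names are
hypothetical):

  -- hypothetical chunk and data node names
  SELECT _timescaledb_internal.drop_chunk_replica(
      '_timescaledb_internal._dist_hyper_1_1_chunk'::regclass,
      'data_node_2');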
2021-07-29 16:53:12 +03:00
Ruslan Fomkin
404f1cdbad Create chunk table from access node
Creates a table for a chunk replica on the given data node. The table
gets the same schema and name as the chunk. The created chunk replica
table is not added to the metadata on the access node or the data node.

The primary goal is to use it during copy/move chunk.
2021-07-29 16:53:12 +03:00
Ruslan Fomkin
28ccecbe7c Create an empty chunk table
Adds an internal API function to create an empty chunk table according
to the given hypertable for the given chunk table name and dimension
slices. The function creates a chunk table inheriting from the
hypertable, so it guarantees the same schema. No TimescaleDB metadata
is updated.

To be able to create the chunk table in a tablespace attached to the
hypertable, this commit allows calculating the tablespace id without
the dimension slice existing in the catalog.

If there is already a chunk, which collides on dimension slices, the
function fails to create the chunk table.

The function will be used internally in multi-node to be able to
replicate a chunk from one data node to another.
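
A hedged sketch of the internal call (assuming it is exposed as
`_timescaledb_internal.create_chunk_table`; hypertable, slices, and
names are illustrative):

  -- slices given as JSON; schema and name for the new chunk table
  SELECT _timescaledb_internal.create_chunk_table(
      'conditions',
      '{"time": [1577836800000000, 1578441600000000]}',
      '_timescaledb_internal',
      '_dist_hyper_1_1_chunk');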
2021-07-29 16:53:12 +03:00
Erik Nordström
98110af75b Constify parameters and return values of core APIs
Harden core APIs by adding the `const` qualifier to pointer parameters
and return values passed by reference. Adding `const` to APIs has
several benefits and potentially reduces bugs.

* Allows core APIs to be called using `const` objects.
* Callers know that objects passed by reference are not modified as a
  side-effect of a function call.
* Returning `const` pointers enforces "read-only" usage of pointers to
  internal objects, forcing users to copy objects when mutating them
  or using explicit APIs for mutations.
* Allows the compiler to apply optimizations and helps static analysis.

Note that these changes are so far only applied to core API
functions. Further work can be done to improve other parts of the
code.
2021-06-14 22:09:10 +02:00
Sven Klemm
fe872cb684 Add policy_recompression procedure
This patch adds a recompress procedure that may be used as a custom
job when compression and recompression should run as separate
background jobs.
2021-05-24 18:03:47 -04:00
gayyappan
4f865f7870 Add recompress_chunk function
After inserts go into a compressed chunk, the chunk is marked as
unordered. This PR adds a new function recompress_chunk that
compresses the data and sets the status back to compressed. Further
optimizations for this function are planned but not part of this PR.

This function can be invoked by calling
SELECT recompress_chunk(<chunk_name>).

The recompress_chunk function is automatically invoked by the
compression policy job when it sees that a chunk is in the unordered
state.
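
For example (chunk name hypothetical):

  SELECT recompress_chunk('_timescaledb_internal._hyper_1_2_chunk');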
2021-05-24 18:03:47 -04:00
gayyappan
93be235d33 Support for inserts into compressed hypertables
Add CompressRowSingleState. This has functions to compress a single
row.
2021-05-24 18:03:47 -04:00
Erik Nordström
f6967b349f Use COPY when executing distributed INSERTs
A new custom plan/executor node is added that implements distributed
INSERT using COPY in the backend (between access node and data
nodes). COPY is significantly faster than the existing method that
sets up prepared INSERT statements on each data node. With COPY,
tuples are streamed to data nodes instead of batching them in order to
"fill" a configured prepared statement. A COPY also avoids the
overhead of having to plan the statement on each data node.

Using COPY doesn't work in all situations, however. Neither ON
CONFLICT nor RETURNING clauses work since COPY lacks support for
them. Still, RETURNING is possible if one knows that the tuples aren't
going to be modified by, e.g., a trigger. When tuples aren't modified,
one can return the original tuples on the access node.

In order to implement the new custom node, some refactoring of the
distributed COPY code has been performed. The basic COPY support
functions have been moved to the connection module so that switching
in and out of COPY_IN mode is part of the core connection
handling. This allows other parts of the code to manage the connection
mode, which is necessary when, e.g., creating a remote chunk. To
create a chunk, the connection needs to be switched out of COPY_IN
mode so that regular SQL statements can be executed again.

Partial fix for #3025.
2021-05-12 16:14:28 +02:00
Dmitry Simonenko
6a1c81b63e Add distributed restore point functionality
This change adds the create_distributed_restore_point() function,
which allows creating a recovery restore point across data nodes.

Fix #2846
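
A sketch of usage (restore point name illustrative):

  -- creates the restore point on the access node and all data nodes
  SELECT * FROM create_distributed_restore_point('before_upgrade');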
2021-02-25 15:39:50 +03:00
gayyappan
5be6a3e4e9 Support column rename for hypertables with compression enabled
ALTER TABLE <hypertable> RENAME <column_name> TO <new_column_name>
is now supported for hypertables that have compression enabled.

Note: Column renaming is not supported for distributed hypertables.
So this will not work on distributed hypertables that have
compression enabled.
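
For example, on a non-distributed hypertable with compression enabled
(table and column names hypothetical):

  ALTER TABLE conditions RENAME COLUMN temp TO temperature;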
2021-02-19 10:21:50 -05:00
gayyappan
f649736f2f Support ADD COLUMN for compressed hypertables
Support ALTER TABLE .. ADD COLUMN <colname> <typname>
for hypertables with compressed chunks.
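
For example (table and column names hypothetical):

  ALTER TABLE conditions ADD COLUMN humidity DOUBLE PRECISION;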
2021-01-14 09:32:50 -05:00
Erik Nordström
2ecb53e7bb Improve memory handling for remote COPY
This change improves memory usage in the `COPY` code used for
distributed hypertables. The following issues have been addressed:

* `PGresult` objects were not cleared, leading to memory leaks.
* The caching of chunk connections didn't work since the lookup
  compared ephemeral chunk pointers instead of chunk IDs. The effect
  was that cached chunk connection state was reallocated every time
  instead of being reused. This likely also caused worse performance.

To address these issues, the following changes are made:

* All `PGresult` objects are now cleared with `PQclear`.
* Lookup for chunk connections now compares chunk IDs instead of chunk
  pointers.
* The per-tuple memory context is moved to the outer processing
  loop to ensure that everything in the loop is allocated on the
  per-tuple memory context, which is also reset at every iteration of
  the loop.
* The use of memory contexts is also simplified to have only one
  memory context for state that should survive across resets of the
  per-tuple memory context.

Fixes #2677
2020-12-02 17:40:44 +01:00
Ruslan Fomkin
791b0a4db7 Add API to refresh continuous aggregate on chunk
Function refresh_continuous_aggregate, which takes a continuous
aggregate and a chunk, is added. It refreshes the continuous aggregate
on the given chunk if there are invalidations. The function can be
used in a transaction, e.g., together with a following drop_chunks. This
allows users to create a user-defined action to refresh and drop
chunks. Therefore, the refresh on drop is removed from drop_chunks.
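
A hedged sketch of such a user-defined action (names hypothetical; the
exact signature may differ):

  BEGIN;
  -- refresh the continuous aggregate on the chunk about to be dropped
  SELECT refresh_continuous_aggregate('conditions_summary',
      '_timescaledb_internal._hyper_1_3_chunk');
  SELECT drop_chunks('conditions', older_than => INTERVAL '30 days');
  COMMIT;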
2020-11-12 08:33:35 +01:00
gayyappan
05319cd424 Support analyze of internal compression table
This commit modifies analyze behavior as follows:
1. When an internal compression table is analyzed,
statistics from the compressed chunk (such as page
count and tuple count) are used to update the
statistics of the corresponding chunk parent, if
they are missing.

2. When the command ANALYZE <hypertable> is executed,
uncompressed chunks are analyzed directly, while for
compressed chunks the raw chunk is skipped and the
corresponding compressed chunk is analyzed instead.
2020-11-11 15:05:14 -05:00
Ruslan Fomkin
307ac6afae Refresh correct partial during refresh on drop
When drop_chunks is called on a hypertable manually or through
a retention policy, it initiates refresh in hypertable's continuous
aggregates. The refresh includes all buckets of each dropped chunk.
Some buckets are only partially inside a dropped chunk. In such cases,
this commit fixes the refresh to cover only the correct partials of the
buckets.

The fix passes the chunk id to the refresh on drop of a chunk. The
chunk id is used to identify and update the correct partials. This is a
workaround to ensure that an invalidation outside the refreshed chunk
does not update the other partial, which lies outside the refreshed
chunk.

The commit also adds the test demonstrating correct refresh.

Fixes #2592
2020-11-06 14:12:30 +01:00
Dmitry Simonenko
a51aa6d04b Move enterprise features to community
This patch removes enterprise license support and moves the
move_chunk() function under the community license (TSL).

The license validation code has been reworked and simplified.
The previously used timescaledb.license_key GUC has been renamed to
timescaledb.license.

This change also makes the testing code more strict about the license
in use. The Apache test suite can now test only apache-licensed
functions.

Fixes #2359
2020-09-30 15:14:17 +03:00
Erik Nordström
c884fe43f0 Remove unused materializer code
The new refresh functionality for continuous aggregates replaces the
old materializer, which means some code is no longer used and should
be removed.

Closes #2395
2020-09-15 17:18:59 +02:00
Erik Nordström
caf64357f4 Handle dropping chunks with continuous aggregates
This change makes the behavior of dropping chunks on a hypertable that
has associated continuous aggregates consistent with other
mutations. In other words, any way of deleting data, irrespective of
whether this is done through a `DELETE`, `DROP TABLE <chunk>` or
`drop_chunks` command, will invalidate the region of deleted data so
that a subsequent refresh of a continuous aggregate will know that the
region is out-of-date and needs to be materialized.

Previously, only a `DELETE` would invalidate continuous aggregates,
while `DROP TABLE <chunk>` and `drop_chunks` did not. In fact, each
way to delete data had different behavior:

1. A `DELETE` would generate invalidations and the materializer would
   update any aggregates to reflect the changes.
2. A `DROP TABLE <chunk>` would not generate invalidations and the
   changes would therefore not be reflected in aggregates.
3. A `drop_chunks` command would not work unless
   `ignore_invalidation_older_than` was set. When enabled, the
   `drop_chunks` would first materialize the data to be dropped and
   then never materialize that region again, unless
   `ignore_invalidation_older_than` was reset. But then the continuous
   aggregates would be in an undefined state since invalidations had
   been ignored.

Due to the different behavior of these mutations, a continuous
aggregate could get "out-of-sync" with the underlying hypertable. This
has now been fixed.

For the time being, the previous behavior of "refresh-on-drop" (i.e.,
materializing the data on continuous aggregates before dropping it) is
retained for `drop_chunks`. However, such "refresh-on-drop" behavior
should probably be revisited in the future since it happens silently
by default without an opt out. There are situations when such silent
refreshing might be undesirable; for instance, let's say the dropped
data had seen erroneous backfill that a user wants to ignore. Another
issue with "refresh-on-drop" is that it only happens for `drop_chunks`
and not other ways of deleting data.

Fixes #2242
2020-09-09 21:14:45 +02:00
gayyappan
97b4d1cae2 Support refresh continuous aggregate policy
Support add and remove continuous aggregate policy functions.
Integrate policy execution with the refresh API for continuous
aggregates.

The old API for continuous aggregates adds a job automatically for a
continuous aggregate. With the new API this is an explicit step, so
this functionality is removed.

Refactor some of the utility functions so that the code can be shared
by multiple policies.
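
With the new API, the policy is added explicitly, e.g. (aggregate name
and offsets illustrative):

  SELECT add_continuous_aggregate_policy('conditions_summary',
      start_offset => INTERVAL '1 month',
      end_offset => INTERVAL '1 hour',
      schedule_interval => INTERVAL '1 hour');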
2020-09-01 21:41:00 -04:00
Erik Nordström
8bd7fcc123 Add TRUNCATE support for continuous aggregates
This change adds the ability to truncate a continuous aggregate and
its source hypertable. Like other mutations (DELETE, UPDATE) on a
continuous aggregate or the underlying hypertable, an invalidation
needs to be added to ensure the aggregate can be refreshed again.

When a hypertable is truncated, an invalidation is added to the
hypertable invalidation log so that all its continuous aggregates will
be invalidated. When a specific continuous aggregate is truncated, the
invalidation is instead added to the continuous aggregate invalidation
log, so that only the one aggregate is invalidated.
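
For example (names hypothetical):

  -- invalidates all continuous aggregates on the hypertable
  TRUNCATE conditions;
  -- invalidates only this continuous aggregate
  TRUNCATE conditions_summary;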
2020-09-01 00:53:38 +02:00
Mats Kindahl
c054b381c6 Change syntax for continuous aggregates
We change the syntax for defining continuous aggregates to use `CREATE
MATERIALIZED VIEW` rather than `CREATE VIEW`. The command still creates
a view, even though `CREATE MATERIALIZED VIEW` normally creates a
table. An error is raised if `CREATE VIEW` is used to create a
continuous aggregate, redirecting the user to `CREATE MATERIALIZED
VIEW`.

In a similar vein, `DROP MATERIALIZED VIEW` is used for continuous
aggregates and continuous aggregates cannot be dropped with `DROP
VIEW`.

Continuous aggregates are altered using `ALTER MATERIALIZED VIEW`
rather than `ALTER VIEW`, so we ensure that it works for `ALTER
MATERIALIZED VIEW` and gives an error if you try to use `ALTER VIEW` to
change a continuous aggregate.

Note that we allow `ALTER VIEW ... SET SCHEMA` to be used with the
partial view as well as with the direct view, so this is handled as a
special case.

Fixes #2233

Co-authored-by: Erik Nordström <erik@timescale.com>
Co-authored-by: Mats Kindahl <mats@timescale.com>
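
With the new syntax, a continuous aggregate is defined along these
lines (schema and names illustrative):

  CREATE MATERIALIZED VIEW conditions_summary
  WITH (timescaledb.continuous) AS
  SELECT time_bucket('1 hour', time) AS bucket,
         avg(temperature) AS avg_temp
  FROM conditions
  GROUP BY bucket;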
2020-08-27 17:16:10 +02:00
Erik Nordström
f8727756a6 Cleanup drop and show chunks
This change removes, simplifies, and unifies code related to
`drop_chunks` and `show_chunks`. As a result of prior changes to
`drop_chunks`, e.g., making table relid mandatory and removing
cascading options, there's an opportunity to clean up and simplify the
rather complex code for dropping and showing chunks.

In particular, `show_chunks` is now consistent with `drop_chunks`; the
relid argument is mandatory, a continuous aggregate can be used in
place of a hypertable, and the input time ranges are checked and
handled in the same way.

Unused code is also removed; for instance, code that cascaded chunk
drops to continuous aggregates remained in the code base even though
the option no longer exists.
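
Both functions now take the same mandatory relation argument and time
ranges, e.g. (names illustrative):

  SELECT show_chunks('conditions', older_than => INTERVAL '7 days');
  SELECT drop_chunks('conditions', older_than => INTERVAL '7 days');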
2020-08-25 14:36:15 +02:00
Sven Klemm
a9c087eb1e Allow scheduling custom functions as bgw jobs
This patch adds functionality to schedule arbitrary functions
or procedures as background jobs.

New functions:

add_job(
  proc REGPROC,
  schedule_interval INTERVAL,
  config JSONB DEFAULT NULL,
  initial_start TIMESTAMPTZ DEFAULT NULL,
  scheduled BOOL DEFAULT true
)

Add a job that runs proc every schedule_interval. Proc can
be either a function or a procedure implemented in any language.

delete_job(job_id INTEGER)

Deletes the job.

run_job(job_id INTEGER)

Execute a job in the current session.
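
A sketch of scheduling a custom procedure (assuming job procedures
receive the job id and config; names and body illustrative):

  CREATE PROCEDURE custom_maintenance(job_id INT, config JSONB)
  LANGUAGE plpgsql AS $$
  BEGIN
    -- illustrative body: just log the invocation
    RAISE NOTICE 'running job % with config %', job_id, config;
  END
  $$;

  SELECT add_job('custom_maintenance', INTERVAL '1 hour',
                 config => '{"param": 1}');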
2020-08-20 11:23:49 +02:00
Sven Klemm
d547d61516 Refactor continuous aggregate policy
This patch modifies the continuous aggregate policy to store its
configuration in the jobs table.
2020-08-11 22:57:02 +02:00
gayyappan
eecc93f3b6 Add hypertable_index_size function
Add a function to compute the size of a specific index on a
hypertable.
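
For example (index name hypothetical):

  SELECT hypertable_index_size('conditions_time_idx');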
2020-08-10 18:00:51 -04:00
Sven Klemm
bb891cf4d2 Refactor retention policy
This patch changes the retention policy to store its configuration
in the bgw_job table and removes the bgw_policy_drop_chunks table.
2020-08-03 22:33:54 +02:00
gayyappan
9f13fb9906 Add functions for compression stats
Add chunk_compression_stats and hypertable_compression_stats
functions to get before/after compression sizes
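
For example (hypertable name hypothetical):

  SELECT * FROM hypertable_compression_stats('conditions');
  SELECT * FROM chunk_compression_stats('conditions');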
2020-08-03 10:19:55 -04:00
Sven Klemm
0d5f1ffc83 Refactor compress chunk policy
This patch changes the compression policy to store its configuration
in the bgw_job table and removes the bgw_policy_compress_chunks table.
2020-07-30 19:58:37 +02:00
Brian Rowe
68aee5144c Rename add_drop_chunks_policy
This change replaces the add_drop_chunks_policy function with
add_retention_policy. It also renames that function's older_than
parameter to retention_window. Likewise, remove_drop_chunks_policy
is renamed to remove_retention_policy.

Fixes #2119
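
For example, using the renamed function and parameter (table name
hypothetical):

  SELECT add_retention_policy('conditions',
      retention_window => INTERVAL '6 months');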
2020-07-30 09:53:21 -07:00
Sven Klemm
6a3d31d045 Rename variable to be according to our code style
This changes the parseState variable in the telemetry code to
parse_state to conform with our code style.
2020-07-30 05:03:52 +02:00
Erik Nordström
84fd3b09b4 Add refresh function for continuous aggregates
This change adds a new refresh function called
`refresh_continuous_aggregate` that allows refreshing a continuous
aggregate over a given window of data, called the "refresh window".

This is the first step in a larger overhaul of the continuous
aggregate feature with the goal of cleaning up the API and separating
policy from the core functionality.

Currently, the refresh function does a brute-force refresh of a window
and it bypasses the whole invalidation framework. Future updates
intend to integrate with this framework (with modifications) to
optimize refreshes. An exclusive lock is taken on the continuous
aggregate's internal materialized hypertable in order to protect
against concurrent refreshing. However, as this serializes refreshes,
we might want to relax this locking in the future to allow, e.g.,
concurrent refreshes of non-overlapping windows.

The new refresh functionality includes basic tests for bad input and
refreshing across different windows. Unfortunately, a bug in the
optimization code for `time_bucket` causes timestamps to overflow the
allowed MAX time. Therefore, refresh windows that are close to the MAX
allowed size are not yet supported or tested.
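
A sketch of refreshing over a window (names and window illustrative;
in released versions this is invoked as a procedure):

  CALL refresh_continuous_aggregate('conditions_summary',
      '2020-01-01', '2020-07-01');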
2020-07-30 01:04:32 +02:00
gayyappan
7d3b4b5442 New size utils functions
Add hypertable_detailed_size, chunk_detailed_size, and
hypertable_size functions.
Remove hypertable_relation_size,
hypertable_relation_size_pretty, and indexes_relation_size_pretty.
Remove size information from the hypertables view.
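
For example (hypertable name hypothetical):

  SELECT * FROM hypertable_detailed_size('conditions');
  SELECT hypertable_size('conditions');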
2020-07-29 15:30:39 -04:00
Sven Klemm
3e83577916 Refactor reorder policy
This patch changes the reorder policy to store its configuration
in the bgw_job table and removes the bgw_policy_reorder table.
2020-07-29 12:07:13 +02:00
Sven Klemm
fa223ae701 Use macros to define crossmodule wrapper functions
This patch adds a helper macro to define cross module
wrapper functions to reduce code repetition. It also
changes the crossmodule struct names to match the function
names where this wasn't the case already.
2020-07-07 16:55:35 +02:00
Dmitry Simonenko
9ec944a4dc Refactor process utility functions result type
Introduce and use an explicit enum type for the return value of
process utility functions instead of using a bool.
2020-06-25 14:40:25 +03:00
Mats Kindahl
a089843ffd Make table mandatory for drop_chunks
The `drop_chunks` function is refactored to make the table name
mandatory. As a result, the function was also refactored to accept the
`regclass` type instead of table name plus schema name, and the
parameters were reordered to match the order for `show_chunks`.

The commit also refactors the code to pass the hypertable structure
between internal functions rather than the hypertable relid, and moves
error checks into the PostgreSQL function. This allows the internal
functions to avoid some lookups and use the information in the
structure directly, and also raises errors earlier instead of first
dropping chunks and then erroring and rolling back the transaction.
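
With the new signature, the relation is mandatory and comes first
(names illustrative):

  SELECT drop_chunks('conditions', older_than => INTERVAL '3 months');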
2020-06-17 06:56:50 +02:00
Erik Nordström
31d5254c2e Add internal function to show connection cache
The connection cache for remote transactions can now be examined using
a function that shows all connections in the cache. This allows easier
debugging and validation both in tests and on live systems.

In particular, we'd like to know that connections are in good state
post commit or rollback and that we don't leave bad connections in the
cache.

The remote transaction test (`remote_txn`) has been updated to show
the connection cache as remote transactions are
executed. Unfortunately, the whole cache is replaced on every
(sub-)transaction rollback, which makes it hard to debug the
connection state of a particular remote transaction. Further, some
connections are left in the cache in a bad state after, e.g.,
connection loss.

These issues will be fixed with an upcoming change.
2020-06-13 12:05:41 +02:00
Mats Kindahl
92b6c03e43 Remove cascade option from drop_chunks
This commit removes the `cascade` option from the functions
`drop_chunks` and `add_drop_chunks_policy`, which will now never cascade
drops to dependent objects.  The tests are fixed accordingly and
verbosity turned up to ensure that the dependent objects are printed in
the error details.
2020-06-02 16:08:51 +02:00
Ruslan Fomkin
c44a202576 Implement altering replication factor
Implements the SQL function set_replication_factor, which changes the
replication factor of a distributed hypertable. Changing the
replication factor doesn't affect existing chunks. Newly created
chunks are replicated according to the new replication factor.
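
For example (hypertable name illustrative):

  SELECT set_replication_factor('conditions', 2);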
2020-05-27 17:31:09 +02:00
Brian Rowe
fad33fe954 Collect column stats for distributed tables.
This change adds a new command to return a subset of the column
stats for a hypertable (column width, percent null, and percent
distinct).  As part of the execution of this command on an access
node, these stats will be collected for distributed chunks and
updated on the access node.
2020-05-27 17:31:09 +02:00
Erik Nordström
5933c1785e Avoid attaching data nodes without permissions
This change will make `create_distributed_hypertable` attach only
those data nodes that the creating user has USAGE permission on
instead of trying to attach all data nodes (even when permissions do
not allow it). This prevents an ERROR when a user tries to create a
distributed hypertable and lacks USAGE on one or more data nodes. This
new behavior *only* applies when no explicit set of data nodes is
specified. When an explicit set is specified the behavior remains
unchanged, i.e., USAGE is required on all the explicitly specified
data nodes.

Note that, with distributed hypertables, USAGE permission on a data
node (foreign server) only governs the ability to attach it to a
hypertable. This is analogous to the regular behavior of foreign
servers where USAGE governs who can create foreign tables using the
server. The actual usage of the server once associated with a table is
not affected as that would break the table if permissions are revoked.

The new behavior introduced with this change makes it simpler to use
`create_distributed_hypertable` in a multi-user and multi-permission
environment where users have different permissions on data nodes
(DNs). For instance, imagine user A being allowed to attach DN1 and
DN2, while user B can attach DN2 and DN3. Without this change,
`create_distributed_hypertable` will always fail since it tries to
attach DN1, DN2, and DN3 irrespective of the user that calls the
function. Even worse, if only DN1 and DN2 existed initially, user A
would be able to create distributed hypertables without errors, but,
as soon as DN3 is added, `create_distributed_hypertable` would start
to fail for user A.

The only way to avoid the above described errors when creating
distributed hypertables is for users to pass an explicit set of data
nodes that only includes the data nodes they have USAGE on. If a
user is forced to do that, the result would in any case be the same as
that introduced with this change. Unfortunately, users currently have
no way of knowing which data nodes they have USAGE on unless they
query the PostgreSQL catalogs. In many cases, a user might not even
care about the data nodes they are using if they aren't DBAs
themselves.

To summarize the new behavior:

* If public (everyone) has USAGE on all data nodes, there is no
  change in behavior.
* If a user has USAGE on a subset of data nodes, it will by default
  attach only those data nodes since it cannot attach the other ones
  anyway.
* If the user specifies an explicit set of data nodes to attach, all
  of those nodes still require USAGE permissions. (Behavior
  unchanged.)
2020-05-27 17:31:09 +02:00
Dmitry Simonenko
296d134a1e Add telemetry to track distributed databases
This change extends the telemetry report and adds a new
'distributed_db' section, which includes the following keys:
'distributed_member', 'data_nodes_count' and
'distributed_hypertables_count'.
2020-05-27 17:31:09 +02:00
Erik Nordström
6a9db8a621 Add function to fetch remote chunk relation stats
A new function, `get_chunk_relstats()`, allows fetching relstats
(basically `pg_class.{relpages,reltuples}`) from remote chunks on data
nodes and writing it to the `pg_class` entry for the corresponding
local chunk. The function expects either a chunk or a hypertable as
input and returns the relstats for the given chunk or all chunks for
the given hypertable, respectively.

Importing relstats as described is useful as part of a distributed
ANALYZE/VACUUM that won't require fetching all data into the access
node for local sampling (like the current implementation does).

In a future change, this function will be called as part of a local
ANALYZE on the access node that runs ANALYZE on all data nodes
followed by importing of the resulting relstats for the analyzed
chunks.
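
A hedged sketch (assuming the function is exposed in the
`_timescaledb_internal` schema; hypertable name illustrative):

  SELECT * FROM _timescaledb_internal.get_chunk_relstats('conditions');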
2020-05-27 17:31:09 +02:00
Mats Kindahl
c2366ece59 Don't clear dist_uuid in delete_data_node
When deleting a data node, we currently clear the `dist_uuid` in the
database on the data node, which requires being able to connect to the
data node and would also mean that it is possible to re-add the data
node to a new cluster without checking that it is in a consistent
state.

This commit removes the code that clears the `dist_uuid`, so there is
no need to connect to the data node. All tests are updated to reflect
the fact that no connection will be made to the data node and that the
`dist_uuid` is not cleared.
2020-05-27 17:31:09 +02:00
Ruslan Fomkin
5cdef03880 Fix bug in allow or block new chunks
Process the arguments of the data node allow/block new chunks SQL API
functions separately, since the number of optional arguments differs
between the allow and block functions. This fixes a memory access bug.
2020-05-27 17:31:09 +02:00
Erik Nordström
7f3bc09eb6 Generalize deparsing of remote function calls
Certain functions invoked on an access node need to be passed on to
data nodes to ensure any mutations happen also on those
nodes. Examples of such functions are `drop_chunks`, `add_dimension`,
`set_chunk_time_interval`, etc. So far, the approach has been to
deparse these "manually" on a case-by-case basis.

This change implements a generalized deparsing function that deparses
a function call based on its call info (`FunctionCallInfo`), which
holds enough information about the invoked function to deparse the
call.

The `drop_chunks` function has been updated to use this generalized
deparsing functionality when it is invoking remote nodes.
2020-05-27 17:31:09 +02:00
Dmitry Simonenko
c8563b2d46 Add distributed_exec() function
This function allows users to execute a SQL query on a list of data
nodes. The purpose is to provide users a way to, e.g., create roles on
data nodes.

The current implementation is quite straightforward. Just execute any
provided query on a list of data nodes. The query will execute with
the current user role. The function does not return or print any
result values. In case of error, it will print the data node name and
a related error message.
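
A sketch of usage (role name illustrative; the exact invocation form
may vary across versions):

  SELECT distributed_exec($$CREATE ROLE analyst LOGIN$$);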
2020-05-27 17:31:09 +02:00
Brian Rowe
a50db32c18 Check data node for valid postgres version
This change will check that the PostgreSQL version of the data node is
11 or greater during the add_data_node call. It will also now print a
more meaningful error message if the data node validation fails.
2020-05-27 17:31:09 +02:00
niksa
d97c32d0c7 Support distributed drop_chunks
Running `drop_chunks` on a distributed hypertable should remove chunks
from all its data nodes. To make this work, we send the same SQL
command to all involved data nodes.
2020-05-27 17:31:09 +02:00
Brian Rowe
31953f0dc6 Verify configuration before adding data node
This change will call a function on a remote database to validate
its configuration before following through with an add_data_node
call. Right now the check will ensure that the database is able to use
prepared transactions, but further checks can be easily added in
the future.

Since this uses the timescaledb extension to validate the remote
database, it runs at the end of bootstrapping.  We may want to
consider adding code to undo our bootstrapping changes if this check
fails.
2020-05-27 17:31:09 +02:00