447 Commits

Ruslan Fomkin
effdc478ae Check replication factor for exceeding data nodes
set_replication_factor will check whether the replication factor is larger than the number of
attached data nodes. It returns an error in that case.
2020-05-27 17:31:09 +02:00
Ruslan Fomkin
c44a202576 Implement altering replication factor
Implements the SQL function set_replication_factor, which changes the
replication factor of a distributed hypertable. The change of the
replication factor doesn't affect existing chunks; newly created
chunks are replicated according to the new replication factor.
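
A usage sketch, assuming a distributed hypertable named `conditions`:

`SELECT set_replication_factor('conditions', 2);`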
2020-05-27 17:31:09 +02:00
Brian Rowe
d49e9a5739 Add repartition option on detach/delete_data_node
This change adds a new parameter to the detach_data_node and
delete_data_node functions that allows the user to automatically
shrink the hypertable's space dimension to match the number of nodes.
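
For illustration, assuming the new parameter is named `repartition`
(the message doesn't name it):

`SELECT delete_data_node('node_2', repartition => true);`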
2020-05-27 17:31:09 +02:00
Brian Rowe
fad33fe954 Collect column stats for distributed tables.
This change adds a new command to return a subset of the column
stats for a hypertable (column width, percent null, and percent
distinct).  As part of the execution of this command on an access
node, these stats will be collected for distributed chunks and
updated on the access node.
2020-05-27 17:31:09 +02:00
Mats Kindahl
222bf75910 Use template1 as secondary connection database
The `postgres` database might not exist on a data node, but
`template1` will always exist, so if a connection using `postgres`
fails, we use `template1` as a secondary database.

This is similar to how `connectMaintenanceDatabase` in the PostgreSQL
code base works.
2020-05-27 17:31:09 +02:00
Erik Nordström
6a9db8a621 Add function to fetch remote chunk relation stats
A new function, `get_chunk_relstats()`, allows fetching relstats
(basically `pg_class.{relpages,reltuples}`) from remote chunks on data
nodes and writing them to the `pg_class` entry for the corresponding
local chunk. The function expects either a chunk or a hypertable as
input and returns the relstats for the given chunk or all chunks for
the given hypertable, respectively.

Importing relstats as described is useful as part of a distributed
ANALYZE/VACUUM that won't require fetching all data into the access
node for local sampling (like the current implementation does).

In a future change, this function will be called as part of a local
ANALYZE on the access node that runs ANALYZE on all data nodes
followed by importing of the resulting relstats for the analyzed
chunks.
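
A usage sketch; the schema qualification is an assumption, since the
message names only the function:

`SELECT * FROM _timescaledb_internal.get_chunk_relstats('my_hypertable');`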
2020-05-27 17:31:09 +02:00
Mats Kindahl
c2366ece59 Don't clear dist_uuid in delete_data_node
When deleting a data node, the current code clears the `dist_uuid` in
the database on the data node, which requires being able to connect to
the data node and would also mean that it is possible to re-add the
data node to a new cluster without checking that it is in a consistent
state.

This commit removes the code that clears the `dist_uuid` and hence does
not need to connect to the data node. All tests are updated to reflect
the fact that no connection will be made to the data node and that the
`dist_uuid` is not cleared.
2020-05-27 17:31:09 +02:00
niksa
94979412ef Fix chunks_in function declaration
We need to mark this function as stable and parallel safe
so the planner can pick the optimal plan.
2020-05-27 17:31:09 +02:00
Mats Kindahl
0d71f952f8 Add bootstrap option to add_data_node
When the access node executes `add_data_node`, bootstrapping the data
node is done by:

1. Optionally creating the database on the remote server.
2. Creating a schema for the TimescaleDB extension objects.
3. Creating the TimescaleDB extension in the database.

After bootstrapping, the `dist_uuid` of the data node and access node
is set to the `uuid` of the access node.

If `bootstrap` is `true`, bootstrapping of the data node is done.

If `bootstrap` is `false`, bootstrapping is not done, but the procedure
attempts to connect to the database and verify that the TimescaleDB
extension is loaded and that the `dist_uuid` is clear. If it is not
possible to connect to the database, or if `dist_uuid` is set,
`add_data_node` will fail.
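
A sketch of both modes (host names are illustrative):

    -- Bootstrap the data node: create the database, schema, and extension.
    SELECT * FROM add_data_node('dn1', host => 'dn1.example.com', bootstrap => true);

    -- Data node already set up: verify the extension and a clear dist_uuid.
    SELECT * FROM add_data_node('dn2', host => 'dn2.example.com', bootstrap => false);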
2020-05-27 17:31:09 +02:00
Erik Nordström
7f3bc09eb6 Generalize deparsing of remote function calls
Certain functions invoked on an access node need to be passed on to
data nodes to ensure any mutations happen also on those
nodes. Examples of such functions are `drop_chunks`, `add_dimension`,
`set_chunk_time_interval`, etc. So far, the approach has been to
deparse these "manually" on a case-by-case basis.

This change implements a generalized deparsing function that deparses
a function call based on its call info (`FunctionCallInfo`), which
holds all the information about the invoked function needed to
deparse the call.

The `drop_chunks` function has been updated to use this generalized
deparsing functionality when it is invoking remote nodes.
2020-05-27 17:31:09 +02:00
Mats Kindahl
8145d75c3f Remove bootstrap_user from add_data_node
This commit changes `add_data_node` so that the same user is used both
on the access node and the data nodes, which means that the
`bootstrap_user` parameter is removed.

Since most tests assume that you can pass a separate user with
superuser privileges to `add_data_node`, this affected a lot of tests.
2020-05-27 17:31:09 +02:00
Mats Kindahl
6e9f644714 Require host parameter in add_data_node
Change `add_data_node` so that the host parameter is required. If the
host parameter is not provided, or is `NULL`, an error will be raised.

Also change logic for how the default value for `port` is picked. Now
it will by default use the port given in the configuration file.

The commit updates all the result files, adds the `host` parameter to
all calls of `add_data_node`, and adds a few tests to check that an
error is raised when `host` is not provided.
2020-05-27 17:31:09 +02:00
Mats Kindahl
33923548c7 Remove cascade option from delete_data_node
The `cascade` option was added earlier since it was necessary to allow
cascading the delete of user mappings when removing the server objects.
Since the user mappings are removed from the code, the `cascade` option
is not needed any more.

This commit removes the option and fixes all the tests.
2020-05-27 17:31:09 +02:00
Mats Kindahl
77776faf20 Fix port usage for add_data_node()
For a statement that only specifies the database, we expect the data
node to be created on the same Postgres instance as the one where the
statement is executed.

    SELECT * FROM add_data_node('data1', database => 'base1');

However, if the port for the server is changed in the configuration
file to not use the default port, the command will try to connect to
the wrong Postgres server, namely the one listening on port 5432.

This commit fixes this by letting `host` and `port` parameters be NULL
by default and use the following logic to decide what port should be
used.

- If a port is explicitly provided, use that.

- If a port is not provided but a host is provided, it is assumed that
  the intention is to connect to a default-installed Postgres server on
  a different address, so use the default Postgres port (5432).

- If neither port nor host is provided, it is assumed that the intention
  is to connect to the same server as where the command is executed, so
  use the port that was written in the configuration file.

The default host to use is still 'localhost', but it is not written
explicitly in the function definition in `ddl_api.sql`.
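
A sketch of the three cases above (addresses and ports are illustrative):

    -- No host or port: connect locally using the configured port.
    SELECT * FROM add_data_node('data1', database => 'base1');

    -- Host but no port: assume a default-installed Postgres on 5432.
    SELECT * FROM add_data_node('data2', host => '192.0.2.10', database => 'base1');

    -- Explicit port: use it as given.
    SELECT * FROM add_data_node('data3', host => '192.0.2.10', port => 6543, database => 'base1');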

The commit also fixes one warning where an uninitialized variable could
be used.
2020-05-27 17:31:09 +02:00
Dmitry Simonenko
c8563b2d46 Add distributed_exec() function
This function allows users to execute a SQL query on a list of data
nodes. The purpose is to provide users a way to, e.g., create roles on
data nodes.

The current implementation is quite straightforward. Just execute any
provided query on a list of data nodes. The query will execute with
the current user role. The function does not return or print any
result values. In case of error, it will print the data node name and
a related error message.
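
A sketch of the role-creation use case; the `node_list` parameter name
is an assumption:

`SELECT distributed_exec('CREATE ROLE dist_user LOGIN', node_list => '{data1, data2}');`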
2020-05-27 17:31:09 +02:00
Brian Rowe
a50db32c18 Check data node for valid postgres version
This change will check if the postgres version of the data node is 11
or greater during the add_data_node call.  It will also now print a
more meaningful error message if the data node validation fails.
2020-05-27 17:31:09 +02:00
Dmitry Simonenko
79f6223631 Replace UserMappings with a connection ID
This change replaces UserMappings with the newly introduced
TSConnectionId object, which represents a pair of a foreign server id
and a local user id.

Authentication has been moved to non-password based, since the original
UserMappings were also used to store data node user passwords. This is
a temporary step, until the introduction of certificate based
authentication.

List of changes:

* add_data_node() password and bootstrap_password arguments removed

* introduced authentication using a pgpass file

* the RemoteTxn format string representing a transaction changed to
  tx-version-xid-server_id-user_id

* data_node_dispatch, remote transaction cache, and connection cache
  hash table keys switched to TSConnectionId instead of user mappings

* remote_connection_open() has been reworked to exclude user options

* Tests updated; user mapping and password usage has been removed
2020-05-27 17:31:09 +02:00
Brian Rowe
31953f0dc6 Verify configuration before adding data node
This change will call a function on a remote database to validate
its configuration before following through with an add_data_node
call.  Right now the check will ensure that the data node is able to use
prepared transactions, but further checks can be easily added in
the future.

Since this uses the timescaledb extension to validate the remote
database, it runs at the end of bootstrapping.  We may want to
consider adding code to undo our bootstrapping changes if this check
fails.
2020-05-27 17:31:09 +02:00
Brian Rowe
3d3824dbc1 Fix some issues with num_dist_tables
This change fixes a couple of issues with the num_dist_tables column of
the timescaledb_information.data_node view.  The first fix will
allow the column to correctly report 0 when no tables are yet created
(it currently will count a NULL table as 1 in this case).  The second
fix addresses a bug in the dist_util_remote_hypertable_info function
which was causing the code to only see the first hypertable returned.
This second bug will also cause incorrect results for many of our usage
reporting views and utilities when there are multiple distributed
hypertables.
2020-05-27 17:31:09 +02:00
Mats Kindahl
ac3f0bcb92 Change order of parameters in attach_data_node
All data node functions except `attach_data_node` take the node name as
the first parameter. This commit changes the order of the two first
parameters to `attach_data_node` so that the node name is the first
parameter and the hypertable is the second parameter.
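
After this change, calls take the form:

`SELECT * FROM attach_data_node('node_1', 'my_hypertable');`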
2020-05-27 17:31:09 +02:00
Erik Nordström
5309cd6c5f Repartition hypertables when attaching data node
Distributed hypertables are now repartitioned when attaching new data
nodes and the current number of partitions (slices) in the first closed
(space) dimension is less than the number of data nodes. Increasing
the number of partitions is necessary to make use of a newly attached
data node. However, repartitioning is optional and can be avoided via
a boolean parameter in `attach_server()`.

In addition to the above repartitioning, this change also adds
informational messages to `create_hypertable` and
`set_number_partitions` to raise awareness of situations when the
number of partitions in the space dimensions is lower than the number
of attached data nodes.
2020-05-27 17:31:09 +02:00
Erik Nordström
9108ddad15 Fix corner cases when detaching data nodes
This change fixes the following:

* Refactor the code for setting the default data node for a chunk. The
  `set_chunk_default_data_node()` API function now takes a
  `regclass`/`oid` instead of separate schema + table names and
  returns `true` when a new data node is set and `false` if called
  with a data node that is already the default. Like before,
  exceptions are thrown on errors. It also does proper permissions
  checks. The related code has been moved from `data_node.c` to
  `chunk.c` since this is an operation on a chunk, and the code now
  also lives in the `tsl` directory since this is non-trivial logic
  that should fall under the TSL license.
* When setting the default data node on a chunk (failing over to
  another data node), it is now verified that the new data node
  actually has a replica of the chunk and that the corresponding
  foreign server belongs to the "right" foreign data wrapper.
* Error messages and permissions handling have been tweaked.
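
A usage sketch with an illustrative chunk name; the argument order is
an assumption:

`SELECT set_chunk_default_data_node('_timescaledb_internal._hyper_1_1_chunk', 'node_2');`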
2020-05-27 17:31:09 +02:00
Erik Nordström
b07461ec00 Refactor and harden data node management
This change refactors and hardens parts of data node management
functionality.

* A number of permissions checks have been added to data node
  management functions. This includes checking that the user has
  proper permissions for both table and server objects.
* Permissions checks are now done when creating remote chunks on data
  nodes.
* The add_data_node() API function has been simplified and now returns
  more intuitive status about created objects (foreign server,
  database, extension). It is no longer necessary to specify a user to
  connect with as this is always assumed to be the current user. The
  bootstrap user can still be specified explicitly, however, as that
  user might require elevated permissions on the remote node to
  bootstrap.
* Functions that capture exceptions without re-throwing, such as
  `ping_data_node()` and `get_user_mapping()`, have been refactored to
  not do this as the transaction state and memory contexts are not in
  states where it is safe to proceed as normal.
* Data node management functions now consistently check that any
  foreign servers operated on are actually TimescaleDB server objects.
* Tests now run with a superuser and a regular user specific to
  clustering. These users have password auth enabled in `pg_hba.conf`,
  which is required by the connection library when connecting as a
  non-superuser. Tests have been refactored to bootstrap data nodes
  using these user roles.
2020-05-27 17:31:09 +02:00
Brian Rowe
79fb46456f Rename server to data node
The timescale clustering code so far has been written referring to the
remote databases as 'servers'.  This terminology is a bit overloaded,
and in particular we don't enforce any network topology limitations
that the term 'server' would suggest.  In light of this we've decided
to change to use the term 'node' when referring to the different
databases in a distributed database.  Specifically we refer to the
frontend as an 'access node' and to the backends as 'data nodes',
though we may omit the access or data qualifier where it's unambiguous.

As the vast bulk of the code so far has been written for the case where
there was a single access node, almost all instances of 'server' were
references to data nodes.  This change has updated the code to rename
those instances.
2020-05-27 17:31:09 +02:00
Brian Rowe
dd3847a7e0 Rename files in preparation for large refactor
This change includes only the file renames required by the renaming
of server to data node across the clustering codebase.  This change
is being committed separately from the bulk of the rename changes to
prevent git from losing the file history of renamed files (merging the
rename with extensive code modifications resulted in git treating some
of the file moves as a file delete and new file creation).
2020-05-27 17:31:09 +02:00
Brian Rowe
e110a42a2b Add space usage utilities to distributed database
This change adds a new utility function for postgres,
`server_hypertable_info`.  This function will contact a provided node
and pull down the space information for all the distributed hypertables
on that node.

Additionally, a new view `distributed_server_info` has been added to
timescaledb_information.  This view leverages the new
remote_hypertable_data function to display a list of nodes, along with
counts of tables, chunks, and total bytes used by distributed data.

Finally, this change also adds a `hypertable_server_relation_size`
function, which, given the name of a distributed hypertable, will print
the space information for that hypertable on each node of the
distributed database.
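
Usage sketches for the view and function named above (hypertable name
illustrative):

    SELECT * FROM timescaledb_information.distributed_server_info;
    SELECT * FROM hypertable_server_relation_size('my_hypertable');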
2020-05-27 17:31:09 +02:00
niksa
0da34e840e Fix server detach/delete corner cases
Prevent server delete if the server contains data, unless the user
specifies `force => true`. In case the server is the only data
replica, we don't allow delete/detach unless the table/chunks are
dropped. The idea is to have the same semantics for delete as for
detach, since delete actually calls detach.

We also try to update pg_foreign_table when we delete server if there
is another server containing the same chunk.

An internal function is added to enable updating foreign table server
which might be useful in some cases since foreign table server is
considered a default server for that particular chunk.

Since this command needs to work even if the server we're trying to
remove is non-responsive, we're not removing any data on the remote
data node.
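
For example, to remove a server that still holds data:

`SELECT * FROM delete_server('server_1', force => true);`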
2020-05-27 17:31:09 +02:00
niksa
2fd99c6f4b Block new chunks on data nodes
This functionality enables users to block or allow creation of new
chunks on a data node for one or more hypertables. Use cases for this
include the ability to block new chunks when a data node is running
low on disk space or to affect chunk distribution across data nodes.

Sometimes blocking data nodes for new chunks can make a hypertable
under-replicated. For that case an additional argument `force => true`
can be supplied to force blocking new chunks.

Here are some examples.

Block for a specific hypertable:
`SELECT * FROM block_new_chunks_on_server('server_1', 'disttable');`

Block for all hypertables on the server:
`SELECT * FROM block_new_chunks_on_server('server_1', force => true);`

Unblock:
`SELECT * FROM allow_new_chunks_on_server('server_1', true);`

This change adds the `force` argument to `detach_server` as well.  If
detaching or blocking new chunks will make a hypertable
under-replicated, then `force => true` needs to be used.
2020-05-27 17:31:09 +02:00
niksa
d8d13d9475 Allow detaching servers from hypertables
A server can now be detached from one or more distributed hypertables
so that it is no longer in use. We only allow detaching a server if
there is no data on the server and detaching it doesn't risk making a
hypertable under-replicated.

A user can detach a server for a specific hypertable, or for all
hypertables to which the server is attached.

`SELECT * FROM detach_server('server1', 'my_hypertable');`
`SELECT * FROM detach_server('server2');`
2020-05-27 17:31:09 +02:00
Dmitry Simonenko
96727fa5c4 Add support for distributed peer ID
This change makes it possible for a data node to distinguish between
regular client connections and distributed database connections (from
the access node).

This functionality will be needed for decision making based on the
connection type, for example allowing or blocking DDL commands on a
data node.
2020-05-27 17:31:09 +02:00
Brian Rowe
59e3d7f1bd Add create_distributed_hypertable command
This change adds a variant of the create_hypertable command that will
ensure the created table is distributed.
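
A usage sketch, assuming the same argument style as `create_hypertable`
(table, time column, space column):

`SELECT create_distributed_hypertable('conditions', 'time', 'location');`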
2020-05-27 17:31:09 +02:00
niksa
6f3848e744 Add function to check server liveness
Try connecting to a server and running `SELECT 1`. It returns true
on success and false on failure. There can be many reasons to fail:
no valid UserMapping, the server is down, or running `SELECT 1` failed.
More information about the failure is written to the server log.

`timescaledb_information.server` view is updated to show server status.
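
The updated view can be queried directly:

`SELECT * FROM timescaledb_information.server;`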
2020-05-27 17:31:09 +02:00
Brian Rowe
5c643e0ac4 Add distributed group id and enforce topology
This change adds a distributed database id to the installation data for a
database.  It also provides a number of utilities that can be used for
getting/setting/clearing this value or using it to determine if a database is
a frontend, backend, or not a member of a distributed database.

This change also includes modifications to the add_server and delete_server
functions to check the distributed id to ensure the operation is allowed, and
then update or clear it appropriately.  After this change it will no longer
be possible to add a database as a backend to multiple frontend databases, nor
will it be possible to add a frontend database as a backend to any other
database.
2020-05-27 17:31:09 +02:00
Dmitry Simonenko
11aab55094 Add support for basic distributed DDL
This is a straightforward implementation that allows executing a
limited set of DDL commands on a distributed hypertable.
2020-05-27 17:31:09 +02:00
Brian Rowe
b1c6172d0a Add attach_server function
This adds an attach_server function which is used to associate a
server with an existing hypertable.
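
A usage sketch; the hypertable-first argument order is inferred from
the later commit in this log that swaps the first two parameters of
its successor, `attach_data_node`:

`SELECT * FROM attach_server('my_hypertable', 'server_1');`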
2020-05-27 17:31:09 +02:00
Dmitry Simonenko
d8982c3e15 Add add_server() support for remote server bootstrapping
This patch adds functionality for automatic database and extension
creation on the remote server. New function arguments:
`bootstrap_database`, `bootstrap_user`, and `bootstrap_password`.
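
A sketch using the new arguments (values illustrative):

`SELECT * FROM add_server('server_1', bootstrap_database => 'postgres', bootstrap_user => 'postgres', bootstrap_password => 'secret');`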
2020-05-27 17:31:09 +02:00
Matvey Arye
e7ba327f4c Add resolve and heal infrastructure for 2PC
This commit adds the ability to resolve whether or not 2PC
transactions have been committed or aborted and also adds a heal
function to resolve transactions that have been prepared but not
committed or rolled back.

This commit also removes the server id from the primary key on the
remote_txn table and adds another index. This was done because
`remote_txn_persistent_record_exists` should not rely on the server
being contacted but should rather just check for the existence of the
id. This makes the resolution safe in setups where two frontend server
definitions point to the same database. While this may not be a
properly configured setup, it's better if the resolution process is
robust to this case.
2020-05-27 17:31:09 +02:00
Matvey Arye
0e109d209d Add tables for saving 2pc persistent records
The remote_txn table records commit decisions for 2pc transactions.
A successful 2pc transaction will have one row per remote connection
recorded in this table. In effect it is a mapping between the
distributed transaction and an identifier for each remote connection.

The records are needed to protect against crashes after a
frontend sends a `COMMIT TRANSACTION` to one node
but not all nodes involved in the transaction. Towards this end,
the commit of remote_txn rows represents a crash-safe, irrevocable
promise that all participating datanodes will eventually get a `COMMIT
TRANSACTION`, and it occurs before any datanodes get a `COMMIT
TRANSACTION`.

The irrevocable nature of the commit of these records means that this
can only happen after the system is sure all participating transactions
will succeed. Thus it can only happen after all datanodes have succeeded
on a `PREPARE TRANSACTION` and will happen as part of the frontend's
transaction commit.
2020-05-27 17:31:09 +02:00
Erik Nordström
e2371558f7 Create chunks on remote servers
This change ensures that chunk replicas are created on remote
(datanode) servers whenever a chunk is created in a local distributed
hypertable.

Remote chunks are created using the `create_chunk()` function, which
has been slightly refactored to allow specifying an explicit chunk
table name. The node making the remote call also records the resulting
remote chunk IDs in its `chunk_server` mappings table.

Since remote command invocation without super-user permissions
requires password authentication, the test configuration files have
been updated to require password authentication for a cluster test
user that is used in tests.
2020-05-27 17:31:09 +02:00
Erik Nordström
125f793307 Add password parameter to add_server()
Establishing a remote connection requires a password, unless the
connection is made as a superuser. Therefore, this change adds the
option to specify a password in the `add_server()` command.  This is a
required parameter unless called as a superuser.
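
For example, when called as a non-superuser (password value
illustrative):

`SELECT * FROM add_server('server_1', password => 'secret');`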
2020-05-27 17:31:09 +02:00
Matvey Arye
3779af400d Change license header to new format in SQL files
The license header for SQL test files has been updated, but some tests
haven't had this new header applied. This change makes sure the new
header is applied to all test files.
2020-05-27 17:31:09 +02:00
Matvey Arye
d2b4b6e22e Add remote transaction ID module
The remote transaction ID is used in two phase commit. It is the
identifier sent to the datanodes in PREPARE TRANSACTION and related
postgresql commands.

This is the first in a series of commits for adding two phase
commit support to our distributed txn infrastructure.
2020-05-27 17:31:09 +02:00
Erik Nordström
596be8cda1 Add mappings table for remote chunks
A frontend node will now maintain mappings from a local chunk to the
corresponding remote chunks in a `chunk_server` table.

The frontend creates local chunks as foreign tables and adds entries
to `chunk_server` for each chunk it creates on a remote data node.

Currently, the creation of remote chunks is not implemented, so a
dummy chunk_id for the remote chunk will be added instead for testing
purposes.
2020-05-27 17:31:09 +02:00
Erik Nordström
ece582d458 Add mappings table for remote hypertables
In a multi-node (clustering) setup, TimescaleDB needs to track which
remote servers have data for a particular distributed hypertable. It
also needs to know which servers to place new chunks on and to use in
queries against a distributed hypertable.

A new metadata table, `hypertable_server`, is added to map a local
hypertable ID to a hypertable ID on a remote server. We require that
the remote hypertable has the same schema and name as the local
hypertable.

When a local server is removed (using `DROP SERVER` or our
`delete_server()`), all remote hypertable mappings for that server
should also be removed.
2020-05-27 17:31:09 +02:00
Erik Nordström
ae587c9964 Add API function for explicit chunk creation
This adds an internal API function to create a chunk using explicit
constraints (dimension slices). A function to export a chunk in a
format consistent with the chunk creation function is also added.

The chunk export/create functions are needed for distributed
hypertables so that an access node can create chunks on data nodes
according to its own (global) partitioning configuration.
2020-05-27 17:31:09 +02:00
niksa
538e27d140 Add Noop Foreign Data Wrapper
This adds a skeleton TimescaleDB foreign data wrapper (FDW) for
scale-out clustering. It currently works as a noop FDW that can be
used for testing, although the intention is to develop it into a
full-blown implementation.
2020-05-27 17:31:09 +02:00
Erik Nordström
eca7cc337a Add server management API and functionality
Servers for a scale-out clustering setup can now be added and deleted
with `add_server()` and `delete_server()`, providing a convenience API
for server management.

While similar functionality can be achieved using the standard
PostgreSQL `CREATE SERVER` and `CREATE USER MAPPING` commands, this
new API makes it easier to add clustering servers and user mappings
consistent with the needs of TimescaleDB's particular clustering setup.

The API currently works with the `postgres_fdw` foreign data
wrapper. It will be updated to use our own foreign data wrapper once
it is available.
2020-05-27 17:31:09 +02:00
Stephen Polcyn
b57d2ac388 Cleanup TODOs and FIXMEs
Unless otherwise listed, the TODO was converted to a comment or put
into an issue tracker.

test/sql/
- triggers.sql: Made required change

tsl/test/
- CMakeLists.txt: TODO complete
- bgw_policy.sql: TODO complete
- continuous_aggs_materialize.sql: TODO complete
- compression.sql: TODO complete
- compression_algos.sql: TODO complete

tsl/src/
- compression/compression.c:
  - row_compressor_decompress_row: Expected complete
- compression/dictionary.c: FIXME complete
- materialize.c: TODO complete
- reorder.c: TODO complete
- simple8b_rle.h:
  - compressor_finish: Removed (obsolete)

src/
- extension.c: Removed due to age
- adts/simplehash.h: TODOs are from copied Postgres code
- adts/vec.h: TODO is non-significant
- planner.c: Removed
- process_utility.c
  - process_altertable_end_subcmd: Removed (PG will handle case)
2020-05-18 20:16:03 -04:00
Sven Klemm
0ea509cc48 Release 1.7.1
This maintenance release contains bugfixes since the 1.7.0 release. We deem it medium
priority for upgrading and high priority for users with multiple continuous aggregates.

In particular the fixes contained in this maintenance release address bugs in continuous
aggregates with real-time aggregation and PostgreSQL 12 support.

**Bugfixes**
* #1834 Define strerror() for Windows
* #1846 Fix segfault on COPY to hypertable
* #1850 Fix scheduler failure due to bad next_start_time for jobs
* #1851 Fix hypertable expansion for UNION ALL
* #1854 Fix reorder policy job to skip compressed chunks
* #1861 Fix qual pushdown for compressed hypertables where quals have casts
* #1864 Fix issue with subplan selection in parallel ChunkAppend
* #1868 Add support for WHERE, HAVING clauses with real time aggregates
* #1869 Fix real time aggregate support for multiple continuous aggregates
* #1871 Don't rely on timescaledb.restoring for upgrade
* #1875 Fix hypertable detection in subqueries
* #1884 Fix crash on SELECT WHERE NOT with empty table

**Thanks**
* @airton-neto for reporting an issue with queries over UNIONs of hypertables
* @dhodyn for reporting an issue with UNION ALL queries
* @frostwind for reporting an issue with casts in where clauses on compressed hypertables
* @fvannee for reporting an issue with hypertable detection in inlined SQL functions and an issue with COPY
* @hgiasac for reporting missing where clause with real time aggregates
* @louisth for reporting an issue with real-time aggregation and multiple continuous aggregates
* @michael-sayapin for reporting an issue with INSERTs and WHERE NOT EXISTS
* @olernov for reporting and fixing an issue with compressed chunks in the reorder policy
* @pehlert for reporting an issue with pg_upgrade
2020-05-18 16:35:44 +02:00
gayyappan
ed64af76a5 Fix real time aggregate support for multiple aggregates
We should compute the watermark using the materialization
hypertable id and not the raw hypertable id.
New test cases added to continuous_aggs_multi.sql. Existing test
cases in continuous_aggs_multi.sql were not correctly updated
for this feature.

Fixes #1865
2020-05-15 10:15:53 -04:00