Release Docker builds are now provided in a separate repository,
https://github.com/timescale/timescaledb-docker.
Tests and development builds for Docker are now provided
by two new scripts in the scripts directory:
- `docker-build.sh` to build a development image from current sources.
- `docker-run-tests.sh` to run tests for current sources through a Docker container.
Clean up the table schema to get rid of legacy tables and functionality
that makes it more difficult to provide an upgrade path.
Notable changes:
* Get rid of legacy tables and code
* Simplify directory structure for SQL code
* Simplify table hierarchy: remove root table and make chunk tables
  inherit directly from main table
* Change chunk table suffix from _data to _chunk
* Simplify schema usage: _timescaledb_internal for internal functions,
  _timescaledb_catalog for metadata tables (see the example below).
* Remove postgres_fdw dependency
* Improve code comments in SQL code
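For example (the specific table name below is an assumption used only to
illustrate the new schema layout), metadata is now addressed through
_timescaledb_catalog:
```
-- Illustrative only: metadata tables live in the _timescaledb_catalog
-- schema, internal functions in _timescaledb_internal.
select * from _timescaledb_catalog.hypertable;
```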
This PR removes the need to run setup_timescaledb(). It also fixes
pg_dump and pg_restore. Previously, the database would restore in
a broken state because trigger functions were never attached to
meta tables (since setup_timescaledb() was not run). However, attaching
those triggers at extension creation also causes problems: the data
copy happens after extension creation, and we don't want triggers
firing on the restored data (which could, for example, cause duplicate
rows).
The solution to this chicken-and-egg problem in this PR is a special
configuration (GUC) variable, `timescaledb.restoring`, that, when set
to 'on', prevents the extension from attaching triggers at extension
creation. Then, after restoration, you call restore_timescaledb() to
attach the triggers and unset the GUC.
This procedure is documented in the README as part of this PR.
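A rough sketch of that procedure (the database name is a placeholder and
the exact steps may differ; the README has the authoritative version):
```
-- Hypothetical restore sequence; 'mydb' is a placeholder database name.
-- Tell the extension not to attach triggers while data is copied in:
ALTER DATABASE mydb SET timescaledb.restoring = 'on';

-- ... run pg_restore against mydb ...

-- After restoring, attach the triggers and clear the flag:
SELECT restore_timescaledb();
ALTER DATABASE mydb RESET timescaledb.restoring;
```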
This commit adds an example of how to run the Docker image so that
data is persisted between restarts, based on our docker-run.sh
script. It also fixes a typo in the docker-run.sh script where
the default DATA_DIR was not set correctly.
The dblink extension is blacklisted by some cloud-hosting providers and
is an unnecessary dependency for single-node operation. Since we don't plan
to use dblink to implement clustering, this PR removes the dependency.
If a user attempts to set up a database while not connecting over
the network, the port is NULL and thus fails constraint checks. Instead,
we now use the default Postgres port of 5432 when this happens.
Also, setup_db() is renamed to setup_timescaledb() for clarity in
the presence of other extensions.
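A minimal sketch of the renamed call (assuming the default,
argument-free form):
```
-- Renamed from setup_db(); when the connection does not go over the
-- network (e.g. a Unix socket), the recorded port now defaults to 5432.
select setup_timescaledb();
```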
Currently, TimescaleDB does not close chunks mid-insert, so large
batches will overfill a chunk. This commit adds a script that splits
large CSV files into smaller batches so that chunks can be closed at
reasonable sizes.
Since create_hypertable() allows you to optionally specify a
partitioning column, it makes sense to default to one partition when
no column is specified, and to ask for the number of partitions when a
column is specified and the number of partitions is not (instead of
defaulting to one).
This patch also changes the order and type of partitioning-related
input arguments to create_hypertable() so that the number of
partitions can easily be specified alongside the partitioning column
and without type casting.
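As a hypothetical illustration (the table, column names, and exact
argument order are assumptions, not the confirmed signature):
```
-- One partition by default when no partitioning column is given:
select create_hypertable('conditions', 'time');

-- With a partitioning column, the number of partitions is passed
-- alongside it (no type casting needed):
select create_hypertable('conditions', 'time', 'device_id', 4);
```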
Setting up a single node is now:
```
CREATE EXTENSION IF NOT EXISTS iobeamdb CASCADE;
select setup_single_node();
```
To set up a cluster, do (on the meta node):
```
CREATE EXTENSION IF NOT EXISTS iobeamdb CASCADE;
select set_meta();
```
and on the data node:
```
CREATE EXTENSION IF NOT EXISTS iobeamdb CASCADE;
select join_cluster('metadb', 'metahost');
```
This assumes that the commands are issued by the same user on both the
meta node and the data node. Otherwise the data node also has to
specify the user name to use when connecting to the meta node.
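A hypothetical sketch of that case (the extra argument is an assumption
about how the user name would be passed, not the confirmed signature):
```
-- Assumed form only: name the meta-node user explicitly when it differs
-- from the user issuing the command on the data node.
select join_cluster('metadb', 'metahost', 'metauser');
```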
- The directory structure now matches common practices.
- Regression tests now run with pg_regress via the PGXS infrastructure.
- Unit tests do not integrate well with pg_regress and have to be run
separately.
- Docker functionality is separate from the main Makefile. Run
  `make -f docker.mk` to build the image and `make -f docker.mk run`
  to run the database in a container.
Allowing database deployments that use the same database for both the
meta and node roles simplifies operations when iobeamdb is deployed on
a single node. It allows the database to operate without any
cross-network communication (either through dblink or postgres_fdw) and
thus offers stronger consistency guarantees.