This PR adds more regression tests for index creation and tests for more
user errors. Significantly, it checks for the presence of both the time
and space-partition columns in unique indexes. This is needed because
Timescale cannot guarantee uniqueness if colliding rows don't land in the
same chunk. Fixes #29.
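For example, a unique index on a hypertable must cover both partitioning
columns. A minimal sketch, assuming a hypertable `conditions` partitioned
on `time` and space-partitioned on `device` (both names hypothetical):
```
-- Rejected: uniqueness can only be enforced within a single chunk, and
-- rows colliding on "time" alone could land in different space partitions.
CREATE UNIQUE INDEX ON conditions (time);
-- Accepted: includes both the time and space-partition columns.
CREATE UNIQUE INDEX ON conditions (time, device);
```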
Remove all murmur3-related source code. Regression tests are altered
to reflect the new hash values for inputs and use a slightly different
set of input data to ensure that sufficient chunks and partitions
are tested. There are also some changes to the .sh scripts in sql/setup
that seem to be used only to power the "unit tests", which I cannot
yet run successfully.
The dblink extension is blacklisted by some cloud-hosting providers and
is an unnecessary dependency for single-node operation. Since we don't plan
to use dblink to implement clustering, this PR removes the dependency.
Previously, chunks could be simultaneously created for the same
partition_id with a start_time and end_time of both NULL. This
prevents such bugs by adding additional unique constraints and
locking the partition for chunk creation (as originally intended).
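A minimal sketch of the kind of constraint this adds (the actual catalog
DDL may differ): a plain UNIQUE constraint treats NULLs as distinct, so a
partial unique index is needed to cover the all-NULL time range:
```
-- Hypothetical: allow at most one open-ended (NULL, NULL) chunk per partition.
CREATE UNIQUE INDEX ON chunk (partition_id)
    WHERE start_time IS NULL AND end_time IS NULL;
```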
DROP EXTENSION didn't properly reset caches and other saved state,
causing various errors related to bad state when the extension was
dropped and/or recreated later.
This patch adds functionality to track the state of the extension and
also signals DROP EXTENSION to other backends that might be running,
allowing them to reset their internal extension state.
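A sequence like the following, which previously left other backends with
stale caches, should now behave correctly (a sketch, using the extension
name from the setup examples further down):
```
DROP EXTENSION iobeamdb CASCADE;
-- Other running backends are signaled and reset their internal state,
-- so recreating the extension no longer produces bad-state errors.
CREATE EXTENSION iobeamdb;
```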
If a user attempts to set up a database while not connecting over
the network, the port is NULL and thus fails constraint checks. Instead,
we now use the default Postgres port of 5432 when this happens.
Also, setup_db() is renamed to setup_timescaledb() for clarity in
the presence of other extensions.
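A sketch of the renamed call (assuming the extension is already installed):
```
-- Formerly setup_db(); on a non-network (e.g., local socket) connection
-- the port now defaults to 5432 instead of failing constraint checks.
SELECT setup_timescaledb();
```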
Previously, the planner used a direct query via the SPI interface to
retrieve metadata needed for query-planner functions like query
rewriting. This commit updates the planner to use our caching system
instead. This is a performance improvement for pretty much all
operations, both data modifications and queries.
For hypertables, this adds a cache keyed by the main table OID, with
negative entries (because the planner often needs to know if a
table is /not/ a hypertable).
Prior to this commit, non-superusers could not do anything inside
a database with the timescale extension loaded. Now, non-superusers
can create their own hypertables and work inside the db. There are
two big caveats:
1) All users have read/write permissions to the timescaledb
catalog.
2) Permission changes applied to the main tables are not
propagated to the associated tables.
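For illustration, a workflow like the following should now work for an
ordinary role (the role, database, and table names are hypothetical):
```
-- As a superuser:
CREATE ROLE alice LOGIN;
GRANT CREATE ON DATABASE mydb TO alice;
-- As alice, who is not a superuser:
CREATE TABLE metrics(time TIMESTAMP NOT NULL, value DOUBLE PRECISION);
SELECT create_hypertable('metrics', 'time');
INSERT INTO metrics VALUES (now(), 1.0);
```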
This patch refactors the insert path to use insert triggers
instead of a temporary copy table. The copy table previously
served as an optimization for big batches where the cost of
inserting tuple-by-tuple into chunks was amortized by inserting
all tuples for a specific chunk in one insert. However, to avoid
deadlocks the tuples also had to be inserted in a specific chunk
order, requiring an index on the copy table.
With trigger insertion, tuples are instead collected over a batch
into a sorting state, which is sorted in an "after" trigger. This
removes the overhead of the copy table and index. It also provides
a fast-path for single-tuple batches that avoids doing sorting
altogether.
This change is a performance improvement. Previously, each insert called
a PL/pgSQL function to check if there is a need to close the chunk. This
patch implements a C-only fast path for the case when the table size is
less than the configured chunk size.
Since create_hypertable() allows you to optionally specify a
partitioning column, it makes sense to default to one partition when
no column is specified, and to ask for the number of partitions when a
column is specified but the number of partitions is not (instead of
defaulting to one).
This patch also changes the order and type of partitioning-related
input arguments to create_hypertable() so that the number of
partitions can easily be specified alongside the partitioning column
and without type casting.
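The intended usage is roughly as follows (a sketch; the exact argument
names and values are assumptions):
```
-- No partitioning column given: defaults to a single partition.
SELECT create_hypertable('conditions', 'time');
-- Partitioning column given: the number of partitions is specified
-- right alongside it, with no type casting needed.
SELECT create_hypertable('conditions', 'time', 'device', 4);
```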
The chunk catalog table is now scanned with a native
scan rather than an SPI call.
The scanner module is also updated with the option
of taking locks on found tuples. In the case of chunk
scanning, chunks are typically returned with a share
lock on the tuple.
This patch refactors the code to use native heap/index scans for
finding partition epochs and partitions. It also moves the
partitioning-related code and data structures to partitioning.{c,h}.
There are two reasons for adding the partition count to
the partition_epoch table:
* It makes the partition_epoch table more self-describing:
it is easy to see how many partitions are in the
current epoch as well as in past ones.
* It simplifies native code that can read the partition
epoch, allocate memory for the right number of partitions,
and finally scan the partition table, filling in each entry.
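For example, with the count stored on the epoch row itself, epoch sizes
can be read without joining against the partition table (a sketch; the
column name is an assumption):
```
-- Hypothetical column name for the newly added partition count.
SELECT id, num_partitions FROM partition_epoch;
```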
This patch fixes two deadlock cases.
The first case occurred as a result of taking partition and chunk
locks in inconsistent orders. When creating the first chunk C1
in a table, concurrent INSERT workers would race to create
that chunk. The result would be that the transactions queue up on
the partition lock P, effectively serializing these transactions.
This would lead these concurrent transactions to insert
at very different offsets in time, one at a time. At some point
in the future, some n'th transaction Tn queued up on P would get
that lock as the preceding inserters T1-T(n-1) finish their inserts
and move on to their next batches. When Tn finally holds P, one of
the preceding workers starts a new transaction that finds that it
needs to close C1, grabbing a lock on C1 and then on P. However,
it will block on P since Tn already holds P. Tn will also believe
it needs to close C1 and thus tries to grab a lock on C1, but will
block, causing a deadlock.
The second case can occur on multi-partition hypertables. With
multiple partitions there is more than one open-ended chunk
at a time (one for each partition). This leads to a deadlock case
when two processes try to close (and thus lock) the chunks in
different order. For instance process P1 closes chunk C1 and then
C2, while process P2 locks in order C2 and C1.
The fix for the first case is to remove the partition lock
altogether. As it turns out, this lock is not needed.
Instead, transactions can race to create new chunks, causing
conflicts. A conflict when creating a new chunk can safely be
ignored, and this approach also avoids taking unnecessary locks.
Removing the partition lock also avoids the transaction
serialization that happens around this lock, which is especially
bad for long-running transactions (e.g., big INSERT batches).
The fix for the second multi-partition deadlock case is to always
close chunks in chunk ID order. This requires closing chunks at
the end of a transaction, once a transaction knows all the chunks
it needs to close. This also has the added benefit of reducing the
time a transaction holds exclusive locks on chunks, potentially
improving insert performance.
Currently, the internal metadata tables for hypertables track time
as a BIGINT. Converting hypertable time columns in TIMESTAMP
format to this internal representation requires Postgres' conversion
functions, which are imprecise due to floating-point arithmetic. This patch
adds C-based conversion functions that offer the following conversions
using accurate integer arithmetic:
- TIMESTAMP to UNIX epoch BIGINT in microseconds
- UNIX epoch BIGINT in microseconds to TIMESTAMP
- TIMESTAMP to Postgres epoch BIGINT in microseconds
- Postgres epoch BIGINT in microseconds to TIMESTAMP
The downside of the UNIX epoch functions is that they don't offer the full
date range offered by the Postgres to_timestamp() function. This is
because the required epoch shift might otherwise overflow the BIGINT.
All functions should, however, offer appropriate range checks and will
throw errors outside the valid range.
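Usage would look roughly like this (the function names here are
placeholders for illustration, not necessarily the ones added by the
patch):
```
-- TIMESTAMP -> UNIX epoch BIGINT in microseconds, exact integer arithmetic.
SELECT to_unix_microseconds('2017-01-01 00:00:00'::timestamp);
-- UNIX epoch BIGINT in microseconds -> TIMESTAMP.
SELECT to_timestamp_from_unix_microseconds(1483228800000000);
```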
Setting up a single node is now:
```
CREATE EXTENSION IF NOT EXISTS iobeamdb CASCADE;
select setup_single_node();
```
To set up a cluster, run (on the meta node):
```
CREATE EXTENSION IF NOT EXISTS iobeamdb CASCADE;
select set_meta();
```
and on each data node:
```
CREATE EXTENSION IF NOT EXISTS iobeamdb CASCADE;
select join_cluster('metadb', 'metahost');
```
This assumes that the commands are issued by the same user on both the
meta node and the data node. Otherwise the data node also has to
specify the user name to use when connecting to the meta node.
- Directory structure now matches common practices
- Regression tests now run with pg_regress via the PGXS infrastructure.
- Unit tests do not integrate well with pg_regress and have to be run
separately.
- Docker functionality is separate from the main Makefile. Run
  `make -f docker.mk` to build and `make -f docker.mk run` to run
  the database in a container.