

TimescaleDB is an open-source database designed to make SQL scalable for time-series data. It is engineered up from PostgreSQL, providing automatic partitioning across time and space (via a partitioning key), as well as full SQL support.

TimescaleDB is packaged as a PostgreSQL extension and set of scripts.

For a more detailed description of our architecture, please read the technical paper. Additionally, more documentation can be found on our docs website.

There are two ways to install TimescaleDB: (1) Docker and (2) Postgres.

Installation (from source)

NOTE: Currently, upgrading to new versions requires a fresh install.

Installation Options

Option 1 - Docker

Prerequisites

  • The Docker daemon running on your local machine

Build and run in Docker

# To build a Docker image
make -f docker.mk build-image

# To run a container
make -f docker.mk run

Option 2 - Postgres

Prerequisites

  • A standard PostgreSQL 9.6 installation with development environment (header files) (e.g., Postgres.app for macOS)

Build and install with local PostgreSQL

# To build the extension
make

# To install
make install

Update postgresql.conf

You will need to edit your postgresql.conf file to include the necessary libraries, and then restart PostgreSQL:

# Modify postgresql.conf to add required libraries. For example,
shared_preload_libraries = 'dblink,timescaledb'

# Then, restart PostgreSQL
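As a concrete sketch, this edit can be scripted. The config file path below is a stand-in, since the real location varies by installation (running SHOW config_file; in psql prints it):

```shell
# Append the setting to postgresql.conf; the path below is a stand-in
# for the installation's actual config file (see: SHOW config_file;).
CONF=./postgresql.conf
echo "shared_preload_libraries = 'dblink,timescaledb'" >> "$CONF"

# Confirm the setting was written
grep shared_preload_libraries "$CONF"

# Then restart the server, e.g. with pg_ctl (data directory varies):
#   pg_ctl -D /usr/local/var/postgres restart
```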

Setting up your initial database

Now, we'll install our extension and create an initial database.

You again have two options for setting up your initial database:

  1. Empty Database - To set up a new, empty database, please follow the instructions below.

  2. Database with pre-loaded sample data - To help you quickly get started, we have also created some sample datasets. See Using our Sample Datasets for further instructions. (Includes installing our extension.)

Setting up an empty database

When creating a new database, it is necessary to install the extension and then run an initialization function.

# Connect to Postgres, using a superuser named 'postgres'
psql -U postgres -h localhost
-- Create the tutorial database and connect to it
CREATE DATABASE tutorial;
\c tutorial

-- Install the extension
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;

-- Run initialization function
SELECT setup_timescaledb();

For convenience, this can also be done in one step by running a script from the command-line:

DB_NAME=tutorial ./scripts/setup-db.sh

Accessing your new database

You should now have a brand new time-series database running in Postgres.

# To access your new database
psql -U postgres -h localhost -d tutorial
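Once connected, psql's standard \dx meta-command lists the installed extensions, which should now include timescaledb:

```
\dx
```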

Next let's load some data.

Working with time-series data

One of the core ideas of our time-series database is the time-series optimized data table, called a hypertable.

Creating a (hyper)table

To create a hypertable, you start with a regular SQL table, and then convert it into a hypertable via the function create_hypertable() (see the API definition).

The following example creates a hypertable for tracking temperature and humidity across a collection of devices over time.

-- We start by creating a regular SQL table
CREATE TABLE conditions (
  time        TIMESTAMP WITH TIME ZONE NOT NULL,
  location    TEXT                     NOT NULL,
  temperature DOUBLE PRECISION         NULL,
  humidity    DOUBLE PRECISION         NULL
);

Next, transform it into a hypertable using the provided function create_hypertable():

-- This creates a hypertable that is partitioned by time
--   using the values in the `time` column.
SELECT create_hypertable('conditions', 'time');

-- OR you can additionally partition the data on another dimension
--   (what we call 'space') such as `location`.
-- For example, to partition `location` into 2 partitions:
SELECT create_hypertable('conditions', 'time', 'location', 2);

Inserting and querying

Inserting data into the hypertable is done via normal SQL INSERT commands. For example:

INSERT INTO conditions(time,location,temperature,humidity)
VALUES(NOW(), 'office', 70.0, 50.0);

Similarly, querying data is done via normal SQL SELECT commands. SQL UPDATE and DELETE commands also work as expected.
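For example, against the conditions table created above (a sketch; any standard SQL works here):

```sql
-- Average temperature per location over the last 12 hours
SELECT location, AVG(temperature) AS avg_temp
FROM conditions
WHERE time > NOW() - interval '12 hours'
GROUP BY location;

-- Updates and deletes work the same way
UPDATE conditions SET humidity = 55.0 WHERE location = 'office';
```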

Indexing data

Data is indexed using normal SQL CREATE INDEX commands. For instance,

CREATE INDEX ON conditions (location, time DESC);

This can be done before or after converting the table to a hypertable.

Indexing suggestions:

Our experience has shown that which types of indexes are most useful for time-series data depends on your data.

For indexing columns with discrete (limited-cardinality) values (e.g., where you are most likely to use an "equals" or "not equals" comparator) we suggest using an index like this (using our hypertable conditions for the example):

CREATE INDEX ON conditions (location, time DESC);

For all other types of columns, i.e., columns with continuous values (e.g., where you are most likely to use a "less than" or "greater than" comparator) the index should be in the form:

CREATE INDEX ON conditions (time DESC, temperature);

Having a time DESC column specification in the index allows for efficient queries by column value and time. For example, the first index defined above (on location, time DESC) would optimize the following query:

SELECT * FROM conditions WHERE location = 'garage' ORDER BY time DESC LIMIT 10;
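To verify which plan a query uses, and whether an index is being picked up, PostgreSQL's standard EXPLAIN command works as usual:

```sql
EXPLAIN SELECT * FROM conditions
WHERE location = 'garage'
ORDER BY time DESC
LIMIT 10;
```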

For sparse data where a column is often NULL, we suggest adding a WHERE column IS NOT NULL clause to the index (unless you are often searching for missing data). For example,

CREATE INDEX ON conditions (time DESC, humidity) WHERE humidity IS NOT NULL;

This creates a more compact, and thus efficient, index.

Current limitations

Below are a few current limitations of our database, which we are actively working to resolve:

  • Any user has full read/write access to the metadata tables for hypertables.
  • Permission changes on hypertables are not correctly propagated.
  • create_hypertable() can only be run on an empty table.
  • COPYing a dataset will currently put all data in the same chunk, even if chunk size goes over max size. For now we recommend breaking down large files for COPY (e.g., large CSVs) into smaller files that are slightly larger than max_chunk size (currently 1GB by default). We provide scripts/migrate_data.sh to help with this.
  • Custom user-created triggers on hypertables are currently not allowed.
  • drop_chunks() (see our API Reference) is currently only supported for hypertables that are not partitioned by space.
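The COPY workaround mentioned above can be sketched with standard shell tools. The file name and the tiny size limit here are stand-ins for illustration; a real run would use a limit near the max chunk size (e.g., split -C 1000m):

```shell
# Build a small stand-in CSV (a real dataset would be a large file)
printf 'time,location,temperature,humidity\n' > measurements.csv
for day in 01 02 03 04; do
  printf '2017-03-%s 00:00:00,office,70.0,50.0\n' "$day" >> measurements.csv
done

# Strip the header, then split the rows by size into part_aa, part_ab, ...
# (-C packs as many complete lines as fit under the byte limit per file)
tail -n +2 measurements.csv | split -C 64 - part_

# Each piece could then be loaded separately, e.g.:
#   psql -U postgres -d tutorial -c "\copy conditions FROM 'part_aa' CSV"
ls part_*
```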

More APIs

For more information on TimescaleDB's APIs, check out our API Reference.

Testing

If you want to contribute, please make sure to run the test suite before submitting a PR.

If you are running locally:

make installcheck

If you are using Docker:

make -f docker.mk test