213 Commits

Author SHA1 Message Date
Matvey Arye
1e9bc6895b Add telemetry fields to track compression
The following fields are added:
-num_compressed_hypertables
-compressed_KIND_size
-uncompressed_KIND_size

Where KIND = heap, index, toast.

The `num_hypertables` field does NOT count the internal hypertables
used for compressed data.

We also removed internal continuous aggs tables from the
`num_hypertables` count.
2019-10-29 19:02:58 -04:00
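A quick way to inspect these fields locally is to query the telemetry report directly; this sketch assumes the standard `get_telemetry_report()` function is available in this build:

```sql
-- Inspect the local telemetry report, which should now include the new
-- compression fields (num_compressed_hypertables, compressed_heap_size,
-- uncompressed_heap_size, and the index/toast variants).
SELECT get_telemetry_report();
```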
Sven Klemm
e2df62c81c Fix transparent decompression interaction with first/last
Queries with the first/last optimization on compressed chunks
would not properly decompress data but instead access the uncompressed
chunk. This patch fixes the behaviour and also unifies the check
whether a hypertable has compression.
2019-10-29 19:02:58 -04:00
Matvey Arye
c4efacdc92 Fix tests and function naming after rebase
Small fixup after rebase on the master branch. Fixes test
output and function naming (even test functions should
have the ts_ not the tsl_ prefix for exported C functions).
2019-10-29 19:02:58 -04:00
Matvey Arye
85d30e404d Add ability to turn off compression
Since enabling compression creates limits on the hypertable
(e.g. types of constraints allowed) even if there are no
compressed chunks, we add the ability to turn off compression.
This is only possible if there are no compressed chunks.
2019-10-29 19:02:58 -04:00
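A minimal sketch of turning compression off, assuming the `ALTER TABLE` reloption syntax of this release and a hypothetical `metrics` hypertable; the command errors if any compressed chunks exist:

```sql
-- Disable compression on a hypertable; only permitted while
-- no chunks of the hypertable are compressed.
ALTER TABLE metrics SET (timescaledb.compress = false);
```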
gayyappan
72588a2382 Restrict constraints on compressed hypertables.
Primary and unique constraints are limited to segment_by and order_by
columns, and foreign key constraints are limited to segment_by columns
when creating a compressed hypertable. There are no restrictions on
check constraints.
2019-10-29 19:02:58 -04:00
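A sketch of the rule using a hypothetical `metrics` table: a unique constraint over `(device_id, "time")` is acceptable here only because both columns appear in segment_by/order_by; a unique constraint on any other column would be rejected when compression is enabled:

```sql
-- device_id is a segment_by column and "time" an order_by column,
-- so a unique index over (device_id, "time") remains allowed.
ALTER TABLE metrics SET (timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'time DESC');
```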
Matvey Arye
df4c444551 Delete related rows for compression
This fixes deletion of related rows when we have compressed
hypertables. Namely, we delete rows from:

- compression_chunk_size
- hypertable_compression

We also fix hypertable_compression to handle NULLS correctly.

We add a stub for tests with continuous aggs as well as compression,
but that's broken for now, so it's commented out. It will be fixed
in another PR.
2019-10-29 19:02:58 -04:00
Matvey Arye
0db50e7ffc Handle drops of compressed chunks/hypertables
This commit adds handling for drops of chunks and hypertables
in the presence of associated compressed objects. If the uncompressed
chunk/hypertable is dropped, the associated compressed object is dropped
using DROP_RESTRICT unless cascading is explicitly enabled.

Also add a compressed_chunk_id index on compressed tables for
figuring out whether a chunk is compressed or not.

Change a bunch of APIs to use DropBehavior instead of a cascade bool
to be more explicit.

Also test the drop chunks policy.
2019-10-29 19:02:58 -04:00
gayyappan
6e60d2614c Add compress chunks policy support
Add and drop compress chunks policy using bgw
infrastructure.
2019-10-29 19:02:58 -04:00
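A usage sketch, assuming the policy function names of this era (`add_compress_chunks_policy`/`remove_compress_chunks_policy`) and a hypothetical `metrics` hypertable:

```sql
-- Schedule background compression of chunks older than 7 days.
SELECT add_compress_chunks_policy('metrics', INTERVAL '7 days');

-- And remove the policy again.
SELECT remove_compress_chunks_policy('metrics');
```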
Matvey Arye
cdf6fcb69a Allow altering compression options
We now allow changing the compression options on a hypertable
as long as there are no existing compressed chunks.
2019-10-29 19:02:58 -04:00
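For illustration, a hedged sketch of changing options on an already-compression-enabled (but not yet compressed) hypothetical `metrics` table; the exact reloption names are assumptions based on the release's syntax:

```sql
-- Changing segment_by/order_by settings is allowed as long as
-- no compressed chunks exist yet.
ALTER TABLE metrics SET (
    timescaledb.compress_segmentby = 'device_id, region');
```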
gayyappan
1c6aacc374 Add ability to create the compressed hypertable
This happens when compression is turned on for regular hypertables.
2019-10-29 19:02:58 -04:00
Sven Klemm
d82ad2c8f6 Add ts_ prefix to all exported functions
This patch adds the `ts_` prefix to exported functions that didn't
have it and removes exports that are not needed.
2019-10-15 14:42:02 +02:00
Matvey Arye
01f2bbaf5a Add better errors for no permission in callbacks
Have better permission errors when setting
the integer now func and the partitioning func.

Also move tests from tsl to apache2 for the now func.
2019-10-11 13:00:55 -04:00
Matvey Arye
d2f68cbd64 Move the set_integer_now func into Apache2
We decided this should be an OSS capability.
2019-10-11 13:00:55 -04:00
Sven Klemm
41878e735b Fix background worker segfaults
The background worker function to check the owner of a job did not have
proper error handling when the object referenced by a job did not exist
leading to a segfault in that case.  This patch adds proper checking of
return values and errors when the object cannot be found.
2019-09-01 19:51:27 +02:00
Erik Nordström
fe47f10e25 Fix partitioned table check and enhance PG macros
This change enables a check in `create_hypertable` that prohibits
turning partitioned tables into hypertables. The check was only
enabled when compiling against PG10, but should be there for PG
version 10 and greater.

To avoid such disabled code in the future, some extra convenience
macros have been added. For instance `PG10_GE` means PG10 and greater.
2019-07-14 09:41:07 +02:00
Erik Nordström
12ce2b8803 Fail when adding space dimension with no partitions
Calling `create_hypertable` with a space dimension silently succeeds
without actually creating the space dimension if `num_partitions` is
not specified.

This change ensures that we raise an appropriate error when a user
fails to specify the number of partitions.
2019-07-03 19:04:32 +02:00
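A sketch of the before/after behavior, using a hypothetical `conditions` table and the `create_hypertable` argument names assumed from this release:

```sql
-- Creating a space dimension now requires an explicit partition count.
SELECT create_hypertable('conditions', 'time', 'device_id',
                         number_partitions => 4);

-- Omitting the partition count for a space dimension now raises an
-- error instead of silently skipping the space dimension:
-- SELECT create_hypertable('conditions', 'time', 'device_id');  -- ERROR
```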
Matvey Arye
e049238a07 Adjust permissions on internal functions
The following functions have had permission checks
added or adjusted:
ts_chunk_index_clone
ts_chunk_index_replace
ts_hypertable_insert_blocker_trigger_add
ts_current_license_key
ts_calculate_chunk_interval
ts_chunk_adaptive_set

The following functions have been removed from the regular SQL install.
They are only installed and used in tests:

dimension_calculate_default_range_open
dimension_calculate_default_range_closed
2019-06-24 10:57:38 -04:00
Matvey Arye
991ba7afab Reword permission error
Reword the permission error to make clear that the permission issue
relates to ownership and to match PG errors.
2019-06-24 10:57:38 -04:00
Matvey Arye
d580abf04f Change how permissions work with continuous aggs
To create a continuous agg you now only need SELECT and
TRIGGER permission on the raw table. To continue refreshing
the continuous agg the owner of the continuous agg needs
only SELECT permission.

This commit adds tests to make sure that removing the
SELECT permission removes ability to refresh using
both REFRESH MATERIALIZED VIEW and also through a background
worker.

This work also uncovered divergence in the permission logic between
creating triggers on chunks via CREATE TRIGGER and when new
chunks are created. This has now been unified: there is a check
to make sure you can create the trigger on the main table and
then there is a check that the owner of the main table can create
triggers on chunks.

Alter view for continuous aggregates is allowed for the owner of the
view.
2019-06-24 10:57:38 -04:00
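The permission model above can be sketched with standard PostgreSQL grants; `conditions` and `cagg_owner` are hypothetical names:

```sql
-- The raw-table owner grants the minimum needed to create a
-- continuous aggregate; afterwards SELECT alone is enough for
-- the owner of the aggregate to keep refreshing it.
GRANT SELECT, TRIGGER ON conditions TO cagg_owner;

-- Revoking SELECT stops both manual REFRESH MATERIALIZED VIEW
-- and background-worker refreshes.
REVOKE SELECT ON conditions FROM cagg_owner;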
Matvey Arye
d3e582fd23 Adjust permission checks to ProcessUtility start
Items that are "handled" in process utility start never go through
the standard process utility. Thus they may not have permissions
checks called. This commit goes through all such items and adds
permissions checks as appropriate.
2019-06-24 10:57:38 -04:00
Matvey Arye
e834c2aba8 Better permission checks in API calls
This commit fixes and tests permissions in the following
API calls:
- reorder_chunk (test only)
- alter_job_schedule
- add_drop_chunks_policy
- remove_drop_chunks_policy
- add_reorder_policy
- remove_reorder_policy
- drop_chunks
2019-06-24 10:57:38 -04:00
Sven Klemm
8c2acecbf4 Skip runtime exclusion when Param is not partitioning column
When the column a Param references is not a partitioning column
the constraint is not useful for excluding chunks so we skip
enabling runtime exclusion for those cases.
2019-06-23 18:50:34 +02:00
Joshua Lockerman
2801c6a5f5 Fix handling of types with custom partitioning
In various places, most notably drop_chunks and show_chunks, we
dispatch based on the type of the "time" column of the hypertable, for
things such as determining which interval type to use. With a custom
partition function, this logic is incorrect, as we should instead be
determining this based on the return type of the partitioning function.

This commit changes all relevant access of dimension.column_type to a
new function, ts_dimension_get_partition_type, which has the correct
behavior: it returns the partitioning function's return type, if one
exists, and only otherwise uses the column type. After this commit, all
direct references to column_type should have a comment explaining why
this is appropriate.

Fixes GitHub issue #1250
2019-06-21 13:08:51 -04:00
Joshua Lockerman
ae3480c2cb Fix continuous_aggs info
This commit switches the remaining JOIN in the continuous_aggs_stats
view to LEFT JOIN. This way we'll still see info from the other columns
even when the background worker has not run yet.
This commit also switches the time fields to output text in the correct
format for the underlying time type.
2019-04-26 13:08:00 -04:00
Matvey Arye
1f89478e4b Rename continuous aggregate internal objects
Rename the materialized hypertable, partial view and direct view
with a hypertable_id suffix. This avoids truncation problems and
lets us use a longer prefix.

This change also renames the columns in the mat table to have either
a _agg or _grp prefix and to include the column number from the original
view.
2019-04-26 13:08:00 -04:00
Joshua Lockerman
ef50ee2ed5 Fix continuous agg trigger handling
Add the continuous aggregate invalidation trigger to chunks that
existed before the continuous aggregate was created. Propagate DROPs of
the invalidation trigger to chunks.
2019-04-26 13:08:00 -04:00
Matvey Arye
7a4191bd84 Handle drops on continuous agg views and tables
Previously we used postgres dependency tracking to ensure consistent
deletes between various continuous agg postgres objects (views and
tables). This does not work with dump/restore and thus this PR removes
that dependency tracking in favor of handling these deps ourselves in
various drop hooks.

This PR also adds logic for deleting rows in the continuous_agg metadata
table when the view is dropped. It does not yet handle deleting
associated threshold metadata, that's left for a future PR.

Most of this logic is in the apache-licensed code and not in the TSL
since we want people who downgraded from TSL to apache to still be
able to drop continuous views.
2019-04-26 13:08:00 -04:00
Joshua Lockerman
1e486ef2a4 Fix ts_chunk_for_tuple performance
ts_chunk_for_tuple should use the chunk cache.
ts_chunk_for_tuple should be marked stable.
These fixes markedly improve performance.
2019-04-19 12:46:36 -04:00
Matvey Arye
786250ae24 Add create function for dimension info
This PR adds a create function interface for dimension info since we
will want to create these objects in more places in the future. This
creates a more stable API than just setting struct elements directly.
2019-03-15 14:53:04 -04:00
Matvey Arye
ee945a5abd Create a non-SQL C interface for create_hypertable
This PR creates a pure-C interface for create_hypertable. This
makes calling this function within C much easier. It also does
some interface cleanup. Most notably, it now disallows
chunk_sizing_func to be NULL since it has a NOT NULL constraint
on the hypertable catalog table.
2019-03-14 12:37:25 -04:00
Joshua Lockerman
ffdc095d6e Enable creating indexes with one transaction per chunk
Currently CREATE INDEX creates the indices for all chunks in a single
transaction, which holds a lock on the root hypertable and all chunks. This
means that during CREATE INDEX no new inserts can occur, even if we're not
currently building an index on the table being inserted to.

This commit adds the option to create indices using a separate
transaction for each chunk. This option, used like

    CREATE INDEX ON <table> WITH (timescaledb.transaction_per_chunk);

should cause less contention than a regular CREATE INDEX, in exchange
for the possibility that the index will be created on only some, or none,
of the chunks, if the command fails partway through. The command holds a lock on
the root index used as a template throughout, and each per-chunk transaction
additionally locks only the chunk being indexed. This means that chunks which
are not currently being indexed can be inserted into, and new chunks can be
created while the CREATE INDEX command is in progress.

To enable detection of failed transaction_per_chunk CREATE INDEXes, the
hypertable's index is marked as invalid while the CREATE INDEX is in progress;
if the command fails partway through, the index will remain invalid. If such an
invalid index is discovered, it can be dropped and recreated to ensure that all
chunks have a copy of the index. In the future, we may add a command to create
indexes on only those chunks which are missing them. Note that even though the
hypertable's index is marked as invalid, new chunks will have a copy of the
index built as normal.

As part of the refactoring to make this command work, normal index creation was
slightly modified. Instead of getting the column names an index uses
one-at-a-time, we get them all at once at the beginning of index creation; this
allows us to close the hypertable's root table once we've determined all of
them, while we create the index info for each chunk. Secondly, it changes our
function to look up a tablespace, ts_hypertable_get_tablespace_at_offset_from,
to take only a hypertable id instead of the hypertable's entire cache entry;
this function only ever used the id, so this allows us to release the
hypertable cache earlier.
2019-02-22 14:54:36 -05:00
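Detecting an index left invalid by a failed per-chunk CREATE INDEX can be done with the standard PostgreSQL catalogs; this is a generic sketch, not a command added by the commit:

```sql
-- List any invalid indexes; an invalid hypertable index after a
-- transaction_per_chunk CREATE INDEX should be dropped and recreated.
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE NOT indisvalid;
```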
Matvey Arye
34edba16a9 Run clang-format on code 2019-02-05 16:55:16 -05:00
Matvey Arye
b891a28b80 Add final commas to designated initializers
Final commas are needed for clang-format to format the
code appropriately.
2019-02-05 16:55:16 -05:00
Joshua Lockerman
acc41a7712 Update license header
Only have the copyright in the NOTICE. Hopefully
only having to update one place each year will
keep it consistent.
2019-01-03 11:57:51 -05:00
Joshua Lockerman
6811296db7 Clean windows warnings
After this, the only remaining warnings on Windows are from postgres
itself.
2019-01-03 10:50:18 -05:00
Amy Tai
be7c74cdf3 Add logic for automatic DB maintenance functions
This commit adds logic for manipulating the internal metadata tables that enable users to schedule automatic drop_chunks and recluster policies. This commit includes:

- SQL for creating policy tables and chunk stats table
- Catalog code and C code for accessing these three tables programmatically
- Implement and expose new user API functions:  add_*_policy and remove_*_policy
- Stub scheduler logic for running the policies
2019-01-02 15:43:48 -05:00
Sven Klemm
92586d8fc9 Fix typos in comments 2018-12-31 18:36:05 +01:00
Sven Klemm
b1378449bc Remove unused functions
Remove the following unused functions:
ts_cache_switch_to_memory_context
ts_chunk_free
ts_chunk_exists
ts_chunk_index_delete_children_of
ts_chunk_index_delete_by_hypertable_id
ts_hypertable_scan_relid
ts_tablespaces_clear
ts_tablespaces_delete
2018-12-18 10:35:04 +01:00
David Kohn
5aa1edac15 Refactor compatibility functions and code to support PG11
Introduce PG11 support by introducing compatibility functions for
any whose signatures have changed in PG11. Additionally, refactor
the structure of the compatibility functions found in compat.h by
breaking them out by function (or small set of similar functions)
so that it is easier to see what changed between versions and maintain
changes as more versions are supported.

In general, the philosophy has been to try for forward compatibility
wherever possible, so that we use the latest versions of function interfaces
where we can or where reasonably convenient and mimic the behavior
in older versions as much as possible.
2018-12-12 11:42:33 -05:00
Erik Nordström
e4a4f8e2f8 Add support for functions on open (time) dimensions
TimescaleDB has always supported functions on closed (space)
dimension, i.e., for hash partitioning. However, functions have not
been supported on open (time) dimensions, instead requiring columns to
have a supported time type (e.g, integer or timestamp). This restricts
the tables that can be time partitioned. Tables with custom "time"
types, which can be transformed by a function expression into a
supported time type, are not supported.

This change generalizes partitioning so that both open and closed
dimensions can have an associated partitioning function that
calculates a dimensional value. Fortunately, since we already support
functions on closed dimensions, the changes necessary to support this
on any dimension are minimal. Thus, open dimensions now support an
(optional) partitioning function that transforms the input type to a
supported time type (e.g., integer or timestamp type). Any indexes on
such dimensional columns become expression indexes.

Tests have been added for chunk expansion and the hashagg and sort
transform optimizations on tables that are using a time partitioning
function.

Currently, not all of these optimizations are well supported, but this
could potentially be fixed in the future.
2018-12-12 10:14:31 +01:00
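A hedged sketch of the new capability; the column type, function body, and parameter name (`time_partitioning_func`) are illustrative assumptions about this release's API:

```sql
-- A custom "time" representation made partitionable via a function
-- that returns a supported time type.
CREATE FUNCTION event_time_to_timestamp(payload jsonb)
    RETURNS timestamptz LANGUAGE sql IMMUTABLE
    AS $$ SELECT (payload->>'ts')::timestamptz $$;

SELECT create_hypertable('events', 'payload',
    time_partitioning_func => 'event_time_to_timestamp');
```

Any index created on such a dimensional column becomes an expression index over the partitioning function.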
Amy Tai
83014ee2b0 Implement drop_chunks in C
Remove the existing PLPGSQL function that implements drop_chunks, replacing it with a direct call to the C function, which also implements the old PLPGSQL checks in C. Refactor out much of the code shared between the C implementations of show_chunks and drop_chunks.
2018-12-06 13:27:12 -05:00
Joshua Lockerman
9de504f958 Add ts_ prefix to everything in headers
Future proofing: if we ever want to make our functions available to
others, they'd need to be prefixed to prevent name collisions. In
order to avoid having some functions with the ts_ prefix and
others without, we're adding the prefix to all non-static
functions now.
2018-12-05 14:43:22 -05:00
Amy Tai
54b189a7e4 Remove unused function from hypertable.c
hypertable_delete_by_id has long since been replaced by hypertable_delete_name and isn't used anywhere in the codebase.
2018-12-04 13:13:42 -05:00
Narek Galstyan
9a3402809f Implement show_chunks in C and have drop_chunks use it
Timescale provides an efficient and easy-to-use API to drop individual
chunks from a TimescaleDB database through drop_chunks. This PR builds on
that functionality and, through a new show_chunks function, gives the
opportunity to see the chunks that would be dropped if drop_chunks were run.
Additionally, it adds a newer_than option to drop_chunks (also supported
by show_chunks) that allows users to see/drop chunks in an interval or newer
than a point in time.

This commit includes:
    - Implementation of show_chunks in C
    - Additional helper functions to work with chunks
    - New version of drop_chunks in sql that uses show_chunks. This
      	  also adds a newer_than option to drop_chunks
    - More enhanced tests of drop_chunks and new tests for show_chunks

Among other reasons, show_chunks was implemented in C in order
to be able to have both older_than and newer_than arguments be null. This
was not possible in SQL because the arguments had to have polymorphic types,
and PL/pgSQL requires such arguments to typecheck whether or not they are
used in the function body.
2018-11-28 13:46:07 -05:00
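A usage sketch with a hypothetical `conditions` hypertable, assuming the named-argument signature described above:

```sql
-- Preview the chunks that drop_chunks would remove.
SELECT show_chunks('conditions', older_than => INTERVAL '3 months');

-- Using the new newer_than option to select chunks within an interval.
SELECT show_chunks('conditions',
    older_than => INTERVAL '3 months',
    newer_than => INTERVAL '6 months');
```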
Amy Tai
0267f46b56 Clean up catalog code
Organize catalog.c so that functions that access the Catalog struct are physically separate from the functions that modify the actual catalog tables on disk. Also added macros and made static some functions that aren't used outside the catalog files. Also refactored out the CatalogDatabaseInfo struct, which is independent of the Catalog struct and will be reused in the future.
2018-11-27 17:10:50 -05:00
Sven Klemm
7e55d910eb Add checks for NULL arguments to DDL functions
Add checks for NULL arguments to ts_chunk_adaptive_set,
ts_dimension_set_num_slices, ts_dimension_set_interval,
ts_dimension_add and ts_hypertable_create
2018-11-23 20:54:27 +01:00
Amy Tai
80e0b05348 Provide helper function creating struct from tuple
Refactored the boilerplate that allocates and copies over data from a tuple to a struct. This is typically used in the scanner context in order to read rows from a SQL table in C.
2018-11-21 15:33:56 -05:00
Erik Nordström
28e2e6a2f7 Refactor scanner callback interface
This change adds proper result types for the scanner's filter and
tuple handling callbacks. Previously, these callbacks were supposed to
return bool, which was hard to interpret. For instance, for the tuple
handler callback, true meant continue processing the next tuple while
false meant finish the scan. However, this wasn't always clear. Having
proper return types also makes it easier to see from a function's
signature that it is a scanner callback handler, rather than some
other function that can be called directly.
2018-11-08 17:33:26 +01:00
Joshua Lockerman
d8e41ddaba Add Apache License header to all C files 2018-10-29 13:28:19 -04:00
Joshua Lockerman
08bac40021 Add beta NOTICE when using adaptive chunking
Adaptive chunking is in beta, and we don't want users enabling it in
production by accident. This commit adds a warning to that effect.
2018-10-29 13:11:26 -04:00