Something is causing heap corruption when setting the license key to
default while using the GUC extra on Windows. For now, stop using the
extra and instead rerun the validation function; if we reach the assign
hook we must have a valid key, so validation will never fail.
Also fixes an error message on Windows; it turns out Windows does not
like printing NULL strings, so don't do that.
Fixes other minor Windows bugs.
Remove the following unused functions:
_timescaledb_internal.to_microseconds(TIMESTAMPTZ)
_timescaledb_internal.to_timestamp_pg(BIGINT)
_timescaledb_internal.time_to_internal(anyelement)
Introduce PG11 support by adding compatibility functions for any
functions whose signatures changed in PG11. Additionally, refactor
the structure of the compatibility functions in compat.h by
breaking them out by function (or small set of similar functions)
so that it is easier to see what changed between versions and to
maintain these changes as more versions are supported.
In general, the philosophy has been to aim for forward compatibility
wherever possible: we use the latest version of a function's interface
where we can, or where reasonably convenient, and mimic that behavior
on older versions as much as possible.
If possible, replace the aggregate functions FIRST/LAST with subqueries of the form
(SELECT value FROM table WHERE sort IS NOT NULL AND existing-quals ORDER BY sort ASC/DESC
LIMIT 1).
Given a suitable index on the sort column, this plan can be much faster than scanning all the
rows and running an aggregate function.
The optimization can't be performed if:
- query uses GROUP BY or WINDOW function
- query contains CTEs
- query contains other aggregate functions (e.g., combining MIN/MAX with FIRST/LAST; we can't
optimize across different aggregate functions)
- query uses JOIN
- FIRST/LAST used in ORDER BY
The optimization also works in subqueries, and when FIRST/LAST is used in a CTE subquery.
To make the existing FIRST/LAST aggregate functions consistent with PostgreSQL and with the
FIRST/LAST optimization, we exclude NULL values in the sort-by column.
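To make the rewrite concrete, here is a minimal sketch on a hypothetical
conditions(time, temperature) table; the table and column names are ours,
not part of this change:

    -- An aggregate query such as:
    SELECT last(temperature, time) FROM conditions;

    -- is planned as if it had been written:
    SELECT temperature
    FROM conditions
    WHERE time IS NOT NULL
    ORDER BY time DESC
    LIMIT 1;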
Future-proofing: if we ever want to make our functions available to
others, they'd need to be prefixed to prevent name collisions. In
order to avoid having some functions with the ts_ prefix and
others without, we’re adding the prefix to all non-static
functions now.
TimescaleDB provides an efficient and easy-to-use API for dropping
individual chunks from a hypertable through drop_chunks. This PR builds on
that functionality: a new show_chunks function makes it possible
to see the chunks that would be dropped if drop_chunks were run.
Additionally, it adds a newer_than option to drop_chunks (also supported
by show_chunks) that allows seeing/dropping chunks in an interval or newer
than a point in time.
This commit includes:
- Implementation of show_chunks in C
- Additional helper functions to work with chunks
- New version of drop_chunks in SQL that uses show_chunks. This
also adds a newer_than option to drop_chunks
- Enhanced tests of drop_chunks and new tests for show_chunks
Among other reasons, show_chunks was implemented in C so that both the
older_than and newer_than arguments can be NULL. This was not possible
in SQL because the arguments have to have polymorphic types, and PL/pgSQL
requires these arguments to typecheck whether or not they are used in
the function body.
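A sketch of the resulting API on a hypothetical conditions hypertable
(the table name, intervals, and exact argument names are assumptions):

    -- Preview the chunks with data older than three months:
    SELECT show_chunks('conditions', older_than => INTERVAL '3 months');

    -- Drop chunks between three months and one month old:
    SELECT drop_chunks(table_name => 'conditions',
                       older_than => INTERVAL '1 month',
                       newer_than => INTERVAL '3 months');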
Refactored the boilerplate that allocates and copies over data from a tuple to a struct. This is typically used in the scanner context in order to read rows from a SQL table in C.
The macro is used for two reasons:
1) It's more correct in that it doesn't mix the Timestamp and TimestampTz
types; there is no implicit conversion between the two under the hood.
2) It is slightly faster, as it avoids an extra function call. This
is a very performance-sensitive function for OLAP queries.
Since Monday is the ISO start of the week, it makes sense to move
the time_bucket epoch to start on a Monday. Previously, the epoch was the
same as the Postgres epoch (2000-01-01, a Saturday).
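For illustration (the dates are ours), weekly buckets now align to Mondays:

    SELECT time_bucket(INTERVAL '1 week', TIMESTAMP '2018-10-24 10:00:00');
    -- 2018-10-22 00:00:00, a Monday (previously 2018-10-20, a Saturday)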
We've decided to adopt the ts_ prefix on all exported C functions in
order to avoid having symbol conflicts with future postgres functions.
We've already started using this prefix on new functions and this commit
adds the prefix to the old functions.
Users can now (optionally) set a target chunk size and TimescaleDB
will try to adapt the interval length of the first open ("time")
dimension in order to reach that target chunk size. If a hypertable
has more than one open dimension, only the first one will have a
dynamically adapting interval.
Users can optionally specify their own function that calculates the
new dimension interval. They can also set a target size of 0 in order
to estimate a suitable target size for a chunk based on available
memory.
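A hedged sketch of usage; the function name set_adaptive_chunking and its
parameter name are assumptions based on the description above:

    -- Adapt the chunk interval to target roughly 1 GB chunks
    -- (function and parameter names are assumptions):
    SELECT set_adaptive_chunking('conditions', chunk_target_size => '1GB');

    -- A target size of 0 asks TimescaleDB to estimate a suitable
    -- target based on available memory:
    SELECT set_adaptive_chunking('conditions', chunk_target_size => '0');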
This PR fixes all the formatting to be in line with the latest version of
pgindent. Since pgindent does not like variables named `type`, those
have been appropriately renamed.
This optimization adds a HashAggregate plan to many group by queries.
In plain Postgres, many time-series queries will not use the hash
aggregate because the planner will incorrectly assume that the number of
rows is much larger than it actually is and will use the less efficient
GroupAggregate instead of a HashAggregate to prevent running out of
memory.
The planner will assume a large number of rows because the statistics
estimator for grouping assumes that the number of distinct items produced
by a function is the same as the number of distinct items going in. This
is not true for functions like time_bucket and date_trunc. This
optimization fixes the statistics and adds the HashAggregate plan if
appropriate.
The statistics now rely on evaluating the spread of a variable and
dividing it by the interval in the time_bucket or date_trunc. This is
still an overestimate of the total number of groups but is better than
before. A further improvement would be to evaluate the quals
(WHERE clauses) on the query to try to derive a tighter spread on the
variable. This is left to a future optimization.
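A hypothetical example of the effect, with numbers of our choosing: if the
time column spans 30 days and is bucketed by one hour, the number of groups
is now estimated as spread / interval = 30 days / 1 hour = 720, instead of
the raw row count, so a plan like the following can choose HashAggregate:

    EXPLAIN
    SELECT time_bucket(INTERVAL '1 hour', time) AS hour, count(*)
    FROM conditions
    GROUP BY hour;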
The functions for adding and updating dimensions have been refactored
in C to:
- use proper error codes
- produce messages that better conform to the PostgreSQL standard
- improve security by avoiding running lots of code under SECURITY DEFINER
A new if_not_exists option has also been added to add_dimension(), and
the number of partitions can now be set using the new
set_number_partitions() function.
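For example, on a hypothetical conditions hypertable with a device_id
column (the names are ours):

    -- Add a hash-partitioned dimension, doing nothing if it already exists:
    SELECT add_dimension('conditions', 'device_id',
                         number_partitions => 4, if_not_exists => true);

    -- Change the number of partitions later:
    SELECT set_number_partitions('conditions', 8);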
A bug in the validation of smallint time intervals has been fixed. The
previous code didn't check for intervals > 0 and smallint intervals
accepted values up to UINT16_MAX instead of INT16_MAX.
Source code indentation has been updated in PostgreSQL 10 to fix a
number of issues. This update applies this new indentation to the
entire code base.
The new indentation requires a new version of pg_bsd_indent, which can
be found here:
https://git.postgresql.org/git/pg_bsd_indent.git
Windows 64-bit binaries should now be buildable using the cmake
build system either from the command line or from Visual Studio.
Previous issues regarding unresolved symbols have been resolved
by adding compatibility header files that properly export symbols
and by getting GUCs via normal APIs.
reindex allows you to reindex the indexes of only certain chunks,
filtering by time. This is a common use case because a user may
want to reindex chunks once they are no longer getting new data.
reindex also has a recreate option, which does not use REINDEX
but rather creates a new index (CREATE INDEX), drops the old one
(DROP INDEX), and renames the new index to the old name. This approach
has the advantage of blocking reads for a much shorter period of time.
However, it does more work and will use more disk space during the operation.
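A hedged sketch of an invocation; the exact signature, and the argument
names in particular, are assumptions:

    -- Recreate the indexes of chunks with data older than three months
    -- (argument names are hypothetical):
    SELECT reindex('conditions', recreate => true,
                   older_than => INTERVAL '3 months');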
Previously, for timestamps without time zone, range_start and range_end
were defined in UTC, but the constraints on the table were written as
the local time at the time of chunk creation. This does not work well
if timezones change over the life of the hypertable.
This change removes the dependency on local time for all timestamp
partitioning. Namely, the range_start and range_end remain as UTC
but the constraints are now always written in UTC too. Since old
constraints correctly describe the data currently in the chunks, the
update script to handle this change changes range_start and range_end
instead of the constraints.
Fixes #300.
The extension now works with PostgreSQL 10, while
retaining compatibility with version 9.6.
PostgreSQL 10 has numerous internal changes to functions and
APIs, which necessitates various glue code and compatibility
wrappers to seamlessly retain backwards compatibility with older
versions.
Test output might also differ between versions. In particular,
the psql client generates version-specific output with `\d` and
EXPLAINs might differ due to new query optimizations. The test
suite has been modified as follows to handle these issues. First,
tests now use version-independent functions to query system
catalogs instead of using `\d`. Second, changes have been made to
the test suite to be able to verify some test outputs against
version-dependent reference files.
For all exported functions, the macro PGDLLEXPORT needs to be
prepended. Additionally, on Windows `open` is a macro that needed to
be renamed. A few other small changes were made to keep Visual
Studio's compiler happy and get rid of warnings (e.g., adding return
statements after elog).
Applying triggers to chunks requires taking the definition
of a trigger on a hypertable and executing it on a chunk. Previously
this was done with string replacement in the trigger definition.
This was not especially safe, and thus we moved the logic to C
where we can do proper parsing/deparsing and replacement of the table
name. Another positive aspect is that we got rid of some DDL triggers.
Streamline code and remove triggers from chunk and chunk_constraint,
with lots of additional cleanup. Also removes the need to CASCADE
hypertable drops (fixes #88).
This PR adds support for primary-key, foreign-key, unique, and exclusion constraints.
CHECK and NOT NULL constraints were already supported. Foreign-key
constraints where a hypertable references a plain table are now supported
(while the reverse, where a plain table references a hypertable, is still not).
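A minimal sketch of what is now possible (table and column names are ours):

    CREATE TABLE devices (id INTEGER PRIMARY KEY, name TEXT);

    CREATE TABLE conditions (
        time        TIMESTAMPTZ NOT NULL,
        device_id   INTEGER REFERENCES devices (id),  -- hypertable -> plain table
        temperature DOUBLE PRECISION,
        UNIQUE (time, device_id)
    );
    SELECT create_hypertable('conditions', 'time');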
Previously, date was auto-cast to timestamptz when time_bucket was
called. This led to weird behavior with regard to timezones, and
the return value was a timestamptz. This PR makes time_bucket return
a DATE on DATE input and avoids all timezone conversions.
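For example (the values are ours):

    SELECT pg_typeof(time_bucket(INTERVAL '7 days', DATE '2017-11-05'));
    -- date, with no timezone conversion involved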
Clean up the table schema to get rid of legacy tables and functionality
that makes it more difficult to provide an upgrade path.
Notable changes:
* Get rid of legacy tables and code
* Simplify directory structure for SQL code
* Simplify table hierarchy: remove root table and make chunk tables
inherit directly from main table
* Change chunk table suffix from _data to _chunk
* Simplify schema usage: _timescaledb_internal for internal functions,
_timescaledb_catalog for metadata tables
* Remove postgres_fdw dependency
* Improve code comments in SQL code
This PR disables query optimizations on regular tables by default.
The option timescaledb.optimize_plain_tables = 'on' enables them
again. timescaledb.disable_optimizations = 'on' disables all
optimizations (note the change from 'true' to 'on').
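For example:

    -- Re-enable optimizations on plain tables for the current session:
    SET timescaledb.optimize_plain_tables = 'on';

    -- Disable all TimescaleDB optimizations:
    SET timescaledb.disable_optimizations = 'on';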
This adds the .dir-locals.el Emacs style settings file from the
PostgreSQL source. This will make it easier for Emacs users to
conform to the official PostgreSQL coding style.
This patch refactors the source code so that a bunch of unrelated code
for the planner, process utilities and transaction management, which
was previously located in the common file timescaledb.c, is now broken
up into separate source files.
Currently, the internal metadata tables for hypertables track time
as a BIGINT integer. Converting hypertable time columns in TIMESTAMP
format to this internal representation requires using Postgres' conversion
functions that are imprecise due to floating-point arithmetic. This patch
adds C-based conversion functions that offer the following conversions
using accurate integer arithmetic:
- TIMESTAMP to UNIX epoch BIGINT in microseconds
- UNIX epoch BIGINT in microseconds to TIMESTAMP
- TIMESTAMP to Postgres epoch BIGINT in microseconds
- Postgres epoch BIGINT in microseconds to TIMESTAMP
The downside of the UNIX epoch functions is that they don't offer the full
date range offered by the Postgres to_timestamp() function. This is
because the required epoch shift might otherwise overflow a BIGINT.
All functions should, however, perform appropriate range checks and will
throw errors outside the supported range.
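A hedged illustration; the catalog function names below are assumptions,
as only the conversions themselves are described by this commit:

    -- The UNIX and Postgres epochs differ by 946684800000000 microseconds
    -- (2000-01-01 minus 1970-01-01), so conceptually:
    --   unix_microseconds = postgres_microseconds + 946684800000000

    -- Hypothetical function names:
    SELECT _timescaledb_internal.to_unix_microseconds('2017-01-01 00:00:00+00');
    SELECT _timescaledb_internal.to_timestamp(1483228800000000);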