There is no CREATE OR REPLACE AGGREGATE, which means the only way to replace
an aggregate is to DROP and then CREATE it. This is problematic because the
DROP will fail if the previous version of the aggregate has dependencies.
This commit makes sure aggregates are not dropped and recreated every time.
NOTE that when creating new functions in sql/aggregates.sql you should also
make sure they are created in an update script, so that both new users and
people updating from a previous version get the new function.
sql/aggregates.sql is run only once, when TimescaleDB is first installed, and
is not rerun during updates; that is why everything created there should also
be in an update file.
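
As a sketch of this pattern (the aggregate name is hypothetical, not one of
ours), an update script can guard creation so it is idempotent:

```sql
-- Hypothetical example: create the aggregate only if it does not already
-- exist, so the update script never needs to DROP and recreate it.
-- (pg_proc.proisagg exists through PG 10, the versions targeted here.)
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM pg_proc WHERE proname = 'example_sum' AND proisagg
    ) THEN
        CREATE AGGREGATE example_sum(int4) (
            SFUNC = int4pl,   -- built-in int4 addition as the transition function
            STYPE = int4,
            INITCOND = '0'
        );
    END IF;
END
$$;
```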
Fixes #612
Previously, if a hypertable dimension type did not have a default
hash function, create_hypertable would throw an error.
However, this should not be the case if a custom partitioning
function is provided.
This commit addresses the issue by making sure that arbitrary
custom types can be used as partitioning dimensions, as long
as a valid partitioning function is provided.
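
For illustration, a minimal sketch with hypothetical table, column, and
function names:

```sql
-- A custom partitioning function for an arbitrary type: it must be
-- IMMUTABLE and return an integer hash of its input.
CREATE FUNCTION device_id_hash(val anyelement) RETURNS integer
    AS $$ SELECT hashtext(val::text) $$
    LANGUAGE SQL IMMUTABLE;

SELECT create_hypertable('readings', 'time',
    partitioning_column => 'device',
    number_partitions   => 4,
    partitioning_func   => 'device_id_hash');
```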
Fixes #470.
This refactor does three things:
1) Upgrades the lock taken to AccessExclusive. This is
to prevent upgrading locks during data migration.
2) Explicitly releases the lock in the IF NOT EXISTS case.
This is more in line with what PG itself does. Also,
optimizes the easy IF NOT EXISTS case.
3) Exposes a rel inside create_hypertable itself
so that checks can use one rel instead of opening and closing
a bunch of them.
If the column passed to add_dimension is not part of every
hypertable index that backs a UNIQUE, PRIMARY KEY,
or EXCLUSION constraint, then the add_dimension call
should fail.
This commit enforces the above.
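
A sketch of the now-rejected case, with illustrative names:

```sql
-- "device" is not part of the UNIQUE index backing the constraint on
-- "time", so adding it as a dimension must fail rather than silently
-- weaken the uniqueness guarantee.
CREATE TABLE measurements (
    time   timestamptz NOT NULL UNIQUE,
    device integer,
    value  double precision
);
SELECT create_hypertable('measurements', 'time');
SELECT add_dimension('measurements', 'device', number_partitions => 4);  -- error
```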
This removes a lot of duplicated code across the two versions of
update tests (one with testing of constraints vs one without) and
potentially allows for further additions more easily. Also, this
splits Travis testing of updates into two jobs so they can run
concurrently and reduce the amount of turnaround time for PRs.
Finally, this adds versions released since 0.8.0 that were not previously
being tested.
This fixes the show_indexes test support function to properly show the
columns of the indexes instead of the table. The function now also
shows the expressions of expression indexes.
This adds a simple check to enforce that partitioning functions
are IMMUTABLE. This is a requirement since a partitioning function
must always return the same value given the same input.
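
For instance, a function like the following hypothetical one must now be
rejected as a partitioning function, since it is declared VOLATILE:

```sql
-- Not IMMUTABLE: returns different values for the same input, so rows
-- could not be routed to partitions consistently.
CREATE FUNCTION bad_partition_func(val anyelement) RETURNS integer
    AS $$ SELECT (random() * 2147483647)::integer $$
    LANGUAGE SQL VOLATILE;
```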
Constraints with the NO INHERIT option do not make sense on a
hypertable's root table, since they will not be enforced on chunks.
Previously, NO INHERIT constraints were only blocked on chunks, and were
thus not caught until chunk-creation time: one could create a NO INHERIT
constraint on an empty hypertable, only to have chunk creation fail
later. NO INHERIT constraints are now properly blocked at the
hypertable level.
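
An illustrative example of what is now blocked up front (table and
constraint names hypothetical):

```sql
-- Previously this succeeded on an empty hypertable and only failed once
-- the first chunk was created; it now fails immediately.
ALTER TABLE conditions
    ADD CONSTRAINT positive_temp CHECK (temperature > 0) NO INHERIT;
```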
Add support for:
* ALTER TABLE ... CLUSTER ON
* ALTER TABLE ... SET WITHOUT CLUSTER
on both hypertables and chunks. Commands on hypertables get
passed down to chunks.
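
For example (table and index names illustrative):

```sql
ALTER TABLE conditions CLUSTER ON conditions_time_idx;  -- passed down to chunks
ALTER TABLE conditions SET WITHOUT CLUSTER;             -- likewise passed down
```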
This PR adds better handling for the following commands:
* ALTER TABLE ... ALTER COLUMN ... SET (attribute_name = value)
* ALTER TABLE ... ALTER COLUMN ... RESET (attribute_name)
* ALTER TABLE ... ALTER COLUMN ... SET STATISTICS
* ALTER TABLE ... ALTER COLUMN ... SET STORAGE
For each of the above commands, the associated settings are now properly
propagated to existing chunks, and new chunks are created with the
same settings as the hypertable.
We also now allow these commands to be run on chunks.
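
For example, with illustrative table and column names:

```sql
ALTER TABLE conditions ALTER COLUMN temperature SET (n_distinct = 100);
ALTER TABLE conditions ALTER COLUMN temperature RESET (n_distinct);
ALTER TABLE conditions ALTER COLUMN temperature SET STATISTICS 1000;
ALTER TABLE conditions ALTER COLUMN notes SET STORAGE EXTERNAL;
```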
Previously, when running VACUUM ANALYZE or ANALYZE on
a hypertable, the statistics on the parent table (the hypertable
itself) were not correctly updated. This fixes that problem
and adds tests.
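
For example (table name illustrative):

```sql
-- Now updates the statistics on the hypertable itself, not just its chunks.
VACUUM ANALYZE conditions;
```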
Old versions used a GUC to determine whether the loader was present.
This PR sets that GUC in the loader so that those versions once
again work with the new loader.
Neither setup-db.sh nor sql/setup_sample_hypertable.psql appears
to be used anywhere in our repo or mentioned in the docs. We have
code snippets and guides that replace them, so they are no longer
needed.
This PR adds a load-time check to the versioned extension for the
expected Postgres versions. This better handles the case where the
extension is distributed as a binary and compiled on a different
Postgres version than the one it is running on.
We also change the versioning to require either:
* PG >= 9.6.3, to avoid issues with missing functions in previous versions, or
* PG >= 10.2, to avoid issues with ABI incompatibility in PG 10.0 and 10.1.
A bug in the SQL for getting the size of chunks would use the
TOAST size of the main/dummy table as the TOAST size for every
chunk, rather than each chunk's own TOAST size.
This PR fixes all the formatting to be in line with the latest version of
pgindent. Since pgindent does not like variables named `type`, those
have been appropriately renamed.
Cache pins are allocated on the CacheMemoryContext in order to survive
subtransactions and, optionally, main transactions (in the case of, e.g.,
clustering or vacuuming). However, cache pins that are released also
need to free their memory on the CacheMemoryContext in order to avoid
leaking memory in a session.
This change ensures cache pin memory is freed when a cache pin is
released. It also allocates cache pins on a separate memory context,
which is a child of the CacheMemoryContext. This separate memory
context makes it easier to track the memory used by cache pins and
also release it when necessary.
Previously, chunks could be renamed and have their schema changed, but
the metadata in the TimescaleDB catalog was not updated in a
corresponding way. Further, renaming a chunk column was possible,
which could break functionality on the hypertable.
The catalog metadata is now properly updated on name and schema
changes applied to chunks. Renaming chunk columns has been blocked
with an informational error message.
This fixes a static analyzer warning about using an uninitialized
pointer. The analyzer doesn't realize that elog() will raise an
exception, so the NULL check on the uninitialized pointer can never be
reached. This change clarifies the code and silences the static
analyzer.
Previously, cache lookups were run on the cache's memory
context. While simple, this risked allocating transient (work) data on
that memory context, e.g., when scanning for new cache data during
cache misses.
This change makes scan functions take a memory context on which the
found data should be allocated. All other data is allocated on the
current memory context (typically the transaction's). With this
functionality, a cache can pass its own memory context to the scan, thus
avoiding unnecessary allocations on the cache's context.
Previously, upsert (ON CONFLICT) clauses did not work well on tables
where the hypertable's attribute numbers did not match the chunks'
attribute numbers. This is common if the hypertable has dropped columns
or other ALTER commands were run on the hypertable before
chunks were created.
This PR fixes the projection of the RETURNING clause as well
as the update clauses. It also fixes the WHERE clause for ON CONFLICT
DO UPDATE. These fixes are mostly about mapping attribute numbers
from hypertable attnos to chunk attnos. Some slot tupleDescs also
needed to be changed to the chunk's tupleDesc.
Note that because of the limitations in PG 9.6 we had to copy over
some expressions from the ModifyTable plan node inside the chunk
dispatch. These original expressions are irrecoverable from the
ModifyTableState node or the ProjectionInfo structs in 9.6.
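
An upsert of the kind this fixes might look as follows (schema
hypothetical):

```sql
-- On a hypertable with dropped columns, the RETURNING projection, the
-- DO UPDATE SET targets, and the ON CONFLICT WHERE clause all need their
-- attribute numbers remapped from hypertable attnos to chunk attnos.
INSERT INTO conditions (time, device, temperature)
VALUES (now(), 1, 21.5)
ON CONFLICT (time, device) DO UPDATE
    SET temperature = EXCLUDED.temperature
    WHERE conditions.temperature IS DISTINCT FROM EXCLUDED.temperature
RETURNING *;
```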
This optimization adds a HashAggregate plan to many group by queries.
In plain Postgres, many time-series queries will not use the hash
aggregate because the planner will incorrectly assume that the number of
rows is much larger than it actually is and will use the less efficient
GroupAggregate instead of a HashAggregate to prevent running out of
memory.
The planner assumes a large number of rows because the grouping
statistics assume that the number of distinct items produced
by a function is the same as the number of distinct items going in. This
is not true for functions like time_bucket and date_trunc. This
optimization fixes the statistics and adds the HashAggregate plan if
appropriate.
The statistics now rely on evaluating the spread of a variable and
dividing it by the interval in the time_bucket or date_trunc. This is
still an overestimate of the total number of groups but is better than
before. A further improvement on this will be to evaluate the quals
(WHERE clauses) on the query to try to derive a tighter spread on the
variable. This is left to a future optimization.
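
For example, a typical query that can now get a HashAggregate (schema
illustrative):

```sql
-- The number of groups is now estimated as the spread of "time"
-- divided by the bucket width, rather than the number of distinct
-- input timestamps.
EXPLAIN
SELECT time_bucket('5 minutes', time) AS bucket, avg(temperature)
FROM conditions
GROUP BY bucket;
```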
Windows and FreeBSD seem to need include/ to compile correctly,
but on Ubuntu systems the header files in include/server/ and
include/ frequently get out of sync, causing compile errors. This
change therefore only keeps that directory included for Windows
and FreeBSD.
We hit a bug in 9.6.5 that was fixed in 9.6.6 by commit 77cd0dc.
Also changed the extension-is-transitioning check to not palloc
anything. This is more efficient and probably has slightly
fewer side effects on bugs like this.
This planner optimization reduces planning times when a hypertable has many chunks.
It does this by expanding hypertable chunks manually, eliding the `expand_inherited_tables`
logic used by PG.
Slow planning times were previously seen because `expand_inherited_tables` expands all chunks of
a hypertable, without regard to the constraints present in the query. Then, `get_relation_info` is
called on all chunks before constraint exclusion. Getting the statistics on many chunks ends
up being expensive because RelationGetNumberOfBlocks has to open the file for each relation.
This gets even worse under high concurrency.
This logic solves the problem by expanding only the chunks needed to fulfil the query instead of all chunks.
In effect, it moves chunk exclusion up in the planning process. Note that we don't actually use constraint
exclusion here, but rather a variant of range exclusion implemented
by HypertableRestrictInfo.
Getting an approximate row count for a hypertable involves getting
estimates for all of its chunks rather than just looking up a
single value in the catalog tables. This PR provides a convenience
function for doing the JOINs/summing.
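
A sketch of the intended usage (hypertable name illustrative):

```sql
-- Sums the per-chunk row estimates from the catalog/statistics tables
-- instead of scanning the hypertable.
SELECT hypertable_approximate_row_count('conditions'::regclass);
```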
Macros that provide type assertions, like castNode() and lfirst_node(),
were introduced in PG 9.6.3 and cannot be used if we want to support
the entire 9.6 series of releases. This change fixes usages of such
macros that were introduced as part of the 0.9.2 release of
TimescaleDB.