Discovered a crash when doing an EXPLAIN ANALYZE on a CTE containing an INSERT that
did not reference the CTE in the SELECT statement. This is fixed by copying the eref
from the parent hypertable to the chunk so that columns can be described. The eref is
only copied from the hypertable if a range table entry is available.
Also modify the COPY path so it actually sets a dummy value as the RTE index rather than 1,
which is a valid RTE index, even if there is no RTE for the hypertable.
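A minimal statement of the kind that previously crashed (a sketch; the hypertable name,
columns, and values are illustrative):

    EXPLAIN (ANALYZE)
    WITH inserted AS (
        -- data-modifying CTE targeting a hypertable
        INSERT INTO conditions VALUES (now(), 1, 23.4) RETURNING *
    )
    -- the CTE is never referenced by the outer SELECT
    SELECT 1;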
Previously, the chunk dispatch node used arbiter index and other
information from the top-level Query node. This did not work with CTEs
(and probably subqueries) because the info for this insert was not at
the top-level. This commit changes the logic to use that same info from the
parent ModifyTable node (and ModifyTableState).
This commit also changes the logic to translate the arbiter indexes
from the hypertable index to the chunk index directly using our catalog
tables instead of re-inferring the index on the chunk.
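For example, an upsert inside a CTE, where the arbiter index is now taken from the
parent ModifyTable node and translated from the hypertable index to the matching chunk
index (table and columns are illustrative):

    WITH upserted AS (
        INSERT INTO conditions (time, device_id, temp)
        VALUES (now(), 1, 23.4)
        -- the arbiter index is inferred on the hypertable and must be
        -- mapped to the corresponding index on the target chunk
        ON CONFLICT (time, device_id) DO UPDATE SET temp = EXCLUDED.temp
        RETURNING *
    )
    SELECT * FROM upserted;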
Previously, there was a case involving cache invalidation in which
the extension could be loaded twice. Namely, the following happened:
1) extension_check (outer) was called and found loaded = false.
2) While the extension_check was checking the state of the extension
a cache invalidation callback happened.
3) The cache invalidation callback called extension_check (inner).
4) The inner extension_check loads the extension and sets
loaded = true.
5) The outer extension_check also loads the extension after the
cache invalidation callback returns. This happens because it
never re-checks the value of loaded.
This commit adds another check of loaded just before it is changed
to prevent this situation.
The explain output previously relied on the fact that the
estate->rtable was indexed the same way as the simple_rte_array
during planning. This is not always true when using CTEs. This
commit instead passes the oid of the hypertable to the EXPLAIN
directly.
Previously, only the loader would complain if it was dynamically
loaded after preload. This PR adds a FATAL error message if the
versioned .so is not preloaded AND the loader has not been previously
loaded. This is a common case if the user forgets to edit
shared_preload_libraries to add the loader.
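The usual remedy for the new FATAL error is to preload the loader, for example
(takes effect on the next server restart):

    ALTER SYSTEM SET shared_preload_libraries = 'timescaledb';
    -- or, equivalently, in postgresql.conf:
    -- shared_preload_libraries = 'timescaledb'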
This PR also fixes the error in cases when it's encountered by a
non-superuser.
Test the ABI compatibility of PostgreSQL across minor version updates
with respect to TimescaleDB. Namely, compile the extension on an early minor version
of PostgreSQL and test it on the latest snapshot of the major branch.
Previously, any subtransaction abort released all cache pins. This is
wrong because it can release pins that are still in use by
higher-level subtransactions and the top-level transaction. Now, we release only
the pins belonging to the aborted subtransaction, which means that we
now track the subtransaction a pin was created in.
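A sketch of the kind of nesting where releasing all pins on a subtransaction abort
was incorrect (table and values are illustrative):

    BEGIN;
    INSERT INTO conditions VALUES (now(), 1, 23.4);  -- takes cache pins in the top-level txn
    SAVEPOINT s1;                                    -- starts a subtransaction
    INSERT INTO conditions VALUES (now(), 2, 19.0);  -- takes pins in the subtransaction
    ROLLBACK TO SAVEPOINT s1;  -- must release only the pins created in s1
    INSERT INTO conditions VALUES (now(), 3, 21.7);  -- still relies on the outer pins
    COMMIT;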
This PR moves table, schema, and trigger drop handling into the event
trigger system. The event trigger system is a more reliable method of
intercepting object drops especially as they can CASCADE via other
object drops.
This PR also adds a test for DROP OWNED which was previously broken.
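A drop that cascades through the event trigger system, along the lines of the new test
(role name is illustrative):

    CREATE ROLE iot_user;
    -- ... iot_user creates and owns hypertables ...
    DROP OWNED BY iot_user CASCADE;  -- now intercepted via event triggers,
                                     -- cleaning up hypertable metadata as well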
This commit adds a secondary FK to the update test. This
FK points to the same metadata table as the previous FK.
This case is added because it has caused problems in the
past.
This fixes an issue with the `if_not_exists` option to add_dimension()
that caused the command to fail on a non-empty table. When this option
is set, the command should only fail if the dimension does not exist
and the table is non-empty.
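With the fix, repeating the call on a non-empty table succeeds when the dimension
already exists (a sketch; table and column names are illustrative, and argument names
other than if_not_exists are assumptions):

    SELECT add_dimension('conditions', 'device_id', number_partitions => 4);
    -- second call is now a no-op instead of an error, even if the table has data
    SELECT add_dimension('conditions', 'device_id', number_partitions => 4,
                         if_not_exists => true);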
Prior to PostgreSQL 10.3, the output for a trigger function did
not include the implied 'public' schema in its output when
outputting the function. In 10.3 this was changed causing the
regression test to fail. To make this more robust, the table is
now put into an explicit schema so the test does not rely on
the inconsistent behavior across versions as to printing out
'public'.
A fix for updates of version 0.6.1 was lost in the previous PR that
refactored the update process. This change adds back that fix.
The build process for update scripts now also supports versioned
"origin" modfiles, which are included only in the update scripts that
originate in the particular version given by the origin modfile's
name. Origin modfiles make it possible to add fixes that should be
included only when upgrading from that particular version.
Upgrades from non-tagged commits are not supported, so the instructions
for building from source should emphasize using a tagged release for
those who want to use TimescaleDB rather than develop it.
Update scripts have so far been built by concatenating all the
historical changes and code from previous versions, leading to bloated
update scripts, complicated script build process, and the need to keep
old C-functions in compat.c since those functions are referenced
during updates.
This change greatly simplifies the way update scripts are built,
significantly reducing the size of update scripts (to basically only
the changeset + current code), and removing the need for compat.c.
A few principles of building scripts must be followed going forward,
as discussed in sql/updates/README.md.
This removes the version suffix from SQL files that are copied from
the source directory to the build directory during the build process.
Versioning the files in this step serves no real purpose and only
tends to clutter up the build dir with extra files every time the
version is bumped, requiring manual cleanup.
The cache invalidation triggers on our catalog tables
aren't used anymore as all modifications to catalog tables
happen using the C API, which doesn't invoke triggers and
has its own cache invalidation functionality.
Previously we just checked that we could upgrade from 0.1.0 until
latest but this missed some issues upgrading from intermediate
versions to latest. Now we check that all released versions can
successfully update.
For multi-version upgrades it is necessary to change the location of
trigger functions before doing anything else in upgrade scripts.
Otherwise, it is possible to trigger an event before you change the
location of the functions, which would load the old shared library
and break the system.
This commit also fixes `/sql/timescaledb--0.8.0--0.9.0.sql` to
come from the release build.
Previously, when the FmgrInfo was initialized for partitioning
functions, the type information was cached in the fn_extra
field. However, the contract for fn_extra is that it is to be set by
the called function and not prior to its invocation. Setting fn_extra
prior to function invocation is an issue for custom partitioning
functions implemented in SQL, as the SQL call manager expects to use
fn_extra for its own cache.
This change avoids setting fn_extra prior to function invocation, and
instead information is cached in fn_extra within our provided
partitioning functions.
The native partitioning functions now also support nested functions,
i.e., the input to the function is the output of another function.
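A sketch of a custom partitioning function implemented in SQL, the case affected by
the fn_extra contract, which also nests the built-in hash function (the function name
is illustrative, and exposing the built-in hash as
_timescaledb_internal.get_partition_hash is an assumption):

    CREATE FUNCTION custom_partition_hash(val anyelement) RETURNS integer AS
    $$
        -- delegates to the built-in hash; nested function calls are now supported
        SELECT _timescaledb_internal.get_partition_hash(val);
    $$ LANGUAGE SQL IMMUTABLE;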
Tables can now hold existing data, which is optionally migrated from
the main table to chunks when create_hypertable() is called.
The data migration is similar to the COPY path, with the single
difference that the inserted/copied tuples come from an existing table
instead of being read from a file. After the data has been migrated,
the main table is truncated.
One potential downside of this approach is that all of this happens in
a single transaction, which means that the table is blocked while
migration is ongoing, preventing inserts by other transactions.
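A minimal sketch, assuming a pre-existing, populated table named conditions and
assuming the option is exposed as a migrate_data parameter:

    -- migrates existing rows from the main table into chunks, then truncates
    -- the main table; everything happens in a single transaction
    SELECT create_hypertable('conditions', 'time', migrate_data => true);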
Sometimes pg_config reports versions like `PostgreSQL 10.2 (Ubuntu
10.2-1.pgdg16.04+1)`. Thus the regex is adjusted to allow for suffixes
after the postgres version number.
This fixes at least two bugs:
1) A drop of a referenced table used to drop the associated
FK constraint but not the metadata associated with the constraint.
Fixes #43.
2) A drop of a column removed any indexes associated with the column
but not the metadata associated with the index.
This change improves the handling of tablespaces as follows:
- Add if_not_attached / if_attached options to attach_tablespace() and
detach_tablespace(), respectively
- Block DROP tablespace if it is still attached to a table
- Block REVOKE if it means the table owner no longer has CREATE
permissions on an attached tablespace
- Make error messages follow the PostgreSQL style guide
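Examples of the new options (a sketch; tablespace and table names are illustrative,
and the argument order is an assumption):

    SELECT attach_tablespace('disk1', 'conditions', if_not_attached => true);
    SELECT detach_tablespace('disk1', 'conditions', if_attached => true);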
Dropping a schema that a hypertable depends on should clean up
dependent metadata. There are two schemas that matter for hypertables:
the hypertable's schema and the associated schema where chunks are
stored.
This change deals with the above as follows:
- If the hypertable schema is dropped, the hypertable and all chunks
should be deleted as well, including metadata.
- If an associated schema is dropped, the hypertables that use that
associated schema will have their associated schemas reset to the
internal schema.
- Even if no hypertable currently uses the dropped schema as its
associated schema, there might be chunks that reside in the dropped
schema (e.g., if the associated schema was changed for their
hypertables), so those chunks should have the metadata deleted.
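For example (schema names are illustrative):

    -- dropping the hypertable's own schema removes the hypertable,
    -- its chunks, and the corresponding metadata
    DROP SCHEMA iot CASCADE;

    -- dropping an associated schema resets affected hypertables to the
    -- internal associated schema and deletes metadata for chunks in it
    DROP SCHEMA chunk_storage CASCADE;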
This test adds a case where we use a trigger to automatically populate
a metadata table. Such uses are common in IOT where, for example,
you want to keep metadata associated with devices and you want
new devices to be auto-created.
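The test follows a pattern along these lines (the schema is illustrative, not taken
from the test itself):

    CREATE TABLE devices(device_id int PRIMARY KEY, added timestamptz DEFAULT now());

    CREATE OR REPLACE FUNCTION record_device() RETURNS trigger AS
    $$
    BEGIN
        -- auto-create metadata for previously unseen devices
        INSERT INTO devices(device_id) VALUES (NEW.device_id)
        ON CONFLICT (device_id) DO NOTHING;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER add_device BEFORE INSERT ON conditions
        FOR EACH ROW EXECUTE PROCEDURE record_device();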
The compiler does not seem to like it when I use the MSVC enter/exit
guards for utils/guc.h, so the alternative is to grab the value
via GetConfigOptionByName.
This change refactors the handling of TRUNCATE so
that it is performed directly in process utility without
doing an upcall to PL/pgSQL.
It also adds handling for the ONLY modifier to TRUNCATE,
which shouldn't work on a hypertable. TRUNCATE now generates
an error if TRUNCATE ONLY is used on a hypertable.
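For example (the table name is illustrative):

    TRUNCATE ONLY conditions;  -- now raises an error on a hypertable
    TRUNCATE conditions;       -- truncates the hypertable and all of its chunks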
Previously stdint.h was not included on Windows so INT16_MAX and
friends were not defined. Additionally, having tablespace_attach
with PG_FUNCTION_ARGS in the header file caused issues during
linking, so a direct call version of the function is now exported
for others to use instead of the PG_FUNCTION_ARGS version.
Two minor warnings regarding not having a return in all cases are
also addressed.
When chunks are deleted, dimension slices can be orphaned, i.e., there
are no chunks or chunk constraints that reference such slices. This
change ensures that, when chunks are deleted, orphaned slices are also
deleted.
Deletes on metadata in the TimescaleDB catalog have so far been a mix
of native deletes using the C-based catalog API and SQL-based DELETE
statements that CASCADEs.
This mixed environment is confusing, and SQL-based DELETEs do not
consistently clean up objects that are related to the deleted
metadata.
This change moves towards a C-based API for deletes that consistently
also deletes the dependent objects (such as indexes, tables and
constraints). Ideally, we should prohibit direct manipulation of
catalog tables using SQL statements to avoid ending up in a bad state.
Once all catalog manipulations happen via the native API, we can also
remove the cache invalidation triggers on the catalog tables.
This is a continuation of prior efforts to refactor API functions in C
to:
- improve usage of proper error codes
- use error messages that better conform to the PostgreSQL standard
- improve security by avoiding running lots of code under SECURITY DEFINER
- move towards doing all metadata updates using a consistent catalog API
Most importantly, `create_hypertable()` has been refactored in C,
which simplifies a lot of code that previously required
upcalls/downcalls between C code and plpgsql code, or duplicated
functionality between the two environments.
Chunk creation needs to be serialized in order
to avoid having multiple processes trying to
create the same chunk and causing conflicts.
This serialization didn't work as expected, because
a lock on the chunk catalog table that implemented
this serialization was prematurely released.
This change fixes that issue and also changes the
serialization to happen around a lock on the
chunk's parent table (the main table) instead. This
change should allow multiple processes to simultaneously
create chunks for different hypertables.
The functions for adding and updating dimensions have been refactored
in C to:
- improve usage of proper error codes
- use error messages that better conform to the PostgreSQL standard
- improve security by avoiding running lots of code under SECURITY DEFINER
A new if_not_exists option has also been added to add_dimension(), and
the number of partitions can now be set using the new
set_number_partitions() function.
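For example (a sketch; table and dimension names are illustrative, and the optional
dimension-name argument is an assumption):

    -- change the number of hash partitions for the space dimension
    SELECT set_number_partitions('conditions', 8);
    -- or name the dimension explicitly if there is more than one space dimension
    SELECT set_number_partitions('conditions', 8, 'device_id');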
A bug in the validation of smallint time intervals has been fixed. The
previous code didn't check that intervals were > 0, and smallint intervals
accepted values up to UINT16_MAX instead of INT16_MAX.
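For example, on a table whose time column is a smallint, the interval is now validated
against the signed range (a sketch; table and column names are illustrative):

    -- rejected: exceeds INT16_MAX (32767)
    SELECT create_hypertable('events', 'tick', chunk_time_interval => 40000);
    -- rejected: intervals must be greater than zero
    SELECT create_hypertable('events', 'tick', chunk_time_interval => 0);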