This function drops a chunk on a specified data node. It then removes
the metadata about the association between the data node and the chunk
on the access node.
This function is meant for internal use as part of the "move chunk"
functionality.
If only one chunk replica remains, this function refuses to drop it
to avoid data loss.
Creates a table for a chunk replica on the given data node. The table
gets the same schema and name as the chunk. The created chunk replica
table is not added to the metadata on the access node or data node.
The primary goal is to use it during chunk copy/move.
When an empty chunk table is created, it is not associated with its
hypertable in metadata; however, it was still inheriting from the
hypertable. This commit removes the inheritance, so the chunk table is
completely standalone and cannot affect queries on the hypertable.
In the future, this fix of dropping the inheritance can be replaced by
chunk table creation that does not set up inheritance in the first
place. Since that is a larger amount of work, it was not done now.
Adds an internal API function to create an empty chunk table according
to the given hypertable for the given chunk table name and dimension
slices. This function creates a chunk table inheriting from the
hypertable, so it guarantees the same schema. No TimescaleDB metadata
is updated.
To be able to create the chunk table in a tablespace attached to the
hypertable, this commit allows calculating the tablespace id without
the dimension slice existing in the catalog.
If there is already a chunk that collides on dimension slices, the
function fails to create the chunk table.
The function will be used internally in multi-node to be able to
replicate a chunk from one data node to another.
Since pg_prepared_xacts is shared between databases, the healing
function tried to resolve prepared transactions created by other
distributed databases.
This change makes the healing function work only with the current
database.
Fixes #3433
The timescaledb extension checking code uses syscache lookups to
determine whether the proxy table exists. Doing syscache lookups
inside the invalidation callback context was corrupting the syscache.
For this reason, we remove this callback.
When an insert into a compressed chunk is blocked by a
concurrent recompress_chunk, the latter process could move
the storage for the compressed chunk. Verify the validity of
the compressed chunk before proceeding to acquire locks.
Fixes #3400
If the names of the entries in the targetlist for the direct and
partial views of the continuous aggregate do not match the attribute
names in the tuple descriptor for the result tuple of the user view,
an error will be generated. This commit fixes this by setting the
targetlist resource names of the columns in the direct and partial
views to the corresponding attribute names of the user view relation
tuple descriptor.
Fixes #3051
Fixes #3405
We have added additional functionality in timescaledb extension to
use in tap tests. Install these perl files in the PG installation
directory so that external modules (the current "forge_ext" extension,
for example) can use this functionality without requiring an
additional dependency on the timescaledb extension source code. Note
that these perl files should be installed only if PG installation has
the relevant "${PG_PKGLIBDIR}/pgxs/src/test/perl" subdirectory.
Also rejig the configuration directory variable that's used for the
tap tests so that external modules can specify their own locations by
using their own values for it (the current variable was tightly tied to
timescaledb source code location).
As a future replacement for time_bucket(), time_bucket_ng()
should support seconds, minutes, and hours. This patch adds
this support. The implementation is the same as for
time_bucket(). Timezones are not yet supported.
In nested function invocations, hypertable expansion would not work
correctly: a hypertable would be expanded not by TimescaleDB code but
by PostgreSQL table inheritance, leading to fdw_private not being
properly initialized.
Fixes #3391
Add 2.3.1 to the update tests and update the downgrade target for the
downgrade version. This commit also fixes two issues that were fixed in
the release branch:
1. Drop `_timescaledb_internal.refresh_continuous_aggregate`
2. Update the changelog to match the release branch.
The regex expected a trailing version suffix like `-rc1` or `-dev`,
but only the non-dash part was optional. This corrects the regex to
ensure that the entire trailing part, including the dash, is optional.
Rework the debug waitpoint functionality to produce an error
when the waitpoint is enabled.
This update introduces a controlled way to simulate error
scenarios during testing.
The current implementation of caggs can't find a bucketing function
if it is declared in the experimental schema. This patch fixes that.
The patch also adds the `debug_notice` test to the IGNORE list on
AppVeyor. The corresponding test generates an extra "DEBUG: rehashing
catalog cache" message, which is not critical. It seems to be stable
on Linux.
**Bugfixes**
* #3279 Add some more randomness to chunk assignment
* #3288 Fix failed update with parallel workers
* #3300 Improve trigger handling on distributed hypertables
* #3304 Remove paths that reference parent relids for compressed chunks
* #3305 Fix pull_varnos miscomputation of relids set
* #3310 Generate downgrade script
* #3314 Fix heap buffer overflow in hypertable expansion
* #3317 Fix heap buffer overflow in remote connection cache
* #3327 Make aggregates in caggs fully qualified
* #3336 Fix pg_init_privs objsubid handling
* #3345 Fix SkipScan distinct column identification
* #3355 Fix heap buffer overflow when renaming compressed hypertable columns
* #3367 Improve DecompressChunk qual pushdown
* #3377 Fix bad use of repalloc
**Thanks**
* @db-adrian for reporting an issue when accessing cagg view through postgres_fdw
* @fncaldas and @pgwhalen for reporting an issue accessing caggs when public is not in search_path
* @fvannee, @mglonnro and @ebreijo for reporting an issue with the upgrade script
* @fvannee for reporting a performance regression with SkipScan
During an update, it is not possible to run the downgrade scripts until
the release has been tagged, but the update scripts can be run. This
means that we need to split the previous version into two different
fields: one for running the update tests and one for running the
downgrade tests.
In contrast to `realloc`, the PostgreSQL function `repalloc` does not
accept a NULL pointer and fall back to `palloc`. For that reason it is
necessary to check whether the pointer is NULL and call either
`palloc` or `repalloc` accordingly.
From PG12 on, CREATE OR REPLACE is supported for aggregates;
therefore, since we have dropped support for PG11, we can avoid
going through the rigamarole of having our aggregates in a separate
file from the functions we define to support them. Nor do we need to
handle aggregates separately from other functions as their creation
is now idempotent.
If `clang-format` is not found, `CLANG_FORMAT` will be set to a false
value, which will prevent the `format` target from being defined if
`cmake-format` is not installed.
This commit fixes that by always creating the `format` target, since
a `clang-format` target is always created even if `CLANG_FORMAT` is
false.
We do not redefine `CLANG_FORMAT` to a non-false value since it is
expected to contain a path to an executable; if set to a true value
that is not a path, it could be used in a way that leads to strange
errors.
Allow pushdown of RelabelType expressions into the scan below the
DecompressChunk node. When varchar columns were used as segmentby
columns, constraints on those columns would not be pushed down
because PostgreSQL would inject RelabelType nodes that were not
accepted as valid expressions for pushdown, leading to bad performance
because the filter could only be applied after decompression.
Fixes #3351
This function is IMMUTABLE when it doesn't accept timestamptz arguments,
and STABLE otherwise. See the comments in sql/time_bucket_ng.sql for
more details.
PG14 changes xact.h to no longer include fmgr.h which is needed
for PG_FUNCTION_ARGS definition. This patch includes fmgr.h
explicitly to no longer rely on the indirect include.
https://github.com/postgres/postgres/commit/3941eb6341
This patch adds a new EXPERIMENTAL flag to cmake allowing skipping
the check for a compatible postgres version. It also adds macros
needed for PG14 support.
This commit adds functions and code to generate a downgrade script from
the current version to the previous version. This requires execution
from a Git repository since it retrieves the prolog and epilog for the
"downgrade" file from the version given by `update_from_version` in the
`version.config` file.
The commit adds several CMake functions that simplify the composition
of script files, but these are not used to generate the update scripts.
A potential improvement is to use the scripts to also generate the
update scripts.
This commit supports generating a downgrade script from the
current version to the previous version. Other versions are handled
using a variable containing the file names of reverse update
scripts; the source and target versions are extracted from the file
names, which are assumed to be of the form
`<source-version>--<target-version>.sql`.
In addition to adding support for generating downgrade scripts, the
commit adds a downgrade test file that tests a release in a similar way
to the update script and adds it as a workflow.
Remove the chunk name completely from the output, as the name might
have a different length, leading to different output since table
headers are adjusted to the length of the field values.
The SkipScan code assumed the first entry in PathKeys would match
the distinct column. This is not always true, as PostgreSQL will
remove entries from PathKeys that it considers constant, leading to
SkipScan operating on the wrong column under those circumstances.
Most likely this did not cause any wrong results, since the other
constraints for SkipScan to apply still had to be satisfied, but it
resulted in very inefficient query execution for the affected
queries.
This patch refactors the SkipScan code to use the distinctClause
from the Query instead.
Fixes #3330
Move the timestamp_limits and with_clause_parser tests to
regresscheck-shared since those tests don't need a private
database, which reduces the overhead of running them.
Also add missing ORDER BY clauses to some of the queries
in timestamp_limits to make the output more stable.
Change the postgresql ftp URL from snapshot/ to source/. This way we
do not need to react to every new commit upstream, but only whenever
the postgresql minor version changes.
Co-authored-by: Mats Kindahl <mats.kindahl@gmail.com>
Change sanitizer test to run on PG12 and make it use the same
infrastructure as the other linux regression tests.
Co-authored-by: Sven Klemm <sven@timescale.com>
This patch adds the time_bucket_ng() function to the experimental
schema. The "ng" part stands for "next generation". Unlike the
current time_bucket() implementation, the _ng version will support
months, years, and timezones.
The current implementation doesn't claim to be complete. For
instance, it doesn't support timezones yet. The reasons to commit it
in its current state are 1) to shorten the feedback loop, 2) to start
experimenting with monthly buckets as soon as possible,
3) to reduce the unnecessary work of rebasing and resolving
conflicts, and 4) to make the work easier for the reviewers.
The post-update script was handling the preservation of initprivs for
newly added catalog tables and views. However, newly added catalog
sequences need separate handling, otherwise the update tests start
failing. We also now grant privileges for all future sequences in the
update tests.
In passing, default the PG_VERSION in the update tests to 12 since we
don't work with PG11 anymore.