Change the PostgreSQL FTP URL from snapshot/ to source/. This way we
do not need to react to every new upstream commit, but only when
the PostgreSQL minor version changes.
Co-authored-by: Mats Kindahl <mats.kindahl@gmail.com>
Change sanitizer test to run on PG12 and make it use the same
infrastructure as the other linux regression tests.
Co-authored-by: Sven Klemm <sven@timescale.com>
This patch adds the time_bucket_ng() function to the experimental
schema. The "ng" part stands for "next generation". Unlike the
current time_bucket() implementation, the _ng version will support
months, years and timezones.
The current implementation doesn't claim to be complete. For instance,
it doesn't support timezones yet. The reasons to commit it in its
current state are: 1) to shorten the feedback loop; 2) to start
experimenting with monthly buckets as soon as possible;
3) to reduce the unnecessary work of rebasing and resolving
conflicts; 4) to make the work easier for the reviewers.
The post-update script handled preserving initprivs for newly
added catalog tables and views. However, newly added catalog sequences
need separate handling, otherwise update tests start failing. We also
now grant privileges for all future sequences in the update tests.
In passing, default the PG_VERSION in the update tests to 12 since we
don't work with PG11 anymore.
Add a workflow to check that CMake files are correctly formatted as
well as a custom target to format CMake files in the repository. This
commit also runs the formatting on all CMake files in the repository.
Combine dist_hypertable_pg12 test since all the tests in that file
can run on all supported PG versions now.
We also rename the views test to information_views to make it clearer
what the test is about and rename the join test to pg_join since
this is the postgres join test ported to hypertables.
With the removal of PG11 support we can use the same template for
the rowsecurity test. We still need to keep the output version
specific since the plan output differs between PG12 and PG13.
Currently, tests in regresscheck-shared can only include
EXPLAIN output if they only access the precreated hypertables,
because hypertables and chunks created in the test itself will
have ids that depend on the execution order of the shared tests.
This patch filters out those ids so shared tests can include
EXPLAIN output for local hypertables.
Triggers on distributed hypertables can now be renamed due to having
the rename statement forwarded to data nodes. This also applies to
other objects on such tables, like constraints and indexes, since they
share the same DDL "rename" machinery. Tests are added for these
cases.
For convenience, trigger functions on distributed hypertables will now
also be created automatically on the data nodes. In other words, it is
no longer necessary to pre-create the trigger function on all data
nodes.
This change also fixes an issue with statement-level triggers, which
weren't properly propagated to data nodes during `CREATE TRIGGER`.
Fixes #3238
Since we are only interested in entries with classoid pg_class,
our queries should reflect that. Without this restriction,
objects that have entries for multiple classoids can cause the
extension update to fail.
Refactoring. Since bucket_width will not be fixed in the future, this
patch introduces two new procedures:
- ts_continuous_agg_bucket_width(), for determining the exact width
of a given bucket;
- ts_continuous_agg_max_bucket_width(), for estimating the maximum
bucket width for a given continuous aggregate;
This will allow determining the bucket width on the fly, which is
not possible when ContinuousAgg* -> data.bucket_width is accessed
directly. All accesses to data.bucket_width were changed accordingly.
When executing "ALTER EXTENSION timescaledb UPDATE TO ...", the update
will fail if parallel workers are spawned for the update itself.
Disable parallel execution during the update.
pg_init_privs can have multiple entries per relation if the relation
has per column privileges. An objsubid different from 0 means that
the entry is for a column privilege. Since we do not currently
restore column privileges we have to ignore those rows otherwise
the update script will fail when tables with column privileges are
present.
The two_buckets_to_str() procedure relies on a fixed bucket_width,
which is not going to remain fixed in the future. Since the procedure
is used only to generate a hint that accompanies an error message,
the simplest solution is to remove the procedure. We can improve
the error messages later if necessary.
Harden core APIs by adding the `const` qualifier to pointer parameters
and return values passed by reference. Adding `const` to APIs has
several benefits and potentially reduces bugs.
* Allows core APIs to be called using `const` objects.
* Callers know that objects passed by reference are not modified as a
side-effect of a function call.
* Returning `const` pointers enforces "read-only" usage of pointers to
internal objects, forcing users to copy objects when mutating them
or using explicit APIs for mutations.
* Allows compiler to apply optimizations and helps static analysis.
Note that these changes are so far only applied to core API
functions. Further work can be done to improve other parts of the
code.
When querying continuous aggregate views with a search_path not
including public, the query will fail because the function reference
in the finalize call is not fully qualified.
This can surface when querying caggs through postgres_fdw which
resets search_path to contain only pg_catalog.
Fixes #1919
Fixes #3326
Upstream fixed a bug with miscomputation of relids for
PlaceHolderVar. Those fixes changed the signature of pull_varnos,
make_simple_restrictinfo and make_restrictinfo.
The fixes were backported to the PG12 and PG13 branches, but to avoid
breaking compatibility with extensions the old functions were left in.
This patch makes our code use the new functions when compiling
against a postgres version that has them.
https://github.com/postgres/postgres/commit/1cce024fd2
https://github.com/postgres/postgres/commit/73fc2e5bab
We explicitly filter paths for compressed chunks that
have spurious join clauses between the compressed chunk and
the original chunk or hypertable. However there are other
cases where a chunk could be a child rel
(i.e. RELOPT_OTHER_MEMBER_REL) such as when the chunk is
referenced as part of a UNION ALL query. We remove all
paths that have spurious join clauses between the compressed
chunk and any implied parent for the chunk.
Fixes #2917
While looking for code that relies on fixed bucket_width I discovered
several files that don't seem to be used anymore. This patch removes
these files. Even if there is a value in them we can't rely on fixed
bucket_width in the future anyway.
$(pg_config --libdir)/pgxs/src/test/perl is the right path for
PostgresNode.pm on Linux. On macOS with PG 13.x, another path seems
to be used: $(pg_config --libdir)/postgresql/pgxs/src/test/perl
Introduce a shell wrapper around the Perl prove utility to control
running of TAP tests. The following control variable is supported:
PROVE_TESTS: only run TAP tests from this list
e.g. make provecheck PROVE_TESTS="t/foo.pl t/bar.pl"
Note that you can also use wildcard patterns to run multiple TAP
tests matching the pattern:
e.g. make provecheck PROVE_TESTS="t/*chunk*"
If the existing "TESTS=" option is used along with PROVE_TESTS, then
the subset represented by PROVE_TESTS will also get run. Otherwise,
TAP tests will be skipped if "TESTS=" is specified.
e.g. make installcheck TESTS=dist_hyper* PROVE_TESTS="t/001_*"
Previously the assignment of data nodes to chunks had a bit
of a thundering-herd problem for multiple hypertables
without space partitions: the data node assigned for the
first chunk was always the same across hypertables.
We fix this by adding the hypertable_id to the
index into the datanode array. This de-synchronizes
across hypertables but maintains consistency for any
given hypertable.
We could make this consistent for space partitioned tables
as well but avoid doing so now to prevent partitions
jumping nodes due to this change.
This also affects tablespace selection in the same way.
Install the Perl prerequisites when building the image for ABI tests,
including the `prove` binary.
Although the ABI tests currently don't run TAP tests, CMake still
failed the configuration since it expects the prerequisites to be
installed unless it is run with `-DTAP_CHECKS=off`.
When adding support for PG13 we introduced a macro to hide the
function signature differences of ExecComputeStoredGenerated,
but the callers of this function never got adjusted to use the
macro.
CMake now detects if the necessary prerequisites for running TAP
tests are installed on the local system. This includes perl
installation and other dependencies, such as the IPC::Run module and
prove binary.
Remove TTSOpsVirtualP, TTSOpsHeapTupleP, TTSOpsMinimalTupleP and
TTSOpsBufferHeapTupleP macros since they were only needed on PG11
to allow us to define compatibility macros for TupleTableSlot
operations.
When refactoring the job code, the job_type was removed from our
catalog, but a few places that still had a variable holding the
job type were overlooked.
This release adds major new features since the 2.2.1 release. We deem
it moderate priority for upgrading.
This release adds support for inserting data into compressed chunks
and improves performance when inserting data into distributed
hypertables. Distributed hypertables now also support triggers and
compression policies.
The bug fixes in this release address issues related to the handling
of privileges on compressed hypertables, locking, and triggers with
transition tables.
**Features**
* #3116 Add distributed hypertable compression policies
* #3162 Use COPY when executing distributed INSERTs
* #3199 Add GENERATED column support on distributed hypertables
* #3210 Add trigger support on distributed hypertables
* #3230 Support for inserts into compressed chunks
**Bugfixes**
* #3213 Propagate grants to compressed hypertables
* #3229 Use correct lock mode when updating chunk
* #3243 Fix assertion failure in decompress_chunk_plan_create
* #3250 Fix constraint triggers on hypertables
* #3251 Fix segmentation fault due to incorrect call to chunk_scan_internal
* #3252 Fix blocking triggers with transition tables
**Thanks**
* @yyjdelete for reporting a crash with decompress_chunk and identifying the bug in the code
* @fabriziomello for documenting the prerequisites when compiling against PostgreSQL 13
Two insert transactions could potentially try
to update the chunk status to unordered. This results in
one of the transactions failing with a "tuple concurrently
updated" error.
Before updating the status, lock the tuple for update, thus
forcing the other transaction to wait for the tuple lock; then
check the status column value and update it if needed.