We were using mismatched compiler and gcov versions, which led to either
segfaults or errors like "GCOV_TAG_COUNTER_ARCS mismatch". Add some
cmake code that tries to find the gcov that matches the compiler.
This should hopefully fix some of the mysterious missing coverage
problems that we've been experiencing for a while.
This patch fixes several issues with next_start calculation.
- Previously, the offset was added twice in some cases. This patch
removes the duplicate addition.
- Additionally, schedule intervals with month components
were not handled correctly.
Internally, time_bucket with origin is used to calculate
the next start. However, in the case of month intervals, the
timestamp calculated for a bucket is always aligned on the first
day of the month, regardless of origin.
Therefore, previously the result was aligned with origin by adding
the difference between origin and its respective time bucket.
This difference was computed as a fixed-length interval of days and
time, which occasionally produced an incorrect next start, for example
when a job should be executed on the last day of a month.
That is fixed by adding an appropriate interval of months to
initial_start and letting Postgres handle this computation properly.
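A minimal illustration of the calendar arithmetic the fix relies on
(plain PostgreSQL, not the scheduler code itself):

```sql
-- Adding whole months lets PostgreSQL clamp to the last valid day of the
-- target month, instead of applying a fixed-length day/time offset.
SELECT TIMESTAMP '2023-01-31 00:00:00' + INTERVAL '1 month';   -- 2023-02-28 00:00:00
SELECT TIMESTAMP '2023-01-31 00:00:00' + INTERVAL '3 months';  -- 2023-04-30 00:00:00
```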
Fixes #5216
The hypertable_compression test had an implicit reliance on the
ordering of tuples when querying the materialized results.
This patch makes the ordering explicit in this test.
1) Simplify the path generation for the parameterized data node scans.
1) Adjust the data node scan cost if it's an index scan, instead of always
treating it as a sequential scan.
1) Hard-code the grouping estimation for distributed hypertables, instead
of using the totally bogus per-column ndistinct value.
1) Add the GUC to disable parameterized data node scan (see the usage
sketch after this list).
1) Add more tests.
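For reference, the new scan path could be disabled roughly as follows;
the exact GUC name is an assumption here and may differ from what the
patch actually adds.

```sql
-- Assumed GUC name, for illustration only.
SET timescaledb.enable_parameterized_data_node_scan = false;
```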
This release contains bug fixes since the 2.9.2 release.
This release is high priority for upgrade. We strongly recommend that you
upgrade as soon as possible.
**Bugfixes**
* #4804 Skip bucketing when start or end of refresh job is null
* #5108 Fix column ordering in compressed table index not following the order of a multi-column segmentby definition
* #5187 Don't enable clang-tidy by default
* #5255 Fix year not being considered as a multiple of day/month in hierarchical continuous aggregates
* #5259 Lock down search_path in SPI calls
A previous change accidentally broke the dist_hypertable test so that
it prematurely exited. This change restores the test so that it
executes properly.
The function to execute remote commands on data nodes used a blocking
libpq API that doesn't integrate with PostgreSQL interrupt handling,
making it impossible for a user or statement timeout to cancel a
remote command.
Refactor the remote command execution function to use a non-blocking
API and integrate with PostgreSQL signal handling via WaitEventSets.
Partial fix for #4958.
Refactor remote command execution function
A follow-up to the review comments on the previous PR.
1. Create one backport PR per source PR, even when the source PR has
multiple commits.
1. Add a comment to the source PR if we fail to backport it for some
reason.
Previously all intervals were converted to seconds using "epoch"
with date_part. However, this treats a year as 365.25 days to
account for leap years, leading to the unexpected situation that
a year is not a multiple of a day or a month.
Fixed by treating month-only intervals as multiples of 30 days.
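For context, the conversion that caused the surprise can be reproduced
in plain PostgreSQL:

```sql
-- A year converted via "epoch" corresponds to 365.25 days' worth of
-- seconds, so it is not an integer multiple of a day or a 30-day month.
SELECT date_part('epoch', INTERVAL '1 year');  -- 31557600
SELECT date_part('epoch', INTERVAL '1 day');   -- 86400
SELECT 31557600 / 86400.0;                     -- 365.25
```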
Fixes #5231
1. Find the latest release branch
1. For each commit in main and not in release branch (compared by
title), find the corresponding PR.
1. If the PR fixes an issue labeled "bug", and neither the PR nor the
issue are labeled "no-backport", cherry-pick the commits from the PR
onto the release branch, and create a PR with these changes.
We should not broadly make functions security definer, as that increases
the attack surface of our extension. Telemetry in particular should run
with the minimum required privileges.
For an unknown reason, pip started to install an older version of
prospector which is incompatible with the current pylint. Require the
new prospector version explicitly.
The `cagg_watermark` function performs only read-only operations, so it
is safe to mark it as parallel safe and let it take advantage of
PostgreSQL parallel query.
Since 2.7, when we introduced the new Continuous Aggregate format, we no
longer use partials, so the aggregate functions `partialize_agg` and
`finalize_agg` (which are not parallel safe) are no longer involved, and
there is no reason for realtime Continuous Aggregates not to benefit
from PostgreSQL parallel query.
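A minimal sketch of the change, assuming the single-argument integer
signature of `cagg_watermark`; the actual update script in the extension
may differ:

```sql
-- Sketch only: mark the read-only watermark function as parallel safe so
-- realtime continuous aggregate queries can participate in parallel plans.
ALTER FUNCTION _timescaledb_internal.cagg_watermark(hypertable_id INTEGER)
    PARALLEL SAFE;
```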
Broken code caused the async connection module to never send queries
using prepared statements. Instead, queries were always sent as
parameterized queries.
Fix this so that prepared statements are used when created.
When run in a parallel group, the dist_move_chunk test can get into a
deadlock with another test running a 'DROP DATABASE' command. So, mark
it as a solo test to disallow it from running in a parallel group.
Closes #4972
Refactor the data node connection establishment so that it is
interruptible, e.g., by ctrl-c or `statement_timeout`.
Previously, the connection establishment used blocking libpq calls. By
instead using asynchronous connection APIs and integrating with
PostgreSQL interrupt handling, the connection establishment can be
canceled by an interrupt caused by a statement timeout or a user.
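As a usage-level illustration (not the C implementation), a statement
timeout can now cancel a hanging connection attempt; the node name and
host below are placeholders:

```sql
-- Placeholder names for illustration. With the non-blocking connection
-- code, the timeout (or Ctrl-C) interrupts the attempt instead of
-- blocking indefinitely.
SET statement_timeout = '5s';
SELECT add_data_node('dn_example', host => 'unreachable.example.com');
```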
Fixes #2757
Tie the life cycle of a data node connection to the memory context it
is created on. Previously, a data node connection was automatically
closed at the end of a transaction, although often a connection needs
to live beyond a single transaction. For example, a connection cache
is maintained for data node connections, and, for such cases, a flag
was set on a connection to avoid closing it automatically.
Instead of tying connections to transactions, they are now life-cycle
managed via memory contexts. This simplifies the handling of
connections and avoids having to create exceptions to closing
connections at transaction end.
Since the job error log can contain information from many different
sources and also from many different jobs it is important to ensure
that visibility of the job error log entries is restricted to job
owners.
This commit extends the view `timescaledb_information.job_errors` with
role-based checks so that a user can only see entries for jobs that she
has permission to view, and restricts the permissions on
`_timescaledb_internal.job_errors` so that users can only view the job
error log through the view. A special case is added so that the
superuser and the database owner can see all log entries, even if there
is no associated job id with the log entry.
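The gist of the role-based check, as a rough sketch rather than the
actual definition (catalog and column names are simplified and may
differ):

```sql
-- Rough sketch only: an entry is visible when the current user has the
-- job owner's role; the database owner additionally sees entries that
-- have no associated job id. Direct access to the underlying table is
-- revoked so the view is the only way to read the log.
REVOKE ALL ON _timescaledb_internal.job_errors FROM PUBLIC;

CREATE OR REPLACE VIEW timescaledb_information.job_errors AS
SELECT e.*
  FROM _timescaledb_internal.job_errors e
  LEFT JOIN _timescaledb_config.bgw_job j ON j.id = e.job_id
 WHERE pg_catalog.pg_has_role(current_user, j.owner, 'MEMBER')
    OR pg_catalog.pg_has_role(current_user,
                              (SELECT datdba FROM pg_catalog.pg_database
                                WHERE datname = current_database()),
                              'MEMBER');
```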
Closes #5217
Recent refactorings in the INSERT into compressed chunk code path
allowed us to support this feature, but the check that prevented users
from using it was not removed as part of that patch. This patch removes
the blocker code and adds a minimal test case.
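A small usage example of what is now permitted; the table and column
names are made up for illustration:

```sql
-- Hypothetical schema: inserting into a chunk that has already been
-- compressed is no longer blocked.
CREATE TABLE metrics (time TIMESTAMPTZ NOT NULL, device INT, value FLOAT);
SELECT create_hypertable('metrics', 'time');
ALTER TABLE metrics SET (timescaledb.compress,
                         timescaledb.compress_segmentby = 'device');
INSERT INTO metrics VALUES ('2023-01-01 00:00', 1, 1.0);
SELECT compress_chunk(c) FROM show_chunks('metrics') AS c;
-- Previously rejected, now accepted:
INSERT INTO metrics VALUES ('2023-01-01 00:01', 2, 2.0);
```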
When the start or end of a refresh job is NULL, bucketing raises an
error because start and end are already the minimum and maximum
timestamp values before bucketing. Hence, skip bucketing in this case.
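For illustration, one place this shows up is a policy with an open-ended
refresh window; the continuous aggregate name below is an example only:

```sql
-- Example only: a NULL start_offset means the refresh window starts at
-- the minimum timestamp, which previously triggered the bucketing error.
SELECT add_continuous_aggregate_policy('conditions_summary',
    start_offset      => NULL,
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');
```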
Partly fixes #4117
Enable support for JOINs in the query used for creating continuous
aggregates, with the following restrictions:
1. The join can involve only one hypertable and one normal table
2. The join must be an inner join
3. The join condition can only be an equality condition
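A sketch of the kind of definition this enables; the schema is
illustrative:

```sql
-- Illustrative schema: one hypertable (conditions) inner-joined to one
-- normal table (devices) on an equality condition.
CREATE MATERIALIZED VIEW conditions_by_location
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 day', c.time) AS bucket,
       d.location,
       avg(c.temperature) AS avg_temp
FROM conditions c
JOIN devices d ON c.device_id = d.device_id
GROUP BY bucket, d.location;
```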
This release contains bug fixes since the 2.9.1 release.
We recommend that you upgrade at the next available opportunity.
**Bugfixes**
* #5114 Fix issue with deleting data node and dropping the database on multi-node
* #5133 Fix creating a CAgg on a CAgg where the time column is in a different order than in the original hypertable
* #5152 Fix adding column with NULL constraint to compressed hypertable
* #5170 Fix CAgg on CAgg variable bucket size validation
* #5180 Fix default data node availability status on multi-node
* #5181 Fix ChunkAppend and ConstraintAwareAppend with TidRangeScan child subplan
* #5193 Fix repartition behavior when attaching data node on multi-node
Commit effc8efe148c4ec0048bd7c1dfe0ca01df2afdc9
accidentally placed .gitattributes inside a build-13 directory
instead of the root of the project. This commit removes build-13 and
fixes the structure.