When a job finishes execution, whether because of an error or a success,
this commit will print the execution time of the job in the log
together with a message indicating which job finished.
For continuous aggregate refreshes, the number of rows deleted from and
inserted into the materialization table will be printed.
When the uncompressed part of a partially compressed chunk is read by a
non-partial path and the compressed part by a partial path, the append
node on top could process the uncompressed part multiple times because
the path was declared as a partial path and the append node assumed it
could be executed by all workers in parallel without producing
duplicates.
This PR fixes the declaration of the path.
For continuous aggregates with a variable bucket size, the interval
was inadvertently modified during validation. This is now fixed by
validating a copy of the interval structure and keeping the original
structure untouched.
Fixes #5734
This patch adds support to pass continuous aggregate names to
`chunk_detailed_size` to align it with the behavior of other functions
such as `show_chunks`, `drop_chunks`, and `hypertable_size`.
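As a hedged illustration, assuming a continuous aggregate named
`conditions_daily` (a hypothetical name), the cagg name is now accepted
where previously only a hypertable name worked:

```sql
-- Hypothetical continuous aggregate; its name is resolved to the
-- underlying materialization hypertable.
SELECT * FROM chunk_detailed_size('conditions_daily');
```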
This patch adds support to pass continuous aggregate names to the
`set_chunk_time_interval` function to align it with functions such as
`show_chunks`, `drop_chunks`, and others.
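A minimal sketch, assuming a continuous aggregate named
`conditions_daily` (hypothetical):

```sql
-- The cagg name is resolved to its underlying materialization
-- hypertable before the chunk time interval is changed.
SELECT set_chunk_time_interval('conditions_daily', INTERVAL '7 days');
```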
It reuses the existing function in chunk.c that finds a hypertable or
resolves a continuous aggregate to its underlying hypertable, but moves
that function to hypertable.c and exports it from there. There is some
discussion about whether this functionality should stay in chunk.c;
however, it feels out of place in that file now that it is exported.
This commit is a follow-up to #5515, which added support for ALTER TABLE
... REPLICA IDENTITY (FULL | INDEX) on hypertables.
This commit allows executing this command against materialized
hypertables to enable UPDATE/DELETE operations on continuous aggregates
when logical replication is enabled for them.
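A hedged sketch of the intended usage, assuming a continuous aggregate
named `conditions_daily` (hypothetical); the materialization hypertable
returned by the lookup below is only an example name:

```sql
-- Find the materialization hypertable backing the continuous aggregate.
SELECT materialization_hypertable_schema, materialization_hypertable_name
FROM timescaledb_information.continuous_aggregates
WHERE view_name = 'conditions_daily';

-- Assuming the lookup returned _timescaledb_internal._materialized_hypertable_2,
-- set its replica identity so UPDATE/DELETE can be logically replicated.
ALTER TABLE _timescaledb_internal._materialized_hypertable_2
  REPLICA IDENTITY FULL;
```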
* Restore default batch context size to fix a performance regression on
sorted batch merge plans.
* Support reverse direction.
* Improve gorilla decompression by computing prefix sums of tag bitmaps
during decompression.
The ts_set_flags_32 function takes a bitmap and flags and returns an
updated bitmap. However, if the returned value is not used, the function
call has no effect. An unused result may indicate the improper use of this
function.
This patch adds the pg_nodiscard qualifier to the function, which
triggers a compiler warning if the returned value is not used.
In #4664 we introduced fixed schedules for jobs. This was done by
introducing the additional parameters `fixed_schedule`, `initial_start`,
and `timezone` for our `add_job` and `add_policy` APIs.
These fields were not updatable by `alter_job`, so it was not possible
to switch from one type of schedule to another without dropping and
recreating existing jobs and policies.
This patch adds the missing parameters to `alter_job` to enable
switching from one type of schedule to another.
Fixes #5681
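A minimal sketch, assuming an existing job with id 1000 (hypothetical);
the parameter names are the ones listed above:

```sql
-- Switch an existing job to a fixed schedule anchored at the next full
-- hour, interpreted in UTC.
SELECT alter_job(
  1000,
  fixed_schedule => true,
  initial_start  => date_trunc('hour', now()) + INTERVAL '1 hour',
  timezone       => 'UTC'
);
```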
The backport script for PRs does not have the permission to backport
PRs that include workflow changes, so these PRs are excluded from
automatic backporting.
Failed CI run:
https://github.com/timescale/timescaledb/actions/runs/5387338161/
jobs/9780701395
> refusing to allow a GitHub App to create or update workflow
> `.github/workflows/xxx.yaml` without `workflows` permission)
We stopped building packages for Ubuntu Kinetic on ARM64 due to its
limited PostgreSQL version support and the EOL of Kinetic in a few
weeks. This patch removes the check for up-to-date packages for this
version.
The CHANGELOG.md file contains the sections features, bugfixes, and
thanks. This patch adjusts the merge_changelogs.sh script to produce
the sections in the same order.
This release contains bug fixes since the 2.11.0 release. We recommend
that you upgrade at the next available opportunity.
**Features**
* #5679 Teach loader to load OSM extension
**Bugfixes**
* #5705 Scheduler accidentally getting killed when calling `delete_job`
* #5742 Fix Result node handling with ConstraintAwareAppend on
compressed chunks
* #5750 Ensure tlist is present in decompress chunk plan
* #5754 Fixed handling of NULL values in bookend_sfunc
* #5798 Fixed batch look ahead in compressed sorted merge
* #5804 Mark cagg_watermark function as PARALLEL RESTRICTED
* #5807 Copy job config JSONB structure into current MemoryContext
* #5824 Improve continuous aggregate query chunk exclusion
**Thanks**
* @JamieD9 for reporting an issue with a wrong result ordering
* @xvaara for reporting an issue with Result node handling in
ConstraintAwareAppend
This patch changes the time_bucket exclusion in cagg queries to
distinguish between < and <=. Previously both were treated the same,
leading to a failure to exclude chunks when the constraints were
exactly at the bucket boundary.
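As a hedged example, assuming a continuous aggregate `conditions_daily`
with 1-day buckets (hypothetical names), a strict `<` constraint that
lands exactly on a bucket boundary can now exclude chunks beyond that
boundary:

```sql
-- Both constraints fall exactly on bucket boundaries; with the fix,
-- chunks outside [2023-06-01, 2023-06-08) are excluded from the plan.
SELECT bucket, avg_temp
FROM conditions_daily
WHERE bucket >= '2023-06-01' AND bucket < '2023-06-08';
```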
Add support for setting replica identity on hypertables via ALTER
TABLE. The replica identity is used in logical replication to identify
rows that have changed.
Currently, replica identity can only be altered on hypertables via the
root; changing it directly on chunks will raise an error.
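A short sketch, assuming a hypertable named `metrics` and a unique
index `metrics_device_time_idx` (both hypothetical):

```sql
-- Set the replica identity on the root hypertable, not on individual
-- chunks.
ALTER TABLE metrics REPLICA IDENTITY FULL;

-- Alternatively, identify changed rows via a unique index.
ALTER TABLE metrics REPLICA IDENTITY USING INDEX metrics_device_time_idx;
```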
In decompress_sorted_merge_get_next_tuple it is determined how many
batches currently need to be opened to perform a sorted merge. This is
done by checking whether the first tuple from the last opened batch is
larger than the last returned tuple.
If a filter removes the first tuple, the tuple from this batch that was
first inserted into the heap can no longer be used to perform the
check. This patch fixes the incorrect batch look-ahead.
Fixes: #5797
The job config JSONB can be a nested structure of elements that
all need to reside in the same memory context as the other job
values. To ensure this, we copy the structure on assignment.
If there are any indexes on the compressed chunk, insert into them
while inserting the heap data rather than reindexing the relation at
the end. This reduces the amount of locking on the compressed chunk
indexes, which created issues when merging chunks, and should help
with future updates of compressed data.
It could happen that a chunk is dropped in the middle of processing
another command, and the test bgw_db_scheduler_fixed can crash for that
reason. Making sure that the system errors out instead of failing an
assertion helps avoid the situation in which the postmaster disconnects
all clients in these cases.
This patch marks the function cagg_watermark as PARALLEL RESTRICTED. It
partially reverts the change of
c0f2ed18095f21ac737f96fe93e4035dbfeeaf2c. The reason is as follows: for
transaction isolation levels below REPEATABLE READ, it cannot be
ensured that all parallel workers read the same watermark (e.g., using
the read committed isolation level: worker A reads the watermark, the
CAGG is refreshed and the watermark changes, then worker B reads the
newer watermark). The differing views of the CAGG can cause unexpected
results and crashes (e.g., the chunk exclusion excludes different
chunks in worker A and in worker B).
In addition, a correct snapshot is now used when the watermark is read
from the CAGG, and a TAP test is added that detects inconsistent
watermark reads.
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Co-authored-by: Zoltan Haindrich <zoltan@timescale.com>
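For illustration only, a sketch of the SQL-level effect of marking
cagg_watermark as PARALLEL RESTRICTED; the schema and exact signature
are assumptions here:

```sql
-- A PARALLEL RESTRICTED function may appear in a parallel query but is
-- only evaluated in the parallel group leader, never in a worker.
ALTER FUNCTION _timescaledb_internal.cagg_watermark(integer)
  PARALLEL RESTRICTED;

-- Inspect the parallel-safety marking ('r' = restricted).
SELECT proname, proparallel FROM pg_proc WHERE proname = 'cagg_watermark';
```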
The flakiness was due to two inserts falling into the
same chunk instead of distinct ones, so the inserted data was moved
further apart to ensure the rows fall into different chunks.
During UPDATE/DELETE on compressed hypertables, we do a sequential
scan, which can be improved by supporting index scans.
With this patch, if an UPDATE/DELETE query has WHERE conditions on
SEGMENT BY columns, we use an index scan to fetch all matching rows.
The fetched rows are decompressed and moved to the uncompressed chunk,
and a regular UPDATE/DELETE is performed on the uncompressed chunk.
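A hedged example, assuming a hypertable `metrics` compressed with
`timescaledb.compress_segmentby => 'device_id'` (hypothetical names):

```sql
-- The predicate on the segmentby column lets the matching compressed
-- rows be located via an index scan; they are then decompressed into
-- the uncompressed chunk and updated there.
UPDATE metrics
SET reading = reading * 1.1
WHERE device_id = 42;
```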
The download links for several platforms are broken. This patch removes
the links for the individual platforms and adds a link that points to
the self-hosted install docs instead (as proposed by the docs team, see
the discussion in #5762).
Fixes: #5762
The ignored workflow for windows-build-and-test does not set the names
of the actions properly, so these actions use the default naming, for
example 'Regression Windows / build (15, windows-2022, Debug)'.
However, our CI expects names like 'PG15 Debug windows-2022' in the
required checks. This PR corrects the names of the jobs.
CMAKE_CPP_FLAGS is not a thing at all. Furthermore,
CMAKE_CXX_FLAGS is not passed to a C compiler.
pg_config uses CPPFLAGS for all includes, and those flags need
to be passed into CMAKE_C_FLAGS as well.
SQLSmith found two bugs in the compression sorted merge code.
* The unused_batch_states are not initialized properly. Therefore,
non-existing unused batch states can be part of the BMS. This patch
fixes the initialization.
* For performance reasons, we reuse the same TupleDesc across all
TupleTableSlots. PostgreSQL sometimes uses TupleDesc data structures
with active reference counting. The way we use the TupleDesc
structures collides with PostgreSQL's reference counting. This
patch introduces a private TupleDesc copy without reference counting.
PG16 defines the PGDLLEXPORT macro as a proper visibility attribute. In
the previous versions, from PG12 to PG15, PGDLLEXPORT was always
defined as an empty macro. Considering all this, the code has now been
updated to skip defining PGDLLEXPORT if it has already been defined
properly. If not, the macro is redefined without any additional checks.
postgres/postgres@089480c
Note that this change, in combination with the -DEXPERIMENTAL=ON cmake
flag, only allows the timescaledb code to compile with PG16; it does
not mean PG16 is supported by the extension.
We have changed the compression test by disabling parallel append
in some test cases because the regression test was failing only
on PG 14.0 but not on PG 14.8 or any other PostgreSQL version.
Add a function to decompress a compressed batch entirely in one go, and
use it in some query plans. As a result of decompression, produce
ArrowArrays. They will be the base for the subsequent vectorized
computation of aggregates.
As a side effect, some heavy queries to compressed hypertables speed up
by about 15%. Point queries with LIMIT 1 can regress by up to 1 ms. If
the absolute highest performance is desired for such queries, bulk
decompression can be disabled with a GUC.
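As a sketch, assuming the GUC is named
`timescaledb.enable_bulk_decompression` (an assumption; the text above
only states that a GUC exists):

```sql
-- Disable bulk decompression for latency-sensitive point queries.
SET timescaledb.enable_bulk_decompression = off;
```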
The test was failing on the first run by leaving a database behind as
a side effect.
Between two steps the extension was dropped without a proper cleanup.
A non-existent SQL function was called during cleanup.
This patch also removes the "debug mode"; every execution will now
leave the logs etc. in the /tmp directory for further inspection.
The function bookend_sfunc compares values. If the first processed
value was NULL, it was copied into the state of the sfunc. A subsequent
comparison between the NULL value in the state and a non-NULL value
could lead to a crash.
This patch improves the handling of NULL values in bookend_sfunc.
In PostgreSQL < 15, CustomScan nodes are projection capable. The planner
invokes create_plan_recurse with the flag CP_IGNORE_TLIST. So, the
target list of a CustomScan node can be NIL. However, we rely on the
target list to derive information for sorting.
This patch ensures that the target list is always populated before the
sort functions are called.
Fixes: #5738