We stopped building packages for Ubuntu Kinetic on ARM64 due to the
limited support for PostgreSQL versions and the EOL of Kinetic in a few
weeks. This patch removes the check for up-to-date packages for this
version.
The CHANGELOG.md file contains the sections features, bugfixes, and
thanks. This patch adjusts the script merge_changelogs.sh to produce
the sections in the same order.
This release contains bug fixes since the 2.11.0 release. We recommend
that you upgrade at the next available opportunity.
**Features**
* #5679 Teach loader to load OSM extension
**Bugfixes**
* #5705 Scheduler accidentally getting killed when calling `delete_job`
* #5742 Fix Result node handling with ConstraintAwareAppend on
compressed chunks
* #5750 Ensure tlist is present in decompress chunk plan
* #5754 Fixed handling of NULL values in bookend_sfunc
* #5798 Fixed batch look ahead in compressed sorted merge
* #5804 Mark cagg_watermark function as PARALLEL RESTRICTED
* #5807 Copy job config JSONB structure into current MemoryContext
* #5824 Improve continuous aggregate query chunk exclusion
**Thanks**
* @JamieD9 for reporting an issue with a wrong result ordering
* @xvaara for reporting an issue with Result node handling in
ConstraintAwareAppend
This patch changes the time_bucket exclusion in cagg queries to
distinguish between < and <=. Previously those were treated the same,
leading to a failure to exclude chunks when the constraints were
exactly at the bucket boundary.
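For illustration, a minimal sketch (the cagg name and bucket column are
placeholders):

```sql
-- Hypothetical continuous aggregate bucketed by 1 day on column "bucket".
-- With a constraint exactly on a bucket boundary, < and <= must now
-- exclude different sets of chunks:
SELECT * FROM daily_summary WHERE bucket <  '2023-08-01'; -- boundary bucket excluded
SELECT * FROM daily_summary WHERE bucket <= '2023-08-01'; -- boundary bucket included
```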
Add support for setting replica identity on hypertables via ALTER
TABLE. The replica identity is used in logical replication to identify
rows that have changed.
Currently, the replica identity can only be altered via the hypertable
root; changing it directly on chunks will raise an error.
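A sketch of the intended usage ("conditions" is a placeholder hypertable
name):

```sql
-- The replica identity is changed via the hypertable root, not on
-- individual chunks:
ALTER TABLE conditions REPLICA IDENTITY FULL;
```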
In decompress_sorted_merge_get_next_tuple it is determined how many
batches currently need to be opened to perform a sorted merge. This is
done by checking whether the first tuple from the last opened batch is
larger than the last returned tuple.
If a filter removes the first tuple, the first tuple inserted into the
heap from this batch can no longer be used to perform the check. This
patch fixes the wrong batch look-ahead.
Fixes: #5797
The job config jsonb can be a nested structure of elements that
all need to reside in the same memory context as the other job
values. To ensure this, we copy the structure on assignment.
If there are any indexes on the compressed chunk, insert into them
while inserting the heap data rather than reindexing the relation at
the end. This reduces the amount of locking on the compressed chunk
indexes, which created issues when merging chunks, and should help
with future updates of compressed data.
It could happen that a chunk is dropped in the middle of processing
another command, and the test bgw_db_scheduler_fixed can crash for that
reason. Making sure that the system errors out instead of failing an
assertion helps avoid the situation in which the postmaster drops all
client connections in these cases.
This patch marks the function cagg_watermark as PARALLEL RESTRICTED. It
partially reverts the change of
c0f2ed18095f21ac737f96fe93e4035dbfeeaf2c. The reason is as follows: for
transaction isolation levels below REPEATABLE READ, it cannot be ensured
that all parallel workers read the same watermark (e.g., using the read
committed isolation level: worker A reads the watermark, the CAGG is
refreshed and the watermark changes, and worker B reads the newer
watermark). The different views on the CAGG can cause unexpected results
and crashes (e.g., chunk exclusion excludes different chunks in
worker A and in worker B).
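A minimal sketch of the change at the SQL level (the exact schema and
signature of the function are assumptions here):

```sql
-- Assumed schema/signature; a PARALLEL RESTRICTED function is only
-- evaluated in the parallel leader, so each worker no longer reads the
-- watermark independently:
ALTER FUNCTION _timescaledb_internal.cagg_watermark(integer) PARALLEL RESTRICTED;
```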
In addition, a correct snapshot is now used when the watermark is read
from the CAGG, and a TAP test is added that detects inconsistent
watermark reads.
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Co-authored-by: Zoltan Haindrich <zoltan@timescale.com>
The flakiness was due to two inserts falling into the same chunk
instead of distinct ones, so the data is now inserted further apart to
ensure the rows fall into different chunks.
During UPDATE/DELETE on compressed hypertables, we do a sequential
scan, which can be improved by supporting index scans.
With this patch, for a given UPDATE/DELETE query, if there are any
WHERE conditions specified using SEGMENT BY columns, we use an index
scan to fetch all matching rows. Fetched rows are decompressed and
moved to the uncompressed chunk, and a regular UPDATE/DELETE is
performed on the uncompressed chunk.
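For example (a hypothetical hypertable compressed with device_id as the
SEGMENT BY column):

```sql
-- The WHERE clause on the segmentby column lets the matching compressed
-- rows be located via an index scan instead of a sequential scan; they
-- are decompressed into the uncompressed chunk and modified there:
UPDATE metrics SET value = 0 WHERE device_id = 17;
DELETE FROM metrics WHERE device_id = 42;
```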
The download links for several platforms are broken. This patch removes
the links for the individual platforms and adds a link that points to
the self-hosted install docs instead (as proposed by the docs team, see
the discussion in #5762).
Fixes: #5762
The ignored workflow for windows-build-and-test does not set the names
of the jobs properly. Therefore, these jobs use the default naming,
for example 'Regression Windows / build (15, windows-2022, Debug)'.
However, our CI expects names like 'PG15 Debug windows-2022' in the
required checks. This PR corrects the names of the jobs.
CMAKE_CPP_FLAGS is not a thing at all. Furthermore,
CMAKE_CXX_FLAGS is not passed to a C compiler.
pg_config uses CPPFLAGS for all includes, and these need
to be passed into CMAKE_C_FLAGS as well.
SQLSmith found two bugs in the compression sorted merge code.
* The unused_batch_states are not initialized properly. Therefore,
non-existing unused batch states can be part of the BMS. This patch
fixes the initialization.
* For performance reasons, we reuse the same TupleDesc across all
TupleTableSlots. PostgreSQL sometimes uses TupleDesc data structures
with active reference counting. The way we use the TupleDesc
structures collides with the reference counting of PostgreSQL. This
patch introduces a private TupleDesc copy without reference counting.
PG16 defines the PGDLLEXPORT macro as a proper visibility attribute. In
the previous versions, from PG12 to PG15, PGDLLEXPORT was always
defined as an empty macro. Considering all this, the code has now been
updated to skip defining PGDLLEXPORT if it has already been defined
properly. If not, the macro is redefined without any additional checks.
postgres/postgres@089480c
Note that this change, in combination with the -DEXPERIMENTAL=ON cmake
flag, will just allow us to compile the timescaledb code with PG16; it
doesn't mean that PG16 is supported by the extension.
We have changed the compression test by disabling parallel append
in some test cases because the regression test was failing only
in PG14.0 but not in PG14.8 or any other PostgreSQL version.
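A sketch of the kind of adjustment used in the affected test cases,
relying on the standard PostgreSQL GUC:

```sql
-- Force a deterministic, non-parallel-append plan for the flaky query:
SET enable_parallel_append = off;
-- ... run the affected test query ...
RESET enable_parallel_append;
```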
Add a function to decompress a compressed batch entirely in one go, and
use it in some query plans. As a result of decompression, produce
ArrowArrays. They will be the base for the subsequent vectorized
computation of aggregates.
As a side effect, some heavy queries to compressed hypertables speed up
by about 15%. Point queries with LIMIT 1 can regress by up to 1 ms. If
the absolute highest performance is desired for such queries, bulk
decompression can be disabled by a GUC.
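A sketch of how such queries could opt out, assuming the GUC is named
timescaledb.enable_bulk_decompression:

```sql
-- GUC name is an assumption; disable bulk decompression for the session
-- so that LIMIT 1 point queries keep their minimal latency:
SET timescaledb.enable_bulk_decompression = off;
```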
The test was failing on the first run by leaving a database behind as
a side effect.
Between two steps, the extension was dropped without a proper cleanup.
A non-existent SQL function was called during cleanup.
This patch also removes the "debug mode"; every execution now leaves
the logs etc. in the /tmp directory for further inspection.
In the function bookend_sfunc, values are compared. If the first
processed value was a NULL value, it was copied into the state of the
sfunc. A subsequent comparison between the NULL value of the state and
a non-NULL value could lead to a crash.
This patch improves the handling of NULL values in bookend_sfunc.
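The crash could be triggered through the bookend aggregates
first()/last(), roughly like this (made-up data):

```sql
-- The first value seen by bookend_sfunc is NULL, the next one is not;
-- previously the comparison against the NULL kept in the state could crash.
SELECT first(value, ts), last(value, ts)
FROM (VALUES ('2023-01-01'::timestamptz, NULL::float8),
             ('2023-01-02'::timestamptz, 1.0)) AS t(ts, value);
```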
In PostgreSQL < 15, CustomScan nodes are projection capable. The planner
invokes create_plan_recurse with the flag CP_IGNORE_TLIST. So, the
target list of a CustomScan node can be NIL. However, we rely on the
target list to derive information for sorting.
This patch ensures that the target list is always populated before the
sort functions are called.
Fixes: #5738
Adds a simple check to ensure that the PR number is present at least once
in the added changelog file.
Also fixes an earlier PR which introduced a typo.
So far, we have set the number of desired workers for decompression to
1. If a query touches only one chunk, we end up with one worker in a
parallel plan. Only if the query touches multiple chunks does
PostgreSQL spin up multiple workers. These workers could then be used
to process the data of one chunk.
This patch removes our custom worker calculation and relies on
PostgreSQL's logic to calculate the desired degree of parallelism.
Co-authored-by: Jan Kristof Nidzwetzki <jan@timescale.com>
Internal Server Error when loading Explorer tab (SDC #995)
This is with reference to a weird scenario where a chunk table entry
exists in the timescaledb catalog but does not exist in the PG catalog.
The stale entry blocks executing the hypertable_size function on the
hypertable.
The changes in this patch are related to improvements suggested for the
hypertable_size function, which involve:
1. Locking the hypertable in ACCESS SHARE mode in the hypertable_size
function to avoid the risk of chunks being dropped by another
concurrent process.
2. Joining the hypertable and inherited chunk tables with "pg_class" to
make sure that a stale table without an entry in the PG catalog is not
included as part of the hypertable size calculation.
3. Adding a filter (schema_name) on pg_class to avoid calculating the
size of multiple hypertables with the same name in different schemas.
NOTE: With this change, calling the hypertable_size function will
require SELECT privilege on the table.
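For reference ("conditions" is a placeholder hypertable name):

```sql
-- Now takes an ACCESS SHARE lock on the hypertable and requires
-- SELECT privilege on it:
SELECT hypertable_size('conditions');
```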
Disable-check: force-changelog-file
This patch enables the ChunkAppend optimization for partially
compressed chunks on hypertables without space partitioning,
allowing for more efficient processing of ORDER BY ... LIMIT
queries.
A follow-up patch is required to handle space-partitioned
hypertables.
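A query of the kind that benefits ("metrics" and its time column are
placeholders):

```sql
-- With partially compressed chunks, ChunkAppend can now stop early
-- instead of scanning every chunk:
SELECT * FROM metrics ORDER BY time DESC LIMIT 10;
```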
This patch does the following:
1. Planner changes to create a ChunkDispatch node when the MERGE
command has an INSERT action.
2. Changes to map partition attributes from a tuple returned from the
child node of ChunkDispatch against the physical target list, so that
the ChunkDispatch node can read the correct value from the partition
column.
3. Fixed issues with MERGE on compressed hypertables.
4. Added more test cases.
5. MERGE on distributed hypertables is not supported.
6. Since there is no Custom Scan (HypertableModify) node for MERGE
with UPDATE/DELETE on compressed hypertables, we don't support this.
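A sketch of a MERGE with an INSERT action that is now routed through
ChunkDispatch (table and column names are made up):

```sql
MERGE INTO conditions AS c
USING staged_readings AS s
  ON c."time" = s."time" AND c.device_id = s.device_id
WHEN NOT MATCHED THEN
  -- New rows are dispatched to the correct chunk of the hypertable.
  INSERT ("time", device_id, temperature)
  VALUES (s."time", s.device_id, s.temperature);
```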
Fixes #5139
This serves as a way to exercise the decompression fuzzing code, which
will be useful when we need to change the decompression functions. Also
this way we'll have a check in CI that uses libfuzzer, and it will be
easier to apply it to other areas of code in the future.
In several places in our code base we use a combination of
`get_namespace_oid` and `get_relname_relid` to return the Oid of a
schema-qualified relation, so we refactored the code to encapsulate
this behavior in a single function.
Use postgres for sql-mode, which makes it easier to connect to local
postgresql with C-c C-z and also sets up postgresql extensions for
font-lock.
Copy the 4-space tab-width setting to diff-mode so that, for example,
diffs of lines containing 18 TABs don't display with a 144-space indent.