4064 Commits

Lakshmi Narayanan Sreethar
c3a9f90fdd Merge PG12 specific testfiles
Merged testfiles that were split out due to their output differing only
in PG12.
2023-07-25 16:00:18 +05:30
Lakshmi Narayanan Sreethar
7936e8015b Remove PG12 specific test output files 2023-07-25 16:00:18 +05:30
Lakshmi Narayanan Sreethar
81b520d3b5 Remove support for PG12
Remove support for compiling TimescaleDB code against PG12. PG12-specific
macros and testfiles will be removed in a follow-up patch.
2023-07-25 16:00:18 +05:30
Lakshmi Narayanan Sreethar
cdea343cc9 Remove PG12 support from github workflows 2023-07-25 16:00:18 +05:30
Mats Kindahl
906bd38573 Add job exit status and runtime to log
When a job finishes execution, either with an error or a success, the
execution time of the job is now printed in the log together with a
message about which job finished.

For continuous aggregate refreshes, the number of rows deleted from and
inserted into the materialization table will be printed.
2023-07-14 12:47:14 +02:00
Jan Nidzwetzki
36e7100013 Fix duplicates on partially compressed chunk reads
When the uncompressed part of a partially compressed chunk is read by a
non-partial path and the compressed part by a partial path, the append
node on top could process the uncompressed part multiple times because
the path was declared as a partial path and the append node assumed it
could be executed in all workers in parallel without producing
duplicates.

This PR fixes the declaration of the path.
2023-07-13 08:57:53 +02:00
Rafia Sabih
1bd527375d Rectify interval calculation
For continuous aggregates with variable bucket size, the interval
was wrongly modified in the process. This is now corrected by creating
a copy of the interval structure for validation purposes and keeping
the original structure untouched.

Fixes #5734
2023-07-12 23:56:04 +02:00
noctarius aka Christoph Engelbert
4c3d64aa98 Support CAGG names in chunk_detailed_size (#5839)
This patch adds support to pass continuous aggregate names to
`chunk_detailed_size` to align it with the behavior of other functions
such as `show_chunks`, `drop_chunks`, and `hypertable_size`.
2023-07-12 20:22:14 +02:00
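
As an illustration of the change above (a sketch only; the continuous
aggregate name is hypothetical, not taken from the commit):

    SELECT * FROM chunk_detailed_size('conditions_summary_daily')
    ORDER BY chunk_name;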
noctarius aka Christoph Engelbert
963d4eefbf Make set_chunk_time_interval CAGGs aware (#5852)
This patch adds support to pass continuous aggregate names to the
`set_chunk_time_interval` function to align it with functions such as
`show_chunks`, `drop_chunks`, and others.

It reuses the previously existing function in chunk.c that finds a
hypertable or resolves a continuous aggregate to its underlying
hypertable, but moves the function to hypertable.c and exports it from
there. There is some discussion about whether this functionality should
stay in chunk.c; however, it feels wrong in that file now that it is
exported.
2023-07-12 20:21:27 +02:00
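
As an illustration of the change above (a sketch only; the continuous
aggregate name and interval value are hypothetical):

    SELECT set_chunk_time_interval('conditions_summary_daily', INTERVAL '7 days');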
noctarius aka Christoph Engelbert
88aaf23ae3 Allow Replica Identity (Alter Table) on CAGGs (#5868)
This commit is a follow-up to #5515, which added support for ALTER TABLE
... REPLICA IDENTITY (FULL | INDEX) on hypertables.

This commit allows executing the command against materialized
hypertables to enable update/delete operations on continuous aggregates
when logical replication is enabled for them.
2023-07-12 14:53:40 +02:00
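
As an illustration of the change above (a sketch only; the materialized
hypertable name is hypothetical):

    ALTER TABLE _timescaledb_internal._materialized_hypertable_2 REPLICA IDENTITY FULL;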
Alexander Kuzmenkov
eaa1206b7f Improvements for bulk decompression
* Restore default batch context size to fix a performance regression on
  sorted batch merge plans.
* Support reverse direction.
* Improve gorilla decompression by computing prefix sums of tag bitmaps
  during decompression.
2023-07-06 19:52:20 +02:00
Alexander Kuzmenkov
7657efe019 Cache the libfuzzer corpus between CI runs
This might help us find something interesting. Also add deltadelta/int8
fuzzing and make other minor improvements.
2023-07-06 17:57:04 +02:00
Jan Nidzwetzki
490bc916af Warn if result of ts_set_flags_32 is not used
The ts_set_flags_32 function takes a bitmap and flags and returns an
updated bitmap. However, if the returned value is not used, the function
call has no effect. An unused result may indicate the improper use of this
function.

This patch adds the qualifier pg_nodiscard to the function which
triggers a warning if the returned value is not used.
2023-07-05 09:25:15 +02:00
Konstantina Skovola
06d20b1829 Enable altering job schedule type through alter_job
In #4664 we introduced fixed schedules for jobs. This was done by
introducing additional parameters fixed_schedule, initial_start and
timezone for our add_job and add_policy APIs.
These fields were not updatable by alter_job, so it was not
possible to switch from one type of schedule to another without dropping
and recreating existing jobs and policies.
This patch adds the missing parameters to alter_job to enable switching
from one type of schedule to another.

Fixes #5681
2023-07-03 15:42:54 +03:00
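
As an illustration of the change above (a sketch only; the job id and
values are hypothetical, the parameter names are those listed in the
commit):

    SELECT alter_job(
        1000,
        fixed_schedule => true,
        initial_start  => '2023-08-01 00:00:00+03',
        timezone       => 'Europe/Athens'
    );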
Jan Nidzwetzki
b9a58dd5c4 Exclude workflow changes from being backported
The backport script for the PRs does not have permission to backport
PRs that include workflow changes, so these PRs are now excluded from
being automatically backported.

Failed CI run:

https://github.com/timescale/timescaledb/actions/runs/5387338161/jobs/9780701395

> refusing to allow a GitHub App to create or update workflow
> `.github/workflows/xxx.yaml` without `workflows` permission)
2023-07-03 12:57:12 +02:00
Jan Nidzwetzki
9bbf521889 Remove Ubuntu Kinetic check on ARM64
We stopped building packages for Ubuntu Kinetic on ARM64 due to the
limited support of PostgreSQL versions and the EOL of Kinetic in a few
weeks. This patch removes the check for up-to-date packages for this
version.
2023-06-30 11:35:15 +02:00
Jan Nidzwetzki
a7be1cc06a Fixed the ordering of merge_changelogs.sh script
The CHANGELOG.md file contains the sections features, bugfixes, and
thanks. This patch adjusts the merge_changelogs.sh script to produce
the sections in the same order.
2023-06-30 11:33:57 +02:00
Jan Nidzwetzki
8a58101095 Post-release fixes for 2.11.1
Bumping the previous version and adding tests for 2.11.1.
2023-06-30 09:54:57 +02:00
Jan Nidzwetzki
8ae2da6260 Release 2.11.1
This release contains bug fixes since the 2.11.0 release. We recommend
that you upgrade at the next available opportunity.

**Features**
* #5679 Teach loader to load OSM extension

**Bugfixes**
* #5705 Scheduler accidentally getting killed when calling `delete_job`
* #5742 Fix Result node handling with ConstraintAwareAppend on
  compressed chunks
* #5750 Ensure tlist is present in decompress chunk plan
* #5754 Fixed handling of NULL values in bookend_sfunc
* #5798 Fixed batch look ahead in compressed sorted merge
* #5804 Mark cagg_watermark function as PARALLEL RESTRICTED
* #5807 Copy job config JSONB structure into current MemoryContext
* #5824 Improve continuous aggregate query chunk exclusion

**Thanks**
* @JamieD9 for reporting an issue with a wrong result ordering
* @xvaara for reporting an issue with Result node handling in
  ConstraintAwareAppend
2023-06-28 16:49:02 +02:00
Sven Klemm
118526e6ae Improve continuous aggregate query chunk exclusion
This patch changes the time_bucket exclusion in cagg queries to
distinguish between < and <=. Previously those were treated the same,
leading to a failure to exclude chunks when the constraints were
exactly at the bucket boundary.
2023-06-28 16:47:51 +02:00
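
As an illustration of the change above (a sketch only; the continuous
aggregate and column names are hypothetical):

    -- The constraint sits exactly on a bucket boundary; < and <= now
    -- lead to different chunk exclusion instead of being treated alike.
    SELECT * FROM conditions_summary_daily WHERE bucket <  '2023-06-01';
    SELECT * FROM conditions_summary_daily WHERE bucket <= '2023-06-01';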
Erik Nordström
e2e7e5f286 Make hypertables support replica identity
Add support for setting replica identity on hypertables via ALTER
TABLE. The replica identity is used in logical replication to identify
rows that have changed.

Currently, replica identity can only be altered on hypertables via the
root; changing it directly on chunks will raise an error.
2023-06-27 15:07:40 +02:00
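
As an illustration of the change above (a sketch only; the hypertable
and index names are hypothetical):

    ALTER TABLE conditions REPLICA IDENTITY FULL;
    -- or, based on a unique index:
    ALTER TABLE conditions REPLICA IDENTITY USING INDEX conditions_device_time_idx;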
Jan Nidzwetzki
33a3e10f48 Fixed batch look ahead in compressed sorted merge
In decompress_sorted_merge_get_next_tuple it is determined how many
batches currently need to be opened to perform a sorted merge. This is
done by checking whether the first tuple from the last opened batch is
larger than the last returned tuple.

If a filter removes the first tuple, the tuple from this batch that was
first inserted into the heap can no longer be used to perform the
check. This patch fixes the wrong batch look-ahead.

Fixes: #5797
2023-06-26 14:40:46 +02:00
Sven Klemm
da20d071cf Copy job config JSONB structure into current MemoryContext
The job config JSONB can be a nested structure of elements that all
need to reside in the same memory context as the other job values. To
ensure this, we copy the structure on assignment.
2023-06-26 12:57:52 +02:00
Jan Nidzwetzki
f172679022 Added perltidy make target
This patch introduces the make target 'perltidy' to format Perl files
with perltidy. In addition, calling perltidy is added to 'make format'.
2023-06-26 10:59:19 +02:00
Ante Kresic
fb0df1ae4e Insert into indexes during chunk compression
If there are any indexes on the compressed chunk, insert into them
while inserting the heap data rather than reindexing the relation at
the end. This reduces the amount of locking on the compressed chunk
indexes, which created issues when merging chunks, and should help
with future updates of compressed data.
2023-06-26 09:37:12 +02:00
Zoltan Haindrich
81d4eb5cfb Add Ensure-s to reduce crashes in unexpected cases
It could happen that the chunk is dropped in the middle of processing
another command, which can crash the bgw_db_scheduler_fixed test.
Making sure that the system errors out instead of failing in an
assertion helps avoid the situation in which the postmaster drops all
clients in these cases.
2023-06-26 09:29:52 +02:00
Jan Nidzwetzki
81e2f35d4b Mark cagg_watermark as PARALLEL RESTRICTED
This patch marks the function cagg_watermark as PARALLEL RESTRICTED. It
partially reverts the change of
c0f2ed18095f21ac737f96fe93e4035dbfeeaf2c. The reason is as follows: for
transaction isolation levels below REPEATABLE READ it cannot be ensured
that parallel workers read the same watermark (e.g., using the READ
COMMITTED isolation level: worker A reads the watermark, the CAGG is
refreshed and the watermark changes, and worker B reads the newer
watermark). The different views of the CAGG can cause unexpected
results and crashes (e.g., the chunk exclusion excludes different
chunks in worker A and in worker B).

In addition, a correct snapshot is used when the watermark is read from
the CAGG and a TAP test is added, which detects inconsistent watermark
reads.

Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Co-authored-by: Zoltan Haindrich <zoltan@timescale.com>
2023-06-23 20:07:26 +02:00
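
As an illustration of the change above (a sketch only; the schema and
argument signature shown here are assumptions, not taken from the
commit):

    ALTER FUNCTION _timescaledb_internal.cagg_watermark(integer) PARALLEL RESTRICTED;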
Konstantina Skovola
a22e732c02 Fix flaky test bgw_db_scheduler_fixed
The flakiness was due to two inserts falling into the same chunk
instead of distinct ones, so the inserted data is now spread further
apart to ensure the rows fall into different chunks.
2023-06-23 18:11:09 +03:00
Zoltan Haindrich
d223000036 Chunk_create must add existing table or fail
Earlier, this function completed successfully if the requested range
already existed, regardless of whether an existing table was supplied
or not.
2023-06-23 13:39:17 +02:00
Zoltan Haindrich
b2132f00a7 Make validate_chunk_status accept Chunk as argument
This makes the calls to this method more straightforward and could
help with doing better checks inside the method.
2023-06-22 11:47:31 +02:00
Lakshmi Narayanan Sreethar
8b0ab41643 PG16: Use new function to check vacuum permission
postgres/postgres@b5d63824
2023-06-21 22:52:22 +05:30
Lakshmi Narayanan Sreethar
d96e72af60 PG16: Rename RelFileNode references to RelFileNumber or RelFileLocator
postgres/postgres@b0a55e4
2023-06-21 22:52:22 +05:30
Lakshmi Narayanan Sreethar
933285e646 PG16: Remove MemoryContextContains usage
Remove the usage of MemoryContextContains as it has been removed in PG16.

postgres/postgres@9543eff
2023-06-21 22:52:22 +05:30
Konstantina Skovola
1eb7e38d2d Enable ChunkAppend for space partitioned partial chunks
This is a follow-up patch for timescale#5599 which handles space
partitioned hypertables.
2023-06-20 11:41:59 +03:00
Bharathy
c48f905f78 Index scan support for UPDATE/DELETE.
During UPDATE/DELETE on compressed hypertables, we do a sequential
scan, which can be improved by supporting index scans.

In this patch, for a given UPDATE/DELETE query, if there are any
WHERE conditions specified using SEGMENT BY columns, we use an index
scan to fetch all matching rows. Fetched rows are decompressed and
moved to the uncompressed chunk, and a regular UPDATE/DELETE is
performed on the uncompressed chunk.
2023-06-15 19:59:04 +05:30
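
As an illustration of the change above (a sketch only; the table name
is hypothetical and device_id is assumed to be a segmentby column):

    -- Matching compressed rows are located via an index scan on the
    -- segmentby column, decompressed, and then updated/deleted as usual.
    UPDATE metrics SET status = 0 WHERE device_id = 17;
    DELETE FROM metrics WHERE device_id = 42;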
Jan Nidzwetzki
77318dced8 Fix broken download links
The download links for several platforms are broken. This patch removes
the links for the individual platforms and adds a link that points to
the self-hosted install docs instead (as proposed by the docs team, see
the discussion in #5762).

Fixes: #5762
2023-06-15 15:13:22 +02:00
Jan Nidzwetzki
f05b7f8105 Fixed the naming of the Windows GitHub action
The ignored workflow for windows-build-and-test does not set the name of
the actions properly. Therefore, these actions use the default naming.
For example, 'Regression Windows / build (15, windows-2022, Debug)'.
However, our CI expects names like 'PG15 Debug windows-2022' in the
required checks. This PR corrects the names of the jobs.
2023-06-15 14:25:30 +02:00
Valery Meleshkin
4273a27461 Ensure pg_config --cppflags are passed
CMAKE_CPP_FLAGS is not a thing at all. Furthermore,
CMAKE_CXX_FLAGS is not passed to a C compiler.
pg_config uses CPPFLAGS for all includes, and these need
to be passed into CMAKE_C_FLAGS as well.
2023-06-15 11:31:57 +02:00
Sotiris Stamokostas
14d08576fb Allow flaky-test label
With this PR we allow issues with the flaky-test label
to be added to our bugs board.
2023-06-15 10:16:12 +03:00
Sven Klemm
e302aa2ae9 Fix handling of Result nodes below Sort nodes in ConstraintAwareAppend
With PG15, Result nodes can appear between Sort nodes and
DecompressChunk when postgres tries to adjust the targetlist.
2023-06-13 18:42:02 +02:00
Jan Nidzwetzki
9c7ae3e8a9 Fixed two bugs in decompression sorted merge code
SQLSmith found two bugs in the compression sorted merge code.

* The unused_batch_states are not initialized properly. Therefore,
  non-existing unused batch states can be part of the BMS. This patch
  fixes the initialization.

* For performance reasons, we reuse the same TupleDesc across all
  TupleTableSlots. PostgreSQL sometimes uses TupleDesc data structures
  with active reference counting. The way we use the TupleDesc
  structures collides with the reference counting of PostgreSQL. This
  patch introduces a private TupleDesc copy without reference counting.
2023-06-12 20:30:13 +02:00
Lakshmi Narayanan Sreethar
4dce87a1c4 PG16: Refactor handling of PGDLLEXPORT macro definition
PG16 defines the PGDLLEXPORT macro as a proper visibility attribute. In
the previous versions, from PG12 to PG15, PGDLLEXPORT was always
defined as an empty macro. Considering all this, the code has now been
updated to skip defining PGDLLEXPORT if it has already been defined
properly. If not, the macro is redefined without any additional checks.

postgres/postgres@089480c
2023-06-12 17:25:49 +05:30
Lakshmi Narayanan Sreethar
0f1fde8d31 Mark PG16 as a supported version
Note that this change, in combination with the -DEXPERIMENTAL=ON cmake
flag, only allows us to compile the timescaledb code with PG16; it does
not mean PG16 is supported by the extension.
2023-06-12 17:25:49 +05:30
Sotiris Stamokostas
7df16ee560 Renamed need-more-info label
We plan to rename the need-more-info label to waiting-for-author.
This PR performs the needed adjustments in our GitHub actions.
2023-06-12 13:05:18 +03:00
Sotiris Stamokostas
8b10a6795c Compression test changes for PG14.0
We have changed the compression test by disabling parallel append in
some test cases because the regression test was failing only in PG14.0
but not in PG14.8 or any other PostgreSQL version.
2023-06-12 10:44:41 +03:00
Alexander Kuzmenkov
f26e656c0f Bulk decompression of compressed batches
Add a function to decompress a compressed batch entirely in one go, and
use it in some query plans. As a result of decompression, produce
ArrowArrays. They will be the base for the subsequent vectorized
computation of aggregates.

As a side effect, some heavy queries to compressed hypertables speed up
by about 15%. Point queries with LIMIT 1 can regress by up to 1 ms. If
the absolute highest performance is desired for such queries, bulk
decompression can be disabled by a GUC.
2023-06-07 16:21:50 +02:00
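
As an illustration of the change above (a sketch only; the exact GUC
name is an assumption, since the commit message does not spell it out):

    -- Disable bulk decompression for latency-critical point queries.
    SET timescaledb.enable_bulk_decompression = off;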
Alexander Kuzmenkov
c96870c91b Update gorilla and deltadelta fuzzing corpuses 2023-06-07 16:21:50 +02:00
Zoltan Haindrich
769646bdb6 Fix issues with scripts/test_update_smoke.sh
The test was failing on the first run by leaving a database behind as
a side effect. Between two steps, the extension was dropped without a
proper cleanup. A non-existent SQL function was called during cleanup.

This patch also removes the "debug mode"; every execution now leaves
the logs etc. in the /tmp directory for further inspection.
2023-06-07 13:30:21 +02:00
Jan Nidzwetzki
b8e674c137 Fixed handling of NULL values in bookend_sfunc
In the function bookend_sfunc, values are compared. If the first
processed value was a NULL value, it was copied into the state of the
sfunc. A subsequent comparison between the NULL value of the state and
a non-NULL value could lead to a crash.

This patch improves the handling of NULL values in bookend_sfunc.
2023-06-07 12:38:40 +02:00
Jan Nidzwetzki
f2eac72e2b Ensure tlist is present in decompress chunk plan
In PostgreSQL < 15, CustomScan nodes are projection capable. The planner
invokes create_plan_recurse with the flag CP_IGNORE_TLIST. So, the
target list of a CustomScan node can be NIL. However, we rely on the
target list to derive information for sorting.

This patch ensures that the target list is always populated before the
sort functions are called.

Fixes: #5738
2023-06-06 13:25:39 +02:00