Commit effc8efe148c4ec0048bd7c1dfe0ca01df2afdc9
accidentally placed .gitattributes inside a build-13 directory
instead of the root of the project. This commit removes build-13 and
fixes the structure.
When attaching a data node and specifying `repartition=>false`, the
current number of partitions should be kept instead of recalculating
the partitioning based on the number of data nodes.
Fixes #5157
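A minimal sketch of the affected call; the data node and hypertable
names are hypothetical:

```sql
-- Attach a new data node but keep the hypertable's current number of
-- space partitions instead of repartitioning across all data nodes.
SELECT attach_data_node('dn3', hypertable => 'conditions', repartition => false);
```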
This patch adds the ability to mark reference tables (tables that exist
on all data nodes of a multi-node installation) via an FDW option.
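A sketch of the new option, assuming the `reference_tables` option on
the `timescaledb_fdw` foreign data wrapper; the table name is
hypothetical:

```sql
-- Mark "device_meta" as a reference table present on all data nodes.
ALTER FOREIGN DATA WRAPPER timescaledb_fdw
    OPTIONS (ADD reference_tables 'device_meta');
```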
When a TidRangeScan is a child of a ChunkAppend or ConstraintAwareAppend
node, an error is reported: "invalid child of chunk append: Node (26)".
This patch fixes the issue by recognising TidRangeScan as a valid child.
Fixes: #4872
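A hedged sketch of a query that can produce this plan shape; the
hypertable name is hypothetical:

```sql
-- A ctid range condition lets the planner pick a TID Range Scan for
-- each chunk, which then appears as a child of the ChunkAppend node.
SELECT * FROM metrics WHERE ctid < '(10,1)'::tid;
```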
The previous attempt to fix this (PR #5130) was not entirely correct
because the bucket width calculation for interval widths was wrong.
Fix it by properly calculating the bucket width for intervals, using
the Postgres internal function `interval_part` to extract the epoch of
the interval and run the validations. For integer widths, use the
already calculated bucket width.
Fixes #5158, #5168
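The `interval_part` function is the C implementation behind
`date_part()`/`extract()` on intervals, so the epoch extraction can be
illustrated from SQL:

```sql
-- Extracting the epoch converts an interval bucket width into seconds,
-- which the validation can then compare arithmetically.
SELECT date_part('epoch', INTERVAL '1 day');   -- 86400
SELECT date_part('epoch', INTERVAL '1 month'); -- 2592000 (a month counts as 30 days)
```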
When deleting a data node with the option `drop_database=>true`, the
database is dropped even if the command fails.
Fix this behavior by dropping the remote database at the end of the
delete data node operation, so that other checks fail first.
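A sketch of the affected call, with a hypothetical node name:

```sql
-- The remote database is now dropped only at the end of the operation,
-- after all other checks have had a chance to fail first.
SELECT delete_data_node('dn1', drop_database => true);
```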
The previous PR #4307 marked `partialize_agg` and `finalize_agg` as
parallel safe, but this change leads to incorrect results in some
cases. Those functions are supposed to work in parallel, but it seems
that is not the case, and the root cause and the proper way to use them
in parallel queries are not yet evident, so we decided to revert this
change and provide correct results to users.
Fixes #4922
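For context, a hedged sketch of how these functions are typically
invoked; the table name is hypothetical:

```sql
-- Produce a serialized partial aggregate; after this revert the
-- function is no longer marked parallel safe, so partials are
-- computed serially.
SELECT _timescaledb_internal.partialize_agg(sum(value)) FROM metrics;
```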
Currently, the full regression check workflow is executed even if only
documentation is changed. This commit instead skips the regression
workflows when the only changes are to files that cannot affect the
outcome of the workflow.
This patch adjusts the code layout for chunk_dispatch to be similar
to the other custom nodes. All the files related to chunk_dispatch
are moved into a dedicated nodes/chunk_dispatch directory.
The telemetry_stats testcase uses random() with seed(1) to generate the
column values on which the hypertable is partitioned. The Postgres
commit postgres/postgres@3804539e48 updates the random() implementation
to use a better algorithm, causing the test to generate a different set
of rows on PG15. Due to this, the test failed on PG15 as the
distribution stats of the tuples have now changed. Fix that by creating
separate test outputs for PG15 and other releases.
Fixes #5037
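The mechanism can be seen directly in SQL: with the same seed, PG15
produces a different sequence than earlier releases because of the new
algorithm:

```sql
-- Seed the generator, then draw a few values; the sequence differs
-- between PG15 and earlier major versions.
SELECT setseed(1);
SELECT random() FROM generate_series(1, 3);
```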
During the creation of a CAgg on top of another CAgg we check whether
the bucket width is a multiple of the parent's, and for this arithmetic
we made the assumption that picking just the `month` part of the
`interval` for variable bucket sizes was enough.
This assumption was wrong for bucket sizes that don't have a month part
(e.g., buckets using timezones), leading to a division-by-zero error
because in that case the `month` part of the `interval` is equal to 0
(zero).
Fixed it by properly calculating the bucket width for variable-sized
buckets.
Fixes #5126
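A hedged sketch of the failing scenario, with illustrative names: a
CAgg on top of another CAgg where the bucket uses a timezone, so its
width has no month part:

```sql
-- "hourly" is assumed to be an existing CAgg with a 1-hour bucket; the
-- daily bucket below is timezone-aware, so its interval's month part is 0.
CREATE MATERIALIZED VIEW daily_tz
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', bucket, 'Europe/Berlin') AS bucket, sum(total) AS total
FROM hourly
GROUP BY 1;
```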
Creating a CAgg on a CAgg where the time column is in a different
order than in the original hypertable was raising the following
exception:
`ERROR: time bucket function must reference a hypertable dimension
column`
The problem was that during validation we were initializing an internal
data structure with the wrong hypertable metadata.
Fixes #5131
Adding a new column with a NULL constraint to a compressed hypertable
raises an error, but this makes no sense because NULL constraints in
Postgres do nothing: they are no-ops that exist just for compatibility
with other database systems:
https://www.postgresql.org/docs/current/ddl-constraints.html#id-1.5.4.6.6
Fixed it by ignoring NULL constraints when we check `ALTER TABLE
.. ADD COLUMN ..` on a compressed hypertable.
Fixes #5151
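A minimal sketch with a hypothetical compressed hypertable:

```sql
-- The explicit NULL "constraint" is a no-op in Postgres; this
-- statement previously errored on a compressed hypertable and is now
-- accepted.
ALTER TABLE metrics ADD COLUMN note TEXT NULL;
```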
This patch includes two changes to the PR handling workflow:
(1) It changes the trigger for the workflow to pull_request_target, so
PRs can now also be assigned to reviewers when the PR is opened from an
external source.
(2) A workflow has been added that automatically assigns PRs to the
author.
If we test every commit in master, we can allow GitHub to merge the PRs
automatically without requiring a manual rebase on the current master.
These rebases are a real time sink.
Currently, when ASSERT_IS_VALID_CHUNK fails, it is impossible to tell
which of the conditions failed without opening the coredump in a
debugger, as all the conditions are ANDed in a single Assert. This
patch splits the conditions into individual Asserts so you can
immediately see from the stack trace which condition failed.
A SELECT from a partially compressed chunk crashes due to a reference
to a NULL pointer. When generating paths for DecompressChunk,
uncompressed_partial_path can be NULL, which is not checked, thus
causing a crash. This patch checks for NULL before calling
create_append_path().
Fixes #5134
This release contains bug fixes since the 2.9.0 release.
This release is high priority for upgrade. We strongly recommend that you
upgrade as soon as possible.
**Bugfixes**
* #5072 Fix CAgg on CAgg bucket size validation
* #5101 Fix enabling compression on caggs with renamed columns
* #5106 Fix building against PG15 on Windows
* #5117 Fix postgres server restart on background worker exit
* #5121 Fix privileges for job_errors in update script
This patch changes INSERTs into compressed chunks to no longer
be immediately compressed but stored in the uncompressed chunk
instead and later merged with the compressed chunk by a separate
job.
This greatly simplifies the INSERT codepath, as we no longer have
to rewrite the target of INSERTs and compress on the fly, leading
to a roughly 2x improvement in the INSERT rate into compressed chunks.
Additionally, this improves TRIGGER support for INSERTs into
compressed chunks.
This is a necessary refactoring to allow UPSERT/UPDATE/DELETE on
compressed chunks in follow-up patches.
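A hedged sketch of the new flow, with hypothetical table and chunk
names:

```sql
-- The INSERT lands in the uncompressed part of the compressed chunk...
INSERT INTO metrics VALUES ('2023-01-01 00:00:00+00', 1, 0.5);
-- ...and is merged into the compressed part later, e.g. by a policy
-- job or a manual recompression.
CALL recompress_chunk('_timescaledb_internal._hyper_1_1_chunk');
```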
This commit enables PG15 in the following workflows:
- Regression Linux
- ABI test
- Memory tests
- Coverity
- SQLSmith
- Additional cron tests
Co-authored-by: Bharathy Satish <bharathy@timescale.com>
The bucket size of a Continuous Aggregate should be allowed to be
greater than or equal to that of the parent Continuous Aggregate,
because there are many cases where you actually want to roll up on
another dimension.
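For example, a hedged sketch with illustrative names, keeping the
parent's bucket width while rolling up on the device dimension:

```sql
-- "daily" is assumed to be an existing CAgg with a 1-day bucket and a
-- "device" column; the rollup keeps the bucket width and regroups by device.
CREATE MATERIALIZED VIEW daily_by_device
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', bucket) AS bucket, device, sum(total) AS total
FROM daily
GROUP BY 1, 2;
```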
When defining the compression `segmentby` parameter using multiple
columns, the column ordering is not respected for index creation.
This patch fixes the issue by maintaining the same order in which the
user defined the columns in the `segmentby` clause.
Fixes #5104
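A sketch of defining the segmentby columns; after this patch the index
on the compressed chunk follows this ordering (names hypothetical):

```sql
-- The index on the compressed chunk is created with device_id first,
-- then sensor_id, matching the order given here.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id, sensor_id'
);
```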
Use rand() instead of random() because the latter is not available
on Windows and Postgres stopped backporting it with PG15.
Ideally we would switch to the crypto functions added in PG15, but
that requires a bit more work, and this is the minimal change required
to get it to build against PG15 on Windows.
Add 2.9.0 to the update test scripts and adjust the downgrade scripts
for 2.9.0. Additionally, adjust the CHANGELOG to sync with the actual
release CHANGELOG and add PG15 to CI.
On caggs with realtime aggregation, changing the column name does
not update all the column aliases inside the view metadata.
This patch changes the code that creates the compression
configuration for caggs to get the column name from the materialization
hypertable instead of from the view internals.
Fixes #5100
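A hedged sketch of the scenario, with an illustrative cagg name:

```sql
-- Rename a column on the cagg, then enable compression; the
-- compression configuration now picks up the new name from the
-- materialization hypertable.
ALTER MATERIALIZED VIEW daily RENAME COLUMN total TO total_sum;
ALTER MATERIALIZED VIEW daily SET (timescaledb.compress = true);
```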
When CAggs on CAggs were introduced in commit 3749953, the regression
tests were split into 6 (six) different test suites.
Simplified it by grouping tests, reducing them to just 2 (two)
different test suites. This saves resources and time because each test
suite spawns its own Postgres instance.
The last minor versions for PG14 (14.6) and PG15 (15.1) were unlisted
by the chocolatey maintainers due to some issues.
Fixed it by hardcoding 14.5 until the packages become available
again.
When the CAggs migration was introduced in commit e34218ce, the
regression tests were split into 6 (six) different test suites.
Simplified it by grouping tests, reducing them to just 2 (two)
different test suites. This saves resources and time because each test
suite spawns its own Postgres instance.
The tree contains a lot of design and architecture documents, but they
are not linked together, so this commit adds a few additional READMEs
and builds a basic structure for the documentation.