429 Commits

Erik Nordström
b81033b835 Make data node command execution interruptible
The function to execute remote commands on data nodes used a blocking
libpq API that doesn't integrate with PostgreSQL interrupt handling,
making it impossible for a user or statement timeout to cancel a
remote command.

Refactor the remote command execution function to use a non-blocking
API and integrate with PostgreSQL signal handling via WaitEventSets.

Partial fix for #4958.

Refactor remote command execution function
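A minimal sketch of the user-visible effect, assuming a multi-node setup (the table name is hypothetical):

```sql
-- With this change, a hung data node no longer blocks forever:
-- a statement timeout (or Ctrl-C) now interrupts the remote command.
SET statement_timeout = '5s';
CALL distributed_exec($$ DROP TABLE IF EXISTS conditions $$);
```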
2023-02-03 13:15:28 +01:00
Konstantina Skovola
6bc8980216 Fix year not multiple of day/month in nested CAgg
Previously all intervals were converted to seconds using "epoch"
with date_part. However, this treats a year as 365.25 days to
account for leap years, leading to the unexpected situation that
a year is not a multiple of a day or a month.

Fixed by treating month-only intervals as multiples of 30 days.

Fixes #5231
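The underlying arithmetic is reproducible in plain PostgreSQL:

```sql
-- "epoch" treats a year as 365.25 days, so neither a year nor a
-- month converts to a whole number of days:
SELECT date_part('epoch', interval '1 year');   -- 31557600 (365.25 days)
SELECT date_part('epoch', interval '1 month');  -- 2629800  (30.4375 days)
SELECT date_part('epoch', interval '1 day');    -- 86400
```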
2023-02-02 12:14:37 +02:00
Sven Klemm
789bb26dfb Lock down search_path in SPI calls 2023-02-01 07:54:03 +01:00
Erik Nordström
d489ed6f32 Fix use of prepared statement in async module
Broken code caused the async connection module to never send queries
using prepared statements. Instead, queries were always sent via the
parameterized query statement.

Fix this so that prepared statements are used when created.
2023-01-31 12:01:03 +01:00
Erik Nordström
5d12a3883d Make connection establishment interruptible
Refactor the data node connection establishment so that it is
interruptible, e.g., by ctrl-c or `statement_timeout`.

Previously, the connection establishment used blocking libpq calls. By
instead using asynchronous connection APIs and integrating with
PostgreSQL interrupt handling, the connection establishment can be
canceled by an interrupt caused by a statement timeout or a user.

Fixes #2757
2023-01-30 17:48:59 +01:00
Erik Nordström
cce0e18c36 Manage life-cycle of connections via memory contexts
Tie the life cycle of a data node connection to the memory context it
is created on. Previously, a data node connection was automatically
closed at the end of a transaction, although often a connection needs
to live beyond a single transaction. For example, a connection cache
is maintained for data node connections, and, for such cases, a flag
was set on a connection to avoid closing it automatically.

Instead of tying connections to transactions, they are now life-cycle
managed via memory contexts. This simplifies the handling of
connections and avoids having to create exceptions to closing
connections at transaction end.
2023-01-30 15:46:21 +01:00
Mats Kindahl
5661ff1523 Add role-level security to job error log
Since the job error log can contain information from many different
sources and many different jobs, it is important to ensure that
visibility of the job error log entries is restricted to job owners.

This commit extends the view `timescaledb_information.job_errors` with
role-based checks so that a user can only see entries for jobs that she
has permission to view, and restricts the permissions on
`_timescaledb_internal.job_errors` so that users can only view the job
error log through the view. A special case is added so that the
superuser and the database owner can see all log entries, even if there
is no job id associated with the log entry.

Closes #5217
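A minimal sketch of the resulting behavior (the role name is hypothetical):

```sql
-- A regular user now sees only entries for jobs she owns.
SET ROLE alice;
SELECT * FROM timescaledb_information.job_errors;

-- Direct access to the underlying table is restricted; this is
-- expected to fail with a permission error for regular users.
SELECT * FROM _timescaledb_internal.job_errors;
```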
2023-01-30 12:13:00 +01:00
Sven Klemm
334864127d Stop blocking RETURNING for compressed chunks
Recent refactorings in the INSERT into compressed chunk code
path allowed us to support this feature but the check to
prevent users from using this feature was not removed as part
of that patch. This patch removes the blocker code and adds a
minimal test case.
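A minimal sketch of the now-allowed statement, assuming a compressed hypertable `metrics(time, device_id, value)` (hypothetical):

```sql
-- RETURNING on INSERT into a compressed chunk is no longer blocked.
INSERT INTO metrics (time, device_id, value)
VALUES (now(), 1, 23.5)
RETURNING time, device_id, value;
```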
2023-01-30 09:49:52 +01:00
Rafia Sabih
043092a97f Fix timestamp out of range
When the start or end for a refresh job is NULL, bucketing gives a
timestamp-out-of-range error because start and end are already the min
and max timestamp values before bucketing. Hence, skip bucketing for
this case.

Partly fixes #4117
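A sketch of the affected call, assuming a continuous aggregate named `conditions_daily` (hypothetical):

```sql
-- NULL bounds mean an open-ended refresh window; they are already the
-- min/max timestamp values, so bucketing them is now skipped.
CALL refresh_continuous_aggregate('conditions_daily', NULL, NULL);
```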
2023-01-26 20:11:21 +05:30
Rafia Sabih
a67b90e977 Allow joins in continuous aggregates
Enable support for joins in the query used for creating continuous
aggregates, with the following restrictions (a sketch follows below):
1. The join can involve only one hypertable and one normal table
2. The join must be an INNER JOIN
3. The join condition can only be an equality
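A minimal sketch within these restrictions, assuming a hypertable `conditions` and a plain table `devices` (both hypothetical):

```sql
CREATE MATERIALIZED VIEW conditions_summary
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', c.time) AS bucket,
       d.name,
       avg(c.temperature) AS avg_temp
FROM conditions c
JOIN devices d ON c.device_id = d.device_id  -- inner join on equality
GROUP BY bucket, d.name;
```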
2023-01-24 19:57:24 +05:30
Bharathy
f211294c61 Release 2.9.2
This release contains bug fixes since the 2.9.1 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #5114 Fix issue with deleting data node and dropping the database on multi-node
* #5133 Fix creating a CAgg on a CAgg where the time column is in a different order than in the original hypertable
* #5152 Fix adding column with NULL constraint to compressed hypertable
* #5170 Fix CAgg on CAgg variable bucket size validation
* #5180 Fix default data node availability status on multi-node
* #5181 Fix ChunkAppend and ConstraintAwareAppend with TidRangeScan child subplan
* #5193 Fix repartition behavior when attaching data node on multi-node
2023-01-23 15:55:10 +05:30
Fabrízio de Royes Mello
73df496c75 Fix CAgg on CAgg variable bucket size validation
The previous attempt to fix it (PR #5130) was not entirely correct
because the bucket width calculation for interval widths was wrong.

Fixed it by properly calculating the bucket width for intervals, using
the Postgres internal function `interval_part` to extract the epoch of
the interval and execute the validations. For integer widths, the
already calculated bucket width is used.

Fixes #5158, #5168
2023-01-13 12:38:58 -03:00
Erik Nordström
1e7b9bc558 Fix issue with deleting data node and dropping database
When deleting a data node with the option `drop_database=>true`, the
remote database was dropped even if the command subsequently failed.

Fix this behavior by dropping the remote database at the end of the
delete data node operation so that other checks fail first.
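A sketch of the affected call (the node name is hypothetical):

```sql
-- The remote database is now dropped only after all other checks
-- have passed, so a failing call no longer loses the database.
SELECT delete_data_node('dn1', drop_database => true);
```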
2023-01-13 13:54:27 +01:00
Fabrízio de Royes Mello
cd48553de5 Fix CAgg on CAgg variable bucket size validation
During the creation of a CAgg on top of another CAgg we check whether
the bucket width is a multiple of the parent's, and for this arithmetic
we assumed that picking just the `month` part of the `interval` was
enough for variable bucket sizes.

This assumption was wrong for bucket sizes that don't have a month
part (e.g., when using timezones), leading to a division-by-zero error,
because in that case the `month` part of the `interval` is 0 (zero).

Fixed it by properly calculating the bucket width for variable sized
buckets.

Fixes #5126
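The zero month part is easy to see in plain PostgreSQL:

```sql
-- Intervals without a month component have a month part of 0, which
-- previously fed the division in the multiple-of-parent check:
SELECT date_part('month', interval '1 week');   -- 0
SELECT date_part('month', interval '1 month');  -- 1
```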
2023-01-06 16:39:04 -03:00
Fabrízio de Royes Mello
73c97524a0 Fix CAgg on CAgg different column order
Creating a CAgg on a CAgg where the time column is in a different
order than in the original hypertable was raising the following
exception:

`ERROR:  time bucket function must reference a hypertable dimension
column`

The problem was that during validation we initialized an internal data
structure with the wrong hypertable metadata.

Fixes #5131
2023-01-06 13:04:06 -03:00
Fabrízio de Royes Mello
41d6a1f142 Fix adding column with NULL constraint
Adding a new column with a NULL constraint to a compressed hypertable
was raising an error, which makes no sense: a NULL constraint in
Postgres does nothing and exists just for compatibility with other
database systems:
https://www.postgresql.org/docs/current/ddl-constraints.html#id-1.5.4.6.6

Fixed it by ignoring NULL constraints when checking `ALTER TABLE
.. ADD COLUMN ..` on a compressed hypertable.

Fixes #5151
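A minimal sketch of the now-accepted statement, assuming a compressed hypertable `metrics` (hypothetical):

```sql
-- The NULL "constraint" is a no-op in Postgres and is now ignored
-- instead of raising an error on compressed hypertables.
ALTER TABLE metrics ADD COLUMN note TEXT NULL;
```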
2023-01-06 11:15:12 -03:00
Sven Klemm
93667df7d8 Release 2.9.1
This release contains bug fixes since the 2.9.0 release.
This release is high priority for upgrade. We strongly recommend that you
upgrade as soon as possible.

**Bugfixes**
* #5072 Fix CAgg on CAgg bucket size validation
* #5101 Fix enabling compression on caggs with renamed columns
* #5106 Fix building against PG15 on Windows
* #5117 Fix postgres server restart on background worker exit
* #5121 Fix privileges for job_errors in update script
2022-12-23 14:38:45 +01:00
Sven Klemm
4527f51e7c Refactor INSERT into compressed chunks
This patch changes INSERTs into compressed chunks so that new rows are
no longer compressed immediately but stored in the uncompressed chunk
instead, and later merged with the compressed chunk by a separate job.

This greatly simplifies the INSERT code path, as we no longer have to
rewrite the target of INSERTs and compress on the fly, leading to a
roughly 2x improvement in INSERT rate into compressed chunks.
Additionally, this improves TRIGGER support for INSERTs into
compressed chunks.

This is a necessary refactoring to allow UPSERT/UPDATE/DELETE on
compressed chunks in follow-up patches.
2022-12-21 12:53:29 +01:00
Sven Klemm
08bb21f7e6 2.9.0 Post-release adjustments
Add 2.9.0 to update test scripts and adjust downgrade scripts for
2.9.0. Additionally adjust CHANGELOG to sync with the actual release
CHANGELOG and add PG15 to CI.
2022-12-19 19:10:24 +01:00
Bharathy
dd65a6b436 Fix segfault after second ANALYZE
The issue occurs only in extended query protocol mode, where every
query goes through a PREPARE and EXECUTE phase. The first time ANALYZE
is executed, the relations to be vacuumed are extracted and saved in a
list that is referenced from the parse tree node. Once execution of
ANALYZE is complete, this list is cleaned up, but the reference to it
in the parse tree node is not. When ANALYZE is executed a second time,
a segfault happens as we access an invalid memory location.

Fixed the issue by restoring the original value in the parse tree node
once ANALYZE completes its execution.

Fixes #4857
2022-12-12 17:34:41 +05:30
Sachin
cd4509c2a3 Release 2.9.0
This release adds major new features since the 2.8.1 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:
* Hierarchical Continuous Aggregates (aka Continuous Aggregate on top of another Continuous Aggregate)
* Improve the `time_bucket_gapfill` function to allow specifying the timezone to bucket by
* Use `alter_data_node()` to change the data node configuration. This function introduces the option to configure the availability of the data node.

This release also includes several bug fixes.

**Features**
* #4476 Batch rows on access node for distributed COPY
* #4567 Exponentially backoff when out of background workers
* #4650 Show warnings when not following best practices
* #4664 Introduce fixed schedules for background jobs
* #4668 Hierarchical Continuous Aggregates
* #4670 Add timezone support to time_bucket_gapfill
* #4678 Add interface for troubleshooting job failures
* #4718 Add ability to merge chunks while compressing
* #4786 Extend the now() optimization to also apply to CURRENT_TIMESTAMP
* #4820 Support parameterized data node scans in joins
* #4830 Add function to change configuration of a data node
* #4966 Handle DML activity when datanode is not available
* #4971 Add function to drop stale chunks on a datanode

**Bugfixes**
* #4663 Don't error when compression metadata is missing
* #4673 Fix now() constification for VIEWs
* #4681 Fix compression_chunk_size primary key
* #4696 Report warning when enabling compression on hypertable
* #4745 Fix FK constraint violation error while insert into hypertable which references partitioned table
* #4756 Improve compression job IO performance
* #4770 Continue compressing other chunks after an error
* #4794 Fix degraded performance seen on timescaledb_internal.hypertable_local_size() function
* #4807 Fix segmentation fault during INSERT into compressed hypertable
* #4822 Fix missing segmentby compression option in CAGGs
* #4823 Fix a crash that could occur when using nested user-defined functions with hypertables
* #4840 Fix performance regressions in the copy code
* #4860 Block multi-statement DDL command in one query
* #4898 Fix cagg migration failure when trying to resume
* #4904 Remove BitmapScan support in DecompressChunk
* #4906 Fix a performance regression in the query planner by speeding up frozen chunk state checks
* #4910 Fix a typo in process_compressed_data_out
* #4918 Cagg migration orphans cagg policy
* #4941 Restrict usage of the old format (pre 2.7) of continuous aggregates in PostgreSQL 15.
* #4955 Fix cagg migration for hypertables using timestamp without timezone
* #4968 Check for interrupts in gapfill main loop
* #4988 Fix cagg migration crash when refreshing the newly created cagg

**Thanks**
* @jflambert for reporting a crash with nested user-defined functions.
* @jvanns for reporting hypertable FK reference to vanilla PostgreSQL partitioned table doesn't seem to work
* @kou for fixing a typo in process_compressed_data_out
* @xvaara for helping reproduce a bug with bitmap scans in transparent decompression
* @byazici for reporting a problem with segmentby on compressed caggs
* @tobiasdirksen for requesting Continuous aggregate on top of another continuous aggregate
* @xima for reporting a bug in Cagg migration
2022-12-05 19:33:45 +05:30
Jan Nidzwetzki
380464df9b Perform frozen chunk status check via trigger
Commit 9f4dcea30135d1e36d1c452d631fc8b8743b3995 introduced frozen
chunks. Checking whether a chunk is frozen has so far been done in the
query planner. If it is not possible to determine in the planner which
chunks are affected by a query (e.g., due to a cast in the WHERE
condition), all chunks are checked. This leads (1) to increased
planning time and (2) to the situation that a single frozen chunk could
cause queries to be rejected, even if the frozen chunk is not addressed
by the query.
2022-11-18 15:29:49 +01:00
Sven Klemm
8b6eb9024f Check for interrupts in gapfill main loop
Add CHECK_FOR_INTERRUPTS() macro to gapfill main loop.
2022-11-15 15:02:46 +01:00
Fabrízio de Royes Mello
6ae192631e Fix CAgg migration with timestamp without timezone
This was a leftover from the original implementation, where we didn't
add tests for a time dimension using `timestamp without time zone`.

Fixed it by dealing with this data type and adding regression tests.

Fixes #4956
2022-11-11 15:25:01 -03:00
Sven Klemm
9744b4f3bc Remove BitmapScan support in DecompressChunk
We don't want to support BitmapScans below DecompressChunk, as this
adds additional complexity to support with little benefit.
This also fixes a bug that could happen when we have a BitmapScan that
is parameterized on a compressed column, which led to an execution
failure with an error regarding incorrect attribute types in the
expression.
2022-11-08 23:29:29 +01:00
Ante Kresic
2475c1b92f Roll up uncompressed chunks into compressed ones
This change introduces a new option to the compression procedure which
decouples the uncompressed chunk interval from the compressed chunk
interval. It does this by allowing multiple uncompressed chunks into one
compressed chunk as part of the compression procedure. The main use-case
is to allow much smaller uncompressed chunks than compressed ones. This
has several advantages:
- Reduce the size of btrees on uncompressed data (thus allowing faster
inserts because those indexes are memory-resident).
- Decrease disk-space usage for uncompressed data.
- Reduce number of chunks over historical data.

From a UX point of view, we simply add a compression WITH-clause
option `compress_chunk_time_interval`. The user should set it according
to their needs for constraint exclusion over historical data. Ideally,
it should be a multiple of the uncompressed chunk interval, and we
throw a warning if it is not. A sketch follows below.
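A hedged sketch of the new option, assuming a hypertable `metrics` with 1 day chunks (names hypothetical):

```sql
-- Roll up 1 day uncompressed chunks into 1 week compressed chunks;
-- the interval should be a multiple of the chunk interval, otherwise
-- a warning is thrown.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_chunk_time_interval = '1 week'
);
```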
2022-11-02 15:14:18 +01:00
Alexander Kuzmenkov
1cc8c15cad Do not clobber the baserel cache on UDF error
The baserel cache should only be allocated and freed by the top-level
query.
2022-11-01 18:01:26 +04:00
Jan Nidzwetzki
e555eea9db Fix performance regressions in the copy code
In 8375b9aa536a619a5ac2644e0dae3c25880a4ead, a patch was added to handle
chunks being closed during an ongoing copy operation. However, that
patch introduced a performance regression: all MultiInsertBuffers were
deleted after they were flushed. In this PR, the performance regression
is fixed so that the most commonly used MultiInsertBuffers survive
flushing.

The 51259b31c4c62b87228b059af0bbf28caa143eb3 commit changed the way the
per-tuple context is used; since that commit, more objects are stored in
this context. On PG < 14, the size of the context was used to determine
the tuple size, so the extra objects in the context led to wrong (very
large) results and a flush after almost every tuple read.

The cache synchronization introduced in
296601b1d7aba7f23aea3d47c617e2d6df81de3e is reverted. With the current
implementation, `MAX_PARTITION_BUFFERS` buffers survive the flush. If
`timescaledb.max_open_chunks_per_insert` were lower than
`MAX_PARTITION_BUFFERS`, a buffer flush would be performed after each
tuple read.
2022-10-21 09:02:03 +02:00
Bharathy
38878bee16 Fix segmentation fault during INSERT into compressed hypertable
An INSERT into a compressed hypertable with a number of open chunks
greater than ts_guc_max_open_chunks_per_insert causes a segmentation
fault. A new row that needs to be inserted into a compressed chunk has
to be compressed. The memory required for compressing a row is
allocated from the RowCompressor::per_row_ctx memory context. Once the
row is compressed, ExecInsert() is called, where memory from the same
context is allocated and freed instead of using the "Executor State"
context. This causes memory corruption.

Fixes: #4778
2022-10-13 20:48:23 +05:30
Sven Klemm
8cda0e17ec Extend the now() optimization to also apply to CURRENT_TIMESTAMP
The optimization that constifies certain now() expressions before
hypertable expansion did not apply to CURRENT_TIMESTAMP even
though it is functionally similar to now(). This patch extends the
optimization to CURRENT_TIMESTAMP.
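A sketch of two now-equivalent queries (the hypertable `metrics` is hypothetical):

```sql
-- Both time constraints can now be constified before hypertable
-- expansion, enabling chunk exclusion at plan time.
SELECT count(*) FROM metrics WHERE time > now() - interval '1 day';
SELECT count(*) FROM metrics WHERE time > CURRENT_TIMESTAMP - interval '1 day';
```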
2022-10-05 20:46:30 +02:00
Jan Nidzwetzki
12b7b9f665 Release 2.8.1
This release is a patch release. We recommend that you upgrade at the
next available opportunity.

**Bugfixes**
* #4454 Keep locks after reading job status
* #4658 Fix error when querying a compressed hypertable with compress_segmentby on an enum column
* #4671 Fix a possible error while flushing the COPY data
* #4675 Fix bad TupleTableSlot drop
* #4676 Fix a deadlock when decompressing chunks and performing SELECTs
* #4685 Fix chunk exclusion for space partitions in SELECT FOR UPDATE queries
* #4694 Change parameter names of cagg_migrate procedure
* #4698 Do not use row-by-row fetcher for parameterized plans
* #4711 Remove support for procedures as custom checks
* #4712 Fix assertion failure in constify_now
* #4713 Fix Continuous Aggregate migration policies
* #4720 Fix chunk exclusion for prepared statements and dst changes
* #4726 Fix gapfill function signature
* #4737 Fix join on time column of compressed chunk
* #4738 Fix error when waiting for remote COPY to finish
* #4739 Fix continuous aggregate migrate check constraint
* #4760 Fix segfault when INNER JOINing hypertables
* #4767 Fix permission issues on index creation for CAggs

**Thanks**
* @boxhock and @cocowalla for reporting a segfault when JOINing hypertables
* @carobme for reporting constraint error during continuous aggregate migration
* @choisnetm, @dustinsorensen, @jayadevanm and @joeyberkovitz for reporting a problem with JOINs on compressed hypertables
* @daniel-k for reporting a background worker crash
* @justinpryzby for reporting an error when compressing very wide tables
* @maxtwardowski for reporting problems with chunk exclusion and space partitions
* @yuezhihan for reporting GROUP BY error when having compress_segmentby on an enum column
2022-10-05 14:40:25 +02:00
Rafia Sabih
f7c769c684 Allow manual index creation in CAggs
The materialized hypertable resides in the `_timescaledb_internal`
schema, which resulted in a permission error when a non-superuser
created an index manually. To solve this, it now switches to the
timescaledb user before creating the index on the CAgg.

Fixes #4735
2022-09-30 07:55:38 -07:00
Sven Klemm
1d4b9d6977 Fix join on time column of compressed chunk
Do not allow paths that are parameterized on a
compressed column to exist when creating paths
for a compressed chunk.
2022-09-29 10:36:02 +02:00
Sven Klemm
940187936c Fix segfault when INNER JOINing hypertables
This fixes a segfault when INNER JOINing two hypertables that are
ordered by time.
2022-09-28 17:12:45 +02:00
Ante Kresic
cc110a33a2 Move ANALYZE after heap scan during compression
Depending on the statistics target, running ANALYZE on a chunk before
compression can cause a lot of random IO operations for chunks that
are bigger than the number of pages ANALYZE needs to read. By moving
that operation to after the heap is loaded into memory for sorting,
we increase the chance of hitting the cache and reduce the disk
operations necessary to execute compression jobs.
2022-09-28 14:40:52 +02:00
Bharathy
f6dd55a191 Hypertable FK reference to partitioned table
Consider a hypertable which has a foreign key constraint on a
referenced table that is a partitioned table. In such a case, the
foreign key constraint is duplicated for each partition. When we
insert into the hypertable, we end up checking the foreign key
constraint multiple times, which obviously leads to a foreign key
constraint violation. Instead, we now check only the foreign key
constraint of the parent table of the partitioned table.

Fixes #4684
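A minimal sketch of the affected schema (all names hypothetical):

```sql
CREATE TABLE devices (id int PRIMARY KEY) PARTITION BY RANGE (id);
CREATE TABLE devices_0 PARTITION OF devices FOR VALUES FROM (0) TO (1000);

CREATE TABLE metrics (
    time      timestamptz NOT NULL,
    device_id int REFERENCES devices (id)
);
SELECT create_hypertable('metrics', 'time');

INSERT INTO devices VALUES (1);
-- Previously this could raise a spurious FK violation because the
-- constraint was checked once per partition; now it is checked only
-- against the parent table.
INSERT INTO metrics VALUES (now(), 1);
```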
2022-09-27 21:09:05 +05:30
Alexander Kuzmenkov
6011c1446e Fix error when waiting for remote COPY to finish
Pass proper flags to WaitLatchOrSocket, and fix the swapped
arguments.
2022-09-23 13:46:03 +03:00
Sven Klemm
2529ae3f68 Fix chunk exclusion for prepared statements and dst changes
The constify code that constifies TIMESTAMPTZ expressions when doing
chunk exclusion did not account for daylight saving time switches,
leading to different calculation outcomes when the timezone changes.
This patch adds a 4 hour safety buffer to any such calculations.
2022-09-22 18:16:20 +02:00
Fabrízio de Royes Mello
217f514657 Fix continuous aggregate migrate check constraint
Instances upgraded to 2.8.0 will end up with a wrong check constraint
in catalog table `continuous_aggregate_migrate_plan_step`.

Fixed it by dropping and re-adding the constraint with the correct checks.

Fix #4727
2022-09-22 11:33:29 -03:00
Jan Nidzwetzki
de30d190e4 Fix a deadlock in chunk decompression and SELECTs
This patch fixes a deadlock between chunk decompression and SELECT
queries executed in parallel. The change in
a608d7db614c930213dee8d6a5e9d26a0259da61 requests an AccessExclusiveLock
for the decompressed chunk instead of the compressed chunk, resulting in
deadlocks.

In addition, an isolation test has been added to test that SELECT
queries on a chunk that is currently decompressed can be executed.

Fixes #4605
2022-09-22 14:37:14 +02:00
Bharathy
d00a55772c Report warning when enabling compression on wide tables
Consider a compressed hypertable that has many columns (more than 600).
In the call to compress_chunk(), the compressed tuple size can exceed
8K, which causes an error such as "row is too big: size 10856, maximum
size 8160."

This patch estimates the tuple size of the compressed hypertable and
reports a warning when compression is enabled on the hypertable. Thus
the user becomes aware of the problem before calling compress_chunk().

Fixes #4398
2022-09-17 11:24:23 +05:30
Sven Klemm
d2baef3ef3 Fix planner chunk exclusion for VIEWs
Allow planner chunk exclusion in subqueries. When deciding whether a
query may benefit from constifying now() and we encounter a subquery,
peek into the subquery and check whether the constraint references a
hypertable partitioning column.
Fixes #4524
2022-09-12 17:29:14 +02:00
Bharathy
b869f91e25 Show warnings during create_hypertable().
The base table on which a hypertable is created should define columns
with proper data types. As per the Postgres best practices wiki
(https://wiki.postgresql.org/wiki/Don't_Do_This), one should not define
columns as CHAR, VARCHAR, or VARCHAR(N), but use the TEXT data type
instead. Similarly, instead of timestamp, one should use timestamptz.
This patch reports a WARNING to the end user when creating a
hypertable if the underlying parent table has columns of the
above-mentioned data types.
Fixes #4335
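A sketch of a table definition expected to trigger the warnings under this patch (names hypothetical):

```sql
CREATE TABLE readings (
    time  timestamp,    -- warns: prefer timestamptz
    note  varchar(40),  -- warns: prefer text
    value float
);
SELECT create_hypertable('readings', 'time');
```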
2022-09-12 18:47:47 +05:30
Sven Klemm
f27e627341 Fix chunk exclusion for space partitions in SELECT FOR UPDATE queries
Since we do not use our own hypertable expansion for SELECT FOR UPDATE
queries, we need to make sure to add the extra information necessary to
get hashed space partitions working with the native Postgres
inheritance expansion.
2022-09-11 10:57:54 +02:00
Sven Klemm
6de979518d Fix compression_chunk_size primary key
The primary key for compression_chunk_size was defined as (chunk_id,
compressed_chunk_id), but other places assumed chunk_id is actually
unique and would error when it was not. Since it makes no sense to
have multiple entries per chunk (such a reference would point to a no
longer existing chunk), this patch changes the primary key to chunk_id
only.
2022-09-08 22:28:20 +02:00
Sven Klemm
b34b91f18b Add timezone support to time_bucket_gapfill
This patch adds a new time_bucket_gapfill function that
allows bucketing in a specific timezone.

You can gapfill with explicit timezone like so:
`SELECT time_bucket_gapfill('1 day', time, 'Europe/Berlin') ...`

Unfortunately this introduces an ambiguity with some previous
call variations when an untyped start/finish argument was passed
to the function. Some queries might need to be adjusted and either
explicitly name the positional argument or resolve the type ambiguity
by casting to the intended type.
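A hedged sketch of resolving the ambiguity by casting (the table and column names are hypothetical):

```sql
-- If previously untyped start/finish literals are now ambiguous,
-- cast them explicitly to the intended type:
SELECT time_bucket_gapfill('1 day', time,
                           '2022-01-01'::timestamptz,
                           '2022-02-01'::timestamptz) AS bucket,
       avg(value)
FROM metrics
WHERE time >= '2022-01-01' AND time < '2022-02-01'
GROUP BY bucket;
```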
2022-09-07 16:37:53 +02:00
Sven Klemm
39a700a4de Fix changelog entries added in wrong place
Commit ed212b44 added the changelog entries to the bottom of the
changelog instead of the top.
2022-09-01 18:32:10 +02:00
Bharathy
ed212b4442 GROUP BY error when setting compress_segmentby with an enum column
Using a custom ENUM data type in the GROUP BY clause of a query on a
compressed hypertable raised an error.

Fixed it by checking, while generating scan paths for the query,
whether the SEGMENT BY column is a custom ENUM type, and reporting a
valid error message in that case.

Fixes #3481
2022-09-01 07:31:07 +05:30
Sven Klemm
f432d7f931 Release 2.8.0
This release adds major new features since the 2.7.2 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* time_bucket now supports bucketing by month, year and timezone
* Improve performance of bulk SELECT and COPY for distributed hypertables
* 1 step CAgg policy management
* Migrate Continuous Aggregates to the new format

**Features**
* #4188 Use COPY protocol in row-by-row fetcher
* #4307 Mark partialize_agg as parallel safe
* #4380 Enable chunk exclusion for space dimensions in UPDATE/DELETE
* #4384 Add schedule_interval to policies
* #4390 Faster lookup of chunks by point
* #4393 Support intervals with day component when constifying now()
* #4397 Support intervals with month component when constifying now()
* #4405 Support ON CONFLICT ON CONSTRAINT for hypertables
* #4412 Add telemetry about replication
* #4415 Drop remote data when detaching data node
* #4416 Handle TRUNCATE TABLE on chunks
* #4425 Add parameter check_config to alter_job
* #4430 Create index on Continuous Aggregates
* #4439 Allow ORDER BY on continuous aggregates
* #4443 Add stateful partition mappings
* #4484 Use non-blocking data node connections for COPY
* #4495 Support add_dimension() with existing data
* #4502 Add chunks to baserel cache on chunk exclusion
* #4545 Add hypertable distributed argument and defaults
* #4552 Migrate Continuous Aggregates to the new format
* #4556 Add runtime exclusion for hypertables
* #4561 Change get_git_commit to return full commit hash
* #4563 1 step CAgg policy management
* #4641 Allow bucketing by month, year, century in time_bucket and time_bucket_gapfill
* #4642 Add timezone support to time_bucket

**Bugfixes**
* #4359 Create composite index on segmentby columns
* #4374 Remove constified now() constraints from plan
* #4416 Handle TRUNCATE TABLE on chunks
* #4478 Synchronize chunk cache sizes
* #4486 Adding boolean column with default value doesn't work on compressed table
* #4512 Fix unaligned pointer access
* #4519 Throw better error message on incompatible row fetcher settings
* #4549 Fix dump_meta_data for windows
* #4553 Fix timescaledb_post_restore GUC handling
* #4573 Load TSL library on compressed_data_out call
* #4575 Fix use of `get_partition_hash` and `get_partition_for_key` inside an IMMUTABLE function
* #4577 Fix segfaults in compression code with corrupt data
* #4580 Handle default privileges on CAggs properly
* #4582 Fix assertion in GRANT .. ON ALL TABLES IN SCHEMA
* #4583 Fix partitioning functions
* #4589 Fix rename for distributed hypertable
* #4601 Reset compression sequence when group resets
* #4611 Fix a potential OOM when loading large data sets into a hypertable
* #4624 Fix heap buffer overflow
* #4627 Fix telemetry initialization
* #4631 Ensure TSL library is loaded on database upgrades
* #4646 Fix time_bucket_ng origin handling
* #4647 Fix the error "SubPlan found with no parent plan" that occurred if using joins in RETURNING clause.

**Thanks**
* @AlmiS for reporting error on `get_partition_hash` executed inside an IMMUTABLE function
* @Creatation for reporting an issue with renaming hypertables
* @janko for reporting an issue when adding bool column with default value to compressed hypertable
* @jayadevanm for reporting error of TRUNCATE TABLE on compressed chunk
* @michaelkitson for reporting permission errors using default privileges on Continuous Aggregates
* @mwahlhuetter for reporting error in joins in RETURNING clause
* @ninjaltd and @mrksngl for reporting a potential OOM when loading large data sets into a hypertable
* @PBudmark for reporting an issue with dump_meta_data.sql on Windows
* @ssmoss for reporting an issue with time_bucket_ng origin handling
2022-08-31 16:33:21 +02:00
Alexander Kuzmenkov
ae6773fca6 Fix joins in RETURNING
To make it work, it is enough to properly pass the parent of the
PlanState while initializing the projection in RETURNING clause.
2022-08-31 14:14:34 +03:00