Unless otherwise noted below, each TODO was converted to a comment
or put into an issue tracker.
test/sql/
- triggers.sql: Made required change
tsl/test/
- CMakeLists.txt: TODO complete
- bgw_policy.sql: TODO complete
- continuous_aggs_materialize.sql: TODO complete
- compression.sql: TODO complete
- compression_algos.sql: TODO complete
tsl/src/
- compression/compression.c:
- row_compressor_decompress_row: Expected complete
- compression/dictionary.c: FIXME complete
- materialize.c: TODO complete
- reorder.c: TODO complete
- simple8b_rle.h:
- compressor_finish: Removed (obsolete)
src/
- extension.c: Removed due to age
- adts/simplehash.h: TODOs are from copied Postgres code
- adts/vec.h: TODO is non-significant
- planner.c: Removed
- process_utility.c:
- process_altertable_end_subcmd: Removed (PG will handle case)
With ordered append, chunk exclusion occurs only along the primary
open "time" dimension, failing to exclude chunks along additional
partitioning dimensions. For instance, a query on a two-dimensional
hypertable "hyper" (time, device), such as
```
SELECT * FROM hyper
WHERE time > '2019-06-11 12:30'
AND device = 1
ORDER BY time;
```
would only exclude chunks based on the "time" column restriction,
but not the "device" column restriction. This causes more chunks
than necessary to be included in the query plan.
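For context, a minimal sketch of how a two-dimensional hypertable
like the one above could be defined (the "temp" column and the
partition count are illustrative assumptions, not taken from the
actual tests):
```
-- Illustrative setup matching the example query; "temp" and the
-- partition count of 4 are assumptions.
CREATE TABLE hyper (
    time   timestamptz NOT NULL,
    device integer     NOT NULL,
    temp   float
);
-- Primary open "time" dimension.
SELECT create_hypertable('hyper', 'time');
-- Additional space dimension on "device".
SELECT add_dimension('hyper', 'device', number_partitions => 4);
```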
The reason this happens is that chunk exclusion during ordered
append is based on pre-sorting the set of slices in the primary
dimension to determine the ordering. Chunks are then scanned
slice-by-slice in the order of the sorted slices. Since those scans
do not apply the restrictions on the other dimensions, chunks that
would otherwise not match are included in the result.
This change fixes the issue by using the "regular" chunk scan,
which accounts for multi-dimensional restrictions. This is followed
by a sort of the resulting chunks along the primary "time"
dimension. While this sometimes means sorting a larger set than the
initial slices in the primary "time" dimension, the resulting chunk
set is smaller. Sorting chunks also allows a secondary ordering on
chunk ID for chunks that belong to the same "time" slice. While this
additional ordering is not required for correct tuple ordering, it
gives slightly nicer EXPLAIN output since chunks are also ordered by
ID.
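As a sketch of how to observe the effect (the plan shape and chunk
names depend on the data and version, so this is not verbatim
output):
```
-- Illustrative only: compare the set of chunks in the append plan
-- before and after this change.
EXPLAIN (COSTS OFF)
SELECT * FROM hyper
WHERE time > '2019-06-11 12:30'
  AND device = 1
ORDER BY time;
-- With this change, only chunks matching both the "time" and the
-- "device" restrictions should appear, ordered by "time" slice and,
-- within a slice, by chunk ID.
```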
Postgres changed some behavior in 11.3 (commit 925f46f) so that
`apply_scanjoin_target_to_paths` resets all paths in partitioned
relations and re-does them (without calling the `set_rel_pathlist`
hook on the append relation again).
This broke our constraint-aware append and ordered append
optimizations because they are applied in `set_rel_pathlist`, and
that hook is not called again by Postgres. After some discussion on
pgsql-hackers it seems like this will be changed in the future.
This fix uses the enable_partitionwise_aggregate PG GUC, which is
disabled by default, to control whether our partitionwise
optimization is enabled. Thus, by default this optimization is
off for now.
This is a short-term fix until PG fixes the hook.
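Since the optimization now sits behind that GUC, a session has to
opt in explicitly; a minimal sketch:
```
-- enable_partitionwise_aggregate is a stock PostgreSQL 11 GUC that
-- defaults to off; enabling it turns our optimization back on.
SET enable_partitionwise_aggregate = on;
```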
This patch adds a new ChunkAppend node. This node combines the
functionality of ConstraintAwareAppend and Append and additionally
adds support for runtime chunk exclusion.
This patch only switches ordered append plans to the new node.
The patch also adds support for space-partitioned hypertables in
ordered append, as well as for hypertables whose indexes are not
present on all chunks.
Runtime chunk exclusion will allow chunks to be excluded at
execution time in JOINs, LATERAL JOINs, and correlated subqueries.
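As an illustrative sketch of a query that runtime chunk exclusion
targets (the outer "devices" table and its "last_seen" column are
assumptions made for the example):
```
-- Illustrative: "devices" and its columns are assumed for this
-- sketch.
SELECT d.id, h.*
FROM devices d,
     LATERAL (
         SELECT * FROM hyper
         WHERE device = d.id
           AND time > d.last_seen
         ORDER BY time DESC
         LIMIT 1
     ) h;
-- The time restriction depends on d.last_seen, which is only known
-- at execution time, so chunks can only be excluded at runtime.
```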
PostgreSQL 11 added support for query plans that do partitionwise
aggregation on partitioned tables. Such query plans push down
aggregates to individual partitions (either fully or partially) for
similar or better performance than regular plans due to, among other
things, improved locking.
The changes in this commit add the corresponding partitionwise
aggregation functionality for hypertables. To enable this
functionality on hypertables, we add partitioning metadata at the
planning stage to make the regular PostgreSQL planner believe it is
planning a partitioned table. Alternatively, we could have added the
corresponding planner paths in our own code, e.g., in the
create_upper_paths_hook, but this would require copying or
re-implementing a large amount of PostgreSQL planning code.
Note that partitionwise aggregation will only work with PostgreSQL 11.
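A sketch of the kind of aggregate such plans can push down, assuming
the "hyper" example hypertable from above and PostgreSQL 11 with the
GUC enabled:
```
-- Illustrative: assumes the "hyper" hypertable sketched earlier.
SET enable_partitionwise_aggregate = on;
SELECT device, avg(temp)
FROM hyper
WHERE time > '2019-06-11 12:30'
GROUP BY device;
-- The (partial or full) aggregation can now be pushed down to the
-- individual chunks, which act as the partitions during planning.
```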
As a side effect of making hypertables look like partitioned tables
during planning, some append plans will differ because the planner
removes any Result projection nodes from such plans, knowing it can
push projections down to the partitions instead. This also affects a
number of query-related tests, so these have been split into
version-specific tests.