15 Commits

Author SHA1 Message Date
gayyappan
6dad1f246a Add joininfo to compressed rel
If the joininfo for a rel is not available, the index path
cannot compute the correct filters for parameterized paths
as the RelOptInfo's ppilist is set up using information
from the joininfo.

Fixes 1558
2019-12-31 13:21:09 -05:00
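
A minimal Python sketch of the dependency described above, assuming a simplified representation of join clauses; param_path_clauses and the dict structure are illustrative, not PostgreSQL's actual planner API:

    def param_path_clauses(joininfo, outer_relids):
        # A parameterized path's extra filters are drawn from the rel's
        # joininfo, keeping only clauses whose outer side is supplied by
        # the parameterizing rels. With an empty joininfo, nothing is found.
        return [c["clause"] for c in joininfo
                if c["required_outer"] <= outer_relids]

    joininfo = [{"clause": "compressed.device_id = d.device_id",
                 "required_outer": {"d"}}]
    print(param_path_clauses(joininfo, {"d"}))  # filter available for the index path
    print(param_path_clauses([], {"d"}))        # empty joininfo -> filter is lost
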
Matvey Arye
2f7d69f93b Make continuous agg relative to now()
Previously, refresh_lag in continuous aggs was calculated
relative to the maximum timestamp in the table. Change the
semantics so that it is relative to now(). This is more
intuitive.

Requires an integer_now function applied to hypertables
with integer-based time dimensions.
2019-11-21 14:17:37 -05:00
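
A minimal Python sketch of the semantic change described above; refresh_up_to and its arguments are illustrative names, not the actual TimescaleDB functions:

    from datetime import datetime, timedelta

    def refresh_up_to(refresh_lag: timedelta, now: datetime,
                      max_table_timestamp: datetime) -> datetime:
        # Old semantics: materialize up to the newest row minus the lag.
        #   return max_table_timestamp - refresh_lag
        # New semantics: materialize up to now() minus the lag, regardless
        # of how far ahead or behind the newest inserted row is.
        return now - refresh_lag

    print(refresh_up_to(timedelta(hours=1),
                        datetime(2019, 11, 21, 14, 0),
                        datetime(2019, 11, 21, 23, 0)))  # 2019-11-21 13:00
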
Matvey Arye
aaa45df369 Fix varoattno when creating ec for segment by.
We reset the varoattno when creating the equivalence
member for segment by columns. varoattno is used for
finding equivalence members (em) when searching for pathkeys
(although, strangely, not for indexclauses). Without this change
the code for finding matching ems differs in the case where attnos
have changed and where they haven't.

Fixing this allows the planner to plan more types of paths
for several tests. Because of the way the cost fuzzer in
`compare_path_costs_fuzzily` interacts with disabling seqscans,
some choices the planner makes have changed (the cost is pretty
much dominated by the penalty of the seqscan, so it picks the
first available path). We've changed some enable_seqscan clauses
to get around this and have the planner show what we want in tests.

Also delete transparent_decompression-9.6.out since compression is
disabled on 9.6.
2019-10-29 19:02:58 -04:00
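
A minimal Python sketch of why the stored varoattno matters when searching for pathkeys; the dict-based members and find_matching_member are illustrative, not PostgreSQL's equivalence-class code:

    def find_matching_member(members, varno, varoattno):
        # Pathkey lookup compares the whole Var, so a member only matches
        # when varno *and* varoattno agree with the query's Var.
        return next((m for m in members
                     if m["varno"] == varno and m["varoattno"] == varoattno),
                    None)

    # Member created without resetting varoattno (attnos have changed):
    print(find_matching_member([{"varno": 1, "varoattno": 3}], 1, 2))  # None
    # Member created with varoattno reset to match the rel:
    print(find_matching_member([{"varno": 1, "varoattno": 2}], 1, 2))  # match
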
Joshua Lockerman
28653640af Fix redundant sorts and other planner fixes
This commit changes the sort-before-decompression logic from always
sorting when possible to sorting only when it is possible and the
underlying scan of the compressed chunk does not already return tuples
in the correct order. The pathkeys used by the decompression node are
the query_pathkeys if ordering can be pushed down. The equivalent
compressed_pathkeys are saved in a separate field. At plan time,
a sort is put in between the scan and the decompress node if the
scan's pathkeys do not satisfy the ordering of the decompressed
path's compressed_pathkeys field.

Other planner fixes:

Allow using ordered paths when ordering by a subset
of segment_by columns (and no other columns).

Adjust costing of ordered paths to add costs for sorting
when it is needed. That prevents the planner from using
ordered paths when it doesn't make use of the ordering.

EC members of compressed rels are marked as children.
In order to correctly find them when building compressed scans,
we have to pass down the relids being searched instead of
NULL; otherwise child EMs are not considered. This
improves index pushdowns.
2019-10-29 19:02:58 -04:00
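
A minimal Python sketch of the sort decision described above, with pathkeys simplified to column names; needs_sort is an illustrative stand-in for the planner's pathkey-containment check:

    def pathkeys_satisfied(required, provided):
        # True if the provided ordering already yields the required one
        # (the required pathkeys are a leading prefix of the provided ones).
        return provided[:len(required)] == required

    def needs_sort(compressed_pathkeys, scan_pathkeys):
        # Put a Sort between the compressed-chunk scan and the decompress
        # node only when the scan does not already deliver the ordering
        # the decompressed path promised.
        return not pathkeys_satisfied(compressed_pathkeys, scan_pathkeys)

    print(needs_sort(["device_id"], ["device_id", "_ts_meta_sequence_num"]))  # False
    print(needs_sort(["device_id"], []))                                      # True
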
Sven Klemm
4d12f5b8f3 Fix transparent decompression sort interaction
The sort optimization adds a new index path, with the modified
pathkeys, to the rel's pathlist. This optimization needs to happen
before the DecompressChunk paths get generated; otherwise those paths
will survive in the pathlist and a query on a compressed chunk will
target the empty chunk of the uncompressed hypertable.
2019-10-29 19:02:58 -04:00
Sven Klemm
32cb4a6af8 Add tableoid support for transparent decompression
This patch adds support for the tableoid system column to transparent
decompression. tableoid will be the relid of the uncompressed chunk.
All other system columns will still throw an error when queried.
2019-10-29 19:02:58 -04:00
Joshua Lockerman
47da729236 Allow pushdown of quals containing params
This commit enables the pushdown of quals containing a param. This
allows us to filter on such expressions before decompression, speeding
up e.g. lastpoint by approximately 2x.
2019-10-29 19:02:58 -04:00
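
A minimal Python sketch of the pushdown test this change relaxes; the node tuples and is_pushable are illustrative, not the extension's qual-walker:

    def is_pushable(qual_nodes, pushable_columns):
        # A qual may run before decompression if it only references columns
        # available on the compressed chunk plus constants and, with this
        # change, query parameters.
        for kind, value in qual_nodes:
            if kind == "column" and value not in pushable_columns:
                return False
            if kind not in ("column", "const", "param"):
                return False
        return True

    # e.g. a lastpoint-style qual  device_id = $1  on a segmentby column:
    print(is_pushable([("column", "device_id"), ("param", 1)], {"device_id"}))  # True
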
Joshua Lockerman
1502842134 Improve JOIN handling for compressed chunks
This commit improves the JOIN handling of compressed hypertables,
specifically enabling NestLoop JOINs that filter on segmentby columns
before the table is decompressed. It does this via the following
changes:

1. It adds equivalence-members relating each segmentby on the
   decompressed-chunk to the equivalent column on the compressed chunk.
2. It pulls the ParamInfo and has_eclass_joins from the compressed path
   to the DecompressedPath.
3. It fixes up the quals in the DecompressPlan to remove ones redundant
   with IndexScans, and to ensure that any remaining quals refer to
   columns on the decompressed-chunk.
2019-10-29 19:02:58 -04:00
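
A minimal Python sketch of why change 1 above pays off: once the segmentby column on the decompressed chunk is known to be equivalent to the column on the compressed chunk, a nested-loop parameter can reject batches before decompression. The data layout and function names are illustrative:

    compressed_batches = [
        {"device_id": 1, "payload": "<compressed blob 1>"},
        {"device_id": 2, "payload": "<compressed blob 2>"},
    ]

    def decompress(batch):
        # Stand-in for per-batch decompression of the column data.
        yield {"device_id": batch["device_id"], "value": batch["payload"]}

    def nestloop_inner_scan(join_param_device_id):
        for batch in compressed_batches:
            if batch["device_id"] != join_param_device_id:
                continue                    # rejected while still compressed
            yield from decompress(batch)    # only matching batches pay this cost

    print(list(nestloop_inner_scan(2)))
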
Sven Klemm
4c3bb6d2d6 Add JOIN tests for transparent decompression 2019-10-29 19:02:58 -04:00
Joshua Lockerman
965054658e Enable IndexScans on compressed chunks
This commit enables IndexScans on the segmentby columns of compressed
chunks, if they have an index. It makes three changes to enable this:

1. It creates a DecompressChunkPath for every path planned on the
   compressed chunk, not only the cheapest one.
2. It sets up the reltargetlist on the compressed RelOptInfo to accurately
   reflect the columns of the compressed chunk that are read, instead
   of leaving it empty (needed to prevent IndexOnlyScans from being
   planned).
3. It plans IndexPaths, not only SeqScanPaths.
2019-10-29 19:02:58 -04:00
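
A minimal Python sketch of change 1 above; build_decompress_paths and the path dicts are illustrative, not the extension's API:

    def build_decompress_paths(compressed_pathlist):
        # Wrap every path on the compressed chunk, not only the cheapest one,
        # so that an index path on a segmentby column survives even when a
        # seqscan looks cheaper in isolation.
        return [{"kind": "DecompressChunkPath", "subpath": p}
                for p in compressed_pathlist]

    compressed_pathlist = [
        {"kind": "SeqScanPath", "total_cost": 100.0},
        {"kind": "IndexPath", "total_cost": 120.0},
    ]
    for path in build_decompress_paths(compressed_pathlist):
        print(path["subpath"]["kind"])
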
Sven Klemm
e2c03e40aa Add support for pathkey pushdown for transparent decompression
This patch adds support for producing ordered output. All
segmentby columns need to be a prefix of the pathkeys, and the
orderby specified for the compression needs to exactly match the
rest of the pathkeys.
2019-10-29 19:02:58 -04:00
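
A minimal Python sketch of the ordering rule stated above, with pathkeys simplified to column names; can_push_down_ordering is an illustrative name:

    def can_push_down_ordering(query_pathkeys, segmentby, compress_orderby):
        # All segmentby columns must form the leading prefix of the query's
        # pathkeys, and whatever follows must line up with the compression
        # orderby.
        prefix = query_pathkeys[:len(segmentby)]
        if set(prefix) != set(segmentby):
            return False
        remainder = query_pathkeys[len(segmentby):]
        return remainder == compress_orderby[:len(remainder)]

    print(can_push_down_ordering(["device_id", "time"], ["device_id"], ["time"]))  # True
    print(can_push_down_ordering(["time", "device_id"], ["device_id"], ["time"]))  # False
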
Matvey Arye
2bf97e452d Push down quals to segment meta columns
This commit pushes down quals on order_by columns to make
use of the SegmentMetaMinMax objects. Namely, =, <, <=, >, >= quals
can now be pushed down.

We also remove filters from decompress node for quals that
have been pushed down and don't need a recheck.

This commit also changes tests to add more segment by and
order-by columns.

Finally, we rename the segment meta accessor functions to be shorter.
2019-10-29 19:02:58 -04:00
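
A minimal Python sketch of the min/max translation described above; batch_may_match is an illustrative name for the batch-level filter derived from a pushed-down qual:

    def batch_may_match(op, const, seg_min, seg_max):
        # A row-level qual "col <op> const" on an orderby column is turned
        # into a check against the segment's stored min/max, so batches that
        # cannot contain a match are skipped before decompression. A True
        # result only means the batch *may* match, which is why some quals
        # keep a recheck on the decompressed rows.
        if op == "=":  return seg_min <= const <= seg_max
        if op == "<":  return seg_min < const
        if op == "<=": return seg_min <= const
        if op == ">":  return seg_max > const
        if op == ">=": return seg_max >= const
        raise ValueError("operator cannot be pushed down to segment metadata")

    # A batch whose orderby column spans 100..200:
    print(batch_may_match(">", 250, 100, 200))  # False -> batch skipped
    print(batch_may_match("=", 150, 100, 200))  # True  -> decompress and recheck
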
Sven Klemm
42a2c8666e Fix DecompressChunk parallel execution
When DecompressChunk is used in parallel plans, the scan on the
compressed hypertable chunk needs to be parallel aware to prevent
duplicating work. This patch changes DecompressChunk to always
create a non-parallel-safe path and, if requested, a parallel-safe
partial path with a parallel-aware scan.
2019-10-29 19:02:58 -04:00
Sven Klemm
b1a5000b5c Improve qual pushdown for transparent decompression
This patch adds support for pushing down IS NULL, IS NOT NULL, and
ScalarArrayOp expressions to the scan on the compressed chunk.
2019-10-29 19:02:58 -04:00
Sven Klemm
4cc1a4159a Add DecompressChunk custom scan node
This patch adds a DecompressChunk custom scan node, which will be
used when querying hypertables with compressed chunks to transparently
decompress chunks.
2019-10-29 19:02:58 -04:00