20 Commits

Author SHA1 Message Date
gayyappan
87786f1520 Add compressed table size to existing views
Some information views report hypertable sizes. Include
compressed table size in the calculation when applicable.
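For example (the timescaledb_information.hypertable view and its column
names are assumptions here, not taken from this commit):

    -- Hypertable sizes reported by the view now account for the compressed side:
    SELECT table_name, table_size, total_size
    FROM timescaledb_information.hypertable;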
2019-10-29 19:02:58 -04:00
Matvey Arye
0f3e74215a Split segment meta min_max into two columns
This simplifies the code and the access to the min/max
metadata. Before, we used a custom type; now the min/max
values have the same type as the underlying column and are
stored as two columns.

This also removes the custom type that was used before.
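A sketch of the resulting compressed-chunk layout (the
_ts_meta_min_1/_ts_meta_max_1 naming and the example columns are
illustrative assumptions, not taken from this commit):

    CREATE TABLE compress_hyper_2_2_chunk (
        device_id      integer,                               -- segment-by, stored plain
        _ts_meta_min_1 timestamptz,                           -- min of "time" per row
        _ts_meta_max_1 timestamptz,                           -- max of "time" per row
        "time"         _timescaledb_internal.compressed_data,
        value          _timescaledb_internal.compressed_data
    );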
2019-10-29 19:02:58 -04:00
gayyappan
43aa49ddc0 Add more information in compression views
Rename the compression views to compressed_hypertable_stats and
compressed_chunk_stats, and summarize compression status
information for chunks.
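For example (the compression_status column is an assumption based on
the chunk summary described above):

    SELECT * FROM timescaledb_information.compressed_hypertable_stats;
    SELECT * FROM timescaledb_information.compressed_chunk_stats
    WHERE compression_status = 'Compressed';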
2019-10-29 19:02:58 -04:00
gayyappan
909b0ece78 Block updates/deletes on compressed chunks 2019-10-29 19:02:58 -04:00
gayyappan
edd3999553 Add trigger to block INSERT on compressed chunk
Prevent inserts on compressed chunks by adding a trigger that blocks them.
Inserts are enabled again if the chunk gets decompressed.
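A minimal sketch of such a trigger (the function and trigger names are
hypothetical; the real blocker is implemented inside the extension):

    CREATE FUNCTION block_compressed_chunk_insert() RETURNS trigger AS $$
    BEGIN
        RAISE EXCEPTION 'insert not permitted on compressed chunk "%"', TG_TABLE_NAME;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER compressed_chunk_insert_blocker
        BEFORE INSERT ON _timescaledb_internal._hyper_1_1_chunk
        FOR EACH ROW EXECUTE FUNCTION block_compressed_chunk_insert();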
2019-10-29 19:02:58 -04:00
Matvey Arye
8250714a29 Add fixes for Windows
- Fix declaration of functions wrt TSDLLEXPORT consistency
- Empty structs need to be created with '{ 0 }' syntax.
- Alignment sentinels have to use uint64 instead of a struct
  with a 0-size member
- Add some more ORDER BY clauses in the tests to constrain
  the order of results
- Add ANALYZE after running compression in
  transparent-decompression test
2019-10-29 19:02:58 -04:00
Matvey Arye
2bf97e452d Push down quals to segment meta columns
This commit pushes down quals on order-by columns to make
use of the SegmentMetaMinMax objects. Specifically, =, <, <=, >,
and >= quals can now be pushed down.

We also remove filters from decompress node for quals that
have been pushed down and don't need a recheck.

This commit also changes tests to add more segment-by and
order-by columns.

Finally, we rename the segment meta accessor functions to give them
shorter names.
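For illustration (the _ts_meta_max_1 metadata column name is assumed,
following the min/max naming convention):

    -- A qual on an order-by column...
    SELECT * FROM conditions WHERE "time" >= '2019-10-01';
    -- ...is translated onto the compressed chunk's metadata, so compressed
    -- rows whose maximum "time" falls below the bound are skipped entirely:
    --   WHERE _ts_meta_max_1 >= '2019-10-01'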
2019-10-29 19:02:58 -04:00
Matvey Arye
5c891f732e Add sequence id metadata col to compressed table
Add a sequence id to the compressed table. This id increments
monotonically for each compressed row, following the order-by
clause. We leave gaps so that rows can be filled in later,
e.g. due to subsequent inserts.

The sequence id is global to the entire chunk and does not reset
on each segment-by group change, since this potentially allows
some micro-optimizations when ordering by segment-by columns
as well.

The sequence number is an INT32, which supports up to 200 billion
uncompressed rows per chunk (assuming 1000 rows per compressed row
and a gap of 10). Overflow is checked in the code, and an error is
raised if the limit is breached.
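Back-of-envelope for that limit, plus a sketch of the column's contents
(the _ts_meta_sequence_num name is assumed, following convention):

    -- (2^31 - 1) ids / 10 per gap    ~ 2.1e8 compressed rows per chunk
    -- * 1000 rows per compressed row ~ 2.1e11 ~ 200 billion uncompressed rows
    SELECT _ts_meta_sequence_num      -- 10, 20, 30, ... in order-by order
    FROM _timescaledb_internal.compress_hyper_2_2_chunk
    ORDER BY _ts_meta_sequence_num;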
2019-10-29 19:02:58 -04:00
Matvey Arye
b4a7108492 Integrate segment meta into compression
This commit integrates SegmentMetaMinMax into the
compression logic. It adds metadata columns to the compressed table
and sets them correctly upon compression.

We also fix several errors with datum detoasting in SegmentMetaMinMax.
2019-10-29 19:02:58 -04:00
Joshua Lockerman
8b273a5187 Fix flush when num-rows overflow
We should only free the segment-bys when we're changing groups, not when
we've got too many rows to compress; in that case we'll still need them.
2019-10-29 19:02:58 -04:00
Sven Klemm
45fac0ebe6 Add test for compress_chunk plan invalidation
This patch adds a test case for prepared statement plan invalidation
when a chunk gets compressed.
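Roughly the shape of the test (the chunk name is illustrative):

    PREPARE q AS SELECT count(*) FROM conditions WHERE device_id = 1;
    EXECUTE q;   -- plan built while the chunk is uncompressed
    SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');
    EXECUTE q;   -- cached plan must be invalidated and replanned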
2019-10-29 19:02:58 -04:00
gayyappan
6832ed2ca5 Modify storage type for toast columns
This PR modifies the TOAST storage type for compressed columns based
on the algorithm used for compression.
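Conceptually this amounts to the following (table and column names are
illustrative): data already compressed by our own algorithms should not
be re-compressed by pglz, but may still be stored out of line:

    ALTER TABLE _timescaledb_internal.compress_hyper_2_2_chunk
        ALTER COLUMN value SET STORAGE EXTERNAL;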
2019-10-29 19:02:58 -04:00
Sven Klemm
4cc1a4159a Add DecompressChunk custom scan node
This patch adds a DecompressChunk custom scan node, which is used
when querying hypertables with compressed chunks to transparently
decompress them.
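For example, a plan over a hypertable with one compressed and one
uncompressed chunk now looks roughly like this (sketched, not verbatim
output):

    EXPLAIN (COSTS OFF) SELECT * FROM conditions;
    --  Append
    --    ->  Custom Scan (DecompressChunk) on _hyper_1_1_chunk
    --          ->  Seq Scan on compress_hyper_2_3_chunk
    --    ->  Seq Scan on _hyper_1_2_chunk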
2019-10-29 19:02:58 -04:00
Matvey Arye
f6573f9247 Add a metadata count column to compressed table
This is useful if some or all compressed columns are NULL.
The count reflects the number of uncompressed rows contained
in the compressed row and is stored as a 32-bit integer.
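For example, the uncompressed row count of a chunk can then be recovered
from metadata alone (the _ts_meta_count name is assumed, following
convention):

    SELECT sum(_ts_meta_count) AS uncompressed_rows
    FROM _timescaledb_internal.compress_hyper_2_2_chunk;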
2019-10-29 19:02:58 -04:00
Matvey Arye
a078781c2e Add decompress_chunk function
This is the inverse of compress_chunk.
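Usage mirrors compress_chunk (the chunk name is illustrative):

    SELECT decompress_chunk('_timescaledb_internal._hyper_1_1_chunk');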
2019-10-29 19:02:58 -04:00
Matvey Arye
9223f08d68 Truncate chunks after (de-)compression
This commit will truncate the original chunk after compression
or decompression.
2019-10-29 19:02:58 -04:00
gayyappan
7a728dc15f Add view for compression size
Add views for compressed_chunk_size and compressed_hypertable_size.
2019-10-29 19:02:58 -04:00
gayyappan
1f4689eca9 Record chunk sizes after compression
Compute the chunk size before/after compressing a chunk and record
it in a catalog table.
2019-10-29 19:02:58 -04:00
gayyappan
44941f7bd2 Add UI for compress_chunks functionality
Add support for the compress_chunks function.

This also adds support for the compress_orderby and compress_segmentby
parameters in ALTER TABLE. These parameters are used by the
compress_chunks function.

The parsing code will most likely be changed to use the PG raw_parser
function.
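Usage sketch (the compress_segmentby/compress_orderby names come from
this commit; the timescaledb.compress toggle and the chunk-level call
are assumptions about the surrounding API):

    ALTER TABLE conditions SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'device_id',
        timescaledb.compress_orderby   = 'time DESC'
    );
    SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');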
2019-10-29 19:02:58 -04:00
gayyappan
1c6aacc374 Add ability to create the compressed hypertable
This happens when compression is turned on for regular hypertables.
2019-10-29 19:02:58 -04:00