In ae21ee96 we fixed a race condition when running a query to get the
hypertable sizes while one or more chunks were dropped in a concurrent
session, which led to an exception because the chunks no longer exist.
In fact the table lock introduced there is unnecessary, because we also
added proper joins with the Postgres catalog tables to ensure that the
relation exists in the database when calculating the sizes. Even worse,
with this table lock in place, dropping chunks now has to wait for the
functions that calculate the hypertable sizes.
Fixed it by removing the unnecessary table lock and adding isolation
tests to make sure we don't end up with race conditions again.
In 068534e31730154b894dc8e4fb5315054e1ae51c we made the dist_util
regression test version specific. However, the solo test declaration for
this test was not adjusted, which makes this test flaky. This PR fixes
the declaration.
In the sanitizer workflow we try to upload the sanitizer output
logs from `${{ github.workspace }}/sanitizer`, but in ASAN_OPTIONS,
LSAN_OPTIONS and UBSAN_OPTIONS we were writing the output logs to
another place.
Example of a workflow run where the files were not found in the provided path:
https://github.com/timescale/timescaledb/actions/runs/6830847083
The workflow action does not move an issue to the Done column when there
is a comment and the issue is then closed. This commit deals with that by
handling the closed event and moving the issue to the Done column, as
well as removing labels that can interfere with processing.
With this function it is possible to execute the Continuous Aggregate query
validation over an arbitrary query string, without the need to actually
create the Continuous Aggregate.
It can be used, for example, to take the most frequent queries (perhaps
using `pg_stat_statements`), validate them, and check whether any of them
could potentially be turned into a Continuous Aggregate.
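As a rough sketch of such a check, assuming a validation function along the
lines of `_timescaledb_functions.cagg_validate_query` (the function name and
result shape here are assumptions, since this message does not spell them out):

SELECT *
FROM _timescaledb_functions.cagg_validate_query(
  $$ SELECT time_bucket('1 hour', time), device_id, avg(temp)
     FROM metrics GROUP BY 1, 2 $$
); -- reports whether the query string would be accepted as a Continuous Aggregate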
We will decompress the compressed columns on demand, skipping them if
the vectorized quals don't pass for the entire batch. This allows us to
avoid reading some columns, saving on IO. The number of batches that are
entirely filtered out is reflected in EXPLAIN ANALYZE as 'Batches
Removed by Filters'.
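For illustration, a query of the following shape can benefit (the hypertable
and column names are made up; the filter is assumed to be vectorizable):

EXPLAIN (ANALYZE, COSTS OFF)
SELECT avg(temp)
FROM metrics
WHERE temp > 100;
-- batches rejected as a whole by the vectorized qual are counted as
-- 'Batches Removed by Filters' in the decompression node of the plan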
When creating a Continuous Aggregate using a NULL `bucket_width` in the
`time_bucket` function, it led to a segfault, for example:
CREATE MATERIALIZED VIEW cagg WITH (timescaledb.continuous) AS
SELECT time_bucket(NULL, time), count(*)
FROM metrics
GROUP BY 1;
Fixed it by raising an ERROR if a NULL `bucket_width` is used during the
Continuous Aggregate query validation.
Currently, MN is not supported on PG16. Therefore, the creation of
distributed restore points fails on PG16. This patch disables the CI
test for this PG version.
If users have accidentally been removed from `pg_authid` as a result of
bugs where dropping a user did not revoke privileges from all tables
where they had privileges, it will not be possible to create new chunks
since these require the user to be found when copying the privileges
for the parent table (either compressed hypertable or normal
hypertable).
To fix the situation, we repair the `pg_class` table when updating the
extension by modifying the `relacl` for relations and removing any user
that does not have an entry in `pg_authid`.
A repair function `_timescaledb_functions.repair_relation_acls` is
added that will perform the job. A variant of `makeaclitem` from PG16 that
accepts a comma-separated list of privileges is also added as
`_timescaledb_functions.makeaclitem` and is used as part of the repair.
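A minimal sketch of running the repair by hand, assuming the function takes
no arguments (it is normally executed from the extension update script):

SELECT _timescaledb_functions.repair_relation_acls();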
The CI tests on Windows log the creation of a new WAL file in a
non-deterministic way. This message causes the regression tests to fail.
This PR removes these messages from the test output.
This patch adds the support for the dynamic detection of the data type
for a vectorized aggregate. In addition, it removes the hard-coded
integer data type and initializes the decompression_map properly. This
also fixes an invalid memory access.
In ba9b81854c8c94005793bccff29433f6086e5274 we added support for
chunk-wise aggregates. The pushdown of the aggregate breaks the startup
exclusion logic of the ChunkAppend node. This PR adds support for
startup chunk exclusion with chunk-wise aggregates.
Fixes: #6282
In PG16 on Windows, bootstrap creates new collations named `en_US` and
`en-US` that are incompatible with UTF-8. So we added `collencoding` to
the ordering of the query that determines the collation name, in order to
pick another, compatible collation.
This changes the way of pinning OpenSSL 1.1, so that it supports the
case of PG16 where both the oldest and the newest Alpine docker images
have OpenSSL 3. Depending on the PG version, both old and new images
might have either 1.1 or 3, so we first try to install the versioned
1.1 package, and if it fails, it means the unversioned package is 1.1
and we install it instead.
In fbc1fbb64bed5e82bc3992dc4ac547bb172d46c3 PG 16 was enabled in the CI.
However, PG16 was not added to PG_LATEST. Jobs that try to modify the
build matrix (e.g., the Sanitizer) fail since they cannot find an
entry for PG 16. This patch fixes the build configuration.
a094f175eb7c98173c78f557880ccd2d89b791f8 changes the
contains_path_plain_or_sorted_agg function. However, a return statement
is missing to properly return the result for the subpaths. This pull
request addresses the logic issue and simplifies the check.
Telemetry tests without OpenSSL were failing to build because PG16
converted the *GetDatum() and DatumGet*() macros to inline functions,
which enables the compiler to catch wrong usage of Datums.
postgres/postgres@c8b2ef0
Some time ago we froze the Postgres packages on Windows (14.5.1 and 15.0.1)
due to some issues with chocolatey.
Now it is time to remove the pin and use the latest versions.
When trying to compress a table with an oid > 2^31 an error about
bitmapset not allowing negative members would be raised because
we tried to store the oids in a bitmapset.
Oids should not be stored in Bitmapsets as storing Oids that way
is very inefficient and may potentially consume a lot of memory
depending on the values of the oids. Storing them in a list is more
efficient instead.
PG16 converted the *GetDatum() and DatumGet*() macros to inline
functions. This enabled the compiler to catch a wrong use of Const
instead of Datum.
postgres/postgres@c8b2ef0
Some bitmap set operations recycle one of the input parameters or return
a reference to a new bitmap set. The added set of rules checks that the
returned reference is not discarded.
- Updated the show_chunks and drop_chunks APIs to get the affected
  chunks using the chunk creation time metadata, based on the
  "date/time/interval"-like boundary specified for INTEGER
  columns (see the usage sketch below).
- We honor the "integer_now" function if it is specified, so as to keep
  backwards compatibility with the existing behavior.
Co-authored-by: Dipesh Pandit <dipesh@timescale.com>
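A rough usage sketch, assuming the creation-time boundary is passed via a
`created_before`-style named argument (the exact parameter names are an
assumption, and `metrics_int` is a hypothetical hypertable with an INTEGER
time column):

SELECT show_chunks('metrics_int', created_before => now() - interval '3 days');
SELECT drop_chunks('metrics_int', created_before => now() - interval '3 days');
-- chunks are selected by their creation time metadata rather than by the
-- values in the INTEGER time column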
In two instances of the exception handling, `_message` and `_detail` were
not properly set in `compression_policy_execute`, leading to an empty
message and detail being reported.
This commit introduces a vectorized version of the sum() aggregate
function on compressed data. This optimization is enabled if (1) the
data is compressed, (2) no filters or grouping is applied, (3) the summed
column has a 32-bit integer data type, and (4) the aggregation can be pushed down to
the chunk level.
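For illustration, a query shaped like the following would satisfy these
conditions (the names are made up; `value` is assumed to be a 32-bit integer
column on a compressed hypertable, and the plan is assumed to push the
aggregate down to the chunks):

SELECT sum(value) FROM metrics; -- no filters, no GROUP BY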
In 8767de658b8279fd3352d0bf2acce4564784079e the freezing of tuples was
introduced, which changes the relation size in PG >= 14 by one page
since a visibility map is created. This PR adjusts the output of the
dist_util test.
If `RegisterDynamicBackgroundWorker` fails, the returned handle will be
null, which will cause `WaitForBackgroundWorkerStartup` to fail with a
crash. This can typically happen if there are not enough slots in
`BackgroundWorkerData`. In that case, just return and do not call
`WaitForBackgroundWorkerStartup`.
In #6168 we added the ordered append output test for PG16, but
unfortunately it was wrong and didn't take into account the planner
output differences that were introduced.
Fixed it by properly adjusting the expected output for PG16.
f3b3e556932e016a321796bc0c517199939aabf6 introduces sorted paths for
compressed chunks. However, it uses an unneeded and invalid memcpy call,
which can cause crashes. This PR removes the memcpy call.
In our code we have an Event Trigger on `ddl_command_end` to process
some DDL commands, and in particular for `ALTER TABLE` we execute some
code to properly deal with hypertables, chunks and compressed chunks.
The problem is that when we start to process our code at the
`ddl_command_end` event, we get the related `relid` by calling
`AlterTableLookupRelation`, which performs the proper ACL checking. This
is a bit wrong because, in the case of `ALTER TABLE ... OWNER TO ...`, at
this point (i.e. `ddl_command_end`) the ownership change of the relation
has already been processed, and with this PG16 refactoring the new owner
is now visible to `object_ownercheck`, leading to a misbehavior.
Since our intention is just to get the `relid` related to the
AlterTableCmd, fixed it by replacing `AlterTableLookupRelation`
with `RangeVarGetRelid`, because at this point all the proper ACL
checking has already been done.
postgres/postgres@afbfc029