3722 Commits

Author SHA1 Message Date
Bharathy
dd65a6b436 Fix segfault after second ANALYZE
The issue occurs only in extended query protocol mode, where every
query goes through the PREPARE and EXECUTE phases. The first time
ANALYZE is executed, a list of relations to be vacuumed is extracted
and saved in a list that is referenced from the parse tree node. Once
execution of ANALYZE is complete, this list is cleaned up, but the
reference to it in the parse tree node is not. When ANALYZE is
executed a second time, a segfault happens because we access an
invalid memory location.

Fixed the issue by restoring the original value in the parse tree node
once ANALYZE completes its execution.
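The shape of the fix, as a hedged C sketch (helper and field usage here
are illustrative, not the actual TimescaleDB code):

```c
#include "postgres.h"
#include "nodes/parsenodes.h"
#include "nodes/pg_list.h"

/* Keep the parse tree's original relation list intact so that a
 * re-EXECUTE of the prepared ANALYZE does not chase freed memory. */
static void
process_analyze(VacuumStmt *stmt)
{
	List *orig_rels = stmt->rels;              /* owned by the parse tree */
	List *expanded = expand_vacuum_rels(stmt); /* assumed helper */

	stmt->rels = expanded;
	run_analyze(stmt);                         /* assumed helper */
	list_free(expanded);                       /* list is cleaned up here... */

	stmt->rels = orig_rels;                    /* ...so restore the original */
}
```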

Fixes #4857
2022-12-12 17:34:41 +05:30
Jan Nidzwetzki
d92739099b Reduce test group size in sanitizer runs
When the sanitizer is active, the tests require a lot of memory. If they
are run in large parallel groups, out-of-memory situations can occur.
This patch reduces the size of parallel groups to 5 when the sanitizer
is active.
2022-12-09 08:26:47 +01:00
Alexander Kuzmenkov
a01e483bf3 More gdb output in CI
Print locals and arguments.
2022-12-09 08:05:05 +04:00
Jan Nidzwetzki
c76dfa0acb Improve Sanitizer checks
This patch contains two changes to the Sanitizer checks:

(1) All logfiles of the Sanitizer will be uploaded to the
    CI database.

(2) The Sanitizer checks are executed on every PR.
2022-12-08 10:55:17 +01:00
Jan Nidzwetzki
323d41b53b Ensure dist_hypertable is executed as solo test
The `dist_hypertable` test needs a lot of memory, especially when the
sanitizer is enabled. This patch runs this test as a `SOLO_TEST`. This
ensures that PostgreSQL does not run into an out-of-memory situation.
2022-12-08 08:57:14 +01:00
Jan Nidzwetzki
5fd9170b0a Correct sanitizer log directory
So far, we have treated the 'log_path' setting of the sanitizer like a
file. In fact, this value is used as a prefix for the created log
files. Since we expected an exact file name when uploading the
sanitizer output, the files were not found and we lost the messages of
the sanitizer. This PR changes the behavior: we now treat the setting
as a prefix and upload all files created in a new sanitizer output
folder.
2022-12-07 21:17:44 +01:00
Bharathy
bfed42c2d3 Fix remote_txn on PG15
In remote_txn, test cases that kill remote processes on data nodes
tend to roll back transactions, and as part of that process
WARNINGs/ERRORs are reported to the client. The client, however,
intermittently reports the WARNINGs/ERRORs in a different order. This
is an issue specific to the psql utility; placing psql under gdb and
trying to diagnose the problem does not reproduce the issue.

This patch fixes the tests by suppressing the WARNINGs.

Fixes #4837
2022-12-06 12:07:49 +05:30
Erik Nordström
fd42fe76fa Read until EOF in COPY fetcher
Ensure the COPY fetcher implementation reads data until EOF with
`PQgetCopyData()`. Also ensure the malloc'ed copy data is freed with
`PQfreemem()` if an error is thrown in the processing loop.

Previously, the COPY fetcher didn't read until EOF, and instead
assumed EOF when the COPY file trailer is received. Since EOF wasn't
reached, it required terminating the COPY with an extra call to the
(deprecated) `PQendcopy()` function.

Still, there are cases when a COPY needs to be prematurely terminated,
for example, when querying with a LIMIT clause. Therefore, distinguish
between "normal" end (when receiving EOF) and forceful end (cancel the
ongoing query).
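For context, a minimal libpq sketch of draining COPY data until EOF
(`process_row` and `handle_error` are placeholders):

```c
#include <libpq-fe.h>

/* Read COPY OUT data until libpq reports EOF (-1) or an error (-2). */
static void
drain_copy(PGconn *conn)
{
	char *buf;
	int   len;

	while ((len = PQgetCopyData(conn, &buf, /* async = */ 0)) > 0)
	{
		process_row(buf, len);  /* placeholder for row handling */
		PQfreemem(buf);         /* must be freed even on error paths */
	}

	if (len == -2)
		handle_error(PQerrorMessage(conn));  /* placeholder */

	/* len == -1: COPY is done; fetch the final command status. */
	PQclear(PQgetResult(conn));
}
```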
2022-12-05 18:28:35 +01:00
Sachin
cd4509c2a3 Release 2.9.0
This release adds major new features since the 2.8.1 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:
* Hierarchical Continuous Aggregates (aka Continuous Aggregate on top of another Continuous Aggregate)
* Improve `time_bucket_gapfill` function to allow specifying a timezone to bucket by
* Use `alter_data_node()` to change the data node configuration. This function introduces the option to configure the availability of the data node.

This release also includes several bug fixes.

**Features**
* #4476 Batch rows on access node for distributed COPY
* #4567 Exponentially back off when out of background workers
* #4650 Show warnings when not following best practices
* #4664 Introduce fixed schedules for background jobs
* #4668 Hierarchical Continuous Aggregates
* #4670 Add timezone support to time_bucket_gapfill
* #4678 Add interface for troubleshooting job failures
* #4718 Add ability to merge chunks while compressing
* #4786 Extend the now() optimization to also apply to CURRENT_TIMESTAMP
* #4820 Support parameterized data node scans in joins
* #4830 Add function to change the configuration of a data node
* #4966 Handle DML activity when datanode is not available
* #4971 Add function to drop stale chunks on a datanode

**Bugfixes**
* #4663 Don't error when compression metadata is missing
* #4673 Fix now() constification for VIEWs
* #4681 Fix compression_chunk_size primary key
* #4696 Report warning when enabling compression on hypertable
* #4745 Fix FK constraint violation error while inserting into a hypertable which references a partitioned table
* #4756 Improve compression job IO performance
* #4770 Continue compressing other chunks after an error
* #4794 Fix degraded performance seen on timescaledb_internal.hypertable_local_size() function
* #4807 Fix segmentation fault during INSERT into compressed hypertable
* #4822 Fix missing segmentby compression option in CAGGs
* #4823 Fix a crash that could occur when using nested user-defined functions with hypertables
* #4840 Fix performance regressions in the copy code
* #4860 Block multi-statement DDL command in one query
* #4898 Fix cagg migration failure when trying to resume
* #4904 Remove BitmapScan support in DecompressChunk
* #4906 Fix a performance regression in the query planner by speeding up frozen chunk state checks
* #4910 Fix a typo in process_compressed_data_out
* #4918 Cagg migration orphans cagg policy
* #4941 Restrict usage of the old format (pre 2.7) of continuous aggregates in PostgreSQL 15.
* #4955 Fix cagg migration for hypertables using timestamp without timezone
* #4968 Check for interrupts in gapfill main loop
* #4988 Fix cagg migration crash when refreshing the newly created cagg

**Thanks**
* @jflambert for reporting a crash with nested user-defined functions.
* @jvanns for reporting that a hypertable FK reference to a vanilla PostgreSQL partitioned table doesn't seem to work
* @kou for fixing a typo in process_compressed_data_out
* @xvaara for helping reproduce a bug with bitmap scans in transparent decompression
* @byazici for reporting a problem with segmentby on compressed caggs
* @tobiasdirksen for requesting Continuous aggregate on top of another continuous aggregate
* @xima for reporting a bug in Cagg migration
2022-12-05 19:33:45 +05:30
Sachin
29f35da905 Fix Github CI failures
Not specifying the Alpine version causes the libssl version to
change, which in turn causes errors in the downgrade tests as well
as the ABI tests.
This commit also fixes shellcheck failures.
Some failing Windows tests are added to the ignore list.

Co-authored-by: Lakshmi Narayanan Sreethar <lakshmi@timescale.com>
Co-authored-by: Alexander Kuzmenkov <akuzmenkov@timescale.com>
Signed-off-by: Sachin <sachin@timescale.com>
2022-12-05 18:15:21 +05:30
Sven Klemm
1a806e2fde Check for presence of RelationGetSmgr
RelationGetSmgr was backported by upstream to the STABLE branches
but is not yet available in any released version, so we cannot use
the PG version to determine the presence of RelationGetSmgr.
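A compatibility shim along these lines (mirroring the upstream inline
function; the `HAVE_RELATIONGETSMGR` probe is assumed to come from a
build-system feature test) makes the accessor available either way:

```c
#include "postgres.h"
#include "storage/smgr.h"
#include "utils/rel.h"

#ifndef HAVE_RELATIONGETSMGR
/* Backport of the upstream accessor for servers that lack it. */
static inline SMgrRelation
RelationGetSmgr(Relation rel)
{
	if (rel->rd_smgr == NULL)
		smgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_node, rel->rd_backend));
	return rel->rd_smgr;
}
#endif
```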
2022-11-30 02:57:19 +01:00
Mats Kindahl
09c0ba7136 Do not spam log with telemetry problems
The telemetry process runs on a regular basis and usually does not make
a lot of noise, but in a few particular cases, it writes entries to the
log unnecessarily.

If the telemetry server cannot be contacted, a warning that the
server cannot be contacted is printed in the log. Since nothing is
wrong with the system and the telemetry process will try to reconnect
at a later time, it is unnecessary to print this as a warning.

If the telemetry response is malformed, a warning is printed. This is
also unnecessary since there is nothing wrong with the system, there is
nothing the user can do about it, and this warning can be largely
ignored.

If the hard-coded telemetry scheme is incorrect, a warning will be
printed. This should not normally happen, and if it happens on a
running server, there is nothing that can be done to eliminate the
error message and the message is unnecessary.

When the telemetry job exits, a standard termination message is
printed in the log. Although harmless, it is mostly confusing and
provides no value to the user.

If the telemetry process is attempting to connect, or is connected,
to the telemetry server, it will wait for the connection to time out
before shutting down. This is unnecessary, since there is no critical
problem in aborting the connection and shutting down directly.

This commit turns those warnings into notices and installs a signal
handler so that the telemetry job exits silently and aborts any
outstanding connections.
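A hedged sketch of the handler pattern, using PostgreSQL's standard
background-worker signal idiom (flag and function names are
illustrative):

```c
#include "postgres.h"
#include "miscadmin.h"
#include "storage/latch.h"

static volatile sig_atomic_t got_sigterm = false;

/* On SIGTERM, set a flag and wake the worker's latch so the main loop
 * can abort any outstanding connection and exit quietly. */
static void
telemetry_sigterm_handler(SIGNAL_ARGS)
{
	got_sigterm = true;
	SetLatch(MyLatch);
}

/* In worker startup:
 *   pqsignal(SIGTERM, telemetry_sigterm_handler);
 *   BackgroundWorkerUnblockSignals();
 */
```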

Fixes #4028
2022-11-29 12:18:00 +01:00
Sven Klemm
558da2c5c6 Use RelationGetSmgr instead of rd_smgr
rd_smgr should not be accessed directly but RelationGetSmgr should
be used instead. Accessing it directly can lead to segfaults when
parallel relcache flushes are happening.

f10f0ae420
2022-11-28 13:50:35 +01:00
Sven Klemm
2d0087a0e7 Fix segfault in cagg creation
When trying to create a cagg on top of any relation that is neither
a hypertable nor a continuous aggregate, the command would segfault.
This patch changes the code to handle this case gracefully and error
out when trying to create a cagg on top of an unsupported relation.
Found by Coverity.
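The shape of the fix is roughly this defensive check (the lookup
helpers are assumptions, not the exact code):

```c
/* Error out gracefully instead of dereferencing a NULL pointer later. */
Hypertable *ht = lookup_hypertable(relid);  /* assumed helper */
ContinuousAgg *cagg = lookup_cagg(relid);   /* assumed helper */

if (ht == NULL && cagg == NULL)
	ereport(ERROR,
			(errcode(ERRCODE_WRONG_OBJECT_TYPE),
			 errmsg("relation \"%s\" is not a hypertable or continuous aggregate",
					get_rel_name(relid))));
```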
2022-11-28 12:19:20 +01:00
Fabrízio de Royes Mello
35c9120498 Add Hierarchical Continuous Aggregates validations
Commit 3749953e introduced Hierarchical Continuous Aggregates (aka
Continuous Aggregates on top of another Continuous Aggregate) but
lacked some basic validations.

Validations added during the creation of a Hierarchical Continuous
Aggregate:

* Forbid creating a continuous aggregate with a fixed-width bucket on
  top of a continuous aggregate with a variable-width bucket.

* Forbid incompatible bucket widths (see the sketch below):
  - they should not be equal;
  - the bucket width of the new continuous aggregate should be greater
    than that of the source continuous aggregate;
  - the bucket width of the new continuous aggregate should be a
    multiple of that of the source continuous aggregate.
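A C sketch of the width checks, assuming both buckets are fixed-width
and expressed in the same unit (e.g., microseconds):

```c
/* new_width and src_width are the fixed bucket widths (int64) of the
 * new cagg and of the source cagg it is built on. */
if (new_width <= src_width)
	ereport(ERROR,
			(errmsg("bucket width of the new continuous aggregate must be "
					"greater than that of the source")));
if (new_width % src_width != 0)
	ereport(ERROR,
			(errmsg("bucket width of the new continuous aggregate must be "
					"a multiple of that of the source")));
```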
2022-11-25 19:55:24 -03:00
Sven Klemm
83b13cf6f7 Use packaged postgres for sqlsmith and coverity CI
The sqlsmith and coverity workflows used the cached postgres build
but could not produce a build by themselves and therefore relied
on other workflows to produce the cached binaries. This patch
changes those workflows to use normal postgres packages instead
of custom-built postgres to remove that dependency.
2022-11-25 21:37:49 +01:00
Sven Klemm
3b94b996f2 Use custom node to block frozen chunk modifications
This patch changes the code that blocks frozen chunk modifications
to use a custom node instead of triggers. Frozen chunks are a
TimescaleDB-internal object and should therefore not be protected
by a TRIGGER, which is external and creates several hazards. First,
TRIGGERs created to protect internal state contend with user-created
triggers. Second, the trigger created to protect frozen chunks does
not work well with our restoring GUC, which we use when restoring
logical dumps. Third, triggers do not work for internal operations
but only in code paths that explicitly added trigger support.
2022-11-25 19:56:48 +01:00
Mats Kindahl
ce778faa11 Updating scheduled run
Updating the scheduled run to avoid notifying the original creator.
2022-11-25 19:13:37 +01:00
Konstantina Skovola
4a30e5969b Fix flaky bgw_db_scheduler_fixed test
Apply date_trunc to last_successful_finish.

Commit 20cdd9ca3ed0c2d62779c4fc61d278a489b4460a mostly fixed
the flakiness, but date_trunc was not
applied to the last_successful_finish
so we still got some flaky runs.
2022-11-25 15:48:29 +02:00
Nikhil Sontakke
c92e29ba3a Fix DML HA in multi-node
If a data node goes down for whatever reason, DML activity to chunks
residing on (or targeted to) that DN will start erroring out. We now
handle this by marking the target chunk as "stale" for this DN by
changing the metadata on the access node. This allows us to continue
doing DML to replicas of the same chunk data on other DNs in the
setup. This obviously only works for chunks with
"replication_factor" > 1. Note that chunks which do not undergo any
change will continue to carry the appropriate DN-related metadata on
the AN.

This means that such "stale" chunks become underreplicated and need to
be rebalanced using the copy_chunk functionality by a microservice or
some such process.

Fixes #4846
2022-11-25 17:42:26 +05:30
Dmitry Simonenko
26e3be1452 Test dist caggs with an unavailable data node
Add additional test cases to ensure cagg functionality on distributed
hypertables while a data node is unavailable.

Fix #4978
2022-11-24 19:15:40 +02:00
Dmitry Simonenko
826dcd2721 Ensure nodes availability using dist restore point
Make sure that a data node list does not have unavailable data nodes
when using create_distributed_restore_point() API.

Fix #4979
2022-11-24 16:08:06 +02:00
Bharathy
7bfd28a02f Fix dist_fetcher_type test on PG15 2022-11-24 18:41:46 +05:30
Dmitry Simonenko
5813173e07 Introduce drop_stale_chunks() function
This function drops chunks on a specified data node if those chunks are
not known by the access node.

Call drop_stale_chunks() automatically when a data node becomes
available again.

Fix #4848
2022-11-23 19:21:05 +02:00
Alexander Kuzmenkov
bdae647f0a Add i386 check results to database
Also add some more gdb commands to give us more context.
2022-11-23 18:53:40 +04:00
Alexander Kuzmenkov
26db866637 Fix GITHUB_OUTPUT on Windows
We have to add it to WSLENV and translate it as a path, so that it
properly passes the WSL <-> native process boundary.
2022-11-23 18:53:40 +04:00
Konstantina Skovola
40297f1897 Fix TRUNCATE on hierarchical caggs
When truncating a cagg that had another cagg defined on
top of it, the nested cagg would not get invalidated accordingly.
That was because we were not adding a corresponding entry in
the hypertable invalidation log for the materialization hypertable
of the base cagg.
This commit adds an invalidation entry in the table so that
subsequent refreshes see and properly process this invalidation.

Co-authored-by: name <fabriziomello@gmail.com>
2022-11-23 11:17:17 +02:00
Fabrízio de Royes Mello
35fa891013 Add missing gitignore entry
Pull request #4998 introduced a new template SQL test file but missed
adding the proper `.gitignore` entry to ignore generated test files.
2022-11-23 05:08:05 -03:00
Fabrízio de Royes Mello
e84a6e2e65 Remove the refresh step from CAgg migration
We're facing some weird `portal snapshot` issues running the
`refresh_continuous_aggregate` procedure called from other procedures.

Fixed it by skipping the Refresh Continuous Aggregate step in
`cagg_migrate` and warning users to run it manually after the
execution.

Fixes #4913
2022-11-22 16:49:13 -03:00
Lakshmi Narayanan Sreethar
7bc6e56cb7 Fix plan_hashagg test failure in PG15
Updated the expected output of plan_hashagg to reflect changes introduced
by postgres/postgres@4b160492.
2022-11-22 22:36:22 +05:30
Sven Klemm
639a5018a3 Change time of scheduled CI run
Since we now use the date as part of the cache key to ensure that no
stale cache entries hide build failures, we need to make sure a cache
entry is present before workflows that depend on the cache are run.
2022-11-22 14:49:28 +01:00
Konstantina Skovola
48d9733fda Add telemetry for caggs on top of caggs
Commit #4668 introduced hierarchical caggs. This patch adds
a field `num_caggs_nested` to the telemetry report to include the
number of caggs defined on top of other caggs.
2022-11-22 13:39:27 +02:00
Jan Nidzwetzki
fd84bf42a5 Use Ensure in get_or_add_baserel_from_cache
This patch changes an Assert in get_or_add_baserel_from_cache to an
Ensure. Therefore, this check is also performed in release builds. This
is done to detect metadata corruptions at an early stage.
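Illustratively, the difference between the two (the message text here
is made up):

```c
/* Before: compiled out in release builds, so corrupt metadata could
 * go unnoticed until a crash much later. */
Assert(hypertable_id != 0);

/* After: checked in all builds; Ensure is TimescaleDB's always-on
 * assertion macro and errors out with a clear message. */
Ensure(hypertable_id != 0, "no hypertable found for relid %u", relid);
```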
2022-11-22 10:37:45 +01:00
Fabrízio de Royes Mello
a5b8c9b084 Fix caggs on caggs tests on PG15
PR #4668 introduced Hierarchical Continuous Aggregates (aka
Continuous Aggregates on top of another Continuous Aggregate) but
unfortunately we missed fixing the regression tests on PG15.
2022-11-21 15:19:44 -03:00
Bharathy
89cede81bd Fix PG15 specific tests. 2022-11-21 16:09:42 +05:30
Fabrízio de Royes Mello
3b5653e4cc Ignore trailing whitespaces changes in git blame 2022-11-19 11:42:36 -03:00
Fabrízio de Royes Mello
a4356f342f Remove trailing whitespaces from test code 2022-11-18 16:31:47 -03:00
Fabrízio de Royes Mello
b1742969d0 Add SQL test files to trailing whitespace CI check
In commit 1f807153 we added a CI check for trailing whitespaces over
our source code files (.c and .h).

This commit add SQL test files (.sql and .sql.in) to this check.
2022-11-18 16:31:47 -03:00
Fabrízio de Royes Mello
3749953e97 Hierarchical Continuous Aggregates
Enable users to create Hierarchical Continuous Aggregates (aka
Continuous Aggregates on top of other Continuous Aggregates).

With this PR users can create levels of aggregation granularity in
Continuous Aggregates, making the refresh process even faster.

A caveat of this feature is that upper levels can end up computing an
"average of averages". To get the "real average" we can instead rely
on the `stats_agg` TimescaleDB Toolkit function, which calculates and
stores partials that can be finalized with other Toolkit functions
like `average` and `sum`.

Closes #1400
2022-11-18 14:34:18 -03:00
Jan Nidzwetzki
fd11479700 Speed up get_or_add_baserel_from_cache operation
Commit 9f4dcea30135d1e36d1c452d631fc8b8743b3995 introduced the
get_or_add_baserel_from_cache function, which contains a performance
regression: an expensive metadata scan
(ts_chunk_get_hypertable_id_by_relid) is performed even when it could
be avoided.
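Conceptually, the fix moves the scan behind the cache lookup; a hedged
sketch (the entry type and cache variable are illustrative):

```c
/* Only fall back to the expensive catalog scan on a cache miss. */
bool found;
BaserelCacheEntry *entry = hash_search(cache, &relid, HASH_FIND, &found);

if (!found)
{
	/* Slow path: scan the metadata once, then cache the result. */
	int32 ht_id = ts_chunk_get_hypertable_id_by_relid(relid);

	entry = hash_search(cache, &relid, HASH_ENTER, NULL);
	entry->hypertable_id = ht_id;
}
```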
2022-11-18 15:29:49 +01:00
Jan Nidzwetzki
380464df9b Perform frozen chunk status check via trigger
Commit 9f4dcea30135d1e36d1c452d631fc8b8743b3995 introduced frozen
chunks. Checking whether a chunk is frozen has so far been done in the
query planner. If it is not possible to determine in the planner which
chunks are affected by a query (e.g., due to a cast in the WHERE
condition), all chunks are checked. This leads (1) to increased
planning time and (2) to the situation that a single frozen chunk can
reject queries, even if that chunk is not addressed by the query.
2022-11-18 15:29:49 +01:00
Lakshmi Narayanan Sreethar
7c32ceb073 Fix perl test import in PG15
Removed an invalid import from the 007_healthcheck.pl test.
Also enabled all the perl tests and a couple of others in PG15.
2022-11-18 13:55:59 +05:30
gayyappan
b9ca06d6e3 Move freeze/unfreeze chunk to tsl
Move code for freeze and unfreeze chunk to tsl directory.
2022-11-17 15:28:47 -05:00
Bharathy
bfa641a81c INSERT .. SELECT on distributed hypertable fails on PG15
An INSERT .. SELECT query involving distributed hypertables generates
a plan with a DataNodeCopy node, which is not supported. The issue is
in the function tsl_create_distributed_insert_path(), where we decide
whether to generate a DataNodeCopy or a DataNodeDispatch node based on
the kind of query. On PG15 the timescaledb planner generates
DataNodeCopy for INSERT .. SELECT queries because rte->subquery is set
to NULL; a commit in PG15 sets rte->subquery to NULL as part of a fix.

This patch checks whether the SELECT subquery contains distributed
hypertables by looking into root->parse->jointree, which represents
the subquery.
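A hedged sketch of such a check over root->parse->jointree (simplified
to the top-level FROM list; the distributed-hypertable predicate is an
assumed helper):

```c
#include "postgres.h"
#include "nodes/pathnodes.h"
#include "parser/parsetree.h"

static bool
fromlist_has_distributed_hypertable(PlannerInfo *root)
{
	ListCell *lc;

	foreach (lc, root->parse->jointree->fromlist)
	{
		Node *node = (Node *) lfirst(lc);

		if (IsA(node, RangeTblRef))
		{
			RangeTblEntry *rte = rt_fetch(((RangeTblRef *) node)->rtindex,
										  root->parse->rtable);

			if (rte->rtekind == RTE_RELATION &&
				is_distributed_hypertable(rte->relid)) /* assumed helper */
				return true;
		}
	}
	return false;
}
```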

Fixes #4983
2022-11-17 21:18:23 +05:30
Sachin
1e3200be7d Use C function for time_bucket() offset
Instead of using a SQL UDF to handle the offset parameter, added
ts_timestamp/tz/date_offset_bucket(), which handles the offset.
2022-11-17 13:08:19 +00:00
Lakshmi Narayanan Sreethar
839e42dd0c Use async API to drop database from delete_data_node
PG15 introduced a ProcSignalBarrier mechanism in drop database
implementation to force all backends to close the file handles for
dropped tables. The backend that is executing the drop database command
will emit a new process signal barrier and wait for other backends to
accept it. But the backend which is executing the delete_data_node
function will not be able to process the above mentioned signal as it
will be stuck waiting for the drop database query to return. Thus the
two backends end up waiting for each other causing a deadlock.

Fixed it by using the async API to execute the drop database command
from delete_data_node instead of the blocking remote_connection_cmdf_ok
call.
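A hedged libpq sketch of the nonblocking pattern (wait-event flags as
in PG15; the helper name and SQL string are illustrative):

```c
#include "postgres.h"
#include "miscadmin.h"
#include "storage/latch.h"
#include "utils/wait_event.h"
#include <libpq-fe.h>

/* Send the command asynchronously and keep servicing interrupts
 * (including ProcSignalBarrier requests) while waiting. */
static PGresult *
remote_exec_async(PGconn *conn, const char *sql)
{
	if (!PQsendQuery(conn, sql))
		elog(ERROR, "%s", PQerrorMessage(conn));

	while (PQisBusy(conn))
	{
		(void) WaitLatchOrSocket(MyLatch,
								 WL_LATCH_SET | WL_SOCKET_READABLE |
								 WL_EXIT_ON_PM_DEATH,
								 PQsocket(conn), -1L, PG_WAIT_EXTENSION);
		ResetLatch(MyLatch);
		CHECK_FOR_INTERRUPTS();  /* lets the barrier be absorbed */

		if (!PQconsumeInput(conn))
			elog(ERROR, "%s", PQerrorMessage(conn));
	}
	return PQgetResult(conn);
}
```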

Fixes #4838
2022-11-17 18:09:39 +05:30
Alexander Kuzmenkov
1b65297ff7 Fix memory leak with INSERT into compressed hypertable
We used to allocate some temporary data in the ExecutorContext.
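The usual fix for this class of leak is to allocate such temporaries
in a context that is reset for every row; a sketch (`compress_row` is
a placeholder):

```c
/* The per-tuple context is reset by the executor for each row, so
 * allocations here cannot accumulate for the whole statement. */
MemoryContext oldctx =
	MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));

compressed = compress_row(slot);  /* placeholder for the real work */

MemoryContextSwitchTo(oldctx);
```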
2022-11-16 13:58:52 +04:00
Alexander Kuzmenkov
7e4ebd131f Escape the quotes in gdb command 2022-11-15 21:49:39 +04:00
Alexander Kuzmenkov
676d1fb1f1 Fix const null clauses in runtime chunk exclusion
The code we inherited from postgres expects that if we have a const
null or false clause, it is going to be the only one, but that's not
true for runtime chunk exclusion, because we don't try to fold such
restrictinfos after evaluating the mutable functions. Fix it to also
work with multiple restrictinfos.
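A sketch of the multi-restrictinfo handling (reduced to the
constant-clause case):

```c
#include "postgres.h"
#include "nodes/pathnodes.h"

/* Return true if any restrictinfo is a constant NULL or false clause,
 * in which case the chunk can be excluded at runtime. */
static bool
clauses_exclude_chunk(List *restrictinfos)
{
	ListCell *lc;

	foreach (lc, restrictinfos)
	{
		RestrictInfo *ri = lfirst_node(RestrictInfo, lc);

		if (IsA(ri->clause, Const))
		{
			Const *c = (Const *) ri->clause;

			if (c->constisnull || !DatumGetBool(c->constvalue))
				return true;
		}
	}
	return false;
}
```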
2022-11-15 21:49:39 +04:00
Mats Kindahl
f3a3da7804 Take advisory lock for job tuple
Job ids are locked using an advisory lock rather than a row lock on
the jobs table, but this lock is not taken in the job API functions
(`alter_job`, `delete_job`, etc.), which appears to cause a race
condition resulting in the addition of multiple rows with the same
job id.

This commit adds an advisory `RowExclusiveLock` on the job id while
altering it, to match the advisory locks taken while performing other
modifications.
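At the C level the pattern looks roughly like this (a sketch using
PostgreSQL's advisory LOCKTAG machinery; the key layout is
illustrative):

```c
#include "postgres.h"
#include "miscadmin.h"
#include "storage/lock.h"
#include "storage/lmgr.h"

/* Take the same advisory lock on the job id that other job
 * modifications take, so concurrent API calls serialize on it. */
static void
lock_job_id(int32 job_id)
{
	LOCKTAG tag;

	SET_LOCKTAG_ADVISORY(tag, MyDatabaseId, (uint32) job_id, 0, 0);
	(void) LockAcquire(&tag, RowExclusiveLock,
					   /* sessionLock */ false, /* dontWait */ false);
}
```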

Closes #4863
2022-11-15 17:58:49 +01:00