738 Commits

Author SHA1 Message Date
Lakshmi Narayanan Sreethar
1eb8aa3f14 Post-release fixes for 2.9.3
Bumping the previous version and adding tests for 2.9.3.
2023-02-07 15:53:42 +05:30
Lakshmi Narayanan Sreethar
fb3ad7d6c6 Release 2.9.3
This release contains bug fixes since the 2.9.2 release.
This release is high priority for upgrade. We strongly recommend that you
upgrade as soon as possible.

**Bugfixes**
* #4804 Skip bucketing when start or end of refresh job is null
* #5108 Fix column ordering in compressed table index not following the order of a multi-column segmentby definition
* #5187 Don't enable clang-tidy by default
* #5255 Fix year not being considered as a multiple of day/month in hierarchical continuous aggregates
* #5259 Lock down search_path in SPI calls
2023-02-03 20:04:18 +05:30
Sven Klemm
2a47462fbc Remove SECURITY DEFINER from get_telemetry_report
We should not broadly make functions SECURITY DEFINER, as that increases
the attack surface of our extension. Our telemetry in particular should
run with only the minimum required privileges.
2023-02-01 13:10:23 +01:00
Fabrízio de Royes Mello
c0f2ed1809 Mark cagg_watermark parallel safe
The `cagg_watermark` function performs only read-only operations, so it
is safe to mark it parallel safe and take advantage of Postgres
parallel query.

Since 2.7, when we introduced the new Continuous Aggregate format, we
no longer use partials, and since the non-parallel-safe aggregate
functions `partialize_agg` and `finalize_agg` are out of the query, it
makes no sense not to take advantage of Postgres parallel query for
realtime Continuous Aggregates.
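
For reference, marking an existing function parallel safe is a one-line
catalog change; a sketch of the equivalent SQL, assuming
`cagg_watermark` takes the hypertable id as an integer:

    ALTER FUNCTION _timescaledb_internal.cagg_watermark(integer) PARALLEL SAFE;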
2023-01-31 13:07:19 -03:00
Mats Kindahl
5661ff1523 Add role-level security to job error log
Since the job error log can contain information from many different
sources and also from many different jobs it is important to ensure
that visibility of the job error log entries is restricted to job
owners.

This commit extends the view `timescaledb_information.job_errors` with
role-based checks so that a user can only see entries for jobs that she
has permission to view, and restricts the permissions on
`_timescaledb_internal.job_errors` so that users can only view the job
error log through the view. A special case is added so that the
superuser and the database owner can see all log entries, even if there
is no job id associated with the log entry.

Closes #5217
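
A sketch of the resulting permission model (the exact statements in the
update script may differ):

    -- direct access to the underlying table is locked down...
    REVOKE ALL ON _timescaledb_internal.job_errors FROM PUBLIC;
    -- ...so the log is read through the view, which filters by job ownership
    GRANT SELECT ON timescaledb_information.job_errors TO PUBLIC;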
2023-01-30 12:13:00 +01:00
Bharathy
684637a172 Post-release fixes for 2.9.2
Bumping the previous version and adding tests for 2.9.2
2023-01-25 17:54:54 +05:30
Bharathy
f211294c61 Release 2.9.2
This release contains bug fixes since the 2.9.1 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #5114 Fix issue with deleting data node and dropping the database on multi-node
* #5133 Fix creating a CAgg on a CAgg where the time column is in a different order than in the original hypertable
* #5152 Fix adding column with NULL constraint to compressed hypertable
* #5170 Fix CAgg on CAgg variable bucket size validation
* #5180 Fix default data node availability status on multi-node
* #5181 Fix ChunkAppend and ConstraintAwareAppend with TidRangeScan child subplan
* #5193 Fix repartition behavior when attaching data node on multi-node
2023-01-23 15:55:10 +05:30
Fabrízio de Royes Mello
4118a72575 Remove parallel safe from partialize_agg
The previous PR #4307 marked `partialize_agg` and `finalize_agg` as
parallel safe, but this change leads to incorrect results in some cases.

Those functions are supposed to work in parallel, but it seems that is
not the case, and neither the root cause nor the proper way to use them
in parallel queries is evident yet, so we decided to revert this change
and provide correct results to users.

Fixes #4922
2023-01-13 07:31:55 -03:00
Sven Klemm
b92f36d765 Add 2.9.1 to update test scripts 2022-12-27 09:24:57 +01:00
Sven Klemm
93667df7d8 Release 2.9.1
This release contains bug fixes since the 2.9.0 release.
This release is high priority for upgrade. We strongly recommend that you
upgrade as soon as possible.

**Bugfixes**
* #5072 Fix CAgg on CAgg bucket size validation
* #5101 Fix enabling compression on caggs with renamed columns
* #5106 Fix building against PG15 on Windows
* #5117 Fix postgres server restart on background worker exit
* #5121 Fix privileges for job_errors in update script
2022-12-23 14:38:45 +01:00
Konstantina Skovola
0a3615fc70 Fix privileges for job_errors table in update script 2022-12-23 14:05:19 +02:00
Sven Klemm
4527f51e7c Refactor INSERT into compressed chunks
This patch changes INSERTs into compressed chunks to no longer
be compressed immediately, but to be stored in the uncompressed
chunk instead and later merged into the compressed chunk by a
separate job.

This greatly simplifies the INSERT codepath, as we no longer have
to rewrite the target of INSERTs and compress on the fly, leading
to a roughly 2x improvement in INSERT rate into compressed chunks.
Additionally, this improves TRIGGER support for INSERTs into
compressed chunks.

This is a necessary refactoring to allow UPSERT/UPDATE/DELETE on
compressed chunks in follow-up patches.
2022-12-21 12:53:29 +01:00
Sven Klemm
08bb21f7e6 2.9.0 Post-release adjustments
Add 2.9.0 to update test scripts and adjust downgrade scripts for
2.9.0. Additionally adjust CHANGELOG to sync with the actual release
CHANGELOG and add PG15 to CI.
2022-12-19 19:10:24 +01:00
Sachin
cd4509c2a3 Release 2.9.0
This release adds major new features since the 2.8.1 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:
* Hierarchical Continuous Aggregates (aka Continuous Aggregate on top of another Continuous Aggregate)
* Improve the `time_bucket_gapfill` function to allow specifying the timezone to bucket in
* Use `alter_data_node()` to change the data node configuration. This function introduces the option to configure the availability of the data node.

This release also includes several bug fixes.

**Features**
* #4476 Batch rows on access node for distributed COPY
* #4567 Exponentially back off when out of background workers
* #4650 Show warnings when not following best practices
* #4664 Introduce fixed schedules for background jobs
* #4668 Hierarchical Continuous Aggregates
* #4670 Add timezone support to time_bucket_gapfill
* #4678 Add interface for troubleshooting job failures
* #4718 Add ability to merge chunks while compressing
* #4786 Extend the now() optimization to also apply to CURRENT_TIMESTAMP
* #4820 Support parameterized data node scans in joins
* #4830 Add function to change configuration of a data node
* #4966 Handle DML activity when a data node is not available
* #4971 Add function to drop stale chunks on a data node

**Bugfixes**
* #4663 Don't error when compression metadata is missing
* #4673 Fix now() constification for VIEWs
* #4681 Fix compression_chunk_size primary key
* #4696 Report warning when enabling compression on hypertable
* #4745 Fix FK constraint violation error while inserting into a hypertable which references a partitioned table
* #4756 Improve compression job IO performance
* #4770 Continue compressing other chunks after an error
* #4794 Fix degraded performance seen in the _timescaledb_internal.hypertable_local_size() function
* #4807 Fix segmentation fault during INSERT into compressed hypertable
* #4822 Fix missing segmentby compression option in CAGGs
* #4823 Fix a crash that could occur when using nested user-defined functions with hypertables
* #4840 Fix performance regressions in the copy code
* #4860 Block multi-statement DDL command in one query
* #4898 Fix cagg migration failure when trying to resume
* #4904 Remove BitmapScan support in DecompressChunk
* #4906 Fix a performance regression in the query planner by speeding up frozen chunk state checks
* #4910 Fix a typo in process_compressed_data_out
* #4918 Cagg migration orphans cagg policy
* #4941 Restrict usage of the old format (pre 2.7) of continuous aggregates in PostgreSQL 15.
* #4955 Fix cagg migration for hypertables using timestamp without timezone
* #4968 Check for interrupts in gapfill main loop
* #4988 Fix cagg migration crash when refreshing the newly created cagg

**Thanks**
* @jflambert for reporting a crash with nested user-defined functions.
* @jvanns for reporting hypertable FK reference to vanilla PostgreSQL partitioned table doesn't seem to work
* @kou for fixing a typo in process_compressed_data_out
* @xvaara for helping reproduce a bug with bitmap scans in transparent decompression
* @byazici for reporting a problem with segmentby on compressed caggs
* @tobiasdirksen for requesting Continuous aggregate on top of another continuous aggregate
* @xima for reporting a bug in Cagg migration
2022-12-05 19:33:45 +05:30
Sven Klemm
3b94b996f2 Use custom node to block frozen chunk modifications
This patch changes the code that blocks frozen chunk
modifications to no longer use triggers but to use custom
node instead. Frozen chunks is a timescaledb internal object
and should therefore not be protected by TRIGGER which is
external and creates several hazards. TRIGGERs created to
protect internal state contend with user-created triggers.
The trigger created to protect frozen chunks does not work
well with our restoring GUC which we use when restoring
logical dumps. Thirdly triggers are not functional for any
internal operations but are only working in code paths that
explicitly added trigger support.
2022-11-25 19:56:48 +01:00
Dmitry Simonenko
5813173e07 Introduce drop_stale_chunks() function
This function drops chunks on a specified data node if those chunks are
not known to the access node.

drop_stale_chunks() is also called automatically when a data node
becomes available again.

Fix #4848
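
A hypothetical invocation, assuming the function takes the data node
name (the exact schema and signature are not shown in this commit):

    SELECT _timescaledb_internal.drop_stale_chunks('data_node_1');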
2022-11-23 19:21:05 +02:00
Fabrízio de Royes Mello
e84a6e2e65 Remove the refresh step from CAgg migration
We're facing some weird `portal snapshot` issues when the
`refresh_continuous_aggregate` procedure is called from other procedures.

Fixed it by skipping the refresh Continuous Aggregate step in
`cagg_migrate` and warning users to run it manually after the execution.

Fixes #4913
2022-11-22 16:49:13 -03:00
Fabrízio de Royes Mello
a4356f342f Remove trailing whitespaces from test code 2022-11-18 16:31:47 -03:00
Fabrízio de Royes Mello
3749953e97 Hierarchical Continuous Aggregates
Enable users to create Hierarchical Continuous Aggregates (aka
Continuous Aggregates on top of other Continuous Aggregates).

With this PR users can create levels of aggregation granularity in
Continuous Aggregates, making the refresh process even faster.

A caveat of this feature is that at upper levels we can end up with
the "average of averages". To get the "real average" we can rely on the
"stats_agg" TimescaleDB Toolkit function, which calculates and stores
the partials that can be finalized with other Toolkit functions like
"average" and "sum".

Closes #1400
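
A minimal sketch of the feature, with illustrative table and view
names; the daily cagg is defined on top of the hourly one:

    CREATE MATERIALIZED VIEW conditions_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) AS bucket, avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY 1;

    CREATE MATERIALIZED VIEW conditions_daily
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 day', bucket) AS bucket, avg(avg_temp) AS avg_temp
    FROM conditions_hourly
    GROUP BY 1;

Note that conditions_daily.avg_temp is exactly the "average of
averages" case described above.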
2022-11-18 14:34:18 -03:00
Jan Nidzwetzki
380464df9b Perform frozen chunk status check via trigger
The commit 9f4dcea30135d1e36d1c452d631fc8b8743b3995 introduces frozen
chunks. Checking whether a chunk is frozen or not has been done so far
in the query planner. If it is not possible to determine which chunks
are affected by a query in the planner (e.g., due to a cast in the WHERE
condition), all chunks are checked. This leads (1) to an increased
planning time and (2) to the situation that a single frozen chunk could
reject queries, even if the frozen chunk is not addressed by the query.
2022-11-18 15:29:49 +01:00
Sachin
1e3200be7d Use C function for time_bucket() offset
Instead of using a SQL UDF to handle the offset parameter,
add ts_timestamp/tz/date_offset_bucket(), which handles the
offset in C.
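
For context, the offset in question is the third argument in calls
like the following sketch (table name illustrative):

    SELECT time_bucket('1 week', time, '1 day'::interval) FROM conditions;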
2022-11-17 13:08:19 +00:00
Bharathy
8afdddc2da Deprecate continuous aggregates with old format
This patch reports a warning when upgrading to the new timescaledb
extension if there exist any caggs with partial aggregates (on release
builds only). It also restricts users from creating caggs with the old
format on timescaledb with PG15.
2022-11-15 08:38:03 +05:30
Fabrízio de Royes Mello
6ae192631e Fix CAgg migration with timestamp without timezone
This was a leftover from the original implementation, where we didn't
add tests for a time dimension using `timestamp without time zone`.

Fixed it by handling this datatype properly and adding regression tests.

Fixes #4956
2022-11-11 15:25:01 -03:00
Erik Nordström
f13214891c Add function to alter data nodes
Add a new function, `alter_data_node()`, which can be used to change
the data node's configuration originally set up via `add_data_node()`
on the access node.

The new function introduces a new option "available" that allows
configuring the availability of the data node. Setting
`available=>false` means that the node should no longer be used for
reads and writes. Only read "failover" is implemented as part of this
change, however.

To fail over reads, the alter data node function finds all the chunks
for which the unavailable data node is the "primary" query target and
"fails over" to a chunk replica on another data node instead. If some
chunks do not have a replica to fail over to, a warning will be
raised.

When a data node is available again, the function can be used to
switch back to using the data node for queries.

Closes #2104
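
A sketch of the intended usage (node name illustrative):

    -- mark the node unavailable; reads fail over to chunk replicas
    SELECT alter_data_node('data_node_1', available => false);

    -- switch back once the node is reachable again
    SELECT alter_data_node('data_node_1', available => true);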
2022-11-11 13:59:42 +01:00
Fabrízio de Royes Mello
bfef3173bc Refactor CAgg migration code to use job API
The current implementation updates the jobs table directly; to make it
consistent with other parts of the code, we changed it to use the
`alter_job` API instead to enable and disable the jobs during the
migration. This refactoring is related to #4863.
2022-11-08 17:41:01 -03:00
Sven Klemm
3059290bea Add new chunk state CHUNK_STATUS_COMPRESSED_PARTIAL
A chunk is in this state when it is compressed but also has
uncompressed data in the uncompressed chunk. Individual tuples
only ever exist in one of the two areas. This is a preparation
patch for adding support for an uncompressed staging area for
DML operations.
2022-11-07 13:32:37 +01:00
Fabrízio de Royes Mello
6c73b61b99 Fix orphan jobs after CAgg migration
When using `override => true` the migration procedure renames the
current cagg with the suffix `_old` and renames the newly created
cagg, suffixed `_new`, to the original name.

The problem was that the `copy policies` step was executed after the
`override` step, so we didn't find the new cagg name because it had
already been renamed to the original name, leaving the policy orphaned
(without a connection to the materialization hypertable).

Fixed it by reordering the steps, executing `copy policies` before
the `override` step. Also made some adjustments to properly copy all
`bgw_job` columns even if this catalog table has changed.

Fixes #4885
2022-11-05 12:45:24 -03:00
Konstantina Skovola
c54cf3ea56 Add job execution statistics to telemetry
This patch adds two new fields to the telemetry report,
`stats_by_job_type` and `errors_by_sqlerrcode`. Both report results
grouped by job type (different types of policies or
user-defined actions).
The patch also adds a new field to the `bgw_job_stats` table,
`total_duration_errors`, to separate the duration of failed runs
from the duration of successful ones.
2022-11-04 11:06:01 +02:00
Fabrízio de Royes Mello
7dd45cf348 Fix failure resuming a CAgg migration
Trying to resume a failed Continuous Aggregate migration raised an
exception that the migration plan already exists, but this is wrong;
the expected behaviour is to resume the migration and continue from
the last failed step.
2022-11-03 14:18:53 -03:00
Ante Kresic
2475c1b92f Roll up uncompressed chunks into compressed ones
This change introduces a new option to the compression procedure which
decouples the uncompressed chunk interval from the compressed chunk
interval. It does this by allowing multiple uncompressed chunks into one
compressed chunk as part of the compression procedure. The main use-case
is to allow much smaller uncompressed chunks than compressed ones. This
has several advantages:
- Reduce the size of btrees on uncompressed data (thus allowing faster
inserts because those indexes are memory-resident).
- Decrease disk-space usage for uncompressed data.
- Reduce number of chunks over historical data.

From a UX point of view, we simply add a compression WITH-clause
option `compress_chunk_time_interval`. The user should set it according
to their needs for constraint exclusion over historical data. Ideally,
it should be a multiple of the uncompressed chunk interval, so we throw
a warning if it is not.
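
A sketch of the option, assuming an hourly chunk interval rolled up
into daily compressed chunks (table name illustrative):

    ALTER TABLE metrics SET (
        timescaledb.compress,
        timescaledb.compress_chunk_time_interval = '24 hours'
    );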
2022-11-02 15:14:18 +01:00
Erik Nordström
4b05402580 Add health check function
A new health check function _timescaledb_internal.health() returns the
health and status of the database instance, including any configured
data nodes (in case the instance is an access node).

Since the function also returns the health of the data nodes, it tries
hard to avoid throwing errors. An error will fail the whole function
and therefore not return any node statuses, although some of the nodes
might be healthy.

The health check on the data nodes is a recursive (remote) call to the
same function on those nodes. Unfortunately, the check will fail with
an error if a connection cannot be established to a node (or an error
occurs on the connection), which means the whole function call will
fail. This will be addressed in a future change by returning the error
in the function result instead.
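
Usage is a plain function call; the exact result columns are not
spelled out in this commit:

    SELECT * FROM _timescaledb_internal.health();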
2022-10-21 10:34:16 +02:00
Konstantina Skovola
54ed0d5c05 Introduce fixed schedules for background jobs
Currently, the next start of a scheduled background job is
calculated by adding the `schedule_interval` to its finish
time. This does not allow scheduling jobs to execute at fixed
times, as the next execution is "shifted" by the job duration.

This commit introduces the option to execute a job on a fixed
schedule instead. Users are expected to provide an initial_start
parameter on which subsequent job executions are aligned. The next
start is calculated by computing the next time_bucket of the finish
time with initial_start origin.
An `initial_start` parameter is added to the compression, retention,
reorder and continuous aggregate `add_policy` signatures. By passing
it upon policy creation, users indicate that the policy will execute
on a fixed schedule; the schedule drifts if `initial_start` is not
provided.
To let users pick a drifting schedule when registering a user-defined
action, an additional parameter `fixed_schedule` is added to `add_job`;
setting it to false selects the old behavior.

Additionally, an optional TEXT parameter, `timezone`, is added to both
add_job and add_policy signatures, to address the 1-hour shift in
execution time caused by DST switches. As internally the next start of
a fixed schedule job is calculated using time_bucket, the timezone
parameter allows using timezone-aware buckets to calculate
the next start.
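
A sketch of both modes, assuming a user-defined procedure `custom_proc`
already exists:

    -- fixed schedule: runs align to initial_start, bucketed in the given timezone
    SELECT add_job('custom_proc', '1 day',
                   initial_start => '2022-10-18 00:00:00+00',
                   timezone => 'Europe/Berlin');

    -- drifting schedule: opt out of fixed scheduling explicitly
    SELECT add_job('custom_proc', '1 day', fixed_schedule => false);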
2022-10-18 18:46:57 +03:00
Jan Nidzwetzki
2f739bb328 Post-release fixes for 2.8.1
Bumping the previous version and adding tests for 2.8.1.
2022-10-07 10:10:22 +02:00
Sven Klemm
d2f0c4ed20 Fix update script handling of bgw_job_stat
Update scripts should not use ADD/DROP/RENAME but should always rebuild
catalog tables, to ensure the objects are identical across new
installs, upgrades and downgrades.
2022-10-05 23:31:01 +02:00
Fabrízio de Royes Mello
a76f76f4ee Improve size utils functions and views performance
Changed queries to use a LATERAL join on size functions and views
instead of CTEs; this eliminates a lot of unnecessary projections and
gives the planner a chance to push down predicates.

Closes #4775
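
The general shape of the rewrite, as a hedged sketch against the public
size function and view (not the exact internal query):

    SELECT ht.hypertable_name, s.*
    FROM timescaledb_information.hypertables ht,
         LATERAL hypertable_detailed_size(
             format('%I.%I', ht.hypertable_schema, ht.hypertable_name)::regclass) s;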
2022-10-05 17:40:28 -03:00
Jan Nidzwetzki
12b7b9f665 Release 2.8.1
This release is a patch release. We recommend that you upgrade at the
next available opportunity.

**Bugfixes**
* #4454 Keep locks after reading job status
* #4658 Fix error when querying a compressed hypertable with compress_segmentby on an enum column
* #4671 Fix a possible error while flushing the COPY data
* #4675 Fix bad TupleTableSlot drop
* #4676 Fix a deadlock when decompressing chunks and performing SELECTs
* #4685 Fix chunk exclusion for space partitions in SELECT FOR UPDATE queries
* #4694 Change parameter names of cagg_migrate procedure
* #4698 Do not use row-by-row fetcher for parameterized plans
* #4711 Remove support for procedures as custom checks
* #4712 Fix assertion failure in constify_now
* #4713 Fix Continuous Aggregate migration policies
* #4720 Fix chunk exclusion for prepared statements and dst changes
* #4726 Fix gapfill function signature
* #4737 Fix join on time column of compressed chunk
* #4738 Fix error when waiting for remote COPY to finish
* #4739 Fix continuous aggregate migrate check constraint
* #4760 Fix segfault when INNER JOINing hypertables
* #4767 Fix permission issues on index creation for CAggs

**Thanks**
* @boxhock and @cocowalla for reporting a segfault when JOINing hypertables
* @carobme for reporting constraint error during continuous aggregate migration
* @choisnetm, @dustinsorensen, @jayadevanm and @joeyberkovitz for reporting a problem with JOINs on compressed hypertables
* @daniel-k for reporting a background worker crash
* @justinpryzby for reporting an error when compressing very wide tables
* @maxtwardowski for reporting problems with chunk exclusion and space partitions
* @yuezhihan for reporting GROUP BY error when having compress_segmentby on an enum column
2022-10-05 14:40:25 +02:00
Bharathy
f1c6fd97a3 Continue compressing other chunks after an error
When a compression_policy is executed by a background worker, the policy
should continue to execute even if compressing or decompressing one of
the chunks fails.

Fixes: #4610
2022-10-04 22:00:13 +05:30
Dmitry Simonenko
ea5038f263 Add connection cache invalidation ignore logic
Calling the `ts_dist_cmd_invoke_on_data_nodes_using_search_path()`
function without an active transaction allows a connection invalidation
event to happen between applying `search_path` and the actual command
execution, which leads to an error.

This change introduces a way to ignore connection cache invalidations
using `remote_connection_cache_invalidation_ignore()` function.

This work is based on @nikkhils' original fix and problem research.

Fix #4022
2022-10-04 10:50:45 +03:00
Konstantina Skovola
9bd772de25 Add interface for troubleshooting job failures
This commit gives more visibility into job failures by making the
information regarding a job runtime error available in an extension
table (`job_errors`) that users can directly query.
This commit also adds an informational view on top of the table for
convenience.
To prevent the `job_errors` table from growing too large,
a retention job is also set up with a default retention interval
of 1 month. The retention job is registered with a custom check
function that requires that a valid "drop_after" interval be provided
in the config field of the job.
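
A sketch of querying the new interface (view name per this series;
column names assumed):

    SELECT job_id, proc_name, sqlerrcode, err_message
    FROM timescaledb_information.job_errors
    WHERE job_id = 1000;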
2022-09-30 15:22:27 +02:00
Fabrízio de Royes Mello
893faf8a6b Fix Continuous Aggregate migration policies
After migrating a Continuous Aggregate from the old format to the new
one using the `cagg_migrate` procedure, we end up with the following
problems:
* Refresh policy is not copied from the OLD to the NEW cagg;
* Compression setting is not copied from the OLD to the NEW cagg.

Fixed it by properly copying the refresh policy and setting the
`timescaledb.compress=true` flag on the new CAGG.

Fix #4710
2022-09-22 17:38:21 -03:00
Fabrízio de Royes Mello
217f514657 Fix continuous aggregate migrate check constraint
Instances upgraded to 2.8.0 will end up with a wrong check constraint
in the catalog table `continuous_aggregate_migrate_plan_step`.

Fixed it by dropping the constraint and re-adding it with the correct
checks.

Fix #4727
2022-09-22 11:33:29 -03:00
Fabrízio de Royes Mello
02ad4f6b76 Change parameter names of cagg_migrate procedure
Removed the underscore prefix '_' from the parameter names of
the procedure `cagg_migrate`. The new signature is:

cagg_migrate(
    IN cagg regclass,
    IN override boolean DEFAULT false,
    IN drop_old boolean DEFAULT false
)
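
With the new names, a named-argument call looks like the following
sketch (cagg name illustrative):

    CALL cagg_migrate('conditions_summary_daily', override => true, drop_old => true);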
2022-09-13 17:22:27 -03:00
Sven Klemm
6de979518d Fix compression_chunk_size primary key
The primary key for compression_chunk_size was defined as (chunk_id,
compressed_chunk_id), but other places assumed chunk_id is actually
unique and would error when it was not. Since it makes no sense
to have multiple entries per chunk (the extra reference would point
to a chunk that no longer exists), this patch changes the primary key
to chunk_id only.
2022-09-08 22:28:20 +02:00
Sven Klemm
b34b91f18b Add timezone support to time_bucket_gapfill
This patch adds a new time_bucket_gapfill function that
allows bucketing in a specific timezone.

You can gapfill with explicit timezone like so:
`SELECT time_bucket_gapfill('1 day', time, 'Europe/Berlin') ...`

Unfortunately this introduces an ambiguity with some previous
call variations when an untyped start/finish argument was passed
to the function. Some queries might need to be adjusted to either
explicitly name the positional arguments or resolve the type ambiguity
by casting to the intended type.
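
Two hedged ways to disambiguate an affected call, assuming the
positional arguments are named start/finish:

    -- name the positional arguments explicitly...
    SELECT time_bucket_gapfill('1 day', time,
        start => '2022-01-01', finish => '2022-02-01') ...

    -- ...or cast the untyped literals to the intended type
    SELECT time_bucket_gapfill('1 day', time,
        '2022-01-01'::timestamptz, '2022-02-01'::timestamptz) ...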
2022-09-07 16:37:53 +02:00
Sven Klemm
3722b0bf23 Add 2.8.0 to update tests
Add 2.8.0 to update tests and adjust the downgrade script files.
2022-09-01 18:32:10 +02:00
Sven Klemm
f432d7f931 Release 2.8.0
This release adds major new features since the 2.7.2 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* time_bucket now supports bucketing by month, year and timezone
* Improve performance of bulk SELECT and COPY for distributed hypertables
* 1 step CAgg policy management
* Migrate Continuous Aggregates to the new format

**Features**
* #4188 Use COPY protocol in row-by-row fetcher
* #4307 Mark partialize_agg as parallel safe
* #4380 Enable chunk exclusion for space dimensions in UPDATE/DELETE
* #4384 Add schedule_interval to policies
* #4390 Faster lookup of chunks by point
* #4393 Support intervals with day component when constifying now()
* #4397 Support intervals with month component when constifying now()
* #4405 Support ON CONFLICT ON CONSTRAINT for hypertables
* #4412 Add telemetry about replication
* #4415 Drop remote data when detaching data node
* #4416 Handle TRUNCATE TABLE on chunks
* #4425 Add parameter check_config to alter_job
* #4430 Create index on Continuous Aggregates
* #4439 Allow ORDER BY on continuous aggregates
* #4443 Add stateful partition mappings
* #4484 Use non-blocking data node connections for COPY
* #4495 Support add_dimension() with existing data
* #4502 Add chunks to baserel cache on chunk exclusion
* #4545 Add hypertable distributed argument and defaults
* #4552 Migrate Continuous Aggregates to the new format
* #4556 Add runtime exclusion for hypertables
* #4561 Change get_git_commit to return full commit hash
* #4563 1 step CAgg policy management
* #4641 Allow bucketing by month, year, century in time_bucket and time_bucket_gapfill
* #4642 Add timezone support to time_bucket

**Bugfixes**
* #4359 Create composite index on segmentby columns
* #4374 Remove constified now() constraints from plan
* #4416 Handle TRUNCATE TABLE on chunks
* #4478 Synchronize chunk cache sizes
* #4486 Adding boolean column with default value doesn't work on compressed table
* #4512 Fix unaligned pointer access
* #4519 Throw better error message on incompatible row fetcher settings
* #4549 Fix dump_meta_data for windows
* #4553 Fix timescaledb_post_restore GUC handling
* #4573 Load TSL library on compressed_data_out call
* #4575 Fix use of `get_partition_hash` and `get_partition_for_key` inside an IMMUTABLE function
* #4577 Fix segfaults in compression code with corrupt data
* #4580 Handle default privileges on CAggs properly
* #4582 Fix assertion in GRANT .. ON ALL TABLES IN SCHEMA
* #4583 Fix partitioning functions
* #4589 Fix rename for distributed hypertable
* #4601 Reset compression sequence when group resets
* #4611 Fix a potential OOM when loading large data sets into a hypertable
* #4624 Fix heap buffer overflow
* #4627 Fix telemetry initialization
* #4631 Ensure TSL library is loaded on database upgrades
* #4646 Fix time_bucket_ng origin handling
* #4647 Fix the error "SubPlan found with no parent plan" that occurred if using joins in RETURNING clause.

**Thanks**
* @AlmiS for reporting error on `get_partition_hash` executed inside an IMMUTABLE function
* @Creatation for reporting an issue with renaming hypertables
* @janko for reporting an issue when adding bool column with default value to compressed hypertable
* @jayadevanm for reporting error of TRUNCATE TABLE on compressed chunk
* @michaelkitson for reporting permission errors using default privileges on Continuous Aggregates
* @mwahlhuetter for reporting error in joins in RETURNING clause
* @ninjaltd and @mrksngl for reporting a potential OOM when loading large data sets into a hypertable
* @PBudmark for reporting an issue with dump_meta_data.sql on Windows
* @ssmoss for reporting an issue with time_bucket_ng origin handling
2022-08-31 16:33:21 +02:00
Dmitry Simonenko
c697700add Add hypertable distributed argument and defaults
This PR introduces a new `distributed` argument to the
create_hypertable() function, as well as two new GUCs to
control its default behaviour: timescaledb.hypertable_distributed_default
and timescaledb.hypertable_replication_factor_default.

The main idea of this change is to allow automatic creation
of distributed hypertables by default.
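
A sketch of how the pieces could combine (GUC values assumed; table
name illustrative):

    -- make newly created hypertables distributed by default
    SET timescaledb.hypertable_distributed_default = 'on';
    SET timescaledb.hypertable_replication_factor_default = 2;
    SELECT create_hypertable('metrics', 'time');

    -- or request the behavior explicitly per table
    SELECT create_hypertable('metrics', 'time', distributed => true);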
2022-08-29 17:44:16 +03:00
Fabrízio de Royes Mello
e34218ce29 Migrate Continuous Aggregates to the new format
Timescale 2.7 released a new version of Continuous Aggregate (#4269)
that stores the final aggregation state instead of the byte array of
the partial aggregate state, offering multiple optimization
opportunities as well as a more compact form.

When upgrading to Timescale 2.7, newly created Continuous Aggregates
use the new format, but existing Continuous Aggregates keep
using the format they were defined with.

Created a procedure to upgrade existing Continuous Aggregates from
the old format to the new format, by calling a simple procedure:

test=# CALL cagg_migrate('conditions_summary_daily');

Closes #4424
2022-08-25 17:49:09 -03:00
Sven Klemm
5d934baf1d Add timezone support to time_bucket
This patch adds a new function time_bucket(period, timestamp, timezone)
which supports bucketing for arbitrary timezones.
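
For example (table name illustrative):

    SELECT time_bucket('1 day', time, 'Europe/Berlin') AS bucket, count(*)
    FROM conditions
    GROUP BY 1;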
2022-08-25 12:59:05 +02:00
Konstantina Skovola
dc145b7485 Add parameter check_config to alter_job
Previously users had no way to update the check function
registered with add_job. This commit adds a parameter check_config
to alter_job to allow updating the check function field.

Also, previously the signature expected from a check was of
the form (job_id, config) and there was no validation
that the check function given had the correct signature.
This commit removes the job_id as it is not required and
also checks that the check function has the correct signature
when it is registered with add_job, preventing an error from being
thrown at job runtime.
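
A sketch of the new flow, with an illustrative check function taking
only the config (per this commit) and an assumed job id:

    CREATE OR REPLACE FUNCTION my_check(config jsonb) RETURNS void AS
    $$
    BEGIN
        IF config IS NULL THEN
            RAISE EXCEPTION 'config must not be NULL';
        END IF;
    END
    $$ LANGUAGE plpgsql;

    SELECT alter_job(1000, check_config => 'my_check'::regproc);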
2022-08-25 10:38:03 +03:00