mirror of https://github.com/timescale/timescaledb.git synced 2025-05-19 12:13:24 +08:00

735 Commits

Author SHA1 Message Date
Fabrízio de Royes Mello
c0f2ed1809 Mark cagg_watermark parallel safe
The `cagg_watermark` function performs only read-only operations, so it
is safe to mark it parallel safe and take advantage of Postgres
parallel query.

Since 2.7, when we introduced the new Continuous Aggregate format, we
no longer use partials, and the aggregate functions `partialize_agg`
and `finalize_agg` are not parallel safe, so it makes no sense not to
take advantage of Postgres parallel query for realtime Continuous
Aggregates.
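
As a quick sanity check (a sketch; assumes the function lives in the
`_timescaledb_internal` schema), the new marking can be inspected in
`pg_proc`:

    -- proparallel = 's' means PARALLEL SAFE
    SELECT proname, proparallel
    FROM pg_proc
    WHERE proname = 'cagg_watermark';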
2023-01-31 13:07:19 -03:00
Mats Kindahl
5661ff1523 Add role-level security to job error log
Since the job error log can contain information from many different
sources and many different jobs, it is important to ensure
that visibility of the job error log entries is restricted to job
owners.

This commit extends the view `timescaledb_information.job_errors` with
role-based checks so that a user can only see entries for jobs that she
has permission to view, and restricts the permissions on
`_timescaledb_internal.job_errors` so that users can only view the job
error log through the view. A special case is added so that the
superuser and the database owner can see all log entries, even if there
is no job id associated with the log entry.

Closes 
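
A minimal sketch of the intended visibility model (the role name is
hypothetical and the column names are assumptions):

    -- As a regular role, only errors from jobs that role owns are visible
    SET ROLE alice;
    SELECT job_id, err_message FROM timescaledb_information.job_errors;

    -- Direct reads of the underlying table are restricted to the view
    SELECT * FROM _timescaledb_internal.job_errors;  -- fails for non-superusers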
2023-01-30 12:13:00 +01:00
Bharathy
684637a172 Post-release fixes for 2.9.2
Bumping the previous version and adding tests for 2.9.2
2023-01-25 17:54:54 +05:30
Bharathy
f211294c61 Release 2.9.2
This release contains bug fixes since the 2.9.1 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
*  Fix issue with deleting data node and dropping the database on multi-node
*  Fix creating a CAgg on a CAgg where the time column is in a different order than in the original hypertable
*  Fix adding column with NULL constraint to compressed hypertable
*  Fix CAgg on CAgg variable bucket size validation
*  Fix default data node availability status on multi-node
*  Fix ChunkAppend and ConstraintAwareAppend with TidRangeScan child subplan
*  Fix repartition behavior when attaching data node on multi-node
2023-01-23 15:55:10 +05:30
Fabrízio de Royes Mello
4118a72575 Remove parallel safe from partialize_agg
A previous PR marked `partialize_agg` and `finalize_agg` as parallel
safe, but this change led to incorrect results in some cases.

Those functions are supposed to work in parallel, but it seems that is
not the case, and the root cause and the proper way to use them in
parallel queries are not yet evident, so we decided to revert this
change and provide correct results to users.

Fixes 
2023-01-13 07:31:55 -03:00
Sven Klemm
b92f36d765 Add 2.9.1 to update test scripts 2022-12-27 09:24:57 +01:00
Sven Klemm
93667df7d8 Release 2.9.1
This release contains bug fixes since the 2.9.0 release.
This release is high priority for upgrade. We strongly recommend that you
upgrade as soon as possible.

**Bugfixes**
*  Fix CAgg on CAgg bucket size validation
*  Fix enabling compression on caggs with renamed columns
*  Fix building against PG15 on Windows
*  Fix postgres server restart on background worker exit
*  Fix privileges for job_errors in update script
2022-12-23 14:38:45 +01:00
Konstantina Skovola
0a3615fc70 Fix privileges for job_errors table in update script 2022-12-23 14:05:19 +02:00
Sven Klemm
4527f51e7c Refactor INSERT into compressed chunks
This patch changes INSERTs into compressed chunks to no longer
be compressed immediately but stored in the uncompressed chunk
instead and merged with the compressed chunk later by a separate
job.

This greatly simplifies the INSERT code path, as we no longer have
to rewrite the target of INSERTs and compress on the fly, leading
to a roughly 2x improvement in INSERT rate into compressed chunks.
Additionally, this improves TRIGGER support for INSERTs into
compressed chunks.

This is a necessary refactoring to allow UPSERT/UPDATE/DELETE on
compressed chunks in follow-up patches.
2022-12-21 12:53:29 +01:00
Sven Klemm
08bb21f7e6 2.9.0 Post-release adjustments
Add 2.9.0 to update test scripts and adjust downgrade scripts for
2.9.0. Additionally adjust CHANGELOG to sync with the actual release
CHANGELOG and add PG15 to CI.
2022-12-19 19:10:24 +01:00
Sachin
cd4509c2a3 Release 2.9.0
This release adds major new features since the 2.8.1 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:
* Hierarchical Continuous Aggregates (aka Continuous Aggregate on top of another Continuous Aggregate)
* Improve the `time_bucket_gapfill` function to allow specifying a timezone to bucket by
* Use `alter_data_node()` to change the data node configuration. This function introduces the option to configure the availability of the data node.

This release also includes several bug fixes.

**Features**
*  Batch rows on access node for distributed COPY
*  Exponentially backoff when out of background workers
*  Show warnings when not following best practices
*  Introduce fixed schedules for background jobs
*  Hierarchical Continuous Aggregates
*  Add timezone support to time_bucket_gapfill
*  Add interface for troubleshooting job failures
*  Add ability to merge chunks while compressing
*  Extend the now() optimization to also apply to CURRENT_TIMESTAMP
*  Support parameterized data node scans in joins
*  Add function to change the configuration of a data node
*  Handle DML activity when a data node is not available
*  Add function to drop stale chunks on a data node

**Bugfixes**
*  Don't error when compression metadata is missing
*  Fix now() constification for VIEWs
*  Fix compression_chunk_size primary key
*  Report warning when enabling compression on hypertable
*  Fix FK constraint violation error while insert into hypertable which references partitioned table
*  Improve compression job IO performance
*  Continue compressing other chunks after an error
*  Fix degraded performance seen on timescaledb_internal.hypertable_local_size() function
*  Fix segmentation fault during INSERT into compressed hypertable
*  Fix missing segmentby compression option in CAGGs
*  Fix a crash that could occur when using nested user-defined functions with hypertables
*  Fix performance regressions in the copy code
*  Block multi-statement DDL command in one query
*  Fix cagg migration failure when trying to resume
*  Remove BitmapScan support in DecompressChunk
*  Fix a performance regression in the query planner by speeding up frozen chunk state checks
*  Fix a typo in process_compressed_data_out
*  Cagg migration orphans cagg policy
*  Restrict usage of the old format (pre 2.7) of continuous aggregates in PostgreSQL 15.
*  Fix cagg migration for hypertables using timestamp without timezone
*  Check for interrupts in gapfill main loop
*  Fix cagg migration crash when refreshing the newly created cagg

**Thanks**
* @jflambert for reporting a crash with nested user-defined functions.
* @jvanns for reporting hypertable FK reference to vanilla PostgreSQL partitioned table doesn't seem to work
* @kou for fixing a typo in process_compressed_data_out
* @xvaara for helping reproduce a bug with bitmap scans in transparent decompression
* @byazici for reporting a problem with segmentby on compressed caggs
* @tobiasdirksen for requesting Continuous aggregate on top of another continuous aggregate
* @xima for reporting a bug in Cagg migration
2022-12-05 19:33:45 +05:30
Sven Klemm
3b94b996f2 Use custom node to block frozen chunk modifications
This patch changes the code that blocks frozen chunk
modifications to no longer use triggers but a custom
node instead. Frozen chunks are a timescaledb-internal object
and should therefore not be protected by a TRIGGER, which is
external and creates several hazards. First, TRIGGERs created
to protect internal state contend with user-created triggers.
Second, the trigger created to protect frozen chunks does not
work well with our restoring GUC, which we use when restoring
logical dumps. Third, triggers do not cover internal
operations but only work in code paths that explicitly added
trigger support.
2022-11-25 19:56:48 +01:00
Dmitry Simonenko
5813173e07 Introduce drop_stale_chunks() function
This function drops chunks on a specified data node if those chunks are
not known by the access node.

Call drop_stale_chunks() automatically when a data node becomes
available again.

Fix 
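
A usage sketch (the node name is hypothetical, and the exact internal
schema and signature are assumptions):

    -- Drop chunks on data node 'dn1' that the access node no longer knows about
    SELECT _timescaledb_internal.drop_stale_chunks('dn1');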
2022-11-23 19:21:05 +02:00
Fabrízio de Royes Mello
e84a6e2e65 Remove the refresh step from CAgg migration
We're facing some weird `portal snapshot` issues when running the
`refresh_continuous_aggregate` procedure called from other procedures.

Fixed it by skipping the Refresh Continuous Aggregate step in
`cagg_migrate` and warning users to run it manually after the execution.

Fixes 
2022-11-22 16:49:13 -03:00
Fabrízio de Royes Mello
a4356f342f Remove trailing whitespaces from test code 2022-11-18 16:31:47 -03:00
Fabrízio de Royes Mello
3749953e97 Hierarchical Continuous Aggregates
Enable users to create Hierarchical Continuous Aggregates (aka
Continuous Aggregates on top of other Continuous Aggregates).

With this PR users can create levels of aggregation granularity in
Continuous Aggregates, making the refresh process even faster.

A caveat of this feature is that at upper levels we can end up with
an "average of averages". To get the "real average" we can rely on the
"stats_agg" TimescaleDB Toolkit function, which calculates and stores
the partials that can be finalized with other Toolkit functions like
"average" and "sum".

Closes 
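
For illustration, a two-level hierarchy might look like this (a sketch;
the `conditions` hypertable and column names are hypothetical):

    CREATE MATERIALIZED VIEW conditions_daily
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 day', time) AS bucket, avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY 1;

    -- A Continuous Aggregate defined on top of another Continuous Aggregate
    CREATE MATERIALIZED VIEW conditions_weekly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('7 days', bucket) AS bucket, avg(avg_temp) AS avg_temp
    FROM conditions_daily
    GROUP BY 1;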
2022-11-18 14:34:18 -03:00
Jan Nidzwetzki
380464df9b Perform frozen chunk status check via trigger
The commit 9f4dcea30135d1e36d1c452d631fc8b8743b3995 introduces frozen
chunks. Checking whether a chunk is frozen or not has been done so far
in the query planner. If it is not possible to determine which chunks
are affected by a query in the planner (e.g., due to a cast in the WHERE
condition), all chunks are checked. This leads (1) to an increased
planning time and (2) to the situation that a single frozen chunk could
reject queries, even if the frozen chunk is not addressed by the query.
2022-11-18 15:29:49 +01:00
Sachin
1e3200be7d Use C function for time_bucket() offset
Instead of using a SQL UDF for handling the offset parameter,
add ts_timestamp/tz/date_offset_bucket(), which handles the
offset.
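
For reference, a call using the offset variant that this change speeds
up (a sketch; the values are arbitrary):

    -- Week buckets shifted by one day, now computed by a C function
    SELECT time_bucket('1 week', timestamp '2022-11-17 13:00:00', '1 day'::interval);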
2022-11-17 13:08:19 +00:00
Bharathy
8afdddc2da Deprecate continuous aggregates with old format
This patch reports a warning when upgrading to the new timescaledb
extension if there exist any caggs with partial aggregates (release
builds only). It also restricts users from creating caggs with the old
format on timescaledb with PG15.
2022-11-15 08:38:03 +05:30
Fabrízio de Royes Mello
6ae192631e Fix CAgg migration with timestamp without timezone
This was a leftover from the original implementation, where we didn't
add tests for a time dimension using `timestamp without time zone`.

Fixed it by handling this datatype and adding regression tests.

Fixes 
2022-11-11 15:25:01 -03:00
Erik Nordström
f13214891c Add function to alter data nodes
Add a new function, `alter_data_node()`, which can be used to change
the data node's configuration originally set up via `add_data_node()`
on the access node.

The new function introduces a new option "available" that allows
configuring the availability of the data node. Setting
`available=>false` means that the node should no longer be used for
reads and writes. Only read "failover" is implemented as part of this
change, however.

To fail over reads, the alter data node function finds all the chunks
for which the unavailable data node is the "primary" query target and
"fails over" to a chunk replica on another data node instead. If some
chunks do not have a replica to fail over to, a warning will be
raised.

When a data node is available again, the function can be used to
switch back to using the data node for queries.

Closes 
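
A usage sketch (the node name is hypothetical):

    -- Mark a data node as unavailable; reads fail over to chunk replicas
    SELECT alter_data_node('data_node_1', available => false);

    -- Switch back to using the node once it is reachable again
    SELECT alter_data_node('data_node_1', available => true);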
2022-11-11 13:59:42 +01:00
Fabrízio de Royes Mello
bfef3173bc Refactor CAgg migration code to use job API
The current implementation updates the jobs table directly; to make it
consistent with other parts of the code, we changed it to use the
`alter_job` API instead to enable and disable the jobs during the
migration. This refactoring is related to .
2022-11-08 17:41:01 -03:00
Sven Klemm
3059290bea Add new chunk state CHUNK_STATUS_COMPRESSED_PARTIAL
A chunk is in this state when it is compressed but also has
uncompressed data in the uncompressed chunk. Individual tuples
can only ever exist in one of the two areas. This is a preparation
patch for adding an uncompressed staging area for DML operations.
2022-11-07 13:32:37 +01:00
Fabrízio de Royes Mello
6c73b61b99 Fix orphan jobs after CAgg migration
When using `override => true`, the migration procedure renames the
current cagg using the suffix `_old` and renames the newly created one
with the suffix `_new` to the original name.

The problem was that the `copy policies` step was executed after the
`override` step, so we couldn't find the new cagg name because it had
already been renamed to the original name, leaving the policy orphaned
(without a connection to the materialization hypertable).

Fixed it by reordering the steps, executing `copy policies` before the
`override` step. Also made some adjustments to properly copy all
`bgw_job` columns even if this catalog table has changed.

Fixes 
2022-11-05 12:45:24 -03:00
Konstantina Skovola
c54cf3ea56 Add job execution statistics to telemetry
This patch adds two new fields to the telemetry report,
`stats_by_job_type` and `errors_by_sqlerrcode`. Both report results
grouped by job type (different types of policies or
user-defined actions).
The patch also adds a new field to the `bgw_job_stats` table,
`total_duration_errors` to separate the duration of the failed runs
from the duration of successful ones.
2022-11-04 11:06:01 +02:00
Fabrízio de Royes Mello
7dd45cf348 Fix failure resuming a CAgg migration
Trying to resume a failed Continuous Aggregate migration raises an
exception that the migration plan already exists, but this is wrong: the
expected behaviour is to resume the migration and continue from the
last failed step.
2022-11-03 14:18:53 -03:00
Ante Kresic
2475c1b92f Roll up uncompressed chunks into compressed ones
This change introduces a new option to the compression procedure which
decouples the uncompressed chunk interval from the compressed chunk
interval. It does this by allowing multiple uncompressed chunks into one
compressed chunk as part of the compression procedure. The main use-case
is to allow much smaller uncompressed chunks than compressed ones. This
has several advantages:
- Reduce the size of btrees on uncompressed data (thus allowing faster
inserts because those indexes are memory-resident).
- Decrease disk-space usage for uncompressed data.
- Reduce number of chunks over historical data.

From a UX point of view, we simply add a compression WITH-clause option
`compress_chunk_time_interval`. The user should set that according to
their needs for constraint exclusion over historical data. Ideally, it
should be a multiple of the uncompressed chunk interval and so we throw
a warning if it is not.
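
A sketch of the new option (the table name and intervals are
hypothetical):

    -- Roll 1-hour uncompressed chunks into 24-hour compressed chunks
    ALTER TABLE metrics SET (
        timescaledb.compress,
        timescaledb.compress_chunk_time_interval = '24 hours'
    );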
2022-11-02 15:14:18 +01:00
Erik Nordström
4b05402580 Add health check function
A new health check function _timescaledb_internal.health() returns the
health and status of the database instance, including any configured
data nodes (in case the instance is an access node).

Since the function also returns the health of the data nodes, it tries
hard to avoid throwing errors. An error will fail the whole function
and therefore not return any node statuses, although some of the nodes
might be healthy.

The health check on the data nodes is a recursive (remote) call to the
same function on those nodes. Unfortunately, the check will fail with
an error if a connection cannot be established to a node (or an error
occurs on the connection), which means the whole function call will
fail. This will be addressed in a future change by returning the error
in the function result instead.
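
Usage is a plain function scan (a sketch of the call):

    SELECT * FROM _timescaledb_internal.health();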
2022-10-21 10:34:16 +02:00
Konstantina Skovola
54ed0d5c05 Introduce fixed schedules for background jobs
Currently, the next start of a scheduled background job is
calculated by adding the `schedule_interval` to its finish
time. This does not allow scheduling jobs to execute at fixed
times, as the next execution is "shifted" by the job duration.

This commit introduces the option to execute a job on a fixed
schedule instead. Users are expected to provide an initial_start
parameter on which subsequent job executions are aligned. The next
start is calculated by computing the next time_bucket of the finish
time with initial_start origin.
An `initial_start` parameter is added to the compression, retention,
reorder and continuous aggregate `add_policy` signatures. By passing
that upon policy creation users indicate the policy will execute on
a fixed schedule, or drifting schedule if `initial_start` is not
provided.
To allow users to pick a drifting schedule when registering a UDA,
an additional parameter `fixed_schedule` is added to `add_job`
to allow users to specify the old behavior by setting it to false.

Additionally, an optional TEXT parameter, `timezone`, is added to both
add_job and add_policy signatures, to address the 1-hour shift in
execution time caused by DST switches. As internally the next start of
a fixed schedule job is calculated using time_bucket, the timezone
parameter allows using timezone-aware buckets to calculate
the next start.
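
A sketch of registering a fixed-schedule job (the procedure name, start
time, and timezone are hypothetical):

    -- Executions align to initial_start; the timezone keeps the start
    -- time stable across DST switches
    SELECT add_job('my_maintenance_proc', '1 day',
                   initial_start => '2022-10-19 00:00:00+03',
                   timezone => 'Europe/Athens');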
2022-10-18 18:46:57 +03:00
Jan Nidzwetzki
2f739bb328 Post-release fixes for 2.8.1
Bumping the previous version and adding tests for 2.8.1.
2022-10-07 10:10:22 +02:00
Sven Klemm
d2f0c4ed20 Fix update script handling of bgw_job_stat
Update scripts should not use ADD/DROP/RENAME and should always rebuild
catalog tables to ensure the objects are identical between new
installs, upgrades, and downgrades.
2022-10-05 23:31:01 +02:00
Fabrízio de Royes Mello
a76f76f4ee Improve size utils functions and views performance
Changed queries to use LATERAL joins on size functions and views
instead of CTEs; this eliminates a lot of unnecessary projections and
gives the planner a chance to push down predicates.

Closes 
2022-10-05 17:40:28 -03:00
Jan Nidzwetzki
12b7b9f665 Release 2.8.1
This release is a patch release. We recommend that you upgrade at the
next available opportunity.

**Bugfixes**
*  Keep locks after reading job status
*  Fix error when querying a compressed hypertable with compress_segmentby on an enum column
*  Fix a possible error while flushing the COPY data
*  Fix bad TupleTableSlot drop
*  Fix a deadlock when decompressing chunks and performing SELECTs
*  Fix chunk exclusion for space partitions in SELECT FOR UPDATE queries
*  Change parameter names of cagg_migrate procedure
*  Do not use row-by-row fetcher for parameterized plans
*  Remove support for procedures as custom checks
*  Fix assertion failure in constify_now
*  Fix Continuous Aggregate migration policies
*  Fix chunk exclusion for prepared statements and dst changes
*  Fix gapfill function signature
*  Fix join on time column of compressed chunk
*  Fix error when waiting for remote COPY to finish
*  Fix continuous aggregate migrate check constraint
*  Fix segfault when INNER JOINing hypertables
*  Fix permission issues on index creation for CAggs

**Thanks**
* @boxhock and @cocowalla for reporting a segfault when JOINing hypertables
* @carobme for reporting constraint error during continuous aggregate migration
* @choisnetm, @dustinsorensen, @jayadevanm and @joeyberkovitz for reporting a problem with JOINs on compressed hypertables
* @daniel-k for reporting a background worker crash
* @justinpryzby for reporting an error when compressing very wide tables
* @maxtwardowski for reporting problems with chunk exclusion and space partitions
* @yuezhihan for reporting GROUP BY error when having compress_segmentby on an enum column
2022-10-05 14:40:25 +02:00
Bharathy
f1c6fd97a3 Continue compressing other chunks after an error
When a compression_policy is executed by a background worker, the policy
should continue to execute even if compressing or decompressing one of
the chunks fails.

Fixes: 
2022-10-04 22:00:13 +05:30
Dmitry Simonenko
ea5038f263 Add connection cache invalidation ignore logic
Calling the `ts_dist_cmd_invoke_on_data_nodes_using_search_path()`
function without an active transaction allows a connection invalidation
event to happen between applying `search_path` and the actual command
execution, which leads to an error.

This change introduces a way to ignore connection cache invalidations
using the `remote_connection_cache_invalidation_ignore()` function.

This work is based on @nikkhils' original fix and problem research.

Fix 
2022-10-04 10:50:45 +03:00
Konstantina Skovola
9bd772de25 Add interface for troubleshooting job failures
This commit gives more visibility into job failures by making the
information regarding a job runtime error available in an extension
table (`job_errors`) that users can directly query.
This commit also adds an informational view on top of the table for
convenience.
To prevent the `job_errors` table from growing too large,
a retention job is also set up with a default retention interval
of 1 month. The retention job is registered with a custom check
function that requires that a valid "drop_after" interval be provided
in the config field of the job.
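
Querying the view might look like this (a sketch; the column names are
assumptions):

    SELECT job_id, proc_name, sqlerrcode, err_message
    FROM timescaledb_information.job_errors;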
2022-09-30 15:22:27 +02:00
Fabrízio de Royes Mello
893faf8a6b Fix Continuous Aggregate migration policies
After migrating a Continuous Aggregate from the old format to the new
one using the `cagg_migrate` procedure, we end up with the following
problems:
* The refresh policy is not copied from the OLD to the NEW cagg;
* The compression setting is not copied from the OLD to the NEW cagg.

Fixed it by properly copying the refresh policy and setting the
`timescaledb.compress=true` flag on the new CAGG.

Fix 
2022-09-22 17:38:21 -03:00
Fabrízio de Royes Mello
217f514657 Fix continuous aggregate migrate check constraint
Instances upgraded to 2.8.0 will end up with a wrong check constraint
in catalog table `continuous_aggregate_migrate_plan_step`.

Fixed it by removing and adding the constraint with the correct checks.

Fix 
2022-09-22 11:33:29 -03:00
Fabrízio de Royes Mello
02ad4f6b76 Change parameter names of cagg_migrate procedure
Removed the underscore character prefix '_' from the parameter names of
the procedure `cagg_migrate`. The new signature is:

cagg_migrate(
    IN cagg regclass,
    IN override boolean DEFAULT false,
    IN drop_old boolean DEFAULT false
)
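
With the renamed parameters, a call using named notation might look
like this (the cagg name is hypothetical):

    CALL cagg_migrate('conditions_summary_daily',
                      override => true, drop_old => true);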
2022-09-13 17:22:27 -03:00
Sven Klemm
6de979518d Fix compression_chunk_size primary key
The primary key for compression_chunk_size was defined as (chunk_id,
compressed_chunk_id), but other places assumed chunk_id was actually
unique and would error when it was not. Since it makes no sense to have
multiple entries per chunk (such an extra entry would reference a no
longer existing chunk), this patch changes the primary key to chunk_id
only.
2022-09-08 22:28:20 +02:00
Sven Klemm
b34b91f18b Add timezone support to time_bucket_gapfill
This patch adds a new time_bucket_gapfill function that
allows bucketing in a specific timezone.

You can gapfill with explicit timezone like so:
`SELECT time_bucket_gapfill('1 day', time, 'Europe/Berlin') ...`

Unfortunately this introduces an ambiguity with some previous
call variations when an untyped start/finish argument was passed
to the function. Some queries might need to be adjusted and either
explicitly name the positional argument or resolve the type ambiguity
by casting to the intended type.
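
Resolving the ambiguity with named, explicitly typed arguments might
look like this (a sketch; the table and columns are hypothetical):

    SELECT time_bucket_gapfill('1 day', time, 'Europe/Berlin',
                               start => '2022-01-01'::timestamptz,
                               finish => '2022-02-01'::timestamptz) AS bucket,
           avg(value)
    FROM metrics
    WHERE time >= '2022-01-01' AND time < '2022-02-01'
    GROUP BY 1;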
2022-09-07 16:37:53 +02:00
Sven Klemm
3722b0bf23 Add 2.8.0 to update tests
Add 2.8.0 to update tests and adjust the downgrade script files.
2022-09-01 18:32:10 +02:00
Sven Klemm
f432d7f931 Release 2.8.0
This release adds major new features since the 2.7.2 release.
We deem it moderate priority for upgrading.

This release includes these noteworthy features:

* time_bucket now supports bucketing by month, year and timezone
* Improve performance of bulk SELECT and COPY for distributed hypertables
* 1 step CAgg policy management
* Migrate Continuous Aggregates to the new format

**Features**
*  Use COPY protocol in row-by-row fetcher
*  Mark partialize_agg as parallel safe
*  Enable chunk exclusion for space dimensions in UPDATE/DELETE
*  Add schedule_interval to policies
*  Faster lookup of chunks by point
*  Support intervals with day component when constifying now()
*  Support intervals with month component when constifying now()
*  Support ON CONFLICT ON CONSTRAINT for hypertables
*  Add telemetry about replication
*  Drop remote data when detaching data node
*  Handle TRUNCATE TABLE on chunks
*  Add parameter check_config to alter_job
*  Create index on Continuous Aggregates
*  Allow ORDER BY on continuous aggregates
*  Add stateful partition mappings
*  Use non-blocking data node connections for COPY
*  Support add_dimension() with existing data
*  Add chunks to baserel cache on chunk exclusion
*  Add hypertable distributed argument and defaults
*  Migrate Continuous Aggregates to the new format
*  Add runtime exclusion for hypertables
*  Change get_git_commit to return full commit hash
*  1 step CAgg policy management
*  Allow bucketing by month, year, century in time_bucket and time_bucket_gapfill
*  Add timezone support to time_bucket

**Bugfixes**
*  Create composite index on segmentby columns
*  Remove constified now() constraints from plan
*  Handle TRUNCATE TABLE on chunks
*  Synchronize chunk cache sizes
*  Adding boolean column with default value doesn't work on compressed table
*  Fix unaligned pointer access
*  Throw better error message on incompatible row fetcher settings
*  Fix dump_meta_data for windows
*  Fix timescaledb_post_restore GUC handling
*  Load TSL library on compressed_data_out call
*  Fix use of `get_partition_hash` and `get_partition_for_key` inside an IMMUTABLE function
*  Fix segfaults in compression code with corrupt data
*  Handle default privileges on CAggs properly
*  Fix assertion in GRANT .. ON ALL TABLES IN SCHEMA
*  Fix partitioning functions
*  Fix rename for distributed hypertable
*  Reset compression sequence when group resets
*  Fix a potential OOM when loading large data sets into a hypertable
*  Fix heap buffer overflow
*  Fix telemetry initialization
*  Ensure TSL library is loaded on database upgrades
*  Fix time_bucket_ng origin handling
*  Fix the error "SubPlan found with no parent plan" that occurred if using joins in RETURNING clause.

**Thanks**
* @AlmiS for reporting error on `get_partition_hash` executed inside an IMMUTABLE function
* @Creatation for reporting an issue with renaming hypertables
* @janko for reporting an issue when adding bool column with default value to compressed hypertable
* @jayadevanm for reporting error of TRUNCATE TABLE on compressed chunk
* @michaelkitson for reporting permission errors using default privileges on Continuous Aggregates
* @mwahlhuetter for reporting error in joins in RETURNING clause
* @ninjaltd and @mrksngl for reporting a potential OOM when loading large data sets into a hypertable
* @PBudmark for reporting an issue with dump_meta_data.sql on Windows
* @ssmoss for reporting an issue with time_bucket_ng origin handling
2022-08-31 16:33:21 +02:00
Dmitry Simonenko
c697700add Add hypertable distributed argument and defaults
This PR introduces a new `distributed` argument to the
create_hypertable() function as well as two new GUCs to
control its default behaviour: timescaledb.hypertable_distributed_default
and timescaledb.hypertable_replication_factor_default.

The main idea of this change is to allow automatic creation
of distributed hypertables by default.
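
A sketch of how the new defaults could be used (the GUC values and the
table name are assumptions):

    -- Make newly created hypertables distributed by default
    SET timescaledb.hypertable_distributed_default = 'distributed';
    SET timescaledb.hypertable_replication_factor_default = 2;

    SELECT create_hypertable('metrics', 'time');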
2022-08-29 17:44:16 +03:00
Fabrízio de Royes Mello
e34218ce29 Migrate Continuous Aggregates to the new format
Timescale 2.7 released a new version of Continuous Aggregate ()
that stores the final aggregation state instead of the byte array of
the partial aggregate state, offering multiple opportunities for
optimization as well as a more compact form.

When upgrading to Timescale 2.7, newly created Continuous Aggregates
use the new format, but existing Continuous Aggregates keep
using the format they were defined with.

Created a procedure to upgrade existing Continuous Aggregates from
the old format to the new format, by calling a simple procedure:

test=# CALL cagg_migrate('conditions_summary_daily');

Closes 
2022-08-25 17:49:09 -03:00
Sven Klemm
5d934baf1d Add timezone support to time_bucket
This patch adds a new function time_bucket(period,timestamp,timezone)
which supports bucketing for arbitrary timezones.
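
A sketch of a call (the timezone is arbitrary):

    SELECT time_bucket('1 day', now(), 'Europe/Berlin');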
2022-08-25 12:59:05 +02:00
Konstantina Skovola
dc145b7485 Add parameter check_config to alter_job
Previously users had no way to update the check function
registered with add_job. This commit adds a parameter check_config
to alter_job to allow updating the check function field.

Also, previously the signature expected from a check was of
the form (job_id, config) and there was no validation
that the check function given had the correct signature.
This commit removes the job_id as it is not required and
also checks that the check function has the correct signature
when it is registered with add_job, preventing an error being
thrown at job runtime.
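
Updating the check function might look like this (the job id and
function name are hypothetical):

    SELECT alter_job(1015, check_config => 'my_check_function'::regproc);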
2022-08-25 10:38:03 +03:00
Mats Kindahl
e0f3e17575 Use new validation functions
The old patch was using old validation functions, but there are already
validation functions that both read and validate the policy, so use
those instead. Also remove the old `job_config_check` function, since
it is no longer used, and instead add a `job_config_check` that calls
the checking function with the configuration.
2022-08-25 10:38:03 +03:00
gayyappan
c643173b8b Filter out osm chunks from chunks information view
Modify the timescaledb_information.chunks
view to filter out OSM chunks. We do not want
users to invoke chunk-specific operations on
OSM chunks.
2022-08-22 09:59:57 -04:00
gayyappan
6beda28965 Modify chunk exclusion to include OSM chunks
OSM chunks manage their own ranges, and the timescale
catalog has dummy ranges for these dimensions,
so the chunk exclusion logic cannot rely on the
timescaledb catalog metadata to exclude an OSM chunk.
2022-08-18 09:32:21 -04:00