891 Commits

Ante Kresic
5c2c80f845 Fix removal of metadata function and update script
Change the code to remove the assumption of a 1:1
mapping between chunks and chunk constraints, and add
a check for whether a chunk constraint is shared with another chunk.
In that case, skip deleting the dimension slice.
2024-06-05 13:58:57 +02:00
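A minimal SQL sketch of that check (not part of the commit), assuming the `_timescaledb_catalog.chunk_constraint` and `_timescaledb_catalog.dimension_slice` catalog tables; `:chunk_id` and `:slice_id` are placeholder parameters:

    -- Delete the dimension slice only if no other chunk constraint still references it
    DELETE FROM _timescaledb_catalog.dimension_slice ds
    WHERE ds.id = :slice_id
      AND NOT EXISTS (
        SELECT 1 FROM _timescaledb_catalog.chunk_constraint cc
        WHERE cc.dimension_slice_id = ds.id AND cc.chunk_id <> :chunk_id);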
Fabrízio de Royes Mello
438736f6bd Post release 2.15.1 2024-05-30 14:08:38 -03:00
Fabrízio de Royes Mello
e0a21a060b Release 2.15.1
This release contains bug fixes since the 2.15.0 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6540 Segmentation fault when backfilling data with COPY into a compressed chunk
* #6858 Before update trigger not working correctly
* #6908 Fix gapfill with timezone behaviour around dst switches
* #6911 Fix dropped chunk metadata removal in update script
* #6940 Fix `pg_upgrade` failure by removing `regprocedure` from catalog table
* #6957 Fix segfault in UNION queries with ordering on compressed chunks

**Thanks**
* @DiAifU, @kiddhombre and @intermittentnrg for reporting issues with gapfill and daylight saving time
* @edgarzamora for reporting issue with update triggers
* @hongquan for reporting an issue with the update script
* @iliastsa and @SystemParadox for reporting an issue with COPY into a compressed chunk
2024-05-29 09:03:09 -03:00
Fabrízio de Royes Mello
8b994c717d Remove regprocedure oid type from catalog
In #6624 we refactored the time bucket catalog table to make it more
generic and save information for all Continuous Aggregates. Previously
it stored only variable bucket size information.

The problem is we used the `regprocedure` type to store the OID of the
given time bucket function but unfortunately it is not supported by
`pg_upgrade`.

Fix it by changing the column to TEXT and resolving to/from OID using the
builtin `regprocedurein` and `format_procedure_qualified` functions.

Fixes #6935
2024-05-22 11:01:56 -03:00
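At the SQL level, the to/from OID conversion this describes can be sketched with plain casts (backed by `regprocedurein` on input); the function signature and OID below are illustrative only:

    -- text -> OID of the bucket function
    SELECT 'time_bucket(interval, timestamp with time zone)'::regprocedure::oid;
    -- OID -> schema-qualified text representation
    SELECT 16384::oid::regprocedure::text;  -- 16384 is a placeholder OID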
Sven Klemm
f9ccf1be07 Fix _timescaledb_functions.remove_dropped_chunk_metadata
The removal function would only remove chunk_constraints that are
part of dimension constraints. This patch changes it to remove all
constraints of a chunk.
2024-05-21 17:31:18 +02:00
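A hedged sketch of how the update script might call this function for chunks marked as dropped (the integer chunk-id signature is an assumption):

    SELECT _timescaledb_functions.remove_dropped_chunk_metadata(c.id)
    FROM _timescaledb_catalog.chunk c
    WHERE c.dropped;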
Sven Klemm
009d970af9 Remove PG13 support
In order to keep the number of supported PG versions more manageable,
remove support for PG13.
2024-05-14 17:19:20 +02:00
Fabrízio de Royes Mello
ca125cf620 Post-release changes for 2.15.0. 2024-05-07 16:44:43 -03:00
Fabrízio de Royes Mello
defe4ef581 Release 2.15.0
This release contains performance improvements and bug fixes since
the 2.14.2 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:
* Support `time_bucket` with `origin` and/or `offset` on Continuous Aggregate
* Compression improvements:
  - Improve expression pushdown
  - Add minmax sparse indexes when compressing columns with btree indexes
  - Make compression use the defaults functions
  - Vectorize filters in WHERE clause that contain text equality operators and LIKE expressions

**Deprecation warning**
* Starting with this release, it will no longer be possible to create Continuous Aggregates using `time_bucket_ng`, and the function will be completely removed in upcoming releases.
* We recommend users [migrate their old Continuous Aggregate format to the new one](https://docs.timescale.com/use-timescale/latest/continuous-aggregates/migrate/), because support for the old format will be completely removed in upcoming releases, which will prevent them from migrating later.
* This is the last release supporting PostgreSQL 13.

**For on-premise users and this release only**, you will need to run [this SQL script](https://github.com/timescale/timescaledb-extras/blob/master/utils/2.15.X-fix_hypertable_foreign_keys.sql) after running `ALTER EXTENSION`. More details can be found in the pull request [#6797](https://github.com/timescale/timescaledb/pull/6797).

**Features**
* #6382 Support for time_bucket with origin and offset in CAggs
* #6696 Improve defaults for compression segment_by and order_by
* #6705 Add sparse minmax indexes for compressed columns that have uncompressed btree indexes
* #6754 Allow DROP CONSTRAINT on compressed hypertables
* #6767 Add metadata table `_timescaledb_internal.bgw_job_stat_history` for tracking job execution history
* #6798 Prevent usage of deprecated time_bucket_ng in CAgg definition
* #6810 Add telemetry for access methods
* #6811 Remove no longer relevant timescaledb.allow_install_without_preload GUC
* #6837 Add migration path for CAggs using time_bucket_ng
* #6865 Update the watermark when truncating a CAgg

**Bugfixes**
* #6617 Fix error in show_chunks
* #6621 Remove metadata when dropping chunks
* #6677 Fix snapshot usage in CAgg invalidation scanner
* #6698 Define meaning of 0 retries for jobs as no retries
* #6717 Fix handling of compressed tables with primary or unique index in COPY path
* #6726 Fix constify cagg_watermark using window function when querying a CAgg
* #6729 Fix NULL start value handling in CAgg refresh
* #6732 Fix CAgg migration with custom timezone / date format settings
* #6752 Remove custom autovacuum setting from compressed chunks
* #6770 Fix plantime chunk exclusion for OSM chunk
* #6789 Fix deletes with subqueries and compression
* #6796 Fix a crash involving a view on a hypertable
* #6797 Fix foreign key constraint handling on compressed hypertables
* #6816 Fix handling of chunks with no constraints
* #6820 Fix a crash when the ts_hypertable_insert_blocker was called directly
* #6849 Use non-orderby compressed metadata in compressed DML
* #6867 Clean up compression settings when deleting compressed cagg
* #6869 Fix compressed DML with constraints of form value OP column
* #6870 Fix bool expression pushdown for queries on compressed chunks

**Thanks**
* @brasic for reporting a crash when the ts_hypertable_insert_blocker was called directly
* @bvanelli for reporting an issue with the jobs retry count
* @djzurawsk for reporting an error when dropping chunks
* @Dzuzepppe for reporting an issue with DELETEs using subquery on compressed chunk working incorrectly.
* @hongquan for reporting a 'timestamp out of range' error during CAgg migrations
* @kevcenteno for reporting an issue with the show_chunks API showing incorrect output when 'created_before/created_after' was used with time-partitioned columns.
* @mahipv for starting work on the job history PR
* @rovo89 for reporting constify cagg_watermark not working using window function when querying a CAgg
2024-05-06 12:40:40 -03:00
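To illustrate the `time_bucket` with `origin`/`offset` support on Continuous Aggregates noted above, a minimal sketch (the `metrics` hypertable and its columns are assumptions):

    CREATE MATERIALIZED VIEW metrics_weekly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('7 days', ts, origin => '2000-01-03 00:00:00+00') AS bucket,
           device_id,
           avg(value) AS avg_value
    FROM metrics
    GROUP BY bucket, device_id;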
Sven Klemm
e298ecd532 Don't reuse job id
We shouldn't reuse job ids, so that it is easy to recognize the job
log entries for a job. We also need to keep the old job around
to not break loading dumps from older versions.
2024-05-03 09:05:57 +02:00
Jan Nidzwetzki
f88899171f Add migration for CAggs using time_bucket_ng
The function time_bucket_ng is deprecated. This PR adds a migration path
for existing CAggs. Since time_bucket and time_bucket_ng use different
origin values, a custom origin is set if needed to let time_bucket
create the same buckets as created by time_bucket_ng so far.
2024-04-25 16:08:48 +02:00
Jan Nidzwetzki
3ad948163c Remove MN leftover create distributed hypertable
This commit removes an unused function that was used in MN setups to
create distributed hypertables.
2024-04-25 10:55:27 +02:00
James Guthrie
95b4e246eb Remove stray comment missed in 0b23bab4 2024-04-23 11:39:31 +01:00
Fabrízio de Royes Mello
866ffba40e Migrate missing job info to history table
In #6767 and #6831 we introduced the ability to track job execution
history including succeeded and failed jobs.

We migrate records from the old `_timescaledb_internal.job_errors` to
the new `_timescaledb_internal.bgw_job_stat_history` table, but we missed
getting the job information into the JSONB field where we store detailed
information about the job execution.
2024-04-22 14:06:59 -03:00
Fabrízio de Royes Mello
66c0702d3b Refactor job execution history table
In #6767 we introduced the ability to track job execution history
including succeeded and failed jobs.

The new metadata table `_timescaledb_internal.bgw_job_stat_history` has
two JSONB columns: `config` (storing config information) and `error_data`
(storing the ErrorData information). The problem is that this approach is
not flexible for future history recording changes, so this PR refactors
the current implementation to use only one JSONB column named `data`
that stores more job information in the following form:

{
  "job": {
    "owner": "fabrizio",
    "proc_name": "error",
    "scheduled": true,
    "max_retries": -1,
    "max_runtime": "00:00:00",
    "proc_schema": "public",
    "retry_period": "00:05:00",
    "initial_start": "00:05:00",
    "fixed_schedule": true,
    "schedule_interval": "00:00:30"
  },
  "config": {
    "bar": 1
  },
  "error_data": {
    "domain": "postgres-16",
    "lineno": 841,
    "context": "SQL statement \"SELECT 1/0\"\nPL/pgSQL function error(integer,jsonb) line 3 at PERFORM",
    "message": "division by zero",
    "filename": "int.c",
    "funcname": "int4div",
    "proc_name": "error",
    "sqlerrcode": "22012",
    "proc_schema": "public",
    "context_domain": "plpgsql-16"
  }
}
2024-04-19 09:19:23 -03:00
Fabrízio de Royes Mello
52094a3103 Track job execution history
In #4678 we added an interface for troubleshooting job failures by
logging them in the metadata table `_timescaledb_internal.job_errors`.

With this PR we extended the existing interface to also store succeeded
executions. A new GUC named `timescaledb.enable_job_execution_logging`
was added to control this new behavior and the default value is `false`.

We renamed the metadata table to `_timescaledb_internal.bgw_job_stat_history`
and added a new view `timescaledb_information.job_history` so that users
with enough permissions can check the job execution history.
2024-04-04 10:39:28 -03:00
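A short usage sketch of this interface: enable the (off-by-default) GUC, then read the new view.

    -- record successful executions in addition to failures
    ALTER SYSTEM SET timescaledb.enable_job_execution_logging = 'on';
    SELECT pg_reload_conf();

    -- inspect the recorded history
    SELECT * FROM timescaledb_information.job_history;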
Jan Nidzwetzki
8beee08f43 Exclude OSM chunks from enabling autovacuum
Storage options cannot be set for OSM chunks. Therefore, they need to be
excluded when resetting the autovacuum storage option.
2024-03-20 11:15:23 +01:00
Matvey Arye
ebb2556ab3 Make compression use the defaults functions
Previously, we created functions to calculate default order by and
segment by values. This PR uses those functions by default
when compression is enabled. We also added GUCs to disable those
functions or to use alternative functions for the defaults calculation.
2024-03-18 12:35:27 -04:00
Jan Nidzwetzki
8dcb6eed99 Populate CAgg bucket catalog table for all CAggs
This changes the behavior of the CAgg catalog tables. From now on, all
CAggs that use a time_bucket function create an entry in the catalog
table continuous_aggs_bucket_function. In addition, the duplicate
bucket_width attribute is removed from the catalog table continuous_agg.
2024-03-13 16:40:56 +01:00
Jan Nidzwetzki
67ad085d18 Remove autovacuum setting from compressed chunks
In old TSDB versions, we disabled autovacuum for compressed chunks to
keep the statistics. This restriction was removed in #5118, but
no migration was performed to reset the custom autovacuum setting for
existing chunks.
2024-03-08 15:10:21 +01:00
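The added migration amounts to clearing the storage parameter on each affected chunk, roughly (chunk name is illustrative):

    ALTER TABLE _timescaledb_internal._hyper_1_1_chunk RESET (autovacuum_enabled);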
Jan Nidzwetzki
c0a3369c96 Fix CAgg migration with custom settings
The CAgg migration path contained two bugs. This PR fixes both. A typo
in the column type prevented 'timestamp with time zone' buckets from
being handled properly. In addition, a custom setting of the datestyle
could create errors during the parsing of the generated timestamp
values.

Fixes: #5359
2024-03-05 14:20:08 +01:00
Sven Klemm
c87be4ab84 Remove get_chunk_colstats and get_chunk_relstats
These two functions were used in the multinode context and are no longer
needed.
2024-03-03 23:14:02 +01:00
Jan Nidzwetzki
3ea1417440 Define meaning of 0 retries for jobs as no retries
So far, we have not differentiated between 0 and -1 retries of a job. -1 was defined as an infinite number of retries. The behavior of 0 was not well defined and also handled as an infinite number of retries. This commit defines 0 as no retries.

Fixes: #6691
2024-02-26 15:24:12 +01:00
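Under the new semantics, a job configured with zero retries fails permanently after its first failed run; a hedged sketch using `alter_job` (job id 1000 is hypothetical, and the `max_retries` parameter name is an assumption):

    SELECT alter_job(1000, max_retries => 0);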
Jan Nidzwetzki
fdf3aa3bfa Use NULL in CAgg bucket function catalog table
Historically, we have used an empty string for undefined values in the
catalog table continuous_aggs_bucket_function. Since #6624, the optional
arguments can be NULL. This patch cleans up the empty strings and
changes the logic to work with NULL values.
2024-02-23 20:58:32 +01:00
Sven Klemm
a058950256 2.14.2 post release
Add 2.14.2 to update tests and adjust version configuration
2024-02-20 13:02:10 +01:00
Sven Klemm
d0e07a404f Release 2.14.2
This release contains bug fixes since the 2.14.1 release.
We recommend that you upgrade at the next available opportunity.

**Bugfixes**
* #6655 Fix segfault in cagg_validate_query
* #6660 Fix refresh on empty CAgg with variable bucket
* #6670 Don't try to compress osm chunks

**Thanks**
* @kav23alex for reporting a segfault in cagg_validate_query
2024-02-20 07:39:51 +01:00
Sven Klemm
23b2665d42 Don't try to compress osm chunks
This patch filters out OSM chunks in the compression policy and
adds additional checks so we don't run compress_chunk on OSM
chunks.
2024-02-18 20:00:56 +01:00
Jan Nidzwetzki
b01c8e7377 Unify handling of CAgg bucket_origin
So far, bucket_origin was defined as a Timestamp but used as a
TimestampTz in many places. This commit changes this and unifies the
usage of the variable.
2024-02-16 18:28:21 +01:00
Jan Nidzwetzki
ab7a09e876 Make CAgg time_bucket catalog table more generic
The catalog table continuous_aggs_bucket_function is currently only used
for variable bucket sizes. Information about the fixed-size buckets is
stored in the table continuous_agg only. This causes some problems
(e.g., we have redundant fields for the bucket_size, fixed-size buckets
with offsets are not supported, ...).

This commit is the first in a series of commits that refactor the catalog
for the CAgg time_bucket function. The goals are:

* Remove the CAgg redundant attributes in the catalog
* Create an entry in continuous_aggs_bucket_function for all CAggs
  that use time_bucket

This first commit refactors the continuous_aggs_bucket_function table
and prepares it for more generic use. Not all attributes are used yet,
but this will change in follow-up PRs.
2024-02-16 15:39:49 +01:00
Fabrízio de Royes Mello
5a359ac660 Remove metadata when dropping chunk
Historically we have preserved chunk metadata because the old format of the
Continuous Aggregate has the `chunk_id` column in the materialization
hypertable, so in order not to leave chunk ids behind there we just
marked chunks as dropped when dropping them.

In #4269 we introduced a new Continuous Aggregate format that doesn't
store the `chunk_id` in the materialization hypertable anymore, so it's
safe to also remove the metadata when dropping a chunk, provided all
associated Continuous Aggregates are in the new format.

Also added a post-update SQL script to cleanup unnecessary dropped chunk
metadata in our catalog.

Closes #6570
2024-02-16 10:45:04 -03:00
Sven Klemm
8d8f158302 2.14.1 post release
Adjust update tests to include new version.
2024-02-15 06:15:59 +01:00
Sven Klemm
59f50f2daa Release 2.14.1
This release contains bug fixes since the 2.14.0 release.
We recommend that you upgrade at the next available opportunity.

**Features**
* #6630 Add views for per chunk compression settings

**Bugfixes**
* #6636 Fixes extension update of compressed hypertables with dropped columns
* #6637 Reset sequence numbers on non-rollup compression
* #6639 Disable default indexscan for compression
* #6651 Fix DecompressChunk path generation with per chunk settings

**Thanks**
* @anajavi for reporting an issue with extension update of compressed hypertables
2024-02-14 17:19:04 +01:00
Sven Klemm
87430168b5 Fix downgrade script to make backports easier
While the downgrade script doesn't combine multiple versions into a single
script, since we only create the script for the previous version, fixing
this will make backporting in the release branch easier.
2024-02-14 14:45:06 +01:00
Sven Klemm
fc0e41cc13 Fix DecompressChunk path generation with per chunk settings
Adjust DecompressChunk path generation to use the per chunk settings
and not the hypertable settings when building compression info.
This patch also fixes the missing chunk configuration generation
in the update script which was masked by this bug.
2024-02-14 14:18:04 +01:00
Sven Klemm
ea6d826c12 Add compression settings informational view
This patch adds two new views, hypertable_compression_settings and
chunk_compression_settings, to query the per-chunk compression
settings.
2024-02-13 07:33:37 +01:00
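Both views can be queried directly; a small sketch, assuming they live in the `timescaledb_information` schema and that `metrics` is a compressed hypertable:

    SELECT * FROM timescaledb_information.hypertable_compression_settings;
    SELECT * FROM timescaledb_information.chunk_compression_settings
    WHERE hypertable = 'metrics'::regclass;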
Sven Klemm
fa5c0a9b22 Fix update script for tables with dropped columns 2024-02-12 20:58:20 +01:00
Ante Kresic
ba3ccc46db Post-release fixes for 2.14.0
Bumping the previous version and adding tests for 2.14.0.
2024-02-12 09:32:40 +01:00
Jan Nidzwetzki
1765c82c50 Remove distributed CAgg leftovers
Version 2.14.0 removes the multi-node code. However, there were a few
leftovers for the handling of distributed CAggs. This commit cleans up
the CAgg code and removes the no longer needed functions:

invalidation_cagg_log_add_entry(integer,bigint,bigint);
invalidation_hyper_log_add_entry(integer,bigint,bigint);
materialization_invalidation_log_delete(integer);
invalidation_process_cagg_log(integer,integer,regtype,bigint,
     bigint,integer[],bigint[],bigint[]);
invalidation_process_cagg_log(integer,integer,regtype,bigint,
     bigint,integer[],bigint[],bigint[],text[]);
invalidation_process_hypertable_log(integer,integer,regtype,
     integer[],bigint[],bigint[]);
invalidation_process_hypertable_log(integer,integer,regtype,
     integer[],bigint[],bigint[],text[]);
hypertable_invalidation_log_delete(integer);
2024-02-08 16:02:19 +01:00
Ante Kresic
505b427a04 Release 2.14.0
This release contains performance improvements and bug fixes since
the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time.
New compression settings take effect on any new chunks that are compressed after the change.
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations to avoid decompressing a lot of tuples and causing storage issues (100k limit, configurable)
* Helper functions for determining compression settings

**For this release only**, you will need to restart the database before running `ALTER EXTENSION`

**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13,
Multi-node is no longer supported starting with TimescaleDB 2.14.

TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which, starting with TimescaleDB 2.14,
works on both uncompressed and partially compressed chunks.
The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress
old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter).

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables.
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of
CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
2024-02-08 13:57:06 +01:00
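As a sketch of the first feature above (changing compression settings on an already-compressed hypertable at any time), using the existing ALTER TABLE interface; the table and column names are assumptions:

    ALTER TABLE metrics
      SET (timescaledb.compress,
           timescaledb.compress_segmentby = 'device_id',
           timescaledb.compress_orderby = 'ts DESC');
    -- only chunks compressed after this point use the new settings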
Sven Klemm
101e4c57ef Add recompress optional argument to compress_chunk
This patch deprecates the recompress_chunk procedure as all that
functionality is covered by compress_chunk now. This patch also adds a
new optional boolean argument to compress_chunk to force applying
changed compression settings to existing compressed chunks.
2024-02-07 12:19:13 +01:00
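A hedged example of the new argument (chunk name is illustrative; the parameter name follows the release notes):

    -- re-apply the current compression settings to an already compressed chunk
    SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk', recompress => true);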
Fabrízio de Royes Mello
b093749b45 Properly set job failure on compression policies
In #4770 we changed the behavior of compression policies to continue
compressing chunks even if a failure happens in one of them.

The problem is that if a failure happens in one or all of the chunks, the
job is still marked as successful, which is wrong. So if an exception
occurs when compressing chunks, we now raise an exception at the end of
the procedure in order to mark the job as failed.
2024-02-01 10:59:47 -03:00
Nikhil Sontakke
2b8f98c616 Support approximate hypertable size
If a lot of chunks are involved then the current pl/pgsql function
to compute the size of each chunk via a nested loop is pretty slow.
Additionally, the current functionality makes a system call to get the
file size on disk for each chunk every time this function is called.
That again slows things down. We now have an approximate function which
is implemented in C to avoid the issues in the pl/pgsql function.
Additionally, this function also uses per backend caching using the
smgr layer to compute the approximate size cheaply. The PG cache
invalidation clears off the cached size for a chunk when DML happens
into it. That size cache is thus able to get the latest size in a
matter of minutes. Also, due to the backend caching, any long running
session will only fetch latest data for new or modified chunks and can
use the cached data (which is calculated afresh the first time around)
effectively for older chunks.
2024-02-01 13:25:41 +05:30
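Assuming the C implementation is exposed as `hypertable_approximate_size` (the function name here is an assumption), usage would look like:

    SELECT hypertable_approximate_size('metrics');  -- approximate size in bytes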
Sven Klemm
1502bad832 Change boolean default value for compress_chunk and decompress_chunk
This patch changes those functions to no longer error by default when
the chunk is not in the expected state; instead, a warning is raised.
This is in preparation for changing compress_chunk to forward to
the appropriate operation depending on the chunk state.
2024-01-31 20:55:55 +01:00
Nikhil Sontakke
c715d96aa4 Don't dump unnecessary extension tables
Logging and caching related tables from the timescaledb extension
should not be dumped using pg_dump. Our scripts specify a few such
unwanted tables. Apart from being unnecessary, the "job_errors" table had
some restricted permissions, causing additional problems in pg_dump.

We now don't include such tables for dumping.

Fixes #5449
2024-01-25 12:01:11 +05:30
Konstantina Skovola
929ad41fa7 Fix now() plantime constification with BETWEEN
Previously when using BETWEEN ... AND additional constraints in a
WHERE clause, the BETWEEN was not handled correctly because it was
wrapped in a BoolExpr node, which prevented plantime exclusion.
The flattening of such expressions happens in `eval_const_expressions`
which gets called after our constify_now code.
This commit fixes the handling of this case to allow chunk exclusion to
take place at planning time.
Also, make sure we use our mock timestamp in all places in tests.
Previously we were using a mix of current_timestamp_mock and now(),
which was returning unexpected/incorrect results.
2024-01-24 18:25:39 +02:00
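A query of the shape that now benefits from plan-time chunk exclusion (hypertable and column names are assumptions):

    SELECT * FROM metrics
    WHERE ts BETWEEN now() - interval '1 day' AND now()
      AND device_id = 42;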
Sven Klemm
0b23bab466 Include _timescaledb_catalog.metadata in dumps
This patch changes the dump configuration for
_timescaledb_catalog.metadata to include all entries. To allow loading
logical dumps with this configuration, an insert trigger is added that
turns uniqueness conflicts into updates to not block the restore.
2024-01-23 12:53:48 +01:00
Matvey Arye
e89bc24af2 Add functions for determining compression defaults
Add functions to help determine defaults for segment_by and order_by.
2024-01-22 08:10:23 -05:00
Sven Klemm
754f77e083 Remove chunks_in function
This function was used to propagate chunk exclusion decisions from
an access node to data nodes and is no longer needed with the removal
of multinode.
2024-01-22 09:18:26 +01:00
Sven Klemm
f57d584dd2 Make compression settings per chunk
This patch implements changes to the compressed hypertable to allow per
chunk configuration. To enable this the compressed hypertable can no
longer be in an inheritance tree as the schema of the compressed chunk
is determined by the compression settings. While this patch implements
all the underlying infrastructure changes, the restrictions for changing
compression settings remain intact and will be lifted in a followup patch.
2024-01-17 12:53:07 +01:00
Sven Klemm
75cc4f7d7b Improve multinode detection in update script
Previously we would only check for data nodes being defined or distributed
hypertables being present, which might not be true on data nodes, so we
now prevent updates on any installation that has dist_uuid defined, which
is also true on data nodes.
2024-01-12 11:18:35 +01:00
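The stricter check boils down to looking for a `dist_uuid` entry in the metadata catalog, roughly:

    SELECT count(*) > 0 AS is_multinode
    FROM _timescaledb_catalog.metadata
    WHERE key = 'dist_uuid';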
Mats Kindahl
662fcc1b1b Make extension state available through function
The extension state is not easily accessible in release builds, which
makes debugging issues with the loader very difficult. This commit
introduces a new schema `_timescaledb_debug` and makes the function
`ts_extension_get_state` available also in release builds as
`_timescaledb_debug.extension_state`.

See #1682
2024-01-11 10:52:35 +01:00
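The new debug function can be called directly in any build:

    SELECT _timescaledb_debug.extension_state();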