**Release 2.13.0**
This release contains performance improvements, an improved hypertable DDL API
and bug fixes since the 2.12.2 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Full PostgreSQL 16 support for all existing features
* Vectorized aggregation execution for sum()
* Track chunk creation time used in retention/compression policies

**Deprecation notice: Multi-node support**
TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node
support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it
[here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**PostgreSQL 13 deprecation announcement**
We will continue supporting PostgreSQL 13 until April 2024. Closer to that time, we will
announce the specific TimescaleDB version in which PostgreSQL 13 support will be dropped.

**Starting from TimescaleDB 2.13.0**
* No Amazon Machine Images (AMIs) are published. If you previously used the AMI, please
use another [installation method](https://docs.timescale.com/self-hosted/latest/install/).
* Continuous Aggregates are materialized only (non-realtime) by default (see the sketch below)
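
Queries on a newly created continuous aggregate now return only materialized data. A minimal sketch of opting back into realtime aggregation, assuming a hypothetical `conditions_summary` continuous aggregate:

```sql
-- New continuous aggregates are materialized-only as of 2.13.0.
-- Re-enable realtime aggregation (materialized data combined with
-- not-yet-materialized raw data at query time) per aggregate:
ALTER MATERIALIZED VIEW conditions_summary
  SET (timescaledb.materialized_only = false);
```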

**Features**
*  Add chunk-wise sorted paths for compressed chunks
*  Simplify hypertable DDL API (see the sketch after this list)
*  Reduce WAL activity by freezing compressed tuples immediately
*  Vectorized aggregation execution for sum()
*  Add metadata for chunk creation time
*  Make Continuous Aggregates materialized only (non-realtime) by default
*  Allow show_chunks/drop_chunks to filter by chunk creation time (see the sketch after this list)
*  Show batches/tuples decompressed during DML operations in EXPLAIN output
*  Keep track of catalog version
*  Use creation time in retention/compression policy
*  Add SQL function cagg_validate_query
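
Several of these features surface directly in SQL. A minimal sketch of how they fit together; the table and column names are hypothetical, and the parameter names follow the 2.13 release notes, so verify them against your installed version:

```sql
CREATE TABLE conditions (
    time   timestamptz NOT NULL,
    device int,
    value  float8
);

-- Simplified hypertable DDL: a dimension builder instead of
-- positional partitioning arguments.
SELECT create_hypertable('conditions', by_range('time'));

-- Select and drop chunks by their creation time (as opposed to the
-- time range of the data they contain).
SELECT show_chunks('conditions', created_before => now() - INTERVAL '7 days');
SELECT drop_chunks('conditions', created_before => now() - INTERVAL '7 days');

-- Retention policy driven by chunk creation time.
SELECT add_retention_policy('conditions',
                            drop_created_before => INTERVAL '30 days');

-- Once compression is enabled and chunks are compressed, sum() can use
-- the new vectorized aggregation path.
SELECT sum(value) FROM conditions;
```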

**Bugfixes**
*  Add GUC for setting background worker log level (see the sketch after this list)
*  Allow enabling compression on hypertable with unique expression index
*  Check if worker registration succeeded
*  Fix exception detail passing in compression_policy_execute
*  Fix missing bms_del_member result assignment
*  Fix negative bitmapset member not allowed in compression
*  Fix potential data loss when compressing a table with a partial index that matches compression order
*  Add support for startup chunk exclusion with aggs
*  Repair relacl on upgrade
*  Fix segfault when creating a cagg using a NULL width in time bucket function
*  Make timescaledb_functions.makeaclitem strict
*  Fix typmod and collation for segmentby columns
*  Fix tablespace with constraints
*  Enable segmentwise recompression in compression policy
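
The background worker log level is controlled through a GUC. A minimal sketch, assuming the setting is exposed as `timescaledb.bgw_log_level` (an assumption; check `pg_settings` on your build for the exact name):

```sql
-- Assumed GUC name; confirm with:
--   SELECT name FROM pg_settings WHERE name LIKE 'timescaledb.%';
ALTER SYSTEM SET timescaledb.bgw_log_level = 'DEBUG1';
SELECT pg_reload_conf();
```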

**Thanks**
* @fetchezar for reporting an issue with compression policy error messages
* @jflambert for reporting the background worker log level issue
* @torazem for reporting an issue with compression and large oids
* @fetchezar for reporting an issue in the compression policy
* @lyp-bobi for reporting an issue with tablespace with constraints
* @pdipesh02 for contributing to the implementation of the metadata for chunk creation time,
             the generalized hypertable API, and show_chunks/drop_chunks using chunk creation time
* @lkshminarayanan for all his work on PG16 support

**Multi-node Deprecation**

Multi-node support has been deprecated. TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node support in 2.13 is available for PostgreSQL 13, 14 and 15. This decision was not made lightly, and we want to provide a clear understanding of the reasoning behind this change and the path forward.

**Why we are ending multi-node support**

We began working on multi-node support in 2018 and released the first version in 2020 to address the growing demand for higher scalability in TimescaleDB deployments. Multi-node's distributed architecture allowed writes and reads to scale horizontally beyond what a single node could provide.

While we have added many improvements since the initial launch, the architecture of multi-node came with a number of inherent limitations and challenges that have limited its adoption. Regrettably, only about 1% of current TimescaleDB deployments use multi-node, and the complexity involved in maintaining the feature has become a significant obstacle. It is not an isolated feature that can be kept in the product with little effort: every new feature required extra development and testing to ensure it also worked for multi-node.

As we've evolved our single-node product and expanded our cloud offering to serve thousands of customers, we've identified more efficient solutions to meet the scalability needs of our users.

First, we have made, and will continue to make, big improvements in the write and read performance of single-node. We've scaled a single-node deployment to process 2 million inserts per second and have seen performance improvements of 10x for common queries. You can read a summary of the latest query performance improvements here.

Second, we are leveraging cloud technologies that have become very mature to provide higher scalability in a more accessible way. For example, our cloud offering uses object storage to deliver virtually infinite storage capacity at very low cost.

For those reasons, we've decided to focus our efforts on improving single-node and on leveraging cloud technologies to deliver high scalability, and as a result we've ended support for multi-node.

**What this means for you**

We understand that this change may raise questions, and we are committed to supporting you through the transition.

For current TimescaleDB multi-node users, please refer to our [migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/) for a step-by-step guide to transitioning to a single-node configuration.

Alternatively, you can continue to use multi-node up to version 2.13. However, please be aware that future versions will no longer include this functionality.

If you have any questions or feedback, you can share them in the #multi-node channel in our community Slack.