Fix chunk-related deadlocks.

This patch fixes two deadlock cases.

The first case occurred as a result of taking partition and chunk
locks in inconsistent orders. When creating the first chunk C1
in a table, concurrent INSERT workers would race to create
that chunk. The result would be that the transactions queue up on
the partition lock P, effectively serializing these transactions.
This would cause these concurrent transactions to insert
at very different offsets in time, one at a time. At some point,
some n'th transaction Tn queued up on P would acquire
that lock as the preceding inserters T1-(n-1) finished their inserts
and moved on to their next batches. When Tn finally held P, one of
the preceding workers would start a new transaction, find that it
needed to close C1, and grab a lock on C1 and then request P.
However, it would block, since Tn already held P. Tn would also
decide it needed to close C1 and thus try to grab a lock on C1,
but would block on the worker holding it, causing a deadlock.
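In lock terms, a sketch of the cycle (the session labels and row IDs
are illustrative; in the pre-patch code the locks come from FOR UPDATE
on the partition row and on the chunk row being closed):

    -- Tn:
    SELECT * FROM _iobeamdb_catalog.partition WHERE id = 1 FOR UPDATE; -- holds P
    -- T1 (new transaction, wants to close C1):
    SELECT * FROM _iobeamdb_catalog.chunk WHERE id = 1 FOR UPDATE;     -- holds C1
    SELECT * FROM _iobeamdb_catalog.partition WHERE id = 1 FOR UPDATE; -- blocks: Tn holds P
    -- Tn (also wants to close C1):
    SELECT * FROM _iobeamdb_catalog.chunk WHERE id = 1 FOR UPDATE;     -- blocks: T1 holds C1 => deadlock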

The second case can occur on multi-partition hypertables. With
multiple partitions, there is more than one open-ended chunk at a
time (one for each partition). This leads to a deadlock when two
processes try to close (and thus lock) these chunks in different
orders. For instance, process P1 locks chunk C1 and then C2, while
process P2 locks C2 and then C1.
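The same cycle as an illustrative interleaving (chunk IDs hypothetical):

    -- P1:
    SELECT * FROM _iobeamdb_catalog.chunk WHERE id = 1 FOR UPDATE; -- locks C1
    -- P2:
    SELECT * FROM _iobeamdb_catalog.chunk WHERE id = 2 FOR UPDATE; -- locks C2
    -- P1:
    SELECT * FROM _iobeamdb_catalog.chunk WHERE id = 2 FOR UPDATE; -- blocks on P2
    -- P2:
    SELECT * FROM _iobeamdb_catalog.chunk WHERE id = 1 FOR UPDATE; -- blocks on P1 => deadlock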

The fix for the first case is to remove the partition lock
altogether. As it turns out, this lock is not needed.
Instead, transactions can race to create new chunks, causing
insert conflicts. A conflict in creating a new chunk can safely be
ignored, which also avoids taking unnecessary locks. Removing the
partition lock further avoids the transaction serialization that
happens around this lock, which is especially bad for long-running
transactions (e.g., big INSERT batches).
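The heart of this fix is the conflict-tolerant chunk creation in
create_chunk_unlocked (mirroring the hunk below; the unique
constraint on (partition_id, start_time) is assumed to already exist
in the catalog):

    INSERT INTO _iobeamdb_catalog.chunk (partition_id, start_time, end_time)
    VALUES (part_id, table_start, table_end)
    ON CONFLICT (partition_id, start_time) DO NOTHING;
    -- A conflict just means another transaction created the chunk first;
    -- the subsequent recheck via get_chunk() picks up the winner's row.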

The fix for the second, multi-partition deadlock case is to always
close chunks in chunk ID order. This requires deferring chunk
closing to the end of a transaction, once the transaction knows all
the chunks it needs to close. This also has the added benefit of
reducing the time a transaction holds exclusive locks on chunks,
potentially improving insert performance.
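In outline, the insert path now collects every chunk it touches in a
temp table and closes them in one consistent ID order at the end of
the insert loop (this mirrors the insert-trigger hunk below;
insert_chunks is the temp table the patch introduces):

    FOR chunk_id IN
        SELECT c.id FROM insert_chunks cl
        INNER JOIN _iobeamdb_catalog.chunk c ON cl.id = c.id
        WHERE c.end_time IS NULL ORDER BY cl.id DESC
    LOOP
        PERFORM _iobeamdb_internal.close_chunk_if_needed(chunk_id);
    END LOOP;
    -- All transactions close in the same (descending ID) order, so no two
    -- closers can ever hold each other's chunk locks in opposite order.
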
commit e3fabf993a (parent 5e44e996f4)
Erik Nordström, 2017-02-16 10:08:16 +01:00 (committed by Erik Nordström)
9 changed files with 62 additions and 56 deletions


@@ -60,7 +60,7 @@ $BODY$;
 --closes the given chunk if it is over the size limit set for the hypertable
 --it belongs to.
 CREATE OR REPLACE FUNCTION _iobeamdb_internal.close_chunk_if_needed(
-    chunk_row _iobeamdb_catalog.chunk
+    chunk_id INTEGER
 )
 RETURNS boolean LANGUAGE PLPGSQL VOLATILE AS
 $BODY$
@@ -68,17 +68,19 @@ DECLARE
     chunk_size BIGINT;
     chunk_max_size BIGINT;
 BEGIN
-    chunk_size := _iobeamdb_data_api.get_chunk_size(chunk_row.id);
-    chunk_max_size := _iobeamdb_internal.get_chunk_max_size(chunk_row.id);
+    chunk_size := _iobeamdb_data_api.get_chunk_size(chunk_id);
+    chunk_max_size := _iobeamdb_internal.get_chunk_max_size(chunk_id);
 
-    IF chunk_row.end_time IS NOT NULL OR (NOT chunk_size >= chunk_max_size) THEN
-        RETURN FALSE;
+    IF chunk_size >= chunk_max_size THEN
+        --This should use the non-transactional rpc because this needs to
+        --commit before we can take a lock for writing on the closed chunk.
+        --That means this operation is not transactional with the insert
+        --and will not be rolled back.
+        PERFORM _iobeamdb_meta_api.close_chunk_end_immediate(chunk_id);
+        RETURN TRUE;
     END IF;
 
-    --This should use the non-transactional rpc because this needs to commit before we can take a lock
-    --for writing on the closed chunk. That means this operation is not transactional with the insert and will not be rolled back.
-    PERFORM _iobeamdb_meta_api.close_chunk_end_immediate(chunk_row.id);
-    return TRUE;
+    RETURN FALSE;
 END
 $BODY$;


@@ -93,6 +93,7 @@ DECLARE
     point_record_query_sql TEXT;
     point_record RECORD;
     chunk_row _iobeamdb_catalog.chunk;
+    chunk_id INT;
     crn_record RECORD;
     hypertable_row RECORD;
     partition_constraint_where_clause TEXT = '';
@@ -140,15 +141,19 @@ BEGIN
             USING ERRCODE = 'IO501';
     END IF;
 
+    --Create a temp table to collect all the chunks we insert into. We might
+    --need to close the chunks at the end of the transaction.
+    CREATE TEMP TABLE IF NOT EXISTS insert_chunks(LIKE _iobeamdb_catalog.chunk) ON COMMIT DROP;
+    --We need to truncate the table if it already existed due to calling this
+    --function twice in a single transaction.
+    TRUNCATE TABLE insert_chunks;
+
     WHILE point_record.time IS NOT NULL LOOP
-        --Get the chunk we should insert into
-        chunk_row := get_or_create_chunk(point_record.partition_id, point_record.time);
-
-        --Check if the chunk should be closed (must be done without lock on chunk).
-        PERFORM _iobeamdb_internal.close_chunk_if_needed(chunk_row);
-
-        --Get a chunk with lock
-        chunk_row := get_or_create_chunk(point_record.partition_id, point_record.time, TRUE);
+        --Get a chunk with SHARE lock
+        INSERT INTO insert_chunks
+        SELECT * FROM get_or_create_chunk(point_record.partition_id, point_record.time, TRUE)
+        RETURNING * INTO chunk_row;
 
         IF point_record.partitioning_column IS NOT NULL THEN
             --if we are inserting across more than one partition,
@@ -172,17 +177,16 @@
         END IF;
 
         --Do insert on all chunk replicas
-
        SELECT string_agg(insert_stmt, ',')
        INTO insert_sql
        FROM (
            SELECT format('i_%s AS (INSERT INTO %I.%I (%s) SELECT * FROM selected)',
                          row_number() OVER(), crn.schema_name, crn.table_name, column_list) insert_stmt
            FROM _iobeamdb_catalog.chunk_replica_node crn
            WHERE (crn.chunk_id = chunk_row.id)
        ) AS parts;
 
        EXECUTE format(
            $$
            WITH selected AS
            (
@@ -206,6 +210,16 @@
             USING ERRCODE = 'IO501';
        END IF;
    END LOOP;
+
+    --Loop through all open chunks that were inserted into, closing
+    --if needed. Do it in ID order to avoid deadlocks.
+    FOR chunk_id IN
+        SELECT c.id FROM insert_chunks cl
+        INNER JOIN _iobeamdb_catalog.chunk c ON cl.id = c.id
+        WHERE c.end_time IS NULL ORDER BY cl.id DESC
+    LOOP
+        PERFORM _iobeamdb_internal.close_chunk_if_needed(chunk_id);
+    END LOOP;
 END
 $BODY$;


@@ -56,10 +56,10 @@
 END
 $BODY$;
 
---creates chunk. Must be called after aquiring a lock on partition.
+--creates chunk.
 CREATE OR REPLACE FUNCTION _iobeamdb_meta.create_chunk_unlocked(
-    partition_id INT,
+    part_id INT,
     time_point BIGINT
 )
 RETURNS VOID LANGUAGE PLPGSQL VOLATILE AS
 $BODY$
@@ -69,10 +69,14 @@
 BEGIN
     SELECT *
     INTO table_start, table_end
-    FROM _iobeamdb_meta.calculate_new_chunk_times(partition_id, time_point);
+    FROM _iobeamdb_meta.calculate_new_chunk_times(part_id, time_point);
 
+    --INSERT on chunk implies SHARE lock on partition row due to foreign key.
+    --If the insert conflicts, it means another transaction created the chunk
+    --before us, and we can safely ignore the error.
     INSERT INTO _iobeamdb_catalog.chunk (partition_id, start_time, end_time)
-    VALUES (partition_id, table_start, table_end);
+    VALUES (part_id, table_start, table_end)
+    ON CONFLICT (partition_id, start_time) DO NOTHING;
 END
 $BODY$;
@@ -87,13 +91,6 @@
     chunk_row _iobeamdb_catalog.chunk;
     partition_row _iobeamdb_catalog.partition;
 BEGIN
-    --get lock
-    SELECT *
-    INTO partition_row
-    FROM _iobeamdb_catalog.partition
-    WHERE id = partition_id
-    FOR UPDATE;
-
     --recheck:
     chunk_row := _iobeamdb_internal.get_chunk(partition_id, time_point);
@@ -169,14 +166,7 @@
         RETURN;
     END IF;
 
-    --get partition lock
-    SELECT *
-    INTO partition_row
-    FROM _iobeamdb_catalog.partition
-    WHERE id = chunk_row.partition_id
-    FOR UPDATE;
-
-    --PHASE 1: lock chunk row on all rows (prevents concurrent chunk insert)
+    --PHASE 1: lock chunk row on all nodes (prevents concurrent chunk insert)
     FOR node_row IN
         SELECT *
         FROM _iobeamdb_catalog.node n


@@ -92,13 +92,13 @@ WHERE h.schema_name = 'public' AND (h.table_name = 'drop_chunk_test1' OR h.table
   3 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_3_data | 3 | 3
   4 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_4_data | 4 | 4
   5 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_5_data | 5 | 5
-  6 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_6_data | 6 |
+  6 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_6_data | 6 | 6
   7 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_7_data |   | 1
   8 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_8_data | 2 | 2
   9 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_9_data | 3 | 3
  10 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_10_data | 4 | 4
  11 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_11_data | 5 | 5
- 12 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_12_data | 6 |
+ 12 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_12_data | 6 | 6
 (12 rows)
 
 SELECT * FROM _iobeamdb_catalog.chunk_replica_node;
@@ -160,12 +160,12 @@ WHERE h.schema_name = 'public' AND (h.table_name = 'drop_chunk_test1' OR h.table
   3 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_3_data | 3 | 3
   4 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_4_data | 4 | 4
   5 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_5_data | 5 | 5
-  6 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_6_data | 6 |
+  6 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_6_data | 6 | 6
   8 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_8_data | 2 | 2
   9 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_9_data | 3 | 3
  10 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_10_data | 4 | 4
  11 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_11_data | 5 | 5
- 12 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_12_data | 6 |
+ 12 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_12_data | 6 | 6
 (10 rows)
 
 SELECT * FROM _iobeamdb_catalog.chunk_replica_node;
@@ -222,12 +222,12 @@ WHERE h.schema_name = 'public' AND (h.table_name = 'drop_chunk_test1' OR h.table
   3 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_3_data | 3 | 3
   4 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_4_data | 4 | 4
   5 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_5_data | 5 | 5
-  6 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_6_data | 6 |
+  6 | 1 | 1 | _iobeamdb_internal | _hyper_1_1_0_6_data | 6 | 6
   8 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_8_data | 2 | 2
   9 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_9_data | 3 | 3
  10 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_10_data | 4 | 4
  11 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_11_data | 5 | 5
- 12 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_12_data | 6 |
+ 12 | 2 | 2 | _iobeamdb_internal | _hyper_2_2_0_12_data | 6 | 6
 (9 rows)
 
 SELECT * FROM _iobeamdb_catalog.chunk_replica_node;


@@ -101,7 +101,7 @@ Inherits: _iobeamdb_internal._hyper_2_2_0_partition
  device_id | text | | extended | |
 Check constraints:
     "partition" CHECK (_iobeamdb_catalog.get_partition_for_key(device_id, 32768) >= '0'::smallint AND _iobeamdb_catalog.get_partition_for_key(device_id, 32768) <= '32767'::smallint)
-    "time_range" CHECK ("time" >= '3'::bigint) NOT VALID
+    "time_range" CHECK ("time" >= '3'::bigint AND "time" <= '3'::bigint) NOT VALID
 Inherits: _iobeamdb_internal._hyper_2_2_0_partition
 
 Table "_iobeamdb_internal._hyper_2_2_0_partition"


@@ -88,7 +88,7 @@ SELECT * FROM _iobeamdb_catalog.chunk c
 ----+--------------+------------+----------+----------+----------------------+---------------+--------------------+---------------------+----+--------------+---------------+------------+--------------------+------------------------+----+-------------+--------------------+------------------------+-------------------------+--------------------+-----------------+--------------------+-----------+------------------+------------------+------------+------------------
   4 | 3 |   | 1 | 4 | 3 | single | _iobeamdb_internal | _hyper_2_3_0_4_data | 3 | 3 | 2 | 0 | _iobeamdb_internal | _hyper_2_3_0_partition | 2 | public | chunk_closing_test | _iobeamdb_internal | _hyper_2 | _iobeamdb_internal | _hyper_2_root | 1 | STICKY | time | bigint | single | 10000
   5 | 3 | 2 | 2 | 5 | 3 | single | _iobeamdb_internal | _hyper_2_3_0_5_data | 3 | 3 | 2 | 0 | _iobeamdb_internal | _hyper_2_3_0_partition | 2 | public | chunk_closing_test | _iobeamdb_internal | _hyper_2 | _iobeamdb_internal | _hyper_2_root | 1 | STICKY | time | bigint | single | 10000
-  6 | 3 | 3 |   | 6 | 3 | single | _iobeamdb_internal | _hyper_2_3_0_6_data | 3 | 3 | 2 | 0 | _iobeamdb_internal | _hyper_2_3_0_partition | 2 | public | chunk_closing_test | _iobeamdb_internal | _hyper_2 | _iobeamdb_internal | _hyper_2_root | 1 | STICKY | time | bigint | single | 10000
+  6 | 3 | 3 | 3 | 6 | 3 | single | _iobeamdb_internal | _hyper_2_3_0_6_data | 3 | 3 | 2 | 0 | _iobeamdb_internal | _hyper_2_3_0_partition | 2 | public | chunk_closing_test | _iobeamdb_internal | _hyper_2 | _iobeamdb_internal | _hyper_2_root | 1 | STICKY | time | bigint | single | 10000
 (3 rows)
 
 \d+ "_iobeamdb_internal".*
@@ -349,7 +349,7 @@ Inherits: _iobeamdb_internal._hyper_2_3_0_partition
  device_id | text | | extended | |
 Check constraints:
     "partition" CHECK (_iobeamdb_catalog.get_partition_for_key(device_id, 32768) >= '0'::smallint AND _iobeamdb_catalog.get_partition_for_key(device_id, 32768) <= '32767'::smallint)
-    "time_range" CHECK ("time" >= '3'::bigint) NOT VALID
+    "time_range" CHECK ("time" >= '3'::bigint AND "time" <= '3'::bigint) NOT VALID
 Inherits: _iobeamdb_internal._hyper_2_3_0_partition
 
 Table "_iobeamdb_internal._hyper_2_3_0_partition"
@@ -398,7 +398,7 @@ SELECT * FROM _iobeamdb_catalog.chunk;
   3 | 2 |   |
   4 | 3 |   | 1
   5 | 3 | 2 | 2
-  6 | 3 | 3 |
+  6 | 3 | 3 | 3
 (6 rows)
 
 SELECT * FROM _iobeamdb_catalog.chunk_replica_node;


@@ -89,7 +89,7 @@ SELECT * FROM _iobeamdb_catalog.chunk c
 ----+--------------+------------+----------+----------+----------------------+---------------+--------------------+---------------------+----+--------------+---------------+------------+--------------------+------------------------+----+-------------+--------------------+------------------------+-------------------------+--------------------+-----------------+--------------------+-----------+------------------+------------------+------------+------------------
   3 | 2 |   | 1 | 3 | 2 | single | _iobeamdb_internal | _hyper_2_2_0_3_data | 2 | 2 | 2 | 0 | _iobeamdb_internal | _hyper_2_2_0_partition | 2 | public | chunk_closing_test | _iobeamdb_internal | _hyper_2 | _iobeamdb_internal | _hyper_2_root | 1 | STICKY | time | bigint | single | 10000
   4 | 2 | 2 | 2 | 4 | 2 | single | _iobeamdb_internal | _hyper_2_2_0_4_data | 2 | 2 | 2 | 0 | _iobeamdb_internal | _hyper_2_2_0_partition | 2 | public | chunk_closing_test | _iobeamdb_internal | _hyper_2 | _iobeamdb_internal | _hyper_2_root | 1 | STICKY | time | bigint | single | 10000
-  5 | 2 | 3 |   | 5 | 2 | single | _iobeamdb_internal | _hyper_2_2_0_5_data | 2 | 2 | 2 | 0 | _iobeamdb_internal | _hyper_2_2_0_partition | 2 | public | chunk_closing_test | _iobeamdb_internal | _hyper_2 | _iobeamdb_internal | _hyper_2_root | 1 | STICKY | time | bigint | single | 10000
+  5 | 2 | 3 | 3 | 5 | 2 | single | _iobeamdb_internal | _hyper_2_2_0_5_data | 2 | 2 | 2 | 0 | _iobeamdb_internal | _hyper_2_2_0_partition | 2 | public | chunk_closing_test | _iobeamdb_internal | _hyper_2 | _iobeamdb_internal | _hyper_2_root | 1 | STICKY | time | bigint | single | 10000
 (3 rows)
 
 \c single


@@ -19,7 +19,7 @@ SELECT setup_single_node(hostname => 'fakehost'); -- fakehost makes sure there i
 \set ECHO ALL
 \c single
 \set ON_ERROR_STOP 0
-SET client_min_messages = WARNING;
+SET client_min_messages = ERROR;
 drop tablespace if exists tspace1;
 SET client_min_messages = NOTICE;
 \set VERBOSITY verbose


@@ -9,7 +9,7 @@
 \set ON_ERROR_STOP 0
-SET client_min_messages = WARNING;
+SET client_min_messages = ERROR;
 drop tablespace if exists tspace1;
 SET client_min_messages = NOTICE;