Time types, such as dates and timestamps, have limits that differ from
those of the underlying storage type. For instance, while a timestamp
is stored internally as an `int64`, its maximum supported time value is
not `INT64_MAX`. Instead, `INT64_MAX` represents `+Infinity`, and the
actual largest valid timestamp is close to `INT64_MAX` (but not
`INT64_MAX-1` either). The same applies to the minimum values.

Unfortunately, the time-handling code does not check for these
boundaries. In most cases, overflow checks (e.g., when bucketing) are
made against the storage type's integer limits instead of the
type-specific boundaries. In other cases, overflows simply raise errors
instead of clamping to the boundary values, even though clamping often
makes more sense.
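
As a rough illustration of these boundary semantics, the extreme
`int64` values act as infinity sentinels, so the largest finite time
value lies strictly inside the storage type's range. The constant and
function names below are hypothetical, not the actual ones in the
codebase:

```
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch: the extreme storage values are reserved as
 * infinity sentinels and are therefore not valid finite time values. */
#define TIME_PLUS_INFINITY  INT64_MAX  /* sentinel, not a real time value */
#define TIME_MINUS_INFINITY INT64_MIN  /* sentinel, not a real time value */

static inline bool
time_value_is_finite(int64_t t)
{
	return t != TIME_PLUS_INFINITY && t != TIME_MINUS_INFINITY;
}
```
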
Integer time types suffer from similar issues. To take one example,
simply inserting a valid `smallint` value close to the maximum into a
table with a `smallint` time column fails:
```
INSERT INTO smallint_table VALUES ('32765', 1, 2.0);
ERROR: value "32770" is out of range for type smallint
```
This happens because the code that adds dimensional constraints always
checks for overflow against `INT64_MAX` instead of the type-specific
max value. Therefore, it tries to create a chunk constraint that ends
at `32770`, which is outside the allowed range of `smallint`.
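
To make the failure mode concrete, here is a minimal sketch (not the
actual TimescaleDB code) of an overflow check that only guards the
`int64` storage type. Under such a check, a boundary like `32770` is
accepted even though the column type's maximum is `32767`:

```
#include <stdint.h>

/* Sketch of the faulty pattern: the computed bucket end is checked only
 * against the int64 storage limits, never against the column type's own
 * maximum (e.g., INT16_MAX for smallint). */
static int64_t
bucket_end_storage_checked_only(int64_t bucket_start, int64_t interval)
{
	if (bucket_start > INT64_MAX - interval)
		return INT64_MAX;	/* only int64 wraparound is caught */

	/* A result such as 32770 passes this check even though it lies
	 * outside the smallint range, producing an invalid constraint. */
	return bucket_start + interval;
}
```
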
To resolve these issues, several time-related utility functions have
been implemented that, for example, return type-specific range
boundaries and perform saturating addition and subtraction while
clamping to the supported boundaries.
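
A minimal sketch of such a saturating addition, clamping to
type-specific boundaries (the signature is illustrative, not the actual
utility function):

```
#include <stdint.h>

/* Saturating addition: clamp to the valid range of the time type rather
 * than overflowing or erroring out. The min/max arguments are the
 * type-specific boundaries discussed above. */
static int64_t
saturating_add(int64_t a, int64_t b, int64_t type_min, int64_t type_max)
{
	if (b > 0 && a > type_max - b)
		return type_max;	/* would exceed the type's max: clamp */
	if (b < 0 && a < type_min - b)
		return type_min;	/* would go below the type's min: clamp */
	return a + b;
}
```
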
Fixes #2292

The following functions have had permission checks
added or adjusted:
ts_chunk_index_clone
ts_chunk_index_replace
ts_hypertable_insert_blocker_trigger_add
ts_current_license_key
ts_calculate_chunk_interval
ts_chunk_adaptive_set
The following functions have been removed from the regular SQL install.
They are only installed and used in tests:
dimension_calculate_default_range_open
dimension_calculate_default_range_closed
We now use INT64_MAX and INT64_MIN as the max and min values for
dimension_slice ranges. If a dimension_slice has a range_start of
INT64_MIN or a range_end of INT64_MAX, we remove the corresponding
check constraint on the chunk, since it signifies that this end of the
range is infinite. Closed ranges now always have INT64_MIN as the
range_start of the first slice and INT64_MAX as the range_end of the
last slice. Also, points corresponding to INT64_MAX are always put in
the same slice as INT64_MAX-1 to avoid problems with the semantics that
a point belongs to a slice only if coordinate < range_end.
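
A sketch of the slice-membership rule under these semantics (the struct
and function names are illustrative, not the actual catalog API):

```
#include <stdint.h>
#include <stdbool.h>

/* A slice covers the half-open interval [range_start, range_end), with
 * INT64_MIN/INT64_MAX acting as -/+Infinity. A point equal to INT64_MAX
 * is folded into INT64_MAX-1 so it still lands in the last slice, whose
 * range_end is INT64_MAX. */
typedef struct SliceRange
{
	int64_t range_start;	/* inclusive; INT64_MIN means -Infinity */
	int64_t range_end;	/* exclusive; INT64_MAX means +Infinity */
} SliceRange;

static bool
slice_contains_point(const SliceRange *slice, int64_t point)
{
	if (point == INT64_MAX)
		point = INT64_MAX - 1;

	return point >= slice->range_start && point < slice->range_end;
}
```
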
This commit moves much of the test setup logic to runner.sh. It also
passes the right commands to the regression infrastructure to create
the appropriate users and run tests as a regular user.

When new chunks are created, the calculated chunk hypercube might
collide or not align with existing chunks when partitioning has
changed in one or more dimensions. In such cases, the chunk should be
cut to fit the alignment criteria and any collisions should be
resolved. Unfortunately, alignment and collision detection were not
handled properly.
This refactoring adds proper axis-aligned bounding box collision
detection generalized to N dimensions. It also correctly handles
dimension alignment.
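
A sketch of an N-dimensional axis-aligned bounding box overlap test
(the types are illustrative, not the actual hypercube structures):

```
#include <stdint.h>
#include <stdbool.h>

/* Two axis-aligned boxes collide iff their half-open intervals
 * [start, end) overlap in every dimension. */
typedef struct Box
{
	int	 num_dimensions;
	int64_t *starts;	/* inclusive lower bound per dimension */
	int64_t *ends;		/* exclusive upper bound per dimension */
} Box;

static bool
boxes_collide(const Box *a, const Box *b)
{
	for (int i = 0; i < a->num_dimensions; i++)
	{
		/* No overlap in one dimension means no collision at all. */
		if (a->ends[i] <= b->starts[i] || b->ends[i] <= a->starts[i])
			return false;
	}
	return true;
}
```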