Writing a shell script correctly can be hard even for a skilled
programmer. shellcheck is a static analysis tool that helps catch
common errors in shell scripts. We now have 36 executable scripts in
our repository, for which shellcheck reports 126 errors (calculated
like `find . -type f -executable -exec bash -c '[ "$(file --brief --mime-type "$1")" == "text/x-shellscript" ]' sh {} \; -exec shellcheck -f gcc --exclude=SC2086 {} \; | cut -d: -f1 | sort | uniq | wc -l`).
This commit fixes these warnings and adds a GitHub Actions workflow
that runs shellcheck on all the executable shell scripts in the
repository. The warning SC2086 ("Double quote to prevent globbing and
word splitting") is disabled globally because it rarely has practical
consequences, sometimes produces false positives, and is too
widespread, since quoting is easy to forget.
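As a rough sketch (the actual workflow file may differ), the check the
workflow performs amounts to running shellcheck on every executable
file whose MIME type identifies it as a shell script:

```bash
#!/usr/bin/env bash
# Sketch only: find executable files that `file` classifies as shell
# scripts and run shellcheck on them, with SC2086 excluded globally.
find . -type f -executable \
    -exec bash -c '[ "$(file --brief --mime-type "$1")" = "text/x-shellscript" ]' sh {} \; \
    -exec shellcheck --format=gcc --exclude=SC2086 {} \;
```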
The post-update script already handled preserving initprivs for newly
added catalog tables and views. However, newly added catalog sequences
need separate handling, otherwise the update tests start failing. We
also now grant privileges on all future sequences in the update tests.
In passing, default PG_VERSION in the update tests to 12, since we no
longer support PG11.
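A minimal sketch of the future-sequence grant (the role, schema, and
database names are placeholders, not the ones used by the tests):

```bash
# Placeholder names throughout -- this only illustrates the idea of
# granting privileges on sequences that will be created later, so newly
# added catalog sequences don't show up as a privilege difference.
psql -X -d "${TEST_DB:-postgres}" <<'EOF'
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT, USAGE ON SEQUENCES TO test_role;
EOF
```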
To implement the repair tests, changes are made to the
`constraint_check` table to simulate a broken dependency, which
requires dropping the constraints on that table. As a result, the
repair tests run without constraints, and a bug in the update tests
could potentially go uncaught.
This commit fixes this by factoring the repair tests out of the update
tests and running them as a separate pass. The constraints are
therefore no longer dropped in the update tests, and bugs there will be
caught.
In addition, some bash functions are factored out into a separate file
to avoid duplication.
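For illustration (the file name is hypothetical, not the repository's
actual layout), the pattern is simply to source the shared file from
each test script:

```bash
# Hypothetical file name: shared helpers live in one place and each
# update/repair test script sources them instead of redefining them.
SCRIPT_DIR=$(dirname "$0")
# shellcheck source=/dev/null
source "${SCRIPT_DIR}/test_functions.sh"
```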
The commit fixes two bugs in the repair scripts that could
prevent an update in rare circumstances.
For the 1.7.1--1.7.2 repair script: if there were several missing
dimension slices in different hypertables with the same column name,
the repair script could not determine which constraint had which type
and generated an error.
For the 2.0.0-rc1--2.0.0-rc2 repair script: if a partition constraint
was broken, the script generated an error rather than repairing the
dimension slices, because BIGINT_MIN was cast to a double precision
float and then back to bigint, causing an overflow error.
This commit also creates an update repair test that breaks a few tables
for pre-2.0 versions to ensure that the repair script actually fixes
them. The integrity check for the update tests already contains a check
that dimension slices are valid, so there is no need to add a test for
that.
This commit adds an extra dimension to the workflow to test updates
with repair and runs it separately. It also changes the update test
scripts to run without repair tests by default and adds the option
`-r` for running the repair tests in addition to the normal tests.
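A sketch of the flag handling (the variable name and usage message are
illustrative, not the actual script):

```bash
# Repair tests are off by default and enabled with -r.
RUN_REPAIR_TESTS=false
while getopts "r" opt; do
    case $opt in
        r) RUN_REPAIR_TESTS=true ;;
        *) echo "usage: $0 [-r]" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
```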
Fixes #2824
As part of the 2.0 continuous aggregate changes, we are removing the
continuous_aggs_completed_threshold table. However, this may result
in currently running aggregates being considered complete even if
their completed threshold hadn't reached the invalidation threshold.
This change fixes this by adding an entry to the invalidation log
for any such aggregates.
Fixes #2314
This patch removes code support for PG9.6 and PG10. In addition to
removing the PG96 and PG10 macros, the following changes are done:
- remove HAVE_INT64_TIMESTAMP, since it is always true on PG10+
- remove PG_VERSION_SUPPORTS_MULTINODE
Remove the EXIT trap handler because it occasionally captured the exit
code of something that was not an update test, resulting in an error
state even though all update tests were successful.
This fixes a number of issues with the update test script that
prevented it from displaying the log of any failing tests. In
particular, the script needs to be run with `set +e` to ensure a test
error doesn't immediately cause an exit without recording the failed
log.
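A minimal sketch of the pattern (the function and log file names are
hypothetical):

```bash
set +e                                 # don't abort on the first failure
FAILED_TESTS=()
run_update_test "$version"             # hypothetical per-version test
status=$?
if [ "$status" -ne 0 ]; then
    FAILED_TESTS+=("$version")         # collect failures to report later
    cat "update_test_${version}.log"   # log written by the test run
fi
```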
This change makes the update tests from each prior version of
TimescaleDB run in parallel instead of serially.
This speeds up testing significantly.
Previously we only checked that we could upgrade from 0.1.0 to the
latest version, but this missed some issues when upgrading from
intermediate versions to the latest. Now we check that all released
versions can successfully update.
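Roughly, the parallel pattern looks like this (the version list and
function name are placeholders):

```bash
# Launch one update test per released version in the background,
# logging each one separately, then wait for all of them to finish.
for version in $RELEASED_VERSIONS; do
    run_update_test "$version" > "update_test_${version}.log" 2>&1 &
done
wait   # block until every background test has completed
```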
Previously, for timestamps without time zone, the range_end and
range_start were defined in UTC, but the constraints on the table were
written as the local time at the time of chunk creation. This does not
work well
if timezones change over the life of the hypertable.
This change removes the dependency on local time for all timestamp
partitioning. Namely, the range_start and range_end remain as UTC
but the constraints are now always written in UTC too. Since old
constraints correctly describe the data currently in the chunks, the
update script for this change adjusts range_start and range_end
rather than the constraints.
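To see why local-time constraints are fragile, note that the same
timestamptz instant renders differently depending on the session's
TimeZone setting (purely illustrative psql session):

```bash
# The same instant printed under two TimeZone settings: constraints
# written in local time depend on whatever setting was in effect at
# chunk creation, whereas UTC output is stable.
psql -X -c "SET TimeZone = 'UTC'; SELECT TIMESTAMPTZ '2017-01-01 00:00:00+00';"
psql -X -c "SET TimeZone = 'America/New_York'; SELECT TIMESTAMPTZ '2017-01-01 00:00:00+00';"
```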
Fixes #300.
The extension now works with PostgreSQL 10, while
retaining compatibility with version 9.6.
PostgreSQL 10 has numerous internal changes to functions and
APIs, which necessitate various glue code and compatibility
wrappers to seamlessly retain backwards compatibility with older
versions.
Test output might also differ between versions. In particular,
the psql client generates version-specific output with `\d`, and
EXPLAIN output might differ due to new query optimizations. The test
suite has been modified as follows to handle these issues. First,
tests now use version-independent functions to query system
catalogs instead of using `\d`. Second, changes have been made to
the test suite to be able to verify some test outputs against
version-dependent reference files.
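For example (the table name and exact query are illustrative, not the
actual test helpers), column information can be read directly from the
system catalogs instead of relying on `\d` output, whose format changes
between PostgreSQL versions:

```bash
# Query pg_catalog directly; 'my_table' is a placeholder.
psql -X -c "
    SELECT a.attname, format_type(a.atttypid, a.atttypmod)
    FROM pg_attribute a
    WHERE a.attrelid = 'my_table'::regclass
      AND a.attnum > 0
      AND NOT a.attisdropped
    ORDER BY a.attnum;"
```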
Moving the build system to CMake allows easy cross-platform
compiles, dependency checks, and more. In particular, CMake
allows us to easily build on Windows, and Visual Studio now
has native CMake support.
This is part of the ongoing effort to simplify the metadata tables and
remove any triggers on them that cause side effects.
This change includes the following:
- Remove the on_change_hypertable() trigger on the hypertable catalog
table.
- Remove the TRUNCATE blocking triggers on all metadata tables. If
we think such blocking is important, we should do this in an
event trigger or the processUtility hook.
- Put all SQL files in a single load_order.txt instead of splitting
across three distinct files. Now all SQL files are included in
update scripts as well for simplicity and consistency.
- As a result of removing triggers and related functions, the
setup_main() and restore_timescaledb() functions are no longer
needed. This also further simplifies the database restore process
as calling restore_timescaledb() is no longer needed (or possible).
- Refactor create_hypertable_row() to do more validation before
allocating a new hypertable ID. This avoids incrementing the serial
ID unnecessarily in case some validations fail.
The update test script compares a TimescaleDB instance that has been
updated from an older version to the latest version against a cleanly
installed instance of the latest version. It does this by inserting the
same data on both and comparing the data and metadata tables. This
comparison breaks if the new version has changed some behavior (e.g.,
chunk indexes might use a different naming scheme), because the
updated version will retain the metadata related to the old behavior
while the latest version will create metadata according to the new
behavior.
To fix this issue, the update script now performs a database backup on
the old version and restores it on the cleanly installed version of
TimescaleDB. This retains the metadata in accordance with the old
behavior, although the new version will create new metadata according
to the new behavior. This has the further benefit of testing the
restore process during upgrades, which is probably the way people will
move to a new version of TimescaleDB if they do not update the
extension with the data in place.
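The backup/restore step is roughly the following (database names and
flags are placeholders, not the actual script):

```bash
# Dump the instance running the old version and restore it into the
# cleanly installed latest version before comparing the two.
pg_dump -Fc -f /tmp/old_version.dump "$OLD_VERSION_DB"
pg_restore -d "$CLEAN_INSTALL_DB" /tmp/old_version.dump
```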
To capture errors related to broken update scripts, the update test
now includes an integrity check of some metadata tables.
This PR adds support for primary-key, foreign-key, unique, and
exclusion constraints. CHECK and NOT NULL constraints were already
supported. Foreign-key constraints where a hypertable references a
plain table are now supported (while the reverse, where a plain table
references a hypertable, is still not).
Also included are some update infrastructure changes. The main
difference is that sql/updates/ scripts are now split into pre- and
post- files. This is needed because table updates must happen before
functions are reloaded (some functions check for access to the right
schema at definition time), while trigger updates must happen after,
since triggers must refer to valid functions at definition time.
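As an illustration of the ordering (file names and layout are
hypothetical, not the repository's actual naming), an update script
would be assembled roughly like this:

```bash
# Table changes first, then the reloaded function definitions, then the
# post- file with trigger changes that reference those functions.
cat "sql/updates/pre-${FROM}--${TO}.sql" \
    "sql/functions.sql" \
    "sql/updates/post-${FROM}--${TO}.sql" \
    > "timescaledb--${FROM}--${TO}.sql"
```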
SQL code is now split into setup, functions, and init files to
allow a subset to be run when the extension is updated. During
build, an update script is now also generated.