Summary: During `FinishCompactionOutputFile()`, if there's an IOError, we may end up having the output in memory, but table properties are not populated, because `outputs.UpdateTableProperties();` is called only when `s.ok()` is true. However, during remote compaction result serialization, we always try to access `table_properties`, which may be null. This was causing a segfault. We can skip building the output files in the result completely if the status is not ok.

# Unit Test

New test added:

```
./compaction_service_test --gtest_filter="*CompactionOutputFileIOError*"
```

Before the fix:

```
Received signal 11 (Segmentation fault)
Invoking GDB for stack trace...
#4  0x00000000004708ed in rocksdb::TableProperties::TableProperties (this=0x7fae070fb4e8) at ./include/rocksdb/table_properties.h:212
212     struct TableProperties {
#5  0x00007fae0b195b9e in rocksdb::CompactionServiceOutputFile::CompactionServiceOutputFile (this=0x7fae070fb400, name=..., smallest=0, largest=0, _smallest_internal_key=..., _largest_internal_key=..., _oldest_ancester_time=1733335023, _file_creation_time=1733335026, _epoch_number=1, _file_checksum=..., _file_checksum_func_name=..., _paranoid_hash=0, _marked_for_compaction=false, _unique_id=..., _table_properties=...) at ./db/compaction/compaction_job.h:450
450             table_properties(_table_properties) {}
```

After the fix:

```
[ RUN      ] CompactionServiceTest.CompactionOutputFileIOError
[       OK ] CompactionServiceTest.CompactionOutputFileIOError (4499 ms)
[----------] 1 test from CompactionServiceTest (4499 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (4499 ms total)
[  PASSED  ] 1 test.
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/13183

Reviewed By: anand1976

Differential Revision: D66770876

Pulled By: jaykorean

fbshipit-source-id: 63df7c2786ce0353f38a93e493ae4e7b591f4ed9
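For context, the shape of the fix can be illustrated with a minimal, self-contained sketch. The stand-in types and the `BuildResultSketch` helper below are hypothetical, not the actual RocksDB code; the point is simply that output files are only copied into the serialized result when the compaction status is OK, so an unpopulated `table_properties` pointer is never dereferenced.

```cpp
#include <memory>
#include <string>
#include <vector>

struct TablePropertiesStub {   // stand-in for rocksdb::TableProperties
  std::string db_id;
};

struct OutputFileStub {        // stand-in for one compaction output file
  std::string name;
  // May be null if the output hit an IOError before properties were filled in.
  std::shared_ptr<TablePropertiesStub> table_properties;
};

struct ResultStub {            // stand-in for the remote compaction result
  bool ok = false;
  std::vector<OutputFileStub> output_files;
};

ResultStub BuildResultSketch(bool status_ok,
                             const std::vector<OutputFileStub>& outputs) {
  ResultStub result;
  result.ok = status_ok;
  if (!status_ok) {
    // On failure, skip the output files entirely instead of reading fields
    // out of a possibly-null table_properties.
    return result;
  }
  for (const auto& out : outputs) {
    result.output_files.push_back(out);
  }
  return result;
}
```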
RocksDB: A Persistent Key-Value Store for Flash and RAM Storage
RocksDB is developed and maintained by the Facebook Database Engineering Team. It is built on earlier work on LevelDB by Sanjay Ghemawat (sanjay@google.com) and Jeff Dean (jeff@google.com).
This code is a library that forms the core building block for a fast key-value server, especially suited for storing data on flash drives. It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF), and Space-Amplification-Factor (SAF). It has multi-threaded compactions, making it especially suitable for storing multiple terabytes of data in a single database.
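As a hedged illustration (not prescriptive tuning advice), a few of the `Options` fields that shape these amplification tradeoffs are sketched below; the values are illustrative only.

```cpp
#include "rocksdb/options.h"

// Sketch of a few Options knobs that influence write/read/space amplification.
rocksdb::Options MakeIllustrativeOptions() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Larger memtables batch more writes before a flush, trading memory for
  // lower write amplification.
  options.write_buffer_size = 64 << 20;  // 64 MB
  // Universal compaction generally lowers write amplification at the cost of
  // higher space amplification; leveled compaction (the default) leans the
  // other way.
  options.compaction_style = rocksdb::kCompactionStyleUniversal;
  // More background jobs let flushes and compactions run in parallel.
  options.max_background_jobs = 4;
  // Compression trades CPU for lower space amplification.
  options.compression = rocksdb::kLZ4Compression;
  return options;
}
```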
Start with example usage here: https://github.com/facebook/rocksdb/tree/main/examples
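For orientation, here is a minimal open/put/get sketch using the public C++ API, in the spirit of the examples in the linked directory; the database path is arbitrary.

```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;  // create the DB if it does not exist yet

  // Open (or create) a database at an arbitrary path.
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/rocksdb_readme_example", &db);
  assert(s.ok());

  // Write a key, then read it back.
  s = db->Put(rocksdb::WriteOptions(), "key1", "value1");
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "key1", &value);
  assert(s.ok() && value == "value1");

  delete db;  // closes the database
  return 0;
}
```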
See the GitHub wiki for more explanation.
The public interface is in `include/`. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.
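As a quick illustration of that rule (the internal header named here is just one example):

```cpp
#include "rocksdb/db.h"        // OK: public, stable API under include/rocksdb/
// #include "db/version_set.h" // Avoid: internal header, may change without warning
```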
Questions and discussions are welcome on the RocksDB Developers Public Facebook group and email list on Google Groups.
License
RocksDB is dual-licensed under both the GPLv2 (found in the COPYING file in the root directory) and Apache 2.0 License (found in the LICENSE.Apache file in the root directory). You may select, at your option, one of the above-listed licenses.