Jay Huh 14d3046a53 Skip building remote compaction output file when not OK (#13183)
Summary:
During `FinishCompactionOutputFile()`, if there's an IOError, we may still end up with the output file in memory, but its table properties are not populated, because `outputs.UpdateTableProperties();` is called only when `s.ok()` is true.

However, during remote compaction result serialization, we always access `table_properties`, which may be null. This was causing a segfault.

We can skip building the output files in the result entirely when the status is not OK.
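
For illustration, here is a minimal, self-contained sketch of that guard. The types and the function name `BuildResultOutputs` are simplified stand-ins for RocksDB's internals (`Status`, `TableProperties`, `CompactionServiceOutputFile`), not the actual change:
```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Simplified stand-ins for RocksDB's Status, TableProperties, and compaction
// output structures; illustrative only.
struct Status {
  bool ok() const { return ok_; }
  bool ok_ = true;
};
struct TableProperties {
  uint64_t num_entries = 0;
};
struct CompactionOutput {
  std::string file_name;
  std::shared_ptr<TableProperties> table_properties;  // never populated on IOError
};
struct OutputFileInfo {
  std::string file_name;
  TableProperties table_properties;  // copied by value during serialization
};

// Before the fix, *output.table_properties was copied unconditionally, which
// segfaults when the pointer was never populated. Skipping the loop when the
// status is not OK avoids the null dereference.
std::vector<OutputFileInfo> BuildResultOutputs(
    const Status& s, const std::vector<CompactionOutput>& outputs) {
  std::vector<OutputFileInfo> result;
  if (!s.ok()) {
    return result;  // report no output files on failure
  }
  for (const auto& output : outputs) {
    result.push_back({output.file_name, *output.table_properties});
  }
  return result;
}
```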

# Unit Test
New test added
```
./compaction_service_test --gtest_filter="*CompactionOutputFileIOError*"
```

Before the fix
```
Received signal 11 (Segmentation fault)
Invoking GDB for stack trace...
#4  0x00000000004708ed in rocksdb::TableProperties::TableProperties (this=0x7fae070fb4e8) at ./include/rocksdb/table_properties.h:212
212     struct TableProperties {
#5  0x00007fae0b195b9e in rocksdb::CompactionServiceOutputFile::CompactionServiceOutputFile (this=0x7fae070fb400, name=..., smallest=0, largest=0, _smallest_internal_key=..., _largest_internal_key=..., _oldest_ancester_time=1733335023, _file_creation_time=1733335026, _epoch_number=1, _file_checksum=..., _file_checksum_func_name=..., _paranoid_hash=0, _marked_for_compaction=false, _unique_id=..., _table_properties=...) at ./db/compaction/compaction_job.h:450
450             table_properties(_table_properties) {}
```

After the fix
```
[ RUN      ] CompactionServiceTest.CompactionOutputFileIOError
[       OK ] CompactionServiceTest.CompactionOutputFileIOError (4499 ms)
[----------] 1 test from CompactionServiceTest (4499 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (4499 ms total)
[  PASSED  ] 1 test.
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/13183

Reviewed By: anand1976

Differential Revision: D66770876

Pulled By: jaykorean

fbshipit-source-id: 63df7c2786ce0353f38a93e493ae4e7b591f4ed9
2024-12-04 18:23:08 -08:00

RocksDB: A Persistent Key-Value Store for Flash and RAM Storage

RocksDB is developed and maintained by the Facebook Database Engineering Team. It is built on earlier work on LevelDB by Sanjay Ghemawat (sanjay@google.com) and Jeff Dean (jeff@google.com).

This code is a library that forms the core building block for a fast key-value server, especially suited for storing data on flash drives. It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF) and Space-Amplification-Factor (SAF). It has multi-threaded compactions, making it especially suitable for storing multiple terabytes of data in a single database.

Start with example usage here: https://github.com/facebook/rocksdb/tree/main/examples

See the GitHub wiki for more explanation.

The public interface is in include/. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.
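
As a quick orientation, the sketch below opens a database, writes a key, and reads it back using only public headers from include/; the database path and key are arbitrary choices for this example:
```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"  // public API headers live under include/rocksdb/

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;  // create the database if it does not exist

  rocksdb::DB* db = nullptr;
  // "/tmp/rocksdb_example" is an arbitrary path for this sketch.
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rocksdb_example", &db);
  assert(s.ok());

  // Write one key, then read it back through the default column family.
  s = db->Put(rocksdb::WriteOptions(), "key", "value");
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "key", &value);
  assert(s.ok() && value == "value");

  delete db;  // closes the database
  return 0;
}
```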

Questions and discussions are welcome on the RocksDB Developers Public Facebook group and email list on Google Groups.

License

RocksDB is dual-licensed under both the GPLv2 (found in the COPYING file in the root directory) and Apache 2.0 License (found in the LICENSE.Apache file in the root directory). You may select, at your option, one of the above-listed licenses.
