Fix various indentation issues that caused odd formatting in the documentation output.

A.J. Beamon 2020-02-21 20:05:48 -08:00
parent 6a6b89b258
commit 9c9e643334
8 changed files with 99 additions and 103 deletions


@ -177,7 +177,7 @@ You can add new machines to a cluster at any time:
5) If you have previously :ref:`excluded <removing-machines-from-a-cluster>` a machine from the cluster, you will need to take it off the exclusion list using the ``include <ip>`` command of fdbcli before it can be a full participant in the cluster.

.. note:: Addresses have the form ``IP``:``PORT``. This form is used even if TLS is enabled.

.. _removing-machines-from-a-cluster:
@ -192,26 +192,26 @@ To temporarily or permanently remove one or more machines from a FoundationDB cl
3) Use the ``exclude`` command in ``fdbcli`` on the machines you plan to remove:

   ::

     user@host1$ fdbcli
     Using cluster file `/etc/foundationdb/fdb.cluster'.
     The database is available.
     Welcome to the fdbcli. For help, type `help'.
     fdb> exclude 1.2.3.4 1.2.3.5 1.2.3.6
     Waiting for state to be removed from all excluded servers. This may take a while.
     It is now safe to remove these machines or processes from the cluster.

   ``exclude`` can be used to exclude either machines (by specifying an IP address) or individual processes (by specifying an ``IP``:``PORT`` pair).

   .. note:: Addresses have the form ``IP``:``PORT``. This form is used even if TLS is enabled.

   Excluding a server doesn't shut it down immediately; data on the machine is first moved away. When the ``exclude`` command completes successfully (by returning control to the command prompt), the machines that you specified are no longer required to maintain the configured redundancy mode. A large amount of data might need to be transferred first, so be patient. When the process is complete, the excluded machine or process can be shut down without fault tolerance or availability consequences.

   If you interrupt the exclude command with Ctrl-C after seeing the "waiting for state to be removed" message, the exclusion work will continue in the background. Repeating the command will continue waiting for the exclusion to complete. To reverse the effect of the ``exclude`` command, use the ``include`` command.

4) On each removed machine, stop the FoundationDB server and prevent it from starting at the next boot. Follow the :ref:`instructions for your platform <administration-running-foundationdb>`. For example, on Ubuntu::
@ -316,9 +316,9 @@ Running backups Number of backups currently running. Different backups c
Running DRs            Number of DRs currently running. Different DRs could be streaming different prefixes and/or to different DR clusters.
====================== ==========================================================================================================

The "Memory availability" is a conservative estimate of the minimal RAM available to any ``fdbserver`` process across all machines in the cluster. This value is calculated in two steps. Memory available per process is first calculated *for each machine* by taking::

    availability = ((total - committed) + sum(processSize)) / processes

where:
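A sketch of this computation in plain Python (names mirror the formula above; the sample numbers are illustrative, not output of ``status``):

```python
def memory_availability(total, committed, process_sizes):
    """Per-machine estimate: RAM not yet committed by anything, plus the
    RAM already held by the fdbserver processes on the machine, split
    evenly among those processes."""
    processes = len(process_sizes)
    return ((total - committed) + sum(process_sizes)) / processes

# The cluster-wide "Memory availability" is the minimum across machines.
machines = [
    (16e9, 12e9, [3e9, 3e9]),  # total, committed, per-process resident sizes
    (16e9, 13e9, [2e9, 2e9]),
]
cluster_availability = min(memory_availability(*m) for m in machines)
```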


@ -538,31 +538,31 @@ Applications must provide error handling and an appropriate retry loop around th
``FDB_STREAMING_MODE_ITERATOR``
    The caller is implementing an iterator (most likely in a binding to a higher level language). The amount of data returned depends on the value of the ``iteration`` parameter to :func:`fdb_transaction_get_range()`.

``FDB_STREAMING_MODE_SMALL``
    Data is returned in small batches (not much more expensive than reading individual key-value pairs).

``FDB_STREAMING_MODE_MEDIUM``
    Data is returned in batches between _SMALL and _LARGE.

``FDB_STREAMING_MODE_LARGE``
    Data is returned in batches large enough to be, in a high-concurrency environment, nearly as efficient as possible. If the caller does not need the entire range, some disk and network bandwidth may be wasted. The batch size may still be too small to allow a single client to get high throughput from the database.

``FDB_STREAMING_MODE_SERIAL``
    Data is returned in batches large enough that an individual client can get reasonable read bandwidth from the database. If the caller does not need the entire range, considerable disk and network bandwidth may be wasted.

``FDB_STREAMING_MODE_WANT_ALL``
    The caller intends to consume the entire range and would like it all transferred as early as possible.

``FDB_STREAMING_MODE_EXACT``
    The caller has passed a specific row limit and wants that many rows delivered in a single batch.
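As an illustration only: a binding-level iterator might request successively larger batches as its ``iteration`` counter grows, for example doubling up to a cap. The real batch sizes used by the client are internal and version-dependent; this toy model just shows the shape of the behavior:

```python
def iterator_batch_sizes(iterations, first=256, cap=8192):
    """Toy model of iterator-style streaming: each successive request
    (identified by the iteration counter) fetches a larger batch,
    starting small and growing toward a cap."""
    sizes = []
    size = first
    for _ in range(iterations):
        sizes.append(size)
        size = min(size * 2, cap)
    return sizes

# iterator_batch_sizes(6) -> [256, 512, 1024, 2048, 4096, 8192]
```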
.. function:: void fdb_transaction_set(FDBTransaction* transaction, uint8_t const* key_name, int key_name_length, uint8_t const* value, int value_length)


@ -211,21 +211,21 @@ Key selectors
Creates a key selector with the given reference key, equality flag, and offset. It is usually more convenient to obtain a key selector with one of the following methods:

.. classmethod:: KeySelector.last_less_than(key) -> KeySelector

    Returns a key selector referencing the last (greatest) key in the database less than the specified key.

.. classmethod:: KeySelector.last_less_or_equal(key) -> KeySelector

    Returns a key selector referencing the last (greatest) key less than or equal to the specified key.

.. classmethod:: KeySelector.first_greater_than(key) -> KeySelector

    Returns a key selector referencing the first (least) key greater than the specified key.

.. classmethod:: KeySelector.first_greater_or_equal(key) -> KeySelector

    Returns a key selector referencing the first (least) key greater than or equal to the specified key.
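The selector semantics can be modeled against a sorted list of keys with ``bisect`` (an illustrative sketch of the meaning, not the database's implementation; only two of the four selectors are shown):

```python
import bisect

def last_less_than(keys, key):
    """Greatest key strictly less than ``key``, or None."""
    i = bisect.bisect_left(keys, key)
    return keys[i - 1] if i > 0 else None

def first_greater_or_equal(keys, key):
    """Least key greater than or equal to ``key``, or None."""
    i = bisect.bisect_left(keys, key)
    return keys[i] if i < len(keys) else None

keys = [b'a', b'b', b'd']
# last_less_than(keys, b'c')         -> b'b'
# first_greater_or_equal(keys, b'c') -> b'd'
```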
.. method:: KeySelector.+(offset) -> KeySelector
@ -281,16 +281,16 @@ A |database-blurb1| |database-blurb2|
The ``options`` hash accepts the following optional parameters:

``:limit``
    Only the first ``limit`` keys (and their values) in the range will be returned.

``:reverse``
    If ``true``, then the keys in the range will be returned in reverse order. Reading ranges in reverse is supported natively by the database and should have minimal extra cost.

    If ``:limit`` is also specified, the *last* ``limit`` keys in the range will be returned in reverse order.

``:streaming_mode``
    A valid |streaming-mode|, which provides a hint to FoundationDB about how to retrieve the specified range. This option should generally not be specified, allowing FoundationDB to retrieve the full range very efficiently.
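The interaction of ``:reverse`` and ``:limit`` can be modeled in a few lines (illustrative Python over a plain sorted list, not the Ruby binding itself):

```python
def range_read(keys, limit=None, reverse=False):
    """Model of a range read over an already-sorted key list: with
    reverse=True and a limit, the *last* ``limit`` keys of the range
    come back, in reverse order."""
    result = list(reversed(keys)) if reverse else list(keys)
    return result[:limit] if limit is not None else result

keys = ['a', 'b', 'c', 'd']
# range_read(keys, limit=2)               -> ['a', 'b']
# range_read(keys, limit=2, reverse=True) -> ['d', 'c']
```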
.. method:: Database.get_range(begin, end, options={}) {|kv| block } -> nil
@ -459,16 +459,16 @@ Reading data
The ``options`` hash accepts the following optional parameters:

``:limit``
    Only the first ``limit`` keys (and their values) in the range will be returned.

``:reverse``
    If ``true``, then the keys in the range will be returned in reverse order. Reading ranges in reverse is supported natively by the database and should have minimal extra cost.

    If ``:limit`` is also specified, the *last* ``limit`` keys in the range will be returned in reverse order.

``:streaming_mode``
    A valid |streaming-mode|, which provides a hint to FoundationDB about how the returned enumerable is likely to be used. The default is ``:iterator``.
.. method:: Transaction.get_range(begin, end, options={}) {|kv| block } -> nil


@ -9,9 +9,9 @@ What is the CAP Theorem?
In 2000, Eric Brewer conjectured that a distributed system cannot simultaneously provide all three of the following desirable properties:

* Consistency: A read sees all previously completed writes.
* Availability: Reads and writes always succeed.
* Partition tolerance: Guaranteed properties are maintained even when network failures prevent some machines from communicating with others.

In 2002, Gilbert and Lynch proved this in the asynchronous and partially synchronous network models, so it is now commonly called the `CAP Theorem <http://en.wikipedia.org/wiki/CAP_theorem>`_.


@ -397,7 +397,7 @@ Datacenter-aware mode
In addition to the more commonly used modes listed above, this version of FoundationDB has support for redundancy across multiple datacenters.

.. note:: When using the datacenter-aware mode, all ``fdbserver`` processes should be passed a valid datacenter identifier on the command line.

``three_datacenter`` mode
    *(for 5+ machines in 3 datacenters)*
@ -624,23 +624,23 @@ The ``satellite_redundancy_mode`` is configured per region, and specifies how ma
``one_satellite_single`` mode
    Keep one copy of the mutation log in the satellite datacenter with the highest priority. If the highest priority satellite is unavailable it will put the transaction log in the satellite datacenter with the next highest priority.

``one_satellite_double`` mode
    Keep two copies of the mutation log in the satellite datacenter with the highest priority.

``one_satellite_triple`` mode
    Keep three copies of the mutation log in the satellite datacenter with the highest priority.

``two_satellite_safe`` mode
    Keep two copies of the mutation log in each of the two satellite datacenters with the highest priorities, for a total of four copies of each mutation. This mode will protect against the simultaneous loss of both the primary and one of the satellite datacenters. If only one satellite is available, it will fall back to only storing two copies of the mutation log in the remaining datacenter.

``two_satellite_fast`` mode
    Keep two copies of the mutation log in each of the two satellite datacenters with the highest priorities, for a total of four copies of each mutation. FoundationDB will only synchronously wait for one of the two satellite datacenters to make the mutations durable before considering a commit successful. This will reduce tail latencies caused by network issues between datacenters. If only one satellite is available, it will fall back to only storing two copies of the mutation log in the remaining datacenter.

.. warning:: In release 6.0 this is implemented by waiting for all but 2 of the transaction logs. If ``satellite_logs`` is set to more than 4, FoundationDB will still need to wait for replies from both datacenters.
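Satellite redundancy is specified per region in the JSON passed to ``fdbcli``'s ``fileconfigure`` command; a minimal sketch (the datacenter ids are placeholders)::

    {
        "regions": [{
            "datacenters": [
                { "id": "dc1", "priority": 1 },
                { "id": "dc1_sat", "priority": 1, "satellite": 1 }
            ],
            "satellite_redundancy_mode": "one_satellite_double"
        }]
    }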
@ -698,17 +698,17 @@ Migrating a database to use a region configuration
To configure an existing database to regions, do the following steps:

1. Ensure all processes have their dcid locality set on the command line. All processes should exist in the same datacenter. If converting from a ``three_datacenter`` configuration, first configure down to using a single datacenter by changing the replication mode. Then exclude the machines in all datacenters but the one that will become the initial active region.

2. Configure the region configuration. The datacenter with all the existing processes should have a non-negative priority. The region which will eventually store the remote replica should be added with a negative priority.

3. Add processes to the cluster in the remote region. These processes will not take data yet, but need to be added to the cluster. If they are added before the region configuration is set they will be assigned data like any other FoundationDB process, which will lead to high latencies.

4. Configure ``usable_regions=2``. This will cause the cluster to start copying data between the regions.

5. Watch ``status`` and wait until data movement is complete. This will signal that the remote datacenter has a full replica of all of the data in the database.

6. Change the region configuration to have a non-negative priority for the primary datacenters in both regions. This will enable automatic failover between regions.
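In ``fdbcli``, steps 4 and 5 correspond to commands along these lines (the region JSON file name is a placeholder; this is a sketch, not a verbatim transcript)::

    fdb> fileconfigure regions.json
    fdb> configure usable_regions=2
    fdb> status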
Handling datacenter failures
----------------------------
@ -719,9 +719,9 @@ When a primary datacenter fails, the cluster will go into a degraded state. It w
To drop the dead datacenter do the following steps:

1. Configure the region configuration so that the dead datacenter has a negative priority.
2. Configure ``usable_regions=1``.

If you are running in a configuration without a satellite datacenter, or you have lost all machines in a region simultaneously, the ``force_recovery_with_data_loss`` command from ``fdbcli`` allows you to force a recovery to the other region. This will discard the portion of the mutation log which did not make it across the WAN. Once the database has recovered, immediately follow the previous steps to drop the dead region the normal way.
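A sketch of the corresponding ``fdbcli`` session (the region file, which marks the dead datacenter with a negative priority, is a hypothetical name)::

    fdb> fileconfigure regions_dead_dc.json
    fdb> configure usable_regions=1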
@ -730,13 +730,10 @@ Region change safety
The steps described above for both adding and removing replicas are enforced by ``fdbcli``. The following are the specific conditions checked by ``fdbcli``:

* You cannot change the ``regions`` configuration while also changing ``usable_regions``.
* You can only change ``usable_regions`` when exactly one region has priority >= 0.
* When ``usable_regions`` > 1, all regions with priority >= 0 must have a full replica of the data.
* All storage servers must be in one of the regions specified by the region configuration.
Monitoring
----------
@ -772,9 +769,8 @@ Known limitations
The 6.2 release still has a number of rough edges related to region configuration. This is a collection of all the issues that have been pointed out in the sections above. These issues should be significantly improved in future releases of FoundationDB:

* FoundationDB supports replicating data to at most two regions.
* ``two_satellite_fast`` does not hide latency properly when configured with more than 4 satellite transaction logs.

.. _guidelines-process-class-config:


@ -543,25 +543,25 @@ How you map your application data to keys and values can have a dramatic impact
* Structure keys so that range reads can efficiently retrieve the most frequently accessed data.

  * If you perform a range read that is, in total, much more than 1 kB, try to restrict your range as much as you can while still retrieving the needed data.

* Structure keys so that no single key needs to be updated too frequently, which can cause transaction conflicts.

  * If a key is updated more than 10-100 times per second, try to split it into multiple keys.
  * For example, if a key is storing a counter, split the counter into N separate counters that are randomly incremented by clients. The total value of the counter can then be read by adding up the N individual ones.

* Keep key sizes small.

  * Try to keep key sizes below 1 kB. (Performance will be best with key sizes below 32 bytes and *cannot* be more than 10 kB.)
  * When using the tuple layer to encode keys (as is recommended), select short strings or small integers for tuple elements. Small integers will encode to just two bytes.
  * If your key sizes are above 1 kB, try either to move data from the key to the value, split the key into multiple keys, or encode the parts of the key more efficiently (remembering to preserve any important ordering).

* Keep value sizes moderate.

  * Try to keep value sizes below 10 kB. (Value sizes *cannot* be more than 100 kB.)
  * If your value sizes are above 10 kB, consider splitting the value across multiple keys.
  * If you read values with sizes above 1 kB but use only a part of each value, consider splitting the values using multiple keys.
  * If you frequently perform individual reads on a set of values that total to fewer than 200 bytes, try either to combine the values into a single value or to store the values in adjacent keys and use a range read.
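The counter-splitting suggestion can be sketched with a plain ``dict`` standing in for the key-value store (illustrative only; a real implementation would use atomic increments inside transactions):

```python
import random

N = 10  # number of shards for one logical counter

def increment(store, counter_name):
    """Add 1 to a randomly chosen shard, spreading writes
    (and thus transaction conflicts) across N keys."""
    shard = (counter_name, random.randrange(N))
    store[shard] = store.get(shard, 0) + 1

def read_total(store, counter_name):
    """Sum the shards to recover the logical counter value."""
    return sum(store.get((counter_name, i), 0) for i in range(N))

store = {}
for _ in range(100):
    increment(store, 'hits')
# read_total(store, 'hits') -> 100
```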
Large Values and Blobs
----------------------


@ -5,7 +5,7 @@ Release Notes
1.0.1
=====

* Fix segmentation fault in client when there are a very large number of dependent operations in a transaction and certain errors occur.

1.0.0
=====
@ -21,34 +21,34 @@ There are only minor technical differences between this release and the 0.3.0 re
Java
----

* ``clear(Range)`` replaces the now deprecated ``clearRangeStartsWith()``.

Python
------

* Windows installer supports Python 3.

Node and Ruby
-------------

* String option parameters are converted to UTF-8.

All
---

* API version updated to 100. See the :ref:`API version upgrade guide <api-version-upgrade-guide-100>` for upgrade details.
* Runs on Mac OS X 10.7.
* Improvements to installation packages, including package paths and directory modes.
* Eliminated cases of excessive resource usage in the locality API.
* Watches are disabled when read-your-writes functionality is disabled.
* Fatal error paths now call ``_exit()`` instead of ``exit()``.

Fixes
-----

* A few Python API entry points failed to respect the ``as_foundationdb_key()`` convenience interface.
* ``fdbcli`` could print commit version numbers incorrectly in Windows.
* Multiple watches set on the same key were not correctly triggered by a subsequent write in the same transaction.

Earlier release notes
---------------------


@ -128,9 +128,9 @@ Certificate file default location
The default behavior when the certificate or key file is not specified is to look for a file named ``fdb.pem`` in the current working directory. If this file is not present, an attempt is made to load a file from a system-dependent location as follows:

* Linux: ``/etc/foundationdb/fdb.pem``
* macOS: ``/usr/local/etc/foundationdb/fdb.pem``
* Windows: ``C:\ProgramData\foundationdb\fdb.pem``

Default Peer Verification
^^^^^^^^^^^^^^^^^^^^^^^^^
@ -152,9 +152,9 @@ Automatic TLS certificate refresh
The TLS certificate will be automatically refreshed on a configurable cadence. The server will inspect the CA, certificate, and key files in the specified locations periodically, and will begin using the new versions if the following criteria are met:

* They have changed, judging by the last modified time.
* They are valid certificates.
* The key file matches the certificate file.

The refresh rate is controlled by ``--knob_tls_cert_refresh_delay_seconds``. Setting it to 0 will disable the refresh.
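For example, to have the server check for updated files roughly once an hour (the knob name is the one given above; the exact argument syntax may vary by version, and the rest of the command line is elided)::

    fdbserver --knob_tls_cert_refresh_delay_seconds 3600 ...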