
Fix various indentation issues that caused weird formatting in the documentation output.

A.J. Beamon 2020-02-21 20:05:48 -08:00
parent 6a6b89b258
commit 9c9e643334
8 changed files with 99 additions and 103 deletions

@@ -177,7 +177,7 @@ You can add new machines to a cluster at any time:
5) If you have previously :ref:`excluded <removing-machines-from-a-cluster>` a machine from the cluster, you will need to take it off the exclusion list using the ``include <ip>`` command of fdbcli before it can be a full participant in the cluster.
.. note:: Addresses have the form ``IP``:``PORT``. This form is used even if TLS is enabled.
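For example, a machine that was previously excluded can be brought back with the ``include`` command; the address below is illustrative::

    user@host1$ fdbcli
    fdb> include 10.0.4.1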
.. _removing-machines-from-a-cluster:
@@ -192,7 +192,7 @@ To temporarily or permanently remove one or more machines from a FoundationDB cl
3) Use the ``exclude`` command in ``fdbcli`` on the machines you plan to remove:
::
user@host1$ fdbcli
Using cluster file `/etc/foundationdb/fdb.cluster'.
@@ -205,13 +205,13 @@ To temporarily or permanently remove one or more machines from a FoundationDB cl
It is now safe to remove these machines or processes from the cluster.
``exclude`` can be used to exclude either machines (by specifying an IP address) or individual processes (by specifying an ``IP``:``PORT`` pair).
.. note:: Addresses have the form ``IP``:``PORT``. This form is used even if TLS is enabled.
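As an illustration, a single process can be excluded by giving its full ``IP``:``PORT`` address; the address and port here are placeholders::

    user@host1$ fdbcli
    fdb> exclude 10.0.4.1:4500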
Excluding a server doesn't shut it down immediately; data on the machine is first moved away. When the ``exclude`` command completes successfully (by returning control to the command prompt), the machines that you specified are no longer required to maintain the configured redundancy mode. A large amount of data might need to be transferred first, so be patient. When the process is complete, the excluded machine or process can be shut down without fault tolerance or availability consequences.
If you interrupt the exclude command with Ctrl-C after seeing the "waiting for state to be removed" message, the exclusion work will continue in the background. Repeating the command will continue waiting for the exclusion to complete. To reverse the effect of the ``exclude`` command, use the ``include`` command.
4) On each removed machine, stop the FoundationDB server and prevent it from starting at the next boot. Follow the :ref:`instructions for your platform <administration-running-foundationdb>`. For example, on Ubuntu::
@@ -316,7 +316,7 @@ Running backups Number of backups currently running. Different backups c
Running DRs Number of DRs currently running. Different DRs could be streaming different prefixes and/or to different DR clusters.
====================== ==========================================================================================================
The "Memory availability" is a conservative estimate of the minimal RAM available to any ``fdbserver`` process across all machines in the cluster. This value is calculated in two steps. Memory available per process is first calculated *for each machine* by taking:
The "Memory availability" is a conservative estimate of the minimal RAM available to any ``fdbserver`` process across all machines in the cluster. This value is calculated in two steps. Memory available per process is first calculated *for each machine* by taking::
availability = ((total - committed) + sum(processSize)) / processes
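As a purely illustrative calculation, a machine with 16 GiB of total RAM, 8 GiB of committed memory, and two ``fdbserver`` processes each currently using 2 GiB would report::

    availability = ((16 GiB - 8 GiB) + (2 GiB + 2 GiB)) / 2 = 6 GiB per process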

@@ -9,9 +9,9 @@ What is the CAP Theorem?
In 2000, Eric Brewer conjectured that a distributed system cannot simultaneously provide all three of the following desirable properties:
* Consistency: A read sees all previously completed writes.
* Availability: Reads and writes always succeed.
* Partition tolerance: Guaranteed properties are maintained even when network failures prevent some machines from communicating with others.
In 2002, Gilbert and Lynch proved this in the asynchronous and partially synchronous network models, so it is now commonly called the `CAP Theorem <http://en.wikipedia.org/wiki/CAP_theorem>`_.

@@ -397,7 +397,7 @@ Datacenter-aware mode
In addition to the more commonly used modes listed above, this version of FoundationDB has support for redundancy across multiple datacenters.
.. note:: When using the datacenter-aware mode, all ``fdbserver`` processes should be passed a valid datacenter identifier on the command line.
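For example, a process in a datacenter identified as ``dc1`` might be started as follows; the paths, port, and identifier are illustrative::

    /usr/sbin/fdbserver --cluster_file /etc/foundationdb/fdb.cluster \
        --public_address auto:4500 --datacenter_id dc1

The equivalent ``datacenter_id`` setting in ``foundationdb.conf`` can be used for processes started by ``fdbmonitor``.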
``three_datacenter`` mode
*(for 5+ machines in 3 datacenters)*
@@ -624,23 +624,23 @@ The ``satellite_redundancy_mode`` is configured per region, and specifies how ma
``one_satellite_single`` mode
Keep one copy of the mutation log in the satellite datacenter with the highest priority. If the highest priority satellite is unavailable it will put the transaction log in the satellite datacenter with the next highest priority.
``one_satellite_double`` mode
Keep two copies of the mutation log in the satellite datacenter with the highest priority.
``one_satellite_triple`` mode
Keep three copies of the mutation log in the satellite datacenter with the highest priority.
``two_satellite_safe`` mode
Keep two copies of the mutation log in each of the two satellite datacenters with the highest priorities, for a total of four copies of each mutation. This mode will protect against the simultaneous loss of both the primary and one of the satellite datacenters. If only one satellite is available, it will fall back to only storing two copies of the mutation log in the remaining datacenter.
``two_satellite_fast`` mode
Keep two copies of the mutation log in each of the two satellite datacenters with the highest priorities, for a total of four copies of each mutation. FoundationDB will only synchronously wait for one of the two satellite datacenters to make the mutations durable before considering a commit successful. This will reduce tail latencies caused by network issues between datacenters. If only one satellite is available, it will fall back to only storing two copies of the mutation log in the remaining datacenter.
.. warning:: In release 6.0 this is implemented by waiting for all but 2 of the transaction logs. If ``satellite_logs`` is set to more than 4, FoundationDB will still need to wait for replies from both datacenters.
@@ -698,17 +698,17 @@ Migrating a database to use a region configuration
To configure an existing database to use regions, perform the following steps:
1. Ensure all processes have their dcid locality set on the command line. All processes should exist in the same datacenter. If converting from a ``three_datacenter`` configuration, first configure down to using a single datacenter by changing the replication mode. Then exclude the machines in all datacenters but the one that will become the initial active region.
2. Configure the region configuration. The datacenter with all the existing processes should have a non-negative priority. The region which will eventually store the remote replica should be added with a negative priority.
3. Add processes to the cluster in the remote region. These processes will not take data yet, but need to be added to the cluster. If they are added before the region configuration is set they will be assigned data like any other FoundationDB process, which will lead to high latencies.
4. Configure ``usable_regions=2``. This will cause the cluster to start copying data between the regions.
5. Watch ``status`` and wait until data movement is complete. This will signal that the remote datacenter has a full replica of all of the data in the database.
6. Change the region configuration to have a non-negative priority for the primary datacenters in both regions. This will enable automatic failover between regions.
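A sketch of steps 2 and 4 in ``fdbcli``, assuming the region configuration has been written to a hypothetical ``regions.json`` file::

    user@host1$ fdbcli
    fdb> fileconfigure regions.json
    fdb> configure usable_regions=2
    fdb> status

``status`` can then be watched until data movement completes, as described in step 5.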
Handling datacenter failures
----------------------------
@@ -719,9 +719,9 @@ When a primary datacenter fails, the cluster will go into a degraded state. It w
To drop the dead datacenter do the following steps:
1. Configure the region configuration so that the dead datacenter has a negative priority.
2. Configure ``usable_regions=1``.
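A sketch of these two steps in ``fdbcli``, again assuming the updated region configuration lives in a hypothetical ``regions.json`` file::

    user@host1$ fdbcli
    fdb> fileconfigure regions.json
    fdb> configure usable_regions=1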
If you are running in a configuration without a satellite datacenter, or you have lost all machines in a region simultaneously, the ``force_recovery_with_data_loss`` command from ``fdbcli`` allows you to force a recovery to the other region. This will discard the portion of the mutation log which did not make it across the WAN. Once the database has recovered, immediately follow the previous steps to drop the dead region the normal way.
@@ -730,13 +730,10 @@ Region change safety
The steps described above for both adding and removing replicas are enforced by ``fdbcli``. The following are the specific conditions checked by ``fdbcli``:
* You cannot change the ``regions`` configuration while also changing ``usable_regions``.
* You can only change ``usable_regions`` when exactly one region has priority >= 0.
* When ``usable_regions`` > 1, all regions with priority >= 0 must have a full replica of the data.
* All storage servers must be in one of the regions specified by the region configuration.
Monitoring
----------
@@ -772,9 +769,8 @@ Known limitations
The 6.2 release still has a number of rough edges related to region configuration. This is a collection of all the issues that have been pointed out in the sections above. These issues should be significantly improved in future releases of FoundationDB:
* FoundationDB supports replicating data to at most two regions.
* ``two_satellite_fast`` does not hide latency properly when configured with more than 4 satellite transaction logs.
.. _guidelines-process-class-config:

@@ -5,7 +5,7 @@ Release Notes
1.0.1
=====
* Fix segmentation fault in client when there are a very large number of dependent operations in a transaction and certain errors occur.
1.0.0
=====
@@ -21,34 +21,34 @@ There are only minor technical differences between this release and the 0.3.0 re
Java
----
* ``clear(Range)`` replaces the now deprecated ``clearRangeStartsWith()``.
Python
------
* Windows installer supports Python 3.
Node and Ruby
-------------
* String option parameters are converted to UTF-8.
All
---
* API version updated to 100. See the :ref:`API version upgrade guide <api-version-upgrade-guide-100>` for upgrade details.
* Runs on Mac OS X 10.7.
* Improvements to installation packages, including package paths and directory modes.
* Eliminated cases of excessive resource usage in the locality API.
* Watches are disabled when read-your-writes functionality is disabled.
* Fatal error paths now call ``_exit()`` instead of ``exit()``.
Fixes
-----
* A few Python API entry points failed to respect the ``as_foundationdb_key()`` convenience interface.
* ``fdbcli`` could print commit version numbers incorrectly in Windows.
* Multiple watches set on the same key were not correctly triggered by a subsequent write in the same transaction.
Earlier release notes
---------------------

@@ -128,9 +128,9 @@ Certificate file default location
The default behavior when the certificate or key file is not specified is to look for a file named ``fdb.pem`` in the current working directory. If this file is not present, an attempt is made to load a file from a system-dependent location as follows:
* Linux: ``/etc/foundationdb/fdb.pem``
* macOS: ``/usr/local/etc/foundationdb/fdb.pem``
* Windows: ``C:\ProgramData\foundationdb\fdb.pem``
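If the defaults are not suitable, the locations can be supplied explicitly instead; a minimal sketch for ``fdbserver``, with illustrative paths::

    /usr/sbin/fdbserver --cluster_file /etc/foundationdb/fdb.cluster \
        --public_address auto:4500:tls \
        --tls_certificate_file /etc/foundationdb/cert.pem \
        --tls_key_file /etc/foundationdb/key.pem \
        --tls_ca_file /etc/foundationdb/ca.pem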
Default Peer Verification
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -152,9 +152,9 @@ Automatic TLS certificate refresh
The TLS certificate will be automatically refreshed on a configurable cadence. The server will inspect the CA, certificate, and key files in the specified locations periodically, and will begin using the new versions if the following criteria are met:
* They are changed, judging by the last modified time.
* They are valid certificates.
* The key file matches the certificate file.
The refresh rate is controlled by ``--knob_tls_cert_refresh_delay_seconds``. Setting it to 0 will disable the refresh.
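For example, to have the files re-checked roughly once an hour, the knob could be passed on the ``fdbserver`` command line; the interval and paths are illustrative::

    /usr/sbin/fdbserver --cluster_file /etc/foundationdb/fdb.cluster \
        --public_address auto:4500:tls \
        --knob_tls_cert_refresh_delay_seconds 3600

The same knob can typically also be set in ``foundationdb.conf`` as ``knob_tls_cert_refresh_delay_seconds = 3600``.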