Fix minor typos in config docs
the each --> east, datacenter have --> datacenters has, mean signal --> signal
commit 21e574d33d
parent b30fe29c9f
@@ -530,7 +530,7 @@ The second feature is the ability to add one or more synchronous replicas of the
 An example configuration would be four total datacenters, two on the east coast, two on the west coast, with a preference for fast write latencies from the west coast. One datacenter on each coast would be sized to store a full copy of the data. The second datacenter on each coast would only have a few FoundationDB processes.
-While everything is healthy, writes need to be made durable in both west coast datacenters before a commit can succeed. The geographic proximity of the two datacenters minimizes the additional commit latency. Reads can be served from either region, and clients can get data from whichever region is closer. Getting a read version from the each coast region will still require communicating with a west coast datacenter. Clients can cache read versions if they can tolerate reading stale data to avoid waiting on read versions.
+While everything is healthy, writes need to be made durable in both west coast datacenters before a commit can succeed. The geographic proximity of the two datacenters minimizes the additional commit latency. Reads can be served from either region, and clients can get data from whichever region is closer. Getting a read version from the east coast region will still require communicating with a west coast datacenter. Clients can cache read versions if they can tolerate reading stale data to avoid waiting on read versions.
 If either west coast datacenter fails, the last few mutations will be propagated from the remaining west coast datacenter to the east coast. At this point, FoundationDB will start accepting commits on the east coast. Once the west coast comes back online, the system will automatically start copying all the data that was committed to the east coast back to the west coast replica. Once the west coast has caught up, the system will automatically switch back to accepting writes from the west coast again.
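As a rough sketch of what such a four-datacenter configuration could look like, following the region JSON format these docs use elsewhere (the datacenter IDs WC1/WC2/EC1/EC2 and the ``one_satellite_double`` mode are illustrative assumptions, not part of this commit)::

    {
      "regions": [{
        "datacenters": [
          { "id": "WC1", "priority": 1 },
          { "id": "WC2", "priority": 0, "satellite": 1 }
        ],
        "satellite_redundancy_mode": "one_satellite_double"
      }, {
        "datacenters": [
          { "id": "EC1", "priority": 0 },
          { "id": "EC2", "priority": 0, "satellite": 1 }
        ],
        "satellite_redundancy_mode": "one_satellite_double"
      }]
    }

The higher priority on WC1 expresses the west coast preference, while each region's satellite keeps commits durable in two datacenters on that coast.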
@@ -615,7 +615,7 @@ The number of replicas in each region is controlled by redundancy level. For exa
 Asymmetric configurations
 -------------------------
-The fact that satellite policies are configured per region allows for asymmetric configurations. For example, FoudnationDB can have a three datacenter setup where there are two datacenters on the west coast (WC1, WC2) and one datacenter on the east coast (EC1). The west coast region can be set as the preferred active region by setting the priority of its primary datacenter higher than the east coast datacenter. The west coast region should have a satellite policy configured, so that when it is active, FoundationDB is making mutations durable in both west coast datacenters. In the rare event that one of the west coast datacenter have failed, FoundationDB will fail over to the east coast datacenter. Because this region does not a satellite datacenter, the mutations will only be made durable in one datacenter while the transaction subsystem is located here. However this is justifiable because the region will only be active if a datacenter has already been lost.
+The fact that satellite policies are configured per region allows for asymmetric configurations. For example, FoundationDB can have a three datacenter setup where there are two datacenters on the west coast (WC1, WC2) and one datacenter on the east coast (EC1). The west coast region can be set as the preferred active region by setting the priority of its primary datacenter higher than that of the east coast datacenter. The west coast region should have a satellite policy configured, so that when it is active, FoundationDB is making mutations durable in both west coast datacenters. In the rare event that one of the west coast datacenters has failed, FoundationDB will fail over to the east coast datacenter. Because this region does not have a satellite datacenter, the mutations will only be made durable in one datacenter while the transaction subsystem is located there. However, this is justifiable because the region will only be active if a datacenter has already been lost.
 This is the region configuration that implements the example::
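The configuration block itself falls outside this hunk's context window. A plausible reconstruction, assuming the same region JSON fields as in the sketch above (not the literal block from the docs)::

    {
      "regions": [{
        "datacenters": [
          { "id": "WC1", "priority": 1 },
          { "id": "WC2", "priority": 0, "satellite": 1 }
        ],
        "satellite_redundancy_mode": "one_satellite_double"
      }, {
        "datacenters": [
          { "id": "EC1", "priority": 0 }
        ]
      }]
    }

Note the east coast region carries no satellite entry and no satellite policy, which is exactly the asymmetry the paragraph describes.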
@@ -669,7 +669,7 @@ To configure an existing database to regions, do the following steps:
 4. Configure ``usable_regions=2``. This will cause the cluster to start copying data between the regions.
-5. Watch ``status`` and wait until data movement is complete. This will mean signal that the remote datacenter has a full replica of all of the data in the database.
+5. Watch ``status`` and wait until data movement is complete. This will signal that the remote datacenter has a full replica of all of the data in the database.
 6. Change the region configuration to have a non-negative priority for the primary datacenters in both regions. This will enable automatic failover between regions.
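A minimal fdbcli walkthrough of steps 4 through 6 (an illustrative transcript; ``regions.json`` is a hypothetical file holding the updated region document)::

    fdb> configure usable_regions=2
    fdb> status
    ... repeat status until data movement is complete ...
    fdb> fileconfigure regions.json

``configure`` and ``fileconfigure`` are the fdbcli commands for changing the database and region configuration; the final ``fileconfigure`` here would apply a region document in which both primary datacenters have a non-negative priority.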
@@ -680,7 +680,7 @@ When a primary datacenter fails, the cluster will go into a degraded state. It w
 .. warning:: While a datacenter has failed, the maximum write throughput of the cluster will be roughly 1/3 of normal performance. This is because the transaction logs need to store all of the mutations being committed, so that once the other datacenter comes back online, it can replay history to catch back up.
-To drop the dead datacenter do the follow steps:
+To drop the dead datacenter do the following steps:
 1. Configure the region configuration so that the dead datacenter has a negative priority.
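A sketch of step 1, assuming the dead datacenter has the illustrative ID ``DC1``; a negative priority tells the cluster not to recover into that datacenter (the surviving region's entry is omitted here)::

    {
      "regions": [{
        "datacenters": [
          { "id": "DC1", "priority": -1 }
        ]
      }]
    }

This would be applied with ``fdb> fileconfigure regions.json``, as in the transcript above.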