nixos: typofixes/tab deletion in some foundationdb docs/module

Signed-off-by: Austin Seipp <aseipp@pobox.com>
Author: Austin Seipp <aseipp@pobox.com>
Date: 2018-04-25 00:05:18 -05:00
parent fefbc047d2
commit e4e8562806
2 changed files with 9 additions and 8 deletions

View File

@@ -206,7 +206,7 @@ in
       default = null;
       type = types.nullOr types.str;
       description = ''
         Machine identifier key. All processes on a machine should share a
         unique id. By default, processes on a machine determine a unique id to share.
         This does not generally need to be set.
       '';
@@ -216,7 +216,7 @@ in
       default = null;
       type = types.nullOr types.str;
       description = ''
         Zone identifier key. Processes that share a zone id are
         considered non-unique for the purposes of data replication.
         If unset, defaults to machine id.
       '';
@@ -226,7 +226,7 @@ in
       default = null;
       type = types.nullOr types.str;
       description = ''
         Data center identifier key. All processes physically located in a
         data center should share the id. If you are depending on data
         center based replication this must be set on all processes.
       '';
@@ -236,7 +236,7 @@ in
       default = null;
       type = types.nullOr types.str;
       description = ''
         Data hall identifier key. All processes physically located in a
         data hall should share the id. If you are depending on data
         hall based replication this must be set on all processes.
       '';
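The locality options above can be sketched in a NixOS configuration roughly as follows. This is a hypothetical illustration only: the exact attribute paths under `services.foundationdb` are assumed from the module context (they are not shown in this diff), and all values are made-up examples.

```nix
{
  # Sketch, not authoritative: attribute names below are assumptions,
  # not confirmed by this diff. Check the actual module for the real paths.
  services.foundationdb = {
    enable = true;
    # Machine identifier key: shared by all processes on one machine.
    machineId = "machine-01";
    # Zone identifier key: processes sharing a zone id are considered
    # non-unique for replication purposes; defaults to the machine id.
    zoneId = "rack-a";
    # Data center / data hall keys: must be set on all processes if you
    # depend on data-center- or data-hall-based replication.
    dataCenter = "dc-east";
    dataHall = "hall-1";
  };
}
```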

View File

@@ -16,8 +16,8 @@
 <para>FoundationDB (or "FDB") is a distributed, open source, high performance,
 transactional key-value store. It can store petabytes of data and deliver
-exceptional performance while maintaining consistency and ACID semantics over a
-large cluster.</para>
+exceptional performance while maintaining consistency and ACID semantics
+(serializable transactions) over a large cluster.</para>

 <section><title>Configuring and basic setup</title>
@@ -101,7 +101,7 @@ FoundationDB worker processes that should be started on the machine.</para>
 <para>FoundationDB worker processes typically require 4GB of RAM per-process at
 minimum for good performance, so this option is set to 1 by default since the
-maximum aount of RAM is unknown. You're advised to abide by this restriction,
+maximum amount of RAM is unknown. You're advised to abide by this restriction,
 so pick a number of processes so that each has 4GB or more.</para>

 <para>A similar option exists in order to scale backup agent processes,
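The 4GB-per-process sizing rule above can be illustrated with a short configuration sketch. The option name `serverProcesses` is an assumption based on the surrounding docs, and the machine size is a made-up example.

```nix
{
  # Sketch under stated assumptions: on a machine with 16GB of RAM,
  # run at most 16GB / 4GB = 4 worker processes so each gets >= 4GB.
  services.foundationdb = {
    enable = true;
    serverProcesses = 4;  # hypothetical option name; defaults to 1
  };
}
```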
@@ -129,7 +129,8 @@ client applications will use to find and join coordinators. Note that this file
 <emphasis>can not</emphasis> be managed by NixOS so easily: FoundationDB is
 designed so that it will rewrite the file at runtime for all clients and nodes
 when cluster coordinators change, with clients transparently handling this
-without intervention.</para>
+without intervention. It is fundamentally a mutable file, and you should not
+try to manage it in any way in NixOS.</para>

 <para>When dealing with a cluster, there are two main things you want to
 do:</para>