Merge branch 'master' into es6

Bas van Dijk 2018-08-23 23:41:27 +02:00
commit 551fec4467
1746 changed files with 48873 additions and 43106 deletions

.github/CODEOWNERS (vendored): 3 changed lines

@ -21,7 +21,8 @@
/pkgs/top-level/default.nix @nbp @Ericson2314 /pkgs/top-level/default.nix @nbp @Ericson2314
/pkgs/top-level/impure.nix @nbp @Ericson2314 /pkgs/top-level/impure.nix @nbp @Ericson2314
/pkgs/top-level/stage.nix @nbp @Ericson2314 /pkgs/top-level/stage.nix @nbp @Ericson2314
/pkgs/stdenv /pkgs/stdenv/generic @Ericson2314
/pkgs/stdenv/cross @Ericson2314
/pkgs/build-support/cc-wrapper @Ericson2314 @orivej /pkgs/build-support/cc-wrapper @Ericson2314 @orivej
/pkgs/build-support/bintools-wrapper @Ericson2314 @orivej /pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/setup-hooks @Ericson2314 /pkgs/build-support/setup-hooks @Ericson2314


@ -43,7 +43,7 @@ See the nixpkgs manual for more details on [standard meta-attributes](https://ni
## Writing good commit messages ## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list archives, pull request discussions or upstream changes, it may require a lot of work. In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and such a one-line commit message is usually sufficient. For package version upgrades and such a one-line commit message is usually sufficient.


@ -8,7 +8,7 @@ build daemon as so-called channels. To get channel information via git, add
[nixpkgs-channels](https://github.com/NixOS/nixpkgs-channels.git) as a remote: [nixpkgs-channels](https://github.com/NixOS/nixpkgs-channels.git) as a remote:
``` ```
% git remote add channels git://github.com/NixOS/nixpkgs-channels.git % git remote add channels https://github.com/NixOS/nixpkgs-channels.git
``` ```
For stability and maximum binary package support, it is recommended to maintain For stability and maximum binary package support, it is recommended to maintain
@ -38,5 +38,4 @@ For pull-requests, please rebase onto nixpkgs `master`.
Communication: Communication:
* [Discourse Forum](https://discourse.nixos.org/) * [Discourse Forum](https://discourse.nixos.org/)
* [Mailing list](https://groups.google.com/forum/#!forum/nix-devel)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos) * [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)


@ -1047,6 +1047,19 @@ As you can see, `packunused` finds out that although the testsuite component has
no redundant dependencies the library component of `scientific-0.3.5.1` depends no redundant dependencies the library component of `scientific-0.3.5.1` depends
on `ghc-prim` which is unused in the library. on `ghc-prim` which is unused in the library.
### Using hackage2nix with nixpkgs
Hackage package derivations are found in the
[`hackage-packages.nix`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/haskell-modules/hackage-packages.nix)
file within `nixpkgs` and are used as the initial package set for
`haskellPackages`. The `hackage-packages.nix` file is not meant to be edited
by hand, but rather autogenerated by [`hackage2nix`](https://github.com/NixOS/cabal2nix/tree/master/hackage2nix),
which by default uses the [`configuration-hackage2nix.yaml`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/haskell-modules/configuration-hackage2nix.yaml)
file to generate all the derivations.
To modify the contents `configuration-hackage2nix.yaml`, follow the
instructions on [`hackage2nix`](https://github.com/NixOS/cabal2nix/tree/master/hackage2nix).
## Other resources ## Other resources
- The Youtube video [Nix Loves Haskell](https://www.youtube.com/watch?v=BsBhi_r-OeE) - The Youtube video [Nix Loves Haskell](https://www.youtube.com/watch?v=BsBhi_r-OeE)


@ -15,13 +15,17 @@ stdenv.mkDerivation {
buildPhase = "ant"; buildPhase = "ant";
} }
</programlisting> </programlisting>
Note that <varname>jdk</varname> is an alias for the OpenJDK. Note that <varname>jdk</varname> is an alias for the OpenJDK (self-built
</para> where available, or pre-built via Zulu).
Platforms with OpenJDK not (yet) in Nixpkgs (<literal>Aarch32</literal>,
<literal>Aarch64</literal>) point to the (unfree)
<literal>oraclejdk</literal>.
</para>
<para> <para>
JAR files that are intended to be used by other packages should be installed JAR files that are intended to be used by other packages should be installed
in <filename>$out/share/java</filename>. The OpenJDK has a stdenv setup hook in <filename>$out/share/java</filename>. JDKs have a stdenv setup hook
that adds any JARs in the <filename>share/java</filename> directories of the that add any JARs in the <filename>share/java</filename> directories of the
build inputs to the <envar>CLASSPATH</envar> environment variable. For build inputs to the <envar>CLASSPATH</envar> environment variable. For
instance, if the package <literal>libfoo</literal> installs a JAR named instance, if the package <literal>libfoo</literal> installs a JAR named
<filename>foo.jar</filename> in its <filename>share/java</filename> <filename>foo.jar</filename> in its <filename>share/java</filename>
@ -57,7 +61,18 @@ installPhase =
<literal>${jre}/bin/java</literal> instead of <literal>${jre}/bin/java</literal> instead of
<literal>${jdk}/bin/java</literal>, you prevent your package from depending <literal>${jdk}/bin/java</literal>, you prevent your package from depending
on the JDK at runtime. on the JDK at runtime.
</para> </para>
<para>
Note all JDKs passthru <literal>home</literal>, so if your application
requires environment variables like <envar>JAVA_HOME</envar> being set, that
can be done in a generic fashion with the <literal>--set</literal> argument
of <literal>makeWrapper</literal>:
<programlisting>
--set JAVA_HOME ${jdk.home}
</programlisting>
</para>
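  <para>
   As a hedged sketch only (the wrapper name, JAR name, and main class below
   are made up for illustration), such a <literal>makeWrapper</literal> call
   usually sits in <literal>installPhase</literal>:
<programlisting>
installPhase = ''
  mkdir -p $out/bin $out/share/java
  cp foo.jar $out/share/java/

  # makeWrapper has to be available as a build input for this to work.
  makeWrapper ${jre}/bin/java $out/bin/foo \
    --add-flags "-cp $out/share/java/foo.jar com.example.Main" \
    --set JAVA_HOME ${jdk.home}
'';
</programlisting>
  </para>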
<para> <para>
It is possible to use a different Java compiler than <command>javac</command> It is possible to use a different Java compiler than <command>javac</command>


@ -59,6 +59,11 @@ all crate sources of this package. Currently it is obtained by inserting a
fake checksum into the expression and building the package once. The correct
checksum can then be taken from the failed build.
When the `Cargo.lock`, provided by upstream, is not in sync with the
`Cargo.toml`, it is possible to use `cargoPatches` to update it. All patches
added in `cargoPatches` will also be prepended to the patches in `patches` at
build-time.
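As a sketch only (the package name, hashes and the patch file name here are hypothetical), a derivation using `cargoPatches` could look like this:
```nix
{ rustPlatform, fetchFromGitHub }:

rustPlatform.buildRustPackage rec {
  name = "example-0.1.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "example";
    rev = "v0.1.0";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  cargoSha256 = "0000000000000000000000000000000000000000000000000000";

  # Applied before `patches`, bringing the upstream Cargo.lock back in sync
  # with Cargo.toml before the vendored dependencies are verified.
  cargoPatches = [ ./update-cargo-lock.patch ];
}
```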
To install crates with nix there is also an experimental project called To install crates with nix there is also an experimental project called
[nixcrates](https://github.com/fractalide/nixcrates). [nixcrates](https://github.com/fractalide/nixcrates).


@ -64,7 +64,7 @@ stdenv.mkDerivation {
sha256 = "1ian3kwh2vg6hr3ymrv48s04gijs539vzrq62xr76bxbhbwnz2np"; sha256 = "1ian3kwh2vg6hr3ymrv48s04gijs539vzrq62xr76bxbhbwnz2np";
}; };
inherit noSysDirs; inherit noSysDirs;
configureFlags = "--target=arm-linux"; configureFlags = [ "--target=arm-linux" ];
} }
--- ---


@ -705,4 +705,52 @@ overrides = super: self: rec {
</programlisting> </programlisting>
</para> </para>
</section> </section>
<section xml:id="sec-citrix">
<title>Citrix Receiver</title>
<para>
The <link xlink:href="https://www.citrix.com/products/receiver/">Citrix Receiver</link> is a remote
desktop viewer which provides access to
<link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link> installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually as the licenses agreements of the vendor
need to be accepted first. This is available at the
<link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">download page at citrix.com</link>.
Then run <literal>nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz</literal>.
With the archive available in the store the package can be built and installed with Nix.
</para>
<para>
<emphasis>Note: it's recommended to install <literal>Citrix Receiver</literal> using
<literal>nix-env -i</literal> or globally to ensure that the <literal>.desktop</literal> files
are installed properly into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't
be possible to open <literal>.ica</literal> files
automatically from the browser to start a Citrix connection.</emphasis>
</para>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Receiver</literal> in <literal>nixpkgs</literal> trusts several certificates
<link xlink:href="https://curl.haxx.se/docs/caextract.html">from the Mozilla database</link> by default.
However several companies using Citrix might require their own corporate certificate. On distros with imperative
packaging these certs can be stored easily in
<link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>,
however this directory is a store path in <literal>nixpkgs</literal>. In order to work around this issue the package provides a simple
mechanism to add custom certificates without rebuilding the entire package using <literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_receiver.override {
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>
</chapter> </chapter>


@ -9,7 +9,7 @@
<para> <para>
Checkout the Nixpkgs source tree: Checkout the Nixpkgs source tree:
<screen> <screen>
$ git clone git://github.com/NixOS/nixpkgs.git $ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs</screen> $ cd nixpkgs</screen>
</para> </para>
</listitem> </listitem>


@ -103,8 +103,9 @@
<itemizedlist> <itemizedlist>
<listitem> <listitem>
<para> <para>
mention-bot usually notifies GitHub users based on the submitted changes, <link xlink:href="https://help.github.com/articles/about-codeowners/">CODEOWNERS</link>
but it can happen that it misses some of the package maintainers. will make GitHub notify users based on the submitted changes, but it can
happen that it misses some of the package maintainers.
</para> </para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
@ -376,8 +377,9 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
<itemizedlist> <itemizedlist>
<listitem> <listitem>
<para> <para>
Mention-bot notify GitHub users based on the submitted changes, but it <link xlink:href="https://help.github.com/articles/about-codeowners/">CODEOWNERS</link>
can happen that it miss some of the package maintainers. will make GitHub notify users based on the submitted changes, but it can
happen that it misses some of the package maintainers.
</para> </para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
@ -603,10 +605,11 @@ policy.
--> -->
<para> <para>
In a case a contributor leaves definitively the Nix community, he should In a case a contributor leaves definitively the Nix community, he
create an issue or notify the mailing list with references of packages and should create an issue or post on <link
modules he maintains so the maintainership can be taken over by other xlink:href="https://discourse.nixos.org">Discourse</link> with
contributors. references of packages and modules he maintains so the
maintainership can be taken over by other contributors.
</para> </para>
</section> </section>
</chapter> </chapter>


@ -836,9 +836,10 @@ passthru = {
These can optionally be compressed using <command>gzip</command> These can optionally be compressed using <command>gzip</command>
(<filename>.tar.gz</filename>, <filename>.tgz</filename> or (<filename>.tar.gz</filename>, <filename>.tgz</filename> or
<filename>.tar.Z</filename>), <command>bzip2</command> <filename>.tar.Z</filename>), <command>bzip2</command>
(<filename>.tar.bz2</filename> or <filename>.tbz2</filename>) or (<filename>.tar.bz2</filename>, <filename>.tbz2</filename> or
<command>xz</command> (<filename>.tar.xz</filename> or <filename>.tbz</filename>) or <command>xz</command>
<filename>.tar.lzma</filename>). (<filename>.tar.xz</filename>, <filename>.tar.lzma</filename> or
<filename>.txz</filename>).
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>


@ -384,11 +384,12 @@ rec {
recursiveUpdateUntil = pred: lhs: rhs: recursiveUpdateUntil = pred: lhs: rhs:
let f = attrPath: let f = attrPath:
zipAttrsWith (n: values: zipAttrsWith (n: values:
let here = attrPath ++ [n]; in
if tail values == [] if tail values == []
|| pred attrPath (head (tail values)) (head values) then || pred here (head (tail values)) (head values) then
head values head values
else else
f (attrPath ++ [n]) values f here values
); );
in f [] [rhs lhs]; in f [] [rhs lhs];


@ -195,9 +195,10 @@ rec {
let self = f self // { let self = f self // {
newScope = scope: newScope (self // scope); newScope = scope: newScope (self // scope);
callPackage = self.newScope {}; callPackage = self.newScope {};
# TODO(@Ericson2314): Haromonize argument order of `g` with everything else
overrideScope = g: overrideScope = g:
makeScope newScope makeScope newScope
(self_: let super = f self_; in super // g super self_); (lib.fixedPoints.extends (lib.flip g) f);
packages = f; packages = f;
}; };
in self; in self;


@ -80,7 +80,7 @@ let
inherit (strings) concatStrings concatMapStrings concatImapStrings inherit (strings) concatStrings concatMapStrings concatImapStrings
intersperse concatStringsSep concatMapStringsSep intersperse concatStringsSep concatMapStringsSep
concatImapStringsSep makeSearchPath makeSearchPathOutput concatImapStringsSep makeSearchPath makeSearchPathOutput
makeLibraryPath makeBinPath makePerlPath optionalString makeLibraryPath makeBinPath makePerlPath makeFullPerlPath optionalString
hasPrefix hasSuffix stringToCharacters stringAsChars escape hasPrefix hasSuffix stringToCharacters stringAsChars escape
escapeShellArg escapeShellArgs replaceChars lowerChars escapeShellArg escapeShellArgs replaceChars lowerChars
upperChars toLower toUpper addContextFrom splitString upperChars toLower toUpper addContextFrom splitString


@ -210,6 +210,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Common Public License 1.0"; fullName = "Common Public License 1.0";
}; };
curl = {
fullName = "MIT/X11 derivate";
url = "https://curl.haxx.se/docs/copyright.html";
};
doc = spdx { doc = spdx {
spdxId = "DOC"; spdxId = "DOC";
fullName = "DOC License"; fullName = "DOC License";
@ -613,6 +618,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Vim License"; fullName = "Vim License";
}; };
virtualbox-puel = {
fullName = "Oracle VM VirtualBox Extension Pack Personal Use and Evaluation License (PUEL)";
url = "https://www.virtualbox.org/wiki/VirtualBox_PUEL";
free = false;
};
vsl10 = spdx { vsl10 = spdx {
spdxId = "VSL-1.0"; spdxId = "VSL-1.0";
fullName = "Vovida Software License v1.0"; fullName = "Vovida Software License v1.0";
@ -643,6 +654,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "wxWindows Library Licence, Version 3.1"; fullName = "wxWindows Library Licence, Version 3.1";
}; };
xfig = {
fullName = "xfig";
url = "http://mcj.sourceforge.net/authors.html#xfig";
};
zlib = spdx { zlib = spdx {
spdxId = "Zlib"; spdxId = "Zlib";
fullName = "zlib License"; fullName = "zlib License";


@ -126,6 +126,15 @@ rec {
*/ */
makePerlPath = makeSearchPathOutput "lib" "lib/perl5/site_perl"; makePerlPath = makeSearchPathOutput "lib" "lib/perl5/site_perl";
/* Construct a perl search path recursively including all dependencies (such as $PERL5LIB)
Example:
pkgs = import <nixpkgs> { }
makeFullPerlPath [ pkgs.perlPackages.CGI ]
=> "/nix/store/fddivfrdc1xql02h9q500fpnqy12c74n-perl-CGI-4.38/lib/perl5/site_perl:/nix/store/8hsvdalmsxqkjg0c5ifigpf31vc4vsy2-perl-HTML-Parser-3.72/lib/perl5/site_perl:/nix/store/zhc7wh0xl8hz3y3f71nhlw1559iyvzld-perl-HTML-Tagset-3.20/lib/perl5/site_perl"
*/
makeFullPerlPath = deps: makePerlPath (lib.misc.closePropagation deps);
/* Depending on the boolean `cond', return either the given string /* Depending on the boolean `cond', return either the given string
or the empty string. Useful to concatenate against a bigger string. or the empty string. Useful to concatenate against a bigger string.


@ -213,6 +213,30 @@ runTests {
}; };
# ATTRSETS
# code from the example
testRecursiveUpdateUntil = {
expr = recursiveUpdateUntil (path: l: r: path == ["foo"]) {
# first attribute set
foo.bar = 1;
foo.baz = 2;
bar = 3;
} {
#second attribute set
foo.bar = 1;
foo.quz = 2;
baz = 4;
};
expected = {
foo.bar = 1; # 'foo.*' from the second set
foo.quz = 2; #
bar = 3; # 'bar' from the first set
baz = 4; # 'baz' from the second set
};
};
# GENERATORS # GENERATORS
# these tests assume attributes are converted to lists # these tests assume attributes are converted to lists
# in alphabetical order # in alphabetical order


@ -534,6 +534,11 @@
github = "bodil"; github = "bodil";
name = "Bodil Stokke"; name = "Bodil Stokke";
}; };
boj = {
email = "brian@uncannyworks.com";
github = "boj";
name = "Brian Jones";
};
boothead = { boothead = {
email = "ben@perurbis.com"; email = "ben@perurbis.com";
github = "boothead"; github = "boothead";
@ -668,6 +673,11 @@
github = "changlinli"; github = "changlinli";
name = "Changlin Li"; name = "Changlin Li";
}; };
CharlesHD = {
email = "charleshdespointes@gmail.com";
github = "CharlesHD";
name = "Charles Huyghues-Despointes";
};
chaoflow = { chaoflow = {
email = "flo@chaoflow.net"; email = "flo@chaoflow.net";
github = "chaoflow"; github = "chaoflow";
@ -807,6 +817,11 @@
github = "coroa"; github = "coroa";
name = "Jonas Hörsch"; name = "Jonas Hörsch";
}; };
costrouc = {
email = "chris.ostrouchov@gmail.com";
github = "costrouc";
name = "Chris Ostrouchov";
};
couchemar = { couchemar = {
email = "couchemar@yandex.ru"; email = "couchemar@yandex.ru";
github = "couchemar"; github = "couchemar";
@ -936,6 +951,11 @@
github = "demin-dmitriy"; github = "demin-dmitriy";
name = "Dmitriy Demin"; name = "Dmitriy Demin";
}; };
demize = {
email = "johannes@kyriasis.com";
github = "kyrias";
name = "Johannes Löthberg";
};
demyanrogozhin = { demyanrogozhin = {
email = "demyan.rogozhin@gmail.com"; email = "demyan.rogozhin@gmail.com";
github = "demyanrogozhin"; github = "demyanrogozhin";
@ -1372,6 +1392,11 @@
github = "fps"; github = "fps";
name = "Florian Paul Schmidt"; name = "Florian Paul Schmidt";
}; };
freepotion = {
email = "freepotion@protonmail.com";
github = "freepotion";
name = "Free Potion";
};
Fresheyeball = { Fresheyeball = {
email = "fresheyeball@gmail.com"; email = "fresheyeball@gmail.com";
github = "fresheyeball"; github = "fresheyeball";
@ -1570,6 +1595,11 @@
github = "havvy"; github = "havvy";
name = "Ryan Scheel"; name = "Ryan Scheel";
}; };
hax404 = {
email = "hax404foogit@hax404.de";
github = "hax404";
name = "Georg Haas";
};
hbunke = { hbunke = {
email = "bunke.hendrik@gmail.com"; email = "bunke.hendrik@gmail.com";
github = "hbunke"; github = "hbunke";
@ -1669,6 +1699,11 @@
github = "ikervagyok"; github = "ikervagyok";
name = "Balázs Lengyel"; name = "Balázs Lengyel";
}; };
illegalprime = {
email = "themichaeleden@gmail.com";
github = "illegalprime";
name = "Michael Eden";
};
ilya-kolpakov = { ilya-kolpakov = {
email = "ilya.kolpakov@gmail.com"; email = "ilya.kolpakov@gmail.com";
github = "ilya-kolpakov"; github = "ilya-kolpakov";
@ -1684,6 +1719,11 @@
github = "imalsogreg"; github = "imalsogreg";
name = "Greg Hale"; name = "Greg Hale";
}; };
imuli = {
email = "i@imu.li";
github = "imuli";
name = "Imuli";
};
infinisil = { infinisil = {
email = "infinisil@icloud.com"; email = "infinisil@icloud.com";
github = "infinisil"; github = "infinisil";
@ -1832,6 +1872,11 @@
github = "jluttine"; github = "jluttine";
name = "Jaakko Luttinen"; name = "Jaakko Luttinen";
}; };
jmettes = {
email = "jonathan@jmettes.com";
github = "jmettes";
name = "Jonathan Mettes";
};
Jo = { Jo = {
email = "0x4A6F@shackspace.de"; email = "0x4A6F@shackspace.de";
name = "Joachim Ernst"; name = "Joachim Ernst";
@ -1900,6 +1945,11 @@
github = "jonafato"; github = "jonafato";
name = "Jon Banafato"; name = "Jon Banafato";
}; };
jonathanreeve = {
email = "jon.reeve@gmail.com";
github = "JonathanReeve";
name = "Jonathan Reeve";
};
joncojonathan = { joncojonathan = {
email = "joncojonathan@gmail.com"; email = "joncojonathan@gmail.com";
github = "joncojonathan"; github = "joncojonathan";
@ -1920,6 +1970,11 @@
github = "jpotier"; github = "jpotier";
name = "Martin Potier"; name = "Martin Potier";
}; };
jqueiroz = {
email = "nixos@johnjq.com";
github = "jqueiroz";
name = "Jonathan Queiroz";
};
jraygauthier = { jraygauthier = {
email = "jraygauthier@gmail.com"; email = "jraygauthier@gmail.com";
github = "jraygauthier"; github = "jraygauthier";
@ -2089,6 +2144,11 @@
github = "kuznero"; github = "kuznero";
name = "Roman Kuznetsov"; name = "Roman Kuznetsov";
}; };
kylewlacy = {
email = "kylelacy+nix@pm.me";
github = "kylewlacy";
name = "Kyle Lacy";
};
lasandell = { lasandell = {
email = "lasandell@gmail.com"; email = "lasandell@gmail.com";
github = "lasandell"; github = "lasandell";
@ -2174,6 +2234,11 @@
github = "nathanielbaxter"; github = "nathanielbaxter";
name = "Nathaniel Baxter"; name = "Nathaniel Baxter";
}; };
lightdiscord = {
email = "root@arnaud.sh";
github = "lightdiscord";
name = "Arnaud Pascal";
};
lihop = { lihop = {
email = "nixos@leroy.geek.nz"; email = "nixos@leroy.geek.nz";
github = "lihop"; github = "lihop";
@ -2837,10 +2902,10 @@
github = "nocoolnametom"; github = "nocoolnametom";
name = "Tom Doggett"; name = "Tom Doggett";
}; };
nonfreeblob = { noneucat = {
email = "nonfreeblob@yandex.com"; email = "andy@lolc.at";
github = "nonfreeblob"; github = "noneucat";
name = "nonfreeblob"; name = "Andy Chun";
}; };
notthemessiah = { notthemessiah = {
email = "brian.cohen.88@gmail.com"; email = "brian.cohen.88@gmail.com";
@ -3212,6 +3277,11 @@
github = "qoelet"; github = "qoelet";
name = "Kenny Shen"; name = "Kenny Shen";
}; };
qyliss = {
email = "hi@alyssa.is";
github = "alyssais";
name = "Alyssa Ross";
};
ragge = { ragge = {
email = "r.dahlen@gmail.com"; email = "r.dahlen@gmail.com";
github = "ragnard"; github = "ragnard";
@ -3246,6 +3316,11 @@
email = "ravloony@gmail.com"; email = "ravloony@gmail.com";
name = "Tom Macdonald"; name = "Tom Macdonald";
}; };
rawkode = {
email = "david.andrew.mckay@gmail.com";
github = "rawkode";
name = "David McKay";
};
razvan = { razvan = {
email = "razvan.panda@gmail.com"; email = "razvan.panda@gmail.com";
github = "razvan-panda"; github = "razvan-panda";
@ -3683,6 +3758,11 @@
github = "s-na"; github = "s-na";
name = "S. Nordin Abouzahra"; name = "S. Nordin Abouzahra";
}; };
snaar = {
email = "snaar@snaar.net";
github = "snaar";
name = "Serguei Narojnyi";
};
snyh = { snyh = {
email = "snyh@snyh.org"; email = "snyh@snyh.org";
github = "snyh"; github = "snyh";
@ -3803,6 +3883,11 @@
github = "swarren83"; github = "swarren83";
name = "Shawn Warren"; name = "Shawn Warren";
}; };
swdunlop = {
email = "swdunlop@gmail.com";
github = "swdunlop";
name = "Scott W. Dunlop";
};
swflint = { swflint = {
email = "swflint@flintfam.org"; email = "swflint@flintfam.org";
github = "swflint"; github = "swflint";


@ -14,7 +14,7 @@
xlink:href="http://nixos.org/nixpkgs/manual">Nixpkgs xlink:href="http://nixos.org/nixpkgs/manual">Nixpkgs
manual</link>. In short, you clone Nixpkgs: manual</link>. In short, you clone Nixpkgs:
<screen> <screen>
$ git clone git://github.com/NixOS/nixpkgs.git $ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs $ cd nixpkgs
</screen> </screen>
Then you write and test the package as described in the Nixpkgs manual. Then you write and test the package as described in the Nixpkgs manual.


@ -26,6 +26,7 @@
<xref linkend="opt-services.xserver.desktopManager.plasma5.enable"/> = true; <xref linkend="opt-services.xserver.desktopManager.plasma5.enable"/> = true;
<xref linkend="opt-services.xserver.desktopManager.xfce.enable"/> = true; <xref linkend="opt-services.xserver.desktopManager.xfce.enable"/> = true;
<xref linkend="opt-services.xserver.desktopManager.gnome3.enable"/> = true; <xref linkend="opt-services.xserver.desktopManager.gnome3.enable"/> = true;
<xref linkend="opt-services.xserver.desktopManager.mate.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.xmonad.enable"/> = true; <xref linkend="opt-services.xserver.windowManager.xmonad.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.twm.enable"/> = true; <xref linkend="opt-services.xserver.windowManager.twm.enable"/> = true;
<xref linkend="opt-services.xserver.windowManager.icewm.enable"/> = true; <xref linkend="opt-services.xserver.windowManager.icewm.enable"/> = true;


@ -11,9 +11,9 @@
modify NixOS, however, you should check out the latest sources from Git. This modify NixOS, however, you should check out the latest sources from Git. This
is as follows: is as follows:
<screen> <screen>
$ git clone git://github.com/NixOS/nixpkgs.git $ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs $ cd nixpkgs
$ git remote add channels git://github.com/NixOS/nixpkgs-channels.git $ git remote add channels https://github.com/NixOS/nixpkgs-channels
$ git remote update channels $ git remote update channels
</screen> </screen>
This will check out the latest Nixpkgs sources to This will check out the latest Nixpkgs sources to


@ -326,10 +326,9 @@ Retype new UNIX password: ***
</screen> </screen>
<note> <note>
<para> <para>
To prevent the password prompt, set For unattended installations, it is possible to use
<code><xref linkend="opt-users.mutableUsers"/> = false;</code> in <command>nixos-install --no-root-passwd</command>
<filename>configuration.nix</filename>, which allows unattended in order to disable the password prompt entirely.
installation necessary in automation.
</para> </para>
</note> </note>
</para> </para>


@ -17,8 +17,8 @@
<para> <para>
If you encounter problems, please report them on the If you encounter problems, please report them on the
<literal <literal
xlink:href="https://groups.google.com/forum/#!forum/nix-devel">nix-devel</literal> xlink:href="https://discourse.nixos.org">Discourse</literal>
mailing list or on the <link or on the <link
xlink:href="irc://irc.freenode.net/#nixos"> xlink:href="irc://irc.freenode.net/#nixos">
<literal>#nixos</literal> channel on Freenode</link>. Bugs should be <literal>#nixos</literal> channel on Freenode</link>. Bugs should be
reported in reported in


@ -73,6 +73,20 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
</para> </para>
<itemizedlist> <itemizedlist>
<listitem>
<para>
The <varname>services.cassandra</varname> module has been reworked and
was rewritten from scratch. The service has succeeding tests for
the versions 2.1, 2.2, 3.0 and 3.11 of <link
xlink:href="https://cassandra.apache.org/">Apache Cassandra</link>.
</para>
</listitem>
<listitem>
<para>
There is a new <varname>services.foundationdb</varname> module for deploying
<link xlink:href="https://www.foundationdb.org">FoundationDB</link> clusters.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
When enabled the <literal>iproute2</literal> will copy the files expected When enabled the <literal>iproute2</literal> will copy the files expected
@ -81,6 +95,22 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
routing tables for instance. routing tables for instance.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<varname>services.strongswan-swanctl</varname>
is a modern replacement for <varname>services.strongswan</varname>.
You can use either one of them to setup IPsec VPNs but not both at the same time.
</para>
<para>
<varname>services.strongswan-swanctl</varname> uses the
<link xlink:href="https://wiki.strongswan.org/projects/strongswan/wiki/swanctl">swanctl</link>
command which uses the modern
<link xlink:href="https://github.com/strongswan/strongswan/blob/master/src/libcharon/plugins/vici/README.md">vici</link>
<emphasis>Versatile IKE Configuration Interface</emphasis>.
The deprecated <literal>ipsec</literal> command used in <varname>services.strongswan</varname> is using the legacy
<link xlink:href="https://github.com/strongswan/strongswan/blob/master/README_LEGACY.md">stroke configuration interface</link>.
</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
@ -97,6 +127,12 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
</para> </para>
<itemizedlist> <itemizedlist>
<listitem>
<para>
The deprecated <varname>services.cassandra</varname> module has
seen a complete rewrite. (See above.)
</para>
</listitem>
<listitem> <listitem>
<para> <para>
<literal>lib.strict</literal> is removed. Use <literal>lib.strict</literal> is removed. Use
@ -187,6 +223,16 @@ $ nix-instantiate -E '(import &lt;nixpkgsunstable&gt; {}).gitFull'
<varname>kibana-oss</varname>. <varname>kibana-oss</varname>.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
Options
<literal>boot.initrd.luks.devices.<replaceable>name</replaceable>.yubikey.ramfsMountPoint</literal>
<literal>boot.initrd.luks.devices.<replaceable>name</replaceable>.yubikey.storage.mountPoint</literal>
were removed. <literal>luksroot.nix</literal> module never supported more than one YubiKey at
a time anyway, hence those options never had any effect. You should be able to remove them
from your config without any issues.
</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
@ -265,6 +311,8 @@ inherit (pkgs.nixos {
<literal>lib.traceCallXml</literal> has been deprecated. Please complain <literal>lib.traceCallXml</literal> has been deprecated. Please complain
if you use the function regularly. if you use the function regularly.
</para> </para>
</listitem>
<listitem>
<para> <para>
The attribute <literal>lib.nixpkgsVersion</literal> has been deprecated in The attribute <literal>lib.nixpkgsVersion</literal> has been deprecated in
favor of <literal>lib.version</literal>. Please refer to the discussion in favor of <literal>lib.version</literal>. Please refer to the discussion in
@ -272,6 +320,13 @@ inherit (pkgs.nixos {
for further reference. for further reference.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<literal>lib.recursiveUpdateUntil</literal> was not acting according to its
specification. It has been fixed to act according to the docstring, and a
test has been added.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The module for <option>security.dhparams</option> has two new options now: The module for <option>security.dhparams</option> has two new options now:


@ -6,16 +6,19 @@
, storePaths , storePaths
, volumeLabel , volumeLabel
, uuid ? "44444444-4444-4444-8888-888888888888" , uuid ? "44444444-4444-4444-8888-888888888888"
, e2fsprogs
, libfaketime
, perl
}: }:
let let
sdClosureInfo = pkgs.closureInfo { rootPaths = storePaths; }; sdClosureInfo = pkgs.buildPackages.closureInfo { rootPaths = storePaths; };
in in
pkgs.stdenv.mkDerivation { pkgs.stdenv.mkDerivation {
name = "ext4-fs.img"; name = "ext4-fs.img";
nativeBuildInputs = with pkgs; [e2fsprogs.bin libfaketime perl]; nativeBuildInputs = [e2fsprogs.bin libfaketime perl];
buildCommand = buildCommand =
'' ''


@ -70,7 +70,7 @@ in
description = '' description = ''
Shell script code called during global environment initialisation Shell script code called during global environment initialisation
after all variables and profileVariables have been set. after all variables and profileVariables have been set.
This code is asumed to be shell-independent, which means you should This code is assumed to be shell-independent, which means you should
stick to pure sh without sh word split. stick to pure sh without sh word split.
''; '';
type = types.lines; type = types.lines;


@ -29,8 +29,5 @@ with lib;
# Add Memtest86+ to the CD. # Add Memtest86+ to the CD.
boot.loader.grub.memtest86.enable = true; boot.loader.grub.memtest86.enable = true;
# Allow the user to log in as root without a password.
users.users.root.initialHashedPassword = "";
system.stateVersion = mkDefault "18.03"; system.stateVersion = mkDefault "18.03";
} }


@ -318,7 +318,7 @@ in
options = [ "allow_other" "cow" "nonempty" "chroot=/mnt-root" "max_files=32768" "hide_meta_files" "dirs=/nix/.rw-store=rw:/nix/.ro-store=ro" ]; options = [ "allow_other" "cow" "nonempty" "chroot=/mnt-root" "max_files=32768" "hide_meta_files" "dirs=/nix/.rw-store=rw:/nix/.ro-store=ro" ];
}; };
boot.initrd.availableKernelModules = [ "squashfs" "iso9660" "usb-storage" "uas" ]; boot.initrd.availableKernelModules = [ "squashfs" "iso9660" "uas" ];
boot.blacklistedKernelModules = [ "nouveau" ]; boot.blacklistedKernelModules = [ "nouveau" ];


@ -33,9 +33,6 @@ in
# Also increase the amount of CMA to ensure the virtual console on the RPi3 works. # Also increase the amount of CMA to ensure the virtual console on the RPi3 works.
boot.kernelParams = ["cma=32M" "console=ttyS0,115200n8" "console=ttyAMA0,115200n8" "console=tty0"]; boot.kernelParams = ["cma=32M" "console=ttyS0,115200n8" "console=ttyAMA0,115200n8" "console=tty0"];
# FIXME: this probably should be in installation-device.nix
users.users.root.initialHashedPassword = "";
sdImage = { sdImage = {
populateBootCommands = let populateBootCommands = let
configTxt = pkgs.writeText "config.txt" '' configTxt = pkgs.writeText "config.txt" ''


@ -34,9 +34,6 @@ in
# - ttySAC2: for Exynos (ODROID-XU3) # - ttySAC2: for Exynos (ODROID-XU3)
boot.kernelParams = ["console=ttyS0,115200n8" "console=ttymxc0,115200n8" "console=ttyAMA0,115200n8" "console=ttyO0,115200n8" "console=ttySAC2,115200n8" "console=tty0"]; boot.kernelParams = ["console=ttyS0,115200n8" "console=ttymxc0,115200n8" "console=ttyAMA0,115200n8" "console=ttyO0,115200n8" "console=ttySAC2,115200n8" "console=tty0"];
# FIXME: this probably should be in installation-device.nix
users.users.root.initialHashedPassword = "";
sdImage = { sdImage = {
populateBootCommands = let populateBootCommands = let
configTxt = pkgs.writeText "config.txt" '' configTxt = pkgs.writeText "config.txt" ''


@ -27,9 +27,6 @@ in
boot.consoleLogLevel = lib.mkDefault 7; boot.consoleLogLevel = lib.mkDefault 7;
boot.kernelPackages = pkgs.linuxPackages_rpi; boot.kernelPackages = pkgs.linuxPackages_rpi;
# FIXME: this probably should be in installation-device.nix
users.users.root.initialHashedPassword = "";
sdImage = { sdImage = {
populateBootCommands = let populateBootCommands = let
configTxt = pkgs.writeText "config.txt" '' configTxt = pkgs.writeText "config.txt" ''


@ -12,13 +12,12 @@
with lib; with lib;
let let
rootfsImage = import ../../../lib/make-ext4-fs.nix { rootfsImage = pkgs.callPackage ../../../lib/make-ext4-fs.nix ({
inherit pkgs;
inherit (config.sdImage) storePaths; inherit (config.sdImage) storePaths;
volumeLabel = "NIXOS_SD"; volumeLabel = "NIXOS_SD";
} // optionalAttrs (config.sdImage.rootPartitionUUID != null) { } // optionalAttrs (config.sdImage.rootPartitionUUID != null) {
uuid = config.sdImage.rootPartitionUUID; uuid = config.sdImage.rootPartitionUUID;
}; });
in in
{ {
options.sdImage = { options.sdImage = {
@ -94,10 +93,10 @@ in
sdImage.storePaths = [ config.system.build.toplevel ]; sdImage.storePaths = [ config.system.build.toplevel ];
system.build.sdImage = pkgs.stdenv.mkDerivation { system.build.sdImage = pkgs.callPackage ({ stdenv, dosfstools, e2fsprogs, mtools, libfaketime, utillinux }: stdenv.mkDerivation {
name = config.sdImage.imageName; name = config.sdImage.imageName;
buildInputs = with pkgs; [ dosfstools e2fsprogs mtools libfaketime utillinux ]; nativeBuildInputs = [ dosfstools e2fsprogs mtools libfaketime utillinux ];
buildCommand = '' buildCommand = ''
mkdir -p $out/nix-support $out/sd-image mkdir -p $out/nix-support $out/sd-image
@ -138,7 +137,7 @@ in
(cd boot; mcopy -bpsvm -i ../bootpart.img ./* ::) (cd boot; mcopy -bpsvm -i ../bootpart.img ./* ::)
dd conv=notrunc if=bootpart.img of=$img seek=$START count=$SECTORS dd conv=notrunc if=bootpart.img of=$img seek=$START count=$SECTORS
''; '';
}; }) {};
boot.postBootCommands = '' boot.postBootCommands = ''
# On the first boot do some maintenance tasks # On the first boot do some maintenance tasks


@ -14,7 +14,4 @@ with lib;
../../profiles/base.nix ../../profiles/base.nix
../../profiles/installation-device.nix ../../profiles/installation-device.nix
]; ];
# Allow the user to log in as root without a password.
users.users.root.initialHashedPassword = "";
} }


@ -28,7 +28,6 @@ with lib;
++ (if pkgs.stdenv.system == "aarch64-linux" ++ (if pkgs.stdenv.system == "aarch64-linux"
then [] then []
else [ pkgs.grub2 pkgs.syslinux ]); else [ pkgs.grub2 pkgs.syslinux ]);
system.boot.loader.kernelFile = pkgs.stdenv.platform.kernelTarget;
fileSystems."/" = fileSystems."/" =
{ fsType = "tmpfs"; { fsType = "tmpfs";
@ -86,7 +85,7 @@ with lib;
system.build.netbootIpxeScript = pkgs.writeTextDir "netboot.ipxe" '' system.build.netbootIpxeScript = pkgs.writeTextDir "netboot.ipxe" ''
#!ipxe #!ipxe
kernel ${pkgs.stdenv.platform.kernelTarget} init=${config.system.build.toplevel}/init ${toString config.boot.kernelParams} kernel ${pkgs.stdenv.hostPlatform.platform.kernelTarget} init=${config.system.build.toplevel}/init ${toString config.boot.kernelParams}
initrd initrd initrd initrd
boot boot
''; '';


@ -536,6 +536,13 @@ if ($showHardwareConfig) {
# Use the systemd-boot EFI boot loader. # Use the systemd-boot EFI boot loader.
boot.loader.systemd-boot.enable = true; boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true; boot.loader.efi.canTouchEfiVariables = true;
EOF
} elsif (-e "/boot/extlinux") {
$bootLoaderConfig = <<EOF;
# Use the extlinux boot loader. (NixOS wants to enable GRUB by default)
boot.loader.grub.enable = false;
# Enables the generation of /boot/extlinux/extlinux.conf
boot.loader.generic-extlinux-compatible.enable = true;
EOF EOF
} elsif ($virt ne "systemd-nspawn") { } elsif ($virt ne "systemd-nspawn") {
$bootLoaderConfig = <<EOF; $bootLoaderConfig = <<EOF;


@ -323,6 +323,9 @@
mapred = 296; mapred = 296;
hadoop = 297; hadoop = 297;
hydron = 298; hydron = 298;
cfssl = 299;
cassandra = 300;
qemu-libvirtd = 301;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399! # When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -606,6 +609,9 @@
mapred = 296; mapred = 296;
hadoop = 297; hadoop = 297;
hydron = 298; hydron = 298;
cfssl = 299;
cassandra = 300;
qemu-libvirtd = 301;
# When adding a gid, make sure it doesn't match an existing # When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal # uid. Users and groups with the same name should have equal


@ -76,9 +76,6 @@ in
config = { config = {
warnings = lib.optional (options.system.stateVersion.highestPrio > 1000)
"You don't have `system.stateVersion` explicitly set. Expect things to break.";
system.nixos = { system.nixos = {
# These defaults are set here rather than up there so that # These defaults are set here rather than up there so that
# changing them would not rebuild the manual # changing them would not rebuild the manual


@ -201,6 +201,7 @@
./services/databases/4store-endpoint.nix ./services/databases/4store-endpoint.nix
./services/databases/4store.nix ./services/databases/4store.nix
./services/databases/aerospike.nix ./services/databases/aerospike.nix
./services/databases/cassandra.nix
./services/databases/clickhouse.nix ./services/databases/clickhouse.nix
./services/databases/couchdb.nix ./services/databases/couchdb.nix
./services/databases/firebird.nix ./services/databases/firebird.nix
@ -246,6 +247,7 @@
./services/desktops/gnome3/tracker-miners.nix ./services/desktops/gnome3/tracker-miners.nix
./services/desktops/profile-sync-daemon.nix ./services/desktops/profile-sync-daemon.nix
./services/desktops/telepathy.nix ./services/desktops/telepathy.nix
./services/desktops/zeitgeist.nix
./services/development/bloop.nix ./services/development/bloop.nix
./services/development/hoogle.nix ./services/development/hoogle.nix
./services/editors/emacs.nix ./services/editors/emacs.nix
@ -279,6 +281,7 @@
./services/hardware/upower.nix ./services/hardware/upower.nix
./services/hardware/usbmuxd.nix ./services/hardware/usbmuxd.nix
./services/hardware/thermald.nix ./services/hardware/thermald.nix
./services/hardware/undervolt.nix
./services/logging/SystemdJournal2Gelf.nix ./services/logging/SystemdJournal2Gelf.nix
./services/logging/awstats.nix ./services/logging/awstats.nix
./services/logging/fluentd.nix ./services/logging/fluentd.nix
@ -406,6 +409,7 @@
./services/monitoring/cadvisor.nix ./services/monitoring/cadvisor.nix
./services/monitoring/collectd.nix ./services/monitoring/collectd.nix
./services/monitoring/das_watchdog.nix ./services/monitoring/das_watchdog.nix
./services/monitoring/datadog-agent.nix
./services/monitoring/dd-agent/dd-agent.nix ./services/monitoring/dd-agent/dd-agent.nix
./services/monitoring/fusion-inventory.nix ./services/monitoring/fusion-inventory.nix
./services/monitoring/grafana.nix ./services/monitoring/grafana.nix
@ -622,6 +626,8 @@
./services/search/hound.nix ./services/search/hound.nix
./services/search/kibana.nix ./services/search/kibana.nix
./services/search/solr.nix ./services/search/solr.nix
./services/security/certmgr.nix
./services/security/cfssl.nix
./services/security/clamav.nix ./services/security/clamav.nix
./services/security/fail2ban.nix ./services/security/fail2ban.nix
./services/security/fprintd.nix ./services/security/fprintd.nix


@ -33,7 +33,7 @@
# USB support, especially for booting from USB CD-ROM # USB support, especially for booting from USB CD-ROM
# drives. # drives.
"usb_storage" "uas"
# Firewire support. Not tested. # Firewire support. Not tested.
"ohci1394" "sbp2" "ohci1394" "sbp2"


@ -31,7 +31,8 @@ with lib;
#services.rogue.enable = true; #services.rogue.enable = true;
# Disable some other stuff we don't need. # Disable some other stuff we don't need.
security.sudo.enable = false; security.sudo.enable = mkDefault false;
services.udisks2.enable = mkDefault false;
# Automatically log in at the virtual consoles. # Automatically log in at the virtual consoles.
services.mingetty.autologinUser = "root"; services.mingetty.autologinUser = "root";
@ -86,5 +87,9 @@ with lib;
networking.firewall.logRefusedConnections = mkDefault false; networking.firewall.logRefusedConnections = mkDefault false;
environment.systemPackages = [ pkgs.vim ]; environment.systemPackages = [ pkgs.vim ];
# Allow the user to log in as root without a password.
users.users.root.initialHashedPassword = "";
}; };
} }


@ -3,7 +3,30 @@
with lib; with lib;
let let
cfg = config.programs.zsh.ohMyZsh; cfg = config.programs.zsh.ohMyZsh;
mkLinkFarmEntry = name: dir:
let
env = pkgs.buildEnv {
name = "zsh-${name}-env";
paths = cfg.customPkgs;
pathsToLink = "/share/zsh/${dir}";
};
in
{ inherit name; path = "${env}/share/zsh/${dir}"; };
mkLinkFarmEntry' = name: mkLinkFarmEntry name name;
custom =
if cfg.custom != null then cfg.custom
else if length cfg.customPkgs == 0 then null
else pkgs.linkFarm "oh-my-zsh-custom" [
(mkLinkFarmEntry' "themes")
(mkLinkFarmEntry "completions" "site-functions")
(mkLinkFarmEntry' "plugins")
];
in in
{ {
options = { options = {
@ -34,10 +57,19 @@ in
}; };
custom = mkOption { custom = mkOption {
default = ""; default = null;
type = types.str; type = with types; nullOr str;
description = '' description = ''
Path to a custom oh-my-zsh package to override config of oh-my-zsh. Path to a custom oh-my-zsh package to override config of oh-my-zsh.
(Can't be used along with `customPkgs`).
'';
};
customPkgs = mkOption {
default = [];
type = types.listOf types.package;
description = ''
List of custom packages that should be loaded into `oh-my-zsh`.
''; '';
}; };
@ -67,7 +99,7 @@ in
environment.systemPackages = [ cfg.package ]; environment.systemPackages = [ cfg.package ];
programs.zsh.interactiveShellInit = with builtins; '' programs.zsh.interactiveShellInit = ''
# oh-my-zsh configuration generated by NixOS # oh-my-zsh configuration generated by NixOS
export ZSH=${cfg.package}/share/oh-my-zsh export ZSH=${cfg.package}/share/oh-my-zsh
@ -75,8 +107,8 @@ in
"plugins=(${concatStringsSep " " cfg.plugins})" "plugins=(${concatStringsSep " " cfg.plugins})"
} }
${optionalString (stringLength(cfg.custom) > 0) ${optionalString (custom != null)
"ZSH_CUSTOM=\"${cfg.custom}\"" "ZSH_CUSTOM=\"${custom}\""
} }
${optionalString (stringLength(cfg.theme) > 0) ${optionalString (stringLength(cfg.theme) > 0)
@ -92,5 +124,15 @@ in
source $ZSH/oh-my-zsh.sh source $ZSH/oh-my-zsh.sh
''; '';
assertions = [
{
assertion = cfg.custom != null -> cfg.customPkgs == [];
message = "If `cfg.custom` is set for `ZSH_CUSTOM`, `customPkgs` can't be used!";
}
];
}; };
meta.doc = ./oh-my-zsh.xml;
} }


@ -0,0 +1,125 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-programs-zsh-ohmyzsh">
<title>Oh my ZSH</title>
<para><literal><link xlink:href="https://ohmyz.sh/">oh-my-zsh</link></literal> is a framework
to manage your <link xlink:href="https://www.zsh.org/">ZSH</link> configuration
including completion scripts for several CLI tools or custom prompt themes.</para>
<section><title>Basic usage</title>
<para>The module uses the <literal>oh-my-zsh</literal> package with all available features. The
initial setup using Nix expressions is fairly similar to the configuration format
of <literal>oh-my-zsh</literal>.
<programlisting>
{
programs.ohMyZsh = {
enable = true;
plugins = [ "git" "python" "man" ];
theme = "agnoster";
};
}
</programlisting>
For a detailed explanation of these arguments please refer to the
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/wiki"><literal>oh-my-zsh</literal> docs</link>.
</para>
<para>The expression generates the needed
configuration and writes it into your <literal>/etc/zshrc</literal>.
</para></section>
<section><title>Custom additions</title>
<para>Sometimes third-party or custom scripts such as a modified theme may be needed.
<literal>oh-my-zsh</literal> provides the
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/wiki/Customization#overriding-internals"><literal>ZSH_CUSTOM</literal></link>
environment variable for this which points to a directory with additional scripts.</para>
<para>The module can do this as well:
<programlisting>
{
programs.ohMyZsh.custom = "~/path/to/custom/scripts";
}
</programlisting>
</para></section>
<section><title>Custom environments</title>
<para>There are several extensions for <literal>oh-my-zsh</literal> packaged in <literal>nixpkgs</literal>.
One of them is <link xlink:href="https://github.com/spwhitt/nix-zsh-completions">nix-zsh-completions</link>
which bundles completion scripts and a plugin for <literal>oh-my-zsh</literal>.</para>
<para>Rather than using a single mutable path for <literal>ZSH_CUSTOM</literal>, it's also possible to
generate this path from a list of Nix packages:
<programlisting>
{ pkgs, ... }:
{
programs.ohMyZsh.customPkgs = with pkgs; [
pkgs.nix-zsh-completions
# and even more...
];
}
</programlisting>
Internally a single store path will be created using <literal>buildEnv</literal>.
Please refer to the docs of
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-building-environment"><literal>buildEnv</literal></link>
for further reference.</para>
<para><emphasis>Please keep in mind that this is not compatible with <literal>programs.ohMyZsh.custom</literal>
as it requires an immutable store path while <literal>custom</literal> shall remain mutable! An evaluation failure
will be thrown if both <literal>custom</literal> and <literal>customPkgs</literal> are set.</emphasis>
</para></section>
<section><title>Package your own customizations</title>
<para>If third-party customizations (e.g. new themes) are supposed to be added to <literal>oh-my-zsh</literal>
there are several pitfalls to keep in mind:</para>
<itemizedlist>
<listitem>
<para>To comply with the default structure of <literal>ZSH</literal> the entire output needs to be written to
<literal>$out/share/zsh.</literal></para>
</listitem>
<listitem>
<para>Completion scripts are supposed to be stored at <literal>$out/share/zsh/site-functions</literal>. This directory
is part of the <literal><link xlink:href="http://zsh.sourceforge.net/Doc/Release/Functions.html">fpath</link></literal>
and the package should be compatible with pure <literal>ZSH</literal> setups. The module will automatically link
the contents of <literal>site-functions</literal> to completions directory in the proper store path.</para>
</listitem>
<listitem>
<para>The <literal>plugins</literal> directory needs the structure <literal>pluginname/pluginname.plugin.zsh</literal>
as structured in the <link xlink:href="https://github.com/robbyrussell/oh-my-zsh/tree/91b771914bc7c43dd7c7a43b586c5de2c225ceb7/plugins">upstream repo.</link>
</para>
</listitem>
</itemizedlist>
<para>
A derivation for <literal>oh-my-zsh</literal> may look like this:
<programlisting>
{ stdenv, fetchFromGitHub }:
stdenv.mkDerivation rec {
name = "exemplary-zsh-customization-${version}";
version = "1.0.0";
src = fetchFromGitHub {
# path to the upstream repository
};
dontBuild = true;
installPhase = ''
mkdir -p $out/share/zsh/site-functions
cp {themes,plugins} $out/share/zsh
cp completions $out/share/zsh/site-functions
'';
}
</programlisting>
</para>
</section>
</chapter>


@ -9,7 +9,6 @@ with lib;
(mkRenamedOptionModule [ "system" "nixos" "stateVersion" ] [ "system" "stateVersion" ]) (mkRenamedOptionModule [ "system" "nixos" "stateVersion" ] [ "system" "stateVersion" ])
(mkRenamedOptionModule [ "system" "nixos" "defaultChannel" ] [ "system" "defaultChannel" ]) (mkRenamedOptionModule [ "system" "nixos" "defaultChannel" ] [ "system" "defaultChannel" ])
(mkRenamedOptionModule [ "dysnomia" ] [ "services" "dysnomia" ])
(mkRenamedOptionModule [ "environment" "x11Packages" ] [ "environment" "systemPackages" ]) (mkRenamedOptionModule [ "environment" "x11Packages" ] [ "environment" "systemPackages" ])
(mkRenamedOptionModule [ "environment" "enableBashCompletion" ] [ "programs" "bash" "enableCompletion" ]) (mkRenamedOptionModule [ "environment" "enableBashCompletion" ] [ "programs" "bash" "enableCompletion" ])
(mkRenamedOptionModule [ "environment" "nix" ] [ "nix" "package" ]) (mkRenamedOptionModule [ "environment" "nix" ] [ "nix" "package" ])
@ -258,6 +257,7 @@ with lib;
(mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "") (mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "")
(mkRemovedOptionModule [ "virtualisation" "xen" "qemu" ] "You don't need this option anymore, it will work without it.") (mkRemovedOptionModule [ "virtualisation" "xen" "qemu" ] "You don't need this option anymore, it will work without it.")
(mkRemovedOptionModule [ "services" "logstash" "enableWeb" ] "The web interface was removed from logstash") (mkRemovedOptionModule [ "services" "logstash" "enableWeb" ] "The web interface was removed from logstash")
(mkRemovedOptionModule [ "boot" "zfs" "enableLegacyCrypto" ] "The corresponding package was removed from nixpkgs.")
# ZSH # ZSH
(mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ]) (mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ])


@ -55,11 +55,11 @@ in {
}; };
musicDirectory = mkOption { musicDirectory = mkOption {
type = types.path; type = with types; either path (strMatching "(http|https|nfs|smb)://.+");
default = "${cfg.dataDir}/music"; default = "${cfg.dataDir}/music";
defaultText = ''''${dataDir}/music''; defaultText = ''''${dataDir}/music'';
description = '' description = ''
The directory where mpd reads music from. The directory or NFS/SMB network share where mpd reads music from.
''; '';
}; };
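A minimal sketch of a configuration using the newly allowed network-share form (the server name and export path are hypothetical):
```nix
services.mpd = {
  enable = true;
  # Besides a local path, any URI matching (http|https|nfs|smb):// is now accepted.
  musicDirectory = "nfs://fileserver.local/export/music";
};
```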


@ -4,445 +4,288 @@ with lib;
let let
cfg = config.services.cassandra; cfg = config.services.cassandra;
cassandraPackage = cfg.package.override { defaultUser = "cassandra";
jre = cfg.jre; cassandraConfig = flip recursiveUpdate cfg.extraConfig
}; ({ commitlog_sync = "batch";
cassandraUser = { commitlog_sync_batch_window_in_ms = 2;
name = cfg.user; partitioner = "org.apache.cassandra.dht.Murmur3Partitioner";
home = "/var/lib/cassandra"; endpoint_snitch = "SimpleSnitch";
description = "Cassandra role user"; seed_provider =
}; [{ class_name = "org.apache.cassandra.locator.SimpleSeedProvider";
parameters = [ { seeds = "127.0.0.1"; } ];
cassandraRackDcProperties = '' }];
dc=${cfg.dc} data_file_directories = [ "${cfg.homeDir}/data" ];
rack=${cfg.rack} commitlog_directory = "${cfg.homeDir}/commitlog";
''; saved_caches_directory = "${cfg.homeDir}/saved_caches";
} // (if builtins.compareVersions cfg.package.version "3" >= 0
cassandraConf = '' then { hints_directory = "${cfg.homeDir}/hints"; }
cluster_name: ${cfg.clusterName} else {})
num_tokens: 256 );
auto_bootstrap: ${boolToString cfg.autoBootstrap} cassandraConfigWithAddresses = cassandraConfig //
hinted_handoff_enabled: ${boolToString cfg.hintedHandOff} ( if isNull cfg.listenAddress
hinted_handoff_throttle_in_kb: ${builtins.toString cfg.hintedHandOffThrottle} then { listen_interface = cfg.listenInterface; }
max_hints_delivery_threads: 2 else { listen_address = cfg.listenAddress; }
max_hint_window_in_ms: 10800000 # 3 hours ) // (
authenticator: ${cfg.authenticator} if isNull cfg.rpcAddress
authorizer: ${cfg.authorizer} then { rpc_interface = cfg.rpcInterface; }
permissions_validity_in_ms: 2000 else { rpc_address = cfg.rpcAddress; }
partitioner: org.apache.cassandra.dht.Murmur3Partitioner );
data_file_directories: cassandraEtc = pkgs.stdenv.mkDerivation
${builtins.concatStringsSep "\n" (map (v: " - "+v) cfg.dataDirs)} { name = "cassandra-etc";
commitlog_directory: ${cfg.commitLogDirectory} cassandraYaml = builtins.toJSON cassandraConfigWithAddresses;
disk_failure_policy: stop cassandraEnvPkg = "${cfg.package}/conf/cassandra-env.sh";
key_cache_size_in_mb: buildCommand = ''
key_cache_save_period: 14400 mkdir -p "$out"
row_cache_size_in_mb: 0
row_cache_save_period: 0
saved_caches_directory: ${cfg.savedCachesDirectory}
commitlog_sync: ${cfg.commitLogSync}
commitlog_sync_period_in_ms: ${builtins.toString cfg.commitLogSyncPeriod}
commitlog_segment_size_in_mb: 32
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "${builtins.concatStringsSep "," cfg.seeds}"
concurrent_reads: ${builtins.toString cfg.concurrentReads}
concurrent_writes: ${builtins.toString cfg.concurrentWrites}
memtable_flush_queue_size: 4
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: ${cfg.listenAddress}
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: ${cfg.rpcAddress}
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: ${boolToString cfg.incrementalBackups}
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
in_memory_compaction_limit_in_mb: 64
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 16
compaction_preheat_key_cache: true
read_request_timeout_in_ms: 10000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 10000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: ${cfg.snitch}
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
internode_encryption: ${cfg.internodeEncryption}
keystore: ${cfg.keyStorePath}
keystore_password: ${cfg.keyStorePassword}
truststore: ${cfg.trustStorePath}
truststore_password: ${cfg.trustStorePassword}
client_encryption_options:
enabled: ${boolToString cfg.clientEncryption}
keystore: ${cfg.keyStorePath}
keystore_password: ${cfg.keyStorePassword}
internode_compression: all
inter_dc_tcp_nodelay: false
preheat_kernel_page_cache: false
streaming_socket_timeout_in_ms: ${toString cfg.streamingSocketTimoutInMS}
'';
cassandraLog = ''
log4j.rootLogger=${cfg.logLevel},stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%t] %d{HH:mm:ss,SSS} %m%n
'';
cassandraConfFile = pkgs.writeText "cassandra.yaml" cassandraConf;
cassandraLogFile = pkgs.writeText "log4j-server.properties" cassandraLog;
cassandraRackFile = pkgs.writeText "cassandra-rackdc.properties" cassandraRackDcProperties;
cassandraEnvironment = {
CASSANDRA_HOME = cassandraPackage;
JAVA_HOME = cfg.jre;
CASSANDRA_CONF = "/etc/cassandra";
};
echo "$cassandraYaml" > "$out/cassandra.yaml"
ln -s "$cassandraEnvPkg" "$out/cassandra-env.sh"
'';
};
in { in {
###### interface
options.services.cassandra = { options.services.cassandra = {
enable = mkOption { enable = mkEnableOption ''
description = "Whether to enable cassandra."; Apache Cassandra Scalable and highly available database.
default = false; '';
type = types.bool;
};
package = mkOption {
description = "Cassandra package to use.";
default = pkgs.cassandra;
defaultText = "pkgs.cassandra";
type = types.package;
};
jre = mkOption {
description = "JRE package to run cassandra service.";
default = pkgs.jre;
defaultText = "pkgs.jre";
type = types.package;
};
user = mkOption { user = mkOption {
description = "User that runs cassandra service."; type = types.str;
default = "cassandra"; default = defaultUser;
type = types.string; description = "Run Apache Cassandra under this user.";
}; };
group = mkOption { group = mkOption {
description = "Group that runs cassandra service.";
default = "cassandra";
type = types.string;
};
envFile = mkOption {
description = "path to cassandra-env.sh";
default = "${cassandraPackage}/conf/cassandra-env.sh";
defaultText = "\${cassandraPackage}/conf/cassandra-env.sh";
type = types.path;
};
clusterName = mkOption {
description = "set cluster name";
default = "cassandra";
example = "prod-cluster0";
type = types.string;
};
commitLogDirectory = mkOption {
description = "directory for commit logs";
default = "/var/lib/cassandra/commit_log";
type = types.string;
};
savedCachesDirectory = mkOption {
description = "directory for saved caches";
default = "/var/lib/cassandra/saved_caches";
type = types.string;
};
hintedHandOff = mkOption {
description = "enable hinted handoff";
default = true;
type = types.bool;
};
hintedHandOffThrottle = mkOption {
description = "hinted hand off throttle rate in kb";
default = 1024;
type = types.int;
};
commitLogSync = mkOption {
description = "commitlog sync method";
default = "periodic";
type = types.str; type = types.str;
example = "batch"; default = defaultUser;
description = "Run Apache Cassandra under this group.";
}; };
commitLogSyncPeriod = mkOption { homeDir = mkOption {
description = "commitlog sync period in ms ";
default = 10000;
type = types.int;
};
envScript = mkOption {
default = "${cassandraPackage}/conf/cassandra-env.sh";
defaultText = "\${cassandraPackage}/conf/cassandra-env.sh";
type = types.path; type = types.path;
description = "Supply your own cassandra-env.sh rather than using the default"; default = "/var/lib/cassandra";
description = ''
Home directory for Apache Cassandra.
'';
}; };
extraParams = mkOption { package = mkOption {
description = "add additional lines to cassandra-env.sh"; type = types.package;
default = pkgs.cassandra;
defaultText = "pkgs.cassandra";
example = literalExample "pkgs.cassandra_3_11";
description = ''
The Apache Cassandra package to use.
'';
};
jvmOpts = mkOption {
type = types.listOf types.str;
default = []; default = [];
example = [''JVM_OPTS="$JVM_OPTS -Dcassandra.available_processors=1"'']; description = ''
type = types.listOf types.str; Populate the JVM_OPT environment variable.
}; '';
dataDirs = mkOption {
type = types.listOf types.path;
default = [ "/var/lib/cassandra/data" ];
description = "Data directories for cassandra";
};
logLevel = mkOption {
type = types.str;
default = "INFO";
description = "default logging level for log4j";
};
internodeEncryption = mkOption {
description = "enable internode encryption";
default = "none";
example = "all";
type = types.str;
};
clientEncryption = mkOption {
description = "enable client encryption";
default = false;
type = types.bool;
};
trustStorePath = mkOption {
description = "path to truststore";
default = ".conf/truststore";
type = types.str;
};
keyStorePath = mkOption {
description = "path to keystore";
default = ".conf/keystore";
type = types.str;
};
keyStorePassword = mkOption {
description = "password to keystore";
default = "cassandra";
type = types.str;
};
trustStorePassword = mkOption {
description = "password to truststore";
default = "cassandra";
type = types.str;
};
seeds = mkOption {
description = "password to truststore";
default = [ "127.0.0.1" ];
type = types.listOf types.str;
};
concurrentWrites = mkOption {
description = "number of concurrent writes allowed";
default = 32;
type = types.int;
};
concurrentReads = mkOption {
description = "number of concurrent reads allowed";
default = 32;
type = types.int;
}; };
listenAddress = mkOption { listenAddress = mkOption {
description = "listen address"; type = types.nullOr types.str;
default = "localhost"; default = "127.0.0.1";
type = types.str; example = literalExample "null";
description = ''
Address or interface to bind to and tell other Cassandra nodes
to connect to. You _must_ change this if you want multiple
nodes to be able to communicate!
Set listenAddress OR listenInterface, not both.
Leaving it blank leaves it up to
InetAddress.getLocalHost(). This will always do the Right
Thing _if_ the node is properly configured (hostname, name
resolution, etc), and the Right Thing is to use the address
associated with the hostname (it might not be).
Setting listen_address to 0.0.0.0 is always wrong.
'';
};
listenInterface = mkOption {
type = types.nullOr types.str;
default = null;
example = "eth1";
description = ''
Set listenAddress OR listenInterface, not both. Interfaces
must correspond to a single address, IP aliasing is not
supported.
'';
}; };
rpcAddress = mkOption { rpcAddress = mkOption {
description = "rpc listener address"; type = types.nullOr types.str;
default = "localhost"; default = "127.0.0.1";
type = types.str; example = literalExample "null";
};
incrementalBackups = mkOption {
description = "enable incremental backups";
default = false;
type = types.bool;
};
snitch = mkOption {
description = "snitch to use for topology discovery";
default = "GossipingPropertyFileSnitch";
example = "Ec2Snitch";
type = types.str;
};
dc = mkOption {
description = "datacenter for use in topology configuration";
default = "DC1";
example = "DC1";
type = types.str;
};
rack = mkOption {
description = "rack for use in topology configuration";
default = "RAC1";
example = "RAC1";
type = types.str;
};
authorizer = mkOption {
description = "
Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
";
default = "AllowAllAuthorizer";
example = "CassandraAuthorizer";
type = types.str;
};
authenticator = mkOption {
description = "
Authentication backend, implementing IAuthenticator; used to identify users
";
default = "AllowAllAuthenticator";
example = "PasswordAuthenticator";
type = types.str;
};
autoBootstrap = mkOption {
description = "It makes new (non-seed) nodes automatically migrate the right data to themselves.";
default = true;
type = types.bool;
};
streamingSocketTimoutInMS = mkOption {
description = "Enable or disable socket timeout for streaming operations";
default = 3600000; #CASSANDRA-8611
example = 120;
type = types.int;
};
repairStartAt = mkOption {
default = "Sun";
type = types.string;
description = '' description = ''
Defines realtime (i.e. wallclock) timers with calendar event The address or interface to bind the native transport server to.
expressions. For more details re: systemd OnCalendar at
https://www.freedesktop.org/software/systemd/man/systemd.time.html#Displaying%20Time%20Spans Set rpcAddress OR rpcInterface, not both.
'';
example = ["weekly" "daily" "08:05:40" "mon,fri *-1/2-1,3 *:30:45"]; Leaving rpcAddress blank has the same effect as on
}; listenAddress (i.e. it will be based on the configured hostname
repairRandomizedDelayInSec = mkOption { of the node).
default = 0;
type = types.int; Note that unlike listenAddress, you can specify 0.0.0.0, but you
description = ''Delay the timer by a randomly selected, evenly distributed must also set extraConfig.broadcast_rpc_address to a value other
amount of time between 0 and the specified time value. re: systemd timer than 0.0.0.0.
RandomizedDelaySec for more details
For security reasons, you should not expose this port to the
internet. Firewall it if needed.
''; '';
}; };
repairPostStop = mkOption { rpcInterface = mkOption {
type = types.nullOr types.str;
default = null; default = null;
type = types.nullOr types.string; example = "eth1";
description = '' description = ''
Run a script when repair is over. One can use it to send statsd events, email, etc. Set rpcAddress OR rpcInterface, not both. Interfaces must
correspond to a single address, IP aliasing is not supported.
''; '';
}; };
repairPostStart = mkOption {
default = null; extraConfig = mkOption {
type = types.nullOr types.string; type = types.attrs;
default = {};
example =
{ commitlog_sync_batch_window_in_ms = 3;
};
description = '' description = ''
Run a script when repair starts. One can use it to send statsd events, email, etc. Extra options to be merged into cassandra.yaml as nix attribute set.
It has same semantics as systemd ExecStopPost; So, if it fails, unit is consisdered
failed.
''; '';
}; };
fullRepairInterval = mkOption {
type = types.nullOr types.str;
default = "3w";
example = literalExample "null";
description = ''
Set the interval how often full repairs are run, i.e.
`nodetool repair --full` is executed. See
https://cassandra.apache.org/doc/latest/operating/repair.html
for more information.
Set to `null` to disable full repairs.
'';
};
fullRepairOptions = mkOption {
type = types.listOf types.str;
default = [];
example = [ "--partitioner-range" ];
description = ''
Options passed through to the full repair command.
'';
};
incrementalRepairInterval = mkOption {
type = types.nullOr types.str;
default = "3d";
example = literalExample "null";
description = ''
Set the interval how often incremental repairs are run, i.e.
`nodetool repair` is executed. See
https://cassandra.apache.org/doc/latest/operating/repair.html
for more information.
Set to `null` to disable incremental repairs.
'';
};
incrementalRepairOptions = mkOption {
type = types.listOf types.string;
default = [];
example = [ "--partitioner-range" ];
description = ''
Options passed through to the incremental repair command.
'';
};
}; };
###### implementation
config = mkIf cfg.enable { config = mkIf cfg.enable {
assertions =
environment.etc."cassandra/cassandra-rackdc.properties" = { [ { assertion =
source = cassandraRackFile; ((isNull cfg.listenAddress)
}; || (isNull cfg.listenInterface)
environment.etc."cassandra/cassandra.yaml" = { ) && !((isNull cfg.listenAddress)
source = cassandraConfFile; && (isNull cfg.listenInterface)
}; );
environment.etc."cassandra/log4j-server.properties" = { message = "You have to set either listenAddress or listenInterface";
source = cassandraLogFile; }
}; { assertion =
environment.etc."cassandra/cassandra-env.sh" = { ((isNull cfg.rpcAddress)
text = '' || (isNull cfg.rpcInterface)
${builtins.readFile cfg.envFile} ) && !((isNull cfg.rpcAddress)
${concatStringsSep "\n" cfg.extraParams} && (isNull cfg.rpcInterface)
''; );
}; message = "You have to set either rpcAddress or rpcInterface";
systemd.services.cassandra = { }
description = "Cassandra Daemon";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = cassandraEnvironment;
restartTriggers = [ cassandraConfFile cassandraLogFile cassandraRackFile ];
serviceConfig = {
User = cfg.user;
PermissionsStartOnly = true;
LimitAS = "infinity";
LimitNOFILE = "100000";
LimitNPROC = "32768";
LimitMEMLOCK = "infinity";
};
script = ''
${cassandraPackage}/bin/cassandra -f
'';
path = [
cfg.jre
cassandraPackage
pkgs.coreutils
]; ];
preStart = '' users = mkIf (cfg.user == defaultUser) {
mkdir -m 0700 -p /etc/cassandra/triggers extraUsers."${defaultUser}" =
mkdir -m 0700 -p /var/lib/cassandra /var/log/cassandra { group = cfg.group;
chown ${cfg.user} /var/lib/cassandra /var/log/cassandra /etc/cassandra/triggers home = cfg.homeDir;
''; createHome = true;
postStart = '' uid = config.ids.uids.cassandra;
sleep 2 description = "Cassandra service user";
while ! nodetool status >/dev/null 2>&1; do };
sleep 2 extraGroups."${defaultUser}".gid = config.ids.gids.cassandra;
done
nodetool status
'';
}; };
environment.systemPackages = [ cassandraPackage ]; systemd.services.cassandra =
{ description = "Apache Cassandra service";
networking.firewall.allowedTCPPorts = [ after = [ "network.target" ];
7000 environment =
7001 { CASSANDRA_CONF = "${cassandraEtc}";
9042 JVM_OPTS = builtins.concatStringsSep " " cfg.jvmOpts;
9160 };
]; wantedBy = [ "multi-user.target" ];
serviceConfig =
users.users.cassandra = { User = cfg.user;
if config.ids.uids ? "cassandra" Group = cfg.group;
then { uid = config.ids.uids.cassandra; } // cassandraUser ExecStart = "${cfg.package}/bin/cassandra -f";
else cassandraUser ; SuccessExitStatus = 143;
};
boot.kernel.sysctl."vm.swappiness" = pkgs.lib.mkOptionDefault 0;
systemd.timers."cassandra-repair" = {
timerConfig = {
OnCalendar = "${toString cfg.repairStartAt}";
RandomizedDelaySec = cfg.repairRandomizedDelayInSec;
}; };
};
systemd.services."cassandra-repair" = { systemd.services.cassandra-full-repair =
description = "Cassandra repair daemon"; { description = "Perform a full repair on this Cassandra node";
environment = cassandraEnvironment; after = [ "cassandra.service" ];
script = "${cassandraPackage}/bin/nodetool repair -pr"; requires = [ "cassandra.service" ];
postStop = mkIf (cfg.repairPostStop != null) cfg.repairPostStop; serviceConfig =
postStart = mkIf (cfg.repairPostStart != null) cfg.repairPostStart; { User = cfg.user;
serviceConfig = { Group = cfg.group;
User = cfg.user; ExecStart =
lib.concatStringsSep " "
([ "${cfg.package}/bin/nodetool" "repair" "--full"
] ++ cfg.fullRepairOptions);
};
};
systemd.timers.cassandra-full-repair =
mkIf (!isNull cfg.fullRepairInterval) {
description = "Schedule full repairs on Cassandra";
wantedBy = [ "timers.target" ];
timerConfig =
{ OnBootSec = cfg.fullRepairInterval;
OnUnitActiveSec = cfg.fullRepairInterval;
Persistent = true;
};
};
systemd.services.cassandra-incremental-repair =
{ description = "Perform an incremental repair on this cassandra node.";
after = [ "cassandra.service" ];
requires = [ "cassandra.service" ];
serviceConfig =
{ User = cfg.user;
Group = cfg.group;
ExecStart =
lib.concatStringsSep " "
([ "${cfg.package}/bin/nodetool" "repair"
] ++ cfg.incrementalRepairOptions);
};
};
systemd.timers.cassandra-incremental-repair =
mkIf (!isNull cfg.incrementalRepairInterval) {
description = "Schedule incremental repairs on Cassandra";
wantedBy = [ "timers.target" ];
timerConfig =
{ OnBootSec = cfg.incrementalRepairInterval;
OnUnitActiveSec = cfg.incrementalRepairInterval;
Persistent = true;
};
}; };
};
}; };
} }
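As a rough usage sketch (not part of the commit itself), the rewritten module above could be configured along these lines; the addresses and JVM flags are invented for illustration, and only options visible in the module are used:

```
{
  services.cassandra = {
    enable = true;
    # Set listenAddress OR listenInterface, not both; same for rpcAddress/rpcInterface.
    listenAddress = "192.0.2.10";
    rpcAddress = "192.0.2.10";
    # Arbitrary settings merged into the generated cassandra.yaml:
    extraConfig = { commitlog_sync_batch_window_in_ms = 3; };
    jvmOpts = [ "-Xms4G" "-Xmx4G" ];
    # Run `nodetool repair --full` every three weeks (the module default):
    fullRepairInterval = "3w";
  };
}
```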


@ -12,12 +12,10 @@
<para><emphasis>Maintainer:</emphasis> Austin Seipp</para>
<para><emphasis>Available version(s):</emphasis> 5.1.x, 5.2.x, 6.0.x</para>

<para>FoundationDB (or "FDB") is an open source, distributed, transactional
key-value store.</para>

<section><title>Configuring and basic setup</title>

@ -26,12 +24,12 @@

<programlisting>
services.foundationdb.enable = true;
services.foundationdb.package = pkgs.foundationdb52; # FoundationDB 5.2.x
</programlisting>
</para>

<para>The <option>services.foundationdb.package</option> option is required,
and must always be specified. Because FoundationDB network protocols and
on-disk storage formats may change between (major) versions, and upgrades must
be explicitly handled by the user, you must always manually specify this
yourself so that the NixOS module will use the proper version. Note that minor,
@ -70,6 +68,40 @@ fdb>
</programlisting> </programlisting>
</para> </para>
<para>You can also write programs using the available client libraries.
For example, the following Python program can be run in order to grab the
cluster status, as a quick example. (This example uses
<command>nix-shell</command> shebang support to automatically supply the
necessary Python modules).
<programlisting>
a@link> cat fdb-status.py
#! /usr/bin/env nix-shell
#! nix-shell -i python -p python pythonPackages.foundationdb52
import fdb
import json
def main():
fdb.api_version(520)
db = fdb.open()
@fdb.transactional
def get_status(tr):
return str(tr['\xff\xff/status/json'])
obj = json.loads(get_status(db))
print('FoundationDB available: %s' % obj['client']['database_status']['available'])
if __name__ == "__main__":
main()
a@link> chmod +x fdb-status.py
a@link> ./fdb-status.py
FoundationDB available: True
a@link>
</programlisting>
</para>
<para>FoundationDB is run under the <command>foundationdb</command> user and
group by default, but this may be changed in the NixOS configuration. The
systemd unit <command>foundationdb.service</command> controls the

@ -295,7 +327,6 @@

individual <command>fdbserver</command> processes. Currently, all server
processes inherit all the global <command>fdbmonitor</command> settings.
</para></listitem>
<listitem><para>Ruby bindings are not currently installed.</para></listitem>
<listitem><para>Go bindings are not currently installed.</para></listitem>
</itemizedlist>
@ -306,8 +337,9 @@ only undergone fairly basic testing of all the available functionality.</para>
<para>NixOS's FoundationDB module allows you to configure all of the most
relevant configuration options for <command>fdbmonitor</command>, matching it
quite closely. A complete list of options for the FoundationDB module may be
found <link linkend="opt-services.foundationdb.enable">here</link>. You should
also read the FoundationDB documentation.</para>
</section> </section>


@ -32,15 +32,21 @@ with lib;
    environment.systemPackages = [ pkgs.accountsservice ];

    services.dbus.packages = [ pkgs.accountsservice ];

    systemd.packages = [ pkgs.accountsservice ];

    systemd.services.accounts-daemon = {
      wantedBy = [ "graphical.target" ];

      # Accounts daemon looks for dbus interfaces in $XDG_DATA_DIRS/accountsservice
      environment.XDG_DATA_DIRS = "${config.system.path}/share";
    } // (optionalAttrs (!config.users.mutableUsers) {
      environment.NIXOS_USERS_PURE = "true";
    });
  };


@ -4,6 +4,10 @@
with lib; with lib;
let
# the demo agent isn't built by default, but we need it here
package = pkgs.geoclue2.override { withDemoAgent = config.services.geoclue2.enableDemoAgent; };
in
{ {
###### interface ###### interface
@ -21,21 +25,42 @@ with lib;
''; '';
}; };
enableDemoAgent = mkOption {
type = types.bool;
default = true;
description = ''
Whether to use the GeoClue demo agent. This should be
overridden by desktop environments that provide their own
agent.
'';
};
}; };
}; };
###### implementation ###### implementation
  config = mkIf config.services.geoclue2.enable {

    environment.systemPackages = [ package ];

    services.dbus.packages = [ package ];

    systemd.packages = [ package ];
# this needs to run as a user service, since it's associated with the
# user who is making the requests
systemd.user.services = mkIf config.services.geoclue2.enableDemoAgent {
"geoclue-agent" = {
description = "Geoclue agent";
script = "${package}/libexec/geoclue-2.0/demos/agent";
# this should really be `partOf = [ "geoclue.service" ]`, but
# we can't be part of a system service, and the agent should
# be okay with the main service coming and going
wantedBy = [ "default.target" ];
};
};
}; };
} }
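A minimal usage sketch of the module above (hypothetical, mirroring its defaults); a desktop environment that ships its own agent would disable the demo agent:

```
{
  services.geoclue2 = {
    enable = true;
    enableDemoAgent = true; # set to false if the desktop provides its own agent
  };
}
```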


@ -0,0 +1,26 @@
# Zeitgeist
{ config, lib, pkgs, ... }:
with lib;
{
###### interface
options = {
services.zeitgeist = {
enable = mkEnableOption "zeitgeist";
};
};
###### implementation
config = mkIf config.services.zeitgeist.enable {
environment.systemPackages = [ pkgs.zeitgeist ];
services.dbus.packages = [ pkgs.zeitgeist ];
systemd.packages = [ pkgs.zeitgeist ];
};
}
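For completeness, a minimal sketch of how the new module is switched on (it exposes only the enable option shown above):

```
{
  services.zeitgeist.enable = true;
}
```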


@ -10,8 +10,8 @@ in {
    package = mkOption {
      type = types.package;
      default = pkgs.libinfinity;
      defaultText = "pkgs.libinfinity";
      description = ''
        Package providing infinoted
      '';

@ -119,7 +119,7 @@

    users.groups = optional (cfg.group == "infinoted")
      { name = "infinoted";
      };

    systemd.services.infinoted =
      { description = "Gobby Dedicated Server";

@ -129,7 +129,7 @@

        serviceConfig = {
          Type = "simple";
          Restart = "always";
          ExecStart = "${cfg.package.infinoted} --config-file=/var/lib/infinoted/infinoted.conf";
          User = cfg.user;
          Group = cfg.group;
          PermissionsStartOnly = true;


@ -18,6 +18,16 @@ let
(boolFlag "secure" cfg.secure) (boolFlag "secure" cfg.secure)
(boolFlag "noupnp" cfg.noUPnP) (boolFlag "noupnp" cfg.noUPnP)
]; ];
stopScript = pkgs.writeScript "terraria-stop" ''
#!${pkgs.runtimeShell}
if ! [ -d "/proc/$1" ]; then
exit 0
fi
${getBin pkgs.tmux}/bin/tmux -S /var/lib/terraria/terraria.sock send-keys Enter exit Enter
${getBin pkgs.coreutils}/bin/tail --pid="$1" -f /dev/null
'';
in in
{ {
options = { options = {
@ -124,10 +134,10 @@ in
      serviceConfig = {
        User = "terraria";
        Type = "forking";
        GuessMainPID = true;
        ExecStart = "${getBin pkgs.tmux}/bin/tmux -S /var/lib/terraria/terraria.sock new -d ${pkgs.terraria-server}/bin/TerrariaServer ${concatStringsSep " " flags}";
        ExecStop = "${stopScript} $MAINPID";
      };

      postStart = ''


@ -71,6 +71,13 @@ in {
BlacklistPlugins=${lib.concatStringsSep ";" cfg.blacklistPlugins} BlacklistPlugins=${lib.concatStringsSep ";" cfg.blacklistPlugins}
''; '';
}; };
"fwupd/uefi.conf" = {
source = pkgs.writeText "uefi.conf" ''
[uefi]
OverrideESPMountPoint=${config.boot.loader.efi.efiSysMountPoint}
'';
};
} // originalEtc // extraTrustedKeys; } // originalEtc // extraTrustedKeys;
services.dbus.packages = [ pkgs.fwupd ]; services.dbus.packages = [ pkgs.fwupd ];


@ -6,16 +6,30 @@ let
  cfg = config.services.thermald;
in {
  ###### interface
  options = {
    services.thermald = {
      enable = mkOption {
        default = false;
        description = ''
          Whether to enable thermald, the temperature management daemon.
        '';
      };

      debug = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable debug logging.
'';
};
configFile = mkOption {
type = types.nullOr types.path;
default = null;
description = "the thermald manual configuration file.";
};
};
};
  ###### implementation
  config = mkIf cfg.enable {

@ -24,7 +38,15 @@

    systemd.services.thermald = {
      description = "Thermal Daemon Service";
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
ExecStart = ''
${pkgs.thermald}/sbin/thermald \
--no-daemon \
${optionalString cfg.debug "--loglevel=debug"} \
${optionalString (cfg.configFile != null) "--config-file ${cfg.configFile}"} \
--dbus-enable
'';
};
    };
  };
}
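A hedged example of the options added above; the XML path is invented, and configFile is optional (it defaults to null, in which case no --config-file flag is passed):

```
{
  services.thermald = {
    enable = true;
    debug = true;                    # adds --loglevel=debug to ExecStart
    configFile = ./thermal-conf.xml; # passed via --config-file
  };
}
```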


@ -0,0 +1,134 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.undervolt;
in {
options.services.undervolt = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to undervolt intel cpus.
'';
};
verbose = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable verbose logging.
'';
};
package = mkOption {
type = types.package;
default = pkgs.undervolt;
defaultText = "pkgs.undervolt";
description = ''
undervolt derivation to use.
'';
};
coreOffset = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The amount of voltage to offset the CPU cores by. Accepts a floating point number.
'';
};
gpuOffset = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The amount of voltage to offset the GPU by. Accepts a floating point number.
'';
};
uncoreOffset = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The amount of voltage to offset uncore by. Accepts a floating point number.
'';
};
analogioOffset = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The amount of voltage to offset analogio by. Accepts a floating point number.
'';
};
temp = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The temperature target. Accepts a floating point number.
'';
};
tempAc = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The temperature target on AC power. Accepts a floating point number.
'';
};
tempBat = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The temperature target on battery power. Accepts a floating point number.
'';
};
};
config = mkIf cfg.enable {
boot.kernelModules = [ "msr" ];
environment.systemPackages = [ cfg.package ];
systemd.services.undervolt = {
path = [ pkgs.undervolt ];
description = "Intel Undervolting Service";
serviceConfig = {
Type = "oneshot";
Restart = "no";
# `core` and `cache` are both intentionally set to `cfg.coreOffset` as according to the undervolt docs:
#
# Core or Cache offsets have no effect. It is not possible to set different offsets for
# CPU Core and Cache. The CPU will take the smaller of the two offsets, and apply that to
# both CPU and Cache. A warning message will be displayed if you attempt to set different offsets.
ExecStart = ''
${pkgs.undervolt}/bin/undervolt \
${optionalString cfg.verbose "--verbose"} \
${optionalString (cfg.coreOffset != null) "--core ${cfg.coreOffset}"} \
${optionalString (cfg.coreOffset != null) "--cache ${cfg.coreOffset}"} \
${optionalString (cfg.gpuOffset != null) "--gpu ${cfg.gpuOffset}"} \
${optionalString (cfg.uncoreOffset != null) "--uncore ${cfg.uncoreOffset}"} \
${optionalString (cfg.analogioOffset != null) "--analogio ${cfg.analogioOffset}"} \
${optionalString (cfg.temp != null) "--temp ${cfg.temp}"} \
${optionalString (cfg.tempAc != null) "--temp-ac ${cfg.tempAc}"} \
${optionalString (cfg.tempBat != null) "--temp-bat ${cfg.tempBat}"}
'';
};
};
systemd.timers.undervolt = {
description = "Undervolt timer to ensure voltage settings are always applied";
partOf = [ "undervolt.service" ];
wantedBy = [ "multi-user.target" ];
timerConfig = {
OnBootSec = "2min";
OnUnitActiveSec = "30";
};
};
};
}
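A sketch of how the new undervolt module might be used; the offsets are placeholders, since safe values are entirely hardware-specific and the module passes them straight through to the undervolt tool:

```
{
  services.undervolt = {
    enable = true;
    coreOffset = "-100"; # the module applies this value to both --core and --cache
    gpuOffset = "-50";
    tempAc = "97";
    tempBat = "85";
  };
}
```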


@ -85,9 +85,11 @@ in {
after = [ "multi-user.target" ]; # makes sure hostname etc is set after = [ "multi-user.target" ]; # makes sure hostname etc is set
serviceConfig = { serviceConfig = {
Type = "notify"; Type = "notify";
PIDFile = pidFile;
StandardOutput = "null"; StandardOutput = "null";
Restart = "on-failure"; Restart = "on-failure";
ExecStart = "${cfg.package}/sbin/syslog-ng ${concatStringsSep " " syslogngOptions}"; ExecStart = "${cfg.package}/sbin/syslog-ng ${concatStringsSep " " syslogngOptions}";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
}; };
}; };
}; };


@ -47,7 +47,7 @@ in
  ###### implementation

  config = mkIf cfg.enable {
    dysnomia.enable = true;

    environment.systemPackages = [ pkgs.disnix ] ++ optional cfg.useWebServiceInterface pkgs.DisnixWebService;


@ -5,6 +5,43 @@ with lib;
let let
cfg = config.services.dockerRegistry; cfg = config.services.dockerRegistry;
blobCache = if cfg.enableRedisCache
then "redis"
else "inmemory";
registryConfig = {
version = "0.1";
log.fields.service = "registry";
storage = {
cache.blobdescriptor = blobCache;
filesystem.rootdirectory = cfg.storagePath;
delete.enabled = cfg.enableDelete;
};
http = {
addr = ":${builtins.toString cfg.port}";
headers.X-Content-Type-Options = ["nosniff"];
};
health.storagedriver = {
enabled = true;
interval = "10s";
threshold = 3;
};
};
registryConfig.redis = mkIf cfg.enableRedisCache {
addr = "${cfg.redisUrl}";
password = "${cfg.redisPassword}";
db = 0;
dialtimeout = "10ms";
readtimeout = "10ms";
writetimeout = "10ms";
pool = {
maxidle = 16;
maxactive = 64;
idletimeout = "300s";
};
};
configFile = pkgs.writeText "docker-registry-config.yml" (builtins.toJSON (recursiveUpdate registryConfig cfg.extraConfig)); configFile = pkgs.writeText "docker-registry-config.yml" (builtins.toJSON (recursiveUpdate registryConfig cfg.extraConfig));
in { in {


@ -3,7 +3,7 @@
with lib;

let
  cfg = config.dysnomia;

  printProperties = properties:
    concatMapStrings (propertyName:

@ -69,7 +69,7 @@ let

in
{
  options = {
    dysnomia = {

      enable = mkOption {
        type = types.bool;

@ -142,7 +142,7 @@ in

    environment.systemPackages = [ cfg.package ];

    dysnomia.package = pkgs.dysnomia.override (origArgs: {
      enableApacheWebApplication = config.services.httpd.enable;
      enableAxis2WebService = config.services.tomcat.axis2.enable;
      enableEjabberdDump = config.services.ejabberd.enable;

@ -153,7 +153,7 @@ in

      enableMongoDatabase = config.services.mongodb.enable;
    });

    dysnomia.properties = {
      hostname = config.networking.hostName;
      inherit (config.nixpkgs.localSystem) system;

@ -171,7 +171,7 @@ in

      }}");
    };

    dysnomia.containers = lib.recursiveUpdate ({
      process = {};
      wrapper = {};
    }


@ -88,7 +88,7 @@ in
    };

    maxJobs = mkOption {
      type = types.either types.int (types.enum ["auto"]);
      default = 1;
      example = 64;
      description = ''


@ -1,121 +1,124 @@
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:
# TODO: support non-postgresql
with lib; with lib;
let let
cfg = config.services.redmine; cfg = config.services.redmine;
ruby = pkgs.ruby; bundle = "${pkgs.redmine}/share/redmine/bin/bundle";
databaseYml = '' databaseYml = pkgs.writeText "database.yml" ''
production: production:
adapter: postgresql adapter: ${cfg.database.type}
database: ${cfg.databaseName} database: ${cfg.database.name}
host: ${cfg.databaseHost} host: ${cfg.database.host}
password: ${cfg.databasePassword} port: ${toString cfg.database.port}
username: ${cfg.databaseUsername} username: ${cfg.database.user}
encoding: utf8 password: #dbpass#
''; '';
configurationYml = '' configurationYml = pkgs.writeText "configuration.yml" ''
default: default:
# Absolute path to the directory where attachments are stored. scm_subversion_command: ${pkgs.subversion}/bin/svn
# The default is the 'files' directory in your Redmine instance. scm_mercurial_command: ${pkgs.mercurial}/bin/hg
# Your Redmine instance needs to have write permission on this scm_git_command: ${pkgs.gitAndTools.git}/bin/git
# directory. scm_cvs_command: ${pkgs.cvs}/bin/cvs
# Examples: scm_bazaar_command: ${pkgs.bazaar}/bin/bzr
# attachments_storage_path: /var/redmine/files scm_darcs_command: ${pkgs.darcs}/bin/darcs
# attachments_storage_path: D:/redmine/files
attachments_storage_path: ${cfg.stateDir}/files
# Absolute path to the SCM commands errors (stderr) log file. ${cfg.extraConfig}
# The default is to log in the 'log' directory of your Redmine instance.
# Example:
# scm_stderr_log_file: /var/log/redmine_scm_stderr.log
scm_stderr_log_file: ${cfg.stateDir}/redmine_scm_stderr.log
${cfg.extraConfig}
''; '';
unpackTheme = unpack "theme"; in
unpackPlugin = unpack "plugin";
unpack = id: (name: source:
pkgs.stdenv.mkDerivation {
name = "redmine-${id}-${name}";
buildInputs = [ pkgs.unzip ];
buildCommand = ''
mkdir -p $out
cd $out
unpackFile ${source}
'';
});
in {
{
options = { options = {
services.redmine = { services.redmine = {
enable = mkOption { enable = mkOption {
type = types.bool; type = types.bool;
default = false; default = false;
description = '' description = "Enable the Redmine service.";
Enable the redmine service. };
'';
user = mkOption {
type = types.str;
default = "redmine";
description = "User under which Redmine is ran.";
};
group = mkOption {
type = types.str;
default = "redmine";
description = "Group under which Redmine is ran.";
}; };
stateDir = mkOption { stateDir = mkOption {
type = types.str; type = types.str;
default = "/var/redmine"; default = "/var/lib/redmine";
description = "The state directory, logs and plugins are stored here"; description = "The state directory, logs and plugins are stored here.";
}; };
extraConfig = mkOption { extraConfig = mkOption {
type = types.lines; type = types.lines;
default = ""; default = "";
description = "Extra configuration in configuration.yml"; description = ''
Extra configuration in configuration.yml.
See https://guides.rubyonrails.org/action_mailer_basics.html#action-mailer-configuration
'';
}; };
themes = mkOption { database = {
type = types.attrsOf types.path; type = mkOption {
default = {}; type = types.enum [ "mysql2" "postgresql" ];
description = "Set of themes"; example = "postgresql";
}; default = "mysql2";
description = "Database engine to use.";
};
plugins = mkOption { host = mkOption {
type = types.attrsOf types.path; type = types.str;
default = {}; default = "127.0.0.1";
description = "Set of plugins"; description = "Database host address.";
}; };
#databaseType = mkOption { port = mkOption {
# type = types.str; type = types.int;
# default = "postgresql"; default = 3306;
# description = "Type of database"; description = "Database host port.";
#}; };
databaseHost = mkOption { name = mkOption {
type = types.str; type = types.str;
default = "127.0.0.1"; default = "redmine";
description = "Database hostname"; description = "Database name.";
}; };
databasePassword = mkOption { user = mkOption {
type = types.str; type = types.str;
default = ""; default = "redmine";
description = "Database user password"; description = "Database user.";
}; };
databaseName = mkOption { password = mkOption {
type = types.str; type = types.str;
default = "redmine"; default = "";
description = "Database name"; description = ''
}; The password corresponding to <option>database.user</option>.
Warning: this is stored in cleartext in the Nix store!
Use <option>database.passwordFile</option> instead.
'';
};
databaseUsername = mkOption { passwordFile = mkOption {
type = types.str; type = types.nullOr types.path;
default = "redmine"; default = null;
description = "Database user"; example = "/run/keys/redmine-dbpassword";
description = ''
A file containing the password corresponding to
<option>database.user</option>.
'';
};
}; };
}; };
}; };
@ -123,99 +126,106 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
assertions = [ assertions = [
{ assertion = cfg.databasePassword != ""; { assertion = cfg.database.passwordFile != null || cfg.database.password != "";
message = "services.redmine.databasePassword must be set"; message = "either services.redmine.database.passwordFile or services.redmine.database.password must be set";
} }
]; ];
users.users = [ environment.systemPackages = [ pkgs.redmine ];
{ name = "redmine";
group = "redmine";
uid = config.ids.uids.redmine;
} ];
users.groups = [
{ name = "redmine";
gid = config.ids.gids.redmine;
} ];
systemd.services.redmine = { systemd.services.redmine = {
after = [ "network.target" "postgresql.service" ]; after = [ "network.target" (if cfg.database.type == "mysql2" then "mysql.service" else "postgresql.service") ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
environment.RAILS_ENV = "production";
environment.RAILS_ETC = "${cfg.stateDir}/config";
environment.RAILS_LOG = "${cfg.stateDir}/log";
environment.RAILS_VAR = "${cfg.stateDir}/var";
environment.RAILS_CACHE = "${cfg.stateDir}/cache";
environment.RAILS_PLUGINS = "${cfg.stateDir}/plugins";
environment.RAILS_PUBLIC = "${cfg.stateDir}/public";
environment.RAILS_TMP = "${cfg.stateDir}/tmp";
environment.SCHEMA = "${cfg.stateDir}/cache/schema.db";
environment.HOME = "${pkgs.redmine}/share/redmine"; environment.HOME = "${pkgs.redmine}/share/redmine";
environment.RAILS_ENV = "production";
environment.RAILS_CACHE = "${cfg.stateDir}/cache";
environment.REDMINE_LANG = "en"; environment.REDMINE_LANG = "en";
environment.GEM_HOME = "${pkgs.redmine}/share/redmine/vendor/bundle/ruby/1.9.1"; environment.SCHEMA = "${cfg.stateDir}/cache/schema.db";
environment.GEM_PATH = "${pkgs.bundler}/${pkgs.bundler.ruby.gemPath}";
path = with pkgs; [ path = with pkgs; [
imagemagickBig imagemagickBig
subversion
mercurial
cvs
config.services.postgresql.package
bazaar bazaar
cvs
darcs
gitAndTools.git gitAndTools.git
# once we build binaries for darc enable it mercurial
#darcs subversion
]; ];
preStart = '' preStart = ''
# TODO: use env vars # start with a fresh config directory every time
for i in plugins public/plugin_assets db files log config cache var/files tmp; do rm -rf ${cfg.stateDir}/config
cp -r ${pkgs.redmine}/share/redmine/config.dist ${cfg.stateDir}/config
# create the basic state directory layout pkgs.redmine expects
mkdir -p /run/redmine
for i in config files log plugins tmp; do
mkdir -p ${cfg.stateDir}/$i mkdir -p ${cfg.stateDir}/$i
ln -fs ${cfg.stateDir}/$i /run/redmine/$i
done done
chown -R redmine:redmine ${cfg.stateDir} # ensure cache directory exists for db:migrate command
chmod -R 755 ${cfg.stateDir} mkdir -p ${cfg.stateDir}/cache
rm -rf ${cfg.stateDir}/public/* # link in the application configuration
cp -R ${pkgs.redmine}/share/redmine/public/* ${cfg.stateDir}/public/ ln -fs ${configurationYml} ${cfg.stateDir}/config/configuration.yml
for theme in ${concatStringsSep " " (mapAttrsToList unpackTheme cfg.themes)}; do
ln -fs $theme/* ${cfg.stateDir}/public/themes/
done
rm -rf ${cfg.stateDir}/plugins/* chmod -R ug+rwX,o-rwx+x ${cfg.stateDir}/
for plugin in ${concatStringsSep " " (mapAttrsToList unpackPlugin cfg.plugins)}; do
ln -fs $plugin/* ${cfg.stateDir}/plugins/''${plugin##*-redmine-plugin-}
done
ln -fs ${pkgs.writeText "database.yml" databaseYml} ${cfg.stateDir}/config/database.yml # handle database.passwordFile
ln -fs ${pkgs.writeText "configuration.yml" configurationYml} ${cfg.stateDir}/config/configuration.yml DBPASS=$(head -n1 ${cfg.database.passwordFile})
cp -f ${databaseYml} ${cfg.stateDir}/config/database.yml
sed -e "s,#dbpass#,$DBPASS,g" -i ${cfg.stateDir}/config/database.yml
chmod 440 ${cfg.stateDir}/config/database.yml
if [ "${cfg.databaseHost}" = "127.0.0.1" ]; then # generate a secret token if required
if ! test -e "${cfg.stateDir}/db-created"; then if ! test -e "${cfg.stateDir}/config/initializers/secret_token.rb"; then
psql postgres -c "CREATE ROLE redmine WITH LOGIN NOCREATEDB NOCREATEROLE ENCRYPTED PASSWORD '${cfg.databasePassword}'" ${bundle} exec rake generate_secret_token
${config.services.postgresql.package}/bin/createdb --owner redmine redmine || true chmod 440 ${cfg.stateDir}/config/initializers/secret_token.rb
touch "${cfg.stateDir}/db-created"
fi
fi fi
cd ${pkgs.redmine}/share/redmine/ # ensure everything is owned by ${cfg.user}
${ruby}/bin/rake db:migrate chown -R ${cfg.user}:${cfg.group} ${cfg.stateDir}
${ruby}/bin/rake redmine:plugins:migrate
${ruby}/bin/rake redmine:load_default_data ${bundle} exec rake db:migrate
${ruby}/bin/rake generate_secret_token ${bundle} exec rake redmine:load_default_data
''; '';
serviceConfig = { serviceConfig = {
PermissionsStartOnly = true; # preStart must be run as root PermissionsStartOnly = true; # preStart must be run as root
Type = "simple"; Type = "simple";
User = "redmine"; User = cfg.user;
Group = "redmine"; Group = cfg.group;
TimeoutSec = "300"; TimeoutSec = "300";
WorkingDirectory = "${pkgs.redmine}/share/redmine"; WorkingDirectory = "${pkgs.redmine}/share/redmine";
ExecStart="${ruby}/bin/ruby ${pkgs.redmine}/share/redmine/script/rails server webrick -e production -P ${cfg.stateDir}/redmine.pid"; ExecStart="${bundle} exec rails server webrick -e production -P ${cfg.stateDir}/redmine.pid";
}; };
}; };
users.extraUsers = optionalAttrs (cfg.user == "redmine") (singleton
{ name = "redmine";
group = cfg.group;
home = cfg.stateDir;
createHome = true;
uid = config.ids.uids.redmine;
});
users.extraGroups = optionalAttrs (cfg.group == "redmine") (singleton
{ name = "redmine";
gid = config.ids.gids.redmine;
});
warnings = optional (cfg.database.password != "")
''config.services.redmine.database.password will be stored as plaintext
in the Nix store. Use database.passwordFile instead.'';
# Create database passwordFile default when password is configured.
services.redmine.database.passwordFile =
(mkDefault (toString (pkgs.writeTextFile {
name = "redmine-database-password";
text = cfg.database.password;
})));
}; };
} }
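A hypothetical instance of the reworked Redmine module; the PostgreSQL port and the key path are example values (the module's port default of 3306 targets MySQL):

```
{
  services.redmine = {
    enable = true;
    database = {
      type = "postgresql";
      host = "127.0.0.1";
      port = 5432;
      name = "redmine";
      user = "redmine";
      # preferred over database.password, which would end up in the Nix store
      passwordFile = "/run/keys/redmine-dbpassword";
    };
  };
}
```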


@ -83,20 +83,20 @@ in
  config = mkMerge [
    (mkIf cfgC.enable {
      systemd.user.services."synergy-client" = {
        after = [ "network.target" "graphical-session.target" ];
        description = "Synergy client";
        wantedBy = optional cfgC.autoStart "graphical-session.target";
        path = [ pkgs.synergy ];
        serviceConfig.ExecStart = ''${pkgs.synergy}/bin/synergyc -f ${optionalString (cfgC.screenName != "") "-n ${cfgC.screenName}"} ${cfgC.serverAddress}'';
        serviceConfig.Restart = "on-failure";
      };
    })
    (mkIf cfgS.enable {
      systemd.user.services."synergy-server" = {
        after = [ "network.target" "graphical-session.target" ];
        description = "Synergy server";
        wantedBy = optional cfgS.autoStart "graphical-session.target";
        path = [ pkgs.synergy ];
        serviceConfig.ExecStart = ''${pkgs.synergy}/bin/synergys -c ${cfgS.configFile} -f ${optionalString (cfgS.address != "") "-a ${cfgS.address}"} ${optionalString (cfgS.screenName != "") "-n ${cfgS.screenName}" }'';
        serviceConfig.Restart = "on-failure";


@ -0,0 +1,236 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.datadog-agent;
ddConf = {
dd_url = "https://app.datadoghq.com";
skip_ssl_validation = "no";
api_key = "";
confd_path = "/etc/datadog-agent/conf.d";
additional_checksd = "/etc/datadog-agent/checks.d";
use_dogstatsd = "yes";
}
// optionalAttrs (cfg.logLevel != null) { log_level = cfg.logLevel; }
// optionalAttrs (cfg.hostname != null) { inherit (cfg) hostname; }
// optionalAttrs (cfg.tags != null ) { tags = concatStringsSep ", " cfg.tags; }
// cfg.extraConfig;
# Generate Datadog configuration files for each configured checks.
# This works because check configurations have predictable paths,
# and because JSON is a valid subset of YAML.
makeCheckConfigs = entries: mapAttrsToList (name: conf: {
source = pkgs.writeText "${name}-check-conf.yaml" (builtins.toJSON conf);
target = "datadog-agent/conf.d/${name}.d/conf.yaml";
}) entries;
defaultChecks = {
disk = cfg.diskCheck;
network = cfg.networkCheck;
};
# Assemble all check configurations and the top-level agent
# configuration.
etcfiles = with pkgs; with builtins; [{
source = writeText "datadog.yaml" (toJSON ddConf);
target = "datadog-agent/datadog.yaml";
}] ++ makeCheckConfigs (cfg.checks // defaultChecks);
# Apply the configured extraIntegrations to the provided agent
# package. See the documentation of `dd-agent/integrations-core.nix`
# for detailed information on this.
datadogPkg = cfg.package.overrideAttrs(_: {
python = (pkgs.datadog-integrations-core cfg.extraIntegrations).python;
});
in {
options.services.datadog-agent = {
enable = mkOption {
description = ''
Whether to enable the datadog-agent v6 monitoring service
'';
default = false;
type = types.bool;
};
package = mkOption {
default = pkgs.datadog-agent;
defaultText = "pkgs.datadog-agent";
description = ''
Which DataDog v6 agent package to use. Note that the provided
package is expected to have an overridable `python`-attribute
which configures the Python environment with the Datadog
checks.
'';
type = types.package;
};
apiKeyFile = mkOption {
description = ''
Path to a file containing the Datadog API key to associate the
agent with your account.
'';
example = "/run/keys/datadog_api_key";
type = types.path;
};
tags = mkOption {
description = "The tags to mark this Datadog agent";
example = [ "test" "service" ];
default = null;
type = types.nullOr (types.listOf types.str);
};
hostname = mkOption {
description = "The hostname to show in the Datadog dashboard (optional)";
default = null;
example = "mymachine.mydomain";
type = types.uniq (types.nullOr types.string);
};
logLevel = mkOption {
description = "Logging verbosity.";
default = null;
type = types.nullOr (types.enum ["DEBUG" "INFO" "WARN" "ERROR"]);
};
extraIntegrations = mkOption {
default = {};
type = types.attrs;
description = ''
Extra integrations from the Datadog core-integrations
repository that should be built and included.
By default the included integrations are disk, mongo, network,
nginx and postgres.
To include additional integrations the name of the derivation
and a function to filter its dependencies from the Python
package set must be provided.
'';
example = {
ntp = (pythonPackages: [ pythonPackages.ntplib ]);
};
};
extraConfig = mkOption {
default = {};
type = types.attrs;
description = ''
Extra configuration options that will be merged into the
main config file <filename>datadog.yaml</filename>.
'';
};
checks = mkOption {
description = ''
Configuration for all Datadog checks. Keys of this attribute
set will be used as the name of the check to create the
appropriate configuration in `conf.d/$check.d/conf.yaml`.
The configuration is converted into JSON from the plain Nix
language configuration, meaning that you should write
configuration adhering to Datadog's documentation - but in Nix
language.
Refer to the implementation of this module (specifically the
definition of `defaultChecks`) for an example.
Note: The 'disk' and 'network' check are configured in
separate options because they exist by default. Attempting to
override their configuration here will have no effect.
'';
example = {
http_check = {
init_config = null; # sic!
instances = [
{
name = "some-service";
url = "http://localhost:1337/healthz";
tags = [ "some-service" ];
}
];
};
};
default = {};
# sic! The structure of the values is up to the check, so we can
# not usefully constrain the type further.
type = with types; attrsOf attrs;
};
diskCheck = mkOption {
description = "Disk check config";
type = types.attrs;
default = {
init_config = {};
instances = [ { use-mount = "no"; } ];
};
};
networkCheck = mkOption {
description = "Network check config";
type = types.attrs;
default = {
init_config = {};
# Network check only supports one configured instance
instances = [ { collect_connection_state = false;
excluded_interfaces = [ "lo" "lo0" ]; } ];
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ datadogPkg pkgs.sysstat pkgs.procps ];
users.extraUsers.datadog = {
description = "Datadog Agent User";
uid = config.ids.uids.datadog;
group = "datadog";
home = "/var/log/datadog/";
createHome = true;
};
users.extraGroups.datadog.gid = config.ids.gids.datadog;
systemd.services = let
makeService = attrs: recursiveUpdate {
path = [ datadogPkg pkgs.python pkgs.sysstat pkgs.procps ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = "datadog";
Group = "datadog";
Restart = "always";
RestartSec = 2;
PrivateTmp = true;
};
restartTriggers = [ datadogPkg ] ++ map (etc: etc.source) etcfiles;
} attrs;
in {
datadog-agent = makeService {
description = "Datadog agent monitor";
preStart = ''
chown -R datadog: /etc/datadog-agent
rm -f /etc/datadog-agent/auth_token
'';
script = ''
export DD_API_KEY=$(head -n 1 ${cfg.apiKeyFile})
exec ${datadogPkg}/bin/agent start -c /etc/datadog-agent/datadog.yaml
'';
serviceConfig.PermissionsStartOnly = true;
};
dd-jmxfetch = lib.mkIf (lib.hasAttr "jmx" cfg.checks) (makeService {
description = "Datadog JMX Fetcher";
path = [ datadogPkg pkgs.python pkgs.sysstat pkgs.procps pkgs.jdk ];
serviceConfig.ExecStart = "${datadogPkg}/bin/dd-jmxfetch";
});
};
environment.etc = etcfiles;
};
}
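For illustration, a configuration for the v6 module above might look as follows; the tags are invented, and the http_check block simply reuses the example from the option description:

```
{
  services.datadog-agent = {
    enable = true;
    apiKeyFile = "/run/keys/datadog_api_key";
    tags = [ "env:staging" "role:web" ];
    checks.http_check = {
      init_config = null; # sic, as in the option example
      instances = [
        { name = "some-service";
          url = "http://localhost:1337/healthz";
          tags = [ "some-service" ];
        }
      ];
    };
  };
}
```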


@ -114,13 +114,22 @@ let
in {
  options.services.dd-agent = {
    enable = mkOption {
      description = ''
        Whether to enable the dd-agent v5 monitoring service.
        For datadog-agent v6, see <option>services.datadog-agent.enable</option>.
      '';
      default = false;
      type = types.bool;
    };

    api_key = mkOption {
      description = ''
        The Datadog API key to associate the agent with your account.
        Warning: this key is stored in cleartext within the world-readable
        Nix store! Consider using the new v6
        <option>services.datadog-agent</option> module instead.
      '';
      example = "ae0aa6a8f08efa988ba0a17578f009ab";
      type = types.str;
    };
@ -188,48 +197,41 @@ in {
users.groups.datadog.gid = config.ids.gids.datadog; users.groups.datadog.gid = config.ids.gids.datadog;
systemd.services.dd-agent = { systemd.services = let
description = "Datadog agent monitor"; makeService = attrs: recursiveUpdate {
path = [ pkgs."dd-agent" pkgs.python pkgs.sysstat pkgs.procps pkgs.gohai ]; path = [ pkgs.dd-agent pkgs.python pkgs.sysstat pkgs.procps pkgs.gohai ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
serviceConfig = { serviceConfig = {
ExecStart = "${pkgs.dd-agent}/bin/dd-agent foreground"; User = "datadog";
User = "datadog"; Group = "datadog";
Group = "datadog"; Restart = "always";
Restart = "always"; RestartSec = 2;
RestartSec = 2; PrivateTmp = true;
};
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig processConfig ];
} attrs;
in {
dd-agent = makeService {
description = "Datadog agent monitor";
serviceConfig.ExecStart = "${pkgs.dd-agent}/bin/dd-agent foreground";
}; };
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig processConfig ];
};
systemd.services.dogstatsd = { dogstatsd = makeService {
description = "Datadog statsd"; description = "Datadog statsd";
path = [ pkgs."dd-agent" pkgs.python pkgs.procps ]; environment.TMPDIR = "/run/dogstatsd";
wantedBy = [ "multi-user.target" ]; serviceConfig = {
serviceConfig = { ExecStart = "${pkgs.dd-agent}/bin/dogstatsd start";
ExecStart = "${pkgs.dd-agent}/bin/dogstatsd start"; Type = "forking";
User = "datadog"; PIDFile = "/run/dogstatsd/dogstatsd.pid";
Group = "datadog"; RuntimeDirectory = "dogstatsd";
Type = "forking"; };
PIDFile = "/tmp/dogstatsd.pid";
Restart = "always";
RestartSec = 2;
}; };
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig processConfig ];
};
systemd.services.dd-jmxfetch = lib.mkIf (cfg.jmxConfig != null) { dd-jmxfetch = lib.mkIf (cfg.jmxConfig != null) {
description = "Datadog JMX Fetcher"; description = "Datadog JMX Fetcher";
path = [ pkgs."dd-agent" pkgs.python pkgs.sysstat pkgs.procps pkgs.jdk ]; path = [ pkgs.dd-agent pkgs.python pkgs.sysstat pkgs.procps pkgs.jdk ];
wantedBy = [ "multi-user.target" ]; serviceConfig.ExecStart = "${pkgs.dd-agent}/bin/dd-jmxfetch";
serviceConfig = {
ExecStart = "${pkgs.dd-agent}/bin/dd-jmxfetch";
User = "datadog";
Group = "datadog";
Restart = "always";
RestartSec = 2;
}; };
restartTriggers = [ pkgs.dd-agent ddConf diskConfig networkConfig postgresqlConfig nginxConfig mongoConfig jmxConfig ];
}; };
environment.etc = etcfiles; environment.etc = etcfiles;
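This hunk factors the settings shared by the Datadog units into a `makeService` helper built on `lib.recursiveUpdate`, so each unit only declares what differs. A minimal sketch of the same pattern in isolation; the `my-agent` service name and `pkgs.hello` command are placeholders, not part of the module:

```
{ lib, pkgs, ... }:
let
  # Merge per-service attributes into a set of shared defaults.
  makeService = attrs: lib.recursiveUpdate {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      User = "datadog";
      Restart = "always";
      RestartSec = 2;
    };
  } attrs;
in {
  systemd.services.my-agent = makeService {
    description = "Example agent";
    # Nested keys are merged recursively into the shared serviceConfig.
    serviceConfig.ExecStart = "${pkgs.hello}/bin/hello";
  };
}
```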

View File

@ -57,12 +57,6 @@ let
--nodaemon --syslog --prefix=${name} --pidfile /run/${name}/${name}.pid ${name} --nodaemon --syslog --prefix=${name} --pidfile /run/${name}/${name}.pid ${name}
''; '';
mkPidFileDir = name: ''
mkdir -p /run/${name}
chmod 0700 /run/${name}
chown -R graphite:graphite /run/${name}
'';
carbonEnv = { carbonEnv = {
PYTHONPATH = let PYTHONPATH = let
cenv = pkgs.python.buildEnv.override { cenv = pkgs.python.buildEnv.override {
@ -412,18 +406,16 @@ in {
after = [ "network.target" ]; after = [ "network.target" ];
environment = carbonEnv; environment = carbonEnv;
serviceConfig = { serviceConfig = {
RuntimeDirectory = name;
ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}"; ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}";
User = "graphite"; User = "graphite";
Group = "graphite"; Group = "graphite";
PermissionsStartOnly = true; PermissionsStartOnly = true;
PIDFile="/run/${name}/${name}.pid"; PIDFile="/run/${name}/${name}.pid";
}; };
preStart = mkPidFileDir name + '' preStart = ''
install -dm0700 -o graphite -g graphite ${cfg.dataDir}
mkdir -p ${cfg.dataDir}/whisper install -dm0700 -o graphite -g graphite ${cfg.dataDir}/whisper
chmod 0700 ${cfg.dataDir}/whisper
chown graphite:graphite ${cfg.dataDir}
chown graphite:graphite ${cfg.dataDir}/whisper
''; '';
}; };
}) })
@ -436,12 +428,12 @@ in {
after = [ "network.target" ]; after = [ "network.target" ];
environment = carbonEnv; environment = carbonEnv;
serviceConfig = { serviceConfig = {
RuntimeDirectory = name;
ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}"; ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}";
User = "graphite"; User = "graphite";
Group = "graphite"; Group = "graphite";
PIDFile="/run/${name}/${name}.pid"; PIDFile="/run/${name}/${name}.pid";
}; };
preStart = mkPidFileDir name;
}; };
}) })
@ -452,12 +444,12 @@ in {
after = [ "network.target" ]; after = [ "network.target" ];
environment = carbonEnv; environment = carbonEnv;
serviceConfig = { serviceConfig = {
RuntimeDirectory = name;
ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}"; ExecStart = "${pkgs.pythonPackages.twisted}/bin/twistd ${carbonOpts name}";
User = "graphite"; User = "graphite";
Group = "graphite"; Group = "graphite";
PIDFile="/run/${name}/${name}.pid"; PIDFile="/run/${name}/${name}.pid";
}; };
preStart = mkPidFileDir name;
}; };
}) })
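The carbon units drop the hand-rolled `mkPidFileDir` preStart in favour of systemd's `RuntimeDirectory=`, which creates and owns `/run/<name>` for the unit automatically. A sketch of the idiom, with an illustrative service name and placeholder command:

```
{ pkgs, ... }:
{
  systemd.services.example-daemon = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      # systemd creates /run/example-daemon owned by User/Group,
      # replacing a manual `mkdir -p` + `chown` in preStart.
      RuntimeDirectory = "example-daemon";
      User = "graphite";
      Group = "graphite";
      PIDFile = "/run/example-daemon/example-daemon.pid";
      ExecStart = "${pkgs.hello}/bin/hello";  # placeholder command
    };
  };
}
```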

View File

@ -14,6 +14,10 @@ let
global = { global = {
"plugins directory" = "${wrappedPlugins}/libexec/netdata/plugins.d ${pkgs.netdata}/libexec/netdata/plugins.d"; "plugins directory" = "${wrappedPlugins}/libexec/netdata/plugins.d ${pkgs.netdata}/libexec/netdata/plugins.d";
}; };
web = {
"web files owner" = "root";
"web files group" = "root";
};
}; };
mkConfig = generators.toINI {} (recursiveUpdate localConfig cfg.config); mkConfig = generators.toINI {} (recursiveUpdate localConfig cfg.config);
configFile = pkgs.writeText "netdata.conf" (if cfg.configText != null then cfg.configText else mkConfig); configFile = pkgs.writeText "netdata.conf" (if cfg.configText != null then cfg.configText else mkConfig);
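`localConfig` now pins the ownership of netdata's web files to root, and `generators.toINI` renders the result of merging user-supplied `config` on top of it with `recursiveUpdate`. A hedged usage sketch; the `[global]` keys shown are examples only, not a complete netdata configuration:

```
{
  # Extra settings are merged into the module's localConfig and rendered
  # to netdata.conf via generators.toINI; configText overrides everything.
  services.netdata = {
    enable = true;
    config = {
      global = {
        "memory mode" = "ram";   # example key, adjust as needed
        "debug log" = "none";
      };
    };
  };
}
```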

View File

@ -73,7 +73,7 @@ let
description = '' description = ''
Specify a filter for iptables to use when Specify a filter for iptables to use when
<option>services.prometheus.exporters.${name}.openFirewall</option> <option>services.prometheus.exporters.${name}.openFirewall</option>
is true. It is used as `ip46tables -I INPUT <option>firewallFilter</option> -j ACCEPT`. is true. It is used as `ip46tables -I nixos-fw <option>firewallFilter</option> -j nixos-fw-accept`.
''; '';
}; };
user = mkOption { user = mkOption {
@ -116,9 +116,10 @@ let
mkExporterConf = { name, conf, serviceOpts }: mkExporterConf = { name, conf, serviceOpts }:
mkIf conf.enable { mkIf conf.enable {
networking.firewall.extraCommands = mkIf conf.openFirewall '' networking.firewall.extraCommands = mkIf conf.openFirewall (concatStrings [
ip46tables -I INPUT ${conf.firewallFilter} -j ACCEPT "ip46tables -I nixos-fw ${conf.firewallFilter} "
''; "-m comment --comment ${name}-exporter -j nixos-fw-accept"
]);
systemd.services."prometheus-${name}-exporter" = mkMerge ([{ systemd.services."prometheus-${name}-exporter" = mkMerge ([{
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
after = [ "network.target" ]; after = [ "network.target" ];

View File

@ -214,12 +214,10 @@ in
} }
]; ];
# Always provide a smb.conf to shut up programs like smbclient and smbspool. # Always provide a smb.conf to shut up programs like smbclient and smbspool.
environment.etc = singleton environment.etc."samba/smb.conf".source = mkOptionDefault (
{ source = if cfg.enable then configFile
if cfg.enable then configFile else pkgs.writeText "smb-dummy.conf" "# Samba is disabled."
else pkgs.writeText "smb-dummy.conf" "# Samba is disabled."; );
target = "samba/smb.conf";
};
} }
(mkIf cfg.enable { (mkIf cfg.enable {
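Switching to `environment.etc."samba/smb.conf".source` with `mkOptionDefault` lets other configuration override the generated (or dummy) smb.conf without an option conflict. A sketch of such an override, with purely illustrative file contents:

```
{ pkgs, ... }:
{
  # Because the module only sets an option *default*, a plain assignment
  # here wins without needing mkForce.
  environment.etc."samba/smb.conf".source = pkgs.writeText "smb.conf" ''
    [global]
    workgroup = WORKGROUP   # illustrative contents
  '';
}
```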

View File

@ -161,8 +161,9 @@ in
{ description = "DHCP Client"; { description = "DHCP Client";
wantedBy = [ "multi-user.target" ] ++ optional (!hasDefaultGatewaySet) "network-online.target"; wantedBy = [ "multi-user.target" ] ++ optional (!hasDefaultGatewaySet) "network-online.target";
after = [ "network.target" ]; wants = [ "network.target" "systemd-udev-settle.service" ];
wants = [ "network.target" ]; before = [ "network.target" ];
after = [ "systemd-udev-settle.service" ];
# Stopping dhcpcd during a reconfiguration is undesirable # Stopping dhcpcd during a reconfiguration is undesirable
# because it brings down the network interfaces configured by # because it brings down the network interfaces configured by

View File

@ -8,6 +8,7 @@ let
${optionalString cfg.userControlled.enable '' ${optionalString cfg.userControlled.enable ''
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=${cfg.userControlled.group} ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=${cfg.userControlled.group}
update_config=1''} update_config=1''}
${cfg.extraConfig}
${concatStringsSep "\n" (mapAttrsToList (ssid: config: with config; let ${concatStringsSep "\n" (mapAttrsToList (ssid: config: with config; let
key = if psk != null key = if psk != null
then ''"${psk}"'' then ''"${psk}"''
@ -165,6 +166,17 @@ in {
description = "Members of this group can control wpa_supplicant."; description = "Members of this group can control wpa_supplicant.";
}; };
}; };
extraConfig = mkOption {
type = types.str;
default = "";
example = ''
p2p_disabled=1
'';
description = ''
Extra lines appended to the configuration file.
See wpa_supplicant.conf(5) for available options.
'';
};
}; };
}; };
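The new `extraConfig` lines are spliced into the generated wpa_supplicant.conf ahead of the per-network blocks. A usage sketch, assuming the module's usual `networking.wireless` namespace; the settings are just examples from wpa_supplicant.conf(5):

```
{
  networking.wireless = {
    enable = true;
    # Appended verbatim to the generated wpa_supplicant.conf.
    extraConfig = ''
      p2p_disabled=1
      ap_scan=1
    '';
  };
}
```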

View File

@ -17,6 +17,15 @@ in
''; '';
}; };
options.services.zerotierone.port = mkOption {
default = 9993;
example = 9993;
type = types.int;
description = ''
Network port used by ZeroTier.
'';
};
options.services.zerotierone.package = mkOption { options.services.zerotierone.package = mkOption {
default = pkgs.zerotierone; default = pkgs.zerotierone;
defaultText = "pkgs.zerotierone"; defaultText = "pkgs.zerotierone";
@ -40,7 +49,7 @@ in
touch "/var/lib/zerotier-one/networks.d/${netId}.conf" touch "/var/lib/zerotier-one/networks.d/${netId}.conf"
'') cfg.joinNetworks); '') cfg.joinNetworks);
serviceConfig = { serviceConfig = {
ExecStart = "${cfg.package}/bin/zerotier-one"; ExecStart = "${cfg.package}/bin/zerotier-one -p${toString cfg.port}";
Restart = "always"; Restart = "always";
KillMode = "process"; KillMode = "process";
}; };
@ -49,8 +58,8 @@ in
# ZeroTier does not issue DHCP leases, but some strangers might... # ZeroTier does not issue DHCP leases, but some strangers might...
networking.dhcpcd.denyInterfaces = [ "zt*" ]; networking.dhcpcd.denyInterfaces = [ "zt*" ];
# ZeroTier receives UDP transmissions on port 9993 by default # ZeroTier receives UDP transmissions
networking.firewall.allowedUDPPorts = [ 9993 ]; networking.firewall.allowedUDPPorts = [ cfg.port ];
environment.systemPackages = [ cfg.package ]; environment.systemPackages = [ cfg.package ];
}; };
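With the new `port` option, the UDP firewall rule and the `-p` flag passed to `zerotier-one` stay in sync. A short usage sketch; the network ID is a placeholder:

```
{
  services.zerotierone = {
    enable = true;
    port = 9994;                            # overrides the 9993 default
    joinNetworks = [ "8056c2e21c000001" ];  # illustrative network id
  };
}
```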

View File

@ -0,0 +1,194 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.certmgr;
specs = mapAttrsToList (n: v: rec {
name = n + ".json";
path = if isAttrs v then pkgs.writeText name (builtins.toJSON v) else v;
}) cfg.specs;
allSpecs = pkgs.linkFarm "certmgr.d" specs;
certmgrYaml = pkgs.writeText "certmgr.yaml" (builtins.toJSON {
dir = allSpecs;
default_remote = cfg.defaultRemote;
svcmgr = cfg.svcManager;
before = cfg.validMin;
interval = cfg.renewInterval;
inherit (cfg) metricsPort metricsAddress;
});
specPaths = map dirOf (concatMap (spec:
if isAttrs spec then
collect isString (filterAttrsRecursive (n: v: isAttrs v || n == "path") spec)
else
[ spec ]
) (attrValues cfg.specs));
preStart = ''
${concatStringsSep " \\\n" (["mkdir -p"] ++ map escapeShellArg specPaths)}
${pkgs.certmgr}/bin/certmgr -f ${certmgrYaml} check
'';
in
{
options.services.certmgr = {
enable = mkEnableOption "certmgr";
defaultRemote = mkOption {
type = types.str;
default = "127.0.0.1:8888";
description = "The default CA host:port to use.";
};
validMin = mkOption {
default = "72h";
type = types.str;
description = "The interval before a certificate expires to start attempting to renew it.";
};
renewInterval = mkOption {
default = "30m";
type = types.str;
description = "How often to check certificate expirations and how often to update the cert_next_expires metric.";
};
metricsAddress = mkOption {
default = "127.0.0.1";
type = types.str;
description = "The address for the Prometheus HTTP endpoint.";
};
metricsPort = mkOption {
default = 9488;
type = types.ints.u16;
description = "The port for the Prometheus HTTP endpoint.";
};
specs = mkOption {
default = {};
example = literalExample ''
{
exampleCert =
let
domain = "example.com";
secret = name: "/var/lib/secrets/''${name}.pem";
in {
service = "nginx";
action = "reload";
authority = {
file.path = secret "ca";
};
certificate = {
path = secret domain;
};
private_key = {
owner = "root";
group = "root";
mode = "0600";
path = secret "''${domain}-key";
};
request = {
CN = domain;
hosts = [ "mail.''${domain}" "www.''${domain}" ];
key = {
algo = "rsa";
size = 2048;
};
names = {
O = "Example Organization";
C = "USA";
};
};
};
otherCert = "/var/certmgr/specs/other-cert.json";
}
'';
type = with types; attrsOf (either (submodule {
options = {
service = mkOption {
type = nullOr str;
default = null;
description = "The service on which to perform &lt;action&gt; after fetching.";
};
action = mkOption {
type = addCheck str (x: cfg.svcManager == "command" || elem x ["restart" "reload" "nop"]);
default = "nop";
description = "The action to take after fetching.";
};
# These ought all to be specified according to certmgr spec def.
authority = mkOption {
type = attrs;
description = "certmgr spec authority object.";
};
certificate = mkOption {
type = nullOr attrs;
description = "certmgr spec certificate object.";
};
private_key = mkOption {
type = nullOr attrs;
description = "certmgr spec private_key object.";
};
request = mkOption {
type = nullOr attrs;
description = "certmgr spec request object.";
};
};
}) path);
description = ''
Certificate specs as described by:
<link xlink:href="https://github.com/cloudflare/certmgr#certificate-specs" />
These will be added to the Nix store, so they will be world readable.
'';
};
svcManager = mkOption {
default = "systemd";
type = types.enum [ "circus" "command" "dummy" "openrc" "systemd" "sysv" ];
description = ''
This specifies the service manager to use for restarting or reloading services.
See: <link xlink:href="https://github.com/cloudflare/certmgr#certmgryaml" />.
For how to use the "command" service manager in particular,
see: <link xlink:href="https://github.com/cloudflare/certmgr#command-svcmgr-and-how-to-use-it" />.
'';
};
};
config = mkIf cfg.enable {
assertions = [
{
assertion = cfg.specs != {};
message = "Certmgr specs cannot be empty.";
}
{
assertion = !any (hasAttrByPath [ "authority" "auth_key" ]) (attrValues cfg.specs);
message = ''
Inline services.certmgr.specs are added to the Nix store rendering them world readable.
Specify paths as specs if you want to include auth_key, or use the auth_key_file option.
'';
}
];
systemd.services.certmgr = {
description = "certmgr";
path = mkIf (cfg.svcManager == "command") [ pkgs.bash ];
after = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
inherit preStart;
serviceConfig = {
Restart = "always";
RestartSec = "10s";
ExecStart = "${pkgs.certmgr}/bin/certmgr -f ${certmgrYaml}";
};
};
};
}
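Putting the pieces together, a minimal configuration might enable certmgr with one inline spec pointing at a local CA; the paths and domain are placeholders, and, as the option description warns, inline specs end up world-readable in the Nix store:

```
{
  services.certmgr = {
    enable = true;
    defaultRemote = "127.0.0.1:8888";   # e.g. a local cfssl server
    svcManager = "systemd";
    specs.example = {
      service = "nginx";
      action  = "reload";
      authority.file.path = "/var/lib/secrets/ca.pem";       # placeholder
      certificate.path    = "/var/lib/secrets/example.pem";  # placeholder
      private_key = {
        owner = "root"; group = "root"; mode = "0600";
        path = "/var/lib/secrets/example-key.pem";           # placeholder
      };
      request = {
        CN = "example.com";
        hosts = [ "example.com" ];
        key = { algo = "rsa"; size = 2048; };
      };
    };
  };
}
```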

View File

@ -0,0 +1,209 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.cfssl;
in {
options.services.cfssl = {
enable = mkEnableOption "the CFSSL CA api-server";
dataDir = mkOption {
default = "/var/lib/cfssl";
type = types.path;
description = "Cfssl work directory.";
};
address = mkOption {
default = "127.0.0.1";
type = types.str;
description = "Address to bind.";
};
port = mkOption {
default = 8888;
type = types.ints.u16;
description = "Port to bind.";
};
ca = mkOption {
defaultText = "\${cfg.dataDir}/ca.pem";
type = types.str;
description = "CA used to sign the new certificate -- accepts '[file:]fname' or 'env:varname'.";
};
caKey = mkOption {
defaultText = "file:\${cfg.dataDir}/ca-key.pem";
type = types.str;
description = "CA private key -- accepts '[file:]fname' or 'env:varname'.";
};
caBundle = mkOption {
default = null;
type = types.nullOr types.path;
description = "Path to root certificate store.";
};
intBundle = mkOption {
default = null;
type = types.nullOr types.path;
description = "Path to intermediate certificate store.";
};
intDir = mkOption {
default = null;
type = types.nullOr types.path;
description = "Intermediates directory.";
};
metadata = mkOption {
default = null;
type = types.nullOr types.path;
description = ''
Metadata file for root certificate presence.
The content of the file is a json dictionary (k,v): each key k is
a SHA-1 digest of a root certificate while value v is a list of key
store filenames.
'';
};
remote = mkOption {
default = null;
type = types.nullOr types.str;
description = "Remote CFSSL server.";
};
configFile = mkOption {
default = null;
type = types.nullOr types.str;
description = "Path to configuration file. Do not put this in nix-store as it might contain secrets.";
};
responder = mkOption {
default = null;
type = types.nullOr types.path;
description = "Certificate for OCSP responder.";
};
responderKey = mkOption {
default = null;
type = types.nullOr types.str;
description = "Private key for OCSP responder certificate. Do not put this in nix-store.";
};
tlsKey = mkOption {
default = null;
type = types.nullOr types.str;
description = "Other endpoint's CA private key. Do not put this in nix-store.";
};
tlsCert = mkOption {
default = null;
type = types.nullOr types.path;
description = "Other endpoint's CA to set up TLS protocol.";
};
mutualTlsCa = mkOption {
default = null;
type = types.nullOr types.path;
description = "Mutual TLS - require clients be signed by this CA.";
};
mutualTlsCn = mkOption {
default = null;
type = types.nullOr types.str;
description = "Mutual TLS - regex for whitelist of allowed client CNs.";
};
tlsRemoteCa = mkOption {
default = null;
type = types.nullOr types.path;
description = "CAs to trust for remote TLS requests.";
};
mutualTlsClientCert = mkOption {
default = null;
type = types.nullOr types.path;
description = "Mutual TLS - client certificate to call remote instance requiring client certs.";
};
mutualTlsClientKey = mkOption {
default = null;
type = types.nullOr types.path;
description = "Mutual TLS - client key to call remote instance requiring client certs. Do not put this in nix-store.";
};
dbConfig = mkOption {
default = null;
type = types.nullOr types.path;
description = "Certificate db configuration file. Path must be writeable.";
};
logLevel = mkOption {
default = 1;
type = types.enum [ 0 1 2 3 4 5 ];
description = "Log level (0 = DEBUG, 5 = FATAL).";
};
};
config = mkIf cfg.enable {
users.extraGroups.cfssl = {
gid = config.ids.gids.cfssl;
};
users.extraUsers.cfssl = {
description = "cfssl user";
createHome = true;
home = cfg.dataDir;
group = "cfssl";
uid = config.ids.uids.cfssl;
};
systemd.services.cfssl = {
description = "CFSSL CA API server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
WorkingDirectory = cfg.dataDir;
StateDirectory = cfg.dataDir;
StateDirectoryMode = 700;
Restart = "always";
User = "cfssl";
ExecStart = with cfg; let
opt = n: v: optionalString (v != null) ''-${n}="${v}"'';
in
lib.concatStringsSep " \\\n" [
"${pkgs.cfssl}/bin/cfssl serve"
(opt "address" address)
(opt "port" (toString port))
(opt "ca" ca)
(opt "ca-key" caKey)
(opt "ca-bundle" caBundle)
(opt "int-bundle" intBundle)
(opt "int-dir" intDir)
(opt "metadata" metadata)
(opt "remote" remote)
(opt "config" configFile)
(opt "responder" responder)
(opt "responder-key" responderKey)
(opt "tls-key" tlsKey)
(opt "tls-cert" tlsCert)
(opt "mutual-tls-ca" mutualTlsCa)
(opt "mutual-tls-cn" mutualTlsCn)
(opt "mutual-tls-client-key" mutualTlsClientKey)
(opt "mutual-tls-client-cert" mutualTlsClientCert)
(opt "tls-remote-ca" tlsRemoteCa)
(opt "db-config" dbConfig)
(opt "loglevel" (toString logLevel))
];
};
};
services.cfssl = {
ca = mkDefault "${cfg.dataDir}/ca.pem";
caKey = mkDefault "${cfg.dataDir}/ca-key.pem";
};
};
}
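A correspondingly minimal CFSSL server, relying on the module's defaults for `ca` and `caKey` inside `dataDir`; those key files must already exist there and should not live in the Nix store:

```
{
  services.cfssl = {
    enable = true;
    address = "127.0.0.1";
    port = 8888;
    # ca/caKey default to ${dataDir}/ca.pem and file:${dataDir}/ca-key.pem;
    # place those files in /var/lib/cfssl, outside the Nix store.
    logLevel = 2;
  };
}
```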

View File

@ -1,6 +1,7 @@
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:
with lib; with lib;
let let
cfg = config.services.vault; cfg = config.services.vault;
@ -24,15 +25,22 @@ let
${cfg.telemetryConfig} ${cfg.telemetryConfig}
} }
''} ''}
${cfg.extraConfig}
''; '';
in in
{ {
options = { options = {
services.vault = { services.vault = {
enable = mkEnableOption "Vault daemon"; enable = mkEnableOption "Vault daemon";
package = mkOption {
type = types.package;
default = pkgs.vault;
defaultText = "pkgs.vault";
description = "This option specifies the vault package to use.";
};
address = mkOption { address = mkOption {
type = types.str; type = types.str;
default = "127.0.0.1:8200"; default = "127.0.0.1:8200";
@ -58,7 +66,7 @@ in
default = '' default = ''
tls_min_version = "tls12" tls_min_version = "tls12"
''; '';
description = "extra configuration"; description = "Extra text appended to the listener section.";
}; };
storageBackend = mkOption { storageBackend = mkOption {
@ -84,6 +92,12 @@ in
default = ""; default = "";
description = "Telemetry configuration"; description = "Telemetry configuration";
}; };
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Extra text appended to <filename>vault.hcl</filename>.";
};
}; };
}; };
@ -122,7 +136,7 @@ in
User = "vault"; User = "vault";
Group = "vault"; Group = "vault";
PermissionsStartOnly = true; PermissionsStartOnly = true;
ExecStart = "${pkgs.vault}/bin/vault server -config ${configFile}"; ExecStart = "${cfg.package}/bin/vault server -config ${configFile}";
PrivateDevices = true; PrivateDevices = true;
PrivateTmp = true; PrivateTmp = true;
ProtectSystem = "full"; ProtectSystem = "full";
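The new `package` and `extraConfig` options allow pinning the Vault build and appending raw HCL to the generated `vault.hcl`. A hedged sketch; the appended setting is illustrative:

```
{ pkgs, ... }:
{
  services.vault = {
    enable = true;
    package = pkgs.vault;          # pin or override the Vault build
    address = "127.0.0.1:8200";
    # Appended verbatim to vault.hcl.
    extraConfig = ''
      ui = true
    '';
  };
}
```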

View File

@ -104,8 +104,9 @@ in
systemd.services.cloud-init = systemd.services.cloud-init =
{ description = "Initial cloud-init job (metadata service crawler)"; { description = "Initial cloud-init job (metadata service crawler)";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
wants = [ "local-fs.target" "cloud-init-local.service" "sshd.service" "sshd-keygen.service" ]; wants = [ "local-fs.target" "network-online.target" "cloud-init-local.service"
after = [ "local-fs.target" "network.target" "cloud-init-local.service" ]; "sshd.service" "sshd-keygen.service" ];
after = [ "local-fs.target" "network-online.target" "cloud-init-local.service" ];
before = [ "sshd.service" "sshd-keygen.service" ]; before = [ "sshd.service" "sshd-keygen.service" ];
requires = [ "network.target "]; requires = [ "network.target "];
path = path; path = path;
@ -121,8 +122,8 @@ in
systemd.services.cloud-config = systemd.services.cloud-config =
{ description = "Apply the settings specified in cloud-config"; { description = "Apply the settings specified in cloud-config";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
wants = [ "network.target" ]; wants = [ "network-online.target" ];
after = [ "network.target" "syslog.target" "cloud-config.target" ]; after = [ "network-online.target" "syslog.target" "cloud-config.target" ];
path = path; path = path;
serviceConfig = serviceConfig =
@ -137,8 +138,8 @@ in
systemd.services.cloud-final = systemd.services.cloud-final =
{ description = "Execute cloud user/final scripts"; { description = "Execute cloud user/final scripts";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
wants = [ "network.target" ]; wants = [ "network-online.target" ];
after = [ "network.target" "syslog.target" "cloud-config.service" "rc-local.service" ]; after = [ "network-online.target" "syslog.target" "cloud-config.service" "rc-local.service" ];
requires = [ "cloud-config.target" ]; requires = [ "cloud-config.target" ];
path = path; path = path;
serviceConfig = serviceConfig =

View File

@ -22,14 +22,8 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
services.geoclue2.enable = true; services.geoclue2.enable = true;
security.polkit.extraConfig = '' # so polkit will pick up the rules
polkit.addRule(function(action, subject) { environment.systemPackages = [ pkgs.localtime ];
if (action.id == "org.freedesktop.timedate1.set-timezone"
&& subject.user == "localtimed") {
return polkit.Result.YES;
}
});
'';
users.users = [{ users.users = [{
name = "localtimed"; name = "localtimed";

View File

@ -118,14 +118,14 @@ in
systemd.services.youtrack = { systemd.services.youtrack = {
environment.HOME = cfg.statePath; environment.HOME = cfg.statePath;
environment.YOUTRACK_JVM_OPTS = "-Xmx${cfg.maxMemory} -XX:MaxMetaspaceSize=${cfg.maxMetaspaceSize} ${cfg.jvmOpts} ${extraAttr}"; environment.YOUTRACK_JVM_OPTS = "${extraAttr}";
after = [ "network.target" ]; after = [ "network.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
serviceConfig = { serviceConfig = {
Type = "simple"; Type = "simple";
User = "youtrack"; User = "youtrack";
Group = "youtrack"; Group = "youtrack";
ExecStart = ''${cfg.package}/bin/youtrack ${cfg.address}:${toString cfg.port}''; ExecStart = ''${cfg.package}/bin/youtrack --J-Xmx${cfg.maxMemory} --J-XX:MaxMetaspaceSize=${cfg.maxMetaspaceSize} ${cfg.jvmOpts} ${cfg.address}:${toString cfg.port}'';
}; };
}; };

View File

@ -1,6 +1,8 @@
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:
let cfg = config.services.hydron; let
cfg = config.services.hydron;
postgres = config.services.postgresql;
in with lib; { in with lib; {
options.services.hydron = { options.services.hydron = {
enable = mkEnableOption "hydron"; enable = mkEnableOption "hydron";
@ -14,10 +16,10 @@ in with lib; {
interval = mkOption { interval = mkOption {
type = types.str; type = types.str;
default = "hourly"; default = "weekly";
example = "06:00"; example = "06:00";
description = '' description = ''
How often we run hydron import and possibly fetch tags. Runs by default every hour. How often we run hydron import and possibly fetch tags. Runs by default every week.
The format is described in The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle> <citerefentry><refentrytitle>systemd.time</refentrytitle>
@ -25,6 +27,38 @@ in with lib; {
''; '';
}; };
password = mkOption {
type = types.str;
default = "hydron";
example = "dumbpass";
description = "Password for the hydron database.";
};
passwordFile = mkOption {
type = types.path;
default = "/run/keys/hydron-password-file";
example = "/home/okina/hydron/keys/pass";
description = "Password file for the hydron database.";
};
postgresArgs = mkOption {
type = types.str;
description = "Postgresql connection arguments.";
example = ''
{
"driver": "postgres",
"connection": "user=hydron password=dumbpass dbname=hydron sslmode=disable"
}
'';
};
postgresArgsFile = mkOption {
type = types.path;
default = "/run/keys/hydron-postgres-args";
example = "/home/okina/hydron/keys/postgres";
description = "Postgresql connection arguments file.";
};
listenAddress = mkOption { listenAddress = mkOption {
type = types.nullOr types.str; type = types.nullOr types.str;
default = null; default = null;
@ -47,16 +81,36 @@ in with lib; {
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.sudo.enable = cfg.enable;
services.postgresql.enable = cfg.enable;
services.hydron.passwordFile = mkDefault (pkgs.writeText "hydron-password-file" cfg.password);
services.hydron.postgresArgsFile = mkDefault (pkgs.writeText "hydron-postgres-args" cfg.postgresArgs);
services.hydron.postgresArgs = mkDefault ''
{
"driver": "postgres",
"connection": "user=hydron password=${cfg.password} dbname=hydron sslmode=disable"
}
'';
systemd.services.hydron = { systemd.services.hydron = {
description = "hydron"; description = "hydron";
after = [ "network.target" ]; after = [ "network.target" "postgresql.service" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
preStart = '' preStart = ''
# Ensure folder exists and permissions are correct # Ensure folder exists or create it and permissions are correct
mkdir -p ${escapeShellArg cfg.dataDir}/images mkdir -p ${escapeShellArg cfg.dataDir}/{.hydron,images}
ln -sf ${escapeShellArg cfg.postgresArgsFile} ${escapeShellArg cfg.dataDir}/.hydron/db_conf.json
chmod 750 ${escapeShellArg cfg.dataDir} chmod 750 ${escapeShellArg cfg.dataDir}
chown -R hydron:hydron ${escapeShellArg cfg.dataDir} chown -R hydron:hydron ${escapeShellArg cfg.dataDir}
# Ensure the database is correct or create it
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createuser \
-SDR hydron || true
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createdb \
-T template0 -E UTF8 -O hydron hydron || true
${pkgs.sudo}/bin/sudo -u hydron ${postgres.package}/bin/psql \
-c "ALTER ROLE hydron WITH PASSWORD '$(cat ${escapeShellArg cfg.passwordFile})';" || true
''; '';
serviceConfig = { serviceConfig = {
@ -83,9 +137,13 @@ in with lib; {
systemd.timers.hydron-fetch = { systemd.timers.hydron-fetch = {
description = "Automatically import paths into hydron and possibly fetch tags"; description = "Automatically import paths into hydron and possibly fetch tags";
after = [ "network.target" ]; after = [ "network.target" "hydron.service" ];
wantedBy = [ "timers.target" ]; wantedBy = [ "timers.target" ];
timerConfig.OnCalendar = cfg.interval;
timerConfig = {
Persistent = true;
OnCalendar = cfg.interval;
};
}; };
users = { users = {
@ -101,5 +159,9 @@ in with lib; {
}; };
}; };
imports = [
(mkRenamedOptionModule [ "services" "hydron" "baseDir" ] [ "services" "hydron" "dataDir" ])
];
meta.maintainers = with maintainers; [ chiiruno ]; meta.maintainers = with maintainers; [ chiiruno ];
} }
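In practice most of the new hydron options can stay at their defaults, since the module now provisions the PostgreSQL role and database itself. A sketch; the password shown is obviously a placeholder and ends up world-readable unless `passwordFile` points outside the store:

```
{
  services.hydron = {
    enable = true;
    dataDir = "/var/lib/hydron";
    interval = "daily";              # systemd.time(7) calendar expression
    listenAddress = "127.0.0.1:8010";
    password = "changeme";           # placeholder; prefer passwordFile
  };
}
```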

View File

@ -1,65 +1,71 @@
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:
with lib;
let let
cfg = config.services.meguca; cfg = config.services.meguca;
postgres = config.services.postgresql; postgres = config.services.postgresql;
in in with lib; {
{
options.services.meguca = { options.services.meguca = {
enable = mkEnableOption "meguca"; enable = mkEnableOption "meguca";
baseDir = mkOption { dataDir = mkOption {
type = types.path; type = types.path;
default = "/run/meguca"; default = "/var/lib/meguca";
example = "/home/okina/meguca";
description = "Location where meguca stores it's database and links."; description = "Location where meguca stores it's database and links.";
}; };
password = mkOption { password = mkOption {
type = types.str; type = types.str;
default = "meguca"; default = "meguca";
example = "dumbpass";
description = "Password for the meguca database."; description = "Password for the meguca database.";
}; };
passwordFile = mkOption { passwordFile = mkOption {
type = types.path; type = types.path;
default = "/run/keys/meguca-password-file"; default = "/run/keys/meguca-password-file";
example = "/home/okina/meguca/keys/pass";
description = "Password file for the meguca database."; description = "Password file for the meguca database.";
}; };
reverseProxy = mkOption { reverseProxy = mkOption {
type = types.nullOr types.str; type = types.nullOr types.str;
default = null; default = null;
example = "192.168.1.5";
description = "Reverse proxy IP."; description = "Reverse proxy IP.";
}; };
sslCertificate = mkOption { sslCertificate = mkOption {
type = types.nullOr types.str; type = types.nullOr types.str;
default = null; default = null;
example = "/home/okina/meguca/ssl.cert";
description = "Path to the SSL certificate."; description = "Path to the SSL certificate.";
}; };
listenAddress = mkOption { listenAddress = mkOption {
type = types.nullOr types.str; type = types.nullOr types.str;
default = null; default = null;
example = "127.0.0.1:8000";
description = "Listen on a specific IP address and port."; description = "Listen on a specific IP address and port.";
}; };
cacheSize = mkOption { cacheSize = mkOption {
type = types.nullOr types.int; type = types.nullOr types.int;
default = null; default = null;
example = 256;
description = "Cache size in MB."; description = "Cache size in MB.";
}; };
postgresArgs = mkOption { postgresArgs = mkOption {
type = types.str; type = types.str;
default = "user=meguca password=" + cfg.password + " dbname=meguca sslmode=disable"; example = "user=meguca password=dumbpass dbname=meguca sslmode=disable";
description = "Postgresql connection arguments."; description = "Postgresql connection arguments.";
}; };
postgresArgsFile = mkOption { postgresArgsFile = mkOption {
type = types.path; type = types.path;
default = "/run/keys/meguca-postgres-args"; default = "/run/keys/meguca-postgres-args";
example = "/home/okina/meguca/keys/postgres";
description = "Postgresql connection arguments file."; description = "Postgresql connection arguments file.";
}; };
@ -83,18 +89,11 @@ in
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.sudo.enable = cfg.enable == true; security.sudo.enable = cfg.enable;
services.postgresql.enable = cfg.enable == true; services.postgresql.enable = cfg.enable;
services.meguca.passwordFile = mkDefault (pkgs.writeText "meguca-password-file" cfg.password);
services.meguca.passwordFile = mkDefault (toString (pkgs.writeTextFile { services.meguca.postgresArgsFile = mkDefault (pkgs.writeText "meguca-postgres-args" cfg.postgresArgs);
name = "meguca-password-file"; services.meguca.postgresArgs = mkDefault "user=meguca password=${cfg.password} dbname=meguca sslmode=disable";
text = cfg.password;
}));
services.meguca.postgresArgsFile = mkDefault (toString (pkgs.writeTextFile {
name = "meguca-postgres-args";
text = cfg.postgresArgs;
}));
systemd.services.meguca = { systemd.services.meguca = {
description = "meguca"; description = "meguca";
@ -102,10 +101,11 @@ in
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
preStart = '' preStart = ''
# Ensure folder exists and links are correct or create them # Ensure folder exists or create it and links and permissions are correct
mkdir -p ${cfg.baseDir} mkdir -p ${escapeShellArg cfg.dataDir}
chmod 750 ${cfg.baseDir} ln -sf ${pkgs.meguca}/share/meguca/www ${escapeShellArg cfg.dataDir}
ln -sf ${pkgs.meguca}/share/meguca/www ${cfg.baseDir} chmod 750 ${escapeShellArg cfg.dataDir}
chown -R meguca:meguca ${escapeShellArg cfg.dataDir}
# Ensure the database is correct or create it # Ensure the database is correct or create it
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createuser \ ${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createuser \
@ -113,47 +113,46 @@ in
${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createdb \ ${pkgs.sudo}/bin/sudo -u ${postgres.superUser} ${postgres.package}/bin/createdb \
-T template0 -E UTF8 -O meguca meguca || true -T template0 -E UTF8 -O meguca meguca || true
${pkgs.sudo}/bin/sudo -u meguca ${postgres.package}/bin/psql \ ${pkgs.sudo}/bin/sudo -u meguca ${postgres.package}/bin/psql \
-c "ALTER ROLE meguca WITH PASSWORD '$(cat ${cfg.passwordFile})';" || true -c "ALTER ROLE meguca WITH PASSWORD '$(cat ${escapeShellArg cfg.passwordFile})';" || true
''; '';
script = '' script = ''
cd ${cfg.baseDir} cd ${escapeShellArg cfg.dataDir}
${pkgs.meguca}/bin/meguca -d "$(cat ${cfg.postgresArgsFile})"\ ${pkgs.meguca}/bin/meguca -d "$(cat ${escapeShellArg cfg.postgresArgsFile})"''
${optionalString (cfg.reverseProxy != null) " -R ${cfg.reverseProxy}"}\ + optionalString (cfg.reverseProxy != null) " -R ${cfg.reverseProxy}"
${optionalString (cfg.sslCertificate != null) " -S ${cfg.sslCertificate}"}\ + optionalString (cfg.sslCertificate != null) " -S ${cfg.sslCertificate}"
${optionalString (cfg.listenAddress != null) " -a ${cfg.listenAddress}"}\ + optionalString (cfg.listenAddress != null) " -a ${cfg.listenAddress}"
${optionalString (cfg.cacheSize != null) " -c ${toString cfg.cacheSize}"}\ + optionalString (cfg.cacheSize != null) " -c ${toString cfg.cacheSize}"
${optionalString (cfg.compressTraffic) " -g"}\ + optionalString (cfg.compressTraffic) " -g"
${optionalString (cfg.assumeReverseProxy) " -r"}\ + optionalString (cfg.assumeReverseProxy) " -r"
${optionalString (cfg.httpsOnly) " -s"} start + optionalString (cfg.httpsOnly) " -s" + " start";
'';
serviceConfig = { serviceConfig = {
PermissionsStartOnly = true; PermissionsStartOnly = true;
Type = "forking"; Type = "forking";
User = "meguca"; User = "meguca";
Group = "meguca"; Group = "meguca";
RuntimeDirectory = "meguca";
ExecStop = "${pkgs.meguca}/bin/meguca stop"; ExecStop = "${pkgs.meguca}/bin/meguca stop";
}; };
}; };
users = { users = {
groups.meguca.gid = config.ids.gids.meguca;
users.meguca = { users.meguca = {
description = "meguca server service user"; description = "meguca server service user";
home = cfg.baseDir; home = cfg.dataDir;
createHome = true; createHome = true;
group = "meguca"; group = "meguca";
uid = config.ids.uids.meguca; uid = config.ids.uids.meguca;
}; };
groups.meguca = {
gid = config.ids.gids.meguca;
members = [ "meguca" ];
};
}; };
}; };
imports = [
(mkRenamedOptionModule [ "services" "meguca" "baseDir" ] [ "services" "meguca" "dataDir" ])
];
meta.maintainers = with maintainers; [ chiiruno ]; meta.maintainers = with maintainers; [ chiiruno ];
} }
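The meguca module mirrors the hydron changes (the `baseDir` to `dataDir` rename, generated password/postgres-args files, `RuntimeDirectory`). A minimal sketch using the renamed option; the address and cache size are examples:

```
{
  services.meguca = {
    enable = true;
    dataDir = "/var/lib/meguca";      # formerly baseDir
    listenAddress = "127.0.0.1:8000";
    cacheSize = 256;
  };
}
```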

View File

@ -96,13 +96,13 @@ in
else if any (w: w.name == defaultDM) cfg.session.list then else if any (w: w.name == defaultDM) cfg.session.list then
defaultDM defaultDM
else else
throw '' builtins.trace ''
Default desktop manager (${defaultDM}) not found. Default desktop manager (${defaultDM}) not found at evaluation time.
Probably you want to change These are the known valid session names:
services.xserver.desktopManager.default = "${defaultDM}";
to one of
${concatMapStringsSep "\n " (w: "services.xserver.desktopManager.default = \"${w.name}\";") cfg.session.list} ${concatMapStringsSep "\n " (w: "services.xserver.desktopManager.default = \"${w.name}\";") cfg.session.list}
''; It's also possible the default can be found in one of these packages:
${concatMapStringsSep "\n " (p: p.name) config.services.xserver.displayManager.extraSessionFilePackages}
'' defaultDM;
}; };
}; };

View File

@ -57,8 +57,12 @@ in {
sessionPath = mkOption { sessionPath = mkOption {
default = []; default = [];
example = literalExample "[ pkgs.gnome3.gpaste ]"; example = literalExample "[ pkgs.gnome3.gpaste ]";
description = "Additional list of packages to be added to the session search path. description = ''
Useful for gnome shell extensions or gsettings-conditionated autostart."; Additional list of packages to be added to the session search path.
Useful for GNOME Shell extensions or GSettings-conditional autostart.
Note that this should be a last resort; patching the package is preferred (see GPaste).
'';
apply = list: list ++ [ pkgs.gnome3.gnome-shell pkgs.gnome3.gnome-shell-extensions ]; apply = list: list ++ [ pkgs.gnome3.gnome-shell pkgs.gnome3.gnome-shell-extensions ];
}; };
@ -93,6 +97,8 @@ in {
services.udisks2.enable = true; services.udisks2.enable = true;
services.accounts-daemon.enable = true; services.accounts-daemon.enable = true;
services.geoclue2.enable = mkDefault true; services.geoclue2.enable = mkDefault true;
# GNOME should have its own geoclue agent
services.geoclue2.enableDemoAgent = false;
services.dleyna-renderer.enable = mkDefault true; services.dleyna-renderer.enable = mkDefault true;
services.dleyna-server.enable = mkDefault true; services.dleyna-server.enable = mkDefault true;
services.gnome3.at-spi2-core.enable = true; services.gnome3.at-spi2-core.enable = true;
@ -126,18 +132,10 @@ in {
fonts.fonts = [ pkgs.dejavu_fonts pkgs.cantarell-fonts ]; fonts.fonts = [ pkgs.dejavu_fonts pkgs.cantarell-fonts ];
services.xserver.desktopManager.session = singleton services.xserver.displayManager.extraSessionFilePackages = [ pkgs.gnome3.gnome-session ];
{ name = "gnome3";
bgSupport = true;
start = ''
# Set GTK_DATA_PREFIX so that GTK+ can find the themes
export GTK_DATA_PREFIX=${config.system.path}
# find theme engines
export GTK_PATH=${config.system.path}/lib/gtk-3.0:${config.system.path}/lib/gtk-2.0
export XDG_MENU_PREFIX=gnome-
services.xserver.displayManager.sessionCommands = ''
if test "$XDG_CURRENT_DESKTOP" = "GNOME"; then
${concatMapStrings (p: '' ${concatMapStrings (p: ''
if [ -d "${p}/share/gsettings-schemas/${p.name}" ]; then if [ -d "${p}/share/gsettings-schemas/${p.name}" ]; then
export XDG_DATA_DIRS=$XDG_DATA_DIRS''${XDG_DATA_DIRS:+:}${p}/share/gsettings-schemas/${p.name} export XDG_DATA_DIRS=$XDG_DATA_DIRS''${XDG_DATA_DIRS:+:}${p}/share/gsettings-schemas/${p.name}
@ -148,34 +146,28 @@ in {
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH''${LD_LIBRARY_PATH:+:}${p}/lib export LD_LIBRARY_PATH=$LD_LIBRARY_PATH''${LD_LIBRARY_PATH:+:}${p}/lib
fi fi
'') cfg.sessionPath} '') cfg.sessionPath}
fi
'';
# Override default mimeapps environment.variables.GNOME_SESSION_DEBUG = optionalString cfg.debug "1";
export XDG_DATA_DIRS=$XDG_DATA_DIRS''${XDG_DATA_DIRS:+:}${mimeAppsList}/share
# Override gsettings-desktop-schema # Override default mimeapps
export NIX_GSETTINGS_OVERRIDES_DIR=${nixos-gsettings-desktop-schemas}/share/gsettings-schemas/nixos-gsettings-overrides/glib-2.0/schemas environment.variables.XDG_DATA_DIRS = [ "${mimeAppsList}/share" ];
# Let nautilus find extensions # Override GSettings schemas
export NAUTILUS_EXTENSION_DIR=${config.system.path}/lib/nautilus/extensions-3.0/ environment.variables.NIX_GSETTINGS_OVERRIDES_DIR = "${nixos-gsettings-desktop-schemas}/share/gsettings-schemas/nixos-gsettings-overrides/glib-2.0/schemas";
# Find the mouse # Let nautilus find extensions
export XCURSOR_PATH=~/.icons:${config.system.path}/share/icons # TODO: Create nautilus-with-extensions package
environment.variables.NAUTILUS_EXTENSION_DIR = "${config.system.path}/lib/nautilus/extensions-3.0";
# Update user dirs as described in http://freedesktop.org/wiki/Software/xdg-user-dirs/
${pkgs.xdg-user-dirs}/bin/xdg-user-dirs-update
${pkgs.gnome3.gnome-session}/bin/gnome-session ${optionalString cfg.debug "--debug"} &
waitPID=$!
'';
};
services.xserver.updateDbusEnvironment = true;
environment.variables.GIO_EXTRA_MODULES = [ "${lib.getLib pkgs.gnome3.dconf}/lib/gio/modules" environment.variables.GIO_EXTRA_MODULES = [ "${lib.getLib pkgs.gnome3.dconf}/lib/gio/modules"
"${pkgs.gnome3.glib-networking.out}/lib/gio/modules" "${pkgs.gnome3.glib-networking.out}/lib/gio/modules"
"${pkgs.gnome3.gvfs}/lib/gio/modules" ]; "${pkgs.gnome3.gvfs}/lib/gio/modules" ];
environment.systemPackages = pkgs.gnome3.corePackages ++ cfg.sessionPath environment.systemPackages = pkgs.gnome3.corePackages ++ cfg.sessionPath
++ (removePackagesByName pkgs.gnome3.optionalPackages config.environment.gnome3.excludePackages); ++ (removePackagesByName pkgs.gnome3.optionalPackages config.environment.gnome3.excludePackages) ++ [
pkgs.xdg-user-dirs # Update user dirs as described in http://freedesktop.org/wiki/Software/xdg-user-dirs/
];
# Use the correct gnome3 packageSet # Use the correct gnome3 packageSet
networking.networkmanager.basePackages = networking.networkmanager.basePackages =

View File

@ -224,7 +224,7 @@ in
# Update the start menu for each user that has `isNormalUser` set. # Update the start menu for each user that has `isNormalUser` set.
system.activationScripts.plasmaSetup = stringAfter [ "users" "groups" ] system.activationScripts.plasmaSetup = stringAfter [ "users" "groups" ]
(concatStringsSep "\n" (concatStringsSep "\n"
(mapAttrsToList (name: value: "${pkgs.su}/bin/su ${name} -c kbuildsycoca5") (mapAttrsToList (name: value: "${pkgs.su}/bin/su ${name} -c ${pkgs.libsForQt5.kservice}/bin/kbuildsycoca5")
(filterAttrs (n: v: v.isNormalUser) config.users.users))); (filterAttrs (n: v: v.isNormalUser) config.users.users)));
}) })
]; ];

View File

@ -27,55 +27,26 @@ let
Xft.hintstyle: hintslight Xft.hintstyle: hintslight
''; '';
# file provided by services.xserver.displayManager.session.script # file provided by services.xserver.displayManager.session.wrapper
xsession = wm: dm: pkgs.writeScript "xsession" xsessionWrapper = pkgs.writeScript "xsession-wrapper"
'' ''
#! ${pkgs.bash}/bin/bash #! ${pkgs.bash}/bin/bash
# Expected parameters: # Shared environment setup for graphical sessions.
# $1 = <desktop-manager>+<window-manager>
# Actual parameters (FIXME):
# SDDM is calling this script like the following:
# $1 = /nix/store/xxx-xsession (= $0)
# $2 = <desktop-manager>+<window-manager>
# SLiM is using the following parameter:
# $1 = /nix/store/xxx-xsession <desktop-manager>+<window-manager>
# LightDM keeps the double quotes:
# $1 = /nix/store/xxx-xsession "<desktop-manager>+<window-manager>"
# The fake/auto display manager doesn't use any parameters and GDM is
# broken.
# If you want to "debug" this script don't print the parameters to stdout
# or stderr because this script will be executed multiple times and the
# output won't be visible in the log when the script is executed for the
# first time (e.g. append them to a file instead)!
# All of the above cases are handled by the following hack (FIXME).
# Since this line is *very important* for *all display managers* it is
# very important to test changes to the following line with all display
# managers:
if [ "''${1:0:1}" = "/" ]; then eval exec "$1" "$2" ; fi
# Now it should be safe to assume that the script was called with the
# expected parameters.
. /etc/profile . /etc/profile
cd "$HOME" cd "$HOME"
# The first argument of this script is the session type.
sessionType="$1"
if [ "$sessionType" = default ]; then sessionType=""; fi
${optionalString cfg.startDbusSession '' ${optionalString cfg.startDbusSession ''
if test -z "$DBUS_SESSION_BUS_ADDRESS"; then if test -z "$DBUS_SESSION_BUS_ADDRESS"; then
exec ${pkgs.dbus.dbus-launch} --exit-with-session "$0" "$sessionType" exec ${pkgs.dbus.dbus-launch} --exit-with-session "$0" "$@"
fi fi
''} ''}
${optionalString cfg.displayManager.job.logToJournal '' ${optionalString cfg.displayManager.job.logToJournal ''
if [ -z "$_DID_SYSTEMD_CAT" ]; then if [ -z "$_DID_SYSTEMD_CAT" ]; then
export _DID_SYSTEMD_CAT=1 export _DID_SYSTEMD_CAT=1
exec ${config.systemd.package}/bin/systemd-cat -t xsession "$0" "$sessionType" exec ${config.systemd.package}/bin/systemd-cat -t xsession "$0" "$@"
fi fi
''} ''}
@ -85,12 +56,10 @@ let
# Start PulseAudio if enabled. # Start PulseAudio if enabled.
${optionalString (config.hardware.pulseaudio.enable) '' ${optionalString (config.hardware.pulseaudio.enable) ''
${optionalString (!config.hardware.pulseaudio.systemWide)
"${config.hardware.pulseaudio.package.out}/bin/pulseaudio --start"
}
# Publish access credentials in the root window. # Publish access credentials in the root window.
${config.hardware.pulseaudio.package.out}/bin/pactl load-module module-x11-publish "display=$DISPLAY" if ${config.hardware.pulseaudio.package.out}/bin/pulseaudio --dump-modules | grep module-x11-publish &> /dev/null; then
${config.hardware.pulseaudio.package.out}/bin/pactl load-module module-x11-publish "display=$DISPLAY"
fi
''} ''}
# Tell systemd about our $DISPLAY and $XAUTHORITY. # Tell systemd about our $DISPLAY and $XAUTHORITY.
@ -101,6 +70,7 @@ let
${config.systemd.package}/bin/systemctl --user import-environment DISPLAY XAUTHORITY DBUS_SESSION_BUS_ADDRESS ${config.systemd.package}/bin/systemctl --user import-environment DISPLAY XAUTHORITY DBUS_SESSION_BUS_ADDRESS
# Load X defaults. # Load X defaults.
# FIXME: Check XDG_SESSION_TYPE against x11
${xorg.xrdb}/bin/xrdb -merge ${xresourcesXft} ${xorg.xrdb}/bin/xrdb -merge ${xresourcesXft}
if test -e ~/.Xresources; then if test -e ~/.Xresources; then
${xorg.xrdb}/bin/xrdb -merge ~/.Xresources ${xorg.xrdb}/bin/xrdb -merge ~/.Xresources
@ -132,12 +102,33 @@ let
# Allow the user to setup a custom session type. # Allow the user to setup a custom session type.
if test -x ~/.xsession; then if test -x ~/.xsession; then
exec ~/.xsession exec ~/.xsession
else
if test "$sessionType" = "custom"; then
sessionType="" # fall-thru if there is no ~/.xsession
fi
fi fi
if test "$1"; then
# Run the supplied session command. Remove any double quotes with eval.
eval exec "$@"
else
# Fall back to the default window/desktopManager
exec ${cfg.displayManager.session.script}
fi
'';
# file provided by services.xserver.displayManager.session.script
xsession = wm: dm: pkgs.writeScript "xsession"
''
#! ${pkgs.bash}/bin/bash
# Legacy session script used to construct .desktop files from
# `services.xserver.displayManager.session` entries. Called from
# `sessionWrapper`.
# Expected parameters:
# $1 = <desktop-manager>+<window-manager>
# The first argument of this script is the session type.
sessionType="$1"
if [ "$sessionType" = default ]; then sessionType=""; fi
# The session type is "<desktop-manager>+<window-manager>", so # The session type is "<desktop-manager>+<window-manager>", so
# extract those (see: # extract those (see:
# http://wiki.bash-hackers.org/syntax/pe#substring_removal). # http://wiki.bash-hackers.org/syntax/pe#substring_removal).
@ -186,19 +177,22 @@ let
allowSubstitutes = false; allowSubstitutes = false;
} }
'' ''
mkdir -p "$out" mkdir -p "$out/share/xsessions"
${concatMapStrings (n: '' ${concatMapStrings (n: ''
cat - > "$out/${n}.desktop" << EODESKTOP cat - > "$out/share/xsessions/${n}.desktop" << EODESKTOP
[Desktop Entry] [Desktop Entry]
Version=1.0 Version=1.0
Type=XSession Type=XSession
TryExec=${cfg.displayManager.session.script} TryExec=${cfg.displayManager.session.script}
Exec=${cfg.displayManager.session.script} "${n}" Exec=${cfg.displayManager.session.script} "${n}"
X-GDM-BypassXsession=true
Name=${n} Name=${n}
Comment= Comment=
EODESKTOP EODESKTOP
'') names} '') names}
${concatMapStrings (pkg: ''
${xorg.lndir}/bin/lndir ${pkg}/share/xsessions $out/share/xsessions
'') cfg.displayManager.extraSessionFilePackages}
''; '';
in in
@ -245,6 +239,14 @@ in
''; '';
}; };
extraSessionFilePackages = mkOption {
type = types.listOf types.package;
default = [];
description = ''
A list of packages containing xsession files to be passed to the display manager.
'';
};
session = mkOption { session = mkOption {
default = []; default = [];
example = literalExample example = literalExample
@ -280,6 +282,7 @@ in
(filter (w: d.name != "none" || w.name != "none") wm)); (filter (w: d.name != "none" || w.name != "none") wm));
desktops = mkDesktops names; desktops = mkDesktops names;
script = xsession wm dm; script = xsession wm dm;
wrapper = xsessionWrapper;
}; };
}; };
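Desktop environments that ship their own session `.desktop` files can now register them through `extraSessionFilePackages` instead of a `session` entry, as the GNOME module above does. A sketch; any package listed must install files under `share/xsessions`:

```
{ pkgs, ... }:
{
  # Session files from these packages are linked into the generated
  # sessions directory with lndir.
  services.xserver.displayManager.extraSessionFilePackages =
    [ pkgs.gnome3.gnome-session ];
}
```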

View File

@ -109,7 +109,7 @@ in
environment = { environment = {
GDM_X_SERVER_EXTRA_ARGS = toString GDM_X_SERVER_EXTRA_ARGS = toString
(filter (arg: arg != "-terminate") cfg.xserverArgs); (filter (arg: arg != "-terminate") cfg.xserverArgs);
GDM_SESSIONS_DIR = "${cfg.session.desktops}"; GDM_SESSIONS_DIR = "${cfg.session.desktops}/share/xsessions";
# Find the mouse # Find the mouse
XCURSOR_PATH = "~/.icons:${pkgs.gnome3.adwaita-icon-theme}/share/icons"; XCURSOR_PATH = "~/.icons:${pkgs.gnome3.adwaita-icon-theme}/share/icons";
}; };
@ -173,6 +173,8 @@ in
${optionalString cfg.gdm.debug "Enable=true"} ${optionalString cfg.gdm.debug "Enable=true"}
''; '';
environment.etc."gdm/Xsession".source = config.services.xserver.displayManager.session.wrapper;
# GDM LFS PAM modules, adapted somehow to NixOS # GDM LFS PAM modules, adapted somehow to NixOS
security.pam.services = { security.pam.services = {
gdm-launch-environment.text = '' gdm-launch-environment.text = ''

View File

@ -23,7 +23,7 @@ let
makeWrapper ${pkgs.lightdm_gtk_greeter}/sbin/lightdm-gtk-greeter \ makeWrapper ${pkgs.lightdm_gtk_greeter}/sbin/lightdm-gtk-greeter \
$out/greeter \ $out/greeter \
--prefix PATH : "${pkgs.glibc.bin}/bin" \ --prefix PATH : "${pkgs.glibc.bin}/bin" \
--set GDK_PIXBUF_MODULE_FILE "${pkgs.gdk_pixbuf.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache" \ --set GDK_PIXBUF_MODULE_FILE "${pkgs.librsvg.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache" \
--set GTK_PATH "${theme}:${pkgs.gtk3.out}" \ --set GTK_PATH "${theme}:${pkgs.gtk3.out}" \
--set GTK_EXE_PREFIX "${theme}" \ --set GTK_EXE_PREFIX "${theme}" \
--set GTK_DATA_PREFIX "${theme}" \ --set GTK_DATA_PREFIX "${theme}" \

View File

@ -15,7 +15,7 @@ let
inherit (pkgs) lightdm writeScript writeText; inherit (pkgs) lightdm writeScript writeText;
# lightdm runs with clearenv(), but we need a few things in the enviornment for X to startup # lightdm runs with clearenv(), but we need a few things in the environment for X to startup
xserverWrapper = writeScript "xserver-wrapper" xserverWrapper = writeScript "xserver-wrapper"
'' ''
#! ${pkgs.bash}/bin/bash #! ${pkgs.bash}/bin/bash
@ -45,11 +45,11 @@ let
greeter-user = ${config.users.users.lightdm.name} greeter-user = ${config.users.users.lightdm.name}
greeters-directory = ${cfg.greeter.package} greeters-directory = ${cfg.greeter.package}
''} ''}
sessions-directory = ${dmcfg.session.desktops} sessions-directory = ${dmcfg.session.desktops}/share/xsessions
[Seat:*] [Seat:*]
xserver-command = ${xserverWrapper} xserver-command = ${xserverWrapper}
session-wrapper = ${dmcfg.session.script} session-wrapper = ${dmcfg.session.wrapper}
${optionalString cfg.greeter.enable '' ${optionalString cfg.greeter.enable ''
greeter-session = ${cfg.greeter.name} greeter-session = ${cfg.greeter.name}
''} ''}
@ -176,21 +176,13 @@ in
LightDM auto-login requires services.xserver.displayManager.lightdm.autoLogin.user to be set LightDM auto-login requires services.xserver.displayManager.lightdm.autoLogin.user to be set
''; '';
} }
{ assertion = cfg.autoLogin.enable -> elem defaultSessionName dmcfg.session.names; { assertion = cfg.autoLogin.enable -> dmDefault != "none" || wmDefault != "none";
message = '' message = ''
LightDM auto-login requires that services.xserver.desktopManager.default and LightDM auto-login requires that services.xserver.desktopManager.default and
services.xserver.windowMananger.default are set to valid values. The current services.xserver.windowMananger.default are set to valid values. The current
default session: ${defaultSessionName} is not valid. default session: ${defaultSessionName} is not valid.
''; '';
} }
{ assertion = hasDefaultUserSession -> elem defaultSessionName dmcfg.session.names;
message = ''
services.xserver.desktopManager.default and
services.xserver.windowMananger.default are not set to valid
values. The current default session: ${defaultSessionName}
is not valid.
'';
}
{ assertion = !cfg.greeter.enable -> (cfg.autoLogin.enable && cfg.autoLogin.timeout == 0); { assertion = !cfg.greeter.enable -> (cfg.autoLogin.enable && cfg.autoLogin.timeout == 0);
message = '' message = ''
LightDM can only run without greeter if automatic login is enabled and the timeout for it LightDM can only run without greeter if automatic login is enabled and the timeout for it
@ -217,9 +209,12 @@ in
services.dbus.enable = true; services.dbus.enable = true;
services.dbus.packages = [ lightdm ]; services.dbus.packages = [ lightdm ];
# lightdm uses the accounts daemon to rember language/window-manager per user # lightdm uses the accounts daemon to remember language/window-manager per user
services.accounts-daemon.enable = true; services.accounts-daemon.enable = true;
# Enable the accounts daemon to find lightdm's dbus interface
environment.systemPackages = [ lightdm ];
security.pam.services.lightdm = { security.pam.services.lightdm = {
allowNullPassword = true; allowNullPassword = true;
startSession = true; startSession = true;

View File

@ -49,8 +49,8 @@ let
MinimumVT=${toString (if xcfg.tty != null then xcfg.tty else 7)} MinimumVT=${toString (if xcfg.tty != null then xcfg.tty else 7)}
ServerPath=${xserverWrapper} ServerPath=${xserverWrapper}
XephyrPath=${pkgs.xorg.xorgserver.out}/bin/Xephyr XephyrPath=${pkgs.xorg.xorgserver.out}/bin/Xephyr
SessionCommand=${dmcfg.session.script} SessionCommand=${dmcfg.session.wrapper}
SessionDir=${dmcfg.session.desktops} SessionDir=${dmcfg.session.desktops}/share/xsessions
XauthPath=${pkgs.xorg.xauth}/bin/xauth XauthPath=${pkgs.xorg.xauth}/bin/xauth
DisplayCommand=${Xsetup} DisplayCommand=${Xsetup}
DisplayStopCommand=${Xstop} DisplayStopCommand=${Xstop}
@ -265,6 +265,7 @@ in
}; };
environment.etc."sddm.conf".source = cfgFile; environment.etc."sddm.conf".source = cfgFile;
environment.pathsToLink = [ "/share/sddm/themes" ];
users.groups.sddm.gid = config.ids.gids.sddm; users.groups.sddm.gid = config.ids.gids.sddm;

View File

@ -13,8 +13,8 @@ let
xauth_path ${dmcfg.xauthBin} xauth_path ${dmcfg.xauthBin}
default_xserver ${dmcfg.xserverBin} default_xserver ${dmcfg.xserverBin}
xserver_arguments ${toString dmcfg.xserverArgs} xserver_arguments ${toString dmcfg.xserverArgs}
sessiondir ${dmcfg.session.desktops} sessiondir ${dmcfg.session.desktops}/share/xsessions
login_cmd exec ${pkgs.runtimeShell} ${dmcfg.session.script} "%session" login_cmd exec ${pkgs.runtimeShell} ${dmcfg.session.wrapper} "%session"
halt_cmd ${config.systemd.package}/sbin/shutdown -h now halt_cmd ${config.systemd.package}/sbin/shutdown -h now
reboot_cmd ${config.systemd.package}/sbin/shutdown -r now reboot_cmd ${config.systemd.package}/sbin/shutdown -r now
logfile /dev/stderr logfile /dev/stderr

View File

@ -116,6 +116,9 @@ in {
} }
]; ];
# needed so that .desktop files are installed, which geoclue cares about
environment.systemPackages = [ cfg.package ];
services.geoclue2.enable = mkIf (cfg.provider == "geoclue2") true; services.geoclue2.enable = mkIf (cfg.provider == "geoclue2") true;
systemd.user.services.redshift = systemd.user.services.redshift =

View File

@ -5,9 +5,7 @@ with lib;
let let
cfg = config.services.xserver.windowManager.metacity; cfg = config.services.xserver.windowManager.metacity;
xorg = config.services.xserver.package; inherit (pkgs) gnome3;
gnome = pkgs.gnome;
in in
{ {
@ -20,16 +18,12 @@ in
services.xserver.windowManager.session = singleton services.xserver.windowManager.session = singleton
{ name = "metacity"; { name = "metacity";
start = '' start = ''
env LD_LIBRARY_PATH=${lib.makeLibraryPath [ xorg.libX11 xorg.libXext ]}:/usr/lib/ ${gnome3.metacity}/bin/metacity &
# !!! Hack: load the schemas for Metacity.
GCONF_CONFIG_SOURCE=xml::~/.gconf ${gnome.GConf.out}/bin/gconftool-2 \
--makefile-install-rule ${gnome.metacity}/etc/gconf/schemas/*.schemas # */
${gnome.metacity}/bin/metacity &
waitPID=$! waitPID=$!
''; '';
}; };
environment.systemPackages = [ gnome.metacity ]; environment.systemPackages = [ gnome3.metacity ];
}; };

View File

@@ -46,7 +46,7 @@ let
ln -s ${kernelPath} $out/kernel
ln -s ${config.system.modulesTree} $out/kernel-modules
- ${optionalString (pkgs.stdenv.platform.kernelDTB or false) ''
+ ${optionalString (pkgs.stdenv.hostPlatform.platform.kernelDTB or false) ''
ln -s ${config.boot.kernelPackages.kernel}/dtbs $out/dtbs
''}
@@ -74,7 +74,7 @@ let
echo -n "$configurationName" > $out/configuration-name
echo -n "systemd ${toString config.systemd.package.interfaceVersion}" > $out/init-interface-version
echo -n "$nixosLabel" > $out/nixos-version
- echo -n "$system" > $out/system
+ echo -n "${pkgs.stdenv.hostPlatform.system}" > $out/system
mkdir $out/fine-tune
childCount=0
@@ -175,7 +175,7 @@ in
system.boot.loader.kernelFile = mkOption {
internal = true;
- default = pkgs.stdenv.platform.kernelTarget;
+ default = pkgs.stdenv.hostPlatform.platform.kernelTarget;
type = types.str;
description = ''
Name of the kernel file to be passed to the bootloader.

@@ -13,7 +13,7 @@ let
};
# Temporary check, for nixos to cope both with nixpkgs stdenv-updates and trunk
- platform = pkgs.stdenv.platform;
+ inherit (pkgs.stdenv.hostPlatform) platform;
in

@@ -15,7 +15,7 @@ let
inherit configTxt;
};
- platform = pkgs.stdenv.platform;
+ inherit (pkgs.stdenv.hostPlatform) platform;
builderUboot = import ./builder_uboot.nix { inherit config; inherit pkgs; inherit configTxt; };

@@ -42,7 +42,8 @@ def write_loader_conf(profile, generation):
else:
f.write("default nixos-generation-%d\n" % (generation))
if not @editor@:
- f.write("editor 0");
+ f.write("editor 0\n");
+ f.write("console-mode @consoleMode@\n");
os.rename("@efiSysMountPoint@/loader/loader.conf.tmp", "@efiSysMountPoint@/loader/loader.conf")
def profile_path(profile, generation, name):
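With the two added writes above, the generated loader.conf comes out roughly like the sketch below (the generation number is illustrative; editing is assumed disabled and the console mode left at the module's default of "keep"):

```
default nixos-generation-42
editor 0
console-mode keep
```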

@@ -22,6 +22,8 @@ let
editor = if cfg.editor then "True" else "False";
+ inherit (cfg) consoleMode;
inherit (efi) efiSysMountPoint canTouchEfiVariables;
};
in {
@@ -52,6 +54,38 @@ in {
compatibility.
'';
};
+ consoleMode = mkOption {
+ default = "keep";
+ type = types.enum [ "0" "1" "2" "auto" "max" "keep" ];
+ description = ''
+ The resolution of the console. The following values are valid:
+ </para>
+ <para>
+ <itemizedlist>
+ <listitem><para>
+ <literal>"0"</literal>: Standard UEFI 80x25 mode
+ </para></listitem>
+ <listitem><para>
+ <literal>"1"</literal>: 80x50 mode, not supported by all devices
+ </para></listitem>
+ <listitem><para>
+ <literal>"2"</literal>: The first non-standard mode provided by the device firmware, if any
+ </para></listitem>
+ <listitem><para>
+ <literal>"auto"</literal>: Pick a suitable mode automatically using heuristics
+ </para></listitem>
+ <listitem><para>
+ <literal>"max"</literal>: Pick the highest-numbered available mode
+ </para></listitem>
+ <listitem><para>
+ <literal>"keep"</literal>: Keep the mode selected by firmware (the default)
+ </para></listitem>
+ </itemizedlist>
+ '';
+ };
};
config = mkIf cfg.enable {
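A minimal configuration.nix sketch for the new option, assuming it lives under boot.loader.systemd-boot like the rest of this module's settings (the chosen value is just an example):

```nix
{
  boot.loader.systemd-boot.enable = true;
  # Ask the boot loader for the highest-numbered console mode the firmware offers.
  boot.loader.systemd-boot.consoleMode = "max";
}
```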

@@ -5,61 +5,171 @@ with lib;
let
luks = config.boot.initrd.luks;
- openCommand = name': { name, device, header, keyFile, keyFileSize, allowDiscards, yubikey, fallbackToPassword, ... }: assert name' == name; ''
+ commonFunctions = ''
+ die() {
+ echo "$@" >&2
+ exit 1
+ }
+ # Wait for a target (e.g. device, keyFile, header, ...) to appear.
wait_target() {
local name="$1"
local target="$2"
+ local secs="''${3:-10}"
+ local desc="''${4:-$name $target to appear}"
if [ ! -e $target ]; then
- echo -n "Waiting 10 seconds for $name $target to appear"
+ echo -n "Waiting $secs seconds for $desc..."
local success=false;
- for try in $(seq 10); do
+ for try in $(seq $secs); do
echo -n "."
sleep 1
- if [ -e $target ]; then success=true break; fi
+ if [ -e $target ]; then
+ success=true
+ break
+ fi
done
- if [ $success = true ]; then
+ if [ $success == true ]; then
echo " - success";
+ return 0
else
echo " - failure";
+ return 1
fi
fi
+ return 0
}
+ wait_yubikey() {
+ local secs="''${1:-10}"
+ ykinfo -v 1>/dev/null 2>&1
+ if [ $? != 0 ]; then
+ echo -n "Waiting $secs seconds for Yubikey to appear..."
+ local success=false
+ for try in $(seq $secs); do
+ echo -n .
+ sleep 1
+ ykinfo -v 1>/dev/null 2>&1
+ if [ $? == 0 ]; then
+ success=true
+ break
+ fi
+ done
+ if [ $success == true ]; then
+ echo " - success";
+ return 0
+ else
+ echo " - failure";
+ return 1
+ fi
+ fi
+ return 0
+ }
+ '';
+ preCommands = ''
+ # A place to store crypto things
+ # A ramfs is used here to ensure that the file used to update
+ # the key slot with cryptsetup will never get swapped out.
+ # Warning: Do NOT replace with tmpfs!
+ mkdir -p /crypt-ramfs
+ mount -t ramfs none /crypt-ramfs
+ # For Yubikey salt storage
+ mkdir -p /crypt-storage
+ # Disable all input echo for the whole stage. We could use read -s
+ # instead but that would ocasionally leak characters between read
+ # invocations.
+ stty -echo
+ '';
+ postCommands = ''
+ stty echo
+ umount /crypt-storage 2>/dev/null
+ umount /crypt-ramfs 2>/dev/null
+ '';
+ openCommand = name': { name, device, header, keyFile, keyFileSize, keyFileOffset, allowDiscards, yubikey, fallbackToPassword, ... }: assert name' == name;
+ let
+ csopen = "cryptsetup luksOpen ${device} ${name} ${optionalString allowDiscards "--allow-discards"} ${optionalString (header != null) "--header=${header}"}";
+ cschange = "cryptsetup luksChangeKey ${device} ${optionalString (header != null) "--header=${header}"}";
+ in ''
# Wait for luksRoot (and optionally keyFile and/or header) to appear, e.g.
# if on a USB drive.
- wait_target "device" ${device}
+ wait_target "device" ${device} || die "${device} is unavailable"
- ${optionalString (keyFile != null) ''
- wait_target "key file" ${keyFile}
- ''}
${optionalString (header != null) ''
- wait_target "header" ${header}
+ wait_target "header" ${header} || die "${header} is unavailable"
''}
- open_normally() {
- echo luksOpen ${device} ${name} ${optionalString allowDiscards "--allow-discards"} \
- ${optionalString (header != null) "--header=${header}"} \
- > /.luksopen_args
- ${optionalString (keyFile != null) ''
- ${optionalString fallbackToPassword "if [ -e ${keyFile} ]; then"}
- echo " --key-file=${keyFile} ${optionalString (keyFileSize != null) "--keyfile-size=${toString keyFileSize}"}" \
- >> /.luksopen_args
- ${optionalString fallbackToPassword ''
- else
- echo "keyfile ${keyFile} not found -- fallback to interactive unlocking"
- fi
- ''}
- ''}
- cryptsetup-askpass
- rm /.luksopen_args
+ do_open_passphrase() {
+ local passphrase
+ while true; do
+ echo -n "Passphrase for ${device}: "
+ passphrase=
+ while true; do
+ if [ -e /crypt-ramfs/passphrase ]; then
+ echo "reused"
+ passphrase=$(cat /crypt-ramfs/passphrase)
+ break
+ else
+ # ask cryptsetup-askpass
+ echo -n "${device}" > /crypt-ramfs/device
+ # and try reading it from /dev/console with a timeout
+ IFS= read -t 1 -r passphrase
+ if [ -n "$passphrase" ]; then
+ ${if luks.reusePassphrases then ''
+ # remember it for the next device
+ echo -n "$passphrase" > /crypt-ramfs/passphrase
+ '' else ''
+ # Don't save it to ramfs. We are very paranoid
+ ''}
+ echo
+ break
+ fi
+ fi
+ done
+ echo -n "Verifiying passphrase for ${device}..."
+ echo -n "$passphrase" | ${csopen} --key-file=-
+ if [ $? == 0 ]; then
+ echo " - success"
+ ${if luks.reusePassphrases then ''
+ # we don't rm here because we might reuse it for the next device
+ '' else ''
+ rm -f /crypt-ramfs/passphrase
+ ''}
+ break
+ else
+ echo " - failure"
+ # ask for a different one
+ rm -f /crypt-ramfs/passphrase
+ fi
+ done
}
- ${optionalString (luks.yubikeySupport && (yubikey != null)) ''
+ # LUKS
+ open_normally() {
+ ${if (keyFile != null) then ''
+ if wait_target "key file" ${keyFile}; then
+ ${csopen} --key-file=${keyFile} \
+ ${optionalString (keyFileSize != null) "--keyfile-size=${toString keyFileSize}"} \
+ ${optionalString (keyFileOffset != null) "--keyfile-offset=${toString keyFileOffset}"}
+ else
+ ${if fallbackToPassword then "echo" else "die"} "${keyFile} is unavailable"
+ echo " - failing back to interactive password prompt"
+ do_open_passphrase
+ fi
+ '' else ''
+ do_open_passphrase
+ ''}
+ }
+ ${if luks.yubikeySupport && (yubikey != null) then ''
+ # Yubikey
rbtohex() {
( od -An -vtx1 | tr -d ' \n' )
}
@@ -68,8 +178,7 @@ let
( tr '[:lower:]' '[:upper:]' | sed -e 's/\([0-9A-F]\{2\}\)/\\\\\\x\1/gI' | xargs printf )
}
- open_yubikey() {
+ do_open_yubikey() {
# Make all of these local to this function
# to prevent their values being leaked
local salt
@@ -85,19 +194,18 @@ let
local new_response
local new_k_luks
- mkdir -p ${yubikey.storage.mountPoint}
- mount -t ${yubikey.storage.fsType} ${toString yubikey.storage.device} ${yubikey.storage.mountPoint}
- salt="$(cat ${yubikey.storage.mountPoint}${yubikey.storage.path} | sed -n 1p | tr -d '\n')"
- iterations="$(cat ${yubikey.storage.mountPoint}${yubikey.storage.path} | sed -n 2p | tr -d '\n')"
+ mount -t ${yubikey.storage.fsType} ${yubikey.storage.device} /crypt-storage || \
+ die "Failed to mount Yubikey salt storage device"
+ salt="$(cat /crypt-storage${yubikey.storage.path} | sed -n 1p | tr -d '\n')"
+ iterations="$(cat /crypt-storage${yubikey.storage.path} | sed -n 2p | tr -d '\n')"
challenge="$(echo -n $salt | openssl-wrap dgst -binary -sha512 | rbtohex)"
response="$(ykchalresp -${toString yubikey.slot} -x $challenge 2>/dev/null)"
for try in $(seq 3); do
${optionalString yubikey.twoFactor ''
echo -n "Enter two-factor passphrase: "
- read -s k_user
+ read -r k_user
echo
''}
@@ -107,9 +215,9 @@ let
k_luks="$(echo | pbkdf2-sha512 ${toString yubikey.keyLength} $iterations $response | rbtohex)"
fi
- echo -n "$k_luks" | hextorb | cryptsetup luksOpen ${device} ${name} ${optionalString allowDiscards "--allow-discards"} --key-file=-
- if [ $? == "0" ]; then
+ echo -n "$k_luks" | hextorb | ${csopen} --key-file=-
+ if [ $? == 0 ]; then
opened=true
break
else
@@ -118,11 +226,7 @@ let
fi
done
- if [ "$opened" == false ]; then
- umount ${yubikey.storage.mountPoint}
- echo "Maximum authentication errors reached"
- exit 1
- fi
+ [ "$opened" == false ] && die "Maximum authentication errors reached"
echo -n "Gathering entropy for new salt (please enter random keys to generate entropy if this blocks for long)..."
for i in $(seq ${toString yubikey.saltLength}); do
@@ -147,69 +251,52 @@ let
new_k_luks="$(echo | pbkdf2-sha512 ${toString yubikey.keyLength} $new_iterations $new_response | rbtohex)"
fi
- mkdir -p ${yubikey.ramfsMountPoint}
- # A ramfs is used here to ensure that the file used to update
- # the key slot with cryptsetup will never get swapped out.
- # Warning: Do NOT replace with tmpfs!
- mount -t ramfs none ${yubikey.ramfsMountPoint}
- echo -n "$new_k_luks" | hextorb > ${yubikey.ramfsMountPoint}/new_key
- echo -n "$k_luks" | hextorb | cryptsetup luksChangeKey ${device} --key-file=- ${yubikey.ramfsMountPoint}/new_key
- if [ $? == "0" ]; then
- echo -ne "$new_salt\n$new_iterations" > ${yubikey.storage.mountPoint}${yubikey.storage.path}
+ echo -n "$new_k_luks" | hextorb > /crypt-ramfs/new_key
+ echo -n "$k_luks" | hextorb | ${cschange} --key-file=- /crypt-ramfs/new_key
+ if [ $? == 0 ]; then
+ echo -ne "$new_salt\n$new_iterations" > /crypt-storage${yubikey.storage.path}
else
echo "Warning: Could not update LUKS key, current challenge persists!"
fi
- rm -f ${yubikey.ramfsMountPoint}/new_key
- umount ${yubikey.ramfsMountPoint}
- rm -rf ${yubikey.ramfsMountPoint}
- umount ${yubikey.storage.mountPoint}
+ rm -f /crypt-ramfs/new_key
+ umount /crypt-storage
}
- ${optionalString (yubikey.gracePeriod > 0) ''
- echo -n "Waiting ${toString yubikey.gracePeriod} seconds as grace..."
- for i in $(seq ${toString yubikey.gracePeriod}); do
- sleep 1
- echo -n .
- done
- echo "ok"
- ''}
- yubikey_missing=true
- ykinfo -v 1>/dev/null 2>&1
- if [ $? != "0" ]; then
- echo -n "waiting 10 seconds for yubikey to appear..."
- for try in $(seq 10); do
- sleep 1
- ykinfo -v 1>/dev/null 2>&1
- if [ $? == "0" ]; then
- yubikey_missing=false
- break
- fi
- echo -n .
- done
- echo "ok"
- else
- yubikey_missing=false
- fi
- if [ "$yubikey_missing" == true ]; then
- echo "no yubikey found, falling back to non-yubikey open procedure"
- open_normally
- else
- open_yubikey
- fi
- ''}
- # open luksRoot and scan for logical volumes
- ${optionalString ((!luks.yubikeySupport) || (yubikey == null)) ''
+ open_yubikey() {
+ if wait_yubikey ${toString yubikey.gracePeriod}; then
+ do_open_yubikey
+ else
+ echo "No yubikey found, falling back to non-yubikey open procedure"
+ open_normally
+ fi
+ }
+ open_yubikey
+ '' else ''
open_normally
''}
'';
+ askPass = pkgs.writeScriptBin "cryptsetup-askpass" ''
+ #!/bin/sh
+ ${commonFunctions}
+ while true; do
+ wait_target "luks" /crypt-ramfs/device 10 "LUKS to request a passphrase" || die "Passphrase is not requested now"
+ device=$(cat /crypt-ramfs/device)
+ echo -n "Passphrase for $device: "
+ IFS= read -rs passphrase
+ echo
+ rm /crypt-ramfs/device
+ echo -n "$passphrase" > /crypt-ramfs/passphrase
+ done
+ '';
preLVM = filterAttrs (n: v: v.preLVM) luks.devices;
postLVM = filterAttrs (n: v: !v.preLVM) luks.devices;
@@ -255,6 +342,22 @@ in
'';
};
+ boot.initrd.luks.reusePassphrases = mkOption {
+ type = types.bool;
+ default = true;
+ description = ''
+ When opening a new LUKS device try reusing last successful
+ passphrase.
+ Useful for mounting a number of devices that use the same
+ passphrase without retyping it several times.
+ Such setup can be useful if you use <command>cryptsetup
+ luksSuspend</command>. Different LUKS devices will still have
+ different master keys even when using the same passphrase.
+ '';
+ };
boot.initrd.luks.devices = mkOption {
default = { };
example = { "luksroot".device = "/dev/disk/by-uuid/430e9eff-d852-4f68-aa3b-2fa3599ebe08"; };
@@ -316,6 +419,19 @@ in
'';
};
+ keyFileOffset = mkOption {
+ default = null;
+ example = 4096;
+ type = types.nullOr types.int;
+ description = ''
+ The offset of the key file. Use this in combination with
+ <literal>keyFileSize</literal> to use part of a file as key file
+ (often the case if a raw device or partition is used as a key file).
+ If not specified, the key begins at the first byte of
+ <literal>keyFile</literal>.
+ '';
+ };
# FIXME: get rid of this option.
preLVM = mkOption {
default = true;
@@ -383,15 +499,9 @@ in
};
gracePeriod = mkOption {
- default = 2;
+ default = 10;
type = types.int;
- description = "Time in seconds to wait before attempting to find the Yubikey.";
+ description = "Time in seconds to wait for the Yubikey.";
- };
- ramfsMountPoint = mkOption {
- default = "/crypt-ramfs";
- type = types.str;
- description = "Path where the ramfs used to update the LUKS key will be mounted during early boot.";
};
/* TODO: Add to the documentation of the current module:
@@ -414,12 +524,6 @@ in
description = "The filesystem of the unencrypted device.";
};
- mountPoint = mkOption {
- default = "/crypt-storage";
- type = types.str;
- description = "Path where the unencrypted device will be mounted during early boot.";
- };
path = mkOption {
default = "/crypt-storage/default";
type = types.str;
@@ -432,8 +536,8 @@ in
};
});
};
- };
- };
+ }));
};
boot.initrd.luks.yubikeySupport = mkOption {
@@ -463,18 +567,8 @@ in
# copy the cryptsetup binary and it's dependencies
boot.initrd.extraUtilsCommands = ''
copy_bin_and_libs ${pkgs.cryptsetup}/bin/cryptsetup
- cat > $out/bin/cryptsetup-askpass <<EOF
- #!$out/bin/sh -e
- if [ -e /.luksopen_args ]; then
- cryptsetup \$(cat /.luksopen_args)
- killall -q cryptsetup
- else
- echo "Passphrase is not requested now"
- exit 1
- fi
- EOF
- chmod +x $out/bin/cryptsetup-askpass
+ copy_bin_and_libs ${askPass}/bin/cryptsetup-askpass
+ sed -i s,/bin/sh,$out/bin/sh, $out/bin/cryptsetup-askpass
${optionalString luks.yubikeySupport ''
copy_bin_and_libs ${pkgs.yubikey-personalization}/bin/ykchalresp
@@ -506,8 +600,9 @@ in
''}
'';
- boot.initrd.preLVMCommands = concatStrings (mapAttrsToList openCommand preLVM);
- boot.initrd.postDeviceCommands = concatStrings (mapAttrsToList openCommand postLVM);
+ boot.initrd.preFailCommands = postCommands;
+ boot.initrd.preLVMCommands = commonFunctions + preCommands + concatStrings (mapAttrsToList openCommand preLVM) + postCommands;
+ boot.initrd.postDeviceCommands = commonFunctions + preCommands + concatStrings (mapAttrsToList openCommand postLVM) + postCommands;
environment.systemPackages = [ pkgs.cryptsetup ];
};
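A configuration sketch exercising the two options introduced above, reusePassphrases and keyFileOffset; the device paths, sizes and attribute names are made up for illustration:

```nix
{
  # Prompt once and reuse the passphrase for every device (the new default).
  boot.initrd.luks.reusePassphrases = true;

  boot.initrd.luks.devices."root".device =
    "/dev/disk/by-uuid/430e9eff-d852-4f68-aa3b-2fa3599ebe08";

  # Use 4096 bytes of a raw device, starting at byte 4096, as the key,
  # falling back to an interactive passphrase if it is missing.
  boot.initrd.luks.devices."data" = {
    device = "/dev/sda2";
    keyFile = "/dev/sdb";
    keyFileSize = 4096;
    keyFileOffset = 4096;
    fallbackToPassword = true;
  };
}
```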

@@ -11,17 +11,29 @@ let
checkLink = checkUnitConfig "Link" [
(assertOnlyFields [
"Description" "Alias" "MACAddressPolicy" "MACAddress" "NamePolicy" "Name"
- "MTUBytes" "BitsPerSecond" "Duplex" "WakeOnLan"
+ "MTUBytes" "BitsPerSecond" "Duplex" "AutoNegotiation" "WakeOnLan" "Port"
+ "TCPSegmentationOffload" "TCP6SegmentationOffload" "GenericSegmentationOffload"
+ "GenericReceiveOffload" "LargeReceiveOffload" "RxChannels" "TxChannels"
+ "OtherChannels" "CombinedChannels"
])
- (assertValueOneOf "MACAddressPolicy" ["persistent" "random"])
+ (assertValueOneOf "MACAddressPolicy" ["persistent" "random" "none"])
(assertMacAddress "MACAddress")
+ (assertValueOneOf "NamePolicy" [
+ "kernel" "database" "onboard" "slot" "path" "mac"
+ ])
(assertByteFormat "MTUBytes")
(assertByteFormat "BitsPerSecond")
(assertValueOneOf "Duplex" ["half" "full"])
- (assertValueOneOf "WakeOnLan" ["phy" "magic" "off"])
+ (assertValueOneOf "AutoNegotiation" boolValues)
+ (assertValueOneOf "WakeOnLan" ["phy" "unicast" "multicast" "broadcast" "arp" "magic" "secureon" "off"])
+ (assertValueOneOf "Port" ["tp" "aui" "bnc" "mii" "fibre"])
+ (assertValueOneOf "TCPSegmentationOffload" boolValues)
+ (assertValueOneOf "TCP6SegmentationOffload" boolValues)
+ (assertValueOneOf "GenericSegmentationOffload" boolValues)
+ (assertValueOneOf "UDPSegmentationOffload" boolValues)
+ (assertValueOneOf "GenericReceiveOffload" boolValues)
+ (assertValueOneOf "LargeReceiveOffload" boolValues)
+ (assertRange "RxChannels" 1 4294967295)
+ (assertRange "TxChannels" 1 4294967295)
+ (assertRange "OtherChannels" 1 4294967295)
+ (assertRange "CombinedChannels" 1 4294967295)
];
checkNetdev = checkUnitConfig "Netdev" [
@@ -31,16 +43,21 @@ let
(assertHasField "Name")
(assertHasField "Kind")
(assertValueOneOf "Kind" [
- "bridge" "bond" "vlan" "macvlan" "vxlan" "ipip"
- "gre" "sit" "vti" "veth" "tun" "tap" "dummy"
+ "bond" "bridge" "dummy" "gre" "gretap" "ip6gre" "ip6tnl" "ip6gretap" "ipip"
+ "ipvlan" "macvlan" "macvtap" "sit" "tap" "tun" "veth" "vlan" "vti" "vti6"
+ "vxlan" "geneve" "vrf" "vcan" "vxcan" "wireguard" "netdevsim"
])
(assertByteFormat "MTUBytes")
(assertMacAddress "MACAddress")
];
checkVlan = checkUnitConfig "VLAN" [
- (assertOnlyFields ["Id"])
+ (assertOnlyFields ["Id" "GVRP" "MVRP" "LooseBinding" "ReorderHeader"])
(assertRange "Id" 0 4094)
+ (assertValueOneOf "GVRP" boolValues)
+ (assertValueOneOf "MVRP" boolValues)
+ (assertValueOneOf "LooseBinding" boolValues)
+ (assertValueOneOf "ReorderHeader" boolValues)
];
checkMacvlan = checkUnitConfig "MACVLAN" [
@@ -49,15 +66,41 @@ let
];
checkVxlan = checkUnitConfig "VXLAN" [
- (assertOnlyFields ["Id" "Group" "TOS" "TTL" "MacLearning"])
+ (assertOnlyFields [
+ "Id" "Remote" "Local" "TOS" "TTL" "MacLearning" "FDBAgeingSec"
+ "MaximumFDBEntries" "ReduceARPProxy" "L2MissNotification"
+ "L3MissNotification" "RouteShortCircuit" "UDPChecksum"
+ "UDP6ZeroChecksumTx" "UDP6ZeroChecksumRx" "RemoteChecksumTx"
+ "RemoteChecksumRx" "GroupPolicyExtension" "DestinationPort" "PortRange"
+ "FlowLabel"
+ ])
(assertRange "TTL" 0 255)
(assertValueOneOf "MacLearning" boolValues)
+ (assertValueOneOf "ReduceARPProxy" boolValues)
+ (assertValueOneOf "L2MissNotification" boolValues)
+ (assertValueOneOf "L3MissNotification" boolValues)
+ (assertValueOneOf "RouteShortCircuit" boolValues)
+ (assertValueOneOf "UDPChecksum" boolValues)
+ (assertValueOneOf "UDP6ZeroChecksumTx" boolValues)
+ (assertValueOneOf "UDP6ZeroChecksumRx" boolValues)
+ (assertValueOneOf "RemoteChecksumTx" boolValues)
+ (assertValueOneOf "RemoteChecksumRx" boolValues)
+ (assertValueOneOf "GroupPolicyExtension" boolValues)
+ (assertRange "FlowLabel" 0 1048575)
];
checkTunnel = checkUnitConfig "Tunnel" [
- (assertOnlyFields ["Local" "Remote" "TOS" "TTL" "DiscoverPathMTU"])
+ (assertOnlyFields [
+ "Local" "Remote" "TOS" "TTL" "DiscoverPathMTU" "IPv6FlowLabel" "CopyDSCP"
+ "EncapsulationLimit" "Key" "InputKey" "OutputKey" "Mode" "Independent"
+ "AllowLocalRemote"
+ ])
(assertRange "TTL" 0 255)
(assertValueOneOf "DiscoverPathMTU" boolValues)
+ (assertValueOneOf "CopyDSCP" boolValues)
+ (assertValueOneOf "Mode" ["ip6ip6" "ipip6" "any"])
+ (assertValueOneOf "Independent" boolValues)
+ (assertValueOneOf "AllowLocalRemote" boolValues)
];
checkPeer = checkUnitConfig "Peer" [
@@ -66,10 +109,11 @@ let
];
tunTapChecks = [
- (assertOnlyFields ["OneQueue" "MultiQueue" "PacketInfo" "User" "Group"])
+ (assertOnlyFields ["OneQueue" "MultiQueue" "PacketInfo" "VNetHeader" "User" "Group"])
(assertValueOneOf "OneQueue" boolValues)
(assertValueOneOf "MultiQueue" boolValues)
(assertValueOneOf "PacketInfo" boolValues)
+ (assertValueOneOf "VNetHeader" boolValues)
];
checkTun = checkUnitConfig "Tun" tunTapChecks;
@@ -79,67 +123,121 @@ let
checkBond = checkUnitConfig "Bond" [
(assertOnlyFields [
"Mode" "TransmitHashPolicy" "LACPTransmitRate" "MIIMonitorSec"
- "UpDelaySec" "DownDelaySec" "GratuitousARP"
+ "UpDelaySec" "DownDelaySec" "LearnPacketIntervalSec" "AdSelect"
+ "FailOverMACPolicy" "ARPValidate" "ARPIntervalSec" "ARPIPTargets"
+ "ARPAllTargets" "PrimaryReselectPolicy" "ResendIGMP" "PacketsPerSlave"
+ "GratuitousARP" "AllSlavesActive" "MinLinks"
])
(assertValueOneOf "Mode" [
"balance-rr" "active-backup" "balance-xor"
"broadcast" "802.3ad" "balance-tlb" "balance-alb"
])
(assertValueOneOf "TransmitHashPolicy" [
- "layer2" "layer3+4" "layer2+3" "encap2+3" "802.3ad" "encap3+4"
+ "layer2" "layer3+4" "layer2+3" "encap2+3" "encap3+4"
])
(assertValueOneOf "LACPTransmitRate" ["slow" "fast"])
+ (assertValueOneOf "AdSelect" ["stable" "bandwidth" "count"])
+ (assertValueOneOf "FailOverMACPolicy" ["none" "active" "follow"])
+ (assertValueOneOf "ARPValidate" ["none" "active" "backup" "all"])
+ (assertValueOneOf "ARPAllTargets" ["any" "all"])
+ (assertValueOneOf "PrimaryReselectPolicy" ["always" "better" "failure"])
+ (assertRange "ResendIGMP" 0 255)
+ (assertRange "PacketsPerSlave" 0 65535)
+ (assertRange "GratuitousARP" 0 255)
+ (assertValueOneOf "AllSlavesActive" boolValues)
];
checkNetwork = checkUnitConfig "Network" [
(assertOnlyFields [
- "Description" "DHCP" "DHCPServer" "IPForward" "IPMasquerade" "IPv4LL" "IPv4LLRoute"
- "LLMNR" "MulticastDNS" "Domains" "Bridge" "Bond" "IPv6PrivacyExtensions" "IPv6Token"
+ "Description" "DHCP" "DHCPServer" "LinkLocalAddressing" "IPv4LLRoute"
+ "LLMNR" "MulticastDNS" "DNSOverTLS" "DNSSEC"
+ "DNSSECNegativeTrustAnchors" "LLDP" "EmitLLDP" "BindCarrier" "Address"
+ "Gateway" "DNS" "Domains" "NTP" "IPForward" "IPMasquerade"
+ "IPv6PrivacyExtensions" "IPv6AcceptRA" "IPv6DuplicateAddressDetection"
+ "IPv6HopLimit" "IPv4ProxyARP" "IPv6ProxyNDP" "IPv6ProxyNDPAddress"
+ "IPv6PrefixDelegation" "IPv6MTUBytes" "Bridge" "Bond" "VRF" "VLAN"
+ "IPVLAN" "MACVLAN" "VXLAN" "Tunnel" "ActiveSlave" "PrimarySlave"
+ "ConfigureWithoutCarrier"
])
- (assertValueOneOf "DHCP" ["both" "none" "v4" "v6"])
+ # Note: For DHCP the values both, none, v4, v6 are deprecated
+ (assertValueOneOf "DHCP" ["yes" "no" "ipv4" "ipv6" "both" "none" "v4" "v6"])
(assertValueOneOf "DHCPServer" boolValues)
+ (assertValueOneOf "LinkLocalAddressing" ["yes" "no" "ipv4" "ipv6"])
+ (assertValueOneOf "IPv4LLRoute" boolValues)
+ (assertValueOneOf "LLMNR" ["yes" "resolve" "no"])
+ (assertValueOneOf "MulticastDNS" ["yes" "resolve" "no"])
+ (assertValueOneOf "DNSOverTLS" ["opportunistic" "no"])
+ (assertValueOneOf "DNSSEC" ["yes" "allow-downgrade" "no"])
+ (assertValueOneOf "LLDP" ["yes" "routers-only" "no"])
+ (assertValueOneOf "EmitLLDP" ["yes" "no" "nearest-bridge" "non-tpmr-bridge" "customer-bridge"])
(assertValueOneOf "IPForward" ["yes" "no" "ipv4" "ipv6"])
(assertValueOneOf "IPMasquerade" boolValues)
- (assertValueOneOf "IPv4LL" boolValues)
- (assertValueOneOf "IPv4LLRoute" boolValues)
- (assertValueOneOf "LLMNR" boolValues)
- (assertValueOneOf "MulticastDNS" boolValues)
(assertValueOneOf "IPv6PrivacyExtensions" ["yes" "no" "prefer-public" "kernel"])
+ (assertValueOneOf "IPv6AcceptRA" boolValues)
+ (assertValueOneOf "IPv4ProxyARP" boolValues)
+ (assertValueOneOf "IPv6ProxyNDP" boolValues)
+ (assertValueOneOf "IPv6PrefixDelegation" boolValues)
+ (assertValueOneOf "ActiveSlave" boolValues)
+ (assertValueOneOf "PrimarySlave" boolValues)
+ (assertValueOneOf "ConfigureWithoutCarrier" boolValues)
];
checkAddress = checkUnitConfig "Address" [
- (assertOnlyFields ["Address" "Peer" "Broadcast" "Label"])
+ (assertOnlyFields [
+ "Address" "Peer" "Broadcast" "Label" "PreferredLifetime" "Scope"
+ "HomeAddress" "DuplicateAddressDetection" "ManageTemporaryAddress"
+ "PrefixRoute" "AutoJoin"
+ ])
(assertHasField "Address")
+ (assertValueOneOf "PreferredLifetime" ["forever" "infinity" "0" 0])
+ (assertValueOneOf "HomeAddress" boolValues)
+ (assertValueOneOf "DuplicateAddressDetection" boolValues)
+ (assertValueOneOf "ManageTemporaryAddress" boolValues)
+ (assertValueOneOf "PrefixRoute" boolValues)
+ (assertValueOneOf "AutoJoin" boolValues)
];
checkRoute = checkUnitConfig "Route" [
- (assertOnlyFields ["Gateway" "Destination" "Metric"])
+ (assertOnlyFields [
+ "Gateway" "GatewayOnlink" "Destination" "Source" "Metric"
+ "IPv6Preference" "Scope" "PreferredSource" "Table" "Protocol" "Type"
+ "InitialCongestionWindow" "InitialAdvertisedReceiveWindow" "QuickAck"
+ "MTUBytes"
+ ])
(assertHasField "Gateway")
];
checkDhcp = checkUnitConfig "DHCP" [
(assertOnlyFields [
- "UseDNS" "UseMTU" "SendHostname" "UseHostname" "UseDomains" "UseRoutes"
- "CriticalConnections" "VendorClassIdentifier" "RequestBroadcast"
- "RouteMetric" "ClientIdentifier"
+ "UseDNS" "UseNTP" "UseMTU" "Anonymize" "SendHostname" "UseHostname"
+ "Hostname" "UseDomains" "UseRoutes" "UseTimezone" "CriticalConnection"
+ "VendorClassIdentifier" "UserClass" "DUIDType"
+ "DUIDRawData" "IAID" "RequestBroadcast" "RouteMetric" "RouteTable"
+ "ListenPort" "RapidCommit"
])
(assertValueOneOf "UseDNS" boolValues)
+ (assertValueOneOf "UseNTP" boolValues)
(assertValueOneOf "UseMTU" boolValues)
+ (assertValueOneOf "Anonymize" boolValues)
(assertValueOneOf "SendHostname" boolValues)
(assertValueOneOf "UseHostname" boolValues)
- (assertValueOneOf "UseDomains" boolValues)
+ (assertValueOneOf "UseDomains" ["yes" "no" "route"])
(assertValueOneOf "UseRoutes" boolValues)
- (assertValueOneOf "CriticalConnections" boolValues)
+ (assertValueOneOf "UseTimezone" boolValues)
+ (assertValueOneOf "CriticalConnection" boolValues)
(assertValueOneOf "RequestBroadcast" boolValues)
+ (assertRange "RouteTable" 0 4294967295)
+ (assertValueOneOf "RapidCommit" boolValues)
];
checkDhcpServer = checkUnitConfig "DHCPServer" [
(assertOnlyFields [
"PoolOffset" "PoolSize" "DefaultLeaseTimeSec" "MaxLeaseTimeSec"
- "EmitDNS" "DNS" "EmitNTP" "NTP" "EmitTimezone" "Timezone"
+ "EmitDNS" "DNS" "EmitNTP" "NTP" "EmitRouter" "EmitTimezone" "Timezone"
])
(assertValueOneOf "EmitDNS" boolValues)
(assertValueOneOf "EmitNTP" boolValues)
+ (assertValueOneOf "EmitRouter" boolValues)
(assertValueOneOf "EmitTimezone" boolValues)
];
@@ -461,6 +559,36 @@ let
'';
};
+ bridge = mkOption {
+ default = [ ];
+ type = types.listOf types.str;
+ description = ''
+ A list of bridge interfaces to be added to the network section of the
+ unit. See <citerefentry><refentrytitle>systemd.network</refentrytitle>
+ <manvolnum>5</manvolnum></citerefentry> for details.
+ '';
+ };
+ bond = mkOption {
+ default = [ ];
+ type = types.listOf types.str;
+ description = ''
+ A list of bond interfaces to be added to the network section of the
+ unit. See <citerefentry><refentrytitle>systemd.network</refentrytitle>
+ <manvolnum>5</manvolnum></citerefentry> for details.
+ '';
+ };
+ vrf = mkOption {
+ default = [ ];
+ type = types.listOf types.str;
+ description = ''
+ A list of vrf interfaces to be added to the network section of the
+ unit. See <citerefentry><refentrytitle>systemd.network</refentrytitle>
+ <manvolnum>5</manvolnum></citerefentry> for details.
+ '';
+ };
vlan = mkOption {
default = [ ];
type = types.listOf types.str;
@@ -619,6 +747,9 @@ let
${concatStringsSep "\n" (map (s: "Gateway=${s}") def.gateway)}
${concatStringsSep "\n" (map (s: "DNS=${s}") def.dns)}
${concatStringsSep "\n" (map (s: "NTP=${s}") def.ntp)}
+ ${concatStringsSep "\n" (map (s: "Bridge=${s}") def.bridge)}
+ ${concatStringsSep "\n" (map (s: "Bond=${s}") def.bond)}
+ ${concatStringsSep "\n" (map (s: "VRF=${s}") def.vrf)}
${concatStringsSep "\n" (map (s: "VLAN=${s}") def.vlan)}
${concatStringsSep "\n" (map (s: "MACVLAN=${s}") def.macvlan)}
${concatStringsSep "\n" (map (s: "VXLAN=${s}") def.vxlan)}
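As a sketch of how the new list options end up in the generated units: the following configuration (interface and unit names are hypothetical, and matchConfig/networkConfig are assumed to be this module's usual per-network options) renders a Bridge= line via the def.bridge mapping above:

```nix
{
  systemd.network.enable = true;

  systemd.network.netdevs."20-br0".netdevConfig = {
    Kind = "bridge";
    Name = "br0";
  };

  systemd.network.networks."30-eth0" = {
    matchConfig.Name = "eth0";
    bridge = [ "br0" ];           # rendered as Bridge=br0
  };

  systemd.network.networks."40-br0" = {
    matchConfig.Name = "br0";
    networkConfig.DHCP = "ipv4";  # newly accepted spelling; "v4" is deprecated
  };
}
```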

@@ -179,7 +179,7 @@ let
fi
done
- if [ -z "${toString pkgs.stdenv.isCross}" ]; then
+ if [ -z "${toString (pkgs.stdenv.hostPlatform != pkgs.stdenv.buildPlatform)}" ]; then
# Make sure that the patchelf'ed binaries still work.
echo "testing patched programs..."
$out/bin/ash -c 'echo hello world' | grep "hello world"
@@ -248,6 +248,14 @@ let
isExecutable = true;
+ postInstall = ''
+ echo checking syntax
+ # check both with bash
+ ${pkgs.bash}/bin/sh -n $target
+ # and with ash shell, just in case
+ ${extraUtils}/bin/ash -n $target
+ '';
inherit udevRules extraUtils modulesClosure;
inherit (config.boot) resumeDevice;

@@ -65,6 +65,7 @@ let
"systemd-user-sessions.service"
"dbus-org.freedesktop.machine1.service"
"user@.service"
+ "user-runtime-dir@.service"
# Journal.
"systemd-journald.socket"
@@ -189,9 +190,8 @@ let
];
makeJobScript = name: text:
- let mkScriptName = s: (replaceChars [ "\\" ] [ "-" ] (shellEscape s) );
- x = pkgs.writeTextFile { name = "unit-script"; executable = true; destination = "/bin/${mkScriptName name}"; inherit text; };
- in "${x}/bin/${mkScriptName name}";
+ let mkScriptName = s: "unit-script-" + (replaceChars [ "\\" "@" ] [ "-" "_" ] (shellEscape s) );
+ in pkgs.writeTextFile { name = mkScriptName name; executable = true; inherit text; };
unitConfig = { config, ... }: {
config = {

@@ -23,12 +23,8 @@ let
kernel = config.boot.kernelPackages;
- packages = if config.boot.zfs.enableLegacyCrypto then {
- spl = kernel.splLegacyCrypto;
- zfs = kernel.zfsLegacyCrypto;
- zfsUser = pkgs.zfsLegacyCrypto;
- } else if config.boot.zfs.enableUnstable then {
- spl = kernel.splUnstable;
+ packages = if config.boot.zfs.enableUnstable then {
+ spl = null;
zfs = kernel.zfsUnstable;
zfsUser = pkgs.zfsUnstable;
} else {
@@ -117,27 +113,6 @@ in
'';
};
- enableLegacyCrypto = mkOption {
- type = types.bool;
- default = false;
- description = ''
- Enabling this option will allow you to continue to use the old format for
- encrypted datasets. With the inclusion of stability patches the format of
- encrypted datasets has changed. They can still be accessed and mounted but
- in read-only mode mounted. It is highly recommended to convert them to
- the new format.
- This option is only for convenience to people that cannot convert their
- datasets to the new format yet and it will be removed in due time.
- For migration strategies from old format to this new one, check the Wiki:
- https://nixos.wiki/wiki/NixOS_on_ZFS#Encrypted_Dataset_Format_Change
- See https://github.com/zfsonlinux/zfs/pull/6864 for more details about
- the stability patches.
- '';
- };
extraPools = mkOption {
type = types.listOf types.str;
default = [];
@@ -350,12 +325,12 @@ in
virtualisation.lxd.zfsSupport = true;
boot = {
- kernelModules = [ "spl" "zfs" ] ;
- extraModulePackages = with packages; [ spl zfs ];
+ kernelModules = [ "zfs" ] ++ optional (!cfgZfs.enableUnstable) "spl";
+ extraModulePackages = with packages; [ zfs ] ++ optional (!cfgZfs.enableUnstable) spl;
};
boot.initrd = mkIf inInitrd {
- kernelModules = [ "spl" "zfs" ];
+ kernelModules = [ "zfs" ] ++ optional (!cfgZfs.enableUnstable) "spl";
extraUtilsCommands =
''
copy_bin_and_libs ${packages.zfsUser}/sbin/zfs
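With the legacy-crypto branch gone, opting into the newer ZFS packages (which no longer ship a separate spl kernel module) stays a one-option change; a configuration sketch:

```nix
{
  boot.supportedFilesystems = [ "zfs" ];
  # Selects kernel.zfsUnstable / pkgs.zfsUnstable above; "spl" is only
  # added to kernelModules when this is false.
  boot.zfs.enableUnstable = true;
}
```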

@@ -55,6 +55,15 @@ with lib;
'';
};
+ device = mkOption {
+ default = "TPPS/2 IBM TrackPoint";
+ type = types.str;
+ description = ''
+ The device name of the trackpoint. You can check with xinput.
+ Some newer devices (example x1c6) use "TPPS/2 Elan TrackPoint".
+ '';
+ };
};
};
@@ -68,12 +77,12 @@ with lib;
(mkIf cfg.enable {
services.udev.extraRules =
''
- ACTION=="add|change", SUBSYSTEM=="input", ATTR{name}=="TPPS/2 IBM TrackPoint", ATTR{device/speed}="${toString cfg.speed}", ATTR{device/sensitivity}="${toString cfg.sensitivity}"
+ ACTION=="add|change", SUBSYSTEM=="input", ATTR{name}=="${cfg.device}", ATTR{device/speed}="${toString cfg.speed}", ATTR{device/sensitivity}="${toString cfg.sensitivity}"
'';
system.activationScripts.trackpoint =
''
- ${config.systemd.package}/bin/udevadm trigger --attr-match=name="TPPS/2 IBM TrackPoint"
+ ${config.systemd.package}/bin/udevadm trigger --attr-match=name="${cfg.device}"
'';
})
@@ -81,7 +90,7 @@ with lib;
services.xserver.inputClassSections =
[''
Identifier "Trackpoint Wheel Emulation"
- MatchProduct "${if cfg.fakeButtons then "PS/2 Generic Mouse" else "ETPS/2 Elantech TrackPoint|Elantech PS/2 TrackPoint|TPPS/2 IBM TrackPoint|DualPoint Stick|Synaptics Inc. Composite TouchPad / TrackPoint|ThinkPad USB Keyboard with TrackPoint|USB Trackpoint pointing device|Composite TouchPad / TrackPoint"}"
+ MatchProduct "${if cfg.fakeButtons then "PS/2 Generic Mouse" else "ETPS/2 Elantech TrackPoint|Elantech PS/2 TrackPoint|TPPS/2 IBM TrackPoint|DualPoint Stick|Synaptics Inc. Composite TouchPad / TrackPoint|ThinkPad USB Keyboard with TrackPoint|USB Trackpoint pointing device|Composite TouchPad / TrackPoint|${cfg.device}"}"
MatchDevicePath "/dev/input/event*"
Option "EmulateWheel" "true"
Option "EmulateWheelButton" "2"

Some files were not shown because too many files have changed in this diff.