Merge remote-tracking branch 'upstream/master' into parnell/fetchdocker

This commit is contained in:
Parnell Springmeyer 2018-02-13 17:28:45 -06:00
commit 0a603ee165
No known key found for this signature in database
GPG Key ID: C7FD72B325BC271F
1975 changed files with 99892 additions and 64306 deletions

10
.github/CODEOWNERS vendored
View File

@ -12,17 +12,17 @@
# Libraries # Libraries
/lib @edolstra @nbp /lib @edolstra @nbp
/lib/systems @edolstra @nbp @ericson2314 /lib/systems @nbp @ericson2314
# Nixpkgs Internals # Nixpkgs Internals
/default.nix @nbp /default.nix @nbp
/pkgs/top-level/default.nix @nbp @Ericson2314 /pkgs/top-level/default.nix @nbp @Ericson2314
/pkgs/top-level/impure.nix @nbp @Ericson2314 /pkgs/top-level/impure.nix @nbp @Ericson2314
/pkgs/top-level/stage.nix @nbp @Ericson2314 /pkgs/top-level/stage.nix @nbp @Ericson2314
/pkgs/stdenv @edolstra /pkgs/stdenv
/pkgs/build-support/cc-wrapper @edolstra @Ericson2314 /pkgs/build-support/cc-wrapper @Ericson2314 @orivej
/pkgs/build-support/bintools-wrapper @edolstra @Ericson2314 /pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/setup-hooks @edolstra @Ericson2314 /pkgs/build-support/setup-hooks @Ericson2314
# NixOS Internals # NixOS Internals
/nixos/default.nix @nbp /nixos/default.nix @nbp

View File

@ -61,7 +61,7 @@
<listitem> <listitem>
<para> <para>
The "target platform" attribute is, unlike the other two attributes, not actually fundamental to the process of building software. The "target platform" attribute is, unlike the other two attributes, not actually fundamental to the process of building software.
Instead, it is only relevant for compatability with building certain specific compilers and build tools. Instead, it is only relevant for compatibility with building certain specific compilers and build tools.
It can be safely ignored for all other packages. It can be safely ignored for all other packages.
</para> </para>
<para> <para>
@ -162,7 +162,7 @@
<para> <para>
A runtime dependency between 2 packages implies that between them both the host and target platforms match. A runtime dependency between 2 packages implies that between them both the host and target platforms match.
This is directly implied by the meaning of "host platform" and "runtime dependency": This is directly implied by the meaning of "host platform" and "runtime dependency":
The package dependency exists while both packages are runnign on a single host platform. The package dependency exists while both packages are running on a single host platform.
</para> </para>
<para> <para>
A build time dependency, however, implies a shift in platforms between the depending package and the depended-on package. A build time dependency, however, implies a shift in platforms between the depending package and the depended-on package.
@ -253,8 +253,19 @@
or also with <varname>crossSystem</varname>, in which case packages run on the latter, but all building happens on the former. or also with <varname>crossSystem</varname>, in which case packages run on the latter, but all building happens on the former.
Both parameters take the same schema as the 3 (build, host, and target) platforms defined in the previous section. Both parameters take the same schema as the 3 (build, host, and target) platforms defined in the previous section.
As mentioned above, <literal>lib.systems.examples</literal> has some platforms which are used as arguments for these parameters in practice. As mentioned above, <literal>lib.systems.examples</literal> has some platforms which are used as arguments for these parameters in practice.
You can use them programmatically, or on the command line like <command>nix-build &lt;nixpkgs&gt; --arg crossSystem '(import &lt;nixpkgs/lib&gt;).systems.examples.fooBarBaz'</command>. You can use them programmatically, or on the command line: <programlisting>
nix-build &lt;nixpkgs&gt; --arg crossSystem '(import &lt;nixpkgs/lib&gt;).systems.examples.fooBarBaz' -A whatever</programlisting>
</para> </para>
<note>
<para>
Eventually we would like to make these platform examples an unnecessary convenience so that <programlisting>
nix-build &lt;nixpkgs&gt; --arg crossSystem.config '&lt;arch&gt;-&lt;os&gt;-&lt;vendor&gt;-&lt;abi&gt;' -A whatever</programlisting>
works in the vast majority of cases.
The problem today is dependencies on other sorts of configuration which aren't given proper defaults.
We rely on the examples to crudely set those configuration parameters in some vaguely sane manner on the user's behalf.
Issue <link xlink:href="https://github.com/NixOS/nixpkgs/issues/34274">#34274</link> tracks this inconvenience along with its root cause in crufty configuration options.
</para>
</note>
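<para>
  As a sketch of the programmatic route mentioned above, using the <literal>aarch64-multiplatform</literal> entry of <literal>lib.systems.examples</literal> purely as an illustration: <programlisting>
import &lt;nixpkgs&gt; {
  # Build on the local platform, produce binaries for the cross platform.
  crossSystem = (import &lt;nixpkgs/lib&gt;).systems.examples.aarch64-multiplatform;
}</programlisting>
  Packages taken from the resulting package set are then cross-compiled accordingly.
</para>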
<para> <para>
While one is free to pass both parameters in full, there's a lot of logic to fill in missing fields. While one is free to pass both parameters in full, there's a lot of logic to fill in missing fields.
As discussed in the previous section, only one of <varname>system</varname>, <varname>config</varname>, and <varname>parsed</varname> is needed to infer the other two. As discussed in the previous section, only one of <varname>system</varname>, <varname>config</varname>, and <varname>parsed</varname> is needed to infer the other two.

View File

@ -334,14 +334,10 @@ navigate there.
Finally, you can run Finally, you can run
```shell ```shell
hoogle server -p 8080 hoogle server -p 8080 --local
``` ```
and navigate to http://localhost:8080/ for your own local and navigate to http://localhost:8080/ for your own local
[Hoogle](https://www.haskell.org/hoogle/). Note, however, that Firefox and [Hoogle](https://www.haskell.org/hoogle/).
possibly other browsers disallow navigation from `http:` to `file:` URIs for
security reasons, which might be quite an inconvenience. See [this
page](http://kb.mozillazine.org/Links_to_local_pages_do_not_work) for
workarounds.
### How to build a Haskell project using Stack ### How to build a Haskell project using Stack

View File

@ -191,7 +191,6 @@ building Python libraries is `buildPythonPackage`. Let's see how we can build th
toolz = buildPythonPackage rec { toolz = buildPythonPackage rec {
pname = "toolz"; pname = "toolz";
version = "0.7.4"; version = "0.7.4";
name = "${pname}-${version}";
src = fetchPypi { src = fetchPypi {
inherit pname version; inherit pname version;
@ -237,7 +236,6 @@ with import <nixpkgs> {};
my_toolz = python35.pkgs.buildPythonPackage rec { my_toolz = python35.pkgs.buildPythonPackage rec {
pname = "toolz"; pname = "toolz";
version = "0.7.4"; version = "0.7.4";
name = "${pname}-${version}";
src = python35.pkgs.fetchPypi { src = python35.pkgs.fetchPypi {
inherit pname version; inherit pname version;
@ -283,15 +281,15 @@ order to build [`datashape`](https://github.com/blaze/datashape).
{ # ... { # ...
datashape = buildPythonPackage rec { datashape = buildPythonPackage rec {
name = "datashape-${version}"; pname = "datashape";
version = "0.4.7"; version = "0.4.7";
src = pkgs.fetchurl { src = fetchPypi {
url = "mirror://pypi/D/DataShape/${name}.tar.gz"; inherit pname version;
sha256 = "14b2ef766d4c9652ab813182e866f493475e65e558bed0822e38bf07bba1a278"; sha256 = "14b2ef766d4c9652ab813182e866f493475e65e558bed0822e38bf07bba1a278";
}; };
buildInputs = with self; [ pytest ]; checkInputs = with self; [ pytest ];
propagatedBuildInputs = with self; [ numpy multipledispatch dateutil ]; propagatedBuildInputs = with self; [ numpy multipledispatch dateutil ];
meta = { meta = {
@ -318,10 +316,11 @@ when building the bindings and are therefore added as `buildInputs`.
{ # ... { # ...
lxml = buildPythonPackage rec { lxml = buildPythonPackage rec {
name = "lxml-3.4.4"; pname = "lxml";
version = "3.4.4";
src = pkgs.fetchurl { src = fetchPypi {
url = "mirror://pypi/l/lxml/${name}.tar.gz"; inherit pname version;
sha256 = "16a0fa97hym9ysdk3rmqz32xdjqmy4w34ld3rm3jf5viqjx65lxk"; sha256 = "16a0fa97hym9ysdk3rmqz32xdjqmy4w34ld3rm3jf5viqjx65lxk";
}; };
@ -351,11 +350,11 @@ and `CFLAGS`.
{ # ... { # ...
pyfftw = buildPythonPackage rec { pyfftw = buildPythonPackage rec {
name = "pyfftw-${version}"; pname = "pyFFTW";
version = "0.9.2"; version = "0.9.2";
src = pkgs.fetchurl { src = fetchPypi {
url = "mirror://pypi/p/pyFFTW/pyFFTW-${version}.tar.gz"; inherit pname version;
sha256 = "f6bbb6afa93085409ab24885a1a3cdb8909f095a142f4d49e346f2bd1b789074"; sha256 = "f6bbb6afa93085409ab24885a1a3cdb8909f095a142f4d49e346f2bd1b789074";
}; };
@ -440,11 +439,11 @@ We first create a function that builds `toolz` in `~/path/to/toolz/release.nix`
{ pkgs, buildPythonPackage }: { pkgs, buildPythonPackage }:
buildPythonPackage rec { buildPythonPackage rec {
name = "toolz-${version}"; pname = "toolz";
version = "0.7.4"; version = "0.7.4";
src = pkgs.fetchurl { src = fetchPypi {
url = "mirror://pypi/t/toolz/toolz-${version}.tar.gz"; inherit pname version;
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd"; sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
}; };
@ -549,25 +548,31 @@ The `buildPythonPackage` function is implemented in
The following is an example: The following is an example:
```nix ```nix
{ # ...
twisted = buildPythonPackage { buildPythonPackage rec {
name = "twisted-8.1.0"; version = "3.3.1";
pname = "pytest";
src = pkgs.fetchurl { preCheck = ''
url = http://tmrc.mit.edu/mirror/twisted/Twisted/8.1/Twisted-8.1.0.tar.bz2; # don't test bash builtins
sha256 = "0q25zbr4xzknaghha72mq57kh53qw1bf8csgp63pm9sfi72qhirl"; rm testing/test_argcomplete.py
}; '';
propagatedBuildInputs = [ self.ZopeInterface ]; src = fetchPypi {
inherit pname version;
sha256 = "cf8436dc59d8695346fcd3ab296de46425ecab00d64096cebe79fb51ecb2eb93";
};
meta = { checkInputs = [ hypothesis ];
homepage = http://twistedmatrix.com/; buildInputs = [ setuptools_scm ];
description = "Twisted, an event-driven networking engine written in Python"; propagatedBuildInputs = [ attrs py setuptools six pluggy ];
license = stdenv.lib.licenses.mit;
}; meta = with stdenv.lib; {
maintainers = with maintainers; [ domenkozar lovek323 madjar lsix ];
description = "Framework for writing tests";
}; };
} }
``` ```
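As a small usage sketch (not part of the example above), a package defined with `buildPythonPackage` is typically consumed through `python.withPackages`; for instance, an ad-hoc environment containing `pytest`:

```nix
with import <nixpkgs> {};

# python3 and withPackages are standard nixpkgs attributes;
# the package choice is only illustrative.
python3.withPackages (ps: with ps; [ pytest ])
```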
The `buildPythonPackage` mainly does four things: The `buildPythonPackage` mainly does four things:
@ -623,7 +628,6 @@ with import <nixpkgs> {};
packageOverrides = self: super: { packageOverrides = self: super: {
pandas = super.pandas.overridePythonAttrs(old: rec { pandas = super.pandas.overridePythonAttrs(old: rec {
version = "0.19.1"; version = "0.19.1";
name = "pandas-${version}";
src = super.fetchPypi { src = super.fetchPypi {
pname = "pandas"; pname = "pandas";
inherit version; inherit version;

View File

@ -79,19 +79,24 @@ an example for a minimal `hello` crate:
Now, the file produced by the call to `carnix`, called `hello.nix`, looks like: Now, the file produced by the call to `carnix`, called `hello.nix`, looks like:
``` ```
with import <nixpkgs> {}; # Generated by carnix 0.6.5: carnix -o hello.nix --src ./. Cargo.lock --standalone
{ lib, buildPlatform, buildRustCrate, fetchgit }:
let kernel = buildPlatform.parsed.kernel.name; let kernel = buildPlatform.parsed.kernel.name;
# ... (content skipped) # ... (content skipped)
hello_0_1_0_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
crateName = "hello";
version = "0.1.0";
authors = [ "Authorname <user@example.com>" ];
src = ./.;
inherit dependencies buildDependencies features;
};
in in
rec { rec {
hello_0_1_0 = hello_0_1_0_ rec {}; hello = f: hello_0_1_0 { features = hello_0_1_0_features { hello_0_1_0 = f; }; };
hello_0_1_0_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
crateName = "hello";
version = "0.1.0";
authors = [ "pe@pijul.org <pe@pijul.org>" ];
src = ./.;
inherit dependencies buildDependencies features;
};
hello_0_1_0 = { features?(hello_0_1_0_features {}) }: hello_0_1_0_ {};
hello_0_1_0_features = f: updateFeatures f (rec {
hello_0_1_0.default = (f.hello_0_1_0.default or true);
}) [ ];
} }
``` ```
@ -103,33 +108,44 @@ dependencies, for instance by adding a single line `libc="*"` to our
following nix file: following nix file:
``` ```
with import <nixpkgs> {}; # Generated by carnix 0.6.5: carnix -o hello.nix --src ./. Cargo.lock --standalone
{ lib, buildPlatform, buildRustCrate, fetchgit }:
let kernel = buildPlatform.parsed.kernel.name; let kernel = buildPlatform.parsed.kernel.name;
# ... (content skipped) # ... (content skipped)
hello_0_1_0_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
crateName = "hello";
version = "0.1.0";
authors = [ "Jörg Thalheim <joerg@thalheim.io>" ];
src = ./.;
inherit dependencies buildDependencies features;
};
libc_0_2_34_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
crateName = "libc";
version = "0.2.34";
authors = [ "The Rust Project Developers" ];
sha256 = "11jmqdxmv0ka10ay0l8nzx0nl7s2lc3dbrnh1mgbr2grzwdyxi2s";
inherit dependencies buildDependencies features;
};
in in
rec { rec {
hello_0_1_0 = hello_0_1_0_ rec { hello = f: hello_0_1_0 { features = hello_0_1_0_features { hello_0_1_0 = f; }; };
dependencies = [ libc_0_2_34 ]; hello_0_1_0_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
crateName = "hello";
version = "0.1.0";
authors = [ "pe@pijul.org <pe@pijul.org>" ];
src = ./.;
inherit dependencies buildDependencies features;
}; };
libc_0_2_34_features."default".from_hello_0_1_0__default = true; libc_0_2_36_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
libc_0_2_34 = libc_0_2_34_ rec { crateName = "libc";
features = mkFeatures libc_0_2_34_features; version = "0.2.36";
authors = [ "The Rust Project Developers" ];
sha256 = "01633h4yfqm0s302fm0dlba469bx8y6cs4nqc8bqrmjqxfxn515l";
inherit dependencies buildDependencies features;
}; };
libc_0_2_34_features."use_std".self_default = hasDefault libc_0_2_34_features; hello_0_1_0 = { features?(hello_0_1_0_features {}) }: hello_0_1_0_ {
dependencies = mapFeatures features ([ libc_0_2_36 ]);
};
hello_0_1_0_features = f: updateFeatures f (rec {
hello_0_1_0.default = (f.hello_0_1_0.default or true);
libc_0_2_36.default = true;
}) [ libc_0_2_36_features ];
libc_0_2_36 = { features?(libc_0_2_36_features {}) }: libc_0_2_36_ {
features = mkFeatures (features.libc_0_2_36 or {});
};
libc_0_2_36_features = f: updateFeatures f (rec {
libc_0_2_36.default = (f.libc_0_2_36.default or true);
libc_0_2_36.use_std =
(f.libc_0_2_36.use_std or false) ||
(f.libc_0_2_36.default or false) ||
(libc_0_2_36.default or false);
}) [];
} }
``` ```
@ -146,7 +162,7 @@ or build inputs by overriding the hello crate in a separate file.
``` ```
with import <nixpkgs> {}; with import <nixpkgs> {};
(import ./hello.nix).hello_0_1_0.override { ((import ./hello.nix).hello {}).override {
crateOverrides = defaultCrateOverrides // { crateOverrides = defaultCrateOverrides // {
hello = attrs: { buildInputs = [ openssl ]; }; hello = attrs: { buildInputs = [ openssl ]; };
}; };
@ -166,7 +182,7 @@ patches the derivation:
``` ```
with import <nixpkgs> {}; with import <nixpkgs> {};
(import ./hello.nix).hello_0_1_0.override { ((import ./hello.nix).hello {}).override {
crateOverrides = defaultCrateOverrides // { crateOverrides = defaultCrateOverrides // {
hello = attrs: lib.optionalAttrs (lib.versionAtLeast attrs.version "1.0") { hello = attrs: lib.optionalAttrs (lib.versionAtLeast attrs.version "1.0") {
postPatch = '' postPatch = ''
@ -187,7 +203,7 @@ crate, we could do:
``` ```
with import <nixpkgs> {}; with import <nixpkgs> {};
(import hello.nix).hello_0_1_0.override { ((import hello.nix).hello {}).override {
crateOverrides = defaultCrateOverrides // { crateOverrides = defaultCrateOverrides // {
libc = attrs: { buildInputs = []; }; libc = attrs: { buildInputs = []; };
}; };
@ -199,23 +215,35 @@ Three more parameters can be overridden:
- The version of rustc used to compile the crate: - The version of rustc used to compile the crate:
``` ```
hello_0_1_0.override { rust = pkgs.rust; }; (hello {}).override { rust = pkgs.rust; };
``` ```
- Whether to build in release mode or debug mode (release mode by - Whether to build in release mode or debug mode (release mode by
default): default):
``` ```
hello_0_1_0.override { release = false; }; (hello {}).override { release = false; };
``` ```
- Whether to print the commands sent to rustc when building - Whether to print the commands sent to rustc when building
(equivalent to `--verbose` in cargo): (equivalent to `--verbose` in cargo):
``` ```
hello_0_1_0.override { verbose = false; }; (hello {}).override { verbose = false; };
``` ```
One can also supply feature switches. For example, if we want to
compile `diesel_cli` only with the `postgres` feature, and no default
features, we would write:
```
(callPackage ./diesel.nix {}).diesel {
default = false;
postgres = true;
}
```
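As a sketch of putting these pieces together (following the override examples above, with `hello.nix` being the file generated by carnix earlier):
```
with import <nixpkgs> {};

# callPackage supplies buildRustCrate and friends; build the crate in debug mode.
((callPackage ./hello.nix {}).hello {}).override {
  release = false;
}
```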
## Using the Rust nightlies overlay ## Using the Rust nightlies overlay

View File

@ -660,6 +660,32 @@ cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
passing <command>-q</command> to the Emacs command. passing <command>-q</command> to the Emacs command.
</para> </para>
<para>
Sometimes <varname>emacsWithPackages</varname> is not enough, as
this package set has some priorities imposed on packages (with
the lowest priority assigned to Melpa Unstable, and the highest to
packages manually defined in
<filename>pkgs/top-level/emacs-packages.nix</filename>). But you
cannot control these priorities when a package is installed as a
dependency. You can override them on a per-package basis by providing
all the required dependencies manually, but this is tedious and there
is always the possibility that an unwanted dependency will sneak in
through some other package. To completely override such a package
you can use <varname>overrideScope</varname>.
</para>
<screen>
overrides = super: self: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesNgGen emacs).overrideScope overrides).emacsWithPackages (p: with p; [
# here both these packages will use the haskell-mode of our own choice
ghc-mod
dante
])
</screen>
</section> </section>
</section> </section>

View File

@ -1802,6 +1802,20 @@ addEnvHooks "$hostOffset" myBashFunction
disabled or patched to work with PaX.</para></listitem> disabled or patched to work with PaX.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry>
<term>autoPatchelfHook</term>
<listitem><para>This is a special setup hook which helps in packaging
proprietary software in that it automatically tries to find missing shared
library dependencies of ELF files. All packages within the
<envar>runtimeDependencies</envar> environment variable are unconditionally
added to executables, which is useful for programs that use
<citerefentry>
<refentrytitle>dlopen</refentrytitle>
<manvolnum>3</manvolnum>
</citerefentry>
to load libraries at runtime.</para></listitem>
</varlistentry>
</variablelist> </variablelist>
</para> </para>
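<para>
  As a sketch of how the <varname>autoPatchelfHook</varname> entry above is typically used (the package and library names here are purely illustrative): <programlisting>
stdenv.mkDerivation {
  name = "example-prebuilt-app";
  src = ./example-app.tar.gz;
  # Pulls in the hook that finds and patches missing shared library dependencies.
  nativeBuildInputs = [ autoPatchelfHook ];
  # Libraries the binaries link against directly are searched for among the inputs.
  buildInputs = [ zlib ];
  # Libraries loaded via dlopen(3) at runtime are added unconditionally.
  runtimeDependencies = [ libGL ];
  installPhase = ''
    mkdir -p $out/opt
    cp -r . $out/opt
  '';
}</programlisting>
</para>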

View File

@ -1,7 +1,7 @@
{ lib }: { lib }:
let let
inherit (builtins) attrNames isFunction; inherit (builtins) attrNames;
in in
@ -36,7 +36,7 @@ rec {
overrideDerivation = drv: f: overrideDerivation = drv: f:
let let
newDrv = derivation (drv.drvAttrs // (f drv)); newDrv = derivation (drv.drvAttrs // (f drv));
in addPassthru newDrv ( in lib.flip (extendDerivation true) newDrv (
{ meta = drv.meta or {}; { meta = drv.meta or {};
passthru = if drv ? passthru then drv.passthru else {}; passthru = if drv ? passthru then drv.passthru else {};
} }
@ -72,7 +72,7 @@ rec {
makeOverridable = f: origArgs: makeOverridable = f: origArgs:
let let
ff = f origArgs; ff = f origArgs;
overrideWith = newArgs: origArgs // (if builtins.isFunction newArgs then newArgs origArgs else newArgs); overrideWith = newArgs: origArgs // (if lib.isFunction newArgs then newArgs origArgs else newArgs);
in in
if builtins.isAttrs ff then (ff // { if builtins.isAttrs ff then (ff // {
override = newArgs: makeOverridable f (overrideWith newArgs); override = newArgs: makeOverridable f (overrideWith newArgs);
@ -81,7 +81,7 @@ rec {
${if ff ? overrideAttrs then "overrideAttrs" else null} = fdrv: ${if ff ? overrideAttrs then "overrideAttrs" else null} = fdrv:
makeOverridable (args: (f args).overrideAttrs fdrv) origArgs; makeOverridable (args: (f args).overrideAttrs fdrv) origArgs;
}) })
else if builtins.isFunction ff then { else if lib.isFunction ff then {
override = newArgs: makeOverridable f (overrideWith newArgs); override = newArgs: makeOverridable f (overrideWith newArgs);
__functor = self: ff; __functor = self: ff;
overrideDerivation = throw "overrideDerivation not yet supported for functors"; overrideDerivation = throw "overrideDerivation not yet supported for functors";
@ -112,8 +112,8 @@ rec {
*/ */
callPackageWith = autoArgs: fn: args: callPackageWith = autoArgs: fn: args:
let let
f = if builtins.isFunction fn then fn else import fn; f = if lib.isFunction fn then fn else import fn;
auto = builtins.intersectAttrs (builtins.functionArgs f) autoArgs; auto = builtins.intersectAttrs (lib.functionArgs f) autoArgs;
in makeOverridable f (auto // args); in makeOverridable f (auto // args);
@ -122,8 +122,8 @@ rec {
individual attributes. */ individual attributes. */
callPackagesWith = autoArgs: fn: args: callPackagesWith = autoArgs: fn: args:
let let
f = if builtins.isFunction fn then fn else import fn; f = if lib.isFunction fn then fn else import fn;
auto = builtins.intersectAttrs (builtins.functionArgs f) autoArgs; auto = builtins.intersectAttrs (lib.functionArgs f) autoArgs;
origArgs = auto // args; origArgs = auto // args;
pkgs = f origArgs; pkgs = f origArgs;
mkAttrOverridable = name: pkg: makeOverridable (newArgs: (f newArgs).${name}) origArgs; mkAttrOverridable = name: pkg: makeOverridable (newArgs: (f newArgs).${name}) origArgs;
@ -131,8 +131,8 @@ rec {
/* Add attributes to each output of a derivation without changing /* Add attributes to each output of a derivation without changing
the derivation itself. */ the derivation itself and check a given condition when evaluating. */
addPassthru = drv: passthru: extendDerivation = condition: passthru: drv:
let let
outputs = drv.outputs or [ "out" ]; outputs = drv.outputs or [ "out" ];
@ -142,13 +142,24 @@ rec {
outputToAttrListElement = outputName: outputToAttrListElement = outputName:
{ name = outputName; { name = outputName;
value = commonAttrs // { value = commonAttrs // {
inherit (drv.${outputName}) outPath drvPath type outputName; inherit (drv.${outputName}) type outputName;
drvPath = assert condition; drv.${outputName}.drvPath;
outPath = assert condition; drv.${outputName}.outPath;
}; };
}; };
outputsList = map outputToAttrListElement outputs; outputsList = map outputToAttrListElement outputs;
in commonAttrs // { outputUnspecified = true; }; in commonAttrs // {
outputUnspecified = true;
drvPath = assert condition; drv.drvPath;
outPath = assert condition; drv.outPath;
};
/* Add attributes to each output of a derivation without changing
the derivation itself. */
addPassthru =
lib.warn "`addPassthru drv passthru` is deprecated, replace with `extendDerivation true passthru drv`"
(drv: passthru: extendDerivation true passthru drv);
/* Strip a derivation of all non-essential attributes, returning /* Strip a derivation of all non-essential attributes, returning
only those needed by hydra-eval-jobs. Also strictly evaluate the only those needed by hydra-eval-jobs. Also strictly evaluate the

View File

@ -2,10 +2,10 @@
let let
inherit (builtins) trace attrNamesToStr isAttrs isFunction isList isInt inherit (builtins) trace attrNamesToStr isAttrs isList isInt
isString isBool head substring attrNames; isString isBool head substring attrNames;
inherit (lib) all id mapAttrsFlatten elem; inherit (lib) all id mapAttrsFlatten elem isFunction;
in in

View File

@ -51,12 +51,13 @@ let
inherit (builtins) add addErrorContext attrNames inherit (builtins) add addErrorContext attrNames
concatLists deepSeq elem elemAt filter genericClosure genList concatLists deepSeq elem elemAt filter genericClosure genList
getAttr hasAttr head isAttrs isBool isFunction isInt isList getAttr hasAttr head isAttrs isBool isInt isList
isString length lessThan listToAttrs pathExists readFile isString length lessThan listToAttrs pathExists readFile
replaceStrings seq stringLength sub substring tail; replaceStrings seq stringLength sub substring tail;
inherit (trivial) id const concat or and boolToString mergeAttrs inherit (trivial) id const concat or and boolToString mergeAttrs
flip mapNullable inNixShell min max importJSON warn info flip mapNullable inNixShell min max importJSON warn info
nixpkgsVersion mod; nixpkgsVersion mod compare splitByAndCompare
functionArgs setFunctionArgs isFunction;
inherit (fixedPoints) fix fix' extends composeExtensions inherit (fixedPoints) fix fix' extends composeExtensions
makeExtensible makeExtensibleWithCustomName; makeExtensible makeExtensibleWithCustomName;
@ -71,8 +72,8 @@ let
inherit (lists) singleton foldr fold foldl foldl' imap0 imap1 inherit (lists) singleton foldr fold foldl foldl' imap0 imap1
concatMap flatten remove findSingle findFirst any all count concatMap flatten remove findSingle findFirst any all count
optional optionals toList range partition zipListsWith zipLists optional optionals toList range partition zipListsWith zipLists
reverseList listDfs toposort sort take drop sublist last init reverseList listDfs toposort sort compareLists take drop sublist
crossLists unique intersectLists subtractLists last init crossLists unique intersectLists subtractLists
mutuallyExclusive; mutuallyExclusive;
inherit (strings) concatStrings concatMapStrings concatImapStrings inherit (strings) concatStrings concatMapStrings concatImapStrings
intersperse concatStringsSep concatMapStringsSep intersperse concatStringsSep concatMapStringsSep
@ -87,13 +88,14 @@ let
inherit (stringsWithDeps) textClosureList textClosureMap inherit (stringsWithDeps) textClosureList textClosureMap
noDepEntry fullDepEntry packEntry stringAfter; noDepEntry fullDepEntry packEntry stringAfter;
inherit (customisation) overrideDerivation makeOverridable inherit (customisation) overrideDerivation makeOverridable
callPackageWith callPackagesWith addPassthru hydraJob makeScope; callPackageWith callPackagesWith extendDerivation addPassthru
hydraJob makeScope;
inherit (meta) addMetaAttrs dontDistribute setName updateName inherit (meta) addMetaAttrs dontDistribute setName updateName
appendToName mapDerivationAttrset lowPrio lowPrioSet hiPrio appendToName mapDerivationAttrset lowPrio lowPrioSet hiPrio
hiPrioSet; hiPrioSet;
inherit (sources) pathType pathIsDirectory cleanSourceFilter inherit (sources) pathType pathIsDirectory cleanSourceFilter
cleanSource sourceByRegex sourceFilesBySuffices cleanSource sourceByRegex sourceFilesBySuffices
commitIdFromGitRepo cleanSourceWith; commitIdFromGitRepo cleanSourceWith pathHasContext canCleanSource;
inherit (modules) evalModules closeModules unifyModuleSyntax inherit (modules) evalModules closeModules unifyModuleSyntax
applyIfFunction unpackSubmodule packSubmodule mergeModules applyIfFunction unpackSubmodule packSubmodule mergeModules
mergeModules' mergeOptionDecls evalOptionValue mergeDefinitions mergeModules' mergeOptionDecls evalOptionValue mergeDefinitions

View File

@ -1,6 +1,6 @@
{ lib }: { lib }:
let let
inherit (builtins) isFunction head tail isList isAttrs isInt attrNames; inherit (builtins) head tail isList isAttrs isInt attrNames;
in in
@ -53,7 +53,7 @@ rec {
f: # the function applied to the arguments f: # the function applied to the arguments
initial: # you pass attrs, the functions below are passing a function taking the fix argument initial: # you pass attrs, the functions below are passing a function taking the fix argument
let let
takeFixed = if isFunction initial then initial else (fixed : initial); # transform initial to an expression always taking the fixed argument takeFixed = if lib.isFunction initial then initial else (fixed : initial); # transform initial to an expression always taking the fixed argument
tidy = args: tidy = args:
let # apply all functions given in "applyPreTidy" in sequence let # apply all functions given in "applyPreTidy" in sequence
applyPreTidyFun = fold ( n: a: x: n ( a x ) ) lib.id (maybeAttr "applyPreTidy" [] args); applyPreTidyFun = fold ( n: a: x: n ( a x ) ) lib.id (maybeAttr "applyPreTidy" [] args);
@ -63,7 +63,7 @@ rec {
let args = takeFixed fixed; let args = takeFixed fixed;
mergeFun = args.${n}; mergeFun = args.${n};
in if isAttrs x then (mergeFun args x) in if isAttrs x then (mergeFun args x)
else assert isFunction x; else assert lib.isFunction x;
mergeFun args (x ( args // { inherit fixed; })); mergeFun args (x ( args // { inherit fixed; }));
in overridableDelayableArgs f newArgs; in overridableDelayableArgs f newArgs;
in in
@ -374,7 +374,7 @@ rec {
if isAttrs x then if isAttrs x then
if x ? outPath then "derivation" if x ? outPath then "derivation"
else "attrs" else "attrs"
else if isFunction x then "function" else if lib.isFunction x then "function"
else if isList x then "list" else if isList x then "list"
else if x == true then "bool" else if x == true then "bool"
else if x == false then "bool" else if x == false then "bool"

View File

@ -14,6 +14,8 @@ let
libAttr = lib.attrsets; libAttr = lib.attrsets;
flipMapAttrs = flip libAttr.mapAttrs; flipMapAttrs = flip libAttr.mapAttrs;
inherit (lib) isFunction;
in in
rec { rec {
@ -110,7 +112,7 @@ rec {
else if isString v then "\"" + v + "\"" else if isString v then "\"" + v + "\""
else if null == v then "null" else if null == v then "null"
else if isFunction v then else if isFunction v then
let fna = functionArgs v; let fna = lib.functionArgs v;
showFnas = concatStringsSep "," (libAttr.mapAttrsToList showFnas = concatStringsSep "," (libAttr.mapAttrsToList
(name: hasDefVal: if hasDefVal then "(${name})" else name) (name: hasDefVal: if hasDefVal then "(${name})" else name)
fna); fna);

View File

@ -79,6 +79,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = ''Beerware License''; fullName = ''Beerware License'';
}; };
bsd0 = spdx {
spdxId = "0BSD";
fullName = "BSD Zero Clause License";
};
bsd2 = spdx { bsd2 = spdx {
spdxId = "BSD-2-Clause"; spdxId = "BSD-2-Clause";
fullName = ''BSD 2-clause "Simplified" License''; fullName = ''BSD 2-clause "Simplified" License'';
@ -200,6 +205,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Eclipse Public License 1.0"; fullName = "Eclipse Public License 1.0";
}; };
epl20 = spdx {
spdxId = "EPL-2.0";
fullName = "Eclipse Public License 2.0";
};
epson = { epson = {
fullName = "Seiko Epson Corporation Software License Agreement for Linux"; fullName = "Seiko Epson Corporation Software License Agreement for Linux";
url = https://download.ebz.epson.net/dsc/du/02/eula/global/LINUX_EN.html; url = https://download.ebz.epson.net/dsc/du/02/eula/global/LINUX_EN.html;
@ -477,6 +487,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "PostgreSQL License"; fullName = "PostgreSQL License";
}; };
postman = {
fullName = "Postman EULA";
url = https://www.getpostman.com/licenses/postman_base_app;
free = false;
};
psfl = spdx { psfl = spdx {
spdxId = "Python-2.0"; spdxId = "Python-2.0";
fullName = "Python Software Foundation License version 2"; fullName = "Python Software Foundation License version 2";

View File

@ -385,6 +385,30 @@ rec {
if len < 2 then list if len < 2 then list
else (sort strictLess pivot.left) ++ [ first ] ++ (sort strictLess pivot.right)); else (sort strictLess pivot.left) ++ [ first ] ++ (sort strictLess pivot.right));
/* Compare two lists element-by-element.
Example:
compareLists compare [] []
=> 0
compareLists compare [] [ "a" ]
=> -1
compareLists compare [ "a" ] []
=> 1
compareLists compare [ "a" "b" ] [ "a" "c" ]
=> 1
*/
compareLists = cmp: a: b:
if a == []
then if b == []
then 0
else -1
else if b == []
then 1
else let rel = cmp (head a) (head b); in
if rel == 0
then compareLists cmp (tail a) (tail b)
else rel;
/* Return the first (at most) N elements of a list. /* Return the first (at most) N elements of a list.
Example: Example:
@ -440,8 +464,12 @@ rec {
init = list: assert list != []; take (length list - 1) list; init = list: assert list != []; take (length list - 1) list;
/* FIXME(zimbatm) Not used anywhere /* return the image of the cross product of some lists by a function
*/
Example:
crossLists (x:y: "${toString x}${toString y}") [[1 2] [3 4]]
=> [ "13" "14" "23" "24" ]
*/
crossLists = f: foldl (fs: args: concatMap (f: map f args) fs) [f]; crossLists = f: foldl (fs: args: concatMap (f: map f args) fs) [f];

View File

@ -47,6 +47,7 @@
andir = "Andreas Rammhold <andreas@rammhold.de>"; andir = "Andreas Rammhold <andreas@rammhold.de>";
andres = "Andres Loeh <ksnixos@andres-loeh.de>"; andres = "Andres Loeh <ksnixos@andres-loeh.de>";
andrestylianos = "Andre S. Ramos <andre.stylianos@gmail.com>"; andrestylianos = "Andre S. Ramos <andre.stylianos@gmail.com>";
andrew-d = "Andrew Dunham <andrew@du.nham.ca>";
andrewrk = "Andrew Kelley <superjoe30@gmail.com>"; andrewrk = "Andrew Kelley <superjoe30@gmail.com>";
andsild = "Anders Sildnes <andsild@gmail.com>"; andsild = "Anders Sildnes <andsild@gmail.com>";
aneeshusa = "Aneesh Agrawal <aneeshusa@gmail.com>"; aneeshusa = "Aneesh Agrawal <aneeshusa@gmail.com>";
@ -55,11 +56,13 @@
antonxy = "Anton Schirg <anton.schirg@posteo.de>"; antonxy = "Anton Schirg <anton.schirg@posteo.de>";
apeschar = "Albert Peschar <albert@peschar.net>"; apeschar = "Albert Peschar <albert@peschar.net>";
apeyroux = "Alexandre Peyroux <alex@px.io>"; apeyroux = "Alexandre Peyroux <alex@px.io>";
arcadio = "Arcadio Rubio García <arc@well.ox.ac.uk>";
ardumont = "Antoine R. Dumont <eniotna.t@gmail.com>"; ardumont = "Antoine R. Dumont <eniotna.t@gmail.com>";
aristid = "Aristid Breitkreuz <aristidb@gmail.com>"; aristid = "Aristid Breitkreuz <aristidb@gmail.com>";
arobyn = "Alexei Robyn <shados@shados.net>"; arobyn = "Alexei Robyn <shados@shados.net>";
artuuge = "Artur E. Ruuge <artuuge@gmail.com>"; artuuge = "Artur E. Ruuge <artuuge@gmail.com>";
ashalkhakov = "Artyom Shalkhakov <artyom.shalkhakov@gmail.com>"; ashalkhakov = "Artyom Shalkhakov <artyom.shalkhakov@gmail.com>";
ashgillman = "Ashley Gillman <gillmanash@gmail.com>";
aske = "Kirill Boltaev <aske@fmap.me>"; aske = "Kirill Boltaev <aske@fmap.me>";
asppsa = "Alastair Pharo <asppsa@gmail.com>"; asppsa = "Alastair Pharo <asppsa@gmail.com>";
astsmtl = "Alexander Tsamutali <astsmtl@yandex.ru>"; astsmtl = "Alexander Tsamutali <astsmtl@yandex.ru>";
@ -117,6 +120,7 @@
chaoflow = "Florian Friesdorf <flo@chaoflow.net>"; chaoflow = "Florian Friesdorf <flo@chaoflow.net>";
chattered = "Phil Scott <me@philscotted.com>"; chattered = "Phil Scott <me@philscotted.com>";
ChengCat = "Yucheng Zhang <yu@cheng.cat>"; ChengCat = "Yucheng Zhang <yu@cheng.cat>";
chiiruno = "Okina Matara <okinan@protonmail.com>";
choochootrain = "Hurshal Patel <hurshal@imap.cc>"; choochootrain = "Hurshal Patel <hurshal@imap.cc>";
chpatrick = "Patrick Chilton <chpatrick@gmail.com>"; chpatrick = "Patrick Chilton <chpatrick@gmail.com>";
chreekat = "Bryan Richter <b@chreekat.net>"; chreekat = "Bryan Richter <b@chreekat.net>";
@ -222,12 +226,14 @@
ertes = "Ertugrul Söylemez <esz@posteo.de>"; ertes = "Ertugrul Söylemez <esz@posteo.de>";
ethercrow = "Dmitry Ivanov <ethercrow@gmail.com>"; ethercrow = "Dmitry Ivanov <ethercrow@gmail.com>";
etu = "Elis Hirwing <elis@hirwing.se>"; etu = "Elis Hirwing <elis@hirwing.se>";
exfalso = "Andras Slemmer <0slemi0@gmail.com>";
exi = "Reno Reckling <nixos@reckling.org>"; exi = "Reno Reckling <nixos@reckling.org>";
exlevan = "Alexey Levan <exlevan@gmail.com>"; exlevan = "Alexey Levan <exlevan@gmail.com>";
expipiplus1 = "Joe Hermaszewski <nix@monoid.al>"; expipiplus1 = "Joe Hermaszewski <nix@monoid.al>";
fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>"; fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>";
falsifian = "James Cook <james.cook@utoronto.ca>"; falsifian = "James Cook <james.cook@utoronto.ca>";
fare = "Francois-Rene Rideau <fahree@gmail.com>"; fare = "Francois-Rene Rideau <fahree@gmail.com>";
f-breidenstein = "Felix Breidenstein <mail@felixbreidenstein.de>";
fgaz = "Francesco Gazzetta <francygazz@gmail.com>"; fgaz = "Francesco Gazzetta <francygazz@gmail.com>";
FireyFly = "Jonas Höglund <nix@firefly.nu>"; FireyFly = "Jonas Höglund <nix@firefly.nu>";
flokli = "Florian Klink <flokli@flokli.de>"; flokli = "Florian Klink <flokli@flokli.de>";
@ -254,6 +260,7 @@
gavin = "Gavin Rogers <gavin@praxeology.co.uk>"; gavin = "Gavin Rogers <gavin@praxeology.co.uk>";
gebner = "Gabriel Ebner <gebner@gebner.org>"; gebner = "Gabriel Ebner <gebner@gebner.org>";
geistesk = "Alvar Penning <post@0x21.biz>"; geistesk = "Alvar Penning <post@0x21.biz>";
genesis = "Ronan Bignaux <ronan@aimao.org>";
georgewhewell = "George Whewell <georgerw@gmail.com>"; georgewhewell = "George Whewell <georgerw@gmail.com>";
gilligan = "Tobias Pflug <tobias.pflug@gmail.com>"; gilligan = "Tobias Pflug <tobias.pflug@gmail.com>";
giogadi = "Luis G. Torres <lgtorres42@gmail.com>"; giogadi = "Luis G. Torres <lgtorres42@gmail.com>";
@ -298,6 +305,7 @@
ivan-tkatchev = "Ivan Tkatchev <tkatchev@gmail.com>"; ivan-tkatchev = "Ivan Tkatchev <tkatchev@gmail.com>";
ixmatus = "Parnell Springmeyer <parnell@digitalmentat.com>"; ixmatus = "Parnell Springmeyer <parnell@digitalmentat.com>";
izorkin = "Yurii Izorkin <Izorkin@gmail.com>"; izorkin = "Yurii Izorkin <Izorkin@gmail.com>";
ixxie = "Matan Bendix Shenhav <matan@fluxcraft.net>";
j-keck = "Jürgen Keck <jhyphenkeck@gmail.com>"; j-keck = "Jürgen Keck <jhyphenkeck@gmail.com>";
jagajaga = "Arseniy Seroka <ars.seroka@gmail.com>"; jagajaga = "Arseniy Seroka <ars.seroka@gmail.com>";
jammerful = "jammerful <jammerful@gmail.com>"; jammerful = "jammerful <jammerful@gmail.com>";
@ -324,6 +332,7 @@
joelmo = "Joel Moberg <joel.moberg@gmail.com>"; joelmo = "Joel Moberg <joel.moberg@gmail.com>";
joelteon = "Joel Taylor <me@joelt.io>"; joelteon = "Joel Taylor <me@joelt.io>";
johbo = "Johannes Bornhold <johannes@bornhold.name>"; johbo = "Johannes Bornhold <johannes@bornhold.name>";
johnazoidberg = "Daniel Schäfer <git@danielschaefer.me>";
johnmh = "John M. Harris, Jr. <johnmh@openblox.org>"; johnmh = "John M. Harris, Jr. <johnmh@openblox.org>";
johnramsden = "John Ramsden <johnramsden@riseup.net>"; johnramsden = "John Ramsden <johnramsden@riseup.net>";
joko = "Ioannis Koutras <ioannis.koutras@gmail.com>"; joko = "Ioannis Koutras <ioannis.koutras@gmail.com>";
@ -383,12 +392,14 @@
lovek323 = "Jason O'Conal <jason@oconal.id.au>"; lovek323 = "Jason O'Conal <jason@oconal.id.au>";
lowfatcomputing = "Andreas Wagner <andreas.wagner@lowfatcomputing.org>"; lowfatcomputing = "Andreas Wagner <andreas.wagner@lowfatcomputing.org>";
lsix = "Lancelot SIX <lsix@lancelotsix.com>"; lsix = "Lancelot SIX <lsix@lancelotsix.com>";
lschuermann = "Leon Schuermann <leon.git@is.currently.online>";
ltavard = "Laure Tavard <laure.tavard@univ-grenoble-alpes.fr>"; ltavard = "Laure Tavard <laure.tavard@univ-grenoble-alpes.fr>";
lucas8 = "Luc Chabassier <luc.linux@mailoo.org>"; lucas8 = "Luc Chabassier <luc.linux@mailoo.org>";
ludo = "Ludovic Courtès <ludo@gnu.org>"; ludo = "Ludovic Courtès <ludo@gnu.org>";
lufia = "Kyohei Kadota <lufia@lufia.org>"; lufia = "Kyohei Kadota <lufia@lufia.org>";
luispedro = "Luis Pedro Coelho <luis@luispedro.org>"; luispedro = "Luis Pedro Coelho <luis@luispedro.org>";
lukego = "Luke Gorrie <luke@snabb.co>"; lukego = "Luke Gorrie <luke@snabb.co>";
luz = "Luz <luz666@daum.net>";
lw = "Sergey Sofeychuk <lw@fmap.me>"; lw = "Sergey Sofeychuk <lw@fmap.me>";
lyt = "Tim Liou <wheatdoge@gmail.com>"; lyt = "Tim Liou <wheatdoge@gmail.com>";
m3tti = "Mathaeus Sander <mathaeus.peter.sander@gmail.com>"; m3tti = "Mathaeus Sander <mathaeus.peter.sander@gmail.com>";
@ -439,7 +450,9 @@
mirrexagon = "Andrew Abbott <mirrexagon@mirrexagon.com>"; mirrexagon = "Andrew Abbott <mirrexagon@mirrexagon.com>";
mjanczyk = "Marcin Janczyk <m@dragonvr.pl>"; mjanczyk = "Marcin Janczyk <m@dragonvr.pl>";
mjp = "Mike Playle <mike@mythik.co.uk>"; # github = "MikePlayle"; mjp = "Mike Playle <mike@mythik.co.uk>"; # github = "MikePlayle";
mkg = "Mark K Gardner <mkg@vt.edu>";
mlieberman85 = "Michael Lieberman <mlieberman85@gmail.com>"; mlieberman85 = "Michael Lieberman <mlieberman85@gmail.com>";
mmahut = "Marek Mahut <marek.mahut@gmail.com>";
moaxcp = "John Mercier <moaxcp@gmail.com>"; moaxcp = "John Mercier <moaxcp@gmail.com>";
modulistic = "Pablo Costa <modulistic@gmail.com>"; modulistic = "Pablo Costa <modulistic@gmail.com>";
mog = "Matthew O'Gorman <mog-lists@rldn.net>"; mog = "Matthew O'Gorman <mog-lists@rldn.net>";
@ -452,6 +465,7 @@
mounium = "Katona László <muoniurn@gmail.com>"; mounium = "Katona László <muoniurn@gmail.com>";
MP2E = "Cray Elliott <MP2E@archlinux.us>"; MP2E = "Cray Elliott <MP2E@archlinux.us>";
mpcsh = "Mark Cohen <m@mpc.sh>"; mpcsh = "Mark Cohen <m@mpc.sh>";
mpickering = "Matthew Pickering <matthewtpickering@gmail.com>";
mpscholten = "Marc Scholten <marc@mpscholten.de>"; mpscholten = "Marc Scholten <marc@mpscholten.de>";
mpsyco = "Francis St-Amour <fr.st-amour@gmail.com>"; mpsyco = "Francis St-Amour <fr.st-amour@gmail.com>";
mrVanDalo = "Ingolf Wanger <contact@ingolf-wagner.de>"; mrVanDalo = "Ingolf Wanger <contact@ingolf-wagner.de>";
@ -479,7 +493,9 @@
nicknovitski = "Nick Novitski <nixpkgs@nicknovitski.com>"; nicknovitski = "Nick Novitski <nixpkgs@nicknovitski.com>";
nico202 = "Nicolò Balzarotti <anothersms@gmail.com>"; nico202 = "Nicolò Balzarotti <anothersms@gmail.com>";
NikolaMandic = "Ratko Mladic <nikola@mandic.email>"; NikolaMandic = "Ratko Mladic <nikola@mandic.email>";
nipav = "Niko Pavlinek <niko.pavlinek@gmail.com>";
nixy = "Andrew R. M. <nixy@nixy.moe>"; nixy = "Andrew R. M. <nixy@nixy.moe>";
nmattia = "Nicolas Mattia <nicolas@nmattia.com>";
nocoolnametom = "Tom Doggett <nocoolnametom@gmail.com>"; nocoolnametom = "Tom Doggett <nocoolnametom@gmail.com>";
notthemessiah = "Brian Cohen <brian.cohen.88@gmail.com>"; notthemessiah = "Brian Cohen <brian.cohen.88@gmail.com>";
np = "Nicolas Pouillard <np.nix@nicolaspouillard.fr>"; np = "Nicolas Pouillard <np.nix@nicolaspouillard.fr>";
@ -505,6 +521,7 @@
pakhfn = "Fedor Pakhomov <pakhfn@gmail.com>"; pakhfn = "Fedor Pakhomov <pakhfn@gmail.com>";
panaeon = "Vitalii Voloshyn <vitalii.voloshyn@gmail.com"; panaeon = "Vitalii Voloshyn <vitalii.voloshyn@gmail.com";
paperdigits = "Mica Semrick <mica@silentumbrella.com>"; paperdigits = "Mica Semrick <mica@silentumbrella.com>";
paraseba = "Sebastian Galkin <paraseba@gmail.com>";
pashev = "Igor Pashev <pashev.igor@gmail.com>"; pashev = "Igor Pashev <pashev.igor@gmail.com>";
patternspandemic = "Brad Christensen <patternspandemic@live.com>"; patternspandemic = "Brad Christensen <patternspandemic@live.com>";
pawelpacana = "Paweł Pacana <pawel.pacana@gmail.com>"; pawelpacana = "Paweł Pacana <pawel.pacana@gmail.com>";
@ -532,11 +549,12 @@
pmahoney = "Patrick Mahoney <pat@polycrystal.org>"; pmahoney = "Patrick Mahoney <pat@polycrystal.org>";
pmeunier = "Pierre-Étienne Meunier <pierre-etienne.meunier@inria.fr>"; pmeunier = "Pierre-Étienne Meunier <pierre-etienne.meunier@inria.fr>";
pmiddend = "Philipp Middendorf <pmidden@secure.mailbox.org>"; pmiddend = "Philipp Middendorf <pmidden@secure.mailbox.org>";
pneumaticat = "Kevin Liu <kevin@potatofrom.space>";
polyrod = "Maurizio Di Pietro <dc1mdp@gmail.com>"; polyrod = "Maurizio Di Pietro <dc1mdp@gmail.com>";
pradeepchhetri = "Pradeep Chhetri <pradeep.chhetri89@gmail.com>"; pradeepchhetri = "Pradeep Chhetri <pradeep.chhetri89@gmail.com>";
prikhi = "Pavan Rikhi <pavan.rikhi@gmail.com>"; prikhi = "Pavan Rikhi <pavan.rikhi@gmail.com>";
primeos = "Michael Weiss <dev.primeos@gmail.com>"; primeos = "Michael Weiss <dev.primeos@gmail.com>";
profpatsch = "Profpatsch <mail@profpatsch.de>"; Profpatsch = "Profpatsch <mail@profpatsch.de>";
proglodyte = "Proglodyte <proglodyte23@gmail.com>"; proglodyte = "Proglodyte <proglodyte23@gmail.com>";
pshendry = "Paul Hendry <paul@pshendry.com>"; pshendry = "Paul Hendry <paul@pshendry.com>";
psibi = "Sibi <sibi@psibi.in>"; psibi = "Sibi <sibi@psibi.in>";
@ -552,6 +570,7 @@
rasendubi = "Alexey Shmalko <rasen.dubi@gmail.com>"; rasendubi = "Alexey Shmalko <rasen.dubi@gmail.com>";
raskin = "Michael Raskin <7c6f434c@mail.ru>"; raskin = "Michael Raskin <7c6f434c@mail.ru>";
ravloony = "Tom Macdonald <ravloony@gmail.com>"; ravloony = "Tom Macdonald <ravloony@gmail.com>";
razvan = "Răzvan Flavius Panda <razvan.panda@gmail.com>";
rbasso = "Rafael Basso <rbasso@sharpgeeks.net>"; rbasso = "Rafael Basso <rbasso@sharpgeeks.net>";
redbaron = "Maxim Ivanov <ivanov.maxim@gmail.com>"; redbaron = "Maxim Ivanov <ivanov.maxim@gmail.com>";
redvers = "Redvers Davies <red@infect.me>"; redvers = "Redvers Davies <red@infect.me>";
@ -593,6 +612,7 @@
rzetterberg = "Richard Zetterberg <richard.zetterberg@gmail.com>"; rzetterberg = "Richard Zetterberg <richard.zetterberg@gmail.com>";
s1lvester = "Markus Silvester <s1lvester@bockhacker.me>"; s1lvester = "Markus Silvester <s1lvester@bockhacker.me>";
samdroid-apps = "Sam Parkinson <sam@sam.today>"; samdroid-apps = "Sam Parkinson <sam@sam.today>";
samueldr = "Samuel Dionne-Riel <samuel@dionne-riel.com>";
samuelrivas = "Samuel Rivas <samuelrivas@gmail.com>"; samuelrivas = "Samuel Rivas <samuelrivas@gmail.com>";
sander = "Sander van der Burg <s.vanderburg@tudelft.nl>"; sander = "Sander van der Burg <s.vanderburg@tudelft.nl>";
sargon = "Daniel Ehlers <danielehlers@mindeye.net>"; sargon = "Daniel Ehlers <danielehlers@mindeye.net>";
@ -600,11 +620,14 @@
schmitthenner = "Fabian Schmitthenner <development@schmitthenner.eu>"; schmitthenner = "Fabian Schmitthenner <development@schmitthenner.eu>";
schneefux = "schneefux <schneefux+nixos_pkg@schneefux.xyz>"; schneefux = "schneefux <schneefux+nixos_pkg@schneefux.xyz>";
schristo = "Scott Christopher <schristopher@konputa.com>"; schristo = "Scott Christopher <schristopher@konputa.com>";
scode = "Peter Schuller <peter.schuller@infidyne.com>";
scolobb = "Sergiu Ivanov <sivanov@colimite.fr>"; scolobb = "Sergiu Ivanov <sivanov@colimite.fr>";
sdll = "Sasha Illarionov <sasha.delly@gmail.com>"; sdll = "Sasha Illarionov <sasha.delly@gmail.com>";
SeanZicari = "Sean Zicari <sean.zicari@gmail.com>"; SeanZicari = "Sean Zicari <sean.zicari@gmail.com>";
sellout = "Greg Pfeil <greg@technomadic.org>";
sepi = "Raffael Mancini <raffael@mancini.lu>"; sepi = "Raffael Mancini <raffael@mancini.lu>";
seppeljordan = "Sebastian Jordan <sebastian.jordan.mail@googlemail.com>"; seppeljordan = "Sebastian Jordan <sebastian.jordan.mail@googlemail.com>";
sfrijters = "Stefan Frijters <sfrijters@gmail.com>";
shanemikel = "Shane Pearlman <shanemikel1@gmail.com>"; shanemikel = "Shane Pearlman <shanemikel1@gmail.com>";
shawndellysse = "Shawn Dellysse <sdellysse@gmail.com>"; shawndellysse = "Shawn Dellysse <sdellysse@gmail.com>";
sheenobu = "Sheena Artrip <sheena.artrip@gmail.com>"; sheenobu = "Sheena Artrip <sheena.artrip@gmail.com>";
@ -638,6 +661,7 @@
sternenseemann = "Lukas Epple <post@lukasepple.de>"; sternenseemann = "Lukas Epple <post@lukasepple.de>";
stesie = "Stefan Siegl <stesie@brokenpipe.de>"; stesie = "Stefan Siegl <stesie@brokenpipe.de>";
steveej = "Stefan Junker <mail@stefanjunker.de>"; steveej = "Stefan Junker <mail@stefanjunker.de>";
StillerHarpo = "Florian Engel <florianengel39@gmail.com>";
stumoss = "Stuart Moss <samoss@gmail.com>"; stumoss = "Stuart Moss <samoss@gmail.com>";
SuprDewd = "Bjarki Ágúst Guðmundsson <suprdewd@gmail.com>"; SuprDewd = "Bjarki Ágúst Guðmundsson <suprdewd@gmail.com>";
swarren83 = "Shawn Warren <shawn.w.warren@gmail.com>"; swarren83 = "Shawn Warren <shawn.w.warren@gmail.com>";
@ -667,6 +691,7 @@
ThomasMader = "Thomas Mader <thomas.mader@gmail.com>"; ThomasMader = "Thomas Mader <thomas.mader@gmail.com>";
thoughtpolice = "Austin Seipp <aseipp@pobox.com>"; thoughtpolice = "Austin Seipp <aseipp@pobox.com>";
thpham = "Thomas Pham <thomas.pham@ithings.ch>"; thpham = "Thomas Pham <thomas.pham@ithings.ch>";
tilpner = "Till Höppner <till@hoeppner.ws>";
timbertson = "Tim Cuthbertson <tim@gfxmonk.net>"; timbertson = "Tim Cuthbertson <tim@gfxmonk.net>";
timokau = "Timo Kaufmann <timokau@zoho.com>"; timokau = "Timo Kaufmann <timokau@zoho.com>";
tiramiseb = "Sébastien Maccagnoni <sebastien@maccagnoni.eu>"; tiramiseb = "Sébastien Maccagnoni <sebastien@maccagnoni.eu>";
@ -677,6 +702,7 @@
tomberek = "Thomas Bereknyei <tomberek@gmail.com>"; tomberek = "Thomas Bereknyei <tomberek@gmail.com>";
tomsmeets = "Tom Smeets <tom@tsmeets.nl>"; tomsmeets = "Tom Smeets <tom@tsmeets.nl>";
travisbhartwell = "Travis B. Hartwell <nafai@travishartwell.net>"; travisbhartwell = "Travis B. Hartwell <nafai@travishartwell.net>";
treemo = "Matthieu Chevrier <matthieu.chevrier@treemo.fr>";
trevorj = "Trevor Joynson <nix@trevor.joynson.io>"; trevorj = "Trevor Joynson <nix@trevor.joynson.io>";
trino = "Hubert Mühlhans <muehlhans.hubert@ekodia.de>"; trino = "Hubert Mühlhans <muehlhans.hubert@ekodia.de>";
tstrobel = "Thomas Strobel <4ZKTUB6TEP74PYJOPWIR013S2AV29YUBW5F9ZH2F4D5UMJUJ6S@hash.domains>"; tstrobel = "Thomas Strobel <4ZKTUB6TEP74PYJOPWIR013S2AV29YUBW5F9ZH2F4D5UMJUJ6S@hash.domains>";
@ -686,15 +712,18 @@
tvorog = "Marsel Zaripov <marszaripov@gmail.com>"; tvorog = "Marsel Zaripov <marszaripov@gmail.com>";
tweber = "Thorsten Weber <tw+nixpkgs@360vier.de>"; tweber = "Thorsten Weber <tw+nixpkgs@360vier.de>";
twey = "James Twey Kay <twey@twey.co.uk>"; twey = "James Twey Kay <twey@twey.co.uk>";
unode = "Renato Alves <alves.rjc@gmail.com>";
uralbash = "Svintsov Dmitry <root@uralbash.ru>"; uralbash = "Svintsov Dmitry <root@uralbash.ru>";
utdemir = "Utku Demir <me@utdemir.com>"; utdemir = "Utku Demir <me@utdemir.com>";
#urkud = "Yury G. Kudryashov <urkud+nix@ya.ru>"; inactive since 2012 #urkud = "Yury G. Kudryashov <urkud+nix@ya.ru>"; inactive since 2012
uwap = "uwap <me@uwap.name>"; uwap = "uwap <me@uwap.name>";
va1entin = "Valentin Heidelberger <github@valentinsblog.com>";
vaibhavsagar = "Vaibhav Sagar <vaibhavsagar@gmail.com>"; vaibhavsagar = "Vaibhav Sagar <vaibhavsagar@gmail.com>";
valeriangalliat = "Valérian Galliat <val@codejam.info>"; valeriangalliat = "Valérian Galliat <val@codejam.info>";
vandenoever = "Jos van den Oever <jos@vandenoever.info>"; vandenoever = "Jos van den Oever <jos@vandenoever.info>";
vanschelven = "Klaas van Schelven <klaas@vanschelven.com>"; vanschelven = "Klaas van Schelven <klaas@vanschelven.com>";
vanzef = "Ivan Solyankin <vanzef@gmail.com>"; vanzef = "Ivan Solyankin <vanzef@gmail.com>";
varunpatro = "Varun Patro <varun.kumar.patro@gmail.com>";
vbgl = "Vincent Laporte <Vincent.Laporte@gmail.com>"; vbgl = "Vincent Laporte <Vincent.Laporte@gmail.com>";
vbmithr = "Vincent Bernardoff <vb@luminar.eu.org>"; vbmithr = "Vincent Bernardoff <vb@luminar.eu.org>";
vcunat = "Vladimír Čunát <vcunat@gmail.com>"; vcunat = "Vladimír Čunát <vcunat@gmail.com>";
@ -729,11 +758,14 @@
wyvie = "Elijah Rum <elijahrum@gmail.com>"; wyvie = "Elijah Rum <elijahrum@gmail.com>";
xaverdh = "Dominik Xaver Hörl <hoe.dom@gmx.de>"; xaverdh = "Dominik Xaver Hörl <hoe.dom@gmx.de>";
xnwdd = "Guillermo NWDD <nwdd+nixos@no.team>"; xnwdd = "Guillermo NWDD <nwdd+nixos@no.team>";
xurei = "Olivier Bourdoux <olivier.bourdoux@gmail.com>";
xvapx = "Marti Serra <marti.serra.coscollano@gmail.com>"; xvapx = "Marti Serra <marti.serra.coscollano@gmail.com>";
xwvvvvwx = "David Terry <davidterry@posteo.de>"; xwvvvvwx = "David Terry <davidterry@posteo.de>";
xzfc = "Albert Safin <xzfcpw@gmail.com>"; xzfc = "Albert Safin <xzfcpw@gmail.com>";
y0no = "Yoann Ono <y0no@y0no.fr>";
yarr = "Dmitry V. <savraz@gmail.com>"; yarr = "Dmitry V. <savraz@gmail.com>";
yegortimoshenko = "Yegor Timoshenko <yegortimoshenko@gmail.com>"; yegortimoshenko = "Yegor Timoshenko <yegortimoshenko@gmail.com>";
yesbox = "Jesper Geertsen Jonsson <jesper.geertsen.jonsson@gmail.com>";
ylwghst = "Burim Augustin Berisa <ylwghst@onionmail.info>"; ylwghst = "Burim Augustin Berisa <ylwghst@onionmail.info>";
yochai = "Yochai <yochai@titat.info>"; yochai = "Yochai <yochai@titat.info>";
yorickvp = "Yorick van Pelt <yorickvanpelt@gmail.com>"; yorickvp = "Yorick van Pelt <yorickvanpelt@gmail.com>";

View File

@ -155,7 +155,7 @@ rec {
# a module will resolve strictly the attributes used as argument but # a module will resolve strictly the attributes used as argument but
# not their values. The values are forwarding the result of the # not their values. The values are forwarding the result of the
# evaluation of the option. # evaluation of the option.
requiredArgs = builtins.attrNames (builtins.functionArgs f); requiredArgs = builtins.attrNames (lib.functionArgs f);
context = name: ''while evaluating the module argument `${name}' in "${key}":''; context = name: ''while evaluating the module argument `${name}' in "${key}":'';
extraArgs = builtins.listToAttrs (map (name: { extraArgs = builtins.listToAttrs (map (name: {
inherit name; inherit name;

View File

@ -14,6 +14,7 @@ rec {
, defaultText ? null # Textual representation of the default, for in the manual. , defaultText ? null # Textual representation of the default, for in the manual.
, example ? null # Example value used in the manual. , example ? null # Example value used in the manual.
, description ? null # String describing the option. , description ? null # String describing the option.
, relatedPackages ? null # Related packages used in the manual (see `genRelatedPackages` in ../nixos/doc/manual/default.nix).
, type ? null # Option type, providing type-checking and value merging. , type ? null # Option type, providing type-checking and value merging.
, apply ? null # Function that converts the option value to something else. , apply ? null # Function that converts the option value to something else.
, internal ? null # Whether the option is for NixOS developers only. , internal ? null # Whether the option is for NixOS developers only.
@ -76,7 +77,6 @@ rec {
getValues = map (x: x.value); getValues = map (x: x.value);
getFiles = map (x: x.file); getFiles = map (x: x.file);
# Generate documentation template from the list of option declaration like # Generate documentation template from the list of option declaration like
# the set generated with filterOptionSets. # the set generated with filterOptionSets.
optionAttrSetToDocList = optionAttrSetToDocList' []; optionAttrSetToDocList = optionAttrSetToDocList' [];
@ -85,6 +85,7 @@ rec {
concatMap (opt: concatMap (opt:
let let
docOption = rec { docOption = rec {
loc = opt.loc;
name = showOption opt.loc; name = showOption opt.loc;
description = opt.description or (throw "Option `${name}' has no description."); description = opt.description or (throw "Option `${name}' has no description.");
declarations = filter (x: x != unknownModule) opt.declarations; declarations = filter (x: x != unknownModule) opt.declarations;
@ -93,9 +94,10 @@ rec {
readOnly = opt.readOnly or false; readOnly = opt.readOnly or false;
type = opt.type.description or null; type = opt.type.description or null;
} }
// (if opt ? example then { example = scrubOptionValue opt.example; } else {}) // optionalAttrs (opt ? example) { example = scrubOptionValue opt.example; }
// (if opt ? default then { default = scrubOptionValue opt.default; } else {}) // optionalAttrs (opt ? default) { default = scrubOptionValue opt.default; }
// (if opt ? defaultText then { default = opt.defaultText; } else {}); // optionalAttrs (opt ? defaultText) { default = opt.defaultText; }
// optionalAttrs (opt ? relatedPackages && opt.relatedPackages != null) { inherit (opt) relatedPackages; };
subOptions = subOptions =
let ss = opt.type.getSubOptions opt.loc; let ss = opt.type.getSubOptions opt.loc;

View File

@ -93,4 +93,8 @@ rec {
else lib.head matchRef else lib.head matchRef
else throw ("Not a .git directory: " + path); else throw ("Not a .git directory: " + path);
in lib.flip readCommitFromFile "HEAD"; in lib.flip readCommitFromFile "HEAD";
pathHasContext = builtins.hasContext or (lib.hasPrefix builtins.storeDir);
canCleanSource = src: src ? _isLibCleanSourceWith || !(pathHasContext (toString src));
} }

View File

@ -1,8 +1,8 @@
{ lib }: { lib }:
let let
inherit (lib) lists; inherit (lib) lists;
parse = import ./parse.nix { inherit lib; }; inherit (lib.systems) parse;
inherit (import ./inspect.nix { inherit lib; }) predicates; inherit (lib.systems.inspect) predicates;
inherit (lib.attrsets) matchAttrs; inherit (lib.attrsets) matchAttrs;
all = [ all = [

View File

@ -11,44 +11,33 @@ rec {
sheevaplug = rec { sheevaplug = rec {
config = "armv5tel-unknown-linux-gnueabi"; config = "armv5tel-unknown-linux-gnueabi";
bigEndian = false;
arch = "armv5tel"; arch = "armv5tel";
float = "soft"; float = "soft";
withTLS = true;
libc = "glibc"; libc = "glibc";
platform = platforms.sheevaplug; platform = platforms.sheevaplug;
openssl.system = "linux-generic32";
}; };
raspberryPi = rec { raspberryPi = rec {
config = "armv6l-unknown-linux-gnueabihf"; config = "armv6l-unknown-linux-gnueabihf";
bigEndian = false;
arch = "armv6l"; arch = "armv6l";
float = "hard"; float = "hard";
fpu = "vfp"; fpu = "vfp";
withTLS = true;
libc = "glibc"; libc = "glibc";
platform = platforms.raspberrypi; platform = platforms.raspberrypi;
openssl.system = "linux-generic32";
}; };
armv7l-hf-multiplatform = rec { armv7l-hf-multiplatform = rec {
config = "arm-unknown-linux-gnueabihf"; config = "arm-unknown-linux-gnueabihf";
bigEndian = false;
arch = "armv7-a"; arch = "armv7-a";
float = "hard"; float = "hard";
fpu = "vfpv3-d16"; fpu = "vfpv3-d16";
withTLS = true;
libc = "glibc"; libc = "glibc";
platform = platforms.armv7l-hf-multiplatform; platform = platforms.armv7l-hf-multiplatform;
openssl.system = "linux-generic32";
}; };
aarch64-multiplatform = rec { aarch64-multiplatform = rec {
config = "aarch64-unknown-linux-gnu"; config = "aarch64-unknown-linux-gnu";
bigEndian = false;
arch = "aarch64"; arch = "aarch64";
withTLS = true;
libc = "glibc"; libc = "glibc";
platform = platforms.aarch64-multiplatform; platform = platforms.aarch64-multiplatform;
}; };
@ -62,24 +51,16 @@ rec {
arch = "armv5tel"; arch = "armv5tel";
config = "armv5tel-unknown-linux-gnueabi"; config = "armv5tel-unknown-linux-gnueabi";
float = "soft"; float = "soft";
platform = platforms.pogoplug4;
libc = "glibc"; libc = "glibc";
platform = platforms.pogoplug4;
withTLS = true;
openssl.system = "linux-generic32";
}; };
fuloongminipc = rec { fuloongminipc = rec {
config = "mips64el-unknown-linux-gnu"; config = "mips64el-unknown-linux-gnu";
bigEndian = false;
arch = "mips"; arch = "mips";
float = "hard"; float = "hard";
withTLS = true;
libc = "glibc"; libc = "glibc";
platform = platforms.fuloong2f_n32; platform = platforms.fuloong2f_n32;
openssl.system = "linux-generic32";
}; };
# #


@ -5,8 +5,6 @@ with lib.lists;
rec { rec {
patterns = rec { patterns = rec {
"32bit" = { cpu = { bits = 32; }; };
"64bit" = { cpu = { bits = 64; }; };
i686 = { cpu = cpuTypes.i686; }; i686 = { cpu = cpuTypes.i686; };
x86_64 = { cpu = cpuTypes.x86_64; }; x86_64 = { cpu = cpuTypes.x86_64; };
PowerPC = { cpu = cpuTypes.powerpc; }; PowerPC = { cpu = cpuTypes.powerpc; };
@ -14,6 +12,11 @@ rec {
Arm = { cpu = { family = "arm"; }; }; Arm = { cpu = { family = "arm"; }; };
Aarch64 = { cpu = { family = "aarch64"; }; }; Aarch64 = { cpu = { family = "aarch64"; }; };
Mips = { cpu = { family = "mips"; }; }; Mips = { cpu = { family = "mips"; }; };
RiscV = { cpu = { family = "riscv"; }; };
Wasm = { cpu = { family = "wasm"; }; };
"32bit" = { cpu = { bits = 32; }; };
"64bit" = { cpu = { bits = 64; }; };
BigEndian = { cpu = { significantByte = significantBytes.bigEndian; }; }; BigEndian = { cpu = { significantByte = significantBytes.bigEndian; }; };
LittleEndian = { cpu = { significantByte = significantBytes.littleEndian; }; }; LittleEndian = { cpu = { significantByte = significantBytes.littleEndian; }; };


@ -4,6 +4,16 @@
# http://llvm.org/docs/doxygen/html/Triple_8cpp_source.html especially # http://llvm.org/docs/doxygen/html/Triple_8cpp_source.html especially
# Triple::normalize. Parsing should essentially act as a more conservative # Triple::normalize. Parsing should essentially act as a more conservative
# version of that last function. # version of that last function.
#
# Most of the types below come in "open" and "closed" pairs. The open ones
# specify what information we need to know about systems in general, and the
# closed ones are sub-types representing the whitelist of systems we support in
# practice.
#
# Code in the remainder of nixpkgs shouldn't rely on the closed ones in
# e.g. exhaustive cases. It's more of a sanity check to make sure nobody defines
# systems that overlap with existing ones and won't notice something amiss.
#
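# Illustrative example (an assumption, not part of this file): a hand-written
# CPU description can satisfy the open type while failing the closed one:
#   let myCpu = { bits = 64; significantByte = significantBytes.littleEndian; family = "mycpu"; };
#   in types.openCpuType.check myCpu    =>  true   (well-formed)
#      types.cpuType.check myCpu        =>  false  (not in the cpuTypes whitelist)
#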
{ lib }: { lib }:
with lib.lists; with lib.lists;
with lib.types; with lib.types;
@ -11,29 +21,52 @@ with lib.attrsets;
with (import ./inspect.nix { inherit lib; }).predicates; with (import ./inspect.nix { inherit lib; }).predicates;
let let
setTypesAssert = type: pred: inherit (lib.options) mergeOneOption;
setTypes = type:
mapAttrs (name: value: mapAttrs (name: value:
assert pred value; assert type.check value;
setType type ({ inherit name; } // value)); setType type.name ({ inherit name; } // value));
setTypes = type: setTypesAssert type (_: true);
in in
rec { rec {
isSignificantByte = isType "significant-byte"; ################################################################################
significantBytes = setTypes "significant-byte" {
types.openSignifiantByte = mkOptionType {
name = "significant-byte";
description = "Endianness";
merge = mergeOneOption;
};
types.significantByte = enum (attrValues significantBytes);
significantBytes = setTypes types.openSignifiantByte {
bigEndian = {}; bigEndian = {};
littleEndian = {}; littleEndian = {};
}; };
isCpuType = isType "cpu-type"; ################################################################################
cpuTypes = with significantBytes; setTypesAssert "cpu-type"
(x: elem x.bits [8 16 32 64 128] # Reasonable power of 2
&& (if 8 < x.bits types.bitWidth = enum [ 8 16 32 64 128 ];
then isSignificantByte x.significantByte
else !(x ? significantByte))) ################################################################################
{
types.openCpuType = mkOptionType {
name = "cpu-type";
description = "instruction set architecture name and information";
merge = mergeOneOption;
check = x: types.bitWidth.check x.bits
&& (if 8 < x.bits
then types.significantByte.check x.significantByte
else !(x ? significantByte));
};
types.cpuType = enum (attrValues cpuTypes);
cpuTypes = with significantBytes; setTypes types.openCpuType {
arm = { bits = 32; significantByte = littleEndian; family = "arm"; }; arm = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv5tel = { bits = 32; significantByte = littleEndian; family = "arm"; }; armv5tel = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv6l = { bits = 32; significantByte = littleEndian; family = "arm"; }; armv6l = { bits = 32; significantByte = littleEndian; family = "arm"; };
@ -44,18 +77,40 @@ rec {
x86_64 = { bits = 64; significantByte = littleEndian; family = "x86"; }; x86_64 = { bits = 64; significantByte = littleEndian; family = "x86"; };
mips64el = { bits = 32; significantByte = littleEndian; family = "mips"; }; mips64el = { bits = 32; significantByte = littleEndian; family = "mips"; };
powerpc = { bits = 32; significantByte = bigEndian; family = "power"; }; powerpc = { bits = 32; significantByte = bigEndian; family = "power"; };
riscv32 = { bits = 32; significantByte = littleEndian; family = "riscv"; };
riscv64 = { bits = 64; significantByte = littleEndian; family = "riscv"; };
wasm32 = { bits = 32; significantByte = littleEndian; family = "wasm"; };
wasm64 = { bits = 64; significantByte = littleEndian; family = "wasm"; };
}; };
isVendor = isType "vendor"; ################################################################################
vendors = setTypes "vendor" {
types.openVendor = mkOptionType {
name = "vendor";
description = "vendor for the platform";
merge = mergeOneOption;
};
types.vendor = enum (attrValues vendors);
vendors = setTypes types.openVendor {
apple = {}; apple = {};
pc = {}; pc = {};
unknown = {}; unknown = {};
}; };
isExecFormat = isType "exec-format"; ################################################################################
execFormats = setTypes "exec-format" {
types.openExecFormat = mkOptionType {
name = "exec-format";
description = "executable container used by the kernel";
merge = mergeOneOption;
};
types.execFormat = enum (attrValues execFormats);
execFormats = setTypes types.openExecFormat {
aout = {}; # a.out aout = {}; # a.out
elf = {}; elf = {};
macho = {}; macho = {};
@ -64,15 +119,33 @@ rec {
unknown = {}; unknown = {};
}; };
isKernelFamily = isType "kernel-family"; ################################################################################
kernelFamilies = setTypes "kernel-family" {
types.openKernelFamily = mkOptionType {
name = "exec-format";
description = "executable container used by the kernel";
merge = mergeOneOption;
};
types.kernelFamily = enum (attrValues kernelFamilies);
kernelFamilies = setTypes types.openKernelFamily {
bsd = {}; bsd = {};
}; };
isKernel = x: isType "kernel" x; ################################################################################
kernels = with execFormats; with kernelFamilies; setTypesAssert "kernel"
(x: isExecFormat x.execFormat && all isKernelFamily (attrValues x.families)) types.openKernel = mkOptionType {
{ name = "kernel";
description = "kernel name and information";
merge = mergeOneOption;
check = x: types.execFormat.check x.execFormat
&& all types.kernelFamily.check (attrValues x.families);
};
types.kernel = enum (attrValues kernels);
kernels = with execFormats; with kernelFamilies; setTypes types.openKernel {
darwin = { execFormat = macho; families = { }; }; darwin = { execFormat = macho; families = { }; };
freebsd = { execFormat = elf; families = { inherit bsd; }; }; freebsd = { execFormat = elf; families = { inherit bsd; }; };
hurd = { execFormat = elf; families = { }; }; hurd = { execFormat = elf; families = { }; };
@ -89,8 +162,17 @@ rec {
win32 = kernels.windows; win32 = kernels.windows;
}; };
isAbi = isType "abi"; ################################################################################
abis = setTypes "abi" {
types.openAbi = mkOptionType {
name = "abi";
description = "binary interface for compiled code and syscalls";
merge = mergeOneOption;
};
types.abi = enum (attrValues abis);
abis = setTypes types.openAbi {
cygnus = {}; cygnus = {};
gnu = {}; gnu = {};
msvc = {}; msvc = {};
@ -102,12 +184,24 @@ rec {
unknown = {}; unknown = {};
}; };
################################################################################
types.system = mkOptionType {
name = "system";
description = "fully parsed representation of llvm- or nix-style platform tuple";
merge = mergeOneOption;
check = { cpu, vendor, kernel, abi }:
types.cpuType.check cpu
&& types.vendor.check vendor
&& types.kernel.check kernel
&& types.abi.check abi;
};
isSystem = isType "system"; isSystem = isType "system";
mkSystem = { cpu, vendor, kernel, abi }:
assert isCpuType cpu && isVendor vendor && isKernel kernel && isAbi abi; mkSystem = components:
setType "system" { assert types.system.check components;
inherit cpu vendor kernel abi; setType "system" components;
};
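# Illustrative usage (an assumption, not from this commit): a parsed system
# value is built from the closed types defined above, e.g.
#   mkSystem {
#     cpu = cpuTypes.x86_64;
#     vendor = vendors.unknown;
#     kernel = kernels.linux;
#     abi = abis.gnu;
#   }
# and the assertion rejects any component outside those whitelists.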
mkSkeletonFromList = l: { mkSkeletonFromList = l: {
"2" = # We only do 2-part hacks for things Nix already supports "2" = # We only do 2-part hacks for things Nix already supports
@ -170,4 +264,6 @@ rec {
optAbi = lib.optionalString (abi != abis.unknown) "-${abi.name}"; optAbi = lib.optionalString (abi != abis.unknown) "-${abi.name}";
in "${cpu.name}-${vendor.name}-${kernel.name}${optAbi}"; in "${cpu.name}-${vendor.name}-${kernel.name}${optAbi}";
################################################################################
} }

View File

@ -479,6 +479,11 @@ rec {
kernelPreferBuiltin = true; kernelPreferBuiltin = true;
kernelTarget = "zImage"; kernelTarget = "zImage";
kernelExtraConfig = '' kernelExtraConfig = ''
# Serial port for Raspberry Pi 3. Upstream forgot to add it to the ARMv7 defconfig.
SERIAL_8250_BCM2835AUX y
SERIAL_8250_EXTENDED y
SERIAL_8250_SHARE_IRQ y
# Fix broken sunxi-sid nvmem driver. # Fix broken sunxi-sid nvmem driver.
TI_CPTS y TI_CPTS y

View File

@ -52,7 +52,7 @@ rec {
# Pull in some builtins not included elsewhere. # Pull in some builtins not included elsewhere.
inherit (builtins) inherit (builtins)
pathExists readFile isBool isFunction pathExists readFile isBool
isInt add sub lessThan isInt add sub lessThan
seq deepSeq genericClosure; seq deepSeq genericClosure;
@ -81,6 +81,42 @@ rec {
*/ */
mod = base: int: base - (int * (builtins.div base int)); mod = base: int: base - (int * (builtins.div base int));
/* C-style comparisons
a < b, compare a b => -1
a == b, compare a b => 0
a > b, compare a b => 1
*/
compare = a: b:
if a < b
then -1
else if a > b
then 1
else 0;
/* Split type into two subtypes by predicate `p`, take all elements
of the first subtype to be less than all the elements of the
second subtype, compare elements of a single subtype with `yes`
and `no` respectively.
Example:
let cmp = splitByAndCompare (hasPrefix "foo") compare compare; in
cmp "a" "z" => -1
cmp "fooa" "fooz" => -1
cmp "f" "a" => 1
cmp "fooa" "a" => -1
# while
compare "fooa" "a" => 1
*/
splitByAndCompare = p: yes: no: a: b:
if p a
then if p b then yes a b else -1
else if p b then 1 else no a b;
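/* A small sketch (an assumption, not from this commit) of combining the two:
   sort option names so anything starting with "enable" comes first, and the
   rest alphabetically:
     sort (a: b: splitByAndCompare (hasPrefix "enable") compare compare a b < 0)
          [ "zsh" "enableFoo" "bar" ]
     => [ "enableFoo" "bar" "zsh" ]
*/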
/* Reads a JSON file. */ /* Reads a JSON file. */
importJSON = path: importJSON = path:
builtins.fromJSON (builtins.readFile path); builtins.fromJSON (builtins.readFile path);
@ -99,4 +135,29 @@ rec {
*/ */
warn = msg: builtins.trace "WARNING: ${msg}"; warn = msg: builtins.trace "WARNING: ${msg}";
info = msg: builtins.trace "INFO: ${msg}"; info = msg: builtins.trace "INFO: ${msg}";
# | Add metadata about expected function arguments to a function.
# The metadata should match the format given by
# builtins.functionArgs, i.e. a set from expected argument to a bool
# representing whether that argument has a default or not.
# setFunctionArgs : (a → b) → Map String Bool → (a → b)
#
# This function is necessary because you can't dynamically create a
# function of the { a, b ? foo, ... }: format, but some facilities
# like callPackage expect to be able to query expected arguments.
setFunctionArgs = f: args:
{ # TODO: Should we add call-time "type" checking like built in?
__functor = self: f;
__functionArgs = args;
};
# | Extract the expected function arguments from a function.
# This works both with nix-native { a, b ? foo, ... }: style
# functions and functions with args set with 'setFunctionArgs'. It
# has the same return type and semantics as builtins.functionArgs.
# functionArgs : (a → b) → Map String Bool.
functionArgs = f: f.__functionArgs or (builtins.functionArgs f);
isFunction = f: builtins.isFunction f ||
(f ? __functor && isFunction (f.__functor f));
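# Illustrative usage (an assumption, not from this commit):
#   let g = setFunctionArgs (attrs: attrs.a + attrs.b) { a = false; b = false; };
#   in functionArgs g       =>  { a = false; b = false; }
#      isFunction g         =>  true   (a functor, not a builtin lambda)
#      g { a = 1; b = 2; }  =>  3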
} }


@ -256,6 +256,10 @@ rec {
functor = (defaultFunctor name) // { wrapped = elemType; }; functor = (defaultFunctor name) // { wrapped = elemType; };
}; };
nonEmptyListOf = elemType:
let list = addCheck (types.listOf elemType) (l: l != []);
in list // { description = "non-empty " + list.description; };
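# For instance (illustrative, not part of this commit), an option declared as
#   mkOption { type = types.nonEmptyListOf types.str; }
# accepts [ "wheel" ] but rejects [] when the value is type-checked.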
attrsOf = elemType: mkOptionType rec { attrsOf = elemType: mkOptionType rec {
name = "attrsOf"; name = "attrsOf";
description = "attribute set of ${elemType.description}s"; description = "attribute set of ${elemType.description}s";


@ -4,11 +4,13 @@
# Usage $0 debian-patches.txt debian-patches.nix # Usage $0 debian-patches.txt debian-patches.nix
# An example input and output files can be found in applications/graphics/xara/ # An example input and output files can be found in applications/graphics/xara/
DEB_URL=http://patch-tracker.debian.org/patch/series/dl DEB_URL=https://sources.debian.org/data/main
declare -a deb_patches declare -a deb_patches
mapfile -t deb_patches < $1 mapfile -t deb_patches < $1
prefix="${DEB_URL}/${deb_patches[0]}" # First letter
deb_prefix="${deb_patches[0]:0:1}"
prefix="${DEB_URL}/${deb_prefix}/${deb_patches[0]}/debian/patches"
if [[ -n "$2" ]]; then if [[ -n "$2" ]]; then
exec 1> $2 exec 1> $2


@ -21,7 +21,7 @@ find . -type f | while read src; do
# Sanitize file name # Sanitize file name
filename=$(basename "$src" | tr '@' '_') filename=$(basename "$src" | tr '@' '_')
nameVersion="${filename%.tar.*}" nameVersion="${filename%.tar.*}"
name=$(echo "$nameVersion" | sed -e 's,-[[:digit:]].*,,' | sed -e 's,-opensource-src$,,') name=$(echo "$nameVersion" | sed -e 's,-[[:digit:]].*,,' | sed -e 's,-opensource-src$,,' | sed -e 's,-everywhere-src$,,')
version=$(echo "$nameVersion" | sed -e 's,^\([[:alpha:]][[:alnum:]]*-\)\+,,') version=$(echo "$nameVersion" | sed -e 's,^\([[:alpha:]][[:alnum:]]*-\)\+,,')
echo "$name,$version,$src,$filename" >>$csv echo "$name,$version,$src,$filename" >>$csv
done done


@ -1,9 +1,16 @@
#!/usr/bin/env bash #!/usr/bin/env bash
set -e set -e
# --print: avoid dependency on environment
optPrint=
if [ "$1" == "--print" ]; then
optPrint=true
shift
fi
if [ "$#" != 1 ] && [ "$#" != 2 ]; then if [ "$#" != 1 ] && [ "$#" != 2 ]; then
cat <<-EOF cat <<-EOF
Usage: $0 commit-spec [commit-spec] Usage: $0 [--print] commit-spec [commit-spec]
You need to be in a git-controlled nixpkgs tree. You need to be in a git-controlled nixpkgs tree.
The current state of the tree will be used if the second commit is missing. The current state of the tree will be used if the second commit is missing.
EOF EOF
@ -113,3 +120,8 @@ newPkgs "${tree[1]}" "${tree[2]}" '--argstr system "x86_64-linux"' > "$newlist"
sed -n 's/\([^. ]*\.\)*\([^. ]*\) .*$/\2/p' < "$newlist" \ sed -n 's/\([^. ]*\.\)*\([^. ]*\) .*$/\2/p' < "$newlist" \
| sort | uniq -c | sort | uniq -c
if [ -n "$optPrint" ]; then
echo
cat "$newlist"
fi


@ -9,8 +9,6 @@ let
modules = [ configuration ]; modules = [ configuration ];
}; };
inherit (eval) pkgs;
# This is for `nixos-rebuild build-vm'. # This is for `nixos-rebuild build-vm'.
vmConfig = (import ./lib/eval-config.nix { vmConfig = (import ./lib/eval-config.nix {
inherit system; inherit system;
@ -30,7 +28,7 @@ let
in in
{ {
inherit (eval) config options; inherit (eval) pkgs config options;
system = eval.config.system.build.toplevel; system = eval.config.system.build.toplevel;


@ -6,22 +6,52 @@ let
lib = pkgs.lib; lib = pkgs.lib;
# Remove invisible and internal options. # Remove invisible and internal options.
optionsList = lib.filter (opt: opt.visible && !opt.internal) (lib.optionAttrSetToDocList options); optionsListVisible = lib.filter (opt: opt.visible && !opt.internal) (lib.optionAttrSetToDocList options);
# Replace functions by the string <function> # Replace functions by the string <function>
substFunction = x: substFunction = x:
if builtins.isAttrs x then lib.mapAttrs (name: substFunction) x if builtins.isAttrs x then lib.mapAttrs (name: substFunction) x
else if builtins.isList x then map substFunction x else if builtins.isList x then map substFunction x
else if builtins.isFunction x then "<function>" else if lib.isFunction x then "<function>"
else x; else x;
# Clean up declaration sites to not refer to the NixOS source tree. # Generate DocBook documentation for a list of packages. This is
optionsList' = lib.flip map optionsList (opt: opt // { # what `relatedPackages` option of `mkOption` from
# ../../../lib/options.nix influences.
#
# Each element of `relatedPackages` can be either
# - a string: that will be interpreted as an attribute name from `pkgs`,
# - a list: that will be interpreted as an attribute path from `pkgs`,
# - an attrset: that can specify `name`, `path`, `package`, `comment`
# (either of `name`, `path` is required, the rest are optional).
genRelatedPackages = packages:
let
unpack = p: if lib.isString p then { name = p; }
else if lib.isList p then { path = p; }
else p;
describe = args:
let
name = args.name or (lib.concatStringsSep "." args.path);
path = args.path or [ args.name ];
package = args.package or (lib.attrByPath path (throw "Invalid package attribute path `${toString path}'") pkgs);
in "<listitem>"
+ "<para><literal>pkgs.${name} (${package.meta.name})</literal>"
+ lib.optionalString (!package.meta.evaluates) " <emphasis>[UNAVAILABLE]</emphasis>"
+ ": ${package.meta.description or "???"}.</para>"
+ lib.optionalString (args ? comment) "\n<para>${args.comment}</para>"
# Lots of `longDescription's break DocBook, so we just wrap them into <programlisting>
+ lib.optionalString (package.meta ? longDescription) "\n<programlisting>${package.meta.longDescription}</programlisting>"
+ "</listitem>";
in "<itemizedlist>${lib.concatStringsSep "\n" (map (p: describe (unpack p)) packages)}</itemizedlist>";
optionsListDesc = lib.flip map optionsListVisible (opt: opt // {
# Clean up declaration sites to not refer to the NixOS source tree.
declarations = map stripAnyPrefixes opt.declarations; declarations = map stripAnyPrefixes opt.declarations;
} }
// lib.optionalAttrs (opt ? example) { example = substFunction opt.example; } // lib.optionalAttrs (opt ? example) { example = substFunction opt.example; }
// lib.optionalAttrs (opt ? default) { default = substFunction opt.default; } // lib.optionalAttrs (opt ? default) { default = substFunction opt.default; }
// lib.optionalAttrs (opt ? type) { type = substFunction opt.type; }); // lib.optionalAttrs (opt ? type) { type = substFunction opt.type; }
// lib.optionalAttrs (opt ? relatedPackages) { relatedPackages = genRelatedPackages opt.relatedPackages; });
# We need to strip references to /nix/store/* from options, # We need to strip references to /nix/store/* from options,
# including any `extraSources` if some modules came from elsewhere, # including any `extraSources` if some modules came from elsewhere,
@ -32,8 +62,21 @@ let
prefixesToStrip = map (p: "${toString p}/") ([ ../../.. ] ++ extraSources); prefixesToStrip = map (p: "${toString p}/") ([ ../../.. ] ++ extraSources);
stripAnyPrefixes = lib.flip (lib.fold lib.removePrefix) prefixesToStrip; stripAnyPrefixes = lib.flip (lib.fold lib.removePrefix) prefixesToStrip;
# Custom "less" that pushes up all the things ending in ".enable*"
# and ".package*"
optionLess = a: b:
let
ise = lib.hasPrefix "enable";
isp = lib.hasPrefix "package";
cmp = lib.splitByAndCompare ise lib.compare
(lib.splitByAndCompare isp lib.compare lib.compare);
in lib.compareLists cmp a.loc b.loc < 0;
# Customly sort option list for the man page.
optionsList = lib.sort optionLess optionsListDesc;
# Convert the list of options into an XML file. # Convert the list of options into an XML file.
optionsXML = builtins.toFile "options.xml" (builtins.toXML optionsList'); optionsXML = builtins.toFile "options.xml" (builtins.toXML optionsList);
optionsDocBook = runCommand "options-db.xml" {} '' optionsDocBook = runCommand "options-db.xml" {} ''
optionsXML=${optionsXML} optionsXML=${optionsXML}
@ -191,7 +234,7 @@ in rec {
mkdir -p $dst mkdir -p $dst
cp ${builtins.toFile "options.json" (builtins.unsafeDiscardStringContext (builtins.toJSON cp ${builtins.toFile "options.json" (builtins.unsafeDiscardStringContext (builtins.toJSON
(builtins.listToAttrs (map (o: { name = o.name; value = removeAttrs o ["name" "visible" "internal"]; }) optionsList')))) (builtins.listToAttrs (map (o: { name = o.name; value = removeAttrs o ["name" "visible" "internal"]; }) optionsList))))
} $dst/options.json } $dst/options.json
mkdir -p $out/nix-support mkdir -p $out/nix-support


@ -70,9 +70,21 @@ $ ./result/bin/run-*-vm
</screen> </screen>
The VM does not have any data from your host system, so your existing The VM does not have any data from your host system, so your existing
user accounts and home directories will not be available. You can user accounts and home directories will not be available unless you
forward ports on the host to the guest. For instance, the following have set <literal>mutableUsers = false</literal>. Another way is to
will forward host port 2222 to guest port 22 (SSH): temporarily add the following to your configuration:
<screen>
users.extraUsers.your-user.initialPassword = "test"
</screen>
<emphasis>Important:</emphasis> delete the $hostname.qcow2 file if you
have started the virtual machine at least once without the right
users, otherwise the changes will not get picked up.
You can forward ports on the host to the guest. For
instance, the following will forward host port 2222 to guest port 22
(SSH):
<screen> <screen>
$ QEMU_NET_OPTS="hostfwd=tcp::2222-:22" ./result/bin/run-*-vm $ QEMU_NET_OPTS="hostfwd=tcp::2222-:22" ./result/bin/run-*-vm


@ -4,18 +4,18 @@
version="5.0" version="5.0"
xml:id="sec-instaling-virtualbox-guest"> xml:id="sec-instaling-virtualbox-guest">
<title>Installing in a Virtualbox guest</title> <title>Installing in a VirtualBox guest</title>
<para> <para>
Installing NixOS into a Virtualbox guest is convenient for users who want to Installing NixOS into a VirtualBox guest is convenient for users who want to
try NixOS without installing it on bare metal. If you want to use a pre-made try NixOS without installing it on bare metal. If you want to use a pre-made
Virtualbox appliance, it is available at <link VirtualBox appliance, it is available at <link
xlink:href="https://nixos.org/nixos/download.html">the downloads page</link>. xlink:href="https://nixos.org/nixos/download.html">the downloads page</link>.
If you want to set up a Virtualbox guest manually, follow these instructions: If you want to set up a VirtualBox guest manually, follow these instructions:
</para> </para>
<orderedlist> <orderedlist>
<listitem><para>Add a New Machine in Virtualbox with OS Type "Linux / Other <listitem><para>Add a New Machine in VirtualBox with OS Type "Linux / Other
Linux"</para></listitem> Linux"</para></listitem>
<listitem><para>Base Memory Size: 768 MB or higher.</para></listitem> <listitem><para>Base Memory Size: 768 MB or higher.</para></listitem>


@ -45,7 +45,10 @@ for a UEFI installation is by and large the same as a BIOS installation. The dif
using <command>ifconfig</command>.</para> using <command>ifconfig</command>.</para>
<para>To manually configure the network on the graphical installer, <para>To manually configure the network on the graphical installer,
first disable network-manager with first disable network-manager with
<command>systemctl stop network-manager</command>.</para></listitem> <command>systemctl stop network-manager</command>.</para>
<para>To manually configure the wifi on the minimal installer, run
<command>wpa_supplicant -B -i interface -c &lt;(wpa_passphrase 'SSID' 'key')</command>.</para></listitem>
<listitem><para>If you would like to continue the installation from a different <listitem><para>If you would like to continue the installation from a different
machine you need to activate the SSH daemon via <literal>systemctl start sshd</literal>. machine you need to activate the SSH daemon via <literal>systemctl start sshd</literal>.


@ -70,6 +70,15 @@
</para> </para>
</xsl:if> </xsl:if>
<xsl:if test="attr[@name = 'relatedPackages']">
<para>
<emphasis>Related packages:</emphasis>
<xsl:text> </xsl:text>
<xsl:value-of disable-output-escaping="yes"
select="attr[@name = 'relatedPackages']/string/@value" />
</para>
</xsl:if>
<xsl:if test="count(attr[@name = 'declarations']/list/*) != 0"> <xsl:if test="count(attr[@name = 'declarations']/list/*) != 0">
<para> <para>
<emphasis>Declared by:</emphasis> <emphasis>Declared by:</emphasis>


@ -38,6 +38,10 @@ has the following highlights: </para>
</itemizedlist> </itemizedlist>
</para> </para>
</listitem> </listitem>
<listitem>
<para>PHP now defaults to PHP 7.2</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
@ -88,6 +92,28 @@ following incompatible changes:</para>
<option>services.pgmanage</option>. <option>services.pgmanage</option>.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
Package attributes starting with a digit have been prefixed with an
underscore sign. This is to avoid quoting in the configuration and
other issues with command-line tools like <literal>nix-env</literal>.
The change affects the following packages:
<itemizedlist>
<listitem>
<para><literal>2048-in-terminal</literal> → <literal>_2048-in-terminal</literal></para>
</listitem>
<listitem>
<para><literal>90secondportraits</literal> → <literal>_90secondportraits</literal></para>
</listitem>
<listitem>
<para><literal>2bwm</literal> → <literal>_2bwm</literal></para>
</listitem>
<listitem>
<para><literal>389-ds-base</literal> → <literal>_389-ds-base</literal></para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem> <listitem>
<para> <para>
<emphasis role="strong"> <emphasis role="strong">
@ -113,7 +139,7 @@ following incompatible changes:</para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
<literal>cc-wrapper</literal>has been split in two; there is now also a <literal>bintools-wrapper</literal>. <literal>cc-wrapper</literal> has been split in two; there is now also a <literal>bintools-wrapper</literal>.
The most commonly used files in <filename>nix-support</filename> are now split between the two wrappers. The most commonly used files in <filename>nix-support</filename> are now split between the two wrappers.
Some commonly used ones, like <filename>nix-support/dynamic-linker</filename>, are duplicated for backwards compatibility, even though they rightly belong only in <literal>bintools-wrapper</literal>. Some commonly used ones, like <filename>nix-support/dynamic-linker</filename>, are duplicated for backwards compatibility, even though they rightly belong only in <literal>bintools-wrapper</literal>.
Other more obscure ones are just moved. Other more obscure ones are just moved.
@ -131,6 +157,11 @@ following incompatible changes:</para>
Other types of dependencies should be unaffected. Other types of dependencies should be unaffected.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<literal>lib.addPassthru drv passthru</literal> is removed. Use <literal>lib.extendDerivation true passthru drv</literal> instead. <emphasis role="strong">TODO: actually remove it before branching 18.03 off.</emphasis>
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The <literal>memcached</literal> service no longer accepts dynamic socket The <literal>memcached</literal> service no longer accepts dynamic socket
@ -139,6 +170,42 @@ following incompatible changes:</para>
will be accessible at <literal>/run/memcached/memcached.sock</literal>. will be accessible at <literal>/run/memcached/memcached.sock</literal>.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The <varname>hardware.amdHybridGraphics.disable</varname> option was removed for lack of a maintainer. If you still need this module, you may wish to include a copy of it from an older version of nixos in your imports.
</para>
</listitem>
<listitem>
<para>
The merging of config options for <varname>services.postfix.config</varname>
was buggy. Previously, if other options in the Postfix module like
<varname>services.postfix.useSrs</varname> were set and the user set config
options that were also set by such options, the resulting config wouldn't
include all options that were needed. They are now merged correctly. If
config options need to be overridden, <literal>lib.mkForce</literal> or
<literal>lib.mkOverride</literal> can be used.
</para>
</listitem>
<listitem>
<para>
The following changes apply if the <literal>stateVersion</literal> is changed to 18.03 or higher.
For <literal>stateVersion = "17.09"</literal> or lower the old behavior is preserved.
</para>
<itemizedlist>
<listitem>
<para>
<literal>matrix-synapse</literal> uses postgresql by default instead of sqlite.
Migration instructions can be found <link xlink:href="https://github.com/matrix-org/synapse/blob/master/docs/postgres.rst#porting-from-sqlite"> here </link>.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
The <literal>jid</literal> package has been removed, due to the maintenance
overhead of a Go package with non-versioned dependencies.
</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>


@ -3,7 +3,7 @@
let pkgs = import ../.. { inherit system config; }; in let pkgs = import ../.. { inherit system config; }; in
with pkgs.lib; with pkgs.lib;
with import ../lib/qemu-flags.nix; with import ../lib/qemu-flags.nix { inherit pkgs; };
rec { rec {


@ -13,10 +13,16 @@
# grafted in the file system at path `target'. # grafted in the file system at path `target'.
, contents ? [] , contents ? []
, # Whether the disk should be partitioned (with a single partition , # Type of partition table to use; either "legacy", "efi", or "none".
# containing the root filesystem) or contain the root filesystem # For "efi" images, the GPT partition table is used and a mandatory ESP
# directly. # partition of reasonable size is created in addition to the root partition.
partitioned ? true # If `installBootLoader` is true, GRUB will be installed in EFI mode.
# For "legacy", the msdos partition table is used and a single large root
# partition is created. If `installBootLoader` is true, GRUB will be
# installed in legacy mode.
# For "none", no partition table is created. Enabling `installBootLoader`
# most likely fails as GRUB will probably refuse to install.
partitionTableType ? "legacy"
# Whether to invoke switch-to-configuration boot during image creation # Whether to invoke switch-to-configuration boot during image creation
, installBootLoader ? true , installBootLoader ? true
@ -37,6 +43,10 @@
format ? "raw" format ? "raw"
}: }:
assert partitionTableType == "legacy" || partitionTableType == "efi" || partitionTableType == "none";
# We use -E offset=X below, which is only supported by e2fsprogs
assert partitionTableType != "none" -> fsType == "ext4";
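# A minimal call sketch (argument names as in this file; the call site itself is
# an assumption, not from this commit) for building a GPT/EFI image instead of
# the default "legacy" msdos layout:
#   import ./make-disk-image.nix {
#     inherit pkgs lib config;
#     partitionTableType = "efi";
#     installBootLoader = true;
#     diskSize = 2048;
#     format = "qcow2";
#   }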
with lib; with lib;
let format' = format; in let let format' = format; in let
@ -51,6 +61,27 @@ let format' = format; in let
raw = "img"; raw = "img";
}.${format}; }.${format};
rootPartition = { # switch-case
legacy = "1";
efi = "2";
}.${partitionTableType};
partitionDiskScript = { # switch-case
legacy = ''
parted --script $diskImage -- \
mklabel msdos \
mkpart primary ext4 1MiB -1
'';
efi = ''
parted --script $diskImage -- \
mklabel gpt \
mkpart ESP fat32 8MiB 256MiB \
set 1 boot on \
mkpart primary ext4 256MiB -1
'';
none = "";
}.${partitionTableType};
nixpkgs = cleanSource pkgs.path; nixpkgs = cleanSource pkgs.path;
channelSources = pkgs.runCommand "nixos-${config.system.nixosVersion}" {} '' channelSources = pkgs.runCommand "nixos-${config.system.nixosVersion}" {} ''
@ -79,20 +110,31 @@ let format' = format; in let
targets = map (x: x.target) contents; targets = map (x: x.target) contents;
prepareImage = '' prepareImage = ''
export PATH=${makeSearchPathOutput "bin" "bin" prepareImageInputs} export PATH=${makeBinPath prepareImageInputs}
# Yes, mkfs.ext4 takes different units in different contexts. Fun.
sectorsToKilobytes() {
echo $(( ( "$1" * 512 ) / 1024 ))
}
sectorsToBytes() {
echo $(( "$1" * 512 ))
}
mkdir $out mkdir $out
diskImage=nixos.raw diskImage=nixos.raw
truncate -s ${toString diskSize}M $diskImage truncate -s ${toString diskSize}M $diskImage
${if partitioned then '' ${partitionDiskScript}
parted --script $diskImage -- mklabel msdos mkpart primary ext4 1M -1s
offset=$((2048*512))
'' else ''
offset=0
''}
mkfs.${fsType} -F -L nixos -E offset=$offset $diskImage ${if partitionTableType != "none" then ''
# Get start & length of the root partition in sectors to $START and $SECTORS.
eval $(partx $diskImage -o START,SECTORS --nr ${rootPartition} --pairs)
mkfs.${fsType} -F -L nixos $diskImage -E offset=$(sectorsToBytes $START) $(sectorsToKilobytes $SECTORS)K
'' else ''
mkfs.${fsType} -F -L nixos $diskImage
''}
root="$PWD/root" root="$PWD/root"
mkdir -p $root mkdir -p $root
@ -133,12 +175,12 @@ let format' = format; in let
find $root/nix/store -mindepth 1 -maxdepth 1 -type f -o -type d | xargs chmod -R a-w find $root/nix/store -mindepth 1 -maxdepth 1 -type f -o -type d | xargs chmod -R a-w
echo "copying staging root to image..." echo "copying staging root to image..."
cptofs ${optionalString partitioned "-P 1"} -t ${fsType} -i $diskImage $root/* / cptofs -p ${optionalString (partitionTableType != "none") "-P ${rootPartition}"} -t ${fsType} -i $diskImage $root/* /
''; '';
in pkgs.vmTools.runInLinuxVM ( in pkgs.vmTools.runInLinuxVM (
pkgs.runCommand name pkgs.runCommand name
{ preVM = prepareImage; { preVM = prepareImage;
buildInputs = with pkgs; [ utillinux e2fsprogs ]; buildInputs = with pkgs; [ utillinux e2fsprogs dosfstools ];
exportReferencesGraph = [ "closure" metaClosure ]; exportReferencesGraph = [ "closure" metaClosure ];
postVM = '' postVM = ''
${if format == "raw" then '' ${if format == "raw" then ''
@ -152,11 +194,7 @@ in pkgs.vmTools.runInLinuxVM (
memSize = 1024; memSize = 1024;
} }
'' ''
${if partitioned then '' rootDisk=${if partitionTableType != "none" then "/dev/vda${rootPartition}" else "/dev/vda"}
rootDisk=/dev/vda1
'' else ''
rootDisk=/dev/vda
''}
# Some tools assume these exist # Some tools assume these exist
ln -s vda /dev/xvda ln -s vda /dev/xvda
@ -166,6 +204,14 @@ in pkgs.vmTools.runInLinuxVM (
mkdir $mountPoint mkdir $mountPoint
mount $rootDisk $mountPoint mount $rootDisk $mountPoint
# Create the ESP and mount it. Unlike e2fsprogs, mkfs.vfat doesn't support an
# '-E offset=X' option, so we can't do this outside the VM.
${optionalString (partitionTableType == "efi") ''
mkdir -p /mnt/boot
mkfs.vfat -n ESP /dev/vda1
mount /dev/vda1 /mnt/boot
''}
# Install a configuration.nix # Install a configuration.nix
mkdir -p /mnt/etc/nixos mkdir -p /mnt/etc/nixos
${optionalString (configFile != null) '' ${optionalString (configFile != null) ''


@ -1,4 +1,5 @@
# QEMU flags shared between various Nix expressions. # QEMU flags shared between various Nix expressions.
{ pkgs }:
{ {
@ -7,4 +8,14 @@
"-net vde,vlan=${toString nic},sock=$QEMU_VDE_SOCKET_${toString net}" "-net vde,vlan=${toString nic},sock=$QEMU_VDE_SOCKET_${toString net}"
]; ];
qemuSerialDevice = if pkgs.stdenv.isi686 || pkgs.stdenv.isx86_64 then "ttyS0"
else if pkgs.stdenv.isArm || pkgs.stdenv.isAarch64 then "ttyAMA0"
else throw "Unknown QEMU serial device for system '${pkgs.stdenv.system}'";
qemuBinary = qemuPkg: {
"i686-linux" = "${qemuPkg}/bin/qemu-kvm";
"x86_64-linux" = "${qemuPkg}/bin/qemu-kvm -cpu kvm64";
"armv7l-linux" = "${qemuPkg}/bin/qemu-system-arm -enable-kvm -machine virt -cpu host";
"aarch64-linux" = "${qemuPkg}/bin/qemu-system-aarch64 -enable-kvm -machine virt,gic-version=host -cpu host";
}.${pkgs.stdenv.system} or (throw "Unknown QEMU binary for '${pkgs.stdenv.system}'");
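# Illustrative use (an assumption, not from this commit): a VM runner might start
#   "${qemuBinary pkgs.qemu_test} -nographic -serial stdio"
# and pass "console=${qemuSerialDevice}" on the guest kernel command line so the
# console appears on the emulated serial port.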
} }


@ -29,7 +29,7 @@ rec {
cp ${./test-driver/Logger.pm} $libDir/Logger.pm cp ${./test-driver/Logger.pm} $libDir/Logger.pm
wrapProgram $out/bin/nixos-test-driver \ wrapProgram $out/bin/nixos-test-driver \
--prefix PATH : "${lib.makeBinPath [ qemu vde2 netpbm coreutils ]}" \ --prefix PATH : "${lib.makeBinPath [ qemu_test vde2 netpbm coreutils ]}" \
--prefix PERL5LIB : "${with perlPackages; lib.makePerlPath [ TermReadLineGnu XMLWriter IOTty FileSlurp ]}:$out/lib/perl5/site_perl" --prefix PERL5LIB : "${with perlPackages; lib.makePerlPath [ TermReadLineGnu XMLWriter IOTty FileSlurp ]}:$out/lib/perl5/site_perl"
''; '';
}; };
@ -85,7 +85,7 @@ rec {
testScript' = testScript' =
# Call the test script with the computed nodes. # Call the test script with the computed nodes.
if builtins.isFunction testScript if lib.isFunction testScript
then testScript { inherit nodes; } then testScript { inherit nodes; }
else testScript; else testScript;


@ -46,7 +46,7 @@ in {
inherit lib config; inherit lib config;
inherit (cfg) contents format name; inherit (cfg) contents format name;
pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package
partitioned = config.ec2.hvm; partitionTableType = if config.ec2.hvm then "legacy" else "none";
diskSize = cfg.sizeMB; diskSize = cfg.sizeMB;
configFile = pkgs.writeText "configuration.nix" configFile = pkgs.writeText "configuration.nix"
'' ''


@ -69,9 +69,6 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
# Leftover for old setups, should be set by nixos-generate-config now
powerManagement.cpuFreqGovernor = mkDefault "ondemand";
systemd.targets.post-resume = { systemd.targets.post-resume = {
description = "Post-Resume Actions"; description = "Post-Resume Actions";
requires = [ "post-resume.service" ]; requires = [ "post-resume.service" ];


@ -36,7 +36,7 @@ in
default = {}; default = {};
description = '' description = ''
A set of environment variables used in the global environment. A set of environment variables used in the global environment.
These variables will be set on shell initialisation. These variables will be set on shell initialisation (e.g. in /etc/profile).
The value of each variable can be either a string or a list of The value of each variable can be either a string or a list of
strings. The latter is concatenated, interspersed with colon strings. The latter is concatenated, interspersed with colon
characters. characters.


@ -27,6 +27,7 @@ in
boot.loader.grub.enable = false; boot.loader.grub.enable = false;
boot.loader.generic-extlinux-compatible.enable = true; boot.loader.generic-extlinux-compatible.enable = true;
boot.consoleLogLevel = lib.mkDefault 7;
boot.kernelPackages = pkgs.linuxPackages_latest; boot.kernelPackages = pkgs.linuxPackages_latest;
# The serial ports listed here are: # The serial ports listed here are:
@ -42,8 +43,17 @@ in
populateBootCommands = let populateBootCommands = let
configTxt = pkgs.writeText "config.txt" '' configTxt = pkgs.writeText "config.txt" ''
kernel=u-boot-rpi3.bin kernel=u-boot-rpi3.bin
# Boot in 64-bit mode.
arm_control=0x200 arm_control=0x200
# U-Boot used to need this to work, regardless of whether UART is actually used or not.
# TODO: check when/if this can be removed.
enable_uart=1 enable_uart=1
# Prevent the firmware from smashing the framebuffer setup done by the mainline kernel
# when attempting to show low-voltage or overtemperature warnings.
avoid_warnings=1
''; '';
in '' in ''
(cd ${pkgs.raspberrypifw}/share/raspberrypi/boot && cp bootcode.bin fixup*.dat start*.elf $NIX_BUILD_TOP/boot/) (cd ${pkgs.raspberrypifw}/share/raspberrypi/boot && cp bootcode.bin fixup*.dat start*.elf $NIX_BUILD_TOP/boot/)


@ -27,6 +27,7 @@ in
boot.loader.grub.enable = false; boot.loader.grub.enable = false;
boot.loader.generic-extlinux-compatible.enable = true; boot.loader.generic-extlinux-compatible.enable = true;
boot.consoleLogLevel = lib.mkDefault 7;
boot.kernelPackages = pkgs.linuxPackages_latest; boot.kernelPackages = pkgs.linuxPackages_latest;
# The serial ports listed here are: # The serial ports listed here are:
# - ttyS0: for Tegra (Jetson TK1) # - ttyS0: for Tegra (Jetson TK1)
@ -42,11 +43,18 @@ in
sdImage = { sdImage = {
populateBootCommands = let populateBootCommands = let
configTxt = pkgs.writeText "config.txt" '' configTxt = pkgs.writeText "config.txt" ''
# Prevent the firmware from smashing the framebuffer setup done by the mainline kernel
# when attempting to show low-voltage or overtemperature warnings.
avoid_warnings=1
[pi2] [pi2]
kernel=u-boot-rpi2.bin kernel=u-boot-rpi2.bin
[pi3] [pi3]
kernel=u-boot-rpi3.bin kernel=u-boot-rpi3.bin
# U-Boot used to need this to work, regardless of whether UART is actually used or not.
# TODO: check when/if this can be removed.
enable_uart=1 enable_uart=1
''; '';
in '' in ''


@ -27,6 +27,7 @@ in
boot.loader.grub.enable = false; boot.loader.grub.enable = false;
boot.loader.generic-extlinux-compatible.enable = true; boot.loader.generic-extlinux-compatible.enable = true;
boot.consoleLogLevel = lib.mkDefault 7;
boot.kernelPackages = pkgs.linuxPackages_rpi; boot.kernelPackages = pkgs.linuxPackages_rpi;
# FIXME: this probably should be in installation-device.nix # FIXME: this probably should be in installation-device.nix


@ -301,6 +301,9 @@
pykms = 282; pykms = 282;
kodi = 283; kodi = 283;
restya-board = 284; restya-board = 284;
mighttpd2 = 285;
hass = 286;
monero = 287;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399! # When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -570,6 +573,9 @@
pykms = 282; pykms = 282;
kodi = 283; kodi = 283;
restya-board = 284; restya-board = 284;
mighttpd2 = 285;
hass = 286;
monero = 287;
# When adding a gid, make sure it doesn't match an existing # When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal # uid. Users and groups with the same name should have equal


@ -3,11 +3,13 @@
with lib; with lib;
let let
cfg = config.nixpkgs;
isConfig = x: isConfig = x:
builtins.isAttrs x || builtins.isFunction x; builtins.isAttrs x || lib.isFunction x;
optCall = f: x: optCall = f: x:
if builtins.isFunction f if lib.isFunction f
then f x then f x
else f; else f;
@ -38,16 +40,55 @@ let
overlayType = mkOptionType { overlayType = mkOptionType {
name = "nixpkgs-overlay"; name = "nixpkgs-overlay";
description = "nixpkgs overlay"; description = "nixpkgs overlay";
check = builtins.isFunction; check = lib.isFunction;
merge = lib.mergeOneOption; merge = lib.mergeOneOption;
}; };
_pkgs = import ../../.. config.nixpkgs; pkgsType = mkOptionType {
name = "nixpkgs";
description = "An evaluation of Nixpkgs; the top level attribute set of packages";
check = builtins.isAttrs;
};
in in
{ {
options.nixpkgs = { options.nixpkgs = {
pkgs = mkOption {
defaultText = literalExample
''import "''${nixos}/.." {
inherit (config.nixpkgs) config overlays system;
}
'';
default = import ../../.. { inherit (cfg) config overlays system; };
type = pkgsType;
example = literalExample ''import <nixpkgs> {}'';
description = ''
This is the evaluation of Nixpkgs that will be provided to
all NixOS modules. Defining this option has the effect of
ignoring the other options that would otherwise be used to
evaluate Nixpkgs, because those are arguments to the default
value. The default value imports the Nixpkgs source files
relative to the location of this NixOS module, because
NixOS and Nixpkgs are distributed together for consistency,
so the <code>nixos</code> in the default value is in fact a
relative path. The <code>config</code>, <code>overlays</code>
and <code>system</code> come from this option's siblings.
This option can be used by applications like NixOps to increase
the performance of evaluation, or to create packages that depend
on a container that should be built with the exact same evaluation
of Nixpkgs, for example. Applications like this should set
their default value using <code>lib.mkDefault</code>, so
user-provided configuration can override it without using
<code>lib</code>.
Note that using a distinct version of Nixpkgs with NixOS may
be an unexpected source of problems. Use this option with care.
'';
};
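# Illustrative configuration.nix usage (an assumption, not from this commit):
#   { nixpkgs.pkgs = import /path/to/nixpkgs {
#       config = { allowUnfree = true; };
#       overlays = [ myOverlay ];
#     };
#   }
# Here config and overlays are passed straight to the import, so the sibling
# nixpkgs.config/nixpkgs.overlays options are ignored, as the description says.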
config = mkOption { config = mkOption {
default = {}; default = {};
example = literalExample example = literalExample
@ -59,6 +100,8 @@ in
The configuration of the Nix Packages collection. (For The configuration of the Nix Packages collection. (For
details, see the Nixpkgs documentation.) It allows you to set details, see the Nixpkgs documentation.) It allows you to set
package configuration options. package configuration options.
Ignored when <code>nixpkgs.pkgs</code> is set.
''; '';
}; };
@ -69,7 +112,6 @@ in
[ (self: super: { [ (self: super: {
openssh = super.openssh.override { openssh = super.openssh.override {
hpnSupport = true; hpnSupport = true;
withKerberos = true;
kerberos = self.libkrb5; kerberos = self.libkrb5;
}; };
}; };
@ -83,6 +125,8 @@ in
takes as an argument the <emphasis>original</emphasis> Nixpkgs. takes as an argument the <emphasis>original</emphasis> Nixpkgs.
The first argument should be used for finding dependencies, and The first argument should be used for finding dependencies, and
the second should be used for overriding recipes. the second should be used for overriding recipes.
Ignored when <code>nixpkgs.pkgs</code> is set.
''; '';
}; };
@ -94,14 +138,16 @@ in
If unset, it defaults to the platform type of your host system. If unset, it defaults to the platform type of your host system.
Specifying this option is useful when doing distributed Specifying this option is useful when doing distributed
multi-platform deployment, or when building virtual machines. multi-platform deployment, or when building virtual machines.
Ignored when <code>nixpkgs.pkgs</code> is set.
''; '';
}; };
}; };
config = { config = {
_module.args = { _module.args = {
pkgs = _pkgs; pkgs = cfg.pkgs;
pkgs_i686 = _pkgs.pkgsi686Linux; pkgs_i686 = cfg.pkgs.pkgsi686Linux;
}; };
}; };
} }


@ -84,6 +84,7 @@
./programs/info.nix ./programs/info.nix
./programs/java.nix ./programs/java.nix
./programs/kbdlight.nix ./programs/kbdlight.nix
./programs/less.nix
./programs/light.nix ./programs/light.nix
./programs/man.nix ./programs/man.nix
./programs/mosh.nix ./programs/mosh.nix
@ -110,6 +111,7 @@
./programs/wireshark.nix ./programs/wireshark.nix
./programs/xfs_quota.nix ./programs/xfs_quota.nix
./programs/xonsh.nix ./programs/xonsh.nix
./programs/yabar.nix
./programs/zsh/oh-my-zsh.nix ./programs/zsh/oh-my-zsh.nix
./programs/zsh/zsh.nix ./programs/zsh/zsh.nix
./programs/zsh/zsh-syntax-highlighting.nix ./programs/zsh/zsh-syntax-highlighting.nix
@ -200,6 +202,7 @@
./services/desktops/dleyna-server.nix ./services/desktops/dleyna-server.nix
./services/desktops/geoclue2.nix ./services/desktops/geoclue2.nix
./services/desktops/gnome3/at-spi2-core.nix ./services/desktops/gnome3/at-spi2-core.nix
./services/desktops/gnome3/chrome-gnome-shell.nix
./services/desktops/gnome3/evolution-data-server.nix ./services/desktops/gnome3/evolution-data-server.nix
./services/desktops/gnome3/gnome-disks.nix ./services/desktops/gnome3/gnome-disks.nix
./services/desktops/gnome3/gnome-documents.nix ./services/desktops/gnome3/gnome-documents.nix
@ -225,7 +228,6 @@
./services/games/terraria.nix ./services/games/terraria.nix
./services/hardware/acpid.nix ./services/hardware/acpid.nix
./services/hardware/actkbd.nix ./services/hardware/actkbd.nix
./services/hardware/amd-hybrid-graphics.nix
./services/hardware/bluetooth.nix ./services/hardware/bluetooth.nix
./services/hardware/brltty.nix ./services/hardware/brltty.nix
./services/hardware/freefall.nix ./services/hardware/freefall.nix
@ -314,6 +316,7 @@
./services/misc/gogs.nix ./services/misc/gogs.nix
./services/misc/gollum.nix ./services/misc/gollum.nix
./services/misc/gpsd.nix ./services/misc/gpsd.nix
./services/misc/home-assistant.nix
./services/misc/ihaskell.nix ./services/misc/ihaskell.nix
./services/misc/irkerd.nix ./services/misc/irkerd.nix
./services/misc/jackett.nix ./services/misc/jackett.nix
@ -415,7 +418,8 @@
./services/network-filesystems/ipfs.nix ./services/network-filesystems/ipfs.nix
./services/network-filesystems/netatalk.nix ./services/network-filesystems/netatalk.nix
./services/network-filesystems/nfsd.nix ./services/network-filesystems/nfsd.nix
./services/network-filesystems/openafs-client/default.nix ./services/network-filesystems/openafs/client.nix
./services/network-filesystems/openafs/server.nix
./services/network-filesystems/rsyncd.nix ./services/network-filesystems/rsyncd.nix
./services/network-filesystems/samba.nix ./services/network-filesystems/samba.nix
./services/network-filesystems/tahoe.nix ./services/network-filesystems/tahoe.nix
@ -424,6 +428,7 @@
./services/network-filesystems/yandex-disk.nix ./services/network-filesystems/yandex-disk.nix
./services/network-filesystems/xtreemfs.nix ./services/network-filesystems/xtreemfs.nix
./services/networking/amuled.nix ./services/networking/amuled.nix
./services/networking/aria2.nix
./services/networking/asterisk.nix ./services/networking/asterisk.nix
./services/networking/atftpd.nix ./services/networking/atftpd.nix
./services/networking/avahi-daemon.nix ./services/networking/avahi-daemon.nix
@ -488,6 +493,7 @@
./services/networking/minidlna.nix ./services/networking/minidlna.nix
./services/networking/miniupnpd.nix ./services/networking/miniupnpd.nix
./services/networking/mosquitto.nix ./services/networking/mosquitto.nix
./services/networking/monero.nix
./services/networking/miredo.nix ./services/networking/miredo.nix
./services/networking/mstpd.nix ./services/networking/mstpd.nix
./services/networking/murmur.nix ./services/networking/murmur.nix
@ -525,6 +531,7 @@
./services/networking/redsocks.nix ./services/networking/redsocks.nix
./services/networking/resilio.nix ./services/networking/resilio.nix
./services/networking/rpcbind.nix ./services/networking/rpcbind.nix
./services/networking/rxe.nix
./services/networking/sabnzbd.nix ./services/networking/sabnzbd.nix
./services/networking/searx.nix ./services/networking/searx.nix
./services/networking/seeks.nix ./services/networking/seeks.nix
@ -540,6 +547,7 @@
./services/networking/ssh/lshd.nix ./services/networking/ssh/lshd.nix
./services/networking/ssh/sshd.nix ./services/networking/ssh/sshd.nix
./services/networking/strongswan.nix ./services/networking/strongswan.nix
./services/networking/stunnel.nix
./services/networking/supplicant.nix ./services/networking/supplicant.nix
./services/networking/supybot.nix ./services/networking/supybot.nix
./services/networking/syncthing.nix ./services/networking/syncthing.nix
@ -634,6 +642,7 @@
./services/web-servers/lighttpd/default.nix ./services/web-servers/lighttpd/default.nix
./services/web-servers/lighttpd/gitweb.nix ./services/web-servers/lighttpd/gitweb.nix
./services/web-servers/lighttpd/inginious.nix ./services/web-servers/lighttpd/inginious.nix
./services/web-servers/mighttpd2.nix
./services/web-servers/minio.nix ./services/web-servers/minio.nix
./services/web-servers/nginx/default.nix ./services/web-servers/nginx/default.nix
./services/web-servers/phpfpm/default.nix ./services/web-servers/phpfpm/default.nix


@ -17,7 +17,7 @@ let
# you should use files). # you should use files).
moduleFiles = moduleFiles =
# FIXME: use typeOf (Nix 1.6.1). # FIXME: use typeOf (Nix 1.6.1).
filter (x: !isAttrs x && !builtins.isFunction x) modules; filter (x: !isAttrs x && !lib.isFunction x) modules;
# Partition module files because between NixOS and non-NixOS files. NixOS # Partition module files because between NixOS and non-NixOS files. NixOS
# files may change if the repository is updated. # files may change if the repository is updated.

View File

@ -16,6 +16,7 @@ with lib;
To grant access to a user, it must be part of adbusers group: To grant access to a user, it must be part of adbusers group:
<code>users.extraUsers.alice.extraGroups = ["adbusers"];</code> <code>users.extraUsers.alice.extraGroups = ["adbusers"];</code>
''; '';
relatedPackages = [ ["androidenv" "platformTools"] ];
}; };
}; };
}; };


@ -0,0 +1,118 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.programs.less;
configFile = ''
#command
${concatStringsSep "\n"
(mapAttrsToList (command: action: "${command} ${action}") cfg.commands)
}
${if cfg.clearDefaultCommands then "#stop" else ""}
#line-edit
${concatStringsSep "\n"
(mapAttrsToList (command: action: "${command} ${action}") cfg.lineEditingKeys)
}
#env
${concatStringsSep "\n"
(mapAttrsToList (variable: values: "${variable}=${values}") cfg.envVariables)
}
'';
lessKey = pkgs.runCommand "lesskey"
{ src = pkgs.writeText "lessconfig" configFile; }
"${pkgs.less}/bin/lesskey -o $out $src";
in
{
options = {
programs.less = {
enable = mkEnableOption "less";
commands = mkOption {
type = types.attrsOf types.str;
default = {};
example = {
"h" = "noaction 5\e(";
"l" = "noaction 5\e)";
};
description = "Defines new command keys.";
};
clearDefaultCommands = mkOption {
type = types.bool;
default = false;
description = ''
Clear all default commands.
You should remember to set the quit key.
Otherwise you will not be able to leave less without killing it.
'';
};
lineEditingKeys = mkOption {
type = types.attrsOf types.str;
default = {};
example = {
"\e" = "abort";
};
description = "Defines new line-editing keys.";
};
envVariables = mkOption {
type = types.attrsOf types.str;
default = {};
example = {
LESS = "--quit-if-one-screen";
};
description = "Defines environment variables.";
};
lessopen = mkOption {
type = types.nullOr types.str;
default = "|${pkgs.lesspipe}/bin/lesspipe.sh %s";
description = ''
Before less opens a file, it first gives your input preprocessor a chance to modify the way the contents of the file are displayed.
'';
};
lessclose = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
When less closes a file opened in such a way, it will call another program, called the input postprocessor, which may perform any desired clean-up action (such as deleting the replacement file created by LESSOPEN).
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.less ];
environment.variables = {
"LESSKEY_SYSTEM" = toString lessKey;
} // optionalAttrs (cfg.lessopen != null) {
"LESSOPEN" = cfg.lessopen;
} // optionalAttrs (cfg.lessclose != null) {
"LESSCLOSE" = cfg.lessclose;
};
warnings = optional (
cfg.clearDefaultCommands && (all (x: x != "quit") (attrValues cfg.commands))
) ''
config.programs.less.clearDefaultCommands clears all default commands of less but there is no alternative binding for exiting.
Consider adding a binding for 'quit'.
'';
};
meta.maintainers = with maintainers; [ johnazoidberg ];
}
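For reference, a minimal sketch of how the new programs.less module might be used from a machine configuration; the values below are illustrative, not defaults taken from this diff (the double backslash keeps a literal \e for lesskey):

{ ... }:
{
  programs.less = {
    enable = true;
    # Illustrative values only.
    envVariables.LESS = "--quit-if-one-screen";
    commands."h" = "noaction 5\\e(";
  };
}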

View File

@ -26,8 +26,9 @@ let
# Ensure privacy for newly created home directories. # Ensure privacy for newly created home directories.
UMASK 077 UMASK 077
# Uncomment this to allow non-root users to change their account # Uncomment this and install chfn SUID to allow non-root
#information. This should be made configurable. # users to change their account GECOS information.
# This should be made configurable.
#CHFN_RESTRICT frwh #CHFN_RESTRICT frwh
''; '';
@ -103,13 +104,12 @@ in
security.wrappers = { security.wrappers = {
su.source = "${pkgs.shadow.su}/bin/su"; su.source = "${pkgs.shadow.su}/bin/su";
chfn.source = "${pkgs.shadow.out}/bin/chfn"; sg.source = "${pkgs.shadow.out}/bin/sg";
newgrp.source = "${pkgs.shadow.out}/bin/newgrp";
newuidmap.source = "${pkgs.shadow.out}/bin/newuidmap"; newuidmap.source = "${pkgs.shadow.out}/bin/newuidmap";
newgidmap.source = "${pkgs.shadow.out}/bin/newgidmap"; newgidmap.source = "${pkgs.shadow.out}/bin/newgidmap";
} // (if config.users.mutableUsers then { } // (if config.users.mutableUsers then {
passwd.source = "${pkgs.shadow.out}/bin/passwd"; passwd.source = "${pkgs.shadow.out}/bin/passwd";
sg.source = "${pkgs.shadow.out}/bin/sg";
newgrp.source = "${pkgs.shadow.out}/bin/newgrp";
} else {}); } else {});
}; };
} }

View File

@ -61,7 +61,12 @@ in {
options = { options = {
programs.tmux = { programs.tmux = {
enable = mkEnableOption "<command>tmux</command> - a <command>screen</command> replacement."; enable = mkOption {
type = types.bool;
default = false;
description = "Whenever to configure <command>tmux</command> system-wide.";
relatedPackages = [ "tmux" ];
};
aggressiveResize = mkOption { aggressiveResize = mkOption {
default = false; default = false;

View File

@ -0,0 +1,149 @@
{ lib, pkgs, config, ... }:
with lib;
let
cfg = config.programs.yabar;
mapExtra = v: lib.concatStringsSep "\n" (mapAttrsToList (
key: val: "${key} = ${if (isString val) then "\"${val}\"" else "${builtins.toString val}"};"
) v);
listKeys = r: concatStringsSep "," (map (n: "\"${n}\"") (attrNames r));
configFile = let
bars = mapAttrsToList (
name: cfg: ''
${name}: {
font: "${cfg.font}";
position: "${cfg.position}";
${mapExtra cfg.extra}
block-list: [${listKeys cfg.indicators}]
${concatStringsSep "\n" (mapAttrsToList (
name: cfg: ''
${name}: {
exec: "${cfg.exec}";
align: "${cfg.align}";
${mapExtra cfg.extra}
};
''
) cfg.indicators)}
};
''
) cfg.bars;
in pkgs.writeText "yabar.conf" ''
bar-list = [${listKeys cfg.bars}];
${concatStringsSep "\n" bars}
'';
in
{
options.programs.yabar = {
enable = mkEnableOption "yabar";
package = mkOption {
default = pkgs.yabar;
example = literalExample "pkgs.yabar-unstable";
type = types.package;
description = ''
The package which contains the `yabar` binary.
Nixpkgs provides the `yabar` and `yabar-unstable`
derivations since 18.03, so it's possible to choose.
'';
};
bars = mkOption {
default = {};
type = types.attrsOf(types.submodule {
options = {
font = mkOption {
default = "sans bold 9";
example = "Droid Sans, FontAwesome Bold 9";
type = types.string;
description = ''
The font that will be used to draw the status bar.
'';
};
position = mkOption {
default = "top";
example = "bottom";
type = types.enum [ "top" "bottom" ];
description = ''
The position where the bar will be rendered.
'';
};
extra = mkOption {
default = {};
type = types.attrsOf types.string;
description = ''
An attribute set which contains further attributes of a bar.
'';
};
indicators = mkOption {
default = {};
type = types.attrsOf(types.submodule {
options.exec = mkOption {
example = "YABAR_DATE";
type = types.string;
description = ''
The type of the indicator to be executed.
'';
};
options.align = mkOption {
default = "left";
example = "right";
type = types.enum [ "left" "center" "right" ];
description = ''
                Where to align the indicator: at the left, center, or right of the bar.
'';
};
options.extra = mkOption {
default = {};
type = types.attrsOf (types.either types.string types.int);
description = ''
                An attribute set which contains further attributes of an indicator.
'';
};
});
description = ''
Indicators that should be rendered by yabar.
'';
};
};
});
description = ''
List of bars that should be rendered by yabar.
'';
};
};
config = mkIf cfg.enable {
systemd.user.services.yabar = {
description = "yabar service";
wantedBy = [ "graphical-session.target" ];
partOf = [ "graphical-session.target" ];
script = ''
${cfg.package}/bin/yabar -c ${configFile}
'';
serviceConfig.Restart = "always";
};
};
}
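A minimal sketch of the new programs.yabar module in use; the bar name is illustrative and the YABAR_DATE indicator follows the example embedded in the module above:

{ ... }:
{
  programs.yabar = {
    enable = true;
    bars.top = {
      position = "top";
      indicators.date = {
        exec = "YABAR_DATE";
        align = "right";
      };
    };
  };
}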

View File

@ -48,6 +48,15 @@ in
Name of the theme to be used by oh-my-zsh. Name of the theme to be used by oh-my-zsh.
''; '';
}; };
cacheDir = mkOption {
default = "$HOME/.cache/oh-my-zsh";
type = types.str;
description = ''
Cache directory to be used by `oh-my-zsh`.
Without this option it would default to the read-only nix store.
'';
};
}; };
}; };
@ -74,6 +83,13 @@ in
"ZSH_THEME=\"${cfg.theme}\"" "ZSH_THEME=\"${cfg.theme}\""
} }
${optionalString (cfg.cacheDir != null) ''
if [[ ! -d "${cfg.cacheDir}" ]]; then
mkdir -p "${cfg.cacheDir}"
fi
ZSH_CACHE_DIR=${cfg.cacheDir}
''}
source $ZSH/oh-my-zsh.sh source $ZSH/oh-my-zsh.sh
''; '';
}; };
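A sketch of the new cacheDir option in use; the path is just the documented default spelled out, and any writable directory should work:

{ ... }:
{
  programs.zsh.ohMyZsh = {
    enable = true;
    # Redirect oh-my-zsh's cache away from the read-only Nix store.
    cacheDir = "$HOME/.cache/oh-my-zsh";
  };
}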

View File

@ -36,8 +36,9 @@ in
shellAliases = mkOption { shellAliases = mkOption {
default = config.environment.shellAliases; default = config.environment.shellAliases;
description = '' description = ''
Set of aliases for zsh shell. See <option>environment.shellAliases</option> Set of aliases for zsh shell. Overrides the default value taken from
for an option format description. <option>environment.shellAliases</option>.
See <option>environment.shellAliases</option> for an option format description.
''; '';
type = types.attrs; # types.attrsOf types.stringOrPath; type = types.attrs; # types.attrsOf types.stringOrPath;
}; };

View File

@ -210,6 +210,7 @@ with lib;
"Set the option `services.xserver.displayManager.sddm.package' instead.") "Set the option `services.xserver.displayManager.sddm.package' instead.")
(mkRemovedOptionModule [ "fonts" "fontconfig" "forceAutohint" ] "") (mkRemovedOptionModule [ "fonts" "fontconfig" "forceAutohint" ] "")
(mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "") (mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "")
(mkRemovedOptionModule [ "virtualisation" "xen" "qemu" ] "You don't need this option anymore, it will work without it.")
# ZSH # ZSH
(mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ]) (mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ])
@ -220,5 +221,8 @@ with lib;
(mkRenamedOptionModule [ "programs" "zsh" "oh-my-zsh" "theme" ] [ "programs" "zsh" "ohMyZsh" "theme" ]) (mkRenamedOptionModule [ "programs" "zsh" "oh-my-zsh" "theme" ] [ "programs" "zsh" "ohMyZsh" "theme" ])
(mkRenamedOptionModule [ "programs" "zsh" "oh-my-zsh" "custom" ] [ "programs" "zsh" "ohMyZsh" "custom" ]) (mkRenamedOptionModule [ "programs" "zsh" "oh-my-zsh" "custom" ] [ "programs" "zsh" "ohMyZsh" "custom" ])
(mkRenamedOptionModule [ "programs" "zsh" "oh-my-zsh" "plugins" ] [ "programs" "zsh" "ohMyZsh" "plugins" ]) (mkRenamedOptionModule [ "programs" "zsh" "oh-my-zsh" "plugins" ] [ "programs" "zsh" "ohMyZsh" "plugins" ])
# Xen
(mkRenamedOptionModule [ "virtualisation" "xen" "qemu-package" ] [ "virtualisation" "xen" "package-qemu" ])
]; ];
} }

View File

@ -6,10 +6,11 @@ let
cfg = config.security.acme; cfg = config.security.acme;
certOpts = { ... }: { certOpts = { name, ... }: {
options = { options = {
webroot = mkOption { webroot = mkOption {
type = types.str; type = types.str;
example = "/var/lib/acme/acme-challenges";
description = '' description = ''
Where the webroot of the HTTP vhost is located. Where the webroot of the HTTP vhost is located.
<filename>.well-known/acme-challenge/</filename> directory <filename>.well-known/acme-challenge/</filename> directory
@ -20,8 +21,8 @@ let
}; };
domain = mkOption { domain = mkOption {
type = types.nullOr types.str; type = types.str;
default = null; default = name;
description = "Domain to fetch certificate for (defaults to the entry name)"; description = "Domain to fetch certificate for (defaults to the entry name)";
}; };
@ -48,7 +49,7 @@ let
default = false; default = false;
description = '' description = ''
Give read permissions to the specified group Give read permissions to the specified group
(<option>security.acme.group</option>) to read SSL private certificates. (<option>security.acme.cert.&lt;name&gt;.group</option>) to read SSL private certificates.
''; '';
}; };
@ -87,7 +88,7 @@ let
} }
''; '';
description = '' description = ''
Extra domain names for which certificates are to be issued, with their A list of extra domain names, which are included in the one certificate to be issued, with their
own server roots if needed. own server roots if needed.
''; '';
}; };
@ -139,6 +140,14 @@ in
''; '';
}; };
tosHash = mkOption {
type = types.string;
default = "cc88d8d9517f490191401e7b54e9ffd12a2b9082ec7a1d4cec6101f9f1647e7b";
description = ''
        SHA256 hash of the Terms of Service document. This changes once in a while.
'';
};
production = mkOption { production = mkOption {
type = types.bool; type = types.bool;
default = true; default = true;
@ -185,10 +194,9 @@ in
servicesLists = mapAttrsToList certToServices cfg.certs; servicesLists = mapAttrsToList certToServices cfg.certs;
certToServices = cert: data: certToServices = cert: data:
let let
domain = if data.domain != null then data.domain else cert;
cpath = "${cfg.directory}/${cert}"; cpath = "${cfg.directory}/${cert}";
rights = if data.allowKeysForGroup then "750" else "700"; rights = if data.allowKeysForGroup then "750" else "700";
cmdline = [ "-v" "-d" domain "--default_root" data.webroot "--valid_min" cfg.validMin ] cmdline = [ "-v" "-d" data.domain "--default_root" data.webroot "--valid_min" cfg.validMin "--tos_sha256" cfg.tosHash ]
++ optionals (data.email != null) [ "--email" data.email ] ++ optionals (data.email != null) [ "--email" data.email ]
++ concatMap (p: [ "-f" p ]) data.plugins ++ concatMap (p: [ "-f" p ]) data.plugins
++ concatLists (mapAttrsToList (name: root: [ "-d" (if root == null then name else "${name}:${root}")]) data.extraDomains) ++ concatLists (mapAttrsToList (name: root: [ "-d" (if root == null then name else "${name}:${root}")]) data.extraDomains)
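Given the new domain default and tosHash option above, a certificate entry might now be declared as sketched below, assuming the usual security.acme.certs.&lt;name&gt; attributes; the host name, webroot, and email are illustrative:

{ ... }:
{
  security.acme.certs."example.org" = {
    # domain now defaults to the attribute name ("example.org").
    webroot = "/var/lib/acme/acme-challenges";
    email = "admin@example.org";
  };
}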

View File

@ -46,6 +46,18 @@ let
''; '';
}; };
googleAuthenticator = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
          If set, users who have enabled Google Authenticator (i.e. created
          <filename>~/.google_authenticator</filename>) will be required
          to provide a Google Authenticator token to log in.
'';
};
};
usbAuth = mkOption { usbAuth = mkOption {
default = config.security.pam.usb.enable; default = config.security.pam.usb.enable;
type = types.bool; type = types.bool;
@ -284,7 +296,12 @@ let
# prompts the user for password so we run it once with 'required' at an # prompts the user for password so we run it once with 'required' at an
# earlier point and it will run again with 'sufficient' further down. # earlier point and it will run again with 'sufficient' further down.
# We use try_first_pass the second time to avoid prompting password twice # We use try_first_pass the second time to avoid prompting password twice
(optionalString (cfg.unixAuth && (config.security.pam.enableEcryptfs || cfg.pamMount || cfg.enableKwallet || cfg.enableGnomeKeyring)) '' (optionalString (cfg.unixAuth &&
(config.security.pam.enableEcryptfs
|| cfg.pamMount
|| cfg.enableKwallet
|| cfg.enableGnomeKeyring
|| cfg.googleAuthenticator.enable)) ''
auth required pam_unix.so ${optionalString cfg.allowNullPassword "nullok"} likeauth auth required pam_unix.so ${optionalString cfg.allowNullPassword "nullok"} likeauth
${optionalString config.security.pam.enableEcryptfs ${optionalString config.security.pam.enableEcryptfs
"auth optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so unwrap"} "auth optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so unwrap"}
@ -295,6 +312,8 @@ let
" kwalletd=${pkgs.libsForQt5.kwallet.bin}/bin/kwalletd5")} " kwalletd=${pkgs.libsForQt5.kwallet.bin}/bin/kwalletd5")}
${optionalString cfg.enableGnomeKeyring ${optionalString cfg.enableGnomeKeyring
("auth optional ${pkgs.gnome3.gnome_keyring}/lib/security/pam_gnome_keyring.so")} ("auth optional ${pkgs.gnome3.gnome_keyring}/lib/security/pam_gnome_keyring.so")}
${optionalString cfg.googleAuthenticator.enable
"auth required ${pkgs.googleAuthenticator}/lib/security/pam_google_authenticator.so no_increment_hotp"}
'') + '' '') + ''
${optionalString cfg.unixAuth ${optionalString cfg.unixAuth
"auth sufficient pam_unix.so ${optionalString cfg.allowNullPassword "nullok"} likeauth try_first_pass"} "auth sufficient pam_unix.so ${optionalString cfg.allowNullPassword "nullok"} likeauth try_first_pass"}

View File

@ -8,6 +8,22 @@ let
inherit (pkgs) sudo; inherit (pkgs) sudo;
toUserString = user: if (isInt user) then "#${toString user}" else "${user}";
toGroupString = group: if (isInt group) then "%#${toString group}" else "%${group}";
toCommandOptionsString = options:
"${concatStringsSep ":" options}${optionalString (length options != 0) ":"} ";
toCommandsString = commands:
concatStringsSep ", " (
map (command:
if (isString command) then
command
else
"${toCommandOptionsString command.options}${command.command}"
) commands
);
in in
{ {
@ -47,6 +63,97 @@ in
''; '';
}; };
security.sudo.extraRules = mkOption {
description = ''
      Define specific rules to be included in the <filename>sudoers</filename> file.
'';
default = [];
example = [
# Allow execution of any command by all users in group sudo,
# requiring a password.
{ groups = [ "sudo" ]; commands = [ "ALL" ]; }
# Allow execution of "/home/root/secret.sh" by user `backup`, `database`
# and the group with GID `1006` without a password.
{ users = [ "backup" ]; groups = [ 1006 ];
commands = [ { command = "/home/root/secret.sh"; options = [ "SETENV" "NOPASSWD" ]; } ]; }
# Allow all users of group `bar` to run two executables as user `foo`
# with arguments being pre-set.
{ groups = [ "bar" ]; runAs = "foo";
commands =
[ "/home/baz/cmd1.sh hello-sudo"
{ command = ''/home/baz/cmd2.sh ""''; options = [ "SETENV" ]; } ]; }
];
type = with types; listOf (submodule {
options = {
users = mkOption {
type = with types; listOf (either string int);
description = ''
The usernames / UIDs this rule should apply for.
'';
default = [];
};
groups = mkOption {
type = with types; listOf (either string int);
description = ''
The groups / GIDs this rule should apply for.
'';
default = [];
};
host = mkOption {
type = types.string;
default = "ALL";
description = ''
              For which host this rule should apply.
'';
};
runAs = mkOption {
type = with types; string;
default = "ALL:ALL";
description = ''
Under which user/group the specified command is allowed to run.
A user can be specified using just the username: <code>"foo"</code>.
It is also possible to specify a user/group combination using <code>"foo:bar"</code>
or to only allow running as a specific group with <code>":bar"</code>.
'';
};
commands = mkOption {
description = ''
The commands for which the rule should apply.
'';
type = with types; listOf (either string (submodule {
options = {
command = mkOption {
type = with types; string;
description = ''
                    A command: either just a path to a binary (allowing any arguments),
                    the full command with its arguments pre-set, or the command with <code>""</code> as the argument,
                    which allows no arguments to the command at all.
'';
};
options = mkOption {
type = with types; listOf (enum [ "NOPASSWD" "PASSWD" "NOEXEC" "EXEC" "SETENV" "NOSETENV" "LOG_INPUT" "NOLOG_INPUT" "LOG_OUTPUT" "NOLOG_OUTPUT" ]);
description = ''
Options for running the command. Refer to the <a href="https://www.sudo.ws/man/1.7.10/sudoers.man.html">sudo manual</a>.
'';
default = [];
};
};
}));
};
};
});
};
security.sudo.extraConfig = mkOption { security.sudo.extraConfig = mkOption {
type = types.lines; type = types.lines;
default = ""; default = "";
@ -61,10 +168,16 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.sudo.extraRules = [
{ groups = [ "wheel" ];
commands = [ { command = "ALL"; options = (if cfg.wheelNeedsPassword then [ "SETENV" ] else [ "NOPASSWD" "SETENV" ]); } ];
}
];
security.sudo.configFile = security.sudo.configFile =
'' ''
# Don't edit this file. Set the NixOS options security.sudo.configFile # Don't edit this file. Set the NixOS options security.sudo.configFile
# or security.sudo.extraConfig instead. # or security.sudo.extraRules instead.
# Keep SSH_AUTH_SOCK so that pam_ssh_agent_auth.so can do its magic. # Keep SSH_AUTH_SOCK so that pam_ssh_agent_auth.so can do its magic.
Defaults env_keep+=SSH_AUTH_SOCK Defaults env_keep+=SSH_AUTH_SOCK
@ -72,8 +185,18 @@ in
# "root" is allowed to do anything. # "root" is allowed to do anything.
root ALL=(ALL:ALL) SETENV: ALL root ALL=(ALL:ALL) SETENV: ALL
# Users in the "wheel" group can do anything. # extraRules
%wheel ALL=(ALL:ALL) ${if cfg.wheelNeedsPassword then "" else "NOPASSWD: ALL, "}SETENV: ALL ${concatStringsSep "\n" (
lists.flatten (
map (
rule: if (length rule.commands != 0) then [
(map (user: "${toUserString user} ${rule.host}=(${rule.runAs}) ${toCommandsString rule.commands}") rule.users)
(map (group: "${toGroupString group} ${rule.host}=(${rule.runAs}) ${toCommandsString rule.commands}") rule.groups)
] else []
) cfg.extraRules
)
)}
${cfg.extraConfig} ${cfg.extraConfig}
''; '';
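For orientation, the default wheel rule added above is rendered by toGroupString/toCommandsString into a sudoers line roughly as shown in the comment below (a sketch, not literal output from this diff):

{ ... }:
{
  # With wheelNeedsPassword = false, the default extraRules entry above
  # renders to something like:
  #   %wheel ALL=(ALL:ALL) NOPASSWD:SETENV: ALL
  security.sudo.wheelNeedsPassword = false;
}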

View File

@ -6,14 +6,20 @@ let
cfg = config.services.slurm; cfg = config.services.slurm;
# configuration file can be generated by http://slurm.schedmd.com/configurator.html # configuration file can be generated by http://slurm.schedmd.com/configurator.html
configFile = pkgs.writeText "slurm.conf" configFile = pkgs.writeText "slurm.conf"
'' ''
${optionalString (cfg.controlMachine != null) ''controlMachine=${cfg.controlMachine}''} ${optionalString (cfg.controlMachine != null) ''controlMachine=${cfg.controlMachine}''}
${optionalString (cfg.controlAddr != null) ''controlAddr=${cfg.controlAddr}''} ${optionalString (cfg.controlAddr != null) ''controlAddr=${cfg.controlAddr}''}
${optionalString (cfg.nodeName != null) ''nodeName=${cfg.nodeName}''} ${optionalString (cfg.nodeName != null) ''nodeName=${cfg.nodeName}''}
${optionalString (cfg.partitionName != null) ''partitionName=${cfg.partitionName}''} ${optionalString (cfg.partitionName != null) ''partitionName=${cfg.partitionName}''}
PlugStackConfig=${plugStackConfig}
${cfg.extraConfig} ${cfg.extraConfig}
''; '';
plugStackConfig = pkgs.writeText "plugstack.conf"
''
${optionalString cfg.enableSrunX11 ''optional ${pkgs.slurm-spank-x11}/lib/x11.so''}
'';
in in
{ {
@ -28,7 +34,7 @@ in
enable = mkEnableOption "slurm control daemon"; enable = mkEnableOption "slurm control daemon";
}; };
client = { client = {
enable = mkEnableOption "slurm rlient daemon"; enable = mkEnableOption "slurm rlient daemon";
@ -86,8 +92,19 @@ in
''; '';
}; };
enableSrunX11 = mkOption {
default = false;
type = types.bool;
description = ''
        If enabled, srun will accept the option "--x11" to allow for X11 forwarding
from within an interactive session or a batch job. This activates the
slurm-spank-x11 module. Note that this requires 'services.openssh.forwardX11'
to be enabled on the compute nodes.
'';
};
extraConfig = mkOption { extraConfig = mkOption {
default = ""; default = "";
type = types.lines; type = types.lines;
description = '' description = ''
Extra configuration options that will be added verbatim at Extra configuration options that will be added verbatim at
@ -134,7 +151,8 @@ in
environment.systemPackages = [ wrappedSlurm ]; environment.systemPackages = [ wrappedSlurm ];
systemd.services.slurmd = mkIf (cfg.client.enable) { systemd.services.slurmd = mkIf (cfg.client.enable) {
path = with pkgs; [ wrappedSlurm coreutils ]; path = with pkgs; [ wrappedSlurm coreutils ]
++ lib.optional cfg.enableSrunX11 slurm-spank-x11;
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
after = [ "systemd-tmpfiles-clean.service" ]; after = [ "systemd-tmpfiles-clean.service" ];
@ -152,8 +170,9 @@ in
}; };
systemd.services.slurmctld = mkIf (cfg.server.enable) { systemd.services.slurmctld = mkIf (cfg.server.enable) {
path = with pkgs; [ wrappedSlurm munge coreutils ]; path = with pkgs; [ wrappedSlurm munge coreutils ]
++ lib.optional cfg.enableSrunX11 slurm-spank-x11;
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
after = [ "network.target" "munged.service" ]; after = [ "network.target" "munged.service" ];
requires = [ "munged.service" ]; requires = [ "munged.service" ];

View File

@ -289,10 +289,10 @@ in
# Create initial databases # Create initial databases
if ! test -e "${cfg.dataDir}/${database.name}"; then if ! test -e "${cfg.dataDir}/${database.name}"; then
echo "Creating initial database: ${database.name}" echo "Creating initial database: ${database.name}"
( echo "create database ${database.name};" ( echo "create database `${database.name}`;"
${optionalString (database ? "schema") '' ${optionalString (database ? "schema") ''
echo "use ${database.name};" echo "use `${database.name}`;"
if [ -f "${database.schema}" ] if [ -f "${database.schema}" ]
then then

View File

@ -0,0 +1,27 @@
# Chrome GNOME Shell native host connector.
{ config, lib, pkgs, ... }:
with lib;
{
###### interface
options = {
services.gnome3.chrome-gnome-shell.enable = mkEnableOption ''
Chrome GNOME Shell native host connector, a DBus service
      that allows installing GNOME Shell extensions from a web browser.
'';
};
###### implementation
config = mkIf config.services.gnome3.chrome-gnome-shell.enable {
environment.etc = {
"chromium/native-messaging-hosts/org.gnome.chrome_gnome_shell.json".source = "${pkgs.chrome-gnome-shell}/etc/chromium/native-messaging-hosts/org.gnome.chrome_gnome_shell.json";
"opt/chrome/native-messaging-hosts/org.gnome.chrome_gnome_shell.json".source = "${pkgs.chrome-gnome-shell}/etc/opt/chrome/native-messaging-hosts/org.gnome.chrome_gnome_shell.json";
};
environment.systemPackages = [ pkgs.chrome-gnome-shell ];
services.dbus.packages = [ pkgs.chrome-gnome-shell ];
};
}
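Usage of the new module is a single switch; a sketch:

{ ... }:
{
  # Lets Chrome/Chromium install GNOME Shell extensions via the connector.
  services.gnome3.chrome-gnome-shell.enable = true;
}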

View File

@ -0,0 +1,13 @@
# Copied from systemd 203.
ACTION=="remove", GOTO="net_name_slot_end"
SUBSYSTEM!="net", GOTO="net_name_slot_end"
NAME!="", GOTO="net_name_slot_end"
IMPORT{cmdline}="net.ifnames"
ENV{net.ifnames}=="0", GOTO="net_name_slot_end"
NAME=="", ENV{ID_NET_NAME_ONBOARD}!="", NAME="$env{ID_NET_NAME_ONBOARD}"
NAME=="", ENV{ID_NET_NAME_SLOT}!="", NAME="$env{ID_NET_NAME_SLOT}"
NAME=="", ENV{ID_NET_NAME_PATH}!="", NAME="$env{ID_NET_NAME_PATH}"
LABEL="net_name_slot_end"

View File

@ -31,7 +31,7 @@ let
'' ''
fn=$out/${name} fn=$out/${name}
echo "event=${handler.event}" > $fn echo "event=${handler.event}" > $fn
echo "action=${pkgs.writeScript "${name}.sh" (concatStringsSep "\n" [ "#! ${pkgs.bash}/bin/sh" handler.action ])}" >> $fn echo "action=${pkgs.writeShellScriptBin "${name}.sh" handler.action }/bin/${name}.sh '%e'" >> $fn
''; '';
in concatStringsSep "\n" (mapAttrsToList f (canonicalHandlers // config.services.acpid.handlers)) in concatStringsSep "\n" (mapAttrsToList f (canonicalHandlers // config.services.acpid.handlers))
} }
@ -69,11 +69,33 @@ in
}; };
}); });
description = "Event handlers."; description = ''
Event handlers.
<note><para>
          A handler can be a single command.
</para></note>
'';
default = {}; default = {};
example = { mute = { event = "button/mute.*"; action = "amixer set Master toggle"; }; }; example = {
ac-power = {
event = "ac_adapter/*";
action = ''
vals=($1) # space separated string to array of multiple values
case ''${vals[3]} in
00000000)
echo unplugged >> /tmp/acpi.log
;;
00000001)
echo plugged in >> /tmp/acpi.log
;;
*)
echo unknown >> /tmp/acpi.log
;;
esac
'';
};
};
}; };
powerEventCommands = mkOption { powerEventCommands = mkOption {

View File

@ -1,46 +0,0 @@
{ config, pkgs, lib, ... }:
{
###### interface
options = {
hardware.amdHybridGraphics.disable = lib.mkOption {
default = false;
type = lib.types.bool;
description = ''
Completely disable the AMD graphics card and use the
integrated graphics processor instead.
'';
};
};
###### implementation
config = lib.mkIf config.hardware.amdHybridGraphics.disable {
systemd.services."amd-hybrid-graphics" = {
path = [ pkgs.bash ];
description = "Disable AMD Card";
after = [ "sys-kernel-debug.mount" ];
before = [ "systemd-vconsole-setup.service" "display-manager.service" ];
requires = [ "sys-kernel-debug.mount" "vgaswitcheroo.path" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.bash}/bin/sh -c 'echo -e \"IGD\\nOFF\" > /sys/kernel/debug/vgaswitcheroo/switch'";
ExecStop = "${pkgs.bash}/bin/sh -c 'echo ON >/sys/kernel/debug/vgaswitcheroo/switch'";
};
};
systemd.paths."vgaswitcheroo" = {
pathConfig = {
PathExists = "/sys/kernel/debug/vgaswitcheroo/switch";
Unit = "amd-hybrid-graphics.service";
};
wantedBy = ["multi-user.target"];
};
};
}

View File

@ -23,7 +23,7 @@ let kernel = config.boot.kernelPackages; in
###### implementation ###### implementation
config = lib.mkIf config.hardware.nvidiaOptimus.disable { config = lib.mkIf config.hardware.nvidiaOptimus.disable {
boot.blacklistedKernelModules = ["nouveau" "nvidia" "nvidiafb"]; boot.blacklistedKernelModules = ["nouveau" "nvidia" "nvidiafb" "nvidia-drm"];
boot.kernelModules = [ "bbswitch" ]; boot.kernelModules = [ "bbswitch" ];
boot.extraModulePackages = [ kernel.bbswitch ]; boot.extraModulePackages = [ kernel.bbswitch ];

View File

@ -119,7 +119,7 @@ let
fi fi
${optionalString config.networking.usePredictableInterfaceNames '' ${optionalString config.networking.usePredictableInterfaceNames ''
cp ${udev}/lib/udev/rules.d/80-net-setup-link.rules $out/80-net-setup-link.rules cp ${./80-net-setup-link.rules} $out/80-net-setup-link.rules
''} ''}
# If auto-configuration is disabled, then remove # If auto-configuration is disabled, then remove

View File

@ -104,7 +104,7 @@ let
}; };
mailboxConfig = mailbox: '' mailboxConfig = mailbox: ''
mailbox ${mailbox.name} { mailbox "${mailbox.name}" {
auto = ${toString mailbox.auto} auto = ${toString mailbox.auto}
'' + optionalString (mailbox.specialUse != null) '' '' + optionalString (mailbox.specialUse != null) ''
special_use = \${toString mailbox.specialUse} special_use = \${toString mailbox.specialUse}
@ -113,7 +113,7 @@ let
mailboxes = { lib, pkgs, ... }: { mailboxes = { lib, pkgs, ... }: {
options = { options = {
name = mkOption { name = mkOption {
type = types.str; type = types.strMatching ''[^"]+'';
example = "Spam"; example = "Spam";
description = "The name of the mailbox."; description = "The name of the mailbox.";
}; };

View File

@ -15,20 +15,18 @@ let
haveVirtual = cfg.virtual != ""; haveVirtual = cfg.virtual != "";
clientAccess = clientAccess =
if (cfg.dnsBlacklistOverrides != "") optional (cfg.dnsBlacklistOverrides != "")
then [ "check_client_access hash:/etc/postfix/client_access" ] "check_client_access hash:/etc/postfix/client_access";
else [];
dnsBl = dnsBl =
if (cfg.dnsBlacklists != []) optionals (cfg.dnsBlacklists != [])
then [ (concatStringsSep ", " (map (s: "reject_rbl_client " + s) cfg.dnsBlacklists)) ] (map (s: "reject_rbl_client " + s) cfg.dnsBlacklists);
else [];
clientRestrictions = concatStringsSep ", " (clientAccess ++ dnsBl); clientRestrictions = concatStringsSep ", " (clientAccess ++ dnsBl);
mainCf = let mainCf = let
escape = replaceStrings ["$"] ["$$"]; escape = replaceStrings ["$"] ["$$"];
mkList = items: "\n " + concatStringsSep "\n " items; mkList = items: "\n " + concatStringsSep ",\n " items;
mkVal = value: mkVal = value:
if isList value then mkList value if isList value then mkList value
else " " + (if value == true then "yes" else " " + (if value == true then "yes"
@ -36,72 +34,9 @@ let
else toString value); else toString value);
mkEntry = name: value: "${escape name} =${mkVal value}"; mkEntry = name: value: "${escape name} =${mkVal value}";
in in
concatStringsSep "\n" (mapAttrsToList mkEntry (recursiveUpdate defaultConf cfg.config)) concatStringsSep "\n" (mapAttrsToList mkEntry cfg.config)
+ "\n" + cfg.extraConfig; + "\n" + cfg.extraConfig;
defaultConf = {
compatibility_level = "9999";
mail_owner = user;
default_privs = "nobody";
# NixOS specific locations
data_directory = "/var/lib/postfix/data";
queue_directory = "/var/lib/postfix/queue";
# Default location of everything in package
meta_directory = "${pkgs.postfix}/etc/postfix";
command_directory = "${pkgs.postfix}/bin";
sample_directory = "/etc/postfix";
newaliases_path = "${pkgs.postfix}/bin/newaliases";
mailq_path = "${pkgs.postfix}/bin/mailq";
readme_directory = false;
sendmail_path = "${pkgs.postfix}/bin/sendmail";
daemon_directory = "${pkgs.postfix}/libexec/postfix";
manpage_directory = "${pkgs.postfix}/share/man";
html_directory = "${pkgs.postfix}/share/postfix/doc/html";
shlib_directory = false;
relayhost = if cfg.relayHost == "" then "" else
if cfg.lookupMX
then "${cfg.relayHost}:${toString cfg.relayPort}"
else "[${cfg.relayHost}]:${toString cfg.relayPort}";
mail_spool_directory = "/var/spool/mail/";
setgid_group = setgidGroup;
}
// optionalAttrs config.networking.enableIPv6 { inet_protocols = "all"; }
// optionalAttrs (cfg.networks != null) { mynetworks = cfg.networks; }
// optionalAttrs (cfg.networksStyle != "") { mynetworks_style = cfg.networksStyle; }
// optionalAttrs (cfg.hostname != "") { myhostname = cfg.hostname; }
// optionalAttrs (cfg.domain != "") { mydomain = cfg.domain; }
// optionalAttrs (cfg.origin != "") { myorigin = cfg.origin; }
// optionalAttrs (cfg.destination != null) { mydestination = cfg.destination; }
// optionalAttrs (cfg.relayDomains != null) { relay_domains = cfg.relayDomains; }
// optionalAttrs (cfg.recipientDelimiter != "") { recipient_delimiter = cfg.recipientDelimiter; }
// optionalAttrs haveAliases { alias_maps = "${cfg.aliasMapType}:/etc/postfix/aliases"; }
// optionalAttrs haveTransport { transport_maps = "hash:/etc/postfix/transport"; }
// optionalAttrs haveVirtual { virtual_alias_maps = "${cfg.virtualMapType}:/etc/postfix/virtual"; }
// optionalAttrs (cfg.dnsBlacklists != []) { smtpd_client_restrictions = clientRestrictions; }
// optionalAttrs cfg.useSrs {
sender_canonical_maps = "tcp:127.0.0.1:10001";
sender_canonical_classes = "envelope_sender";
recipient_canonical_maps = "tcp:127.0.0.1:10002";
recipient_canonical_classes= "envelope_recipient";
}
// optionalAttrs cfg.enableHeaderChecks { header_checks = "regexp:/etc/postfix/header_checks"; }
// optionalAttrs (cfg.sslCert != "") {
smtp_tls_CAfile = cfg.sslCACert;
smtp_tls_cert_file = cfg.sslCert;
smtp_tls_key_file = cfg.sslKey;
smtp_use_tls = true;
smtpd_tls_CAfile = cfg.sslCACert;
smtpd_tls_cert_file = cfg.sslCert;
smtpd_tls_key_file = cfg.sslKey;
smtpd_use_tls = true;
};
masterCfOptions = { options, config, name, ... }: { masterCfOptions = { options, config, name, ... }: {
options = { options = {
name = mkOption { name = mkOption {
@ -507,7 +442,6 @@ in
config = mkOption { config = mkOption {
type = with types; attrsOf (either bool (either str (listOf str))); type = with types; attrsOf (either bool (either str (listOf str)));
default = defaultConf;
description = '' description = ''
The main.cf configuration file as key value set. The main.cf configuration file as key value set.
''; '';
@ -749,6 +683,67 @@ in
''; '';
}; };
services.postfix.config = (mapAttrs (_: v: mkDefault v) {
compatibility_level = "9999";
mail_owner = cfg.user;
default_privs = "nobody";
# NixOS specific locations
data_directory = "/var/lib/postfix/data";
queue_directory = "/var/lib/postfix/queue";
# Default location of everything in package
meta_directory = "${pkgs.postfix}/etc/postfix";
command_directory = "${pkgs.postfix}/bin";
sample_directory = "/etc/postfix";
newaliases_path = "${pkgs.postfix}/bin/newaliases";
mailq_path = "${pkgs.postfix}/bin/mailq";
readme_directory = false;
sendmail_path = "${pkgs.postfix}/bin/sendmail";
daemon_directory = "${pkgs.postfix}/libexec/postfix";
manpage_directory = "${pkgs.postfix}/share/man";
html_directory = "${pkgs.postfix}/share/postfix/doc/html";
shlib_directory = false;
mail_spool_directory = "/var/spool/mail/";
setgid_group = cfg.setgidGroup;
})
// optionalAttrs (cfg.relayHost != "") { relayhost = if cfg.lookupMX
then "${cfg.relayHost}:${toString cfg.relayPort}"
else "[${cfg.relayHost}]:${toString cfg.relayPort}"; }
// optionalAttrs config.networking.enableIPv6 { inet_protocols = mkDefault "all"; }
// optionalAttrs (cfg.networks != null) { mynetworks = cfg.networks; }
// optionalAttrs (cfg.networksStyle != "") { mynetworks_style = cfg.networksStyle; }
// optionalAttrs (cfg.hostname != "") { myhostname = cfg.hostname; }
// optionalAttrs (cfg.domain != "") { mydomain = cfg.domain; }
// optionalAttrs (cfg.origin != "") { myorigin = cfg.origin; }
// optionalAttrs (cfg.destination != null) { mydestination = cfg.destination; }
// optionalAttrs (cfg.relayDomains != null) { relay_domains = cfg.relayDomains; }
// optionalAttrs (cfg.recipientDelimiter != "") { recipient_delimiter = cfg.recipientDelimiter; }
// optionalAttrs haveAliases { alias_maps = [ "${cfg.aliasMapType}:/etc/postfix/aliases" ]; }
// optionalAttrs haveTransport { transport_maps = [ "hash:/etc/postfix/transport" ]; }
// optionalAttrs haveVirtual { virtual_alias_maps = [ "${cfg.virtualMapType}:/etc/postfix/virtual" ]; }
// optionalAttrs (cfg.dnsBlacklists != []) { smtpd_client_restrictions = clientRestrictions; }
// optionalAttrs cfg.useSrs {
sender_canonical_maps = [ "tcp:127.0.0.1:10001" ];
sender_canonical_classes = [ "envelope_sender" ];
recipient_canonical_maps = [ "tcp:127.0.0.1:10002" ];
recipient_canonical_classes = [ "envelope_recipient" ];
}
// optionalAttrs cfg.enableHeaderChecks { header_checks = [ "regexp:/etc/postfix/header_checks" ]; }
// optionalAttrs (cfg.sslCert != "") {
smtp_tls_CAfile = cfg.sslCACert;
smtp_tls_cert_file = cfg.sslCert;
smtp_tls_key_file = cfg.sslKey;
smtp_use_tls = true;
smtpd_tls_CAfile = cfg.sslCACert;
smtpd_tls_cert_file = cfg.sslCert;
smtpd_tls_key_file = cfg.sslKey;
smtpd_use_tls = true;
};
services.postfix.masterConfig = { services.postfix.masterConfig = {
smtp_inet = { smtp_inet = {
name = "smtp"; name = "smtp";

View File

@ -1,14 +1,152 @@
{ config, lib, pkgs, ... }: { config, options, pkgs, lib, ... }:
with lib; with lib;
let let
cfg = config.services.rspamd; cfg = config.services.rspamd;
opts = options.services.rspamd;
mkBindSockets = socks: concatStringsSep "\n" (map (each: " bind_socket = \"${each}\"") socks); bindSocketOpts = {options, config, ... }: {
options = {
socket = mkOption {
type = types.str;
example = "localhost:11333";
description = ''
          Socket for this worker to listen on, in a format acceptable to rspamd.
'';
};
mode = mkOption {
type = types.str;
default = "0644";
description = "Mode to set on unix socket";
};
owner = mkOption {
type = types.str;
default = "${cfg.user}";
description = "Owner to set on unix socket";
};
group = mkOption {
type = types.str;
default = "${cfg.group}";
description = "Group to set on unix socket";
};
rawEntry = mkOption {
type = types.str;
internal = true;
};
};
config.rawEntry = let
maybeOption = option:
optionalString options.${option}.isDefined " ${option}=${config.${option}}";
in
if (!(hasPrefix "/" config.socket)) then "${config.socket}"
else "${config.socket}${maybeOption "mode"}${maybeOption "owner"}${maybeOption "group"}";
};
rspamdConfFile = pkgs.writeText "rspamd.conf" workerOpts = { name, ... }: {
options = {
enable = mkOption {
type = types.nullOr types.bool;
default = null;
description = "Whether to run the rspamd worker.";
};
name = mkOption {
type = types.nullOr types.str;
default = name;
description = "Name of the worker";
};
type = mkOption {
type = types.nullOr (types.enum [
"normal" "controller" "fuzzy_storage" "proxy" "lua"
]);
description = "The type of this worker";
};
bindSockets = mkOption {
type = types.listOf (types.either types.str (types.submodule bindSocketOpts));
default = [];
description = ''
          List of sockets to listen on, in a format acceptable to rspamd
'';
example = [{
socket = "/run/rspamd.sock";
mode = "0666";
owner = "rspamd";
} "*:11333"];
apply = value: map (each: if (isString each)
then if (isUnixSocket each)
then {socket = each; owner = cfg.user; group = cfg.group; mode = "0644"; rawEntry = "${each}";}
else {socket = each; rawEntry = "${each}";}
else each) value;
};
count = mkOption {
type = types.nullOr types.int;
default = null;
description = ''
Number of worker instances to run
'';
};
includes = mkOption {
type = types.listOf types.str;
default = [];
description = ''
List of files to include in configuration
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Additional entries to put verbatim into worker section of rspamd config file.";
};
};
config = mkIf (name == "normal" || name == "controller" || name == "fuzzy") {
type = mkDefault name;
includes = mkDefault [ "$CONFDIR/worker-${name}.inc" ];
bindSockets = mkDefault (if name == "normal"
then [{
socket = "/run/rspamd/rspamd.sock";
mode = "0660";
owner = cfg.user;
group = cfg.group;
}]
else if name == "controller"
then [ "localhost:11334" ]
else [] );
};
};
indexOf = default: start: list: e:
if list == []
then default
else if (head list) == e then start
else (indexOf default (start + (length (listenStreams (head list).socket))) (tail list) e);
systemdSocket = indexOf (abort "Socket not found") 0 allSockets;
isUnixSocket = socket: hasPrefix "/" (if (isString socket) then socket else socket.socket);
isPort = hasPrefix "*:";
isIPv4Socket = hasPrefix "*v4:";
isIPv6Socket = hasPrefix "*v6:";
isLocalHost = hasPrefix "localhost:";
listenStreams = socket:
if (isLocalHost socket) then
let port = (removePrefix "localhost:" socket);
in [ "127.0.0.1:${port}" ] ++ (if config.networking.enableIPv6 then ["[::1]:${port}"] else [])
else if (isIPv6Socket socket) then [removePrefix "*v6:" socket]
else if (isPort socket) then [removePrefix "*:" socket]
else if (isIPv4Socket socket) then
throw "error: IPv4 only socket not supported in rspamd with socket activation"
else if (length (splitString " " socket)) != 1 then
throw "error: string options not supported in rspamd with socket activation"
else [socket];
mkBindSockets = enabled: socks: concatStringsSep "\n " (flatten (map (each:
if cfg.socketActivation && enabled != false then
let systemd = (systemdSocket each);
in (imap (idx: e: "bind_socket = \"systemd:${toString (systemd + idx - 1)}\";") (listenStreams each.socket))
else "bind_socket = \"${each.rawEntry}\";") socks));
rspamdConfFile = pkgs.writeText "rspamd.conf"
'' ''
.include "$CONFDIR/common.conf" .include "$CONFDIR/common.conf"
@ -22,19 +160,33 @@ let
.include "$CONFDIR/logging.inc" .include "$CONFDIR/logging.inc"
} }
worker { ${concatStringsSep "\n" (mapAttrsToList (name: value: ''
${mkBindSockets cfg.bindSocket} worker ${optionalString (value.name != "normal" && value.name != "controller") "${value.name}"} {
.include "$CONFDIR/worker-normal.inc" type = "${value.type}";
} ${optionalString (value.enable != null)
"enabled = ${if value.enable != false then "yes" else "no"};"}
worker { ${mkBindSockets value.enable value.bindSockets}
${mkBindSockets cfg.bindUISocket} ${optionalString (value.count != null) "count = ${toString value.count};"}
.include "$CONFDIR/worker-controller.inc" ${concatStringsSep "\n " (map (each: ".include \"${each}\"") value.includes)}
} ${value.extraConfig}
}
'') cfg.workers)}
${cfg.extraConfig} ${cfg.extraConfig}
''; '';
allMappedSockets = flatten (mapAttrsToList (name: value:
if value.enable != false
then imap (idx: each: {
name = "${name}";
index = idx;
value = each;
}) value.bindSockets
else []) cfg.workers);
allSockets = map (e: e.value) allMappedSockets;
allSocketNames = map (each: "rspamd-${each.name}-${toString each.index}.socket") allMappedSockets;
in in
{ {
@ -48,36 +200,43 @@ in
enable = mkEnableOption "Whether to run the rspamd daemon."; enable = mkEnableOption "Whether to run the rspamd daemon.";
debug = mkOption { debug = mkOption {
type = types.bool;
default = false; default = false;
description = "Whether to run the rspamd daemon in debug mode."; description = "Whether to run the rspamd daemon in debug mode.";
}; };
bindSocket = mkOption { socketActivation = mkOption {
type = types.listOf types.str; type = types.bool;
default = [
"/run/rspamd/rspamd.sock mode=0660 owner=${cfg.user} group=${cfg.group}"
];
defaultText = ''[
"/run/rspamd/rspamd.sock mode=0660 owner=${cfg.user} group=${cfg.group}"
]'';
description = '' description = ''
List of sockets to listen, in format acceptable by rspamd Enable systemd socket activation for rspamd.
'';
example = ''
bindSocket = [
"/run/rspamd.sock mode=0666 owner=rspamd"
"*:11333"
];
''; '';
}; };
bindUISocket = mkOption { workers = mkOption {
type = types.listOf types.str; type = with types; attrsOf (submodule workerOpts);
default = [
"localhost:11334"
];
description = '' description = ''
List of sockets for web interface, in format acceptable by rspamd Attribute set of workers to start.
'';
default = {
normal = {};
controller = {};
};
example = literalExample ''
{
normal = {
includes = [ "$CONFDIR/worker-normal.inc" ];
bindSockets = [{
socket = "/run/rspamd/rspamd.sock";
mode = "0660";
owner = "${cfg.user}";
group = "${cfg.group}";
}];
};
controller = {
includes = [ "$CONFDIR/worker-controller.inc" ];
bindSockets = [ "[::1]:11334" ];
};
}
''; '';
}; };
@ -113,6 +272,13 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
services.rspamd.socketActivation = mkDefault (!opts.bindSocket.isDefined && !opts.bindUISocket.isDefined);
assertions = [ {
assertion = !cfg.socketActivation || !(opts.bindSocket.isDefined || opts.bindUISocket.isDefined);
message = "Can't use socketActivation for rspamd when using renamed bind socket options";
} ];
# Allow users to run 'rspamc' and 'rspamadm'. # Allow users to run 'rspamc' and 'rspamadm'.
environment.systemPackages = [ pkgs.rspamd ]; environment.systemPackages = [ pkgs.rspamd ];
@ -128,17 +294,22 @@ in
gid = config.ids.gids.rspamd; gid = config.ids.gids.rspamd;
}; };
environment.etc."rspamd.conf".source = rspamdConfFile;
systemd.services.rspamd = { systemd.services.rspamd = {
description = "Rspamd Service"; description = "Rspamd Service";
wantedBy = [ "multi-user.target" ]; wantedBy = mkIf (!cfg.socketActivation) [ "multi-user.target" ];
after = [ "network.target" ]; after = [ "network.target" ] ++
(if cfg.socketActivation then allSocketNames else []);
requires = mkIf cfg.socketActivation allSocketNames;
serviceConfig = { serviceConfig = {
ExecStart = "${pkgs.rspamd}/bin/rspamd ${optionalString cfg.debug "-d"} --user=${cfg.user} --group=${cfg.group} --pid=/run/rspamd.pid -c ${rspamdConfFile} -f"; ExecStart = "${pkgs.rspamd}/bin/rspamd ${optionalString cfg.debug "-d"} --user=${cfg.user} --group=${cfg.group} --pid=/run/rspamd.pid -c ${rspamdConfFile} -f";
Restart = "always"; Restart = "always";
RuntimeDirectory = "rspamd"; RuntimeDirectory = "rspamd";
PrivateTmp = true; PrivateTmp = true;
Sockets = mkIf cfg.socketActivation (concatStringsSep " " allSocketNames);
}; };
preStart = '' preStart = ''
@ -146,5 +317,25 @@ in
${pkgs.coreutils}/bin/chown ${cfg.user}:${cfg.group} /var/lib/rspamd ${pkgs.coreutils}/bin/chown ${cfg.user}:${cfg.group} /var/lib/rspamd
''; '';
}; };
systemd.sockets = mkIf cfg.socketActivation
(listToAttrs (map (each: {
name = "rspamd-${each.name}-${toString each.index}";
value = {
description = "Rspamd socket ${toString each.index} for worker ${each.name}";
wantedBy = [ "sockets.target" ];
listenStreams = (listenStreams each.value.socket);
socketConfig = {
BindIPv6Only = mkIf (isIPv6Socket each.value.socket) "ipv6-only";
Service = "rspamd.service";
SocketUser = mkIf (isUnixSocket each.value.socket) each.value.owner;
SocketGroup = mkIf (isUnixSocket each.value.socket) each.value.group;
SocketMode = mkIf (isUnixSocket each.value.socket) each.value.mode;
};
};
}) allMappedSockets));
}; };
imports = [
(mkRenamedOptionModule [ "services" "rspamd" "bindSocket" ] [ "services" "rspamd" "workers" "normal" "bindSockets" ])
(mkRenamedOptionModule [ "services" "rspamd" "bindUISocket" ] [ "services" "rspamd" "workers" "controller" "bindSockets" ])
];
} }
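With the refactor above, workers replace the old bindSocket/bindUISocket options and socket activation is enabled by default for new configurations; a sketch of an explicit setup whose socket paths mirror the module defaults:

{ ... }:
{
  services.rspamd = {
    enable = true;
    workers.normal.bindSockets = [{
      socket = "/run/rspamd/rspamd.sock";
      mode = "0660";
    }];
    workers.controller.bindSockets = [ "localhost:11334" ];
  };
}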

View File

@ -0,0 +1,135 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.home-assistant;
configFile = pkgs.writeText "configuration.yaml" (builtins.toJSON cfg.config);
availableComponents = pkgs.home-assistant.availableComponents;
# Given component "parentConfig.platform", returns whether config.parentConfig
# is a list containing a set with set.platform == "platform".
#
# For example, the component sensor.luftdaten is used as follows:
# config.sensor = [ {
# platform = "luftdaten";
# ...
# } ];
useComponentPlatform = component:
let
path = splitString "." component;
parentConfig = attrByPath (init path) null cfg.config;
platform = last path;
in isList parentConfig && any
(item: item.platform or null == platform)
parentConfig;
# Returns whether component is used in config
useComponent = component:
hasAttrByPath (splitString "." component) cfg.config
|| useComponentPlatform component;
# List of components used in config
extraComponents = filter useComponent availableComponents;
package = if cfg.autoExtraComponents
then (cfg.package.override { inherit extraComponents; })
else cfg.package;
in {
meta.maintainers = with maintainers; [ dotlambda ];
options.services.home-assistant = {
enable = mkEnableOption "Home Assistant";
configDir = mkOption {
default = "/var/lib/hass";
type = types.path;
description = "The config directory, where your <filename>configuration.yaml</filename> is located.";
};
config = mkOption {
default = null;
type = with types; nullOr attrs;
example = literalExample ''
{
homeassistant = {
name = "Home";
time_zone = "UTC";
};
frontend = { };
http = { };
feedreader.urls = [ "https://nixos.org/blogs.xml" ];
}
'';
description = ''
Your <filename>configuration.yaml</filename> as a Nix attribute set.
Beware that setting this option will delete your previous <filename>configuration.yaml</filename>.
'';
};
package = mkOption {
default = pkgs.home-assistant;
defaultText = "pkgs.home-assistant";
type = types.package;
example = literalExample ''
pkgs.home-assistant.override {
extraPackages = ps: with ps; [ colorlog ];
}
'';
description = ''
Home Assistant package to use.
Override <literal>extraPackages</literal> in order to add additional dependencies.
'';
};
autoExtraComponents = mkOption {
default = true;
type = types.bool;
description = ''
If set to <literal>true</literal>, the components used in <literal>config</literal>
are set as the specified package's <literal>extraComponents</literal>.
This in turn adds all packaged dependencies to the derivation.
You might still see import errors in your log.
In this case, you will need to package the necessary dependencies yourself
or ask for someone else to package them.
If a dependency is packaged but not automatically added to this list,
you might need to specify it in <literal>extraPackages</literal>.
'';
};
};
config = mkIf cfg.enable {
systemd.services.home-assistant = {
description = "Home Assistant";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
preStart = lib.optionalString (cfg.config != null) ''
rm -f ${cfg.configDir}/configuration.yaml
ln -s ${configFile} ${cfg.configDir}/configuration.yaml
'';
serviceConfig = {
ExecStart = ''
${package}/bin/hass --config "${cfg.configDir}"
'';
User = "hass";
Group = "hass";
Restart = "on-failure";
ProtectSystem = "strict";
ReadWritePaths = "${cfg.configDir}";
PrivateTmp = true;
};
};
users.extraUsers.hass = {
home = cfg.configDir;
createHome = true;
group = "hass";
uid = config.ids.uids.hass;
};
users.extraGroups.hass.gid = config.ids.gids.hass;
};
}
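A sketch of the new module in use; with autoExtraComponents left at its default, the components referenced in config are packaged automatically (values are illustrative):

{ ... }:
{
  services.home-assistant = {
    enable = true;
    config = {
      homeassistant.name = "Home";
      frontend = { };
      http = { };
    };
  };
}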

View File

@ -4,6 +4,8 @@ with lib;
let let
cfg = config.services.matrix-synapse; cfg = config.services.matrix-synapse;
pg = config.services.postgresql;
usePostgresql = cfg.database_type == "psycopg2";
logConfigFile = pkgs.writeText "log_config.yaml" cfg.logConfig; logConfigFile = pkgs.writeText "log_config.yaml" cfg.logConfig;
mkResource = r: ''{names: ${builtins.toJSON r.names}, compress: ${boolToString r.compress}}''; mkResource = r: ''{names: ${builtins.toJSON r.names}, compress: ${boolToString r.compress}}'';
mkListener = l: ''{port: ${toString l.port}, bind_address: "${l.bind_address}", type: ${l.type}, tls: ${boolToString l.tls}, x_forwarded: ${boolToString l.x_forwarded}, resources: [${concatStringsSep "," (map mkResource l.resources)}]}''; mkListener = l: ''{port: ${toString l.port}, bind_address: "${l.bind_address}", type: ${l.type}, tls: ${boolToString l.tls}, x_forwarded: ${boolToString l.x_forwarded}, resources: [${concatStringsSep "," (map mkResource l.resources)}]}'';
@ -38,7 +40,7 @@ database: {
name: "${cfg.database_type}", name: "${cfg.database_type}",
args: { args: {
${concatStringsSep ",\n " ( ${concatStringsSep ",\n " (
mapAttrsToList (n: v: "\"${n}\": ${v}") cfg.database_args mapAttrsToList (n: v: "\"${n}\": ${builtins.toJSON v}") cfg.database_args
)} )}
} }
} }
@ -155,7 +157,7 @@ in {
tls_certificate_path = mkOption { tls_certificate_path = mkOption {
type = types.nullOr types.str; type = types.nullOr types.str;
default = null; default = null;
example = "/var/lib/matrix-synapse/homeserver.tls.crt"; example = "${cfg.dataDir}/homeserver.tls.crt";
description = '' description = ''
PEM encoded X509 certificate for TLS. PEM encoded X509 certificate for TLS.
You can replace the self-signed certificate that synapse You can replace the self-signed certificate that synapse
@ -167,7 +169,7 @@ in {
tls_private_key_path = mkOption { tls_private_key_path = mkOption {
type = types.nullOr types.str; type = types.nullOr types.str;
default = null; default = null;
example = "/var/lib/matrix-synapse/homeserver.tls.key"; example = "${cfg.dataDir}/homeserver.tls.key";
description = '' description = ''
PEM encoded private key for TLS. Specify null if synapse is not PEM encoded private key for TLS. Specify null if synapse is not
speaking TLS directly. speaking TLS directly.
@ -176,7 +178,7 @@ in {
tls_dh_params_path = mkOption { tls_dh_params_path = mkOption {
type = types.nullOr types.str; type = types.nullOr types.str;
default = null; default = null;
example = "/var/lib/matrix-synapse/homeserver.tls.dh"; example = "${cfg.dataDir}/homeserver.tls.dh";
description = '' description = ''
PEM dh parameters for ephemeral keys PEM dh parameters for ephemeral keys
''; '';
@ -184,6 +186,7 @@ in {
server_name = mkOption { server_name = mkOption {
type = types.str; type = types.str;
example = "example.com"; example = "example.com";
default = config.networking.hostName;
description = '' description = ''
The domain name of the server, with optional explicit port. The domain name of the server, with optional explicit port.
This is used by remote servers to connect to this server, This is used by remote servers to connect to this server,
@ -339,16 +342,39 @@ in {
}; };
database_type = mkOption { database_type = mkOption {
type = types.enum [ "sqlite3" "psycopg2" ]; type = types.enum [ "sqlite3" "psycopg2" ];
default = "sqlite3"; default = if versionAtLeast config.system.stateVersion "18.03"
then "psycopg2"
else "sqlite3";
description = '' description = ''
The database engine name. Can be sqlite or psycopg2. The database engine name. Can be sqlite or psycopg2.
''; '';
}; };
create_local_database = mkOption {
type = types.bool;
default = true;
description = ''
Whether to create a local database automatically.
'';
};
database_name = mkOption {
type = types.str;
default = "matrix-synapse";
description = "Database name.";
};
database_user = mkOption {
type = types.str;
default = "matrix-synapse";
description = "Database user name.";
};
database_args = mkOption { database_args = mkOption {
type = types.attrs; type = types.attrs;
default = { default = {
database = "${cfg.dataDir}/homeserver.db"; sqlite3 = { database = "${cfg.dataDir}/homeserver.db"; };
}; psycopg2 = {
user = cfg.database_user;
database = cfg.database_name;
};
}."${cfg.database_type}";
description = '' description = ''
Arguments to pass to the engine. Arguments to pass to the engine.
''; '';
@ -623,15 +649,36 @@ in {
gid = config.ids.gids.matrix-synapse; gid = config.ids.gids.matrix-synapse;
} ]; } ];
services.postgresql.enable = mkIf usePostgresql (mkDefault true);
systemd.services.matrix-synapse = { systemd.services.matrix-synapse = {
description = "Synapse Matrix homeserver"; description = "Synapse Matrix homeserver";
after = [ "network.target" ]; after = [ "network.target" "postgresql.service" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
preStart = '' preStart = ''
${cfg.package}/bin/homeserver \ ${cfg.package}/bin/homeserver \
--config-path ${configFile} \ --config-path ${configFile} \
--keys-directory ${cfg.dataDir} \ --keys-directory ${cfg.dataDir} \
--generate-keys --generate-keys
'' + optionalString (usePostgresql && cfg.create_local_database) ''
if ! test -e "${cfg.dataDir}/db-created"; then
${pkgs.sudo}/bin/sudo -u ${pg.superUser} \
${pg.package}/bin/createuser \
--login \
--no-createdb \
--no-createrole \
--encrypted \
${cfg.database_user}
${pkgs.sudo}/bin/sudo -u ${pg.superUser} \
${pg.package}/bin/createdb \
--owner=${cfg.database_user} \
--encoding=UTF8 \
--lc-collate=C \
--lc-ctype=C \
--template=template0 \
${cfg.database_name}
touch "${cfg.dataDir}/db-created"
fi
''; '';
serviceConfig = { serviceConfig = {
Type = "simple"; Type = "simple";

View File

@ -8,7 +8,7 @@ let
nix = cfg.package.out; nix = cfg.package.out;
isNix112 = versionAtLeast (getVersion nix) "1.12pre"; isNix20 = versionAtLeast (getVersion nix) "2.0pre";
makeNixBuildUser = nr: makeNixBuildUser = nr:
{ name = "nixbld${toString nr}"; { name = "nixbld${toString nr}";
@ -26,32 +26,40 @@ let
nixConf = nixConf =
let let
# If we're using sandbox for builds, then provide /bin/sh in # In Nix < 2.0, If we're using sandbox for builds, then provide
# the sandbox as a bind-mount to bash. This means we also need to # /bin/sh in the sandbox as a bind-mount to bash. This means we
# include the entire closure of bash. # also need to include the entire closure of bash. Nix >= 2.0
# provides a /bin/sh by default.
sh = pkgs.stdenv.shell; sh = pkgs.stdenv.shell;
binshDeps = pkgs.writeReferencesToFile sh; binshDeps = pkgs.writeReferencesToFile sh;
in in
pkgs.runCommand "nix.conf" {extraOptions = cfg.extraOptions; } '' pkgs.runCommand "nix.conf" { extraOptions = cfg.extraOptions; inherit binshDeps; } ''
extraPaths=$(for i in $(cat ${binshDeps}); do if test -d $i; then echo $i; fi; done) ${optionalString (!isNix20) ''
extraPaths=$(for i in $(cat binshDeps); do if test -d $i; then echo $i; fi; done)
''}
cat > $out <<END cat > $out <<END
# WARNING: this file is generated from the nix.* options in # WARNING: this file is generated from the nix.* options in
# your NixOS configuration, typically # your NixOS configuration, typically
# /etc/nixos/configuration.nix. Do not edit it! # /etc/nixos/configuration.nix. Do not edit it!
build-users-group = nixbld build-users-group = nixbld
build-max-jobs = ${toString (cfg.maxJobs)} ${if isNix20 then "max-jobs" else "build-max-jobs"} = ${toString (cfg.maxJobs)}
build-cores = ${toString (cfg.buildCores)} ${if isNix20 then "cores" else "build-cores"} = ${toString (cfg.buildCores)}
build-use-sandbox = ${if (builtins.isBool cfg.useSandbox) then boolToString cfg.useSandbox else cfg.useSandbox} ${if isNix20 then "sandbox" else "build-use-sandbox"} = ${if (builtins.isBool cfg.useSandbox) then boolToString cfg.useSandbox else cfg.useSandbox}
build-sandbox-paths = ${toString cfg.sandboxPaths} /bin/sh=${sh} $(echo $extraPaths) ${if isNix20 then "extra-sandbox-paths" else "build-sandbox-paths"} = ${toString cfg.sandboxPaths} ${optionalString (!isNix20) "/bin/sh=${sh} $(echo $extraPaths)"}
binary-caches = ${toString cfg.binaryCaches} ${if isNix20 then "substituters" else "binary-caches"} = ${toString cfg.binaryCaches}
trusted-binary-caches = ${toString cfg.trustedBinaryCaches} ${if isNix20 then "trusted-substituters" else "trusted-binary-caches"} = ${toString cfg.trustedBinaryCaches}
binary-cache-public-keys = ${toString cfg.binaryCachePublicKeys} ${if isNix20 then "trusted-public-keys" else "binary-cache-public-keys"} = ${toString cfg.binaryCachePublicKeys}
auto-optimise-store = ${boolToString cfg.autoOptimiseStore} auto-optimise-store = ${boolToString cfg.autoOptimiseStore}
${optionalString cfg.requireSignedBinaryCaches '' ${if isNix20 then ''
signed-binary-caches = * require-sigs = ${if cfg.requireSignedBinaryCaches then "true" else "false"}
'' else ''
signed-binary-caches = ${if cfg.requireSignedBinaryCaches then "*" else ""}
''} ''}
trusted-users = ${toString cfg.trustedUsers} trusted-users = ${toString cfg.trustedUsers}
allowed-users = ${toString cfg.allowedUsers} allowed-users = ${toString cfg.allowedUsers}
${optionalString (isNix20 && !cfg.distributedBuilds) ''
builders =
''}
$extraOptions $extraOptions
END END
''; '';
@ -377,8 +385,9 @@ in
systemd.sockets.nix-daemon.wantedBy = [ "sockets.target" ]; systemd.sockets.nix-daemon.wantedBy = [ "sockets.target" ];
systemd.services.nix-daemon = systemd.services.nix-daemon =
{ path = [ nix pkgs.openssl.bin pkgs.utillinux config.programs.ssh.package ] { path = [ nix pkgs.utillinux ]
++ optionals cfg.distributedBuilds [ pkgs.gzip ]; ++ optionals cfg.distributedBuilds [ config.programs.ssh.package pkgs.gzip ]
++ optionals (!isNix20) [ pkgs.openssl.bin ];
environment = cfg.envVars environment = cfg.envVars
// { CURL_CA_BUNDLE = "/etc/ssl/certs/ca-certificates.crt"; } // { CURL_CA_BUNDLE = "/etc/ssl/certs/ca-certificates.crt"; }
@ -396,10 +405,9 @@ in
}; };
nix.envVars = nix.envVars =
{ NIX_CONF_DIR = "/etc/nix"; optionalAttrs (!isNix20) {
} NIX_CONF_DIR = "/etc/nix";
// optionalAttrs (!isNix112) {
# Enable the copy-from-other-stores substituter, which allows # Enable the copy-from-other-stores substituter, which allows
# builds to be sped up by copying build results from remote # builds to be sped up by copying build results from remote
# Nix stores. To do this, mount the remote file system on a # Nix stores. To do this, mount the remote file system on a
@ -407,12 +415,8 @@ in
NIX_OTHER_STORES = "/run/nix/remote-stores/*/nix"; NIX_OTHER_STORES = "/run/nix/remote-stores/*/nix";
} }
// optionalAttrs cfg.distributedBuilds { // optionalAttrs (cfg.distributedBuilds && !isNix20) {
NIX_BUILD_HOOK = NIX_BUILD_HOOK = "${nix}/libexec/nix/build-remote.pl";
if isNix112 then
"${nix}/libexec/nix/build-remote"
else
"${nix}/libexec/nix/build-remote.pl";
}; };
# Set up the environment variables for running Nix. # Set up the environment variables for running Nix.
@ -420,7 +424,7 @@ in
{ NIX_PATH = concatStringsSep ":" cfg.nixPath; { NIX_PATH = concatStringsSep ":" cfg.nixPath;
}; };
environment.extraInit = environment.extraInit = optionalString (!isNix20)
'' ''
# Set up secure multi-user builds: non-root users build through the # Set up secure multi-user builds: non-root users build through the
# Nix daemon. # Nix daemon.
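For orientation, a hedged sketch of the NixOS options that feed the nix.conf template above (values are illustrative; as the diff shows, the emitted key names switch between old and new spellings depending on the detected Nix version):

  nix.maxJobs = 4;                        # emitted as "max-jobs" on Nix >= 2.0, "build-max-jobs" before
  nix.buildCores = 0;                     # "cores" vs "build-cores"
  nix.useSandbox = true;                  # "sandbox" vs "build-use-sandbox"
  nix.requireSignedBinaryCaches = true;   # "require-sigs" vs "signed-binary-caches = *"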


@ -106,10 +106,19 @@ in {
''; '';
}; };
package = mkOption {
description = "The zookeeper package to use";
default = pkgs.zookeeper;
defaultText = "pkgs.zookeeper";
type = types.package;
};
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [cfg.package];
systemd.services.zookeeper = { systemd.services.zookeeper = {
description = "Zookeeper Daemon"; description = "Zookeeper Daemon";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
@ -118,7 +127,7 @@ in {
serviceConfig = { serviceConfig = {
ExecStart = '' ExecStart = ''
${pkgs.jre}/bin/java \ ${pkgs.jre}/bin/java \
-cp "${pkgs.zookeeper}/lib/*:${pkgs.zookeeper}/${pkgs.zookeeper.name}.jar:${configDir}" \ -cp "${cfg.package}/lib/*:${cfg.package}/${cfg.package.name}.jar:${configDir}" \
${escapeShellArgs cfg.extraCmdLineOptions} \ ${escapeShellArgs cfg.extraCmdLineOptions} \
-Dzookeeper.datadir.autocreate=false \ -Dzookeeper.datadir.autocreate=false \
${optionalString cfg.preferIPv4 "-Djava.net.preferIPv4Stack=true"} \ ${optionalString cfg.preferIPv4 "-Djava.net.preferIPv4Stack=true"} \
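A minimal sketch of the new package option introduced above (illustrative; pkgs.zookeeper is the default):

  services.zookeeper = {
    enable = true;
    package = pkgs.zookeeper;   # swap in a pinned or patched ZooKeeper build here
  };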


@ -5,18 +5,25 @@ with lib;
let let
cfg = config.services.netdata; cfg = config.services.netdata;
configFile = pkgs.writeText "netdata.conf" cfg.configText; wrappedPlugins = pkgs.runCommand "wrapped-plugins" {} ''
mkdir -p $out/libexec/netdata/plugins.d
ln -s /run/wrappers/bin/apps.plugin $out/libexec/netdata/plugins.d/apps.plugin
'';
localConfig = {
global = {
"plugins directory" = "${wrappedPlugins}/libexec/netdata/plugins.d ${pkgs.netdata}/libexec/netdata/plugins.d";
};
};
mkConfig = generators.toINI {} (recursiveUpdate localConfig cfg.config);
configFile = pkgs.writeText "netdata.conf" (if cfg.configText != null then cfg.configText else mkConfig);
defaultUser = "netdata"; defaultUser = "netdata";
in { in {
options = { options = {
services.netdata = { services.netdata = {
enable = mkOption { enable = mkEnableOption "netdata";
default = false;
type = types.bool;
description = "Whether to enable netdata monitoring.";
};
user = mkOption { user = mkOption {
type = types.str; type = types.str;
@ -31,9 +38,9 @@ in {
}; };
configText = mkOption { configText = mkOption {
type = types.lines; type = types.nullOr types.lines;
default = ""; description = "Verbatim netdata.conf, cannot be combined with config.";
description = "netdata.conf configuration."; default = null;
example = '' example = ''
[global] [global]
debug log = syslog debug log = syslog
@ -42,11 +49,29 @@ in {
''; '';
}; };
config = mkOption {
type = types.attrsOf types.attrs;
default = {};
description = "netdata.conf configuration as nix attributes. cannot be combined with configText.";
example = literalExample ''
global = {
"debug log" = "syslog";
"access log" = "syslog";
"error log" = "syslog";
};
'';
};
};
}; };
};
config = mkIf cfg.enable { config = mkIf cfg.enable {
assertions =
[ { assertion = cfg.config != {} -> cfg.configText == null ;
message = "Cannot specify both config and configText";
}
];
systemd.services.netdata = { systemd.services.netdata = {
path = with pkgs; [ gawk curl ];
description = "Real time performance monitoring"; description = "Real time performance monitoring";
after = [ "network.target" ]; after = [ "network.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
@ -66,6 +91,15 @@ in {
}; };
}; };
security.wrappers."apps.plugin" = {
source = "${pkgs.netdata}/libexec/netdata/plugins.d/apps.plugin";
capabilities = "cap_dac_read_search,cap_sys_ptrace+ep";
owner = cfg.user;
group = cfg.group;
permissions = "u+rx,g+rx,o-rwx";
};
users.extraUsers = optional (cfg.user == defaultUser) { users.extraUsers = optional (cfg.user == defaultUser) {
name = defaultUser; name = defaultUser;
}; };
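A hedged example of the new declarative config option introduced above (the assertion in the diff makes config and configText mutually exclusive):

  services.netdata = {
    enable = true;
    config = {
      global = {
        "debug log" = "syslog";
        "error log" = "syslog";
      };
    };
  };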


@ -111,11 +111,11 @@ in {
after = [ "network.target" ]; after = [ "network.target" ];
script = '' script = ''
${pkgs.prometheus-alertmanager.bin}/bin/alertmanager \ ${pkgs.prometheus-alertmanager.bin}/bin/alertmanager \
-config.file ${alertmanagerYml} \ --config.file ${alertmanagerYml} \
-web.listen-address ${cfg.listenAddress}:${toString cfg.port} \ --web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
-log.level ${cfg.logLevel} \ --log.level ${cfg.logLevel} \
${optionalString (cfg.webExternalUrl != null) ''-web.external-url ${cfg.webExternalUrl} \''} ${optionalString (cfg.webExternalUrl != null) ''--web.external-url ${cfg.webExternalUrl} \''}
${optionalString (cfg.logFormat != null) "-log.format ${cfg.logFormat}"} ${optionalString (cfg.logFormat != null) "--log.format ${cfg.logFormat}"}
''; '';
serviceConfig = { serviceConfig = {


@ -1,99 +0,0 @@
{ config, pkgs, lib, ... }:
let
inherit (lib) mkOption mkIf;
cfg = config.services.openafsClient;
cellServDB = pkgs.fetchurl {
url = http://dl.central.org/dl/cellservdb/CellServDB.2017-03-14;
sha256 = "1197z6c5xrijgf66rhaymnm5cvyg2yiy1i20y4ah4mrzmjx0m7sc";
};
afsConfig = pkgs.runCommand "afsconfig" {} ''
mkdir -p $out
echo ${cfg.cellName} > $out/ThisCell
cp ${cellServDB} $out/CellServDB
echo "/afs:${cfg.cacheDirectory}:${cfg.cacheSize}" > $out/cacheinfo
'';
openafsPkgs = config.boot.kernelPackages.openafsClient;
in
{
###### interface
options = {
services.openafsClient = {
enable = mkOption {
default = false;
description = "Whether to enable the OpenAFS client.";
};
cellName = mkOption {
default = "grand.central.org";
description = "Cell name.";
};
cacheSize = mkOption {
default = "100000";
description = "Cache size.";
};
cacheDirectory = mkOption {
default = "/var/cache/openafs";
description = "Cache directory.";
};
crypt = mkOption {
default = false;
description = "Whether to enable (weak) protocol encryption.";
};
sparse = mkOption {
default = false;
description = "Minimal cell list in /afs.";
};
};
};
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ openafsPkgs ];
environment.etc = [
{ source = afsConfig;
target = "openafs";
}
];
systemd.services.afsd = {
description = "AFS client";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = { RemainAfterExit = true; };
preStart = ''
mkdir -p -m 0755 /afs
mkdir -m 0700 -p ${cfg.cacheDirectory}
${pkgs.kmod}/bin/insmod ${openafsPkgs}/lib/openafs/libafs-*.ko || true
${openafsPkgs}/sbin/afsd -confdir ${afsConfig} -cachedir ${cfg.cacheDirectory} ${if cfg.sparse then "-dynroot-sparse" else "-dynroot"} -fakestat -afsdb
${openafsPkgs}/bin/fs setcrypt ${if cfg.crypt then "on" else "off"}
'';
# Doing this in preStop, because after these commands AFS is basically
# stopped, so systemd has nothing to do, just noticing it. If done in
# postStop, then we get a hang + kernel oops, because AFS can't be
# stopped simply by sending signals to processes.
preStop = ''
${pkgs.utillinux}/bin/umount /afs
${openafsPkgs}/sbin/afsd -shutdown
'';
};
};
}


@ -0,0 +1,239 @@
{ config, pkgs, lib, ... }:
with import ./lib.nix { inherit lib; };
let
inherit (lib) getBin mkOption mkIf optionalString singleton types;
cfg = config.services.openafsClient;
cellServDB = pkgs.fetchurl {
url = http://dl.central.org/dl/cellservdb/CellServDB.2017-03-14;
sha256 = "1197z6c5xrijgf66rhaymnm5cvyg2yiy1i20y4ah4mrzmjx0m7sc";
};
clientServDB = pkgs.writeText "client-cellServDB-${cfg.cellName}" (mkCellServDB cfg.cellName cfg.cellServDB);
afsConfig = pkgs.runCommand "afsconfig" {} ''
mkdir -p $out
echo ${cfg.cellName} > $out/ThisCell
cat ${cellServDB} ${clientServDB} > $out/CellServDB
echo "${cfg.mountPoint}:${cfg.cache.directory}:${toString cfg.cache.blocks}" > $out/cacheinfo
'';
openafsMod = config.boot.kernelPackages.openafs;
openafsBin = lib.getBin pkgs.openafs;
in
{
###### interface
options = {
services.openafsClient = {
enable = mkOption {
default = false;
type = types.bool;
description = "Whether to enable the OpenAFS client.";
};
afsdb = mkOption {
default = true;
type = types.bool;
description = "Resolve cells via AFSDB DNS records.";
};
cellName = mkOption {
default = "";
type = types.str;
description = "Cell name.";
example = "grand.central.org";
};
cellServDB = mkOption {
default = [];
type = with types; listOf (submodule { options = cellServDBConfig; });
description = ''
This cell's database server records, added to the global
CellServDB. See CellServDB(5) man page for syntax. Ignored when
<literal>afsdb</literal> is set to <literal>true</literal>.
'';
example = ''
[ { ip = "1.2.3.4"; dnsname = "first.afsdb.server.dns.fqdn.org"; }
{ ip = "2.3.4.5"; dnsname = "second.afsdb.server.dns.fqdn.org"; }
]
'';
};
cache = {
blocks = mkOption {
default = 100000;
type = types.int;
description = "Cache size in 1KB blocks.";
};
chunksize = mkOption {
default = 0;
type = types.ints.between 0 30;
description = ''
Size of each cache chunk given in powers of
2. <literal>0</literal> resets the chunk size to its default
values (13 (8 KB) for memcache, 18-20 (256 KB to 1 MB) for
diskcache). Maximum value is 30. Important performance
parameter. Set to higher values when dealing with large files.
'';
};
directory = mkOption {
default = "/var/cache/openafs";
type = types.str;
description = "Cache directory.";
};
diskless = mkOption {
default = false;
type = types.bool;
description = ''
Use in-memory cache for diskless machines. Has no real
performance benefit anymore.
'';
};
};
crypt = mkOption {
default = true;
type = types.bool;
description = "Whether to enable (weak) protocol encryption.";
};
daemons = mkOption {
default = 2;
type = types.int;
description = ''
Number of daemons to serve user requests. Numbers higher than 6
usually do not increase performance. Default is sufficient for up
to five concurrent users.
'';
};
fakestat = mkOption {
default = false;
type = types.bool;
description = ''
Return fake data on stat() calls. If <literal>true</literal>,
always do so. If <literal>false</literal>, only do so for
cross-cell mounts (as these are potentially expensive).
'';
};
inumcalc = mkOption {
default = "compat";
type = types.strMatching "compat|md5";
description = ''
Inode calculation method. <literal>compat</literal> is
computationally less expensive, but <literal>md5</literal> greatly
reduces the likelihood of inode collisions in larger scenarios
involving multiple cells mounted into one AFS space.
'';
};
mountPoint = mkOption {
default = "/afs";
type = types.str;
description = ''
Mountpoint of the AFS file tree, conventionally
<literal>/afs</literal>. When set to a different value, only
cross-cells that use the same value can be accessed.
'';
};
sparse = mkOption {
default = true;
type = types.bool;
description = "Minimal cell list in /afs.";
};
startDisconnected = mkOption {
default = false;
type = types.bool;
description = ''
Start up in disconnected mode. You need to execute
<literal>fs disco online</literal> (as root) to switch to
connected mode. Useful for roaming devices.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
assertions = [
{ assertion = cfg.afsdb || cfg.cellServDB != [];
message = "You should specify all cell-local database servers in config.services.openafsClient.cellServDB or set config.services.openafsClient.afsdb.";
}
{ assertion = cfg.cellName != "";
message = "You must specify the local cell name in config.services.openafsClient.cellName.";
}
];
environment.systemPackages = [ pkgs.openafs ];
environment.etc = {
clientCellServDB = {
source = pkgs.runCommand "CellServDB" {} ''
cat ${cellServDB} ${clientServDB} > $out
'';
target = "openafs/CellServDB";
mode = "0644";
};
clientCell = {
text = ''
${cfg.cellName}
'';
target = "openafs/ThisCell";
mode = "0644";
};
};
systemd.services.afsd = {
description = "AFS client";
wantedBy = [ "multi-user.target" ];
after = singleton (if cfg.startDisconnected then "network.target" else "network-online.target");
serviceConfig = { RemainAfterExit = true; };
restartIfChanged = false;
preStart = ''
mkdir -p -m 0755 ${cfg.mountPoint}
mkdir -m 0700 -p ${cfg.cache.directory}
${pkgs.kmod}/bin/insmod ${openafsMod}/lib/modules/*/extra/openafs/libafs.ko.xz
${openafsBin}/sbin/afsd \
-mountdir ${cfg.mountPoint} \
-confdir ${afsConfig} \
${optionalString (!cfg.cache.diskless) "-cachedir ${cfg.cache.directory}"} \
-blocks ${toString cfg.cache.blocks} \
-chunksize ${toString cfg.cache.chunksize} \
${optionalString cfg.cache.diskless "-memcache"} \
-inumcalc ${cfg.inumcalc} \
${if cfg.fakestat then "-fakestat-all" else "-fakestat"} \
${if cfg.sparse then "-dynroot-sparse" else "-dynroot"} \
${optionalString cfg.afsdb "-afsdb"}
${openafsBin}/bin/fs setcrypt ${if cfg.crypt then "on" else "off"}
${optionalString cfg.startDisconnected "${openafsBin}/bin/fs discon offline"}
'';
# Doing this in preStop, because after these commands AFS is basically
# stopped, so systemd has nothing to do, just noticing it. If done in
# postStop, then we get a hang + kernel oops, because AFS can't be
# stopped simply by sending signals to processes.
preStop = ''
${pkgs.utillinux}/bin/umount ${cfg.mountPoint}
${openafsBin}/sbin/afsd -shutdown
${pkgs.kmod}/sbin/rmmod libafs
'';
};
};
}
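A minimal sketch of the rewritten client module (values are examples only; with the default afsdb = true, cellServDB may stay empty if the cell publishes AFSDB DNS records):

  services.openafsClient = {
    enable = true;
    cellName = "grand.central.org";
    cache.blocks = 250000;        # roughly 250 MB of disk cache
  };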


@ -0,0 +1,28 @@
{ lib, ...}:
let
inherit (lib) concatStringsSep mkOption types;
in rec {
mkCellServDB = cellName: db: ''
>${cellName}
'' + (concatStringsSep "\n" (map (dbm: if (dbm.ip != "" && dbm.dnsname != "") then dbm.ip + " #" + dbm.dnsname else "")
db));
# CellServDB configuration type
cellServDBConfig = {
ip = mkOption {
type = types.str;
default = "";
example = "1.2.3.4";
description = "IP Address of a database server";
};
dnsname = mkOption {
type = types.str;
default = "";
example = "afs.example.org";
description = "DNS full-qualified domain name of a database server";
};
};
}
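For reference, a worked example of mkCellServDB as defined above (addresses are placeholders):

  mkCellServDB "example.org" [
    { ip = "192.0.2.10"; dnsname = "afsdb1.example.org"; }
    { ip = "192.0.2.11"; dnsname = "afsdb2.example.org"; }
  ]
  # evaluates to:
  #   >example.org
  #   192.0.2.10 #afsdb1.example.org
  #   192.0.2.11 #afsdb2.example.org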


@ -0,0 +1,260 @@
{ config, pkgs, lib, ... }:
with import ./lib.nix { inherit lib; };
let
inherit (lib) concatStringsSep intersperse mapAttrsToList mkForce mkIf mkMerge mkOption optionalString types;
bosConfig = pkgs.writeText "BosConfig" (''
restrictmode 1
restarttime 16 0 0 0 0
checkbintime 3 0 5 0 0
'' + (optionalString cfg.roles.database.enable ''
bnode simple vlserver 1
parm ${openafsBin}/libexec/openafs/vlserver ${optionalString cfg.dottedPrincipals "-allow-dotted-principals"} ${cfg.roles.database.vlserverArgs}
end
bnode simple ptserver 1
parm ${openafsBin}/libexec/openafs/ptserver ${optionalString cfg.dottedPrincipals "-allow-dotted-principals"} ${cfg.roles.database.ptserverArgs}
end
'') + (optionalString cfg.roles.fileserver.enable ''
bnode dafs dafs 1
parm ${openafsBin}/libexec/openafs/dafileserver ${optionalString cfg.dottedPrincipals "-allow-dotted-principals"} -udpsize ${udpSizeStr} ${cfg.roles.fileserver.fileserverArgs}
parm ${openafsBin}/libexec/openafs/davolserver ${optionalString cfg.dottedPrincipals "-allow-dotted-principals"} -udpsize ${udpSizeStr} ${cfg.roles.fileserver.volserverArgs}
parm ${openafsBin}/libexec/openafs/salvageserver ${cfg.roles.fileserver.salvageserverArgs}
parm ${openafsBin}/libexec/openafs/dasalvager ${cfg.roles.fileserver.salvagerArgs}
end
'') + (optionalString (cfg.roles.database.enable && cfg.roles.backup.enable) ''
bnode simple buserver 1
parm ${openafsBin}/libexec/openafs/buserver ${cfg.roles.backup.buserverArgs} ${optionalString (cfg.roles.backup.cellServDB != []) "-cellservdb /etc/openafs/backup/"}
end
''));
netInfo = if (cfg.advertisedAddresses != []) then
pkgs.writeText "NetInfo" ((concatStringsSep "\nf " cfg.advertisedAddresses) + "\n")
else null;
buCellServDB = pkgs.writeText "backup-cellServDB-${cfg.cellName}" (mkCellServDB cfg.cellName cfg.roles.backup.cellServDB);
cfg = config.services.openafsServer;
udpSizeStr = toString cfg.udpPacketSize;
openafsBin = lib.getBin pkgs.openafs;
in {
options = {
services.openafsServer = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Whether to enable the OpenAFS server. An OpenAFS server needs a
complex setup. So, be aware that enabling this service and setting
some options does not give you a turn-key-ready solution. You need
at least a running Kerberos 5 setup, as OpenAFS relies on it for
authentication. See the Guide "QuickStartUnix" coming with
<literal>pkgs.openafs.doc</literal> for complete setup
instructions.
'';
};
advertisedAddresses = mkOption {
default = [];
description = "List of IP addresses this server is advertised under. See NetInfo(5)";
};
cellName = mkOption {
default = "";
type = types.str;
description = "Cell name, this server will serve.";
example = "grand.central.org";
};
cellServDB = mkOption {
default = [];
type = with types; listOf (submodule [ { options = cellServDBConfig;} ]);
description = "Definition of all cell-local database server machines.";
};
roles = {
fileserver = {
enable = mkOption {
default = true;
type = types.bool;
description = "Fileserver role, serves files and volumes from its local storage.";
};
fileserverArgs = mkOption {
default = "-vattachpar 128 -vhashsize 11 -L -rxpck 400 -cb 1000000";
type = types.str;
description = "Arguments to the dafileserver process. See its man page.";
};
volserverArgs = mkOption {
default = "";
type = types.str;
description = "Arguments to the davolserver process. See its man page.";
example = "-sync never";
};
salvageserverArgs = mkOption {
default = "";
type = types.str;
description = "Arguments to the salvageserver process. See its man page.";
example = "-showlog";
};
salvagerArgs = mkOption {
default = "";
type = types.str;
description = "Arguments to the dasalvager process. See its man page.";
example = "-showlog -showmounts";
};
};
database = {
enable = mkOption {
default = true;
type = types.bool;
description = ''
Database server role, maintains the Volume Location Database,
Protection Database (and Backup Database, see
<literal>backup</literal> role). There can be multiple
servers in the database role for replication, which then need
reliable network connection to each other.
Servers in this role appear in AFSDB DNS records or the
CellServDB.
'';
};
vlserverArgs = mkOption {
default = "";
type = types.str;
description = "Arguments to the vlserver process. See its man page.";
example = "-rxbind";
};
ptserverArgs = mkOption {
default = "";
type = types.str;
description = "Arguments to the ptserver process. See its man page.";
example = "-restricted -default_access S---- S-M---";
};
};
backup = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Backup server role. Use in conjunction with the
<literal>database</literal> role to maintain the Backup
Database. Normally only used in conjunction with tape storage
or IBM's Tivoli Storage Manager.
'';
};
buserverArgs = mkOption {
default = "";
type = types.str;
description = "Arguments to the buserver process. See its man page.";
example = "-p 8";
};
cellServDB = mkOption {
default = [];
type = with types; listOf (submodule [ { options = cellServDBConfig;} ]);
description = ''
Definition of all cell-local backup database server machines.
Use this when your cell uses fewer backup database servers than
other database server machines.
'';
};
};
};
dottedPrincipals= mkOption {
default = false;
type = types.bool;
description = ''
If enabled, allow principal names containing (.) dots. Enabling
this has security implications!
'';
};
udpPacketSize = mkOption {
default = 1310720;
type = types.int;
description = ''
UDP packet size to use in Bytes. Higher values can speed up
communications. The default of 1 MB is sufficient in most
cases. Make sure to increase the kernel's UDP buffer size
accordingly via <literal>net.core(w|r|opt)mem_max</literal>
sysctl.
'';
};
};
};
config = mkIf cfg.enable {
assertions = [
{ assertion = cfg.cellServDB != [];
message = "You must specify all cell-local database servers in config.services.openafsServer.cellServDB.";
}
{ assertion = cfg.cellName != "";
message = "You must specify the local cell name in config.services.openafsServer.cellName.";
}
];
environment.systemPackages = [ pkgs.openafs ];
environment.etc = {
bosConfig = {
source = bosConfig;
target = "openafs/BosConfig";
mode = "0644";
};
cellServDB = {
text = mkCellServDB cfg.cellName cfg.cellServDB;
target = "openafs/server/CellServDB";
mode = "0644";
};
thisCell = {
text = cfg.cellName;
target = "openafs/server/ThisCell";
mode = "0644";
};
buCellServDB = {
enable = (cfg.roles.backup.cellServDB != []);
text = mkCellServDB cfg.cellName cfg.roles.backup.cellServDB;
target = "openafs/backup/CellServDB";
};
};
systemd.services = {
openafs-server = {
description = "OpenAFS server";
after = [ "syslog.target" "network.target" ];
wantedBy = [ "multi-user.target" ];
restartIfChanged = false;
unitConfig.ConditionPathExists = [ "/etc/openafs/server/rxkad.keytab" ];
preStart = ''
mkdir -m 0755 -p /var/openafs
${optionalString (netInfo != null) "cp ${netInfo} /var/openafs/netInfo"}
${optionalString (cfg.roles.backup.cellServDB != []) "cp ${buCellServDB}"}
'';
serviceConfig = {
ExecStart = "${openafsBin}/bin/bosserver -nofork";
ExecStop = "${openafsBin}/bin/bos shutdown localhost -wait -localauth";
};
};
};
};
}
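A hedged sketch of a single-machine server configuration using the options above (a working Kerberos setup and /etc/openafs/server/rxkad.keytab must already exist, as the unit condition in the diff requires):

  services.openafsServer = {
    enable = true;
    cellName = "example.org";
    cellServDB = [
      { ip = "192.0.2.10"; dnsname = "afs1.example.org"; }
    ];
    # roles.fileserver and roles.database default to enabled
  };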


@ -54,10 +54,12 @@ let
}; };
serviceConfig = { serviceConfig = {
ExecStart = "${samba}/sbin/${appName} ${args}"; ExecStart = "${samba}/sbin/${appName} --foreground --no-process-group ${args}";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID"; ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
LimitNOFILE = 16384; LimitNOFILE = 16384;
PIDFile = "/run/${appName}.pid";
Type = "notify"; Type = "notify";
NotifyAccess = "all"; #may not do anything...
}; };
restartTriggers = [ configFile ]; restartTriggers = [ configFile ];
@ -231,11 +233,12 @@ in
after = [ "samba-setup.service" "network.target" ]; after = [ "samba-setup.service" "network.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
}; };
# Refer to https://github.com/samba-team/samba/tree/master/packaging/systemd
# for correct use with systemd
services = { services = {
"samba-smbd" = daemonService "smbd" "-F"; "samba-smbd" = daemonService "smbd" "";
"samba-nmbd" = mkIf cfg.enableNmbd (daemonService "nmbd" "-F"); "samba-nmbd" = mkIf cfg.enableNmbd (daemonService "nmbd" "");
"samba-winbindd" = mkIf cfg.enableWinbindd (daemonService "winbindd" "-F"); "samba-winbindd" = mkIf cfg.enableWinbindd (daemonService "winbindd" "");
"samba-setup" = { "samba-setup" = {
description = "Samba Setup Task"; description = "Samba Setup Task";
script = setupScript; script = setupScript;


@ -10,9 +10,9 @@ let
settingsDir = "${homeDir}"; settingsDir = "${homeDir}";
sessionFile = "${homeDir}/aria2.session"; sessionFile = "${homeDir}/aria2.session";
downloadDir = "${homeDir}/Downloads"; downloadDir = "${homeDir}/Downloads";
rangesToStringList = map (x: builtins.toString x.from +"-"+ builtins.toString x.to); rangesToStringList = map (x: builtins.toString x.from +"-"+ builtins.toString x.to);
settingsFile = pkgs.writeText "aria2.conf" settingsFile = pkgs.writeText "aria2.conf"
'' ''
dir=${cfg.downloadDir} dir=${cfg.downloadDir}
@ -110,12 +110,12 @@ in
mkdir -m 0770 -p "${homeDir}" mkdir -m 0770 -p "${homeDir}"
chown aria2:aria2 "${homeDir}" chown aria2:aria2 "${homeDir}"
if [[ ! -d "${config.services.aria2.downloadDir}" ]] if [[ ! -d "${config.services.aria2.downloadDir}" ]]
then then
mkdir -m 0770 -p "${config.services.aria2.downloadDir}" mkdir -m 0770 -p "${config.services.aria2.downloadDir}"
chown aria2:aria2 "${config.services.aria2.downloadDir}" chown aria2:aria2 "${config.services.aria2.downloadDir}"
fi fi
if [[ ! -e "${sessionFile}" ]] if [[ ! -e "${sessionFile}" ]]
then then
touch "${sessionFile}" touch "${sessionFile}"
chown aria2:aria2 "${sessionFile}" chown aria2:aria2 "${sessionFile}"
fi fi
@ -132,4 +132,4 @@ in
}; };
}; };
}; };
} }


@ -7,21 +7,27 @@ let
let let
cfg = config.services.${variant}; cfg = config.services.${variant};
pkg = pkgs.${variant}; pkg = pkgs.${variant};
birdBin = if variant == "bird6" then "bird6" else "bird";
birdc = if variant == "bird6" then "birdc6" else "birdc"; birdc = if variant == "bird6" then "birdc6" else "birdc";
descr =
{ bird = "1.9.x with IPv4 suport";
bird6 = "1.9.x with IPv6 suport";
bird2 = "2.x";
}.${variant};
configFile = pkgs.stdenv.mkDerivation { configFile = pkgs.stdenv.mkDerivation {
name = "${variant}.conf"; name = "${variant}.conf";
text = cfg.config; text = cfg.config;
preferLocalBuild = true; preferLocalBuild = true;
buildCommand = '' buildCommand = ''
echo -n "$text" > $out echo -n "$text" > $out
${pkg}/bin/${variant} -d -p -c $out ${pkg}/bin/${birdBin} -d -p -c $out
''; '';
}; };
in { in {
###### interface ###### interface
options = { options = {
services.${variant} = { services.${variant} = {
enable = mkEnableOption "BIRD Internet Routing Daemon"; enable = mkEnableOption "BIRD Internet Routing Daemon (${descr})";
config = mkOption { config = mkOption {
type = types.lines; type = types.lines;
description = '' description = ''
@ -36,12 +42,12 @@ let
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [ pkg ]; environment.systemPackages = [ pkg ];
systemd.services.${variant} = { systemd.services.${variant} = {
description = "BIRD Internet Routing Daemon"; description = "BIRD Internet Routing Daemon (${descr})";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
serviceConfig = { serviceConfig = {
Type = "forking"; Type = "forking";
Restart = "on-failure"; Restart = "on-failure";
ExecStart = "${pkg}/bin/${variant} -c ${configFile} -u ${variant} -g ${variant}"; ExecStart = "${pkg}/bin/${birdBin} -c ${configFile} -u ${variant} -g ${variant}";
ExecReload = "${pkg}/bin/${birdc} configure"; ExecReload = "${pkg}/bin/${birdc} configure";
ExecStop = "${pkg}/bin/${birdc} down"; ExecStop = "${pkg}/bin/${birdc} down";
CapabilityBoundingSet = [ "CAP_CHOWN" "CAP_FOWNER" "CAP_DAC_OVERRIDE" "CAP_SETUID" "CAP_SETGID" CapabilityBoundingSet = [ "CAP_CHOWN" "CAP_FOWNER" "CAP_DAC_OVERRIDE" "CAP_SETUID" "CAP_SETGID"
@ -56,14 +62,15 @@ let
users = { users = {
extraUsers.${variant} = { extraUsers.${variant} = {
description = "BIRD Internet Routing Daemon user"; description = "BIRD Internet Routing Daemon user";
group = "${variant}"; group = variant;
}; };
extraGroups.${variant} = {}; extraGroups.${variant} = {};
}; };
}; };
}; };
inherit (config.services) bird bird6; in
in {
imports = [(generic "bird") (generic "bird6")]; {
imports = map generic [ "bird" "bird6" "bird2" ];
} }
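A minimal sketch of the new bird2 variant generated above (the BIRD configuration itself is only a placeholder; the module parses it at build time with the -p flag shown in the diff):

  services.bird2 = {
    enable = true;
    config = ''
      router id 192.0.2.1;
      protocol device { }
    '';
  };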


@ -145,6 +145,16 @@ in {
}; };
users.groups.dnscrypt-wrapper = { }; users.groups.dnscrypt-wrapper = { };
security.polkit.extraConfig = ''
// Allow dnscrypt-wrapper user to restart dnscrypt-wrapper.service
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "dnscrypt-wrapper.service" &&
subject.user == "dnscrypt-wrapper") {
return polkit.Result.YES;
}
});
'';
systemd.services.dnscrypt-wrapper = { systemd.services.dnscrypt-wrapper = {
description = "dnscrypt-wrapper daemon"; description = "dnscrypt-wrapper daemon";


@ -43,7 +43,16 @@ in
type = with types; listOf str; type = with types; listOf str;
default = [ "::1" "127.0.0.1" ]; default = [ "::1" "127.0.0.1" ];
description = '' description = ''
What addresses the server should listen on. What addresses the server should listen on. (UDP+TCP 53)
'';
};
listenTLS = mkOption {
type = with types; listOf str;
default = [];
example = [ "198.51.100.1:853" "[2001:db8::1]:853" "853" ];
description = ''
Addresses on which kresd should provide DNS over TLS (see RFC 7858).
For detailed syntax see ListenStream in man systemd.socket.
''; '';
}; };
# TODO: perhaps options for more common stuff like cache size or forwarding # TODO: perhaps options for more common stuff like cache size or forwarding
@ -75,6 +84,18 @@ in
socketConfig.FreeBind = true; socketConfig.FreeBind = true;
}; };
systemd.sockets.kresd-tls = mkIf (cfg.listenTLS != []) rec {
wantedBy = [ "sockets.target" ];
before = wantedBy;
partOf = [ "kresd.socket" ];
listenStreams = cfg.listenTLS;
socketConfig = {
FileDescriptorName = "tls";
FreeBind = true;
Service = "kresd.service";
};
};
systemd.sockets.kresd-control = rec { systemd.sockets.kresd-control = rec {
wantedBy = [ "sockets.target" ]; wantedBy = [ "sockets.target" ];
before = wantedBy; before = wantedBy;
@ -97,11 +118,13 @@ in
Type = "notify"; Type = "notify";
WorkingDirectory = cfg.cacheDir; WorkingDirectory = cfg.cacheDir;
Restart = "on-failure"; Restart = "on-failure";
Sockets = [ "kresd.socket" "kresd-control.socket" ]
++ optional (cfg.listenTLS != []) "kresd-tls.socket";
}; };
# Trust anchor goes from dns-root-data by default.
script = '' script = ''
exec '${package}/bin/kresd' --config '${configFile}' \ exec '${package}/bin/kresd' --config '${configFile}' --forks=1
-k '${pkgs.dns-root-data}/root.key'
''; '';
requires = [ "kresd.socket" ]; requires = [ "kresd.socket" ];


@ -0,0 +1,238 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.monero;
dataDir = "/var/lib/monero";
listToConf = option: list:
concatMapStrings (value: "${option}=${value}\n") list;
login = (cfg.rpc.user != null && cfg.rpc.password != null);
configFile = with cfg; pkgs.writeText "monero.conf" ''
log-file=/dev/stdout
data-dir=${dataDir}
${optionalString mining.enable ''
start-mining=${mining.address}
mining-threads=${toString mining.threads}
''}
rpc-bind-ip=${rpc.address}
rpc-bind-port=${toString rpc.port}
${optionalString login ''
rpc-login=${rpc.user}:${rpc.password}
''}
${optionalString rpc.restricted ''
restrict-rpc=1
''}
limit-rate-up=${toString limits.upload}
limit-rate-down=${toString limits.download}
max-concurrency=${toString limits.threads}
block-sync-size=${toString limits.syncSize}
${listToConf "add-peer" extraNodes}
${listToConf "add-priority-node" priorityNodes}
${listToConf "add-exclusive-node" exclusiveNodes}
${extraConfig}
'';
in
{
###### interface
options = {
services.monero = {
enable = mkEnableOption "Monero node daemon.";
mining.enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to mine moneroj.
'';
};
mining.address = mkOption {
type = types.str;
default = "";
description = ''
Monero address where to send mining rewards.
'';
};
mining.threads = mkOption {
type = types.addCheck types.int (x: x>=0);
default = 0;
description = ''
Number of threads used for mining.
Set to <literal>0</literal> to use all available.
'';
};
rpc.user = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
User name for RPC connections.
'';
};
rpc.password = mkOption {
type = types.str;
default = null;
description = ''
Password for RPC connections.
'';
};
rpc.address = mkOption {
type = types.str;
default = "127.0.0.1";
description = ''
IP address the RPC server will bind to.
'';
};
rpc.port = mkOption {
type = types.int;
default = 18081;
description = ''
Port the RPC server will bind to.
'';
};
rpc.restricted = mkOption {
type = types.bool;
default = false;
description = ''
Whether to restrict RPC to view only commands.
'';
};
limits.upload = mkOption {
type = types.addCheck types.int (x: x>=-1);
default = -1;
description = ''
Limit of the upload rate in kB/s.
Set to <literal>-1</literal> to leave unlimited.
'';
};
limits.download = mkOption {
type = types.addCheck types.int (x: x>=-1);
default = -1;
description = ''
Limit of the download rate in kB/s.
Set to <literal>-1</literal> to leave unlimited.
'';
};
limits.threads = mkOption {
type = types.addCheck types.int (x: x>=0);
default = 0;
description = ''
Maximum number of threads used for a parallel job.
Set to <literal>0</literal> to leave unlimited.
'';
};
limits.syncSize = mkOption {
type = types.addCheck types.int (x: x>=0);
default = 0;
description = ''
Maximum number of blocks to sync at once.
Set to <literal>0</literal> for adaptive.
'';
};
extraNodes = mkOption {
type = types.listOf types.str;
default = [ ];
description = ''
List of additional peer IP addresses to add to the local list.
'';
};
priorityNodes = mkOption {
type = types.listOf types.str;
default = [ ];
description = ''
List of peer IP addresses to connect to and
attempt to keep the connection open.
'';
};
exclusiveNodes = mkOption {
type = types.listOf types.str;
default = [ ];
description = ''
List of peer IP addresses to connect to *only*.
If given the other peer options will be ignored.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra lines to be added verbatim to monerod configuration.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
users.extraUsers = singleton {
name = "monero";
uid = config.ids.uids.monero;
description = "Monero daemon user";
home = dataDir;
createHome = true;
};
users.extraGroups = singleton {
name = "monero";
gid = config.ids.gids.monero;
};
systemd.services.monero = {
description = "monero daemon";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = "monero";
Group = "monero";
ExecStart = "${pkgs.monero}/bin/monerod --config-file=${configFile} --non-interactive";
Restart = "always";
SuccessExitStatus = [ 0 1 ];
};
};
assertions = singleton {
assertion = cfg.mining.enable -> cfg.mining.address != "";
message = ''
You need a Monero address to receive mining rewards:
specify one using option monero.mining.address.
'';
};
};
}
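A hedged sketch of the new Monero node module (all values illustrative):

  services.monero = {
    enable = true;
    rpc.address = "127.0.0.1";
    rpc.port = 18081;
    limits.download = 1024;   # kB/s; the default -1 means unlimited
  };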


@ -12,6 +12,10 @@ let
keyfile ${cfg.ssl.keyfile} keyfile ${cfg.ssl.keyfile}
''; '';
passwordConf = optionalString cfg.checkPasswords ''
password_file ${cfg.dataDir}/passwd
'';
mosquittoConf = pkgs.writeText "mosquitto.conf" '' mosquittoConf = pkgs.writeText "mosquitto.conf" ''
pid_file /run/mosquitto/pid pid_file /run/mosquitto/pid
acl_file ${aclFile} acl_file ${aclFile}
@ -19,6 +23,7 @@ let
allow_anonymous ${boolToString cfg.allowAnonymous} allow_anonymous ${boolToString cfg.allowAnonymous}
bind_address ${cfg.host} bind_address ${cfg.host}
port ${toString cfg.port} port ${toString cfg.port}
${passwordConf}
${listenerConf} ${listenerConf}
${cfg.extraConf} ${cfg.extraConf}
''; '';
@ -153,6 +158,15 @@ in
''; '';
}; };
checkPasswords = mkOption {
default = false;
example = true;
type = types.bool;
description = ''
Refuse connection when clients provide incorrect passwords.
'';
};
extraConf = mkOption { extraConf = mkOption {
default = ""; default = "";
type = types.lines; type = types.lines;
@ -198,7 +212,7 @@ in
'' + concatStringsSep "\n" ( '' + concatStringsSep "\n" (
mapAttrsToList (n: c: mapAttrsToList (n: c:
if c.hashedPassword != null then if c.hashedPassword != null then
"echo '${n}:${c.hashedPassword}' > ${cfg.dataDir}/passwd" "echo '${n}:${c.hashedPassword}' >> ${cfg.dataDir}/passwd"
else optionalString (c.password != null) else optionalString (c.password != null)
"${pkgs.mosquitto}/bin/mosquitto_passwd -b ${cfg.dataDir}/passwd ${n} ${c.password}" "${pkgs.mosquitto}/bin/mosquitto_passwd -b ${cfg.dataDir}/passwd ${n} ${c.password}"
) cfg.users); ) cfg.users);
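A sketch of the new checkPasswords option (the services.mosquitto path and the users submodule are assumed from the surrounding module, which is not shown in full here):

  services.mosquitto = {
    enable = true;
    checkPasswords = true;                      # refuse clients that send wrong credentials
    users.alice.password = "example-password";  # hashedPassword is also supported, per the diff
  };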


@ -50,6 +50,11 @@ let
"up ${pkgs.writeScript "openvpn-${name}-up" upScript}"} "up ${pkgs.writeScript "openvpn-${name}-up" upScript}"}
${optionalString (cfg.down != "" || cfg.updateResolvConf) ${optionalString (cfg.down != "" || cfg.updateResolvConf)
"down ${pkgs.writeScript "openvpn-${name}-down" downScript}"} "down ${pkgs.writeScript "openvpn-${name}-down" downScript}"}
${optionalString (cfg.authUserPass != null)
"auth-user-pass ${pkgs.writeText "openvpn-credentials-${name}" ''
${cfg.authUserPass.username}
${cfg.authUserPass.password}
''}"}
''; '';
in { in {
@ -161,6 +166,29 @@ in
''; '';
}; };
authUserPass = mkOption {
default = null;
description = ''
This option can be used to store the username / password credentials
with the "auth-user-pass" authentication method.
WARNING: Using this option will put the credentials WORLD-READABLE in the Nix store!
'';
type = types.nullOr (types.submodule {
options = {
username = mkOption {
description = "The username to store inside the credentials file.";
type = types.string;
};
password = mkOption {
description = "The password to store inside the credentials file.";
type = types.string;
};
};
});
};
}; };
}); });
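A hedged sketch of the new authUserPass option (the services.openvpn.servers path and the config option are assumed from the wider module; note the warning above that the credentials end up world-readable in the Nix store):

  services.openvpn.servers.office = {
    config = "config /root/openvpn/office.conf";   # assumed existing option, not shown in this hunk
    authUserPass = {
      username = "alice";
      password = "not-a-real-password";
    };
  };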


@ -17,7 +17,7 @@ let
search_lan = entry.searchLAN; search_lan = entry.searchLAN;
use_sync_trash = entry.useSyncTrash; use_sync_trash = entry.useSyncTrash;
known_hosts = knownHosts; known_hosts = entry.knownHosts;
}) cfg.sharedFolders; }) cfg.sharedFolders;
configFile = pkgs.writeText "config.json" (builtins.toJSON ({ configFile = pkgs.writeText "config.json" (builtins.toJSON ({


@ -0,0 +1,63 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.networking.rxe;
runRxeCmd = cmd: ifcs:
concatStrings ( map (x: "${pkgs.rdma-core}/bin/rxe_cfg -n ${cmd} ${x};") ifcs);
startScript = pkgs.writeShellScriptBin "rxe-start" ''
${pkgs.rdma-core}/bin/rxe_cfg -n start
${runRxeCmd "add" cfg.interfaces}
${pkgs.rdma-core}/bin/rxe_cfg
'';
stopScript = pkgs.writeShellScriptBin "rxe-stop" ''
${runRxeCmd "remove" cfg.interfaces }
${pkgs.rdma-core}/bin/rxe_cfg -n stop
'';
in {
###### interface
options = {
networking.rxe = {
enable = mkEnableOption "RDMA over converged ethernet";
interfaces = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "eth0" ];
description = ''
Enable RDMA on the listed interfaces. The corresponding virtual
RDMA interfaces will be named rxe0 ... rxeN where the ordering
will be as they are named in the list. UDP port 4791 must be
open on the respective ethernet interfaces.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
systemd.services.rxe = {
path = with pkgs; [ kmod rdma-core ];
description = "RoCE interfaces";
wantedBy = [ "multi-user.target" ];
after = [ "systemd-modules-load.service" "network-online.target" ];
wants = [ "network-pre.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${startScript}/bin/rxe-start";
ExecStop = "${stopScript}/bin/rxe-stop";
};
};
};
}
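A minimal sketch of the new RDMA-over-converged-Ethernet module:

  networking.rxe = {
    enable = true;
    interfaces = [ "eth0" ];   # UDP port 4791 must be open on this interface
  };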


@ -21,7 +21,7 @@ let
daemon reads in addition to the user's authorized_keys file. daemon reads in addition to the user's authorized_keys file.
You can combine the <literal>keys</literal> and You can combine the <literal>keys</literal> and
<literal>keyFiles</literal> options. <literal>keyFiles</literal> options.
Warning: If you are using <literal>NixOps</literal> then don't use this Warning: If you are using <literal>NixOps</literal> then don't use this
option since it will replace the key required for deployment via ssh. option since it will replace the key required for deployment via ssh.
''; '';
}; };
@ -137,6 +137,14 @@ in
''; '';
}; };
openFirewall = mkOption {
type = types.bool;
default = true;
description = ''
Whether to automatically open the specified ports in the firewall.
'';
};
listenAddresses = mkOption { listenAddresses = mkOption {
type = with types; listOf (submodule { type = with types; listOf (submodule {
options = { options = {
@ -302,7 +310,7 @@ in
}; };
networking.firewall.allowedTCPPorts = cfg.ports; networking.firewall.allowedTCPPorts = if cfg.openFirewall then cfg.ports else [];
security.pam.services.sshd = security.pam.services.sshd =
{ startSession = true; { startSession = true;
@ -367,9 +375,6 @@ in
# LogLevel VERBOSE logs user's key fingerprint on login. # LogLevel VERBOSE logs user's key fingerprint on login.
# Needed to have a clear audit track of which key was used to log in. # Needed to have a clear audit track of which key was used to log in.
LogLevel VERBOSE LogLevel VERBOSE
# Use kernel sandbox mechanisms where possible in unprivileged processes.
UsePrivilegeSeparation sandbox
''; '';
assertions = [{ assertion = if cfg.forwardX11 then cfgc.setXAuthLocation else true; assertions = [{ assertion = if cfg.forwardX11 then cfgc.setXAuthLocation else true;
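A sketch of the new openFirewall option (the services.openssh path is assumed; only the option definition and its use appear in these hunks):

  services.openssh = {
    enable = true;
    ports = [ 2222 ];
    openFirewall = false;   # keep the port closed and manage the firewall rule manually
  };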


@ -0,0 +1,221 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.stunnel;
yesNo = val: if val then "yes" else "no";
verifyChainPathAssert = n: c: {
assertion = c.verifyHostname == null || (c.verifyChain || c.verifyPeer);
message = "stunnel: \"${n}\" client configuration - hostname verification " +
"is not possible without either verifyChain or verifyPeer enabled";
};
serverConfig = {
options = {
accept = mkOption {
type = types.int;
description = "On which port stunnel should listen for incoming TLS connections.";
};
connect = mkOption {
type = types.int;
description = "To which port the decrypted connection should be forwarded.";
};
cert = mkOption {
type = types.path;
description = "File containing both the private and public keys.";
};
};
};
clientConfig = {
options = {
accept = mkOption {
type = types.string;
description = "IP:Port on which connections should be accepted.";
};
connect = mkOption {
type = types.string;
description = "IP:Port destination to connect to.";
};
verifyChain = mkOption {
type = types.bool;
default = true;
description = "Check if the provided certificate has a valid certificate chain (against CAPath).";
};
verifyPeer = mkOption {
type = types.bool;
default = false;
description = "Check if the provided certificate is contained in CAPath.";
};
CAPath = mkOption {
type = types.path;
default = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";
description = "Path to a file containing certificates to validate against.";
};
verifyHostname = mkOption {
type = with types; nullOr string;
default = null;
description = "If set, stunnel checks if the provided certificate is valid for the given hostname.";
};
};
};
in
{
###### interface
options = {
services.stunnel = {
enable = mkOption {
type = types.bool;
default = false;
description = "Whether to enable the stunnel TLS tunneling service.";
};
user = mkOption {
type = with types; nullOr string;
default = "nobody";
description = "The user under which stunnel runs.";
};
group = mkOption {
type = with types; nullOr string;
default = "nogroup";
description = "The group under which stunnel runs.";
};
logLevel = mkOption {
type = types.enum [ "emerg" "alert" "crit" "err" "warning" "notice" "info" "debug" ];
default = "info";
description = "Verbosity of stunnel output.";
};
fipsMode = mkOption {
type = types.bool;
default = false;
description = "Enable FIPS 140-2 mode required for compliance.";
};
enableInsecureSSLv3 = mkOption {
type = types.bool;
default = false;
description = "Enable support for the insecure SSLv3 protocol.";
};
servers = mkOption {
description = "Define the server configuations.";
type = with types; attrsOf (submodule serverConfig);
example = {
fancyWebserver = {
enable = true;
accept = 443;
connect = 8080;
cert = "/path/to/pem/file";
};
};
default = { };
};
clients = mkOption {
description = "Define the client configurations.";
type = with types; attrsOf (submodule clientConfig);
example = {
foobar = {
accept = "0.0.0.0:8080";
connect = "nixos.org:443";
verifyChain = false;
};
};
default = { };
};
};
};
###### implementation
config = mkIf cfg.enable {
assertions = concatLists [
(singleton {
assertion = (length (attrValues cfg.servers) != 0) || ((length (attrValues cfg.clients)) != 0);
message = "stunnel: At least one server- or client-configuration has to be present.";
})
(mapAttrsToList verifyChainPathAssert cfg.clients)
];
environment.systemPackages = [ pkgs.stunnel ];
environment.etc."stunnel.cfg".text = ''
${ if cfg.user != null then "setuid = ${cfg.user}" else "" }
${ if cfg.group != null then "setgid = ${cfg.group}" else "" }
debug = ${cfg.logLevel}
${ optionalString cfg.fipsMode "fips = yes" }
${ optionalString cfg.enableInsecureSSLv3 "options = -NO_SSLv3" }
; ----- SERVER CONFIGURATIONS -----
${ lib.concatStringsSep "\n"
(lib.mapAttrsToList
(n: v: ''
[${n}]
accept = ${toString v.accept}
connect = ${toString v.connect}
cert = ${v.cert}
'')
cfg.servers)
}
; ----- CLIENT CONFIGURATIONS -----
${ lib.concatStringsSep "\n"
(lib.mapAttrsToList
(n: v: ''
[${n}]
client = yes
accept = ${v.accept}
connect = ${v.connect}
verifyChain = ${yesNo v.verifyChain}
verifyPeer = ${yesNo v.verifyPeer}
${optionalString (v.CAPath != null) "CApath = ${v.CAPath}"}
${optionalString (v.verifyHostname != null) "checkHost = ${v.verifyHostname}"}
OCSPaia = yes
'')
cfg.clients)
}
'';
systemd.services.stunnel = {
description = "stunnel TLS tunneling service";
after = [ "network.target" ];
wants = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
restartTriggers = [ config.environment.etc."stunnel.cfg".source ];
serviceConfig = {
ExecStart = "${pkgs.stunnel}/bin/stunnel ${config.environment.etc."stunnel.cfg".source}";
Type = "forking";
};
};
};
}
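A hedged sketch of a client tunnel using the options defined above (host names and ports are placeholders):

  services.stunnel = {
    enable = true;
    clients.mqtt = {
      accept = "127.0.0.1:1883";
      connect = "broker.example.org:8883";
      verifyChain = true;
      verifyHostname = "broker.example.org";
    };
  };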


@ -6,6 +6,7 @@ let
cfg = config.services.elasticsearch; cfg = config.services.elasticsearch;
es5 = builtins.compareVersions (builtins.parseDrvName cfg.package.name).version "5" >= 0; es5 = builtins.compareVersions (builtins.parseDrvName cfg.package.name).version "5" >= 0;
es6 = builtins.compareVersions (builtins.parseDrvName cfg.package.name).version "6" >= 0;
esConfig = '' esConfig = ''
network.host: ${cfg.listenAddress} network.host: ${cfg.listenAddress}
@ -92,8 +93,6 @@ in {
node.name: "elasticsearch" node.name: "elasticsearch"
node.master: true node.master: true
node.data: false node.data: false
index.number_of_shards: 5
index.number_of_replicas: 1
''; '';
}; };
@ -165,7 +164,10 @@ in {
path = [ pkgs.inetutils ]; path = [ pkgs.inetutils ];
environment = { environment = {
ES_HOME = cfg.dataDir; ES_HOME = cfg.dataDir;
ES_JAVA_OPTS = toString ([ "-Des.path.conf=${configDir}" ] ++ cfg.extraJavaOptions); ES_JAVA_OPTS = toString ( optional (!es6) [ "-Des.path.conf=${configDir}" ]
++ cfg.extraJavaOptions);
} // optionalAttrs es6 {
ES_PATH_CONF = configDir;
}; };
serviceConfig = { serviceConfig = {
ExecStart = "${cfg.package}/bin/elasticsearch ${toString cfg.extraCmdLineOptions}"; ExecStart = "${cfg.package}/bin/elasticsearch ${toString cfg.extraCmdLineOptions}";


@ -30,6 +30,20 @@ in
''; '';
}; };
allowAnyUser = mkOption {
type = types.bool;
default = false;
description = ''
Whether to allow any user to lock the screen. This will install a
setuid wrapper to allow any user to start physlock as root, which
is a minor security risk. Call the physlock binary to use this instead
of using the systemd service.
Note that you might need to relog to have the correct binary in your
PATH upon changing this option.
'';
};
disableSysRq = mkOption { disableSysRq = mkOption {
type = types.bool; type = types.bool;
default = true; default = true;
@ -79,28 +93,36 @@ in
###### implementation ###### implementation
config = mkIf cfg.enable { config = mkIf cfg.enable (mkMerge [
{
# for physlock -l and physlock -L # for physlock -l and physlock -L
environment.systemPackages = [ pkgs.physlock ]; environment.systemPackages = [ pkgs.physlock ];
systemd.services."physlock" = { systemd.services."physlock" = {
enable = true; enable = true;
description = "Physlock"; description = "Physlock";
wantedBy = optional cfg.lockOn.suspend "suspend.target" wantedBy = optional cfg.lockOn.suspend "suspend.target"
++ optional cfg.lockOn.hibernate "hibernate.target" ++ optional cfg.lockOn.hibernate "hibernate.target"
++ cfg.lockOn.extraTargets; ++ cfg.lockOn.extraTargets;
before = optional cfg.lockOn.suspend "systemd-suspend.service" before = optional cfg.lockOn.suspend "systemd-suspend.service"
++ optional cfg.lockOn.hibernate "systemd-hibernate.service" ++ optional cfg.lockOn.hibernate "systemd-hibernate.service"
++ cfg.lockOn.extraTargets; ++ cfg.lockOn.extraTargets;
serviceConfig.Type = "forking"; serviceConfig = {
script = '' Type = "forking";
${pkgs.physlock}/bin/physlock -d${optionalString cfg.disableSysRq "s"} ExecStart = "${pkgs.physlock}/bin/physlock -d${optionalString cfg.disableSysRq "s"}";
''; };
}; };
security.pam.services.physlock = {}; security.pam.services.physlock = {};
}; }
(mkIf cfg.allowAnyUser {
security.wrappers.physlock = { source = "${pkgs.physlock}/bin/physlock"; user = "root"; };
})
]);
} }
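A sketch of the new allowAnyUser option (the services.physlock path is assumed from the surrounding module):

  services.physlock = {
    enable = true;
    allowAnyUser = true;   # installs a setuid wrapper so any user can run physlock directly
  };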


@ -88,6 +88,9 @@ let
${flip concatMapStrings v.map (p: '' ${flip concatMapStrings v.map (p: ''
HiddenServicePort ${toString p.port} ${p.destination} HiddenServicePort ${toString p.port} ${p.destination}
'')} '')}
${optionalString (v.authorizeClient != null) ''
HiddenServiceAuthorizeClient ${v.authorizeClient.authType} ${concatStringsSep "," v.authorizeClient.clientNames}
''}
'')) ''))
+ cfg.extraConfig; + cfg.extraConfig;
@ -619,6 +622,33 @@ in
})); }));
}; };
authorizeClient = mkOption {
default = null;
description = "If configured, the hidden service is accessible for authorized clients only.";
type = types.nullOr (types.submodule ({config, ...}: {
options = {
authType = mkOption {
type = types.enum [ "basic" "stealth" ];
description = ''
Either <literal>"basic"</literal> for a general-purpose authorization protocol
or <literal>"stealth"</literal> for a less scalable protocol
that also hides service activity from unauthorized clients.
'';
};
clientNames = mkOption {
type = types.nonEmptyListOf (types.strMatching "[A-Za-z0-9+-_]+");
description = ''
Only clients that are listed here are authorized to access the hidden service.
Generated authorization data can be found in <filename>${torDirectory}/onion/$name/hostname</filename>.
Clients need to put this authorization data in their configuration file using <literal>HidServAuth</literal>.
'';
};
};
}));
};
}; };
config = { config = {
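A hedged sketch of the new client authorization for a hidden service (the services.tor.hiddenServices path and the map submodule are assumed from the wider module; only the torrc template and the new option appear in these hunks):

  services.tor.hiddenServices."internal-wiki" = {
    map = [ { port = 80; destination = "127.0.0.1:8080"; } ];
    authorizeClient = {
      authType = "stealth";
      clientNames = [ "alice" "bob" ];
    };
  };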


@ -0,0 +1,132 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.mighttpd2;
configFile = pkgs.writeText "mighty-config" cfg.config;
routingFile = pkgs.writeText "mighty-routing" cfg.routing;
in {
options.services.mighttpd2 = {
enable = mkEnableOption "Mighttpd2 web server";
config = mkOption {
default = "";
example = ''
# Example configuration for Mighttpd 2
Port: 80
# IP address or "*"
Host: *
Debug_Mode: Yes # Yes or No
# If available, "nobody" is much more secure for User:.
User: root
# If available, "nobody" is much more secure for Group:.
Group: root
Pid_File: /var/run/mighty.pid
Logging: Yes # Yes or No
Log_File: /var/log/mighty # The directory must be writable by User:
Log_File_Size: 16777216 # bytes
Log_Backup_Number: 10
Index_File: index.html
Index_Cgi: index.cgi
Status_File_Dir: /usr/local/share/mighty/status
Connection_Timeout: 30 # seconds
Fd_Cache_Duration: 10 # seconds
# Server_Name: Mighttpd/3.x.y
Tls_Port: 443
Tls_Cert_File: cert.pem # should change this with an absolute path
# should change this with comma-separated absolute paths
Tls_Chain_Files: chain.pem
# Currently, Tls_Key_File must not be encrypted.
Tls_Key_File: privkey.pem # should change this with an absolute path
Service: 0 # 0 is HTTP only, 1 is HTTPS only, 2 is both
'';
type = types.lines;
description = ''
Verbatim config file to use
(see http://www.mew.org/~kazu/proj/mighttpd/en/config.html)
'';
};
routing = mkOption {
default = "";
example = ''
# Example routing for Mighttpd 2
# Domain lists
[localhost www.example.com]
# Entries are looked up in the specified order
# All paths must end with "/"
# A path to CGI scripts should be specified with "=>"
/~alice/cgi-bin/ => /home/alice/public_html/cgi-bin/
# A path to static files should be specified with "->"
/~alice/ -> /home/alice/public_html/
/cgi-bin/ => /export/cgi-bin/
# Reverse proxy rules should be specified with ">>"
# /path >> host:port/path2
# Either "host" or ":port" can be committed, but not both.
/app/cal/ >> example.net/calendar/
# Yesod app in the same server
/app/wiki/ >> 127.0.0.1:3000/
/ -> /export/www/
'';
type = types.lines;
description = ''
Verbatim routing file to use
(see http://www.mew.org/~kazu/proj/mighttpd/en/config.html)
'';
};
cores = mkOption {
default = null;
type = types.nullOr types.int;
description = ''
How many cores to use.
If null it will be determined automatically
'';
};
};
config = mkIf cfg.enable {
assertions =
[ { assertion = cfg.routing != "";
message = "You need at least one rule in mighttpd2.routing";
}
];
systemd.services.mighttpd2 = {
description = "Mighttpd2 web server";
after = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = ''
${pkgs.haskellPackages.mighttpd2}/bin/mighty \
${configFile} \
${routingFile} \
+RTS -N${optionalString (cfg.cores != null) "${cfg.cores}"}
'';
Type = "simple";
User = "mighttpd2";
Group = "mighttpd2";
Restart = "on-failure";
AmbientCapabilities = "cap_net_bind_service";
CapabilityBoundingSet = "cap_net_bind_service";
};
};
users.extraUsers.mighttpd2 = {
group = "mighttpd2";
uid = config.ids.uids.mighttpd2;
isSystemUser = true;
};
users.extraGroups.mighttpd2.gid = config.ids.gids.mighttpd2;
};
meta.maintainers = with lib.maintainers; [ fgaz ];
}
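A minimal sketch of the new Mighttpd2 module (only routing is asserted to be non-empty; a real deployment would normally also set config along the lines of the example above):

  services.mighttpd2 = {
    enable = true;
    routing = ''
      [localhost]
      / -> /var/www/
    '';
  };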


@ -15,6 +15,9 @@ let
} // (optionalAttrs vhostConfig.enableACME { } // (optionalAttrs vhostConfig.enableACME {
sslCertificate = "/var/lib/acme/${serverName}/fullchain.pem"; sslCertificate = "/var/lib/acme/${serverName}/fullchain.pem";
sslCertificateKey = "/var/lib/acme/${serverName}/key.pem"; sslCertificateKey = "/var/lib/acme/${serverName}/key.pem";
}) // (optionalAttrs (vhostConfig.useACMEHost != null) {
sslCertificate = "/var/lib/acme/${vhostConfig.useACMEHost}/fullchain.pem";
sslCertificateKey = "/var/lib/acme/${vhostConfig.useACMEHost}/key.pem";
}) })
) cfg.virtualHosts; ) cfg.virtualHosts;
enableIPv6 = config.networking.enableIPv6; enableIPv6 = config.networking.enableIPv6;
@ -174,7 +177,7 @@ let
redirectListen = filter (x: !x.ssl) defaultListen; redirectListen = filter (x: !x.ssl) defaultListen;
acmeLocation = '' acmeLocation = optionalString (vhost.enableACME || vhost.useACMEHost != null) ''
location /.well-known/acme-challenge { location /.well-known/acme-challenge {
${optionalString (vhost.acmeFallbackHost != null) "try_files $uri @acme-fallback;"} ${optionalString (vhost.acmeFallbackHost != null) "try_files $uri @acme-fallback;"}
root ${vhost.acmeRoot}; root ${vhost.acmeRoot};
@ -194,7 +197,7 @@ let
${concatMapStringsSep "\n" listenString redirectListen} ${concatMapStringsSep "\n" listenString redirectListen}
server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases}; server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases};
${optionalString vhost.enableACME acmeLocation} ${acmeLocation}
location / { location / {
return 301 https://$host$request_uri; return 301 https://$host$request_uri;
} }
@ -204,7 +207,7 @@ let
server { server {
${concatMapStringsSep "\n" listenString hostListen} ${concatMapStringsSep "\n" listenString hostListen}
server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases}; server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases};
${optionalString vhost.enableACME acmeLocation} ${acmeLocation}
${optionalString (vhost.root != null) "root ${vhost.root};"} ${optionalString (vhost.root != null) "root ${vhost.root};"}
${optionalString (vhost.globalRedirect != null) '' ${optionalString (vhost.globalRedirect != null) ''
return 301 http${optionalString hasSSL "s"}://${vhost.globalRedirect}$request_uri; return 301 http${optionalString hasSSL "s"}://${vhost.globalRedirect}$request_uri;
@ -555,6 +558,14 @@ in
are mutually exclusive. are mutually exclusive.
''; '';
} }
{
assertion = all (conf: !(conf.enableACME && conf.useACMEHost != null)) (attrValues virtualHosts);
message = ''
Options services.nginx.service.virtualHosts.<name>.enableACME and
services.nginx.virtualHosts.<name>.useACMEHost are mutually exclusive.
'';
}
]; ];
systemd.services.nginx = { systemd.services.nginx = {
@ -580,7 +591,7 @@ in
security.acme.certs = filterAttrs (n: v: v != {}) ( security.acme.certs = filterAttrs (n: v: v != {}) (
let let
vhostsConfigs = mapAttrsToList (vhostName: vhostConfig: vhostConfig) virtualHosts; vhostsConfigs = mapAttrsToList (vhostName: vhostConfig: vhostConfig) virtualHosts;
acmeEnabledVhosts = filter (vhostConfig: vhostConfig.enableACME) vhostsConfigs; acmeEnabledVhosts = filter (vhostConfig: vhostConfig.enableACME && vhostConfig.useACMEHost == null) vhostsConfigs;
acmePairs = map (vhostConfig: { name = vhostConfig.serverName; value = { acmePairs = map (vhostConfig: { name = vhostConfig.serverName; value = {
user = cfg.user; user = cfg.user;
group = lib.mkDefault cfg.group; group = lib.mkDefault cfg.group;
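A hedged sketch of the new useACMEHost option (forceSSL and locations are assumed existing vhost options not shown in this hunk; a certificate for example.org must already be managed, for instance via enableACME on that vhost):

  services.nginx.virtualHosts."sub.example.org" = {
    useACMEHost = "example.org";   # serve the certificate obtained for example.org instead of requesting a new one
    forceSSL = true;
    locations."/".root = "/var/www/sub";
  };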


@ -48,7 +48,21 @@ with lib;
enableACME = mkOption { enableACME = mkOption {
type = types.bool; type = types.bool;
default = false; default = false;
description = "Whether to ask Let's Encrypt to sign a certificate for this vhost."; description = ''
Whether to ask Let's Encrypt to sign a certificate for this vhost.
Alternately, you can use an existing certificate through <option>useACMEHost</option>.
'';
};
useACMEHost = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
A host of an existing Let's Encrypt certificate to use.
This is useful if you have many subdomains and want to avoid hitting the
<link xlink:href="https://letsencrypt.org/docs/rate-limits/">rate limit</link>.
Alternately, you can generate a certificate through <option>enableACME</option>.
'';
}; };
acmeRoot = mkOption { acmeRoot = mkOption {


@ -64,6 +64,16 @@ in {
''; '';
}; };
group = mkOption {
default = "traefik";
type = types.string;
example = "docker";
description = ''
Set the group that traefik runs under.
For the docker backend this needs to be set to <literal>docker</literal> instead.
'';
};
package = mkOption { package = mkOption {
default = pkgs.traefik; default = pkgs.traefik;
defaultText = "pkgs.traefik"; defaultText = "pkgs.traefik";
@ -87,7 +97,7 @@ in {
]; ];
Type = "simple"; Type = "simple";
User = "traefik"; User = "traefik";
Group = "traefik"; Group = cfg.group;
Restart = "on-failure"; Restart = "on-failure";
StartLimitInterval = 86400; StartLimitInterval = 86400;
StartLimitBurst = 5; StartLimitBurst = 5;
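A sketch of the new group option (services.traefik.enable is assumed to exist outside the shown hunk):

  services.traefik = {
    enable = true;
    group = "docker";   # required when using the Docker backend, per the option description
  };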

Some files were not shown because too many files have changed in this diff.