Merge branch 'master' into wxwidgets-3.0.3.1

volth 2017-07-01 00:20:20 +00:00 committed by GitHub
commit a720bd45e6
1805 changed files with 45672 additions and 26805 deletions

View File

@ -11,6 +11,7 @@
- [ ] NixOS
- [ ] macOS
- [ ] Linux
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

View File

@ -26,6 +26,4 @@ env:
- GITHUB_TOKEN=5edaaf1017f691ed34e7f80878f8f5fbd071603f
notifications:
  email: false

View File

@ -227,7 +227,7 @@ packages via <literal>packageOverrides</literal></title>
<para>You can define a function called
<varname>packageOverrides</varname> in your local
<filename>~/.config/nixpkgs/config.nix</filename> to override Nix packages. It
must be a function that takes pkgs as an argument and returns a modified
set of packages.
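<para>
For example, the following sketch (where <varname>foo</varname> and <varname>barSupport</varname> are hypothetical names standing in for a real package and one of its build flags) enables an optional feature of a package for every consumer:
</para>
<programlisting>
{
  packageOverrides = pkgs: rec {
    foo = pkgs.foo.override { barSupport = true; };
  };
}
</programlisting>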

View File

@ -79,13 +79,9 @@
</listitem>
</varlistentry>
</variablelist>
<para>
The exact schema these fields follow is a bit ill-defined due to a long and convoluted evolution, but this is slowly being cleaned up.
For now, here are a few fields you can count on them containing:
</para>
<variablelist>
@ -118,8 +114,27 @@
This is a Nix representation of a parsed LLVM target triple with white-listed components.
This can be specified directly, or actually parsed from the <varname>config</varname>.
[Technically, only one need be specified and the others can be inferred, though the precision of inference may not be very good.]
See <literal>lib.systems.parse</literal> for the exact representation.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>libc</varname></term>
<listitem>
<para>
This is a string identifying the standard C library used.
Valid identifiers include "glibc" for GNU libc, "libSystem" for Darwin's Libsystem, and "uclibc" for µClibc.
It should probably be refactored to use the module system, like <varname>parse</varname>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>is*</varname></term>
<listitem>
<para>
These predicates are defined in <literal>lib.systems.inspect</literal>, and slapped on every platform.
They are superior to the ones in <varname>stdenv</varname> as they force the user to be explicit about which platform they are inspecting.
Please use these instead of those.
</para>
</listitem>
</varlistentry>
@ -128,7 +143,7 @@
<listitem>
<para>
This is, quite frankly, a dumping ground of ad-hoc settings (it's an attribute set).
See <literal>lib.systems.platforms</literal> for examples—there's hopefully one in there that will work verbatim for each platform one is working on.
Please help us triage these flags and give them better homes!
</para>
</listitem>
@ -184,11 +199,27 @@
More information needs to be moved from the old wiki, especially <link xlink:href="https://nixos.org/wiki/CrossCompiling" />, for this section.
</para></note>
<para> <para>
Many sources (manual, wiki, etc) probably mention passing <varname>system</varname>, <varname>platform</varname>, and, optionally, <varname>crossSystem</varname> to nixpkgs: Nixpkgs can be instantiated with <varname>localSystem</varname> alone, in which case there is no cross compiling and everything is built by and for that system,
<literal>import &lt;nixpkgs&gt; { system = ..; platform = ..; crossSystem = ..; }</literal>. or also with <varname>crossSystem</varname>, in which case packages run on the latter, but all building happens on the former.
<varname>system</varname> and <varname>platform</varname> together determine the system on which packages are built, and <varname>crossSystem</varname> specifies the platform on which packages are ultimately intended to run, if it is different. Both parameters take the same schema as the 3 (build, host, and target) platforms defined in the previous section.
This still works, but with more recent changes, one can alternatively pass <varname>localSystem</varname>, containing <varname>system</varname> and <varname>platform</varname>, for symmetry. As mentioned above, <literal>lib.systems.examples</literal> has some platforms which are used as arguments for these parameters in practice.
You can use them programmatically, or on the command line like <command>nix-build &lt;nixpkgs&gt; --arg crossSystem '(import &lt;nixpkgs/lib&gt;).systems.examples.fooBarBaz'</command>.
</para> </para>
<para>
While one is free to pass both parameters in full, there's a lot of logic to fill in missing fields.
As discussed in the previous section, only one of <varname>system</varname>, <varname>config</varname>, and <varname>parsed</varname> is needed to infer the other two.
Additionally, <varname>libc</varname> will be inferred from <varname>parsed</varname>.
Finally, <literal>localSystem.system</literal> is also <emphasis>impurely</emphasis> inferred based on the platform on which evaluation occurs.
This means it is often not necessary to pass <varname>localSystem</varname> at all, as in the command-line example in the previous paragraph.
</para>
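<para>
Programmatically, the command-line invocation above corresponds to something like the following sketch (<literal>fooBarBaz</literal> is a placeholder example platform, as before):
</para>
<programlisting>
import &lt;nixpkgs&gt; {
  crossSystem = (import &lt;nixpkgs/lib&gt;).systems.examples.fooBarBaz;
}
</programlisting>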
<note>
<para>
Many sources (manual, wiki, etc) probably mention passing <varname>system</varname>, <varname>platform</varname>, along with the optional <varname>crossSystem</varname> to nixpkgs:
<literal>import &lt;nixpkgs&gt; { system = ..; platform = ..; crossSystem = ..; }</literal>.
Passing those two instead of <varname>localSystem</varname> is still supported for compatibility, but is discouraged.
Indeed, much of the inference we do for these parameters is motivated by compatibility as much as convenience.
</para>
</note>
<para>
One would think that <varname>localSystem</varname> and <varname>crossSystem</varname> overlap horribly with the three <varname>*Platforms</varname> (<varname>buildPlatform</varname>, <varname>hostPlatform</varname>, and <varname>targetPlatform</varname>; see <varname>stage.nix</varname> or the manual).
Actually, those identifiers are purposefully not used here to draw a subtle but important distinction:

View File

@ -26,7 +26,7 @@ pkgs.stdenv.mkDerivation {
extraHeader = ''xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" '';
in ''
{
pandoc '${inputFile}' -w docbook ${lib.optionalString useChapters "--top-level-division=chapter"} \
--smart \
| sed -e 's|<ulink url=|<link xlink:href=|' \
-e 's|</ulink>|</link>|' \

View File

@ -70,7 +70,7 @@
<para>
In the above example, the <varname>separateDebugInfo</varname> attribute is
overridden to be true, thus building debug info for
<varname>helloWithDebug</varname>, while all other attributes will be
retained from the original <varname>hello</varname> package.
</para> </para>

View File

@ -2,60 +2,120 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-beam">
<title>BEAM Languages (Erlang, Elixir &amp; LFE)</title>
<section xml:id="beam-introduction">
<title>Introduction</title>
<para>
In this document and related Nix expressions, we use the term
<emphasis>BEAM</emphasis> to describe the environment. BEAM is the name
of the Erlang Virtual Machine and, as far as we're concerned, from a
packaging perspective, all languages that run on the BEAM are
interchangeable. That which varies, like the build system, is transparent
to users of any given BEAM package, so we make no distinction.
</para>
</section>
<section xml:id="beam-structure">
<title>Structure</title>
<para>
All BEAM-related expressions are available via the top-level
<literal>beam</literal> attribute, which includes:
</para>
<itemizedlist>
<listitem>
<para>
<literal>interpreters</literal>: a set of compilers running on the
BEAM, including multiple Erlang/OTP versions
(<literal>beam.interpreters.erlangR19</literal>, etc), Elixir
(<literal>beam.interpreters.elixir</literal>) and LFE
(<literal>beam.interpreters.lfe</literal>).
</para>
</listitem>
<listitem>
<para>
<literal>packages</literal>: a set of package sets, each compiled with
a specific Erlang/OTP version, e.g.
<literal>beam.packages.erlangR19</literal>.
</para>
</listitem>
</itemizedlist>
<para>
The default Erlang compiler, defined by
<literal>beam.interpreters.erlang</literal>, is aliased as
<literal>erlang</literal>. The default BEAM package set is defined by
<literal>beam.packages.erlang</literal> and aliased at the top level as
<literal>beamPackages</literal>.
</para>
<para>
To create a package set built with a custom Erlang version, use the
lambda, <literal>beam.packagesWith</literal>, which accepts an Erlang/OTP
derivation and produces a package set similar to
<literal>beam.packages.erlang</literal>.
</para>
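<para>
For instance, a package set built with a different interpreter might be created as follows (a sketch; it assumes an interpreter attribute such as <literal>beam.interpreters.erlangR18</literal> is available):
</para>
<programlisting>
with import &lt;nixpkgs&gt; {};
beam.packagesWith beam.interpreters.erlangR18
</programlisting>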
<para>
Many Erlang/OTP distributions available in
<literal>beam.interpreters</literal> have versions with ODBC and/or Java
enabled. For example, there's
<literal>beam.interpreters.erlangR19_odbc_javac</literal>, which
corresponds to <literal>beam.interpreters.erlangR19</literal>.
</para>
<para xml:id="erlang-call-package">
We also provide the lambda,
<literal>beam.packages.erlang.callPackage</literal>, which simplifies
writing BEAM package definitions by injecting all packages from
<literal>beam.packages.erlang</literal> into the top-level context.
</para>
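<para>
As a sketch, a package definition stored in <filename>./my-package.nix</filename> (a hypothetical path) could then be built with:
</para>
<programlisting>
with import &lt;nixpkgs&gt; {};
beam.packages.erlang.callPackage ./my-package.nix {}
</programlisting>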
</section>
<section xml:id="build-tools">
<title>Build Tools</title>
<section xml:id="build-tools-rebar3">
<title>Rebar3</title>
<para>
By default, Rebar3 wants to manage its own dependencies. This is perfectly
acceptable in the normal, non-Nix setup, but in the Nix world, it is not.
To rectify this, we provide two versions of Rebar3:
<itemizedlist>
<listitem>
<para>
<literal>rebar3</literal>: patched to remove the ability to download
anything. When not running it via <literal>nix-shell</literal> or
<literal>nix-build</literal>, it's probably not going to work as
desired.
</para>
</listitem>
<listitem>
<para>
<literal>rebar3-open</literal>: the normal, unmodified Rebar3. It
should work exactly as would any other version of Rebar3. Any Erlang
package should rely on <literal>rebar3</literal> instead. See <xref
linkend="rebar3-packages"/>.
</para>
</listitem>
</itemizedlist>
</para>
</section>
<section xml:id="build-tools-other">
<title>Mix &amp; Erlang.mk</title>
<para>
Both Mix and Erlang.mk work exactly as expected. There is a bootstrap
process that needs to be run for both, however, which is supported by the
<literal>buildMix</literal> and <literal>buildErlangMk</literal>
derivations, respectively.
</section> </section>
</section> </section>
<section xml:id="how-to-install-beam-packages"> <section xml:id="how-to-install-beam-packages">
<title>How to install Beam packages</title> <title>How to Install BEAM Packages</title>
<para> <para>
Beam packages are not registered in the top level simply because BEAM packages are not registered at the top level, simply because they are
they are not relevant to the vast majority of Nix users. They are not relevant to the vast majority of Nix users. They are installable using
installable using the <literal>beamPackages</literal> attribute the <literal>beam.packages.erlang</literal> attribute set (aliased as
set. <literal>beamPackages</literal>), which points to packages built by the
default Erlang/OTP version in Nixpkgs, as defined by
<literal>beam.interpreters.erlang</literal>.
You can list the avialable packages in the To list the available packages in
<literal>beamPackages</literal> with the following command: <literal>beamPackages</literal>, use the following command:
</para>
<programlisting>
@ -69,33 +129,34 @@ beamPackages.meck meck-0.8.3
beamPackages.rebar3-pc pc-1.1.0
</programlisting>
<para>
To install any of those packages into your profile, refer to them by their
attribute path (first column):
</para>
<programlisting>
$ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</programlisting>
<para>
The attribute path of any BEAM package corresponds to the name of that
particular package in <link xlink:href="https://hex.pm">Hex</link> or its
OTP Application/Release name.
</para>
</section>
<section xml:id="packaging-beam-applications">
<title>Packaging BEAM Applications</title>
<section xml:id="packaging-erlang-applications">
<title>Erlang Applications</title>
<section xml:id="rebar3-packages">
<title>Rebar3 Packages</title>
<para>
The Nix function, <literal>buildRebar3</literal>, defined in
<literal>beam.packages.erlang.buildRebar3</literal> and aliased at the
top level, can be used to build a derivation that understands how to
build a Rebar3 project. For example, we can build <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> as
follows:
</para>
<programlisting>
{ stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 rec {
  name = "hex2nix";
@ -112,43 +173,52 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
}
</programlisting>
<para>
Such derivations are callable with
<literal>beam.packages.erlang.callPackage</literal> (see <xref
linkend="erlang-call-package"/>). To call this package using the normal
<literal>callPackage</literal>, refer to dependency packages via
<literal>beamPackages</literal>, e.g.
<literal>beamPackages.ibrowse</literal>.
</para>
<para>
Notably, <literal>buildRebar3</literal> includes
<literal>beamDeps</literal>, while
<literal>stdenv.mkDerivation</literal> does not. BEAM dependencies added
there will be correctly handled by the system.
</para>
<para>
If a package needs to compile native code via Rebar3's port compilation
mechanism, add <literal>compilePort = true;</literal> to the derivation.
</para>
</section>
<section xml:id="erlang-mk-packages">
<title>Erlang.mk Packages</title>
<para>
Erlang.mk functions similarly to Rebar3, except we use
<literal>buildErlangMk</literal> instead of
<literal>buildRebar3</literal>.
</para>
<programlisting>
{ stdenv, buildErlangMk, fetchHex, cowlib, ranch }:
buildErlangMk {
  name = "cowboy";
  version = "1.0.4";
  src = fetchHex {
    pkg = "cowboy";
    version = "1.0.4";
    sha256 = "6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
  };
  beamDeps = [ cowlib ranch ];
  meta = {
    description = ''
      Small, fast, modular HTTP server written in Erlang
    '';
    license = stdenv.lib.licenses.isc;
    homepage = https://github.com/ninenines/cowboy;
  };
}
</programlisting>
@ -156,28 +226,55 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
<section xml:id="mix-packages">
<title>Mix Packages</title>
<para>
Mix functions similarly to Rebar3, except we use
<literal>buildMix</literal> instead of <literal>buildRebar3</literal>.
</para>
<programlisting>
{ stdenv, buildMix, fetchHex, plug, absinthe }:
buildMix {
  name = "absinthe_plug";
  version = "1.0.0";
  src = fetchHex {
    pkg = "absinthe_plug";
    version = "1.0.0";
    sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
  };
  beamDeps = [ plug absinthe ];
  meta = {
    description = ''
      A plug for Absinthe, an experimental GraphQL toolkit
    '';
    license = stdenv.lib.licenses.bsd3;
    homepage = https://github.com/CargoSense/absinthe_plug;
  };
}
</programlisting>
<para>
Alternatively, we can use <literal>buildHex</literal> as a shortcut:
</para>
<programlisting>
{ stdenv, buildHex, buildMix, plug, absinthe }:
buildHex {
  name = "absinthe_plug";
  version = "1.0.0";
  sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
  builder = buildMix;
  beamDeps = [ plug absinthe ];
  meta = {
    description = ''
      A plug for Absinthe, an experimental GraphQL toolkit
    '';
    license = stdenv.lib.licenses.bsd3;
    homepage = https://github.com/CargoSense/absinthe_plug;
  };
}
</programlisting>
@ -185,18 +282,18 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</section>
</section>
<section xml:id="how-to-develop">
<title>How to Develop</title>
<section xml:id="accessing-an-environment">
<title>Accessing an Environment</title>
<para>
Often, we simply want to access a valid environment that contains a
specific package and its dependencies. We can accomplish that with the
<literal>env</literal> attribute of a derivation. For example, let's say
we want to access an Erlang REPL with <literal>ibrowse</literal> loaded
up. We could do the following:
</para>
<programlisting>
$ nix-shell -A beamPackages.ibrowse.env --run "erl"
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V7.0 (abort with ^G)
@ -237,20 +334,19 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
2>
</programlisting>
<para> <para>
Notice the <literal>-A beamPackages.ibrowse.env</literal>.That Notice the <literal>-A beamPackages.ibrowse.env</literal>. That is the key
is the key to this functionality. to this functionality.
</para> </para>
</section>
<section xml:id="creating-a-shell">
<title>Creating a Shell</title>
<para>
Getting access to an environment often isn't enough to do real
development. Usually, we need to create a <literal>shell.nix</literal>
file and do our development inside of the environment specified therein.
This file looks a lot like the packaging described above, except that
<literal>src</literal> points to the project root and we call the package
directly.
</para>
<programlisting>
{ pkgs ? import &lt;nixpkgs&gt; {} }:
@ -264,18 +360,19 @@ let
    name = "hex2nix";
    version = "0.1.0";
    src = ./.;
    beamDeps = [ ibrowse jsx erlware_commons ];
  };
  drv = beamPackages.callPackage f {};
in
  drv
</programlisting>
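<para>
With a <literal>shell.nix</literal> like the one above in place, entering the development environment is simply a matter of running <command>nix-shell</command> from the project root, for example (a sketch for a Rebar3 project):
</para>
<programlisting>
$ nix-shell --run "rebar3 compile"
</programlisting>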
<section xml:id="building-in-a-shell">
<title>Building in a Shell (for Mix Projects)</title>
<para>
We can leverage the support of the derivation, irrespective of the build
derivation, by calling the commands themselves.
</para>
<programlisting>
# =============================================================================
@ -335,42 +432,43 @@ analyze: build plt
</programlisting>
<para>
Using a <literal>shell.nix</literal> as described (see <xref
linkend="creating-a-shell"/>) should just work. Aside from
<literal>test</literal>, <literal>plt</literal>, and
<literal>analyze</literal>, the Make targets work just fine for all of the
build derivations.
</para>
</section>
</section>
</section>
<section xml:id="generating-packages-from-hex-with-hex2nix">
<title>Generating Packages from Hex with <literal>hex2nix</literal></title>
<para>
Updating the <link xlink:href="https://hex.pm">Hex</link> package set
requires <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>. Given the
path to the Erlang modules (usually
<literal>pkgs/development/erlang-modules</literal>), it will dump a file
called <literal>hex-packages.nix</literal>, containing all the packages that
use a recognized build system in <link
xlink:href="https://hex.pm">Hex</link>. It can't be determined, however,
whether every package is buildable.
</para>
<para>
To make life easier for our users, try to build every <link
xlink:href="https://hex.pm">Hex</link> package and remove those that fail.
To do that, simply run the following command in the root of your
<literal>nixpkgs</literal> repository:
</para>
<programlisting>
$ nix-build -A beamPackages
</programlisting>
<para>
That will attempt to build every package in
<literal>beamPackages</literal>. Then manually remove those that fail.
Hopefully, someone will improve <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> in the
future to automate the process.
</para>
</section>
</section>

View File

@ -130,6 +130,9 @@ the following arguments are of special significance to the function:
</para>
<para>To extract dependency information from a Go package in an automated way, use <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>.
It can produce a complete derivation and <varname>goDeps</varname> file for Go programs.</para>
<para>
<varname>buildGoPackage</varname> produces <xref linkend='chap-multiple-output' xrefstyle="select: title" />
where <varname>bin</varname> includes program binaries. You can test build a Go binary as follows:
@ -160,7 +163,4 @@ done
</screen>
</para>
</section> </section>


@ -923,6 +923,28 @@ If you need to change a package's attribute(s) from `configuration.nix` you coul
If you are using the `bepasty-server` package somewhere, for example in `systemPackages` or indirectly from `services.bepasty`, then a `nixos-rebuild switch` will rebuild the system but with the `bepasty-server` package using a different `src` attribute. This way one can modify `python` based software/libraries easily. Using `self` and `super` one can also alter dependencies (`buildInputs`) between the old state (`self`) and new state (`super`). If you are using the `bepasty-server` package somewhere, for example in `systemPackages` or indirectly from `services.bepasty`, then a `nixos-rebuild switch` will rebuild the system but with the `bepasty-server` package using a different `src` attribute. This way one can modify `python` based software/libraries easily. Using `self` and `super` one can also alter dependencies (`buildInputs`) between the old state (`self`) and new state (`super`).
### How to override a Python package using overlays?
To alter a python package using overlays, you would use the following approach:
```nix
self: super:
rec {
python = super.python.override {
packageOverrides = python-self: python-super: {
bepasty-server = python-super.bepasty-server.overrideAttrs ( oldAttrs: {
src = self.pkgs.fetchgit {
url = "https://github.com/bepasty/bepasty-server";
sha256 = "9ziqshmsf0rjvdhhca55sm0x8jz76fsf2q4rwh4m6lpcf8wr0nps";
rev = "e2516e8cf4f2afb5185337073607eb9e84a61d2d";
};
});
};
};
pythonPackages = python.pkgs;
}
```
## Contributing ## Contributing
### Contributing guidelines ### Contributing guidelines


@ -2,31 +2,55 @@
xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-qt"> xml:id="sec-language-qt">
<title>Qt and KDE</title> <title>Qt</title>
<para>Qt is a comprehensive desktop and mobile application development toolkit for C++. Legacy support is available for Qt 3 and Qt 4, but all current development uses Qt 5. The Qt 5 packages in Nixpkgs are updated frequently to take advantage of new features, but older versions are typically retained to support packages that may not be compatible with the latest version. When packaging applications and libraries for Nixpkgs, it is important to ensure that compatible versions of Qt 5 are used throughout; this consideration motivates the tools described below.</para> <para>
Qt is a comprehensive desktop and mobile application development toolkit for C++.
Legacy support is available for Qt 3 and Qt 4, but all current development uses Qt 5.
The Qt 5 packages in Nixpkgs are updated frequently to take advantage of new features,
but older versions are typically retained until their support window ends.
The most important consideration in packaging Qt-based software is ensuring that each package and all its dependencies use the same version of Qt 5;
this consideration motivates most of the tools described below.
</para>
<section xml:id="ssec-qt-libraries"><title>Libraries</title> <section xml:id="ssec-qt-libraries"><title>Packaging Libraries for Nixpkgs</title>
<para>Libraries that depend on Qt 5 should be built with each available version to avoid linking a dependent package against incompatible versions of Qt 5. (Although Qt 5 maintains backward ABI compatibility, linking against multiple versions at once is generally not possible; at best it will lead to runtime faults.) Packages that provide libraries should be added to the top-level function <varname>mkLibsForQt5</varname>, which is used to build a set of libraries for every Qt 5 version. The <varname>callPackage</varname> provided in this scope will ensure that only one Qt version will be used throughout the dependency tree. Dependencies should be imported unqualified, i.e. <literal>qtbase</literal> not <literal>qt5.qtbase</literal>, so that <varname>callPackage</varname> can do its work. <emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal> into your package; although it may work fine in the moment, it could well break at the next Qt update.</para> <para>
Whenever possible, libraries that use Qt 5 should be built with each available version.
Packages providing libraries should be added to the top-level function <varname>mkLibsForQt5</varname>,
which is used to build a set of libraries for every Qt 5 version.
A special <varname>callPackage</varname> function is used in this scope to ensure that the entire dependency tree uses the same Qt 5 version.
Import dependencies unqualified, i.e., <literal>qtbase</literal> not <literal>qt5.qtbase</literal>.
<emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal>.
</para>
<para>If a library does not support a particular version of Qt 5, it is best to mark it as broken by setting its <literal>meta.broken</literal> attribute. A package may be marked broken for certain versions by testing the <literal>qtbase.version</literal> attribute, which will always give the current Qt 5 version.</para> <para>
If a library does not support a particular version of Qt 5, it is best to mark it as broken by setting its <literal>meta.broken</literal> attribute.
A package may be marked broken for certain versions by testing the <literal>qtbase.version</literal> attribute, which will always give the current Qt 5 version.
</para>
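For example, a minimal sketch of a library expression that marks itself broken on unsupported Qt versions (the package name and version bound are hypothetical):

```nix
{ stdenv, fetchurl, qtbase }:

stdenv.mkDerivation rec {
  name = "mylib-1.0";  # hypothetical library
  src = fetchurl { /* ... */ };
  buildInputs = [ qtbase ];
  # Assume upstream only supports Qt 5.6 and later; qtbase.version
  # is whatever Qt 5 version this instantiation is built against.
  meta.broken = builtins.compareVersions qtbase.version "5.6.0" < 0;
}
```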
</section> </section>
<section xml:id="ssec-qt-applications"><title>Applications</title> <section xml:id="ssec-qt-applications"><title>Packaging Applications for Nixpkgs</title>
<para>Applications generally do not need to be built with every Qt version because they do not provide any libraries for dependent packages to link against. The primary consideration is merely ensuring that the application itself and its dependencies are linked against only one version of Qt. To call your application expression, use <literal>libsForQt5.callPackage</literal> instead of <literal>callPackage</literal>. Dependencies should be imported unqualified, i.e. <literal>qtbase</literal> not <literal>qt5.qtbase</literal>. <emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal> into your package; although it may work fine in the moment, it could well break at the next Qt update.</para> <para>
Call your application expression using <literal>libsForQt5.callPackage</literal> instead of <literal>callPackage</literal>.
Import dependencies unqualified, i.e., <literal>qtbase</literal> not <literal>qt5.qtbase</literal>.
<emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal>.
</para>
<para>It is generally best to build an application package against the <varname>libsForQt5</varname> library set. In case a package does not build with the latest Qt version, it is possible to pick a set pinned to a particular version, e.g. <varname>libsForQt55</varname> for Qt 5.5, if that is the latest version the package supports.</para> <para>
Qt 5 maintains strict backward compatibility, so it is generally best to build an application package against the latest version using the <varname>libsForQt5</varname> library set.
In case a package does not build with the latest Qt version, it is possible to pick a set pinned to a particular version, e.g. <varname>libsForQt55</varname> for Qt 5.5, if that is the latest version the package supports.
If a package must be pinned to an older Qt version, be sure to file a bug upstream;
because Qt is strictly backwards-compatible, any incompatibility is by definition a bug in the application.
</para>
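As a sketch, the corresponding entries in <literal>pkgs/top-level/all-packages.nix</literal> might look like this (the application name and path are illustrative):

```nix
# Built against the current Qt 5 via the Qt-aware callPackage:
myapp = libsForQt5.callPackage ../applications/misc/myapp { };

# Pinned to Qt 5.5 if that is the latest version the package supports:
myapp_qt55 = libsForQt55.callPackage ../applications/misc/myapp { };
```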
<para>Qt-based applications require that several paths be set at runtime. This is accomplished by wrapping the provided executables in a package with <literal>wrapQtProgram</literal> or <literal>makeQtWrapper</literal> during the <literal>postFixup</literal> phase. To use the wrapper generators, add <literal>makeQtWrapper</literal> to <literal>nativeBuildInputs</literal>. The wrapper generators support the same options as <literal>wrapProgram</literal> and <literal>makeWrapper</literal> respectively. It is usually only necessary to generate wrappers for programs intended to be invoked by the user.</para> <para>
When testing applications in Nixpkgs, it is a common practice to build the package with <literal>nix-build</literal> and run it using the created symbolic link.
</section> This will not work with Qt applications, however, because they have many hard runtime requirements that can only be guaranteed if the package is actually installed.
To test a Qt application, install it with <literal>nix-env</literal> or run it inside <literal>nix-shell</literal>.
<section xml:id="ssec-qt-kde"><title>KDE</title> </para>
<para>The KDE Frameworks are a set of libraries for Qt 5 which form the basis of the Plasma desktop environment and the KDE Applications suite. Packaging a Frameworks-based library does not require any steps beyond those described above for general Qt-based libraries. Frameworks-based applications should not use <literal>makeQtWrapper</literal>; instead, use <literal>kdeWrapper</literal> to create the necessary wrappers: <literal>kdeWrapper { unwrapped = <replaceable>expr</replaceable>; targets = <replaceable>exes</replaceable>; }</literal>, where <replaceable>expr</replaceable> is the un-wrapped package expression and <replaceable>exes</replaceable> is a list of strings giving the relative paths to programs in the package which should be wrapped.</para>
</section> </section>


@ -8,15 +8,48 @@ date: 2016-06-25
You'll get a vim(-your-suffix) in PATH also loading the plugins you want. You'll get a vim(-your-suffix) in PATH also loading the plugins you want.
Loading can be deferred; see examples. Loading can be deferred; see examples.
VAM (=vim-addon-manager) and Pathogen plugin managers are supported. Vim packages, VAM (=vim-addon-manager) and Pathogen are supported to load
Vundle, NeoBundle could be your turn. packages.
## dependencies by Vim plugins ## Custom configuration
Adding custom .vimrc lines can be done using the following code:
```
vim_configurable.customize {
name = "vim-with-plugins";
vimrcConfig.customRC = ''
set hidden
'';
}
```
## Vim packages
To store your plugins in Vim packages, the following example can be used:
```
vim_configurable.customize {
vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
# loaded on launch
start = [ youcompleteme fugitive ];
# manually loadable by calling `:packadd $plugin-name`
opt = [ phpCompletion elm-vim ];
# To automatically load a plugin when opening a filetype, add vimrc lines like:
# autocmd FileType php :packadd phpCompletion
  };
}
```
## VAM
### dependencies by Vim plugins
VAM introduced .json files supporting dependencies without versioning VAM introduced .json files supporting dependencies without versioning
assuming that "using latest version" is ok most of the time. assuming that "using latest version" is ok most of the time.
## HOWTO ### Example
First create a vim-scripts file having one plugin name per line. Example: First create a vim-scripts file having one plugin name per line. Example:


@ -78,7 +78,7 @@ self: super:
<para>The first argument, usually named <varname>self</varname>, corresponds to the final package <para>The first argument, usually named <varname>self</varname>, corresponds to the final package
set. You should use this set for the dependencies of all packages specified in your set. You should use this set for the dependencies of all packages specified in your
overlay. For example, all the dependencies of <varname>rr</varname> in the example above come overlay. For example, all the dependencies of <varname>rr</varname> in the example above come
from <varname>self</varname>, as well as the overriden dependencies used in the from <varname>self</varname>, as well as the overridden dependencies used in the
<varname>boost</varname> override.</para> <varname>boost</varname> override.</para>
<para>The second argument, usually named <varname>super</varname>, <para>The second argument, usually named <varname>super</varname>,


@ -516,4 +516,140 @@ to your configuration, rebuild, and run the game with
</section> </section>
<section xml:id="sec-emacs">
<title>Emacs</title>
<section xml:id="sec-emacs-config">
<title>Configuring Emacs</title>
<para>
The Emacs package comes with some extra helpers to make it easier to
configure. <varname>emacsWithPackages</varname> allows you to manage
packages from ELPA. This means that you will not have to install
those packages from within Emacs. For instance, if you wanted to use
<literal>company</literal>, <literal>counsel</literal>,
<literal>flycheck</literal>, <literal>ivy</literal>,
<literal>magit</literal>, <literal>projectile</literal>, and
<literal>use-package</literal> you could use this as a
<filename>~/.config/nixpkgs/config.nix</filename> override:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
  };
}
</screen>
<para>
You can install it like any other package via <command>nix-env -iA
myEmacs</command>. However, this will only install those packages.
It will not <literal>configure</literal> them for us. To do this, we
need to provide a configuration file. Luckily, it is possible to do
this from within Nix! By modifying the above example, we can make
Emacs load a custom config file. The key is to create a package that
provides a <filename>default.el</filename> file in
<filename>/share/emacs/site-lisp/</filename>. Emacs knows to load
this file automatically when it starts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("&lt;C-tab&gt;" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
This provides a fairly full Emacs start file. It will be loaded in
addition to the user's personal config. You can always disable it by
passing <command>-q</command> to the Emacs command.
</para>
</section>
</section>
</chapter> </chapter>


@ -18,7 +18,7 @@
<para>The high change rate of nixpkgs make any pull request that is open for <para>The high change rate of nixpkgs make any pull request that is open for
long enough subject to conflicts that will require extra work from the long enough subject to conflicts that will require extra work from the
submitter or the merger. Reviewing pull requests in a timely manner and being submitter or the merger. Reviewing pull requests in a timely manner and being
responsive to the comments is the key to avoid these. Github provides sort responsive to the comments is the key to avoid these. GitHub provides sort
filters that can be used to see the <link filters that can be used to see the <link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most
recently</link> and the <link recently</link> and the <link


@ -318,7 +318,13 @@ containing some shell commands to be executed, or by redefining the
shell function shell function
<varname><replaceable>name</replaceable>Phase</varname>. The former <varname><replaceable>name</replaceable>Phase</varname>. The former
is convenient to override a phase from the derivation, while the is convenient to override a phase from the derivation, while the
latter is convenient from a build script.</para> latter is convenient from a build script.
However, typically one only wants to <emphasis>add</emphasis> some
commands to a phase, e.g. by defining <literal>postInstall</literal>
or <literal>preFixup</literal>, as skipping some of the default actions
may have unexpected consequences.
</para>
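A minimal sketch of appending to a phase rather than replacing it (the package name and files are hypothetical):

```nix
stdenv.mkDerivation {
  name = "hello-2.10";  # hypothetical package
  # ...
  # Runs after the default installPhase, which is kept intact;
  # defining installPhase itself would instead replace the default.
  postInstall = ''
    mkdir -p $out/share/doc/hello
    cp README $out/share/doc/hello/
  '';
}
```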
<section xml:id="ssec-controlling-phases"><title>Controlling <section xml:id="ssec-controlling-phases"><title>Controlling
@ -634,6 +640,16 @@ script) if it exists.</para>
true.</para></listitem> true.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry>
<term><varname>configurePlatforms</varname></term>
<listitem><para>
By default, when cross compiling, the configure script is passed <option>--build=...</option> and <option>--host=...</option>.
Packages can instead pass <literal>[ "build" "host" "target" ]</literal> or a subset to control exactly which platform flags are passed.
Compilers and other tools should use this to also pass the target platform, for example.
Note that eventually these flags will be passed in native builds too, to improve determinism: build-time guessing, as is done today, is a source of impurity.
</para></listitem>
</varlistentry>
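A compiler derivation might therefore set, as a sketch (the name is illustrative):

```nix
stdenv.mkDerivation {
  name = "my-cross-compiler";  # hypothetical
  # Pass --build, --host and --target to ./configure,
  # since a compiler also cares about the target platform.
  configurePlatforms = [ "build" "host" "target" ];
  # ...
}
```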
<varlistentry> <varlistentry>
<term><varname>preConfigure</varname></term> <term><varname>preConfigure</varname></term>
<listitem><para>Hook executed at the start of the configure <listitem><para>Hook executed at the start of the configure
@ -1156,7 +1172,7 @@ makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello
<term><option>--replace</option> <term><option>--replace</option>
<replaceable>s1</replaceable> <replaceable>s1</replaceable>
<replaceable>s2</replaceable></term> <replaceable>s2</replaceable></term>
<listitem><para>Replace every occurence of the string <listitem><para>Replace every occurrence of the string
<replaceable>s1</replaceable> by <replaceable>s1</replaceable> by
<replaceable>s2</replaceable>.</para></listitem> <replaceable>s2</replaceable>.</para></listitem>
</varlistentry> </varlistentry>
@ -1164,7 +1180,7 @@ makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello
<varlistentry> <varlistentry>
<term><option>--subst-var</option> <term><option>--subst-var</option>
<replaceable>varName</replaceable></term> <replaceable>varName</replaceable></term>
<listitem><para>Replace every occurence of <listitem><para>Replace every occurrence of
<literal>@<replaceable>varName</replaceable>@</literal> by <literal>@<replaceable>varName</replaceable>@</literal> by
the contents of the environment variable the contents of the environment variable
<replaceable>varName</replaceable>. This is useful for <replaceable>varName</replaceable>. This is useful for
@ -1177,7 +1193,7 @@ makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello
<term><option>--subst-var-by</option> <term><option>--subst-var-by</option>
<replaceable>varName</replaceable> <replaceable>varName</replaceable>
<replaceable>s</replaceable></term> <replaceable>s</replaceable></term>
<listitem><para>Replace every occurence of <listitem><para>Replace every occurrence of
<literal>@<replaceable>varName</replaceable>@</literal> by <literal>@<replaceable>varName</replaceable>@</literal> by
the string <replaceable>s</replaceable>.</para></listitem> the string <replaceable>s</replaceable>.</para></listitem>
</varlistentry> </varlistentry>
@ -1225,7 +1241,7 @@ substitute ./foo.in ./foo.out \
<term><function>substituteAll</function> <term><function>substituteAll</function>
<replaceable>infile</replaceable> <replaceable>infile</replaceable>
<replaceable>outfile</replaceable></term> <replaceable>outfile</replaceable></term>
<listitem><para>Replaces every occurence of <listitem><para>Replaces every occurrence of
<literal>@<replaceable>varName</replaceable>@</literal>, where <literal>@<replaceable>varName</replaceable>@</literal>, where
<replaceable>varName</replaceable> is any environment variable, in <replaceable>varName</replaceable> is any environment variable, in
<replaceable>infile</replaceable>, writing the result to <replaceable>infile</replaceable>, writing the result to
@ -1528,7 +1544,7 @@ bin/blib.a(bios_console.o): In function `bios_handle_cup':
depends on such a format string, it will need to be worked around. depends on such a format string, it will need to be worked around.
</para> </para>
<para>Addtionally, some warnings are enabled which might trigger build <para>Additionally, some warnings are enabled which might trigger build
failures if compiler warnings are treated as errors in the package build. failures if compiler warnings are treated as errors in the package build.
In this case, set <option>NIX_CFLAGS_COMPILE</option> to In this case, set <option>NIX_CFLAGS_COMPILE</option> to
<option>-Wno-error=warning-type</option>.</para> <option>-Wno-error=warning-type</option>.</para>
@ -1558,7 +1574,7 @@ fcntl2.h:50:4: error: call to '__open_missing_mode' declared with attribute erro
<term><varname>pic</varname></term> <term><varname>pic</varname></term>
<listitem> <listitem>
<para>Adds the <option>-fPIC</option> compiler options. This options adds <para>Adds the <option>-fPIC</option> compiler options. This options adds
support for position independant code in shared libraries and thus making support for position independent code in shared libraries and thus making
ASLR possible.</para> ASLR possible.</para>
<para>Most notably, the Linux kernel, kernel modules and other code <para>Most notably, the Linux kernel, kernel modules and other code
not running in an operating system environment like boot loaders won't not running in an operating system environment like boot loaders won't


@ -51,6 +51,24 @@ rec {
else { })); else { }));
/* `makeOverridable` takes a function from attribute set to attribute set and
injects an `override` attribute which can be used to override arguments of
the function.
nix-repl> x = {a, b}: { result = a + b; }
nix-repl> y = lib.makeOverridable x { a = 1; b = 2; }
nix-repl> y
{ override = «lambda»; overrideDerivation = «lambda»; result = 3; }
nix-repl> y.override { a = 10; }
{ override = «lambda»; overrideDerivation = «lambda»; result = 12; }
Please refer to "Nixpkgs Contributors Guide" section
"<pkg>.overrideDerivation" to learn about `overrideDerivation` and caveats
related to its use.
*/
makeOverridable = f: origArgs: makeOverridable = f: origArgs:
let let
ff = f origArgs; ff = f origArgs;


@ -20,8 +20,32 @@ rec {
traceXMLValMarked = str: x: trace (str + builtins.toXML x) x; traceXMLValMarked = str: x: trace (str + builtins.toXML x) x;
# strict trace functions (traced structure is fully evaluated and printed) # strict trace functions (traced structure is fully evaluated and printed)
/* `builtins.trace`, but the value is `builtins.deepSeq`ed first. */
traceSeq = x: y: trace (builtins.deepSeq x x) y; traceSeq = x: y: trace (builtins.deepSeq x x) y;
/* Like `traceSeq`, but only down to depth n.
* This is very useful because lots of `traceSeq` usages
* lead to an infinite recursion.
*/
traceSeqN = depth: x: y: with lib;
let snip = v: if isList v then noQuotes "[]" v
else if isAttrs v then noQuotes "{}" v
else v;
noQuotes = str: v: { __pretty = const str; val = v; };
modify = n: fn: v: if (n == 0) then fn v
else if isList v then map (modify (n - 1) fn) v
else if isAttrs v then mapAttrs
(const (modify (n - 1) fn)) v
else v;
in trace (generators.toPretty { allowPrettyValues = true; }
(modify depth snip x)) y;
/* `traceSeq`, but the same value is traced and returned */
traceValSeq = v: traceVal (builtins.deepSeq v v); traceValSeq = v: traceVal (builtins.deepSeq v v);
/* `traceValSeq` but with fixed depth */
traceValSeqN = depth: v: traceSeqN depth v v;
# this can help debug your code as well - designed to not produce thousands of lines # this can help debug your code as well - designed to not produce thousands of lines
traceShowVal = x: trace (showVal x) x; traceShowVal = x: trace (showVal x) x;


@ -5,8 +5,9 @@
*/ */
let let
# trivial, often used functions # often used, or depending on very little
trivial = import ./trivial.nix; trivial = import ./trivial.nix;
fixedPoints = import ./fixed-points.nix;
# datatypes # datatypes
attrsets = import ./attrsets.nix; attrsets = import ./attrsets.nix;
@ -42,7 +43,7 @@ let
filesystem = import ./filesystem.nix; filesystem = import ./filesystem.nix;
in in
{ inherit trivial { inherit trivial fixedPoints
attrsets lists strings stringsWithDeps attrsets lists strings stringsWithDeps
customisation maintainers meta sources customisation maintainers meta sources
modules options types modules options types
@ -55,6 +56,7 @@ in
} }
# !!! don't include everything at top-level; perhaps only the most # !!! don't include everything at top-level; perhaps only the most
# commonly used functions. # commonly used functions.
// trivial // lists // strings // stringsWithDeps // attrsets // sources // trivial // fixedPoints
// lists // strings // stringsWithDeps // attrsets // sources
// options // types // meta // debug // misc // modules // options // types // meta // debug // misc // modules
// customisation // customisation

lib/fixed-points.nix Normal file

@ -0,0 +1,78 @@
rec {
# Compute the fixed point of the given function `f`, which is usually an
# attribute set that expects its final, non-recursive representation as an
# argument:
#
# f = self: { foo = "foo"; bar = "bar"; foobar = self.foo + self.bar; }
#
# Nix evaluates this recursion until all references to `self` have been
# resolved. At that point, the final result is returned and `f x = x` holds:
#
# nix-repl> fix f
# { bar = "bar"; foo = "foo"; foobar = "foobar"; }
#
# Type: fix :: (a -> a) -> a
#
# See https://en.wikipedia.org/wiki/Fixed-point_combinator for further
# details.
fix = f: let x = f x; in x;
# A variant of `fix` that records the original recursive attribute set in the
# result. This is useful in combination with the `extends` function to
# implement deep overriding. See pkgs/development/haskell-modules/default.nix
# for a concrete example.
fix' = f: let x = f x // { __unfix__ = f; }; in x;
# Modify the contents of an explicitly recursive attribute set in a way that
# honors `self`-references. This is accomplished with a function
#
# g = self: super: { foo = super.foo + " + "; }
#
# that has access to the unmodified input (`super`) as well as the final
# non-recursive representation of the attribute set (`self`). `extends`
# differs from the native `//` operator insofar as that it's applied *before*
# references to `self` are resolved:
#
# nix-repl> fix (extends g f)
# { bar = "bar"; foo = "foo + "; foobar = "foo + bar"; }
#
# The name of the function is inspired by object-oriented inheritance, i.e.
# think of it as an infix operator `g extends f` that mimics the syntax from
# Java. It may seem counter-intuitive to have the "base class" as the second
# argument, but it's nice this way if several uses of `extends` are cascaded.
extends = f: rattrs: self: let super = rattrs self; in super // f self super;
# Compose two extending functions of the type expected by 'extends'
# into one where changes made in the first are available in the
# 'super' of the second
composeExtensions =
f: g: self: super:
let fApplied = f self super;
super' = super // fApplied;
in fApplied // g self super';
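# A minimal illustration of the composition (names invented for the
# example): ext1's additions are visible in ext2's `super`.
#
#   ext = lib.composeExtensions
#     (self: super: { a = 1; })
#     (self: super: { b = super.a + 1; });
#
#   nix-repl> lib.fix (lib.extends ext (self: { }))
#   { a = 1; b = 2; }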
# Create an overridable, recursive attribute set. For example:
#
# nix-repl> obj = makeExtensible (self: { })
#
# nix-repl> obj
# { __unfix__ = «lambda»; extend = «lambda»; }
#
# nix-repl> obj = obj.extend (self: super: { foo = "foo"; })
#
# nix-repl> obj
# { __unfix__ = «lambda»; extend = «lambda»; foo = "foo"; }
#
# nix-repl> obj = obj.extend (self: super: { foo = super.foo + " + "; bar = "bar"; foobar = self.foo + self.bar; })
#
# nix-repl> obj
# { __unfix__ = «lambda»; bar = "bar"; extend = «lambda»; foo = "foo + "; foobar = "foo + bar"; }
makeExtensible = makeExtensibleWithCustomName "extend";
# Same as `makeExtensible` but the name of the extending attribute is
# customized.
makeExtensibleWithCustomName = extenderName: rattrs:
fix' rattrs // {
${extenderName} = f: makeExtensibleWithCustomName extenderName (extends f rattrs);
};
}


@ -90,4 +90,41 @@ rec {
* parsers as well. * parsers as well.
*/ */
toYAML = {}@args: toJSON args; toYAML = {}@args: toJSON args;
/* Pretty print a value, akin to `builtins.trace`.
* Should probably be a builtin as well.
*/
toPretty = {
/* If this option is true, attrsets like { __pretty = fn; val = ; }
will use fn to convert val to a pretty printed representation.
(This means fn is type Val -> String.) */
allowPrettyValues ? false
}@args: v: with builtins;
if isInt v then toString v
else if isBool v then (if v == true then "true" else "false")
else if isString v then "\"" + v + "\""
else if null == v then "null"
else if isFunction v then
let fna = functionArgs v;
showFnas = concatStringsSep "," (libAttr.mapAttrsToList
(name: hasDefVal: if hasDefVal then "(${name})" else name)
fna);
in if fna == {} then "<λ>"
else "<λ:{${showFnas}}>"
else if isList v then "[ "
+ libStr.concatMapStringsSep " " (toPretty args) v
+ " ]"
else if isAttrs v then
# apply pretty values if allowed
if attrNames v == [ "__pretty" "val" ] && allowPrettyValues
then v.__pretty v.val
# TODO: there is probably a better representation?
else if v ? type && v.type == "derivation" then "<δ>"
else "{ "
+ libStr.concatStringsSep " " (libAttr.mapAttrsToList
(name: value:
"${toPretty args name} = ${toPretty args value};") v)
+ " }"
else "toPretty: should never happen (v = ${v})";
} }


@ -45,6 +45,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Apple Public Source License 2.0"; fullName = "Apple Public Source License 2.0";
}; };
arphicpl = {
fullName = "Arphic Public License";
url = https://www.freedesktop.org/wiki/Arphic_Public_License/;
};
artistic1 = spdx { artistic1 = spdx {
spdxId = "Artistic-1.0"; spdxId = "Artistic-1.0";
fullName = "Artistic License 1.0"; fullName = "Artistic License 1.0";


@@ -99,6 +99,7 @@
  chris-martin = "Chris Martin <ch.martin@gmail.com>";
  chrisjefferson = "Christopher Jefferson <chris@bubblescope.net>";
  christopherpoole = "Christopher Mark Poole <mail@christopherpoole.net>";
+  ciil = "Simon Lackerbauer <simon@lackerbauer.com>";
  ckampka = "Christian Kampka <christian@kampka.net>";
  cko = "Christine Koppelt <christine.koppelt@gmail.com>";
  cleverca22 = "Michael Bishop <cleverca22@gmail.com>";
@@ -132,6 +133,7 @@
  deepfire = "Kosyrev Serge <_deepfire@feelingofgreen.ru>";
  demin-dmitriy = "Dmitriy Demin <demindf@gmail.com>";
  DerGuteMoritz = "Moritz Heidkamp <moritz@twoticketsplease.de>";
+  dermetfan = "Robin Stumm <serverkorken@gmail.com>";
  DerTim1 = "Tim Digel <tim.digel@active-group.de>";
  desiderius = "Didier J. Devroye <didier@devroye.name>";
  devhell = "devhell <\"^\"@regexmail.net>";
@@ -139,11 +141,13 @@
  dfoxfranke = "Daniel Fox Franke <dfoxfranke@gmail.com>";
  dgonyeo = "Derek Gonyeo <derek@gonyeo.com>";
  dipinhora = "Dipin Hora <dipinhora+github@gmail.com>";
+  disassembler = "Samuel Leathers <disasm@gmail.com>";
  dmalikov = "Dmitry Malikov <malikov.d.y@gmail.com>";
  DmitryTsygankov = "Dmitry Tsygankov <dmitry.tsygankov@gmail.com>";
  dmjio = "David Johnson <djohnson.m@gmail.com>";
  dochang = "Desmond O. Chang <dochang@gmail.com>";
  domenkozar = "Domen Kozar <domen@dev.si>";
+  dotlambda = "Robert Schütz <rschuetz17@gmail.com>";
  doublec = "Chris Double <chris.double@double.co.nz>";
  dpaetzel = "David Pätzel <david.a.paetzel@gmail.com>";
  drets = "Dmytro Rets <dmitryrets@gmail.com>";
@@ -161,6 +165,7 @@
  ehegnes = "Eric Hegnes <eric.hegnes@gmail.com>";
  ehmry = "Emery Hemingway <emery@vfemail.net>";
  eikek = "Eike Kettner <eike.kettner@posteo.de>";
+  ekleog = "Leo Gaspard <leo@gaspard.io>";
  elasticdog = "Aaron Bull Schaefer <aaron@elasticdog.com>";
  eleanor = "Dejan Lukan <dejan@proteansec.com>";
  elitak = "Eric Litak <elitak@gmail.com>";
@@ -176,7 +181,9 @@
  exlevan = "Alexey Levan <exlevan@gmail.com>";
  expipiplus1 = "Joe Hermaszewski <nix@monoid.al>";
  fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>";
+  fare = "Francois-Rene Rideau <fahree@gmail.com>";
  falsifian = "James Cook <james.cook@utoronto.ca>";
+  florianjacob = "Florian Jacob <projects+nixos@florianjacob.de>";
  flosse = "Markus Kohlhase <mail@markus-kohlhase.de>";
  fluffynukeit = "Daniel Austin <dan@fluffynukeit.com>";
  fmthoma = "Franz Thoma <f.m.thoma@googlemail.com>";
@@ -241,6 +248,7 @@
  jensbin = "Jens Binkert <jensbin@protonmail.com>";
  jerith666 = "Matt McHenry <github@matt.mchenryfamily.org>";
  jfb = "James Felix Black <james@yamtime.com>";
+  jfrankenau = "Johannes Frankenau <johannes@frankenau.net>";
  jgeerds = "Jascha Geerds <jascha@jgeerds.name>";
  jgertm = "Tim Jaeger <jger.tm@gmail.com>";
  jgillich = "Jakob Gillich <jakob@gillich.me>";
@@ -266,6 +274,7 @@
  kaiha = "Kai Harries <kai.harries@gmail.com>";
  kamilchm = "Kamil Chmielewski <kamil.chm@gmail.com>";
  kampfschlaefer = "Arnold Krille <arnold@arnoldarts.de>";
+  kentjames = "James Kent <jameschristopherkent@gmail.com>";
  kevincox = "Kevin Cox <kevincox@kevincox.ca>";
  khumba = "Bryan Gardiner <bog@khumba.net>";
  KibaFox = "Kiba Fox <kiba.fox@foxypossibilities.com>";
@@ -462,6 +471,7 @@
  rob = "Rob Vermaas <rob.vermaas@gmail.com>";
  robberer = "Longrin Wischnewski <robberer@freakmail.de>";
  robbinch = "Robbin C. <robbinch33@gmail.com>";
+  roberth = "Robert Hensing <nixpkgs@roberthensing.nl>";
  robgssp = "Rob Glossop <robgssp@gmail.com>";
  roblabla = "Robin Lambertz <robinlambertz+dev@gmail.com>";
  roconnor = "Russell O'Connor <roconnor@theorem.ca>";
@@ -472,6 +482,7 @@
  rushmorem = "Rushmore Mushambi <rushmore@webenchanter.com>";
  rvl = "Rodney Lorrimar <dev+nix@rodney.id.au>";
  rvlander = "Gaëtan André <rvlander@gaetanandre.eu>";
+  rvolosatovs = "Roman Volosatovs <rvolosatovs@riseup.net>";
  ryanartecona = "Ryan Artecona <ryanartecona@gmail.com>";
  ryansydnor = "Ryan Sydnor <ryan.t.sydnor@gmail.com>";
  ryantm = "Ryan Mulligan <ryan@ryantm.com>";
@@ -542,6 +553,7 @@
  tokudan = "Daniel Frank <git@danielfrank.net>";
  tomberek = "Thomas Bereknyei <tomberek@gmail.com>";
  travisbhartwell = "Travis B. Hartwell <nafai@travishartwell.net>";
+  trevorj = "Trevor Joynson <nix@trevor.joynson.io>";
  trino = "Hubert Mühlhans <muehlhans.hubert@ekodia.de>";
  tstrobel = "Thomas Strobel <4ZKTUB6TEP74PYJOPWIR013S2AV29YUBW5F9ZH2F4D5UMJUJ6S@hash.domains>";
  ttuegel = "Thomas Tuegel <ttuegel@mailbox.org>";
@@ -599,4 +611,5 @@
  zohl = "Al Zohali <zohl@fmap.me>";
  zoomulator = "Kim Simmons <zoomulator@gmail.com>";
  zraexy = "David Mell <zraexy@gmail.com>";
+  zx2c4 = "Jason A. Donenfeld <Jason@zx2c4.com>";
}


@@ -17,6 +17,11 @@ rec {
    drv // { meta = (drv.meta or {}) // newAttrs; };

+  /* Disable Hydra builds of given derivation.
+   */
+  dontDistribute = drv: addMetaAttrs { hydraPlatforms = []; } drv;
+
  /* Change the symbolic name of a package for presentation purposes
     (i.e., so that nix-env users can tell them apart).
   */


@@ -438,8 +438,13 @@ rec {
       => true
     isStorePath pkgs.python
       => true
+    isStorePath [] || isStorePath 42 || isStorePath {}
+      => false
  */
-  isStorePath = x: builtins.substring 0 1 (toString x) == "/" && dirOf (builtins.toPath x) == builtins.storeDir;
+  isStorePath = x:
+    builtins.isString x
+    && builtins.substring 0 1 (toString x) == "/"
+    && dirOf (builtins.toPath x) == builtins.storeDir;

  /* Convert string to int
     Obviously, it is a bit hacky to use fromJSON that way.


@@ -5,6 +5,7 @@ rec {
  parse = import ./parse.nix;
  inspect = import ./inspect.nix;
  platforms = import ./platforms.nix;
+  examples = import ./examples.nix;

  # Elaborate a `localSystem` or `crossSystem` so that it contains everything
  # necessary.

lib/systems/examples.nix (new file, 130 lines)

@@ -0,0 +1,130 @@
# These can be passed to nixpkgs as either the `localSystem` or
# `crossSystem`. They are put here for user convenience, but also used by cross
# tests and linux cross stdenv building, so handle with care!
let platforms = import ./platforms.nix; in
rec {
#
# Linux
#
sheevaplug = rec {
config = "armv5tel-unknown-linux-gnueabi";
bigEndian = false;
arch = "armv5tel";
float = "soft";
withTLS = true;
libc = "glibc";
platform = platforms.sheevaplug;
openssl.system = "linux-generic32";
inherit (platform) gcc;
};
raspberryPi = rec {
config = "armv6l-unknown-linux-gnueabihf";
bigEndian = false;
arch = "armv6l";
float = "hard";
fpu = "vfp";
withTLS = true;
libc = "glibc";
platform = platforms.raspberrypi;
openssl.system = "linux-generic32";
inherit (platform) gcc;
};
armv7l-hf-multiplatform = rec {
config = "arm-unknown-linux-gnueabihf";
bigEndian = false;
arch = "armv7-a";
float = "hard";
fpu = "vfpv3-d16";
withTLS = true;
libc = "glibc";
platform = platforms.armv7l-hf-multiplatform;
openssl.system = "linux-generic32";
inherit (platform) gcc;
};
aarch64-multiplatform = rec {
config = "aarch64-unknown-linux-gnu";
bigEndian = false;
arch = "aarch64";
withTLS = true;
libc = "glibc";
platform = platforms.aarch64-multiplatform;
inherit (platform) gcc;
};
scaleway-c1 = armv7l-hf-multiplatform // rec {
platform = platforms.scaleway-c1;
inherit (platform) gcc;
inherit (gcc) fpu;
};
pogoplug4 = rec {
arch = "armv5tel";
config = "armv5tel-softfloat-linux-gnueabi";
float = "soft";
platform = platforms.pogoplug4;
inherit (platform) gcc;
libc = "glibc";
withTLS = true;
openssl.system = "linux-generic32";
};
fuloongminipc = rec {
config = "mips64el-unknown-linux-gnu";
bigEndian = false;
arch = "mips";
float = "hard";
withTLS = true;
libc = "glibc";
platform = platforms.fuloong2f_n32;
openssl.system = "linux-generic32";
inherit (platform) gcc;
};
#
# Darwin
#
iphone64 = {
config = "aarch64-apple-darwin14";
arch = "arm64";
libc = "libSystem";
platform = {};
};
iphone32 = {
config = "arm-apple-darwin10";
arch = "armv7-a";
libc = "libSystem";
platform = {};
};
#
# Windows
#
# 32 bit mingw-w64
mingw32 = {
config = "i686-pc-mingw32";
arch = "x86"; # Irrelevant
libc = "msvcrt"; # This distinguishes the mingw (non posix) toolchain
platform = {};
};
# 64 bit mingw-w64
mingwW64 = {
# That's the triplet they use in the mingw-w64 docs.
config = "x86_64-pc-mingw32";
arch = "x86_64"; # Irrelevant
libc = "msvcrt"; # This distinguishes the mingw (non posix) toolchain
platform = {};
};
}
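Per the comment at the top of the new file, these attribute sets are meant to be handed to nixpkgs as `localSystem` or `crossSystem`; a hedged sketch of that usage (the import paths and channel layout are assumptions, not part of this commit):

```nix
# cross-sketch.nix — hypothetical usage of one of the example systems above.
import <nixpkgs> {
  crossSystem = (import <nixpkgs/lib/systems/examples.nix>).raspberryPi;
}
```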


@@ -255,6 +255,10 @@ rec {
        arch = "armv6";
        fpu = "vfp";
        float = "hard";
+        # TODO(@Ericson2314) what is this and is it a good idea? It was
+        # used in some cross compilation examples but not others.
+        #
+        # abi = "aapcs-linux";
      };
    };
@@ -460,7 +464,10 @@ rec {
    '';
    kernelTarget = "vmlinux";
    uboot = null;
-    gcc.arch = "loongson2f";
+    gcc = {
+      arch = "loongson2f";
+      abi = "n32";
+    };
  };

  beaglebone = armv7l-hf-multiplatform // {


@@ -1,7 +1,6 @@
# to run these tests:
# nix-instantiate --eval --strict nixpkgs/lib/tests/misc.nix
# if the resulting list is empty, all tests passed
-let inherit (builtins) add; in

with import ../default.nix;

runTests {
@@ -88,6 +87,37 @@ runTests {
    expected = [ "2001" "db8" "0" "0042" "" "8a2e" "370" "" ];
  };
testIsStorePath = {
expr =
let goodPath =
"${builtins.storeDir}/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11";
in {
storePath = isStorePath goodPath;
storePathAppendix = isStorePath
"${goodPath}/bin/python";
nonAbsolute = isStorePath (concatStrings (tail (stringToCharacters goodPath)));
asPath = isStorePath (builtins.toPath goodPath);
otherPath = isStorePath "/something/else";
otherVals = {
attrset = isStorePath {};
list = isStorePath [];
int = isStorePath 42;
};
};
expected = {
storePath = true;
storePathAppendix = false;
nonAbsolute = false;
asPath = true;
otherPath = false;
otherVals = {
attrset = false;
list = false;
int = false;
};
};
};
  # LISTS

  testFilter = {
@@ -255,6 +285,38 @@ runTests {
    expected = builtins.toJSON val;
  };
testToPretty = {
expr = mapAttrs (const (generators.toPretty {})) rec {
int = 42;
bool = true;
string = "fnord";
null_ = null;
function = x: x;
functionArgs = { arg ? 4, foo }: arg;
list = [ 3 4 function [ false ] ];
attrs = { foo = null; "foo bar" = "baz"; };
drv = derivation { name = "test"; system = builtins.currentSystem; };
};
expected = rec {
int = "42";
bool = "true";
string = "\"fnord\"";
null_ = "null";
function = "<λ>";
functionArgs = "<λ:{(arg),foo}>";
list = "[ 3 4 ${function} [ false ] ]";
attrs = "{ \"foo\" = null; \"foo bar\" = \"baz\"; }";
drv = "<δ>";
};
};
testToPrettyAllowPrettyValues = {
expr = generators.toPretty { allowPrettyValues = true; }
{ __pretty = v: "«" + v + "»"; val = "foo"; };
expected = "«foo»";
};
  # MISC

  testOverridableDelayableArgsTest = {
@@ -266,14 +328,14 @@ runTests {
    res4 = let x = defaultOverridableDelayableArgs id { a = 7; };
           in (x.merge) ( x: { b = 10; });
    res5 = let x = defaultOverridableDelayableArgs id { a = 7; };
-          in (x.merge) ( x: { a = add x.a 3; });
+          in (x.merge) ( x: { a = builtins.add x.a 3; });
-    res6 = let x = defaultOverridableDelayableArgs id { a = 7; mergeAttrBy = { a = add; }; };
+    res6 = let x = defaultOverridableDelayableArgs id { a = 7; mergeAttrBy = { a = builtins.add; }; };
               y = x.merge {};
           in (y.merge) { a = 10; };
    resRem7 = res6.replace (a: removeAttrs a ["a"]);
-    resReplace6 = let x = defaultOverridableDelayableArgs id { a = 7; mergeAttrBy = { a = add; }; };
+    resReplace6 = let x = defaultOverridableDelayableArgs id { a = 7; mergeAttrBy = { a = builtins.add; }; };
                      x2 = x.merge { a = 20; }; # now we have 27
                  in (x2.replace) { a = 10; }; # and override the value by 10


@@ -43,84 +43,6 @@ rec {
   */
  mergeAttrs = x: y: x // y;
# Compute the fixed point of the given function `f`, which is usually an
# attribute set that expects its final, non-recursive representation as an
# argument:
#
# f = self: { foo = "foo"; bar = "bar"; foobar = self.foo + self.bar; }
#
# Nix evaluates this recursion until all references to `self` have been
# resolved. At that point, the final result is returned and `f x = x` holds:
#
# nix-repl> fix f
# { bar = "bar"; foo = "foo"; foobar = "foobar"; }
#
# Type: fix :: (a -> a) -> a
#
# See https://en.wikipedia.org/wiki/Fixed-point_combinator for further
# details.
fix = f: let x = f x; in x;
# A variant of `fix` that records the original recursive attribute set in the
# result. This is useful in combination with the `extends` function to
# implement deep overriding. See pkgs/development/haskell-modules/default.nix
# for a concrete example.
fix' = f: let x = f x // { __unfix__ = f; }; in x;
# Modify the contents of an explicitly recursive attribute set in a way that
# honors `self`-references. This is accomplished with a function
#
# g = self: super: { foo = super.foo + " + "; }
#
# that has access to the unmodified input (`super`) as well as the final
# non-recursive representation of the attribute set (`self`). `extends`
# differs from the native `//` operator insofar as that it's applied *before*
# references to `self` are resolved:
#
# nix-repl> fix (extends g f)
# { bar = "bar"; foo = "foo + "; foobar = "foo + bar"; }
#
# The name of the function is inspired by object-oriented inheritance, i.e.
# think of it as an infix operator `g extends f` that mimics the syntax from
# Java. It may seem counter-intuitive to have the "base class" as the second
# argument, but it's nice this way if several uses of `extends` are cascaded.
extends = f: rattrs: self: let super = rattrs self; in super // f self super;
# Compose two extending functions of the type expected by 'extends'
# into one where changes made in the first are available in the
# 'super' of the second
composeExtensions =
f: g: self: super:
let fApplied = f self super;
super' = super // fApplied;
in fApplied // g self super';
# Create an overridable, recursive attribute set. For example:
#
# nix-repl> obj = makeExtensible (self: { })
#
# nix-repl> obj
# { __unfix__ = «lambda»; extend = «lambda»; }
#
# nix-repl> obj = obj.extend (self: super: { foo = "foo"; })
#
# nix-repl> obj
# { __unfix__ = «lambda»; extend = «lambda»; foo = "foo"; }
#
# nix-repl> obj = obj.extend (self: super: { foo = super.foo + " + "; bar = "bar"; foobar = self.foo + self.bar; })
#
# nix-repl> obj
# { __unfix__ = «lambda»; bar = "bar"; extend = «lambda»; foo = "foo + "; foobar = "foo + bar"; }
makeExtensible = makeExtensibleWithCustomName "extend";
# Same as `makeExtensible` but the name of the extending attribute is
# customized.
makeExtensibleWithCustomName = extenderName: rattrs:
fix' rattrs // {
${extenderName} = f: makeExtensibleWithCustomName extenderName (extends f rattrs);
};
  # Flip the order of the arguments of a binary function.
  flip = f: a: b: f b a;
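`flip` simply hands a binary function its arguments in reverse order; a quick sketch of its behavior in a REPL (the `let` binding restates the definition so the snippet stands alone):

```nix
# flip f a b == f b a
let flip = f: a: b: f b a;
in flip builtins.sub 2 7
# evaluates to 5, i.e. builtins.sub 7 2
```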


@@ -25,18 +25,33 @@ INDEX = "https://pypi.io/pypi"
EXTENSIONS = ['tar.gz', 'tar.bz2', 'tar', 'zip', '.whl']
"""Permitted file extensions. These are evaluated from left to right and the first occurrence is returned."""

-def _get_value(attribute, text):
-    """Match attribute in text and return it."""
+import logging
+logging.basicConfig(level=logging.INFO)
+
+
+def _get_values(attribute, text):
+    """Match attribute in text and return all matches.
+
+    :returns: List of matches.
+    """
    regex = '{}\s+=\s+"(.*)";'.format(attribute)
    regex = re.compile(regex)
-    value = regex.findall(text)
-    n = len(value)
+    values = regex.findall(text)
+    return values
+
+
+def _get_unique_value(attribute, text):
+    """Match attribute in text and return unique match.
+
+    :returns: Single match.
+    """
+    values = _get_values(attribute, text)
+    n = len(values)
    if n > 1:
-        raise ValueError("Found too many values for {}".format(attribute))
+        raise ValueError("found too many values for {}".format(attribute))
    elif n == 1:
-        return value[0]
+        return values[0]
    else:
-        raise ValueError("No value found for {}".format(attribute))
+        raise ValueError("no value found for {}".format(attribute))


def _get_line_and_value(attribute, text):
    """Match attribute in text. Return the line and the value of the attribute."""
@@ -45,11 +60,11 @@ def _get_line_and_value(attribute, text):
    value = regex.findall(text)
    n = len(value)
    if n > 1:
-        raise ValueError("Found too many values for {}".format(attribute))
+        raise ValueError("found too many values for {}".format(attribute))
    elif n == 1:
        return value[0]
    else:
-        raise ValueError("No value found for {}".format(attribute))
+        raise ValueError("no value found for {}".format(attribute))
def _replace_value(attribute, value, text):
@@ -64,175 +79,151 @@ def _fetch_page(url):
    if r.status_code == requests.codes.ok:
        return r.json()
    else:
-        raise ValueError("Request for {} failed".format(url))
+        raise ValueError("request for {} failed".format(url))


-def _get_latest_version(package, extension):
+def _get_latest_version_pypi(package, extension):
+    """Get latest version and hash from PyPI."""
    url = "{}/{}/json".format(INDEX, package)
    json = _fetch_page(url)
-    data = extract_relevant_nix_data(json, extension)[1]
-    version = data['latest_version']
-    if version in data['versions']:
-        sha256 = data['versions'][version]['sha256']
-    else:
-        sha256 = None  # Its possible that no file was uploaded to PyPI
+    version = json['info']['version']
+    for release in json['releases'][version]:
+        if release['filename'].endswith(extension):
+            # TODO: In case of wheel we need to do further checks!
+            sha256 = release['digests']['sha256']
    return version, sha256


-def extract_relevant_nix_data(json, extension):
-    """Extract relevant Nix data from the JSON of a package obtained from PyPI.
-
-    :param json: JSON obtained from PyPI
-    """
-    def _extract_license(json):
-        """Extract license from JSON."""
-        return json['info']['license']
-
-    def _available_versions(json):
-        return json['releases'].keys()
-
-    def _extract_latest_version(json):
-        return json['info']['version']
-
-    def _get_src_and_hash(json, version, extensions):
-        """Obtain url and hash for a given version and list of allowable extensions."""
-        if not json['releases']:
-            msg = "Package {}: No releases available.".format(json['info']['name'])
-            raise ValueError(msg)
-        else:
-            # We use ['releases'] and not ['urls'] because we want to have the possibility for different version.
-            for possible_file in json['releases'][version]:
-                for extension in extensions:
-                    if possible_file['filename'].endswith(extension):
-                        src = {'url': str(possible_file['url']),
-                               'sha256': str(possible_file['digests']['sha256']),
-                        }
-                        return src
-            else:
-                msg = "Package {}: No release with valid file extension available.".format(json['info']['name'])
-                logging.info(msg)
-                return None
-                #raise ValueError(msg)
-
-    def _get_sources(json, extensions):
-        versions = _available_versions(json)
-        releases = {version: _get_src_and_hash(json, version, extensions) for version in versions}
-        releases = toolz.itemfilter(lambda x: x[1] is not None, releases)
-        return releases
-
-    # Collect data
-    name = str(json['info']['name'])
-    latest_version = str(_extract_latest_version(json))
-    #src = _get_src_and_hash(json, latest_version, EXTENSIONS)
-    sources = _get_sources(json, [extension])
-
-    # Collect meta data
-    license = str(_extract_license(json))
-    license = license if license != "UNKNOWN" else None
-    summary = str(json['info'].get('summary')).strip('.')
-    summary = summary if summary != "UNKNOWN" else None
-    #description = str(json['info'].get('description'))
-    #description = description if description != "UNKNOWN" else None
-    homepage = json['info'].get('home_page')
-
-    data = {
-        'latest_version' : latest_version,
-        'versions' : sources,
-        #'src' : src,
-        'meta' : {
-            'description' : summary if summary else None,
-            #'longDescription' : description,
-            'license' : license,
-            'homepage' : homepage,
-        },
-    }
-    return name, data
+def _get_latest_version_github(package, extension):
+    raise ValueError("updating from GitHub is not yet supported.")
+
+
+FETCHERS = {
+    'fetchFromGitHub' : _get_latest_version_github,
+    'fetchPypi' : _get_latest_version_pypi,
+    'fetchurl' : _get_latest_version_pypi,
+}
+
+DEFAULT_SETUPTOOLS_EXTENSION = 'tar.gz'
+
+FORMATS = {
+    'setuptools' : DEFAULT_SETUPTOOLS_EXTENSION,
+    'wheel' : 'whl'
+}
+
+
+def _determine_fetcher(text):
+    # Count occurrences of fetchers.
+    nfetchers = sum(text.count('src = {}'.format(fetcher)) for fetcher in FETCHERS.keys())
+    if nfetchers == 0:
+        raise ValueError("no fetcher.")
+    elif nfetchers > 1:
+        raise ValueError("multiple fetchers.")
+    else:
+        # Then we check which fetcher to use.
+        for fetcher in FETCHERS.keys():
+            if 'src = {}'.format(fetcher) in text:
+                return fetcher
+
+
+def _determine_extension(text, fetcher):
+    """Determine what extension is used in the expression.
+
+    If we use:
+    - fetchPypi, we check if format is specified.
+    - fetchurl, we determine the extension from the url.
+    - fetchFromGitHub we simply use `.tar.gz`.
+    """
+    if fetcher == 'fetchPypi':
+        try:
+            format = _get_unique_value('format', text)
+        except ValueError as e:
+            format = None # format was not given
+
+        try:
+            extension = _get_unique_value('extension', text)
+        except ValueError as e:
+            extension = None # extension was not given
+
+        if extension is None:
+            if format is None:
+                format = 'setuptools'
+            extension = FORMATS[format]
+
+    elif fetcher == 'fetchurl':
+        url = _get_unique_value('url', text)
+        extension = os.path.splitext(url)[1]
+        if 'pypi' not in url:
+            raise ValueError('url does not point to PyPI.')
+
+    elif fetcher == 'fetchFromGitHub':
+        raise ValueError('updating from GitHub is not yet implemented.')
+
+    return extension


def _update_package(path):
+    # Read the expression
+    with open(path, 'r') as f:
+        text = f.read()
+
+    # Determine pname.
+    pname = _get_unique_value('pname', text)
+
+    # Determine version.
+    version = _get_unique_value('version', text)
+
+    # First we check how many fetchers are mentioned.
+    fetcher = _determine_fetcher(text)
+    extension = _determine_extension(text, fetcher)
+
+    new_version, new_sha256 = _get_latest_version_pypi(pname, extension)
+
+    if new_version == version:
+        logging.info("Path {}: no update available for {}.".format(path, pname))
+        return False
+    if not new_sha256:
+        raise ValueError("no file available for {}.".format(pname))
+
+    text = _replace_value('version', new_version, text)
+    text = _replace_value('sha256', new_sha256, text)
+
+    with open(path, 'w') as f:
+        f.write(text)
+
+    logging.info("Path {}: updated {} from {} to {}".format(path, pname, version, new_version))
+    return True
+
+
+def _update(path):
    # We need to read and modify a Nix expression.
    if os.path.isdir(path):
        path = os.path.join(path, 'default.nix')

+    # If a default.nix does not exist, we quit.
    if not os.path.isfile(path):
-        logging.warning("Path does not exist: {}".format(path))
+        logging.info("Path {}: does not exist.".format(path))
        return False

+    # If file is not a Nix expression, we quit.
    if not path.endswith(".nix"):
-        logging.warning("Path does not end with `.nix`, skipping: {}".format(path))
+        logging.info("Path {}: does not end with `.nix`.".format(path))
        return False

-    with open(path, 'r') as f:
-        text = f.read()
-
    try:
-        pname = _get_value('pname', text)
+        return _update_package(path)
    except ValueError as e:
-        logging.warning("Path {}: {}".format(path, str(e)))
+        logging.warning("Path {}: {}".format(path, e))
        return False

-    try:
-        version = _get_value('version', text)
-    except ValueError as e:
-        logging.warning("Path {}: {}".format(path, str(e)))
-        return False
-
-    # If we use a wheel, then we need to request a wheel as well
-    try:
-        format = _get_value('format', text)
-    except ValueError as e:
-        # No format mentioned, then we assume we have setuptools
-        # and use a .tar.gz
-        logging.info("Path {}: {}".format(path, str(e)))
-        extension = ".tar.gz"
-    else:
-        if format == 'wheel':
-            extension = ".whl"
-        else:
-            try:
-                url = _get_value('url', text)
-                extension = os.path.splitext(url)[1]
-                if 'pypi' not in url:
-                    logging.warning("Path {}: uses non-PyPI url, not updating.".format(path))
-                    return False
-            except ValueError as e:
-                logging.info("Path {}: {}".format(path, str(e)))
-                extension = ".tar.gz"
-
-    try:
-        new_version, new_sha256 = _get_latest_version(pname, extension)
-    except ValueError as e:
-        logging.warning("Path {}: {}".format(path, str(e)))
-    else:
-        if not new_sha256:
-            logging.warning("Path has no valid file available: {}".format(path))
-            return False
-        if new_version != version:
-            try:
-                text = _replace_value('version', new_version, text)
-            except ValueError as e:
-                logging.warning("Path {}: {}".format(path, str(e)))
-            try:
-                text = _replace_value('sha256', new_sha256, text)
-            except ValueError as e:
-                logging.warning("Path {}: {}".format(path, str(e)))
-            with open(path, 'w') as f:
-                f.write(text)
-            logging.info("Updated {} from {} to {}".format(pname, version, new_version))
-        else:
-            logging.info("No update available for {} at {}".format(pname, version))
-    return True


def main():
    parser = argparse.ArgumentParser()
@@ -240,11 +231,11 @@ def main():
    args = parser.parse_args()

-    packages = args.package
+    packages = map(os.path.abspath, args.package)
-    count = list(map(_update_package, packages))
+    count = list(map(_update, packages))
-    #logging.info("{} package(s) updated".format(sum(count)))
+    logging.info("{} package(s) updated".format(sum(count)))


if __name__ == '__main__':
    main()
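The new `_get_values`/`_get_unique_value` helpers rest on a single regex over the raw Nix expression text; a standalone sketch of that pattern (the `re.escape` call is a defensive addition of mine — the script interpolates the attribute name directly, and the sample expression is hypothetical):

```python
import re

def get_values(attribute, text):
    """Return every value assigned to `attribute` in a Nix expression string."""
    # Mirrors the updater's pattern: matches lines of the form  attribute = "value";
    pattern = re.compile(r'{}\s+=\s+"(.*)";'.format(re.escape(attribute)))
    return pattern.findall(text)

expr = '''
  pname = "requests";
  version = "2.18.1";
'''

print(get_values("version", expr))  # ['2.18.1']
print(get_values("missing", expr))  # []
```

A unique-match helper like `_get_unique_value` then just checks that this list has exactly one element and raises `ValueError` otherwise.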


@@ -57,7 +57,7 @@ Thus, if something went wrong, you can get status info using
</para>

-<para>If the container has started succesfully, you can log in as
+<para>If the container has started successfully, you can log in as
root using the <command>root-login</command> operation:

<screen>


@@ -45,6 +45,13 @@ services.xserver.displayManager.lightdm.enable = true;
</programlisting>
</para>

+<para>You can set the keyboard layout (and optionally the layout variant):
+<programlisting>
+services.xserver.layout = "de";
+services.xserver.xkbVariant = "neo";
+</programlisting>
+</para>

<para>The X server is started automatically at boot time. If you
don't want this to happen, you can set:
<programlisting>


@@ -65,7 +65,7 @@ let
      chmod -R u+w .
      ln -s ${modulesDoc} configuration/modules.xml
      ln -s ${optionsDocBook} options-db.xml
-      echo "${version}" > version
+      printf "%s" "${version}" > version
    '';

  toc = builtins.toFile "toc.xml"
@@ -94,25 +94,43 @@ let
      "--stringparam chunk.toc ${toc}"
    ];

+  manual-combined = runCommand "nixos-manual-combined"
+    { inherit sources;
+      buildInputs = [ libxml2 libxslt ];
+      meta.description = "The NixOS manual as plain docbook XML";
+    }
+    ''
+      ${copySources}
+
+      xmllint --xinclude --output ./manual-combined.xml ./manual.xml
+      xmllint --xinclude --noxincludenode \
+         --output ./man-pages-combined.xml ./man-pages.xml
+
+      xmllint --debug --noout --nonet \
+        --relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
+        manual-combined.xml
+      xmllint --debug --noout --nonet \
+        --relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
+        man-pages-combined.xml
+
+      mkdir $out
+      cp manual-combined.xml $out/
+      cp man-pages-combined.xml $out/
+    '';
+
  olinkDB = runCommand "manual-olinkdb"
    { inherit sources;
      buildInputs = [ libxml2 libxslt ];
    }
    ''
-      ${copySources}
      xsltproc \
        ${manualXsltprocOptions} \
        --stringparam collect.xref.targets only \
        --stringparam targets.filename "$out/manual.db" \
-        --nonet --xinclude \
+        --nonet \
        ${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl \
-        ./manual.xml
+        ${manual-combined}/manual-combined.xml

-      # Check the validity of the man pages sources.
-      xmllint --noout --nonet --xinclude --noxincludenode \
-        --relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
-        ./man-pages.xml

      cat > "$out/olinkdb.xml" <<EOF
      <?xml version="1.0" encoding="utf-8"?>
@ -158,21 +176,15 @@ in rec {
allowedReferences = ["out"]; allowedReferences = ["out"];
} }
'' ''
${copySources}
# Check the validity of the manual sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual.xml
# Generate the HTML manual. # Generate the HTML manual.
dst=$out/share/doc/nixos dst=$out/share/doc/nixos
mkdir -p $dst mkdir -p $dst
xsltproc \ xsltproc \
${manualXsltprocOptions} \ ${manualXsltprocOptions} \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \ --stringparam target.database.document "${olinkDB}/olinkdb.xml" \
--nonet --xinclude --output $dst/ \ --nonet --output $dst/ \
${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl ./manual.xml ${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl \
${manual-combined}/manual-combined.xml
mkdir -p $dst/images/callouts mkdir -p $dst/images/callouts
cp ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/images/callouts/ cp ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/images/callouts/
@ -190,13 +202,6 @@ in rec {
buildInputs = [ libxml2 libxslt zip ]; buildInputs = [ libxml2 libxslt zip ];
} }
'' ''
${copySources}
# Check the validity of the manual sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual.xml
# Generate the epub manual. # Generate the epub manual.
dst=$out/share/doc/nixos dst=$out/share/doc/nixos
@ -204,10 +209,11 @@ in rec {
${manualXsltprocOptions} \ ${manualXsltprocOptions} \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \ --stringparam target.database.document "${olinkDB}/olinkdb.xml" \
--nonet --xinclude --output $dst/epub/ \ --nonet --xinclude --output $dst/epub/ \
${docbook5_xsl}/xml/xsl/docbook/epub/docbook.xsl ./manual.xml ${docbook5_xsl}/xml/xsl/docbook/epub/docbook.xsl \
${manual-combined}/manual-combined.xml
mkdir -p $dst/epub/OEBPS/images/callouts mkdir -p $dst/epub/OEBPS/images/callouts
cp -r ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/epub/OEBPS/images/callouts cp -r ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/epub/OEBPS/images/callouts # */
echo "application/epub+zip" > mimetype echo "application/epub+zip" > mimetype
manual="$dst/nixos-manual.epub" manual="$dst/nixos-manual.epub"
zip -0Xq "$manual" mimetype zip -0Xq "$manual" mimetype
@ -227,23 +233,16 @@ in rec {
allowedReferences = ["out"]; allowedReferences = ["out"];
} }
'' ''
${copySources}
# Check the validity of the man pages sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
./man-pages.xml
# Generate manpages. # Generate manpages.
mkdir -p $out/share/man mkdir -p $out/share/man
xsltproc --nonet --xinclude \ xsltproc --nonet \
--param man.output.in.separate.dir 1 \ --param man.output.in.separate.dir 1 \
--param man.output.base.dir "'$out/share/man/'" \ --param man.output.base.dir "'$out/share/man/'" \
--param man.endnotes.are.numbered 0 \ --param man.endnotes.are.numbered 0 \
--param man.break.after.slash 1 \ --param man.break.after.slash 1 \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \ --stringparam target.database.document "${olinkDB}/olinkdb.xml" \
${docbook5_xsl}/xml/xsl/docbook/manpages/docbook.xsl \ ${docbook5_xsl}/xml/xsl/docbook/manpages/docbook.xsl \
./man-pages.xml ${manual-combined}/man-pages-combined.xml
''; '';
} }

View File

@ -12,12 +12,12 @@ your <filename>configuration.nix</filename> to configure the system that
would be installed on the CD.</para> would be installed on the CD.</para>
<para>Default CD/DVD configurations are available <para>Default CD/DVD configurations are available
inside <filename>nixos/modules/installer/cd-dvd</filename>. To build them inside <filename>nixos/modules/installer/cd-dvd</filename>.
you have to set <envar>NIXOS_CONFIG</envar> before
running <command>nix-build</command> to build the ISO.
<screen> <screen>
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-minimal.nix</screen> $ git clone https://github.com/NixOS/nixpkgs.git
$ cd nixpkgs/nixos
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-minimal.nix default.nix</screen>
</para> </para>

View File

@ -96,7 +96,7 @@ options = {
</itemizedlist> </itemizedlist>
</para> </para>
<para>Both approachs have problems.</para> <para>Both approaches have problems.</para>
<para>Making backends independent can quickly become hard to manage. For <para>Making backends independent can quickly become hard to manage. For
display managers, there can be only one enabled at a time, but the type display managers, there can be only one enabled at a time, but the type

View File

@ -396,7 +396,7 @@ code before creating a new type.</para>
<listitem><para>For composed types that can take a submodule as type <listitem><para>For composed types that can take a submodule as type
parameter, this function can be used to substitute the parameter of a parameter, this function can be used to substitute the parameter of a
submodule type. It takes a module as parameter and return the type with submodule type. It takes a module as parameter and return the type with
the submodule options substituted. It is usally defined as a type the submodule options substituted. It is usually defined as a type
function call with a recursive call to function call with a recursive call to
<literal>substSubModules</literal>, e.g for a type <literal>substSubModules</literal>, e.g for a type
<literal>composedType</literal> that take an <literal>elemtype</literal> <literal>composedType</literal> that take an <literal>elemtype</literal>

View File

@ -342,7 +342,7 @@ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA haskellPackages.pandoc
<listitem> <listitem>
<para> <para>
Python 2.6 has been marked as broken (as it no longer recieves Python 2.6 has been marked as broken (as it no longer receives
security updates from upstream). security updates from upstream).
</para> </para>
</listitem> </listitem>

View File

@ -362,7 +362,7 @@ services.syncthing = {
<listitem> <listitem>
<para> <para>
<literal>networking.firewall.allowPing</literal> is now enabled by <literal>networking.firewall.allowPing</literal> is now enabled by
default. Users are encourarged to configure an approiate rate limit for default. Users are encouraged to configure an appropriate rate limit for
their machines using the Kernel interface at their machines using the Kernel interface at
<filename>/proc/sys/net/ipv4/icmp_ratelimit</filename> and <filename>/proc/sys/net/ipv4/icmp_ratelimit</filename> and
<filename>/proc/sys/net/ipv6/icmp/ratelimit</filename> or using the <filename>/proc/sys/net/ipv6/icmp/ratelimit</filename> or using the

View File

@ -55,6 +55,12 @@ has the following highlights: </para>
following incompatible changes:</para> following incompatible changes:</para>
<itemizedlist> <itemizedlist>
<listitem>
<para>
<literal>aiccu</literal> package was removed. This is due to SixXS
<link xlink:href="https://www.sixxs.net/main/"> sunsetting</link> its IPv6 tunnel.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
Top-level <literal>idea</literal> package collection was renamed. Top-level <literal>idea</literal> package collection was renamed.
@ -78,6 +84,35 @@ rmdir /var/lib/ipfs/.ipfs
</programlisting> </programlisting>
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The <literal>postgres</literal> default version was changed from 9.5 to 9.6.
</para>
<para>
The <literal>postgres</literal> superuser name has changed from <literal>root</literal> to <literal>postgres</literal> to more closely follow what other Linux distributions are doing.
</para>
<para>
The <literal>postgres</literal> default <literal>dataDir</literal> has changed from <literal>/var/db/postgres</literal> to <literal>/var/lib/postgresql/$psqlSchema</literal> where $psqlSchema is 9.6 for example.
</para>
</listitem>
<listitem>
<para>
The <literal>caddy</literal> service was previously using an extra
<literal>.caddy</literal> in the data directory specified with the
<literal>dataDir</literal> option. The contents of the
<literal>.caddy</literal> directory are now expected to be in the
<literal>dataDir</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>ssh-agent</literal> user service is not started by default
anymore. Use <literal>programs.ssh.startAgent</literal> to enable it if
needed. There is also a new <literal>programs.gnupg.agent</literal>
module that creates a <literal>gpg-agent</literal> user service. It can
also serve as a SSH agent if <literal>enableSSHSupport</literal> is set.
</para>
</listitem>
</itemizedlist> </itemizedlist>

View File

@ -219,8 +219,8 @@ sub waitForMonitorPrompt {
sub retry { sub retry {
my ($coderef) = @_; my ($coderef) = @_;
my $n; my $n;
for ($n = 0; $n < 900; $n++) { for ($n = 899; $n >=0; $n--) {
return if &$coderef; return if &$coderef($n);
sleep 1; sleep 1;
} }
die "action timed out after $n seconds"; die "action timed out after $n seconds";
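The `retry` change above counts down instead of up and passes the number of retries remaining into the callback, which is what lets the `waitFor*` helpers below dump diagnostics on their final attempt. The same pattern in a small, fast shell sketch (attempt count shortened, names illustrative):

```shell
# Countdown retry: the predicate receives the retries remaining, so it
# can emit extra diagnostics when it is about to run out of attempts.
retry() {
  for n in 4 3 2 1 0; do
    "$@" "$n" && return 0
    # sleep 1   # the real test driver sleeps between attempts
  done
  echo "action timed out" >&2
  return 1
}

attempts=0
check() {
  attempts=$((attempts + 1))
  [ "$1" -eq 0 ] && echo "last chance, dumping state"
  [ "$attempts" -ge 3 ]      # succeed on the third attempt
}

retry check && result="succeeded after $attempts attempts"
echo "$result"
```

Here the predicate succeeds before the counter reaches zero, so the "last chance" diagnostic never fires; shrink the threshold past the attempt budget to see it.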
@ -518,6 +518,12 @@ sub waitUntilTTYMatches {
$self->nest("waiting for $regexp to appear on tty $tty", sub { $self->nest("waiting for $regexp to appear on tty $tty", sub {
retry sub { retry sub {
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on TTY$tty, which currently contains:");
$self->log($self->getTTYText($tty));
}
return 1 if $self->getTTYText($tty) =~ /$regexp/; return 1 if $self->getTTYText($tty) =~ /$regexp/;
} }
}); });
@ -566,6 +572,12 @@ sub waitForText {
my ($self, $regexp) = @_; my ($self, $regexp) = @_;
$self->nest("waiting for $regexp to appear on the screen", sub { $self->nest("waiting for $regexp to appear on the screen", sub {
retry sub { retry sub {
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on the screen, which currently contains:");
$self->log($self->getScreenText);
}
return 1 if $self->getScreenText =~ /$regexp/; return 1 if $self->getScreenText =~ /$regexp/;
} }
}); });
@ -600,6 +612,13 @@ sub waitForWindow {
$self->nest("waiting for a window to appear", sub { $self->nest("waiting for a window to appear", sub {
retry sub { retry sub {
my @names = $self->getWindowNames; my @names = $self->getWindowNames;
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on the window list, which currently contains:");
$self->log(join(", ", @names));
}
foreach my $n (@names) { foreach my $n (@names) {
return 1 if $n =~ /$regexp/; return 1 if $n =~ /$regexp/;
} }

View File

@ -35,7 +35,7 @@ foreach my $vlan (split / /, $ENV{VLANS} || "") {
if ($pid == 0) { if ($pid == 0) {
dup2(fileno($pty->slave), 0); dup2(fileno($pty->slave), 0);
dup2(fileno($stdoutW), 1); dup2(fileno($stdoutW), 1);
exec "vde_switch -s $socket" or _exit(1); exec "vde_switch -s $socket --dirmode 0700" or _exit(1);
} }
close $stdoutW; close $stdoutW;
print $pty "version\n"; print $pty "version\n";

View File

@ -222,13 +222,11 @@ in
'' + cfg.extraResolvconfConf + '' '' + cfg.extraResolvconfConf + ''
''; '';
} // (optionalAttrs config.services.resolved.enable ( } // optionalAttrs config.services.resolved.enable {
if dnsmasqResolve then {
"dnsmasq-resolv.conf".source = "/run/systemd/resolve/resolv.conf";
} else {
"resolv.conf".source = "/run/systemd/resolve/resolv.conf"; "resolv.conf".source = "/run/systemd/resolve/resolv.conf";
} } // optionalAttrs (config.services.resolved.enable && dnsmasqResolve) {
)); "dnsmasq-resolv.conf".source = "/run/systemd/resolve/resolv.conf";
};
networking.proxy.envVars = networking.proxy.envVars =
optionalAttrs (cfg.proxy.default != null) { optionalAttrs (cfg.proxy.default != null) {

View File

@ -6,24 +6,29 @@ with lib;
let let
inherit (config.services.avahi) nssmdns; # only with nscd up and running we can load NSS modules that are not integrated in NSS
inherit (config.services.samba) nsswins; canLoadExternalModules = config.services.nscd.enable;
ldap = (config.users.ldap.enable && config.users.ldap.nsswitch); myhostname = canLoadExternalModules;
sssd = config.services.sssd.enable; mymachines = canLoadExternalModules;
resolved = config.services.resolved.enable; nssmdns = canLoadExternalModules && config.services.avahi.nssmdns;
nsswins = canLoadExternalModules && config.services.samba.nsswins;
ldap = canLoadExternalModules && (config.users.ldap.enable && config.users.ldap.nsswitch);
sssd = canLoadExternalModules && config.services.sssd.enable;
resolved = canLoadExternalModules && config.services.resolved.enable;
hostArray = [ "files" "mymachines" ] hostArray = [ "files" ]
++ optionals mymachines [ "mymachines" ]
++ optionals nssmdns [ "mdns_minimal [!UNAVAIL=return]" ] ++ optionals nssmdns [ "mdns_minimal [!UNAVAIL=return]" ]
++ optionals nsswins [ "wins" ] ++ optionals nsswins [ "wins" ]
++ optionals resolved ["resolv [!UNAVAIL=return]"] ++ optionals resolved ["resolve [!UNAVAIL=return]"]
++ [ "dns" ] ++ [ "dns" ]
++ optionals nssmdns [ "mdns" ] ++ optionals nssmdns [ "mdns" ]
++ ["myhostname" ]; ++ optionals myhostname ["myhostname" ];
passwdArray = [ "files" ] passwdArray = [ "files" ]
++ optional sssd "sss" ++ optional sssd "sss"
++ optionals ldap [ "ldap" ] ++ optionals ldap [ "ldap" ]
++ [ "mymachines" ]; ++ optionals mymachines [ "mymachines" ];
shadowArray = [ "files" ] shadowArray = [ "files" ]
++ optional sssd "sss" ++ optional sssd "sss"
@ -36,6 +41,7 @@ in {
options = { options = {
# NSS modules. Hacky! # NSS modules. Hacky!
# Only works with nscd!
system.nssModules = mkOption { system.nssModules = mkOption {
type = types.listOf types.path; type = types.listOf types.path;
internal = true; internal = true;
@ -55,6 +61,18 @@ in {
}; };
config = { config = {
assertions = [
{
# Generic catch-all in case a NixOS module adds to nssModules without guarding it behind a more specific assertion.
assertion = config.system.nssModules.path != "" -> canLoadExternalModules;
message = "Loading NSS modules from path ${config.system.nssModules.path} requires nscd to be enabled.";
}
{
# resolved does not add to nssModules, so it needs its own assertion
assertion = resolved -> canLoadExternalModules;
message = "Loading systemd-resolved's nss-resolve NSS module requires nscd to be enabled.";
}
];
# Name Service Switch configuration file. Required by the C # Name Service Switch configuration file. Required by the C
# library. !!! Factor out the mdns stuff. The avahi module # library. !!! Factor out the mdns stuff. The avahi module
@ -78,7 +96,7 @@ in {
# configured IP addresses, or ::1 and 127.0.0.2 as # configured IP addresses, or ::1 and 127.0.0.2 as
# fallbacks. Systemd also provides nss-mymachines to return IP # fallbacks. Systemd also provides nss-mymachines to return IP
# addresses of local containers. # addresses of local containers.
system.nssModules = [ config.systemd.package.out ]; system.nssModules = optionals canLoadExternalModules [ config.systemd.package.out ];
}; };
} }
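The rewritten nsswitch module above gates every external NSS module on nscd being enabled, and builds the `hosts` line in a fixed order. That assembly can be sketched as plain shell (flag values are illustrative; note the fix from `resolv` to `resolve` for systemd-resolved):

```shell
# Assemble the nsswitch.conf hosts line the way the module does:
# each source is appended only when its backing service is enabled.
nscd=1 nssmdns=1 nsswins=0 resolved=0

hosts="files"
[ "$nscd" = 1 ]     && hosts="$hosts mymachines"
[ "$nssmdns" = 1 ]  && hosts="$hosts mdns_minimal [!UNAVAIL=return]"
[ "$nsswins" = 1 ]  && hosts="$hosts wins"
[ "$resolved" = 1 ] && hosts="$hosts resolve [!UNAVAIL=return]"
hosts="$hosts dns"
[ "$nssmdns" = 1 ]  && hosts="$hosts mdns"
[ "$nscd" = 1 ]     && hosts="$hosts myhostname"

echo "hosts: $hosts"
```

With nscd disabled, every conditional source drops out and only `files dns` remains, matching the module's `canLoadExternalModules` guard.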

View File

@ -240,11 +240,14 @@ in {
}; };
systemd.user = { systemd.user = {
services.pulseaudio = { services.pulseaudio = {
restartIfChanged = true;
serviceConfig = { serviceConfig = {
RestartSec = "500ms"; RestartSec = "500ms";
PassEnvironment = "DISPLAY";
}; };
environment = { DISPLAY = ":${toString config.services.xserver.display}"; }; };
restartIfChanged = true; sockets.pulseaudio = {
wantedBy = [ "sockets.target" ];
}; };
}; };
}) })

View File

@ -28,7 +28,7 @@ let
nvidia_libs32 = (nvidiaForKernel pkgs_i686.linuxPackages).override { libsOnly = true; kernel = null; }; nvidia_libs32 = (nvidiaForKernel pkgs_i686.linuxPackages).override { libsOnly = true; kernel = null; };
nvidiaPackage = nvidia: pkgs: nvidiaPackage = nvidia: pkgs:
if !nvidia.useGLVND then nvidia if !nvidia.useGLVND then nvidia.out
else pkgs.buildEnv { else pkgs.buildEnv {
name = "nvidia-libs"; name = "nvidia-libs";
paths = [ pkgs.libglvnd nvidia.out ]; paths = [ pkgs.libglvnd nvidia.out ];
@ -56,7 +56,8 @@ in
hardware.opengl.package = nvidiaPackage nvidia_x11 pkgs; hardware.opengl.package = nvidiaPackage nvidia_x11 pkgs;
hardware.opengl.package32 = nvidiaPackage nvidia_libs32 pkgs_i686; hardware.opengl.package32 = nvidiaPackage nvidia_libs32 pkgs_i686;
environment.systemPackages = [ nvidia_x11.bin nvidia_x11.settings nvidia_x11.persistenced ]; environment.systemPackages = [ nvidia_x11.bin nvidia_x11.settings ]
++ lib.filter (p: p != null) [ nvidia_x11.persistenced ];
boot.extraModulePackages = [ nvidia_x11.bin ]; boot.extraModulePackages = [ nvidia_x11.bin ];

View File

@ -1,5 +1,5 @@
{ {
x86_64-linux = "/nix/store/71im965h634iy99zsmlncw6qhx5jcclx-nix-1.11.9"; x86_64-linux = "/nix/store/crqd5wmrqipl4n1fcm5kkc1zg4sj80js-nix-1.11.11";
i686-linux = "/nix/store/cgvavixkayc36l6kl92i8mxr6k0p2yhy-nix-1.11.9"; i686-linux = "/nix/store/wsjn14xp5ja509d4dxb1c78zhirw0b5x-nix-1.11.11";
x86_64-darwin = "/nix/store/w1c96v5yxvdmq4nvqlxjvg6kp7xa2lag-nix-1.11.9"; x86_64-darwin = "/nix/store/zqkqnhk85g2shxlpb04y72h1i3db3gpl-nix-1.11.11";
} }

View File

@ -294,6 +294,8 @@
jackett = 276; jackett = 276;
aria2 = 277; aria2 = 277;
clickhouse = 278; clickhouse = 278;
rslsync = 279;
minio = 280;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399! # When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -557,6 +559,8 @@
jackett = 276; jackett = 276;
aria2 = 277; aria2 = 277;
clickhouse = 278; clickhouse = 278;
rslsync = 279;
minio = 280;
# When adding a gid, make sure it doesn't match an existing # When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal # uid. Users and groups with the same name should have equal

View File

@ -131,9 +131,9 @@ in {
path = mkIf (!isMLocate) [ pkgs.su ]; path = mkIf (!isMLocate) [ pkgs.su ];
script = script =
'' ''
install -m ${if isMLocate then "0750" else "0755"} -o root -g ${if isMLocate then "mlocate" else "root"} -d $(dirname ${cfg.output}) mkdir -m 0755 -p ${dirOf cfg.output}
exec ${cfg.locate}/bin/updatedb \ exec ${cfg.locate}/bin/updatedb \
${optionalString (cfg.localuser != null) ''--localuser=${cfg.localuser}''} \ ${optionalString (cfg.localuser != null && ! isMLocate) ''--localuser=${cfg.localuser}''} \
--output=${toString cfg.output} ${concatStringsSep " " cfg.extraFlags} --output=${toString cfg.output} ${concatStringsSep " " cfg.extraFlags}
''; '';
environment = { environment = {

View File

@ -80,6 +80,7 @@
./programs/environment.nix ./programs/environment.nix
./programs/fish.nix ./programs/fish.nix
./programs/freetds.nix ./programs/freetds.nix
./programs/gnupg.nix
./programs/gphoto2.nix ./programs/gphoto2.nix
./programs/info.nix ./programs/info.nix
./programs/java.nix ./programs/java.nix
@ -98,6 +99,7 @@
./programs/spacefm.nix ./programs/spacefm.nix
./programs/ssh.nix ./programs/ssh.nix
./programs/ssmtp.nix ./programs/ssmtp.nix
./programs/thefuck.nix
./programs/tmux.nix ./programs/tmux.nix
./programs/venus.nix ./programs/venus.nix
./programs/vim.nix ./programs/vim.nix
@ -250,6 +252,7 @@
./services/mail/exim.nix ./services/mail/exim.nix
./services/mail/freepops.nix ./services/mail/freepops.nix
./services/mail/mail.nix ./services/mail/mail.nix
./services/mail/mailhog.nix
./services/mail/mlmmj.nix ./services/mail/mlmmj.nix
./services/mail/offlineimap.nix ./services/mail/offlineimap.nix
./services/mail/opendkim.nix ./services/mail/opendkim.nix
@ -282,6 +285,7 @@
./services/misc/etcd.nix ./services/misc/etcd.nix
./services/misc/felix.nix ./services/misc/felix.nix
./services/misc/folding-at-home.nix ./services/misc/folding-at-home.nix
./services/misc/fstrim.nix
./services/misc/gammu-smsd.nix ./services/misc/gammu-smsd.nix
./services/misc/geoip-updater.nix ./services/misc/geoip-updater.nix
#./services/misc/gitit.nix #./services/misc/gitit.nix
@ -386,7 +390,6 @@
./services/network-filesystems/u9fs.nix ./services/network-filesystems/u9fs.nix
./services/network-filesystems/yandex-disk.nix ./services/network-filesystems/yandex-disk.nix
./services/network-filesystems/xtreemfs.nix ./services/network-filesystems/xtreemfs.nix
./services/networking/aiccu.nix
./services/networking/amuled.nix ./services/networking/amuled.nix
./services/networking/asterisk.nix ./services/networking/asterisk.nix
./services/networking/atftpd.nix ./services/networking/atftpd.nix
@ -484,6 +487,7 @@
./services/networking/radvd.nix ./services/networking/radvd.nix
./services/networking/rdnssd.nix ./services/networking/rdnssd.nix
./services/networking/redsocks.nix ./services/networking/redsocks.nix
./services/networking/resilio.nix
./services/networking/rpcbind.nix ./services/networking/rpcbind.nix
./services/networking/sabnzbd.nix ./services/networking/sabnzbd.nix
./services/networking/searx.nix ./services/networking/searx.nix
@ -572,6 +576,7 @@
./services/web-apps/frab.nix ./services/web-apps/frab.nix
./services/web-apps/mattermost.nix ./services/web-apps/mattermost.nix
./services/web-apps/nixbot.nix ./services/web-apps/nixbot.nix
./services/web-apps/piwik.nix
./services/web-apps/pump.io.nix ./services/web-apps/pump.io.nix
./services/web-apps/tt-rss.nix ./services/web-apps/tt-rss.nix
./services/web-apps/selfoss.nix ./services/web-apps/selfoss.nix
@ -584,6 +589,7 @@
./services/web-servers/lighttpd/default.nix ./services/web-servers/lighttpd/default.nix
./services/web-servers/lighttpd/gitweb.nix ./services/web-servers/lighttpd/gitweb.nix
./services/web-servers/lighttpd/inginious.nix ./services/web-servers/lighttpd/inginious.nix
./services/web-servers/minio.nix
./services/web-servers/nginx/default.nix ./services/web-servers/nginx/default.nix
./services/web-servers/phpfpm/default.nix ./services/web-servers/phpfpm/default.nix
./services/web-servers/shellinabox.nix ./services/web-servers/shellinabox.nix

View File

@ -55,7 +55,7 @@ with lib;
# same privileges as it would have inside it. This is particularly # same privileges as it would have inside it. This is particularly
# bad in the common case of running as root within the namespace. # bad in the common case of running as root within the namespace.
# #
# Setting the number of allowed userns to 0 effectively disables # Setting the number of allowed user namespaces to 0 effectively disables
# the feature at runtime. Attempting to create a user namespace # the feature at runtime. Attempting to create a user namespace
# with unshare will then fail with "no space left on device". # with unshare will then fail with "no space left on device".
boot.kernel.sysctl."user.max_user_namespaces" = mkDefault 0; boot.kernel.sysctl."user.max_user_namespaces" = mkDefault 0;

View File

@ -6,21 +6,17 @@ with lib;
###### interface ###### interface
options = { options = {
programs.browserpass = { programs.browserpass.enable = mkEnableOption "the NativeMessaging configuration for Chromium, Chrome, and Vivaldi.";
enable = mkOption {
default = false;
type = types.bool;
description = ''
Whether to install the NativeMessaging configuration for installed browsers.
'';
};
};
}; };
###### implementation ###### implementation
config = mkIf config.programs.browserpass.enable { config = mkIf config.programs.browserpass.enable {
environment.systemPackages = [ pkgs.browserpass ]; environment.systemPackages = [ pkgs.browserpass ];
environment.etc."chromium/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json"; environment.etc = {
environment.etc."opt/chrome/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json"; "chromium/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json";
"chromium/policies/managed/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-policy.json";
"opt/chrome/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json";
"opt/chrome/policies/managed/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-policy.json";
};
}; };
} }

View File

@ -0,0 +1,156 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.programs.gnupg;
in
{
options.programs.gnupg = {
agent.enable = mkOption {
type = types.bool;
default = false;
description = ''
Enables GnuPG agent with socket-activation for every user session.
'';
};
agent.enableSSHSupport = mkOption {
type = types.bool;
default = false;
description = ''
Enable SSH agent support in the GnuPG agent. Also sets the SSH_AUTH_SOCK
environment variable correctly. This will disable socket-activation
and thus always start a GnuPG agent per user session.
'';
};
agent.enableExtraSocket = mkOption {
type = types.bool;
default = false;
description = ''
Enable extra socket for GnuPG agent.
'';
};
agent.enableBrowserSocket = mkOption {
type = types.bool;
default = false;
description = ''
Enable browser socket for GnuPG agent.
'';
};
dirmngr.enable = mkOption {
type = types.bool;
default = false;
description = ''
Enables the GnuPG network certificate management daemon with socket-activation for every user session.
'';
};
};
config = mkIf cfg.agent.enable {
systemd.user.services.gpg-agent = {
serviceConfig = {
ExecStart = [
""
("${pkgs.gnupg}/bin/gpg-agent --supervised "
+ optionalString cfg.agent.enableSSHSupport "--enable-ssh-support")
];
ExecReload = "${pkgs.gnupg}/bin/gpgconf --reload gpg-agent";
};
};
systemd.user.sockets.gpg-agent = {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent" ];
socketConfig = {
FileDescriptorName = "std";
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.user.sockets.gpg-agent-ssh = mkIf cfg.agent.enableSSHSupport {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.ssh" ];
socketConfig = {
FileDescriptorName = "ssh";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.user.sockets.gpg-agent-extra = mkIf cfg.agent.enableExtraSocket {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.extra" ];
socketConfig = {
FileDescriptorName = "extra";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.user.sockets.gpg-agent-browser = mkIf cfg.agent.enableBrowserSocket {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.browser" ];
socketConfig = {
FileDescriptorName = "browser";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.user.services.dirmngr = {
requires = [ "dirmngr.socket" ];
after = [ "dirmngr.socket" ];
unitConfig = {
RefuseManualStart = "true";
};
serviceConfig = {
ExecStart = "${pkgs.gnupg}/bin/dirmngr --supervised";
ExecReload = "${pkgs.gnupg}/bin/gpgconf --reload dirmngr";
};
};
systemd.user.sockets.dirmngr = {
wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.dirmngr" ];
socketConfig = {
SocketMode = "0600";
DirectoryMode = "0700";
};
};
systemd.packages = [ pkgs.gnupg ];
environment.extraInit = ''
# Bind gpg-agent to this TTY if gpg commands are used.
export GPG_TTY=$(tty)
'' + (optionalString cfg.agent.enableSSHSupport ''
# SSH agent protocol doesn't support changing TTYs, so bind the agent
# to every new TTY.
${pkgs.gnupg}/bin/gpg-connect-agent --quiet updatestartuptty /bye > /dev/null
if [ -z "$SSH_AUTH_SOCK" ]; then
export SSH_AUTH_SOCK=$(${pkgs.gnupg}/bin/gpgconf --list-dirs agent-ssh-socket)
fi
'');
assertions = [
{ assertion = cfg.agent.enableSSHSupport && !config.programs.ssh.startAgent;
message = "You can't use ssh-agent and GnuPG agent with SSH support enabled at the same time!";
}
];
};
}
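The `environment.extraInit` snippet in the module above only exports `SSH_AUTH_SOCK` when nothing else has claimed it, so a pre-existing or forwarded agent wins. A runnable sketch of that guard (`fake_gpgconf` is a stand-in for `gpgconf --list-dirs agent-ssh-socket`, and the socket paths it deals in are invented):

```shell
# Adopt the gpg-agent SSH socket only if no other agent is active.
fake_gpgconf() { echo "/run/user/1000/gnupg/S.gpg-agent.ssh"; }

unset SSH_AUTH_SOCK
if [ -z "$SSH_AUTH_SOCK" ]; then
  export SSH_AUTH_SOCK=$(fake_gpgconf)
fi
first="$SSH_AUTH_SOCK"          # gpg-agent socket adopted

SSH_AUTH_SOCK=/tmp/other-agent.sock
if [ -z "$SSH_AUTH_SOCK" ]; then
  export SSH_AUTH_SOCK=$(fake_gpgconf)
fi
echo "kept: $SSH_AUTH_SOCK"     # pre-existing value is preserved
```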

View File

@ -74,7 +74,7 @@ in
startAgent = mkOption { startAgent = mkOption {
type = types.bool; type = types.bool;
default = true; default = false;
description = '' description = ''
Whether to start the OpenSSH agent when you log in. The OpenSSH agent Whether to start the OpenSSH agent when you log in. The OpenSSH agent
remembers private keys for you so that you don't have to type in remembers private keys for you so that you don't have to type in
@ -199,9 +199,8 @@ in
environment.etc."ssh/ssh_known_hosts".text = knownHostsText; environment.etc."ssh/ssh_known_hosts".text = knownHostsText;
# FIXME: this should really be socket-activated for über-awesomeness. # FIXME: this should really be socket-activated for über-awesomeness.
systemd.user.services.ssh-agent = systemd.user.services.ssh-agent = mkIf cfg.startAgent
{ enable = cfg.startAgent; { description = "SSH Agent";
description = "SSH Agent";
wantedBy = [ "default.target" ]; wantedBy = [ "default.target" ];
serviceConfig = serviceConfig =
{ ExecStartPre = "${pkgs.coreutils}/bin/rm -f %t/ssh-agent"; { ExecStartPre = "${pkgs.coreutils}/bin/rm -f %t/ssh-agent";

View File

@ -0,0 +1,31 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.programs.thefuck;
in
{
options = {
programs.thefuck = {
enable = mkEnableOption "thefuck";
alias = mkOption {
default = "fuck";
type = types.string;
description = ''
`thefuck` needs an alias to be configured.
The default value is `fuck`, but you can use anything else as well.
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ thefuck ];
environment.shellInit = ''
eval $(${pkgs.thefuck}/bin/thefuck --alias ${cfg.alias})
'';
};
}
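`programs.thefuck.enable` works by `eval`-ing shell code that thefuck prints, which defines a function named after `cfg.alias`. A sketch of that mechanism with a stub generator (thefuck itself is not assumed to be installed, and the function body here is invented for illustration):

```shell
# What `eval $(thefuck --alias fuck)` amounts to: evaluate printed
# shell code that defines a function with the configured alias name.
make_alias() { printf '%s() { echo "retrying last command via %s"; }\n' "$1" "$1"; }

eval "$(make_alias fuck)"
out=$(fuck)
echo "$out"
```

Because the alias is a function defined at shell init, any name set in `cfg.alias` works the same way.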

View File

@ -8,13 +8,7 @@ in
{ {
options = { options = {
programs.zsh.syntaxHighlighting = { programs.zsh.syntaxHighlighting = {
enable = mkOption { enable = mkEnableOption "zsh-syntax-highlighting";
default = false;
type = types.bool;
description = ''
Enable zsh-syntax-highlighting.
'';
};
highlighters = mkOption { highlighters = mkOption {
default = [ "main" ]; default = [ "main" ];
@ -38,13 +32,13 @@ in
}; };
patterns = mkOption { patterns = mkOption {
default = []; default = {};
type = types.listOf(types.listOf(types.string)); type = types.attrsOf types.string;
example = literalExample '' example = literalExample ''
[ {
["rm -rf *" "fg=white,bold,bg=red"] "rm -rf *" = "fg=white,bold,bg=red";
] }
''; '';
description = '' description = ''
@ -67,14 +61,17 @@ in
"ZSH_HIGHLIGHT_HIGHLIGHTERS=(${concatStringsSep " " cfg.highlighters})" "ZSH_HIGHLIGHT_HIGHLIGHTERS=(${concatStringsSep " " cfg.highlighters})"
} }
${optionalString (length(cfg.patterns) > 0) ${let
n = attrNames cfg.patterns;
in
optionalString (length(n) > 0)
(assert(elem "pattern" cfg.highlighters); (foldl ( (assert(elem "pattern" cfg.highlighters); (foldl (
a: b: a: b:
assert(length(b) == 2); ''
${a}
ZSH_HIGHLIGHT_PATTERNS+=('${elemAt b 0}' '${elemAt b 1}')
'' ''
) "") cfg.patterns) ${a}
ZSH_HIGHLIGHT_PATTERNS+=('${b}' '${attrByPath [b] "" cfg.patterns}')
''
) "") n)
} }
''; '';
}; };

View File

@ -117,7 +117,7 @@ in
# Tell zsh how to find installed completions # Tell zsh how to find installed completions
for p in ''${(z)NIX_PROFILES}; do for p in ''${(z)NIX_PROFILES}; do
fpath+=($p/share/zsh/site-functions $p/share/zsh/$ZSH_VERSION/functions) fpath+=($p/share/zsh/site-functions $p/share/zsh/$ZSH_VERSION/functions $p/share/zsh/vendor-completions)
done done
${if cfg.enableCompletion then "autoload -U compinit && compinit" else ""} ${if cfg.enableCompletion then "autoload -U compinit && compinit" else ""}

View File

@ -13,7 +13,7 @@ let
description = '' description = ''
Where the webroot of the HTTP vhost is located. Where the webroot of the HTTP vhost is located.
<filename>.well-known/acme-challenge/</filename> directory <filename>.well-known/acme-challenge/</filename> directory
will be created automatically if it doesn't exist. will be created below the webroot if it doesn't exist.
<literal>http://example.org/.well-known/acme-challenge/</literal> must also <literal>http://example.org/.well-known/acme-challenge/</literal> must also
be available (notice unencrypted HTTP). be available (notice unencrypted HTTP).
''; '';
@ -46,7 +46,10 @@ let
allowKeysForGroup = mkOption { allowKeysForGroup = mkOption {
type = types.bool; type = types.bool;
default = false; default = false;
description = "Give read permissions to the specified group to read SSL private certificates."; description = ''
Give read permissions to the specified group
(<option>security.acme.group</option>) to read SSL private certificates.
'';
}; };
postRun = mkOption { postRun = mkOption {
@ -65,21 +68,24 @@ let
"cert.der" "cert.pem" "chain.pem" "external.sh" "cert.der" "cert.pem" "chain.pem" "external.sh"
"fullchain.pem" "full.pem" "key.der" "key.pem" "account_key.json" "fullchain.pem" "full.pem" "key.der" "key.pem" "account_key.json"
]); ]);
default = [ "fullchain.pem" "key.pem" "account_key.json" ]; default = [ "fullchain.pem" "full.pem" "key.pem" "account_key.json" ];
description = '' description = ''
Plugins to enable. With default settings simp_le will Plugins to enable. With default settings simp_le will
store public certificate bundle in <filename>fullchain.pem</filename> store public certificate bundle in <filename>fullchain.pem</filename>,
and private key in <filename>key.pem</filename> in its state directory. private key in <filename>key.pem</filename> and those two previous
files combined in <filename>full.pem</filename> in its state directory.
''; '';
}; };
extraDomains = mkOption { extraDomains = mkOption {
type = types.attrsOf (types.nullOr types.str); type = types.attrsOf (types.nullOr types.str);
default = {}; default = {};
example = { example = literalExample ''
{
"example.org" = "/srv/http/nginx"; "example.org" = "/srv/http/nginx";
"mydomain.org" = null; "mydomain.org" = null;
}; }
'';
description = '' description = ''
Extra domain names for which certificates are to be issued, with their Extra domain names for which certificates are to be issued, with their
own server roots if needed. own server roots if needed.
@ -139,7 +145,8 @@ in
description = '' description = ''
Attribute set of certificates to get signed and renewed. Attribute set of certificates to get signed and renewed.
''; '';
example = { example = literalExample ''
{
"example.com" = { "example.com" = {
webroot = "/var/www/challenges/"; webroot = "/var/www/challenges/";
email = "foo@example.com"; email = "foo@example.com";
@ -149,7 +156,8 @@ in
webroot = "/var/www/challenges/"; webroot = "/var/www/challenges/";
email = "bar@example.com"; email = "bar@example.com";
}; };
}; }
'';
}; };
}; };
}; };
@ -238,6 +246,9 @@ in
mv $workdir/server.key ${cpath}/key.pem mv $workdir/server.key ${cpath}/key.pem
mv $workdir/server.crt ${cpath}/fullchain.pem mv $workdir/server.crt ${cpath}/fullchain.pem
# Create full.pem for e.g. lighttpd (same format as "simp_le ... -f full.pem" creates)
cat "${cpath}/key.pem" "${cpath}/fullchain.pem" > "${cpath}/full.pem"
# Clean up working directory # Clean up working directory
rm $workdir/server.csr rm $workdir/server.csr
rm $workdir/server.pass.key rm $workdir/server.pass.key
@ -247,6 +258,8 @@ in
chown '${data.user}:${data.group}' '${cpath}/key.pem' chown '${data.user}:${data.group}' '${cpath}/key.pem'
chmod ${rights} '${cpath}/fullchain.pem' chmod ${rights} '${cpath}/fullchain.pem'
chown '${data.user}:${data.group}' '${cpath}/fullchain.pem' chown '${data.user}:${data.group}' '${cpath}/fullchain.pem'
chmod ${rights} '${cpath}/full.pem'
chown '${data.user}:${data.group}' '${cpath}/full.pem'
''; '';
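The `full.pem` assembled above is simply the private key concatenated with the certificate chain, the single-file format lighttpd's `ssl.pemfile` expects. A hypothetical vhost snippet consuming it (the certificate path is an illustrative assumption, not taken from the diff):

```nix
{
  services.lighttpd.extraConfig = ''
    $SERVER["socket"] == ":443" {
      ssl.engine  = "enable"
      # full.pem = key.pem + fullchain.pem, as assembled in the preStart script
      ssl.pemfile = "/var/lib/acme/example.com/full.pem"
    }
  '';
}
```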
serviceConfig = { serviceConfig = {
Type = "oneshot"; Type = "oneshot";
@ -275,15 +288,14 @@ in
) )
); );
servicesAttr = listToAttrs services; servicesAttr = listToAttrs services;
nginxAttr = { injectServiceDep = {
nginx = {
after = [ "acme-selfsigned-certificates.target" ]; after = [ "acme-selfsigned-certificates.target" ];
wants = [ "acme-selfsigned-certificates.target" "acme-certificates.target" ]; wants = [ "acme-selfsigned-certificates.target" "acme-certificates.target" ];
}; };
};
in in
servicesAttr // servicesAttr //
(if config.services.nginx.enable then nginxAttr else {}); (if config.services.nginx.enable then { nginx = injectServiceDep; } else {}) //
(if config.services.lighttpd.enable then { lighttpd = injectServiceDep; } else {});
systemd.timers = flip mapAttrs' cfg.certs (cert: data: nameValuePair systemd.timers = flip mapAttrs' cfg.certs (cert: data: nameValuePair
("acme-${cert}") ("acme-${cert}")

View File

@ -80,8 +80,8 @@ let
group = "root"; group = "root";
} // s) } // s)
else if else if
(s ? "setuid" && s.setuid == true) || (s ? "setuid" && s.setuid) ||
(s ? "setguid" && s.setguid == true) || (s ? "setgid" && s.setgid) ||
(s ? "permissions") (s ? "permissions")
then mkSetuidProgram s then mkSetuidProgram s
else mkSetuidProgram else mkSetuidProgram
@ -171,7 +171,7 @@ in
###### setcap activation script ###### setcap activation script
system.activationScripts.wrappers = system.activationScripts.wrappers =
lib.stringAfter [ "users" ] lib.stringAfter [ "specialfs" "users" ]
'' ''
# Look in the system path and in the default profile for # Look in the system path and in the default profile for
# programs to be wrapped. # programs to be wrapped.

View File

@ -40,7 +40,7 @@ let
}); });
policyFile = pkgs.writeText "kube-policy" policyFile = pkgs.writeText "kube-policy"
concatStringsSep "\n" (map (builtins.toJSON cfg.apiserver.authorizationPolicy)); (concatStringsSep "\n" (map builtins.toJSON cfg.apiserver.authorizationPolicy));
cniConfig = pkgs.buildEnv { cniConfig = pkgs.buildEnv {
name = "kubernetes-cni-config"; name = "kubernetes-cni-config";

View File

@ -308,6 +308,7 @@ in
requires = [ "hydra-init.service" ]; requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" ]; after = [ "hydra-init.service" ];
environment = serverEnv; environment = serverEnv;
restartTriggers = [ hydraConf ];
serviceConfig = serviceConfig =
{ ExecStart = { ExecStart =
"@${cfg.package}/bin/hydra-server hydra-server -f -h '${cfg.listenHost}' " "@${cfg.package}/bin/hydra-server hydra-server -f -h '${cfg.listenHost}' "
@ -324,6 +325,7 @@ in
requires = [ "hydra-init.service" ]; requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" "network.target" ]; after = [ "hydra-init.service" "network.target" ];
path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ]; path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ];
restartTriggers = [ hydraConf ];
environment = env // { environment = env // {
PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr
IN_SYSTEMD = "1"; # to get log severity levels IN_SYSTEMD = "1"; # to get log severity levels
@ -344,7 +346,8 @@ in
{ wantedBy = [ "multi-user.target" ]; { wantedBy = [ "multi-user.target" ];
requires = [ "hydra-init.service" ]; requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" "network.target" ]; after = [ "hydra-init.service" "network.target" ];
path = [ cfg.package pkgs.nettools ]; path = with pkgs; [ cfg.package nettools jq ];
restartTriggers = [ hydraConf ];
environment = env; environment = env;
serviceConfig = serviceConfig =
{ ExecStart = "@${cfg.package}/bin/hydra-evaluator hydra-evaluator"; { ExecStart = "@${cfg.package}/bin/hydra-evaluator hydra-evaluator";

View File

@ -125,6 +125,15 @@ in {
Additional command line arguments to pass to Jenkins. Additional command line arguments to pass to Jenkins.
''; '';
}; };
extraJavaOptions = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "-Xmx80m" ];
description = ''
Additional command line arguments to pass to the Java runtime (as opposed to Jenkins).
'';
};
}; };
}; };
@ -185,7 +194,7 @@ in {
''; '';
script = '' script = ''
${pkgs.jdk}/bin/java -jar ${pkgs.jenkins}/webapps/jenkins.war --httpListenAddress=${cfg.listenAddress} \ ${pkgs.jdk}/bin/java ${concatStringsSep " " cfg.extraJavaOptions} -jar ${pkgs.jenkins}/webapps/jenkins.war --httpListenAddress=${cfg.listenAddress} \
--httpPort=${toString cfg.port} \ --httpPort=${toString cfg.port} \
--prefix=${cfg.prefix} \ --prefix=${cfg.prefix} \
${concatStringsSep " " cfg.extraOptions} ${concatStringsSep " " cfg.extraOptions}
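With the new option, JVM flags and Jenkins flags stay separate: `extraJavaOptions` is spliced in before `-jar`, while `extraOptions` is appended after the war. A sketch using both (the flag values are illustrative):

```nix
{
  services.jenkins.enable = true;
  # Placed before -jar, so these go to the JVM
  services.jenkins.extraJavaOptions = [ "-Xmx512m" "-Djava.awt.headless=true" ];
  # Placed after the war, so these go to Jenkins itself
  services.jenkins.extraOptions = [ "--sessionTimeout=60" ];
}
```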

View File

@ -20,6 +20,7 @@ let
'' ''
[mysqld] [mysqld]
port = ${toString cfg.port} port = ${toString cfg.port}
${optionalString (cfg.bind != null) "bind-address = ${cfg.bind}" }
${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "log-bin=mysql-bin"} ${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "log-bin=mysql-bin"}
${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "server-id = ${toString cfg.replication.serverId}"} ${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "server-id = ${toString cfg.replication.serverId}"}
${optionalString (cfg.replication.role == "slave" && !atLeast55) ${optionalString (cfg.replication.role == "slave" && !atLeast55)
@ -58,6 +59,13 @@ in
"; ";
}; };
bind = mkOption {
type = types.nullOr types.str;
default = null;
example = literalExample "0.0.0.0";
description = "Address to bind to. The default is to bind to all addresses.";
};
port = mkOption { port = mkOption {
type = types.int; type = types.int;
default = 3306; default = 3306;
@ -72,7 +80,7 @@ in
dataDir = mkOption { dataDir = mkOption {
type = types.path; type = types.path;
default = "/var/mysql"; # !!! should be /var/db/mysql example = "/var/lib/mysql";
description = "Location where MySQL stores its table files"; description = "Location where MySQL stores its table files";
}; };
@ -166,6 +174,10 @@ in
config = mkIf config.services.mysql.enable { config = mkIf config.services.mysql.enable {
services.mysql.dataDir =
mkDefault (if versionAtLeast config.system.stateVersion "17.09" then "/var/lib/mysql"
else "/var/mysql");
users.extraUsers.mysql = { users.extraUsers.mysql = {
description = "MySQL server user"; description = "MySQL server user";
group = "mysql"; group = "mysql";
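Because the `dataDir` default above flips at `system.stateVersion` "17.09", existing installations can pin it explicitly alongside the new `bind` option; a sketch (package and paths illustrative):

```nix
{ pkgs, ... }:
{
  services.mysql.enable = true;
  services.mysql.package = pkgs.mysql;
  # New bind option: listen on loopback only instead of all addresses
  services.mysql.bind = "127.0.0.1";
  # Pin the pre-17.09 location so a stateVersion bump cannot move the data away
  services.mysql.dataDir = "/var/mysql";
}
```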

View File

@ -38,6 +38,10 @@ let
pre84 = versionOlder (builtins.parseDrvName postgresql.name).version "8.4"; pre84 = versionOlder (builtins.parseDrvName postgresql.name).version "8.4";
# NixOS traditionally used `root` as superuser, most other distros use `postgres`. From 17.09
# we also try to follow this standard
superuser = (if versionAtLeast config.system.stateVersion "17.09" then "postgres" else "root");
in in
{ {
@ -74,7 +78,7 @@ in
dataDir = mkOption { dataDir = mkOption {
type = types.path; type = types.path;
default = "/var/db/postgresql"; example = "/var/lib/postgresql/9.6";
description = '' description = ''
Data directory for PostgreSQL. Data directory for PostgreSQL.
''; '';
@ -160,7 +164,13 @@ in
# Note: when changing the default, make it conditional on # Note: when changing the default, make it conditional on
# system.stateVersion to maintain compatibility with existing # system.stateVersion to maintain compatibility with existing
# systems! # systems!
mkDefault (if versionAtLeast config.system.stateVersion "16.03" then pkgs.postgresql95 else pkgs.postgresql94); mkDefault (if versionAtLeast config.system.stateVersion "17.09" then pkgs.postgresql96
else if versionAtLeast config.system.stateVersion "16.03" then pkgs.postgresql95
else pkgs.postgresql94);
services.postgresql.dataDir =
mkDefault (if versionAtLeast config.system.stateVersion "17.09" then "/var/lib/postgresql/${config.services.postgresql.package.psqlSchema}"
else "/var/db/postgresql");
services.postgresql.authentication = mkAfter services.postgresql.authentication = mkAfter
'' ''
@ -205,7 +215,7 @@ in
'' ''
# Initialise the database. # Initialise the database.
if ! test -e ${cfg.dataDir}/PG_VERSION; then if ! test -e ${cfg.dataDir}/PG_VERSION; then
initdb -U root initdb -U ${superuser}
# See postStart! # See postStart!
touch "${cfg.dataDir}/.first_startup" touch "${cfg.dataDir}/.first_startup"
fi fi
@ -237,14 +247,14 @@ in
# Wait for PostgreSQL to be ready to accept connections. # Wait for PostgreSQL to be ready to accept connections.
postStart = postStart =
'' ''
while ! psql --port=${toString cfg.port} postgres -c "" 2> /dev/null; do while ! ${pkgs.sudo}/bin/sudo -u ${superuser} psql --port=${toString cfg.port} -d postgres -c "" 2> /dev/null; do
if ! kill -0 "$MAINPID"; then exit 1; fi if ! kill -0 "$MAINPID"; then exit 1; fi
sleep 0.1 sleep 0.1
done done
if test -e "${cfg.dataDir}/.first_startup"; then if test -e "${cfg.dataDir}/.first_startup"; then
${optionalString (cfg.initialScript != null) '' ${optionalString (cfg.initialScript != null) ''
psql -f "${cfg.initialScript}" --port=${toString cfg.port} postgres ${pkgs.sudo}/bin/sudo -u ${superuser} psql -f "${cfg.initialScript}" --port=${toString cfg.port} -d postgres
''} ''}
rm -f "${cfg.dataDir}/.first_startup" rm -f "${cfg.dataDir}/.first_startup"
fi fi
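Since both the `package` and `dataDir` defaults above now depend on `system.stateVersion`, systems installed before 17.09 can pin the old values explicitly rather than rely on the conditional; a sketch:

```nix
{ pkgs, ... }:
{
  services.postgresql.enable = true;
  # Pin package and dataDir so a later stateVersion bump cannot silently
  # point the service at a newer major version or an empty directory
  services.postgresql.package = pkgs.postgresql95;
  services.postgresql.dataDir = "/var/db/postgresql";
}
```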

View File

@ -0,0 +1,110 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.rethinkdb;
rethinkdb = cfg.package;
in
{
###### interface
options = {
services.rethinkdb = {
enable = mkOption {
default = false;
description = "Whether to enable the RethinkDB server.";
};
#package = mkOption {
# default = pkgs.rethinkdb;
# description = "Which RethinkDB derivation to use.";
#};
user = mkOption {
default = "rethinkdb";
description = "User account under which RethinkDB runs.";
};
group = mkOption {
default = "rethinkdb";
description = "Group which rethinkdb user belongs to.";
};
dbpath = mkOption {
default = "/var/db/rethinkdb";
description = "Location where RethinkDB stores its data, 1 data directory per instance.";
};
pidpath = mkOption {
default = "/var/run/rethinkdb";
description = "Location where each instance's pid file is located.";
};
#cfgpath = mkOption {
# default = "/etc/rethinkdb/instances.d";
# description = "Location where RethinkDB stores it config files, 1 config file per instance.";
#};
# TODO: currently not used by our implementation.
#instances = mkOption {
# type = types.attrsOf types.str;
# default = {};
# description = "List of named RethinkDB instances in our cluster.";
#};
};
};
###### implementation
config = mkIf config.services.rethinkdb.enable {
environment.systemPackages = [ rethinkdb ];
systemd.services.rethinkdb = {
description = "RethinkDB server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
# TODO: abstract away 'default', which is a per-instance directory name
# allowing end user of this nix module to provide multiple instances,
# and associated directory per instance
ExecStart = "${rethinkdb}/bin/rethinkdb -d ${cfg.dbpath}/default";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
User = cfg.user;
Group = cfg.group;
PIDFile = "${cfg.pidpath}/default.pid";
PermissionsStartOnly = true;
};
preStart = ''
if ! test -e ${cfg.dbpath}; then
install -d -m0755 -o ${cfg.user} -g ${cfg.group} ${cfg.dbpath}
install -d -m0755 -o ${cfg.user} -g ${cfg.group} ${cfg.dbpath}/default
chown -R ${cfg.user}:${cfg.group} ${cfg.dbpath}
fi
if ! test -e "${cfg.pidpath}/default.pid"; then
install -D -o ${cfg.user} -g ${cfg.group} /dev/null "${cfg.pidpath}/default.pid"
fi
'';
};
users.extraUsers.rethinkdb = mkIf (cfg.user == "rethinkdb")
{ name = "rethinkdb";
description = "RethinkDB server user";
};
users.extraGroups = optionalAttrs (cfg.group == "rethinkdb") (singleton
{ name = "rethinkdb";
});
};
}
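A minimal sketch enabling the new RethinkDB module (only `enable` is strictly needed; the `dbpath` line restates the default):

```nix
{
  services.rethinkdb.enable = true;
  # Single hard-coded instance for now; data lands in "${dbpath}/default"
  services.rethinkdb.dbpath = "/var/db/rethinkdb";
}
```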

View File

@ -4,10 +4,14 @@ with lib;
let let
cfg = config.services.logstash; cfg = config.services.logstash;
atLeast54 = versionAtLeast (builtins.parseDrvName cfg.package.name).version "5.4";
pluginPath = lib.concatStringsSep ":" cfg.plugins; pluginPath = lib.concatStringsSep ":" cfg.plugins;
havePluginPath = lib.length cfg.plugins > 0; havePluginPath = lib.length cfg.plugins > 0;
ops = lib.optionalString; ops = lib.optionalString;
verbosityFlag = { verbosityFlag =
if atLeast54
then "--log.level " + cfg.logLevel
else {
debug = "--debug"; debug = "--debug";
info = "--verbose"; info = "--verbose";
warn = ""; # intentionally empty warn = ""; # intentionally empty
@ -15,6 +19,31 @@ let
fatal = "--silent"; fatal = "--silent";
}."${cfg.logLevel}"; }."${cfg.logLevel}";
pluginsPath =
if atLeast54
then "--path.plugins ${pluginPath}"
else "--pluginpath ${pluginPath}";
logstashConf = pkgs.writeText "logstash.conf" ''
input {
${cfg.inputConfig}
}
filter {
${cfg.filterConfig}
}
output {
${cfg.outputConfig}
}
'';
logstashSettingsYml = pkgs.writeText "logstash.yml" cfg.extraSettings;
logstashSettingsDir = pkgs.runCommand "logstash-settings" {inherit logstashSettingsYml;} ''
mkdir -p $out
ln -s $logstashSettingsYml $out/logstash.yml
'';
in in
{ {
@ -45,6 +74,15 @@ in
description = "The paths to find other logstash plugins in."; description = "The paths to find other logstash plugins in.";
}; };
dataDir = mkOption {
type = types.str;
default = "/var/lib/logstash";
description = ''
A path to a directory writable by logstash that it uses to store data.
Plugins will also have access to this path.
'';
};
logLevel = mkOption { logLevel = mkOption {
type = types.enum [ "debug" "info" "warn" "error" "fatal" ]; type = types.enum [ "debug" "info" "warn" "error" "fatal" ];
default = "warn"; default = "warn";
@ -116,6 +154,19 @@ in
''; '';
}; };
extraSettings = mkOption {
type = types.lines;
default = "";
description = "Extra Logstash settings in YAML format.";
example = ''
pipeline:
batch:
size: 125
delay: 5
'';
};
}; };
}; };
@ -123,31 +174,34 @@ in
###### implementation ###### implementation
config = mkIf cfg.enable { config = mkIf cfg.enable {
assertions = [
{ assertion = atLeast54 -> !cfg.enableWeb;
message = ''
The logstash web interface is only available for versions older than 5.4.
So either set services.logstash.enableWeb = false,
or set services.logstash.package to an older logstash.
'';
}
];
systemd.services.logstash = with pkgs; { systemd.services.logstash = with pkgs; {
description = "Logstash Daemon"; description = "Logstash Daemon";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
environment = { JAVA_HOME = jre; }; environment = { JAVA_HOME = jre; };
path = [ pkgs.bash ]; path = [ pkgs.bash ];
serviceConfig = { serviceConfig = {
ExecStart = ExecStartPre = ''${pkgs.coreutils}/bin/mkdir -p "${cfg.dataDir}" ; ${pkgs.coreutils}/bin/chmod 700 "${cfg.dataDir}"'';
"${cfg.package}/bin/logstash agent " + ExecStart = concatStringsSep " " (filter (s: stringLength s != 0) [
"-w ${toString cfg.filterWorkers} " + "${cfg.package}/bin/logstash"
ops havePluginPath "--pluginpath ${pluginPath} " + (ops (!atLeast54) "agent")
"${verbosityFlag} " + "-w ${toString cfg.filterWorkers}"
"-f ${writeText "logstash.conf" '' (ops havePluginPath pluginsPath)
input { "${verbosityFlag}"
${cfg.inputConfig} "-f ${logstashConf}"
} (ops atLeast54 "--path.settings ${logstashSettingsDir}")
(ops atLeast54 "--path.data ${cfg.dataDir}")
filter { (ops cfg.enableWeb "-- web -a ${cfg.listenAddress} -p ${cfg.port}")
${cfg.filterConfig} ]);
}
output {
${cfg.outputConfig}
}
''} " +
ops cfg.enableWeb "-- web -a ${cfg.listenAddress} -p ${cfg.port}";
}; };
}; };
}; };
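Putting the 5.4 additions together, a configuration might pin the data directory and tune the pipeline through the new YAML settings (values illustrative):

```nix
{
  services.logstash.enable = true;
  services.logstash.dataDir = "/var/lib/logstash";
  # Rendered into logstash.yml and passed via --path.settings on >= 5.4
  services.logstash.extraSettings = ''
    pipeline:
      batch:
        size: 125
        delay: 5
  '';
}
```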

View File

@ -0,0 +1,43 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.mailhog;
in {
###### interface
options = {
services.mailhog = {
enable = mkEnableOption "MailHog";
user = mkOption {
type = types.str;
default = "mailhog";
description = "User account under which mailhog runs.";
};
};
};
###### implementation
config = mkIf cfg.enable {
users.extraUsers.mailhog = {
name = cfg.user;
description = "MailHog service user";
};
systemd.services.mailhog = {
description = "MailHog service";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
ExecStart = "${pkgs.mailhog}/bin/MailHog";
User = cfg.user;
};
};
};
}

View File

@ -3,35 +3,121 @@
with lib; with lib;
let let
cfg = config.services.spamassassin; cfg = config.services.spamassassin;
spamassassin-local-cf = pkgs.writeText "local.cf" cfg.config;
spamassassin-init-pre = pkgs.writeText "init.pre" cfg.initPreConf;
spamdEnv = pkgs.buildEnv {
name = "spamd-env";
paths = [];
postBuild = ''
ln -sf ${spamassassin-init-pre} $out/init.pre
ln -sf ${spamassassin-local-cf} $out/local.cf
'';
};
in in
{ {
###### interface
options = { options = {
services.spamassassin = { services.spamassassin = {
enable = mkOption { enable = mkOption {
default = false; default = false;
description = "Whether to run the SpamAssassin daemon."; description = "Whether to run the SpamAssassin daemon";
}; };
debug = mkOption { debug = mkOption {
default = false; default = false;
description = "Whether to run the SpamAssassin daemon in debug mode."; description = "Whether to run the SpamAssassin daemon in debug mode";
}; };
config = mkOption {
type = types.lines;
description = ''
The SpamAssassin local.cf config
If you are using this configuration:
add_header all Status _YESNO_, score=_SCORE_ required=_REQD_ tests=_TESTS_ autolearn=_AUTOLEARN_ version=_VERSION_
Then you can use this sieve filter:
require ["fileinto", "reject", "envelope"];
if header :contains "X-Spam-Flag" "YES" {
fileinto "spam";
}
Or this procmail filter:
:0:
* ^X-Spam-Flag: YES
/var/vpopmail/domains/lastlog.de/js/.maildir/.spam/new
To filter your messages based on the additional mail headers added by spamassassin.
'';
example = ''
#rewrite_header Subject [***** SPAM _SCORE_ *****]
required_score 5.0
use_bayes 1
bayes_auto_learn 1
add_header all Status _YESNO_, score=_SCORE_ required=_REQD_ tests=_TESTS_ autolearn=_AUTOLEARN_ version=_VERSION_
'';
default = "";
}; };
initPreConf = mkOption {
type = types.str;
description = "The SpamAssassin init.pre config.";
default =
''
#
# to update this list, run this command in the rules directory:
# grep 'loadplugin.*Mail::SpamAssassin::Plugin::.*' -o -h * | sort | uniq
#
#loadplugin Mail::SpamAssassin::Plugin::AccessDB
#loadplugin Mail::SpamAssassin::Plugin::AntiVirus
loadplugin Mail::SpamAssassin::Plugin::AskDNS
# loadplugin Mail::SpamAssassin::Plugin::ASN
loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold
#loadplugin Mail::SpamAssassin::Plugin::AWL
loadplugin Mail::SpamAssassin::Plugin::Bayes
loadplugin Mail::SpamAssassin::Plugin::BodyEval
loadplugin Mail::SpamAssassin::Plugin::Check
#loadplugin Mail::SpamAssassin::Plugin::DCC
loadplugin Mail::SpamAssassin::Plugin::DKIM
loadplugin Mail::SpamAssassin::Plugin::DNSEval
loadplugin Mail::SpamAssassin::Plugin::FreeMail
loadplugin Mail::SpamAssassin::Plugin::Hashcash
loadplugin Mail::SpamAssassin::Plugin::HeaderEval
loadplugin Mail::SpamAssassin::Plugin::HTMLEval
loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch
loadplugin Mail::SpamAssassin::Plugin::ImageInfo
loadplugin Mail::SpamAssassin::Plugin::MIMEEval
loadplugin Mail::SpamAssassin::Plugin::MIMEHeader
# loadplugin Mail::SpamAssassin::Plugin::PDFInfo
#loadplugin Mail::SpamAssassin::Plugin::PhishTag
loadplugin Mail::SpamAssassin::Plugin::Pyzor
loadplugin Mail::SpamAssassin::Plugin::Razor2
# loadplugin Mail::SpamAssassin::Plugin::RelayCountry
loadplugin Mail::SpamAssassin::Plugin::RelayEval
loadplugin Mail::SpamAssassin::Plugin::ReplaceTags
# loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody
# loadplugin Mail::SpamAssassin::Plugin::Shortcircuit
loadplugin Mail::SpamAssassin::Plugin::SpamCop
loadplugin Mail::SpamAssassin::Plugin::SPF
#loadplugin Mail::SpamAssassin::Plugin::TextCat
# loadplugin Mail::SpamAssassin::Plugin::TxRep
loadplugin Mail::SpamAssassin::Plugin::URIDetail
loadplugin Mail::SpamAssassin::Plugin::URIDNSBL
loadplugin Mail::SpamAssassin::Plugin::URIEval
# loadplugin Mail::SpamAssassin::Plugin::URILocalBL
loadplugin Mail::SpamAssassin::Plugin::VBounce
loadplugin Mail::SpamAssassin::Plugin::WhiteListSubject
loadplugin Mail::SpamAssassin::Plugin::WLBLEval
'';
};
};
}; };
###### implementation
config = mkIf cfg.enable { config = mkIf cfg.enable {
@ -50,13 +136,65 @@ in
gid = config.ids.gids.spamd; gid = config.ids.gids.spamd;
}; };
systemd.services.sa-update = {
script = ''
set +e
${pkgs.su}/bin/su -s "${pkgs.bash}/bin/bash" -c "${pkgs.spamassassin}/bin/sa-update --gpghomedir=/var/lib/spamassassin/sa-update-keys/ --siteconfigpath=${spamdEnv}/" spamd
v=$?
set -e
if [ $v -gt 1 ]; then
echo "sa-update execution error"
exit $v
fi
if [ $v -eq 0 ]; then
systemctl reload spamd.service
fi
'';
};
systemd.timers.sa-update = {
description = "sa-update-service";
partOf = [ "sa-update.service" ];
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "1:*";
Persistent = true;
};
};
systemd.services.spamd = { systemd.services.spamd = {
description = "Spam Assassin Server"; description = "Spam Assassin Server";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
after = [ "network.target" ]; after = [ "network.target" ];
script = "${pkgs.spamassassin}/bin/spamd ${optionalString cfg.debug "-D"} --username=spamd --groupname=spamd --nouser-config --virtual-config-dir=/var/lib/spamassassin/user-%u --allow-tell --pidfile=/var/run/spamd.pid"; serviceConfig = {
ExecStart = "${pkgs.spamassassin}/bin/spamd ${optionalString cfg.debug "-D"} --username=spamd --groupname=spamd --siteconfigpath=${spamdEnv} --virtual-config-dir=/var/lib/spamassassin/user-%u --allow-tell --pidfile=/var/run/spamd.pid";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
};
# 0 and 1 no error, exitcode > 1 means error:
# https://spamassassin.apache.org/full/3.1.x/doc/sa-update.html#exit_codes
preStart = ''
# this abstraction requires no centralized config at all
if [ -d /etc/spamassassin ]; then
echo "This spamassassin does not support global '/etc/spamassassin' folder for configuration as this would be impure. Merge your configs into 'services.spamassassin' and remove the '/etc/spamassassin' folder to make this service work. Also see 'https://github.com/NixOS/nixpkgs/pull/26470'.";
exit 1
fi
echo "Recreating '/var/lib/spamassassin', creating '3.004001' (or similar) and 'sa-update-keys'"
mkdir -p /var/lib/spamassassin
chown spamd:spamd /var/lib/spamassassin -R
set +e
${pkgs.su}/bin/su -s "${pkgs.bash}/bin/bash" -c "${pkgs.spamassassin}/bin/sa-update --gpghomedir=/var/lib/spamassassin/sa-update-keys/ --siteconfigpath=${spamdEnv}/" spamd
v=$?
set -e
if [ $v -gt 1 ]; then
echo "sa-update execution error"
exit $v
fi
chown spamd:spamd /var/lib/spamassassin -R
'';
}; };
}; };
} }

View File

@ -22,19 +22,9 @@ in {
environment.systemPackages = [ pkgs.autorandr ]; environment.systemPackages = [ pkgs.autorandr ];
# systemd.unitPackages = [ pkgs.autorandr ]; systemd.packages = [ pkgs.autorandr ];
systemd.services.autorandr = { systemd.services.autorandr = {
unitConfig = {
Description = "autorandr execution hook";
After = [ "sleep.target" ];
StartLimitInterval = "5";
StartLimitBurst = "1";
};
serviceConfig = {
ExecStart = "${pkgs.autorandr}/bin/autorandr --batch --change --default default";
Type = "oneshot";
RemainAfterExit = false;
};
wantedBy = [ "sleep.target" ]; wantedBy = [ "sleep.target" ];
}; };

View File

@ -84,7 +84,7 @@ in {
dataDir = if !isNull instanceCfg.dataDir then instanceCfg.dataDir else dataDir = if !isNull instanceCfg.dataDir then instanceCfg.dataDir else
"/var/lib/errbot/${name}"; "/var/lib/errbot/${name}";
in { in {
after = [ "network.target" ]; after = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
preStart = '' preStart = ''
mkdir -p ${dataDir} mkdir -p ${dataDir}

View File

@ -0,0 +1,45 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.fstrim;
in {
options = {
services.fstrim = {
enable = mkEnableOption "periodic SSD TRIM of mounted partitions in background";
interval = mkOption {
type = types.string;
default = "weekly";
description = ''
How often we run fstrim. For most desktop and server systems
a sufficient trimming frequency is once a week.
The format is described in
<citerefentry><refentrytitle>systemd.time</refentrytitle>
<manvolnum>7</manvolnum></citerefentry>.
'';
};
};
};
config = mkIf cfg.enable {
systemd.packages = [ pkgs.utillinux ];
systemd.timers.fstrim = {
timerConfig = {
OnCalendar = cfg.interval;
};
wantedBy = [ "timers.target" ];
};
};
}
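Enabling the new fstrim service is a one-liner; `interval` accepts any systemd.time(7) calendar expression:

```nix
{
  services.fstrim.enable = true;
  # Default is "weekly"; any systemd.time(7) calendar expression works
  services.fstrim.interval = "daily";
}
```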

View File

@ -82,7 +82,7 @@ in
after = [ "network.target" ]; after = [ "network.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
preStart = '' preStart = ''
test -d "${cfg.dataDir}" || { test -d "${cfg.dataDir}/Plex Media Server" || {
echo "Creating initial Plex data directory in \"${cfg.dataDir}\"." echo "Creating initial Plex data directory in \"${cfg.dataDir}\"."
mkdir -p "${cfg.dataDir}/Plex Media Server" mkdir -p "${cfg.dataDir}/Plex Media Server"
chown -R ${cfg.user}:${cfg.group} "${cfg.dataDir}" chown -R ${cfg.user}:${cfg.group} "${cfg.dataDir}"

View File

@ -48,7 +48,8 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
systemd.user.services.arbtt = { systemd.user.services.arbtt = {
description = "arbtt statistics capture service"; description = "arbtt statistics capture service";
wantedBy = [ "default.target" ]; wantedBy = [ "graphical-session.target" ];
partOf = [ "graphical-session.target" ];
serviceConfig = { serviceConfig = {
Type = "simple"; Type = "simple";

View File

@ -488,9 +488,7 @@ in {
# create index # create index
${pkgs.python27Packages.graphite_web}/bin/build-index.sh ${pkgs.python27Packages.graphite_web}/bin/build-index.sh
chown graphite:graphite ${cfg.dataDir} chown -R graphite:graphite ${cfg.dataDir}
chown graphite:graphite ${cfg.dataDir}/whisper
chown -R graphite:graphite ${cfg.dataDir}/log
touch ${dataDir}/db-created touch ${dataDir}/db-created
fi fi

View File

@ -66,15 +66,6 @@ let
How frequently to evaluate rules by default. How frequently to evaluate rules by default.
''; '';
}; };
labels = mkOption {
type = types.attrsOf types.str;
default = {};
description = ''
The labels to add to any timeseries that this Prometheus instance
scrapes.
'';
};
}; };
}; };

View File

@ -3,7 +3,7 @@
with lib; with lib;
let let
inherit (pkgs) glusterfs; inherit (pkgs) glusterfs rsync;
cfg = config.services.glusterfs; cfg = config.services.glusterfs;
@ -50,8 +50,11 @@ in
after = [ "rpcbind.service" "network.target" "local-fs.target" ]; after = [ "rpcbind.service" "network.target" "local-fs.target" ];
before = [ "network-online.target" ]; before = [ "network-online.target" ];
# The copying of hooks is due to upstream bug https://bugzilla.redhat.com/show_bug.cgi?id=1452761
preStart = '' preStart = ''
install -m 0755 -d /var/log/glusterfs install -m 0755 -d /var/log/glusterfs
mkdir -p /var/lib/glusterd/hooks/
${rsync}/bin/rsync -a ${glusterfs}/var/lib/glusterd/hooks/ /var/lib/glusterd/hooks/
''; '';
serviceConfig = { serviceConfig = {

View File

@ -1,185 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.aiccu;
notNull = a: ! isNull a;
configFile = pkgs.writeText "aiccu.conf" ''
${if notNull cfg.username then "username " + cfg.username else ""}
${if notNull cfg.password then "password " + cfg.password else ""}
protocol ${cfg.protocol}
server ${cfg.server}
ipv6_interface ${cfg.interfaceName}
verbose ${boolToString cfg.verbose}
daemonize true
automatic ${boolToString cfg.automatic}
requiretls ${boolToString cfg.requireTLS}
pidfile ${cfg.pidFile}
defaultroute ${boolToString cfg.defaultRoute}
${if notNull cfg.setupScript then cfg.setupScript else ""}
makebeats ${boolToString cfg.makeHeartBeats}
noconfigure ${boolToString cfg.noConfigure}
behindnat ${boolToString cfg.behindNAT}
${if cfg.localIPv4Override then "local_ipv4_override" else ""}
'';
in {
options = {
services.aiccu = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable aiccu IPv6 over IPv4 SiXXs tunnel";
};
username = mkOption {
type = with types; nullOr str;
default = null;
example = "FAB5-SIXXS";
description = "Login credential";
};
password = mkOption {
type = with types; nullOr str;
default = null;
example = "TmAkRbBEr0";
description = "Login credential";
};
protocol = mkOption {
type = types.str;
default = "tic";
example = "tic|tsp|l2tp";
description = "Protocol to use for setting up the tunnel";
};
server = mkOption {
type = types.str;
default = "tic.sixxs.net";
example = "enabled.ipv6server.net";
description = "Server to use for setting up the tunnel";
};
interfaceName = mkOption {
type = types.str;
default = "aiccu";
example = "sixxs";
description = ''
The name of the interface that will be used as a tunnel interface.
On *BSD the ipv6_interface should be set to gifX (e.g. gif0) for proto-41 tunnels
or tunX (e.g. tun0) for AYIYA tunnels.
'';
};
tunnelID = mkOption {
type = with types; nullOr str;
default = null;
example = "T12345";
description = "The tunnel id to use, only required when there are multiple tunnels in the list";
};
verbose = mkOption {
type = types.bool;
default = false;
description = "Be verbose?";
};
automatic = mkOption {
type = types.bool;
default = true;
description = "Automatic Login and Tunnel activation";
};
requireTLS = mkOption {
type = types.bool;
default = false;
description = ''
When set to true, if TLS is not supported on the server
the TIC transaction will fail.
When set to false, it will try a starttls, when that is
not supported it will continue.
In any case, if AICCU is built with TLS support it will
try to do a 'starttls' to the TIC server to see if that
is supported.
'';
};
pidFile = mkOption {
type = types.path;
default = "/run/aiccu.pid";
example = "/var/lib/aiccu/aiccu.pid";
description = "Location of PID File";
};
defaultRoute = mkOption {
type = types.bool;
default = true;
description = "Add a default route";
};
setupScript = mkOption {
type = with types; nullOr path;
default = null;
example = "/var/lib/aiccu/fix-subnets.sh";
description = "Script to run after setting up the interfaces";
};
makeHeartBeats = mkOption {
type = types.bool;
default = true;
description = ''
In general you don't want to turn this off.
Of course this only applies to AYIYA and heartbeat tunnels, not to static ones.
'';
};
noConfigure = mkOption {
type = types.bool;
default = false;
description = "Don't configure anything";
};
behindNAT = mkOption {
type = types.bool;
default = false;
description = "Notify the user that a NAT-kind network is detected";
};
localIPv4Override = mkOption {
type = types.bool;
default = false;
description = ''
Overrides the IPv4 parameter received from TIC
This allows one to configure a NAT into "DMZ" mode and then
forward the proto-41 packets to an internal host.
This is only needed for static proto-41 tunnels!
AYIYA and heartbeat tunnels don't require this.
'';
};
};
};
config = mkIf cfg.enable {
systemd.services.aiccu = {
description = "Automatic IPv6 Connectivity Client Utility";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${pkgs.aiccu}/bin/aiccu start ${configFile}";
ExecStop = "${pkgs.aiccu}/bin/aiccu stop";
Type = "forking";
PIDFile = cfg.pidFile;
Restart = "no"; # aiccu startup errors are serious, do not pound the tic server or be banned.
};
};
};
}

View File

@ -10,12 +10,17 @@ let
confFile = pkgs.writeText "named.conf"
''
include "/etc/bind/rndc.key";
controls {
inet 127.0.0.1 allow {localhost;} keys {"rndc-key";};
};
acl cachenetworks { ${concatMapStrings (entry: " ${entry}; ") cfg.cacheNetworks} };
acl badnetworks { ${concatMapStrings (entry: " ${entry}; ") cfg.blockedNetworks} };
options {
listen-on {any;};
listen-on { ${concatMapStrings (entry: " ${entry}; ") cfg.listenOn} };
listen-on-v6 {any;};
listen-on-v6 { ${concatMapStrings (entry: " ${entry}; ") cfg.listenOnIpv6} };
allow-query { cachenetworks; };
blackhole { badnetworks; };
forward first;
@ -96,6 +101,22 @@ in
";
};
listenOn = mkOption {
default = ["any"];
type = types.listOf types.str;
description = "
Interfaces to listen on.
";
};
listenOnIpv6 = mkOption {
default = ["any"];
type = types.listOf types.str;
description = "
IPv6 interfaces to listen on.
";
};
zones = mkOption {
default = [];
description = "
@ -151,11 +172,21 @@ in
wantedBy = [ "multi-user.target" ];
preStart = ''
mkdir -m 0755 -p /etc/bind
if ! [ -f "/etc/bind/rndc.key" ]; then
${pkgs.bind.out}/sbin/rndc-confgen -r /dev/urandom -c /etc/bind/rndc.key -u ${bindUser} -a -A hmac-sha256 2>/dev/null
fi
${pkgs.coreutils}/bin/mkdir -p /var/run/named
chown ${bindUser} /var/run/named
'';
script = "${pkgs.bind.out}/sbin/named -u ${bindUser} ${optionalString cfg.ipv4Only "-4"} -c ${cfg.configFile} -f";
serviceConfig = {
ExecStart = "${pkgs.bind.out}/sbin/named -u ${bindUser} ${optionalString cfg.ipv4Only "-4"} -c ${cfg.configFile} -f";
ExecReload = "${pkgs.bind.out}/sbin/rndc -k '/etc/bind/rndc.key' reload";
ExecStop = "${pkgs.bind.out}/sbin/rndc -k '/etc/bind/rndc.key' stop";
};
unitConfig.Documentation = "man:named(8)";
};
};

View File

@ -5,15 +5,33 @@ with lib;
let
cfg = config.services.cntlm;
uid = config.ids.uids.cntlm;
configFile = if cfg.configText != "" then
pkgs.writeText "cntlm.conf" ''
${cfg.configText}
''
else
pkgs.writeText "cntlm.conf" ''
# Cntlm Authentication Proxy Configuration
Username ${cfg.username}
Domain ${cfg.domain}
Password ${cfg.password}
${optionalString (cfg.netbios_hostname != "") "Workstation ${cfg.netbios_hostname}"}
${concatMapStrings (entry: "Proxy ${entry}\n") cfg.proxy}
${optionalString (cfg.noproxy != []) "NoProxy ${concatStringsSep ", " cfg.noproxy}"}
${concatMapStrings (port: ''
Listen ${toString port}
'') cfg.port}
${cfg.extraConfig}
'';
in
{
options = {
options.services.cntlm = {
services.cntlm = {
enable = mkOption {
default = false;
@ -40,6 +58,7 @@ in
netbios_hostname = mkOption {
type = types.str;
default = "";
description = ''
The hostname of your machine.
'';
@ -53,6 +72,15 @@ in
number of proxies. Should one proxy fail, cntlm automatically moves on to the next one. The connect request fails only if the whole
list of proxies is scanned (for each request) and found to be invalid. Command-line takes precedence over the configuration file.
'';
example = [ "proxy.example.com:81" ];
};
noproxy = mkOption {
description = ''
A list of domains where the proxy is skipped.
'';
default = [];
example = [ "*.example.com" "example.com" ];
};
port = mkOption {
@ -61,6 +89,12 @@ in
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Additional config appended to the end of the generated <filename>cntlm.conf</filename>.";
};
configText = mkOption {
type = types.lines;
default = "";
description = "Verbatim contents of <filename>cntlm.conf</filename>.";
@ -68,47 +102,25 @@ in
};
};
###### implementation
config = mkIf config.services.cntlm.enable {
config = mkIf cfg.enable {
systemd.services.cntlm = {
description = "CNTLM is an NTLM / NTLM Session Response / NTLMv2 authenticating HTTP proxy";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "forking";
User = "cntlm";
ExecStart = ''
${pkgs.cntlm}/bin/cntlm -U cntlm \
${pkgs.cntlm}/bin/cntlm -U cntlm -c ${configFile} -v -f
-c ${pkgs.writeText "cntlm_config" cfg.extraConfig}
'';
};
};
services.cntlm.netbios_hostname = mkDefault config.networking.hostName;
users.extraUsers.cntlm = {
name = "cntlm";
description = "cntlm system-wide daemon";
home = "/var/empty";
isSystemUser = true;
};
services.cntlm.extraConfig =
''
# Cntlm Authentication Proxy Configuration
Username ${cfg.username}
Domain ${cfg.domain}
Password ${cfg.password}
Workstation ${cfg.netbios_hostname}
${concatMapStrings (entry: "Proxy ${entry}\n") cfg.proxy}
${concatMapStrings (port: ''
Listen ${toString port}
'') cfg.port}
'';
};
}
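A hypothetical configuration using the reworked cntlm module (credentials and hosts are invented; note the diff warns the password ends up in the generated config file):

```nix
{
  services.cntlm = {
    enable = true;
    username = "jdoe";       # example credentials
    domain = "CORP";
    password = "secret";
    proxy = [ "proxy.example.com:81" ];
    noproxy = [ "*.example.com" ];   # new option in this change
    port = [ 3128 ];
    extraConfig = ''
      # appended verbatim to the generated cntlm.conf (new option)
    '';
  };
}
```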

View File

@ -17,7 +17,7 @@ let
host = ${cfg.dns.address}
port = ${toString cfg.dns.port}
oldDNSMethod = NO_OLD_DNS
externalIP = ${cfg.dns.address}
externalIP = ${cfg.dns.externalAddress}
[http]
host = ${cfg.api.hostname}
@ -47,8 +47,18 @@ in
type = types.str;
default = "127.0.0.1";
description = ''
The IP address that will be used to reach this machine.
The IP address the DNSChain resolver will bind to.
Leave this unchanged if you do not wish to directly expose the DNSChain resolver.
Leave this unchanged if you do not wish to directly expose the resolver.
'';
};
dns.externalAddress = mkOption {
type = types.str;
default = cfg.dns.address;
description = ''
The IP address used by clients to reach the resolver and the value of
the <literal>namecoin.dns</literal> record. Set this in case the bind address
is not the actual IP address (e.g. the machine is behind a NAT).
'';
};

View File

@ -114,14 +114,15 @@ let
# The "nixos-fw" chain does the actual work.
ip46tables -N nixos-fw
# Perform a reverse-path test to refuse spoofers
# For now, we just drop, as the raw table doesn't have a log-refuse yet
${optionalString (kernelHasRPFilter && (cfg.checkReversePath != false)) ''
# Clean up rpfilter rules
ip46tables -t raw -D PREROUTING -j nixos-fw-rpfilter 2> /dev/null || true
ip46tables -t raw -F nixos-fw-rpfilter 2> /dev/null || true
ip46tables -t raw -N nixos-fw-rpfilter 2> /dev/null || true
ip46tables -t raw -X nixos-fw-rpfilter 2> /dev/null || true
${optionalString (kernelHasRPFilter && (cfg.checkReversePath != false)) ''
# Perform a reverse-path test to refuse spoofers
# For now, we just drop, as the raw table doesn't have a log-refuse yet
ip46tables -t raw -N nixos-fw-rpfilter 2> /dev/null || true
ip46tables -t raw -A nixos-fw-rpfilter -m rpfilter ${optionalString (cfg.checkReversePath == "loose") "--loose"} -j RETURN
# Allows this host to act as a DHCPv4 server

View File

@ -164,7 +164,7 @@ in
path = [ pkgs.hostapd ];
wantedBy = [ "network.target" ];
after = [ "${cfg.interface}-cfg.service" "nat.service" "bind.service" "dhcpd.service"];
after = [ "${cfg.interface}-cfg.service" "nat.service" "bind.service" "dhcpd.service" "sys-subsystem-net-devices-${cfg.interface}.device" ];
serviceConfig =
{ ExecStart = "${pkgs.hostapd}/bin/hostapd ${configFile}";

View File

@ -212,7 +212,8 @@ in
type = with types; nullOr int;
default = null;
description = ''
Set a router bandwidth limit integer in kbps or letters: L (32), O (256), P (2048), X (>9000)
Set a router bandwidth limit integer in KBps.
If not set, i2pd defaults to 32KBps.
'';
};

View File

@ -12,16 +12,15 @@ let
configFile = writeText "NetworkManager.conf" ''
[main]
plugins=keyfile
dhcp=${cfg.dhcp}
dns=${if cfg.useDnsmasq then "dnsmasq" else "default"}
[keyfile]
${optionalString (config.networking.hostName != "")
''hostname=${config.networking.hostName}''}
${optionalString (cfg.unmanaged != [])
''unmanaged-devices=${lib.concatStringsSep ";" cfg.unmanaged}''}
[logging]
level=WARN
level=${cfg.logLevel}
[connection]
ipv6.ip6-privacy=2
@ -138,6 +137,22 @@ in {
apply = list: (attrValues cfg.basePackages) ++ list;
};
dhcp = mkOption {
type = types.enum [ "dhclient" "dhcpcd" "internal" ];
default = "dhclient";
description = ''
Which program (or internal library) should be used for DHCP.
'';
};
logLevel = mkOption {
type = types.enum [ "OFF" "ERR" "WARN" "INFO" "DEBUG" "TRACE" ];
default = "WARN";
description = ''
Set the default logging verbosity level.
'';
};
appendNameservers = mkOption {
type = types.listOf types.str;
default = [];
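The two options introduced above could be exercised like this (a sketch; values are examples, not defaults from the diff except where noted):

```nix
{
  networking.networkmanager = {
    enable = true;
    dhcp = "internal";   # new option: "dhclient" (default), "dhcpcd" or "internal"
    logLevel = "INFO";   # new option; replaces the previously hard-coded WARN
  };
}
```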

View File

@ -811,6 +811,7 @@ in
serviceConfig = {
ExecStart = "${nsdPkg}/sbin/nsd -d -c ${nsdEnv}/nsd.conf";
StandardError = "null";
PIDFile = pidFile;
Restart = "always";
RestartSec = "4s";

View File

@ -0,0 +1,268 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.resilio;
resilioSync = pkgs.resilio-sync;
sharedFoldersRecord = map (entry: {
secret = entry.secret;
dir = entry.directory;
use_relay_server = entry.useRelayServer;
use_tracker = entry.useTracker;
use_dht = entry.useDHT;
search_lan = entry.searchLAN;
use_sync_trash = entry.useSyncTrash;
known_hosts = knownHosts;
}) cfg.sharedFolders;
configFile = pkgs.writeText "config.json" (builtins.toJSON ({
device_name = cfg.deviceName;
storage_path = cfg.storagePath;
listening_port = cfg.listeningPort;
use_gui = false;
check_for_updates = cfg.checkForUpdates;
use_upnp = cfg.useUpnp;
download_limit = cfg.downloadLimit;
upload_limit = cfg.uploadLimit;
lan_encrypt_data = cfg.encryptLAN;
} // optionalAttrs cfg.enableWebUI {
webui = { listen = "${cfg.httpListenAddr}:${toString cfg.httpListenPort}"; } //
(optionalAttrs (cfg.httpLogin != "") { login = cfg.httpLogin; }) //
(optionalAttrs (cfg.httpPass != "") { password = cfg.httpPass; }) //
(optionalAttrs (cfg.apiKey != "") { api_key = cfg.apiKey; }) //
(optionalAttrs (cfg.directoryRoot != "") { directory_root = cfg.directoryRoot; });
} // optionalAttrs (sharedFoldersRecord != []) {
shared_folders = sharedFoldersRecord;
}));
in
{
options = {
services.resilio = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
If enabled, start the Resilio Sync daemon. Once enabled, you can
interact with the service through the Web UI, or configure it in your
NixOS configuration. Enabling the <literal>resilio</literal> service
also installs a systemd user unit which can be used to start
user-specific copies of the daemon. Once installed, you can use
<literal>systemctl --user start resilio</literal> as your user to start
the daemon using the configuration file located at
<literal>$HOME/.config/resilio-sync/config.json</literal>.
'';
};
deviceName = mkOption {
type = types.str;
example = "Voltron";
default = config.networking.hostName;
description = ''
Name of the Resilio Sync device.
'';
};
listeningPort = mkOption {
type = types.int;
default = 0;
example = 44444;
description = ''
Listening port. Defaults to 0 which randomizes the port.
'';
};
checkForUpdates = mkOption {
type = types.bool;
default = true;
description = ''
Determines whether to check for updates and alert the user
about them in the UI.
'';
};
useUpnp = mkOption {
type = types.bool;
default = true;
description = ''
Use Universal Plug-n-Play (UPnP)
'';
};
downloadLimit = mkOption {
type = types.int;
default = 0;
example = 1024;
description = ''
Download speed limit. 0 is unlimited (default).
'';
};
uploadLimit = mkOption {
type = types.int;
default = 0;
example = 1024;
description = ''
Upload speed limit. 0 is unlimited (default).
'';
};
httpListenAddr = mkOption {
type = types.str;
default = "0.0.0.0";
example = "1.2.3.4";
description = ''
HTTP address to bind to.
'';
};
httpListenPort = mkOption {
type = types.int;
default = 9000;
description = ''
HTTP port to bind on.
'';
};
httpLogin = mkOption {
type = types.str;
example = "allyourbase";
default = "";
description = ''
HTTP web login username.
'';
};
httpPass = mkOption {
type = types.str;
example = "arebelongtous";
default = "";
description = ''
HTTP web login password.
'';
};
encryptLAN = mkOption {
type = types.bool;
default = true;
description = "Encrypt LAN data.";
};
enableWebUI = mkOption {
type = types.bool;
default = false;
description = ''
Enable Web UI for administration. Bound to the specified
<literal>httpListenAddress</literal> and
<literal>httpListenPort</literal>.
'';
};
storagePath = mkOption {
type = types.path;
default = "/var/lib/resilio-sync/";
description = ''
Where Resilio Sync will store its database files (containing
things like username info and licenses). Generally, you should not
need to ever change this.
'';
};
apiKey = mkOption {
type = types.str;
default = "";
description = "API key, which enables the developer API.";
};
directoryRoot = mkOption {
type = types.str;
default = "";
example = "/media";
description = "Default directory to add folders in the web UI.";
};
sharedFolders = mkOption {
default = [];
example =
[ { secret = "AHMYFPCQAHBM7LQPFXQ7WV6Y42IGUXJ5Y";
directory = "/home/user/sync_test";
useRelayServer = true;
useTracker = true;
useDHT = false;
searchLAN = true;
useSyncTrash = true;
knownHosts = [
"192.168.1.2:4444"
"192.168.1.3:4444"
];
}
];
description = ''
Shared folder list. If this option is set, the web UI must be
disabled. Secrets can be generated using <literal>rslsync
--generate-secret</literal>. Note that this secret will be
put inside the Nix store, so it is realistically not very
secret.
If you would like to be able to modify the contents of these
directories, it is recommended that you make your user a
member of the <literal>resilio</literal> group.
Directories in this list should be in the
<literal>resilio</literal> group, and that group must have
write access to the directory. It is also recommended that
<literal>chmod g+s</literal> is applied to the directory
so that any sub directories created will also belong to
the <literal>resilio</literal> group. Also,
<literal>setfacl -d -m group:resilio:rwx</literal> and
<literal>setfacl -m group:resilio:rwx</literal> should also
be applied so that the sub directories are writable by
the group.
'';
};
};
};
config = mkIf cfg.enable {
assertions =
[ { assertion = cfg.deviceName != "";
message = "Device name cannot be empty.";
}
{ assertion = cfg.enableWebUI -> cfg.sharedFolders == [];
message = "If using shared folders, the web UI cannot be enabled.";
}
{ assertion = cfg.apiKey != "" -> cfg.enableWebUI;
message = "If you're using an API key, you must enable the web server.";
}
];
users.extraUsers.rslsync = {
description = "Resilio Sync Service user";
home = cfg.storagePath;
createHome = true;
uid = config.ids.uids.rslsync;
group = "rslsync";
};
users.extraGroups = [ { name = "rslsync"; } ];
systemd.services.resilio = with pkgs; {
description = "Resilio Sync Service";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "local-fs.target" ];
serviceConfig = {
Restart = "on-abort";
UMask = "0002";
User = "rslsync";
ExecStart = ''
${resilioSync}/bin/rslsync --nodaemon --config ${configFile}
'';
};
};
};
}
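The new resilio module above could be enabled with a configuration along these lines (a sketch; the secret and directory mirror the example in the option docs, other values are invented):

```nix
{
  services.resilio = {
    enable = true;
    enableWebUI = false;   # must stay off when sharedFolders is used
    sharedFolders = [
      { secret = "AHMYFPCQAHBM7LQPFXQ7WV6Y42IGUXJ5Y";  # from `rslsync --generate-secret`
        directory = "/home/user/sync_test";
        useRelayServer = true;
        useTracker = true;
        useDHT = false;
        searchLAN = true;
        useSyncTrash = true;
        knownHosts = [ ];
      }
    ];
  };
}
```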

View File

@ -21,6 +21,8 @@ let
daemon reads in addition to the user's authorized_keys file.
You can combine the <literal>keys</literal> and
<literal>keyFiles</literal> options.
Warning: If you are using <literal>NixOps</literal> then don't use this
option since it will replace the key required for deployment via ssh.
'';
};

View File

@ -35,7 +35,8 @@ in
description = ''
The name of the node which is used as an identifier when communicating
with the remote nodes in the mesh. If null then the hostname of the system
is used.
is used to derive a name (note that tinc may replace non-alphanumeric characters in
hostnames by underscores).
'';
};

View File

@ -18,6 +18,13 @@ with lib;
default = 33445;
description = "udp port for toxcore, port-forward to help with connectivity if you run many nodes behind one NAT";
};
auto_add_peers = mkOption {
type = types.listOf types.string;
default = [];
example = ''[ "toxid1" "toxid2" ]'';
description = "Peers to automatically connect to on startup.";
};
};
}; };
@ -33,8 +40,13 @@ with lib;
chown toxvpn /run/toxvpn
'';
path = [ pkgs.toxvpn ];
script = ''
exec toxvpn -i ${config.services.toxvpn.localip} -l /run/toxvpn/control -u toxvpn -p ${toString config.services.toxvpn.port} ${lib.concatMapStringsSep " " (x: "-a ${x}") config.services.toxvpn.auto_add_peers}
'';
serviceConfig = {
ExecStart = "${pkgs.toxvpn}/bin/toxvpn -i ${config.services.toxvpn.localip} -l /run/toxvpn/control -u toxvpn -p ${toString config.services.toxvpn.port}";
KillMode = "process";
Restart = "on-success";
Type = "notify";
@ -43,6 +55,8 @@ with lib;
restartIfChanged = false; # Likely to be used for remote admin
};
environment.systemPackages = [ pkgs.toxvpn ];
users.extraUsers = {
toxvpn = {
uid = config.ids.uids.toxvpn;
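A hypothetical use of the new `auto_add_peers` option (the `localip` value and tox IDs are invented; each entry is passed to toxvpn as an `-a` flag by the script above):

```nix
{
  services.toxvpn = {
    enable = true;
    localip = "10.123.123.1";
    port = 33445;
    auto_add_peers = [ "toxid1" "toxid2" ];   # new option in this change
  };
}
```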

View File

@ -5,13 +5,22 @@ with lib;
let
cfg = config.services.elasticsearch;
es5 = builtins.compareVersions (builtins.parseDrvName cfg.package.name).version "5" >= 0;
esConfig = ''
network.host: ${cfg.listenAddress}
cluster.name: ${cfg.cluster_name}
${if es5 then ''
http.port: ${toString cfg.port}
transport.tcp.port: ${toString cfg.tcp_port}
'' else ''
network.port: ${toString cfg.port}
network.tcp.port: ${toString cfg.tcp_port}
# TODO: find a way to enable security manager
security.manager.enabled: false
cluster.name: ${cfg.cluster_name}
''}
${cfg.extraConf}
'';
@ -19,13 +28,18 @@ let
name = "elasticsearch-config";
paths = [
(pkgs.writeTextDir "elasticsearch.yml" esConfig)
(pkgs.writeTextDir "logging.yml" cfg.logging)
(if es5 then (pkgs.writeTextDir "log4j2.properties" cfg.logging)
else (pkgs.writeTextDir "logging.yml" cfg.logging))
];
# Elasticsearch 5.x won't start when the scripts directory does not exist
postBuild = if es5 then "${pkgs.coreutils}/bin/mkdir -p $out/scripts" else "";
};
esPlugins = pkgs.buildEnv {
name = "elasticsearch-plugins";
paths = cfg.plugins;
# Elasticsearch 5.x won't start when the plugins directory does not exist
postBuild = if es5 then "${pkgs.coreutils}/bin/mkdir -p $out/plugins" else "";
};
in {
@ -85,7 +99,19 @@ in {
logging = mkOption {
description = "Elasticsearch logging configuration.";
default = ''
default =
if es5 then ''
logger.action.name = org.elasticsearch.action
logger.action.level = info
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
'' else ''
rootLogger: INFO, console
logger:
action: INFO
@ -112,6 +138,12 @@ in {
description = "Extra command line options for the elasticsearch launcher.";
default = [];
type = types.listOf types.str;
};
extraJavaOptions = mkOption {
description = "Extra command line options for Java.";
default = [];
type = types.listOf types.str;
example = [ "-Djava.net.preferIPv4Stack=true" ];
};
@ -133,13 +165,21 @@ in {
path = [ pkgs.inetutils ];
environment = {
ES_HOME = cfg.dataDir;
ES_JAVA_OPTS = toString ([ "-Des.path.conf=${configDir}" ] ++ cfg.extraJavaOptions);
};
serviceConfig = {
ExecStart = "${cfg.package}/bin/elasticsearch -Des.path.conf=${configDir} ${toString cfg.extraCmdLineOptions}";
ExecStart = "${cfg.package}/bin/elasticsearch ${toString cfg.extraCmdLineOptions}";
User = "elasticsearch";
PermissionsStartOnly = true;
LimitNOFILE = "1024000";
};
preStart = ''
# Only set vm.max_map_count if lower than ES required minimum
# This avoids conflict if configured via boot.kernel.sysctl
if [ `${pkgs.procps}/bin/sysctl -n vm.max_map_count` -lt 262144 ]; then
${pkgs.procps}/bin/sysctl -w vm.max_map_count=262144
fi
mkdir -m 0700 -p ${cfg.dataDir}
# Install plugins
@ -148,11 +188,6 @@ in {
ln -sfT ${cfg.package}/modules ${cfg.dataDir}/modules
if [ "$(id -u)" = 0 ]; then chown -R elasticsearch ${cfg.dataDir}; fi
'';
postStart = mkBefore ''
until ${pkgs.curl.bin}/bin/curl -s -o /dev/null ${cfg.listenAddress}:${toString cfg.port}; do
sleep 1
done
'';
};
environment.systemPackages = [ cfg.package ];
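A sketch of opting into the new Elasticsearch 5.x support (the `pkgs.elasticsearch5` attribute name is an assumption; the heap sizes are invented examples for the new `extraJavaOptions` option):

```nix
{
  services.elasticsearch = {
    enable = true;
    package = pkgs.elasticsearch5;   # triggers the es5 code paths in the diff above
    extraJavaOptions = [ "-Xms512m" "-Xmx512m" ];   # new option
  };
}
```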

View File

@ -5,7 +5,11 @@ with lib;
let
cfg = config.services.kibana;
cfgFile = pkgs.writeText "kibana.json" (builtins.toJSON (
atLeast54 = versionAtLeast (builtins.parseDrvName cfg.package.name).version "5.4";
cfgFile = if atLeast54 then cfgFile5 else cfgFile4;
cfgFile4 = pkgs.writeText "kibana.json" (builtins.toJSON (
(filterAttrsRecursive (n: v: v != null) ({
host = cfg.listenAddress;
port = cfg.port;
@ -36,6 +40,27 @@ let
];
} // cfg.extraConf)
)));
cfgFile5 = pkgs.writeText "kibana.json" (builtins.toJSON (
(filterAttrsRecursive (n: v: v != null) ({
server.host = cfg.listenAddress;
server.port = cfg.port;
server.ssl.certificate = cfg.cert;
server.ssl.key = cfg.key;
kibana.index = cfg.index;
kibana.defaultAppId = cfg.defaultAppId;
elasticsearch.url = cfg.elasticsearch.url;
elasticsearch.username = cfg.elasticsearch.username;
elasticsearch.password = cfg.elasticsearch.password;
elasticsearch.ssl.certificate = cfg.elasticsearch.cert;
elasticsearch.ssl.key = cfg.elasticsearch.key;
elasticsearch.ssl.certificateAuthorities = cfg.elasticsearch.certificateAuthorities;
} // cfg.extraConf)
)));
in {
options.services.kibana = {
enable = mkEnableOption "enable kibana service";
@ -96,11 +121,29 @@ in {
};
ca = mkOption {
description = "CA file to auth against elasticsearch.";
description = ''
CA file to auth against elasticsearch.
It's recommended to use the <option>certificateAuthorities</option> option
when using kibana-5.4 or newer.
'';
default = null;
type = types.nullOr types.path;
};
certificateAuthorities = mkOption {
description = ''
CA files to auth against elasticsearch.
Please use the <option>ca</option> option when using kibana &lt; 5.4
because those old versions don't support setting multiple CA's.
This defaults to the singleton list [ca] when the <option>ca</option> option is defined.
'';
default = if isNull cfg.elasticsearch.ca then [] else [ca];
type = types.listOf types.path;
};
cert = mkOption {
description = "Certificate file to auth against elasticsearch.";
default = null;
@ -118,6 +161,7 @@ in {
description = "Kibana package to use";
default = pkgs.kibana;
defaultText = "pkgs.kibana";
example = "pkgs.kibana5";
type = types.package;
};

View File

@ -0,0 +1,97 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-services-piwik">
<title>Piwik</title>
<para>
Piwik is a real-time web analytics application.
This module configures php-fpm as backend for piwik, optionally configuring an nginx vhost as well.
</para>
<para>
An automatic setup is not supported by piwik, so you need to configure piwik itself in the browser-based setup.
</para>
<section>
<title>Database Setup</title>
<para>
You also need to configure a MariaDB or MySQL database and user for piwik yourself,
and enter those credentials in your browser.
You can use passwordless database authentication via the UNIX_SOCKET authentication plugin
with the following SQL commands:
<programlisting>
INSTALL PLUGIN unix_socket SONAME 'auth_socket';
ALTER USER root IDENTIFIED VIA unix_socket;
CREATE DATABASE piwik;
CREATE USER 'piwik'@'localhost' IDENTIFIED VIA unix_socket;
GRANT ALL PRIVILEGES ON piwik.* TO 'piwik'@'localhost';
</programlisting>
Then fill in <literal>piwik</literal> as database user and database name, and leave the password field blank.
This works with MariaDB and MySQL. This authentication works by allowing only the <literal>piwik</literal> unix
user to authenticate as the <literal>piwik</literal> database user (without needing a password), but no other users.
For more information on passwordless login, see
<link xlink:href="https://mariadb.com/kb/en/mariadb/unix_socket-authentication-plugin/" />.
</para>
<para>
Of course, you can use password based authentication as well, e.g. when the database is not on the same host.
</para>
</section>
<section>
<title>Backup</title>
<para>
You only need to take backups of your MySQL database and the
<filename>/var/lib/piwik/config/config.ini.php</filename> file.
Use a user in the <literal>piwik</literal> group or root to access the file.
For more information, see <link xlink:href="https://piwik.org/faq/how-to-install/faq_138/" />.
</para>
</section>
<section>
<title>Issues</title>
<itemizedlist>
<listitem>
<para>
Piwik's file integrity check will warn you.
This is due to the patches necessary for NixOS, you can safely ignore this.
</para>
</listitem>
<listitem>
<para>
Piwik will warn you that the JavaScript tracker is not writable.
This is because it is located in the read-only nix store.
You can safely ignore this warning unless you use a plugin that requires write access to the JavaScript tracker.
</para>
</listitem>
<listitem>
<para>
Sending mail from piwik, e.g. for the password reset function, might not work out of the box:
there is a problem with using <command>sendmail</command> from <literal>php-fpm</literal> that is
being investigated at <link xlink:href="https://github.com/NixOS/nixpkgs/issues/26611" />.
If you have (or do not have) this problem as well, please report it. As a workaround, you can enable
SMTP as the method for sending mail in piwik's <quote>General Settings</quote> > <quote>Mail Server Settings</quote>.
</para>
</listitem>
</itemizedlist>
</section>
<section>
<title>Using Web Servers other than nginx</title>
<para>
You can use other web servers by forwarding calls for <filename>index.php</filename> and
<filename>piwik.php</filename> to the <literal>/run/phpfpm-piwik.sock</literal> fastcgi unix socket.
You can use the nginx configuration in the module code as a reference for what else should be configured.
</para>
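<para>
As an illustration, a forwarding rule for Apache httpd with <literal>mod_proxy_fcgi</literal>
might look like this (a hedged sketch, not a tested configuration; the document root has to
point at piwik's <filename>share</filename> directory):
<programlisting>
&lt;LocationMatch "^/(index|piwik)\.php$"&gt;
  SetHandler "proxy:unix:/run/phpfpm-piwik.sock|fcgi://localhost"
&lt;/LocationMatch&gt;
</programlisting>
</para>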
</section>
</chapter>

View File

@ -0,0 +1,219 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.piwik;
user = "piwik";
dataDir = "/var/lib/${user}";
pool = user;
# it's not possible to use /run/phpfpm/${pool}.sock because /run/phpfpm/ is root:root 0770,
# and therefore is not accessible by the web server.
phpSocket = "/run/phpfpm-${pool}.sock";
phpExecutionUnit = "phpfpm-${pool}";
databaseService = "mysql.service";
in {
options = {
services.piwik = {
# NixOS PR for database setup: https://github.com/NixOS/nixpkgs/pull/6963
# piwik issue for automatic piwik setup: https://github.com/piwik/piwik/issues/10257
# TODO: find a nice way to do this when more NixOS MySQL and / or piwik automatic setup stuff is implemented.
enable = mkOption {
type = types.bool;
default = false;
description = ''
Enable piwik web analytics with php-fpm backend.
'';
};
webServerUser = mkOption {
type = types.str;
example = "nginx";
description = ''
Name of the owner of the ${phpSocket} fastcgi socket for piwik.
If you want to use a web server other than nginx, you need to set this to that server's user
and pass fastcgi requests for `index.php` and `piwik.php` to this socket.
'';
};
phpfpmProcessManagerConfig = mkOption {
type = types.str;
default = ''
; default phpfpm process manager settings
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
; log worker's stdout, but this has a performance hit
catch_workers_output = yes
'';
description = ''
Settings for phpfpm's process manager. You might need to change this depending on the load for piwik.
'';
};
nginx = mkOption {
# TODO: for maximum flexibility, it would be nice to use nginx's vhost_options module
# but this only makes sense if we can somehow specify defaults suitable for piwik.
# But users can always copy the piwik nginx config to their configuration.nix and customize it.
type = types.nullOr (types.submodule {
options = {
virtualHost = mkOption {
type = types.str;
default = "piwik.${config.networking.hostName}";
example = "piwik.\${config.networking.hostName}";
description = ''
Name of the nginx virtualhost to use and set up.
'';
};
enableSSL = mkOption {
type = types.bool;
default = true;
description = "Whether to enable https.";
};
forceSSL = mkOption {
type = types.bool;
default = true;
description = "Whether to always redirect to https.";
};
enableACME = mkOption {
type = types.bool;
default = true;
description = "Whether to ask Let's Encrypt to sign a certificate for this vhost.";
};
};
});
default = null;
example = { virtualHost = "stats.\${config.networking.hostName}"; };
description = ''
The options to use to configure an nginx virtualHost.
If null (the default), no nginx virtualHost will be configured.
'';
};
};
};
config = mkIf cfg.enable {
users.extraUsers.${user} = {
isSystemUser = true;
createHome = true;
home = dataDir;
group = user;
};
users.extraGroups.${user} = {};
systemd.services.piwik_setup_update = {
# everything needs to be set up and up to date before piwik php files are executed
requiredBy = [ "${phpExecutionUnit}.service" ];
before = [ "${phpExecutionUnit}.service" ];
# the update part of the script can only work if the database is already up and running
requires = [ databaseService ];
after = [ databaseService ];
path = [ pkgs.piwik ];
serviceConfig = {
Type = "oneshot";
User = user;
# hide config.ini.php in particular from others
UMask = "0007";
Environment = "PIWIK_USER_PATH=${dataDir}";
# chown + chmod in preStart needs root
PermissionsStartOnly = true;
};
# correct ownership and permissions in case they're not correct anymore,
# e.g. after restoring from backup or moving from another system.
# Note that ${dataDir}/config/config.ini.php might contain the MySQL password.
preStart = ''
chown -R ${user}:${user} ${dataDir}
chmod -R ug+rwX,o-rwx ${dataDir}
'';
script = ''
# Use User-Private Group scheme to protect piwik data, but allow administration / backup via piwik group
# Copy config folder
chmod g+s "${dataDir}"
cp -r "${pkgs.piwik}/config" "${dataDir}/"
chmod -R u+rwX,g+rwX,o-rwx "${dataDir}"
# check whether user setup has already been done
if test -f "${dataDir}/config/config.ini.php"; then
# then execute possibly pending database upgrade
piwik-console core:update --yes
fi
'';
};
systemd.services.${phpExecutionUnit} = {
# stop phpfpm on package upgrade, do database upgrade via piwik_setup_update, and then restart
restartTriggers = [ pkgs.piwik ];
# stop config.ini.php from getting written with read permission for others
serviceConfig.UMask = "0007";
};
services.phpfpm.poolConfigs = {
${pool} = ''
listen = "${phpSocket}"
listen.owner = ${cfg.webServerUser}
listen.group = root
listen.mode = 0600
user = ${user}
env[PIWIK_USER_PATH] = ${dataDir}
${cfg.phpfpmProcessManagerConfig}
'';
};
services.nginx.virtualHosts = mkIf (cfg.nginx != null) {
# References:
# https://fralef.me/piwik-hardening-with-nginx-and-php-fpm.html
# https://github.com/perusio/piwik-nginx
${cfg.nginx.virtualHost} = {
root = "${pkgs.piwik}/share";
enableSSL = cfg.nginx.enableSSL;
enableACME = cfg.nginx.enableACME;
forceSSL = cfg.nginx.forceSSL;
locations."/" = {
index = "index.php";
};
# allow index.php for webinterface
locations."= /index.php".extraConfig = ''
fastcgi_pass unix:${phpSocket};
'';
# allow piwik.php for tracking
locations."= /piwik.php".extraConfig = ''
fastcgi_pass unix:${phpSocket};
'';
# Any other attempt to access any php files is forbidden
locations."~* ^.+\\.php$".extraConfig = ''
return 403;
'';
# Disallow access to unneeded directories
# config and tmp are already removed
locations."~ ^/(?:core|lang|misc)/".extraConfig = ''
return 403;
'';
# Disallow access to several helper files
locations."~* \\.(?:bat|git|ini|sh|txt|tpl|xml|md)$".extraConfig = ''
return 403;
'';
# No crawling of this site for bots that obey robots.txt - no useful information here.
locations."= /robots.txt".extraConfig = ''
return 200 "User-agent: *\nDisallow: /\n";
'';
# let browsers cache piwik.js
locations."= /piwik.js".extraConfig = ''
expires 1M;
'';
};
};
};
meta = {
doc = ./piwik-doc.xml;
maintainers = with lib.maintainers; [ florianjacob ];
};
}

View File

@ -16,7 +16,7 @@ let
   phpMajorVersion = head (splitString "." php.version);
-  mod_perl = pkgs.mod_perl.override { apacheHttpd = httpd; };
+  mod_perl = pkgs.apacheHttpdPackages.mod_perl.override { apacheHttpd = httpd; };
   defaultListen = cfg: if cfg.enableSSL
     then [{ip = "*"; port = 443;}]

View File

@ -36,7 +36,11 @@ in
     dataDir = mkOption {
       default = "/var/lib/caddy";
       type = types.path;
-      description = "The data directory, for storing certificates.";
+      description = ''
+        The data directory, for storing certificates. Before 17.09, this
+        would create a .caddy directory. With 17.09 the contents of the
+        .caddy directory are in the specified data directory instead.
+      '';
     };

     package = mkOption {
@ -50,17 +54,32 @@
   config = mkIf cfg.enable {
     systemd.services.caddy = {
       description = "Caddy web server";
-      after = [ "network.target" ];
+      after = [ "network-online.target" ];
       wantedBy = [ "multi-user.target" ];
+      environment = mkIf (versionAtLeast config.system.stateVersion "17.09")
+        { CADDYPATH = cfg.dataDir; };
       serviceConfig = {
-        ExecStart = ''${cfg.package.bin}/bin/caddy -conf=${configFile} \
+        ExecStart = ''
+          ${cfg.package.bin}/bin/caddy -root=/var/tmp -conf=${configFile} \
           -ca=${cfg.ca} -email=${cfg.email} ${optionalString cfg.agree "-agree"}
         '';
+        ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
         Type = "simple";
         User = "caddy";
         Group = "caddy";
+        Restart = "on-failure";
+        StartLimitInterval = 86400;
+        StartLimitBurst = 5;
         AmbientCapabilities = "cap_net_bind_service";
-        LimitNOFILE = 8192;
+        CapabilityBoundingSet = "cap_net_bind_service";
+        NoNewPrivileges = true;
+        LimitNPROC = 64;
+        LimitNOFILE = 1048576;
+        PrivateTmp = true;
+        PrivateDevices = true;
+        ProtectHome = true;
+        ProtectSystem = "full";
+        ReadWriteDirectories = cfg.dataDir;
       };
     };

View File

@ -0,0 +1,69 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.minio;
in
{
meta.maintainers = [ maintainers.bachp ];
options.services.minio = {
enable = mkEnableOption "Minio Object Storage";
listenAddress = mkOption {
default = ":9000";
type = types.str;
description = "Listen on a specific IP address and port.";
};
dataDir = mkOption {
default = "/var/lib/minio/data";
type = types.path;
description = "The data directory, for storing the objects.";
};
configDir = mkOption {
default = "/var/lib/minio/config";
type = types.path;
description = "The config directory, for the access keys and other settings.";
};
package = mkOption {
default = pkgs.minio;
defaultText = "pkgs.minio";
type = types.package;
description = "Minio package to use.";
};
};
config = mkIf cfg.enable {
systemd.services.minio = {
description = "Minio Object Storage";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
# Make sure directories exist with correct owner
mkdir -p ${cfg.configDir}
chown -R minio:minio ${cfg.configDir}
mkdir -p ${cfg.dataDir}
chown minio:minio ${cfg.dataDir}
'';
serviceConfig = {
PermissionsStartOnly = true;
ExecStart = "${cfg.package}/bin/minio server --address ${cfg.listenAddress} --config-dir=${cfg.configDir} ${cfg.dataDir}";
Type = "simple";
User = "minio";
Group = "minio";
LimitNOFILE = 65536;
};
};
users.extraUsers.minio = {
group = "minio";
uid = config.ids.uids.minio;
};
users.extraGroups.minio.gid = config.ids.uids.minio;
};
}

View File

@ -208,13 +208,13 @@ in {
   config = mkIf cfg.enable {
     systemd.user.services.compton = {
       description = "Compton composite manager";
-      wantedBy = [ "default.target" ];
+      wantedBy = [ "graphical-session.target" ];
+      partOf = [ "graphical-session.target" ];
       serviceConfig = {
         ExecStart = "${cfg.package}/bin/compton --config ${configFile}";
         RestartSec = 3;
         Restart = "always";
       };
-      environment.DISPLAY = ":0";
     };

     environment.systemPackages = [ cfg.package ];

View File

@ -35,10 +35,10 @@ let
     chmod -R a+w $out/share/gsettings-schemas/nixos-gsettings-overrides
     cat - > $out/share/gsettings-schemas/nixos-gsettings-overrides/glib-2.0/schemas/nixos-defaults.gschema.override <<- EOF
       [org.gnome.desktop.background]
-      picture-uri='${pkgs.nixos-artwork}/share/artwork/gnome/Gnome_Dark.png'
+      picture-uri='${pkgs.nixos-artwork.wallpapers.gnome-dark}/share/artwork/gnome/Gnome_Dark.png'
       [org.gnome.desktop.screensaver]
-      picture-uri='${pkgs.nixos-artwork}/share/artwork/gnome/Gnome_Dark.png'
+      picture-uri='${pkgs.nixos-artwork.wallpapers.gnome-dark}/share/artwork/gnome/Gnome_Dark.png'
       ${cfg.extraGSettingsOverrides}
     EOF

View File

@ -7,7 +7,7 @@ let
   xcfg = config.services.xserver;
   cfg = xcfg.desktopManager.plasma5;

-  inherit (pkgs) kdeWrapper kdeApplications plasma5 libsForQt5 qt5 xorg;
+  inherit (pkgs) kdeApplications plasma5 libsForQt5 qt5 xorg;

 in
@ -30,24 +30,12 @@
         '';
       };

-      extraPackages = mkOption {
-        type = types.listOf types.package;
-        default = [];
-        description = ''
-          KDE packages that need to be installed system-wide.
-        '';
-      };
     };
   };

   config = mkMerge [
-    (mkIf (cfg.extraPackages != []) {
-      environment.systemPackages = [ (kdeWrapper cfg.extraPackages) ];
-    })
     (mkIf (xcfg.enable && cfg.enable) {
       services.xserver.desktopManager.session = singleton {
         name = "plasma5";
@ -64,8 +52,8 @@
       };

       security.wrappers = {
-        kcheckpass.source = "${plasma5.plasma-workspace.out}/lib/libexec/kcheckpass";
+        kcheckpass.source = "${lib.getBin plasma5.plasma-workspace}/lib/libexec/kcheckpass";
-        "start_kdeinit".source = "${pkgs.kinit.out}/lib/libexec/kf5/start_kdeinit";
+        "start_kdeinit".source = "${lib.getBin pkgs.kinit}/lib/libexec/kf5/start_kdeinit";
       };

       environment.systemPackages = with pkgs; with qt5; with libsForQt5; with plasma5; with kdeApplications;
@ -139,10 +127,14 @@
           plasma-workspace
           plasma-workspace-wallpapers

+          dolphin
           dolphin-plugins
           ffmpegthumbs
           kdegraphics-thumbnailers
+          khelpcenter
           kio-extras
+          konsole
+          oxygen
           print-manager

           breeze-icons
@ -163,16 +155,6 @@
         ++ lib.optional config.services.colord.enable colord-kde
         ++ lib.optionals config.services.samba.enable [ kdenetwork-filesharing pkgs.samba ];

-      services.xserver.desktopManager.plasma5.extraPackages =
-        with kdeApplications; with plasma5;
-        [
-          khelpcenter
-          oxygen
-          dolphin
-          konsole
-        ];

       environment.pathsToLink = [ "/share" ];

       environment.etc = singleton {
@ -183,7 +165,6 @@
       environment.variables = {
         # Enable GTK applications to load SVG icons
         GDK_PIXBUF_MODULE_FILE = "${pkgs.librsvg.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache";
-        QT_PLUGIN_PATH = "/run/current-system/sw/lib/qt5/plugins";
       };

       fonts.fonts = with pkgs; [ noto-fonts hack-font ];
@ -209,7 +190,6 @@
       services.xserver.displayManager.sddm = {
         theme = "breeze";
-        package = pkgs.sddmPlasma5;
       };

       security.pam.services.kde = { allowNullPassword = true; };

View File

@ -122,6 +122,9 @@ let
         source ~/.xprofile
       fi

+      # Start systemd user services for graphical sessions
+      ${config.systemd.package}/bin/systemctl --user start graphical-session.target
+
       # Allow the user to setup a custom session type.
       if test -x ~/.xsession; then
         exec ~/.xsession
@ -164,6 +167,9 @@
       ''}

       test -n "$waitPID" && wait "$waitPID"
+
+      ${config.systemd.package}/bin/systemctl --user stop graphical-session.target
+
       exit 0
     '';
@ -325,6 +331,13 @@
   config = {
     services.xserver.displayManager.xserverBin = "${xorg.xorgserver.out}/bin/X";

+    systemd.user.targets.graphical-session = {
+      unitConfig = {
+        RefuseManualStart = false;
+        StopWhenUnneeded = false;
+      };
+    };
   };
imports = [ imports = [

View File

@ -111,7 +111,7 @@ in
     background = mkOption {
       type = types.str;
-      default = "${pkgs.nixos-artwork}/share/artwork/gnome/Gnome_Dark.png";
+      default = "${pkgs.nixos-artwork.wallpapers.gnome-dark}/share/artwork/gnome/Gnome_Dark.png";
       description = ''
         The background image or color to use.
       '';

Some files were not shown because too many files have changed in this diff.