Merge commit '93cd0685c5ac4d8f21d8586d3e5c45cd7394fab9' into gcc-modernize-builder

This commit is contained in:
John Ericson 2017-12-07 01:49:31 -05:00
commit 3a59cd87f2
3957 changed files with 75662 additions and 53190 deletions

.github/CODEOWNERS
View File

@ -0,0 +1,23 @@
# CODEOWNERS file
#
# This file is used to describe who owns what in this repository. This file does not
# replace `meta.maintainers` but is instead used for other things than derivations
# and modules, like documentation, package sets, and other assets.
#
# For documentation on this file, see https://help.github.com/articles/about-codeowners/
# Mentioned users will get code review requests.
# Python-related code and docs
pkgs/top-level/python-packages.nix @FRidh
pkgs/development/interpreters/python/* @FRidh
pkgs/development/python-modules/* @FRidh
doc/languages-frameworks/python.md @FRidh
# Bootstrapping and core infra
pkgs/stdenv/ @Ericson2314
pkgs/build-support/cc-wrapper/ @Ericson2314
# Darwin-related
pkgs/stdenv/darwin/* @copumpkin @LnL7
pkgs/os-specific/darwin/* @LnL7
pkgs/os-specific/darwin/apple-source-releases/* @copumpkin

View File

@ -15,7 +15,7 @@ under the terms of [COPYING](../COPYING), which is an MIT-like license.
* Format the commits in the following way: * Format the commits in the following way:
``` ```
(pkg-name | service-name): (from -> to | init at version | refactor | etc) (pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc)
(Motivation for change. Additional information.) (Motivation for change. Additional information.)
``` ```
@ -24,10 +24,10 @@ under the terms of [COPYING](../COPYING), which is an MIT-like license.
* nginx: init at 2.0.1 * nginx: init at 2.0.1
* firefox: 3.0 -> 3.1.1 * firefox: 3.0 -> 3.1.1
* hydra service: add bazBaz option * nixos/hydra: add bazBaz option
Dual baz behavior is needed to do foo. Dual baz behavior is needed to do foo.
* nginx service: refactor config generation * nixos/nginx: refactor config generation
The old config generation system used impure shell scripts and could break in specific circumstances (see #1234). The old config generation system used impure shell scripts and could break in specific circumstances (see #1234).

View File

@ -3,6 +3,8 @@
###### Things done ###### Things done
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- [ ] Tested using sandboxing - [ ] Tested using sandboxing
([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS,
or option `build-use-sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) or option `build-use-sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file)
@ -11,6 +13,7 @@
- [ ] NixOS - [ ] NixOS
- [ ] macOS - [ ] macOS
- [ ] Linux - [ ] Linux
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"` - [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`) - [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md). - [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

View File

@ -1,14 +0,0 @@
{
"userBlacklist": [
"civodul",
"jhasse",
"shlevy",
"bbenoist"
],
"alwaysNotifyForPaths": [
{ "name": "FRidh", "files": ["pkgs/top-level/python-packages.nix", "pkgs/development/interpreters/python/*", "pkgs/development/python-modules/*" ] },
{ "name": "LnL7", "files": ["pkgs/stdenv/darwin/*", "pkgs/os-specific/darwin/*"] },
{ "name": "copumpkin", "files": ["pkgs/stdenv/darwin/*", "pkgs/os-specific/darwin/apple-source-releases/*"] }
],
"fileBlacklist": ["pkgs/top-level/all-packages.nix"]
}

View File

@ -12,15 +12,21 @@ matrix:
script: script:
- ./maintainers/scripts/travis-nox-review-pr.sh nixpkgs-verify nixpkgs-manual nixpkgs-tarball nixpkgs-unstable - ./maintainers/scripts/travis-nox-review-pr.sh nixpkgs-verify nixpkgs-manual nixpkgs-tarball nixpkgs-unstable
- ./maintainers/scripts/travis-nox-review-pr.sh nixos-options nixos-manual - ./maintainers/scripts/travis-nox-review-pr.sh nixos-options nixos-manual
env:
- BUILD_TYPE="Test Nixpkgs evaluation & NixOS manual build"
- os: linux - os: linux
sudo: required sudo: required
dist: trusty dist: trusty
before_script: before_script:
- sudo mount -o remount,exec,size=2G,mode=755 /run/user - sudo mount -o remount,exec,size=2G,mode=755 /run/user
script: ./maintainers/scripts/travis-nox-review-pr.sh nox pr script: ./maintainers/scripts/travis-nox-review-pr.sh nox pr
env:
- BUILD_TYPE="Build affected packages (Linux)"
- os: osx - os: osx
osx_image: xcode7.3 osx_image: xcode7.3
script: ./maintainers/scripts/travis-nox-review-pr.sh nox pr script: ./maintainers/scripts/travis-nox-review-pr.sh nox pr
env:
- BUILD_TYPE="Build affected packages (macOS)"
env: env:
global: global:
- GITHUB_TOKEN=5edaaf1017f691ed34e7f80878f8f5fbd071603f - GITHUB_TOKEN=5edaaf1017f691ed34e7f80878f8f5fbd071603f

View File

@ -38,5 +38,5 @@ For pull-requests, please rebase onto nixpkgs `master`.
Communication: Communication:
* [Mailing list](http://lists.science.uu.nl/mailman/listinfo/nix-dev) * [Mailing list](https://groups.google.com/forum/#!forum/nix-devel)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos) * [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)

View File

@ -243,5 +243,218 @@ set of packages.
</section> </section>
<section xml:id="sec-declarative-package-management">
<title>Declarative Package Management</title>
<section xml:id="sec-building-environment">
<title>Build an environment</title>
<para>
Using <literal>packageOverrides</literal>, it is possible to manage
packages declaratively. This means that we can list all of our desired
packages within a declarative Nix expression. For example, to have
<literal>aspell</literal>, <literal>bc</literal>,
<literal>ffmpeg</literal>, <literal>coreutils</literal>,
<literal>gdb</literal>, <literal>nixUnstable</literal>,
<literal>emscripten</literal>, <literal>jq</literal>,
<literal>nox</literal>, and <literal>silver-searcher</literal>, we could
use the following in <filename>~/.config/nixpkgs/config.nix</filename>:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [ aspell bc coreutils gdb ffmpeg nixUnstable emscripten jq nox silver-searcher ];
};
};
}
</screen>
<para>
To install it into our environment, you can just run <literal>nix-env -iA
nixpkgs.myPackages</literal>. If you want to load the packages to be built
from a working copy of <literal>nixpkgs</literal> you just run
<literal>nix-env -f. -iA myPackages</literal>. To explore what's been
installed, just look through <filename>~/.nix-profile/</filename>. You can
see that a lot of stuff has been installed. Some of this stuff is useful;
some of it isn't. Let's tell Nixpkgs to only link the stuff that we want:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [ aspell bc coreutils gdb ffmpeg nixUnstable emscripten jq nox silver-searcher ];
pathsToLink = [ "/share" "/bin" ];
};
};
}
</screen>
<para>
<literal>pathsToLink</literal> tells Nixpkgs to only link the paths listed,
which gets rid of the extra stuff in the profile.
<filename>/bin</filename> and <filename>/share</filename> are good
defaults for a user environment, getting rid of the clutter. If you are
running Nix on macOS, you may want to add another path as well,
<filename>/Applications</filename>, which makes GUI apps available.
</para>
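<para>
For example (a sketch only, reusing the package list from above), a macOS
user might write:
</para>
<screen>
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [ aspell bc coreutils gdb ffmpeg nixUnstable emscripten jq nox silver-searcher ];
      pathsToLink = [ "/share" "/bin" "/Applications" ];
    };
  };
}
</screen>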
</section>
<section xml:id="sec-getting-documentation">
<title>Getting documentation</title>
<para>
After building that new environment, look through
<filename>~/.nix-profile</filename> to make sure everything we wanted is
there. Discerning readers will note that some files are missing. Look
inside <filename>~/.nix-profile/share/man/man1/</filename> to verify this.
There are no man pages for any of the Nix tools! This is because some
packages like Nix have multiple outputs for things like documentation (see
section 4). Let's make Nix install those as well.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [ aspell bc coreutils ffmpeg nixUnstable emscripten jq nox silver-searcher ];
pathsToLink = [ "/share/man" "/share/doc" /bin" ];
extraOutputsToInstall = [ "man" "doc" ];
};
};
}
</screen>
<para>
This provides us with some useful documentation for using our packages.
However, if we actually want those manpages to be detected by man, we need
to set up our environment. This can also be managed within Nix
expressions.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myProfile = writeText "my-profile" ''
export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
'';
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
(runCommand "profile" {} ''
mkdir -p $out/etc/profile.d
cp ${myProfile} $out/etc/profile.d/my-profile.sh
'')
aspell
bc
coreutils
ffmpeg
man
nixUnstable
emscripten
jq
nox
silver-searcher
];
pathsToLink = [ "/share/man" "/share/doc" /bin" "/etc" ];
extraOutputsToInstall = [ "man" "doc" ];
};
};
}
</screen>
<para>
For this to work fully, you must also have this script sourced when you
are logged in. Try adding something like this to your
<filename>~/.profile</filename> file:
</para>
<screen>
#!/bin/sh
if [ -d $HOME/.nix-profile/etc/profile.d ]; then
for i in $HOME/.nix-profile/etc/profile.d/*.sh; do
if [ -r $i ]; then
. $i
fi
done
fi
</screen>
<para>
Now just run <literal>source $HOME/.profile</literal> and you can start
loading man pages from your environment.
</para>
</section>
<section xml:id="sec-gnu-info-setup">
<title>GNU info setup</title>
<para>
Configuring GNU info is a little bit trickier than man pages. To work
correctly, info needs a database to be generated. This can be done with
some small modifications to our environment scripts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myProfile = writeText "my-profile" ''
export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
export INFOPATH=$HOME/.nix-profile/share/info:/nix/var/nix/profiles/default/share/info:/usr/share/info
'';
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [
(runCommand "profile" {} ''
mkdir -p $out/etc/profile.d
cp ${myProfile} $out/etc/profile.d/my-profile.sh
'')
aspell
bc
coreutils
ffmpeg
man
nixUnstable
emscripten
jq
nox
silver-searcher
texinfoInteractive
];
pathsToLink = [ "/share/man" "/share/doc" "/share/info" "/bin" "/etc" ];
extraOutputsToInstall = [ "man" "doc" "info" ];
postBuild = ''
if [ -x $out/bin/install-info -a -w $out/share/info ]; then
shopt -s nullglob
for i in $out/share/info/*.info $out/share/info/*.info.gz; do
$out/bin/install-info $i $out/share/info/dir
done
fi
'';
};
};
}
</screen>
<para>
<literal>postBuild</literal> tells Nixpkgs to run a command after building
the environment. In this case, <literal>install-info</literal> adds the
installed info pages to <literal>dir</literal> which is GNU info's default
root node. Note that <literal>texinfoInteractive</literal> is added to the
environment to provide the <literal>install-info</literal> command.
</para>
</section>
</section>
</chapter> </chapter>

View File

@ -37,8 +37,9 @@
</para> </para>
<para> <para>
In Nixpkgs, these three platforms are defined as attribute sets under the names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>, and <literal>targetPlatform</literal>. In Nixpkgs, these three platforms are defined as attribute sets under the names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>, and <literal>targetPlatform</literal>.
All three are always defined at the top level, so one can get at them just like a dependency in a function that is imported with <literal>callPackage</literal>: All three are always defined as attributes in the standard environment, and at the top level. That means one can get at them just like a dependency in a function that is imported with <literal>callPackage</literal>:
<programlisting>{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...</programlisting> <programlisting>{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...buildPlatform...</programlisting>, or just off <varname>stdenv</varname>:
<programlisting>{ stdenv, fooDep, barDep, .. }: ...stdenv.buildPlatform...</programlisting>.
</para> </para>
<variablelist> <variablelist>
<varlistentry> <varlistentry>
@ -79,11 +80,6 @@
</listitem> </listitem>
</varlistentry> </varlistentry>
</variablelist> </variablelist>
<note><para>
If you dig around nixpkgs, you may notice there is also <varname>stdenv.cross</varname>.
This field defined as <varname>hostPlatform</varname> when the host and build platforms differ, but otherwise not defined at all.
This field is obsolete and will soon disappear—please do not use it.
</para></note>
<para> <para>
The exact schema these fields follow is a bit ill-defined due to a long and convoluted evolution, but this is slowly being cleaned up. The exact schema these fields follow is a bit ill-defined due to a long and convoluted evolution, but this is slowly being cleaned up.
You can see examples of ones used in practice in <literal>lib.systems.examples</literal>; note how they are not all very consistent. You can see examples of ones used in practice in <literal>lib.systems.examples</literal>; note how they are not all very consistent.

View File

@ -26,7 +26,7 @@ pkgs.stdenv.mkDerivation {
extraHeader = ''xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" ''; extraHeader = ''xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" '';
in '' in ''
{ {
pandoc '${inputFile}' -w docbook ${lib.optionalString useChapters "--chapters"} \ pandoc '${inputFile}' -w docbook ${lib.optionalString useChapters "--top-level-division=chapter"} \
--smart \ --smart \
| sed -e 's|<ulink url=|<link xlink:href=|' \ | sed -e 's|<ulink url=|<link xlink:href=|' \
-e 's|</ulink>|</link>|' \ -e 's|</ulink>|</link>|' \

View File

@ -358,8 +358,8 @@
<para> <para>
<varname>pkgs.dockerTools</varname> is a set of functions for creating and <varname>pkgs.dockerTools</varname> is a set of functions for creating and
manipulating Docker images according to the manipulating Docker images according to the
<link xlink:href="https://github.com/docker/docker/blob/master/image/spec/v1.md#docker-image-specification-v100"> <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120">
Docker Image Specification v1.0.0 Docker Image Specification v1.2.0
</link>. Docker itself is not used to perform any of the operations done by these </link>. Docker itself is not used to perform any of the operations done by these
functions. functions.
</para> </para>
@ -493,8 +493,8 @@
<varname>config</varname> is used to specify the configuration of the <varname>config</varname> is used to specify the configuration of the
containers that will be started off the built image in Docker. containers that will be started off the built image in Docker.
The available options are listed in the The available options are listed in the
<link xlink:href="https://github.com/docker/docker/blob/master/image/spec/v1.md#container-runconfig-field-descriptions"> <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions">
Docker Image Specification v1.0.0 Docker Image Specification v1.2.0
</link>. </link>.
</para> </para>
</callout> </callout>

View File

@ -2,60 +2,120 @@
xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-beam"> xml:id="sec-beam">
<title>Beam Languages (Erlang &amp; Elixir)</title> <title>BEAM Languages (Erlang, Elixir &amp; LFE)</title>
<section xml:id="beam-introduction"> <section xml:id="beam-introduction">
<title>Introduction</title> <title>Introduction</title>
<para> <para>
In this document and related Nix expressions we use the term In this document and related Nix expressions, we use the term,
<emphasis>Beam</emphasis> to describe the environment. Beam is <emphasis>BEAM</emphasis>, to describe the environment. BEAM is the name
the name of the Erlang Virtial Machine and, as far as we know, of the Erlang Virtual Machine and, as far as we're concerned, from a
from a packaging perspective all languages that run on Beam are packaging perspective, all languages that run on the BEAM are
interchangable. The things that do change, like the build interchangeable. That which varies, like the build system, is transparent
system, are transperant to the users of the package. So we make to users of any given BEAM package, so we make no distinction.
no distinction.
</para> </para>
</section> </section>
<section xml:id="build-tools"> <section xml:id="beam-structure">
<title>Structure</title>
<para>
All BEAM-related expressions are available via the top-level
<literal>beam</literal> attribute, which includes:
</para>
<itemizedlist>
<listitem>
<para>
<literal>interpreters</literal>: a set of compilers running on the
BEAM, including multiple Erlang/OTP versions
(<literal>beam.interpreters.erlangR19</literal>, etc), Elixir
(<literal>beam.interpreters.elixir</literal>) and LFE
(<literal>beam.interpreters.lfe</literal>).
</para>
</listitem>
<listitem>
<para>
<literal>packages</literal>: a set of package sets, each compiled with
a specific Erlang/OTP version, e.g.
<literal>beam.packages.erlangR19</literal>.
</para>
</listitem>
</itemizedlist>
<para>
The default Erlang compiler, defined by
<literal>beam.interpreters.erlang</literal>, is aliased as
<literal>erlang</literal>. The default BEAM package set is defined by
<literal>beam.packages.erlang</literal> and aliased at the top level as
<literal>beamPackages</literal>.
</para>
<para>
To create a package set built with a custom Erlang version, use the
lambda, <literal>beam.packagesWith</literal>, which accepts an Erlang/OTP
derivation and produces a package set similar to
<literal>beam.packages.erlang</literal>.
</para>
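<para>
For example (a sketch only; the only attributes assumed here are the ones
named above), a package set built with Erlang/OTP R19 can be obtained like
this:
</para>
<programlisting>
with import &lt;nixpkgs> { };

# BEAM package set built with Erlang/OTP R19 instead of the default
beam.packagesWith beam.interpreters.erlangR19
</programlisting>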
<para>
Many Erlang/OTP distributions available in
<literal>beam.interpreters</literal> have versions with ODBC and/or Java
enabled. For example, there's
<literal>beam.interpreters.erlangR19_odbc_javac</literal>, which
corresponds to <literal>beam.interpreters.erlangR19</literal>.
</para>
<para xml:id="erlang-call-package">
We also provide the lambda,
<literal>beam.packages.erlang.callPackage</literal>, which simplifies
writing BEAM package definitions by injecting all packages from
<literal>beam.packages.erlang</literal> into the top-level context.
</para>
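<para>
As a rough illustration (the file name here is hypothetical), a package
definition like the Rebar3 example below can then be called as:
</para>
<programlisting>
beam.packages.erlang.callPackage ./hex2nix.nix { }
</programlisting>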
</section>
<section xml:id="build-tools">
<title>Build Tools</title> <title>Build Tools</title>
<section xml:id="build-tools-rebar3"> <section xml:id="build-tools-rebar3">
<title>Rebar3</title> <title>Rebar3</title>
<para> <para>
By default Rebar3 wants to manage it's own dependencies. In the By default, Rebar3 wants to manage its own dependencies. This is perfectly
normal non-Nix, this is perfectly acceptable. In the Nix world it acceptable in the normal, non-Nix setup, but in the Nix world, it is not.
is not. To support this we have created two versions of rebar3, To rectify this, we provide two versions of Rebar3:
<literal>rebar3</literal> and <literal>rebar3-open</literal>. The <itemizedlist>
<literal>rebar3</literal> version has been patched to remove the <listitem>
ability to download anything from it. If you are not running it a <para>
nix-shell or a nix-build then its probably not going to work for <literal>rebar3</literal>: patched to remove the ability to download
you. <literal>rebar3-open</literal> is the normal, un-modified anything. When not running it via <literal>nix-shell</literal> or
rebar3. It should work exactly as would any other version of <literal>nix-build</literal>, it's probably not going to work as
rebar3. Any Erlang package should rely on desired.
<literal>rebar3</literal> and thats really what you should be </para>
using too. </listitem>
<listitem>
<para>
<literal>rebar3-open</literal>: the normal, unmodified Rebar3. It
should work exactly as would any other version of Rebar3. Any Erlang
package should rely on <literal>rebar3</literal> instead. See <xref
linkend="rebar3-packages"/>.
</para>
</listitem>
</itemizedlist>
</para> </para>
</section> </section>
<section xml:id="build-tools-other"> <section xml:id="build-tools-other">
<title>Mix &amp; Erlang.mk</title> <title>Mix &amp; Erlang.mk</title>
<para> <para>
Both Mix and Erlang.mk work exactly as you would expect. There Both Mix and Erlang.mk work exactly as expected. There is a bootstrap
is a bootstrap process that needs to be run for both of process that needs to be run for both, however, which is supported by the
them. However, that is supported by the <literal>buildMix</literal> and <literal>buildErlangMk</literal>
<literal>buildMix</literal> and <literal>buildErlangMk</literal> derivations. derivations, respectively.
</para> </para>
</section> </section>
</section> </section>
<section xml:id="how-to-install-beam-packages"> <section xml:id="how-to-install-beam-packages">
<title>How to install Beam packages</title> <title>How to Install BEAM Packages</title>
<para> <para>
Beam packages are not registered in the top level simply because BEAM packages are not registered at the top level, simply because they are
they are not relevant to the vast majority of Nix users. They are not relevant to the vast majority of Nix users. They are installable using
installable using the <literal>beamPackages</literal> attribute the <literal>beam.packages.erlang</literal> attribute set (aliased as
set. <literal>beamPackages</literal>), which points to packages built by the
default Erlang/OTP version in Nixpkgs, as defined by
<literal>beam.interpreters.erlang</literal>.
You can list the avialable packages in the To list the available packages in
<literal>beamPackages</literal> with the following command: <literal>beamPackages</literal>, use the following command:
</para> </para>
<programlisting> <programlisting>
@ -69,115 +129,152 @@ beamPackages.meck meck-0.8.3
beamPackages.rebar3-pc pc-1.1.0 beamPackages.rebar3-pc pc-1.1.0
</programlisting> </programlisting>
<para> <para>
To install any of those packages into your profile, refer to them by To install any of those packages into your profile, refer to them by their
their attribute path (first column): attribute path (first column):
</para> </para>
<programlisting> <programlisting>
$ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</programlisting> </programlisting>
<para> <para>
The attribute path of any Beam packages corresponds to the name The attribute path of any BEAM package corresponds to the name of that
of that particular package in Hex or its OTP Application/Release name. particular package in <link xlink:href="https://hex.pm">Hex</link> or its
OTP Application/Release name.
</para> </para>
</section> </section>
<section xml:id="packaging-beam-applications"> <section xml:id="packaging-beam-applications">
<title>Packaging Beam Applications</title> <title>Packaging BEAM Applications</title>
<section xml:id="packaging-erlang-applications"> <section xml:id="packaging-erlang-applications">
<title>Erlang Applications</title> <title>Erlang Applications</title>
<section xml:id="rebar3-packages"> <section xml:id="rebar3-packages">
<title>Rebar3 Packages</title> <title>Rebar3 Packages</title>
<para> <para>
There is a Nix functional called The Nix function, <literal>buildRebar3</literal>, defined in
<literal>buildRebar3</literal>. We use this function to make a <literal>beam.packages.erlang.buildRebar3</literal> and aliased at the
derivation that understands how to build the rebar3 project. For top level, can be used to build a derivation that understands how to
example, the epression we use to build the <link build a Rebar3 project. For example, we can build <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> as
project follows. follows:
</para> </para>
<programlisting> <programlisting>
{stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }: { stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 rec { buildRebar3 rec {
name = "hex2nix"; name = "hex2nix";
version = "0.0.1"; version = "0.0.1";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "ericbmerritt"; owner = "ericbmerritt";
repo = "hex2nix"; repo = "hex2nix";
rev = "${version}"; rev = "${version}";
sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg"; sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg";
}; };
beamDeps = [ ibrowse jsx erlware_commons ]; beamDeps = [ ibrowse jsx erlware_commons ];
} }
</programlisting> </programlisting>
<para> <para>
The only visible difference between this derivation and Such derivations are callable with
something like <literal>stdenv.mkDerivation</literal> is that we <literal>beam.packages.erlang.callPackage</literal> (see <xref
have added <literal>erlangDeps</literal> to the derivation. If linkend="erlang-call-package"/>). To call this package using the normal
you add your Beam dependencies here they will be correctly <literal>callPackage</literal>, refer to dependency packages via
handled by the system. <literal>beamPackages</literal>, e.g.
<literal>beamPackages.ibrowse</literal>.
</para> </para>
<para> <para>
If your package needs to compile native code via Rebar's port Notably, <literal>buildRebar3</literal> includes
compilation mechenism. You should add <literal>compilePort = <literal>beamDeps</literal>, while
true;</literal> to the derivation. <literal>stdenv.mkDerivation</literal> does not. BEAM dependencies added
there will be correctly handled by the system.
</para>
<para>
If a package needs to compile native code via Rebar3's port compilation
mechanism, add <literal>compilePort = true;</literal> to the derivation.
</para> </para>
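<para>
A minimal sketch (the package name is made up) of such a derivation:
</para>
<programlisting>
{ buildRebar3, ibrowse }:

buildRebar3 {
  name = "my-port-example";
  version = "0.1.0";
  src = ./.;
  beamDeps = [ ibrowse ];
  compilePort = true; # run Rebar3's port compilation for native code
}
</programlisting>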
</section> </section>
<section xml:id="erlang-mk-packages"> <section xml:id="erlang-mk-packages">
<title>Erlang.mk Packages</title> <title>Erlang.mk Packages</title>
<para> <para>
Erlang.mk functions almost identically to Rebar. The only real Erlang.mk functions similarly to Rebar3, except we use
difference is that <literal>buildErlangMk</literal> is called <literal>buildErlangMk</literal> instead of
instead of <literal>buildRebar3</literal> <literal>buildRebar3</literal>.
</para> </para>
<programlisting> <programlisting>
{ buildErlangMk, fetchHex, cowlib, ranch }: { buildErlangMk, fetchHex, cowlib, ranch }:
buildErlangMk {
name = "cowboy";
version = "1.0.4";
src = fetchHex {
pkg = "cowboy";
version = "1.0.4";
sha256 =
"6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
};
beamDeps = [ cowlib ranch ];
meta = { buildErlangMk {
description = ''Small, fast, modular HTTP server written in name = "cowboy";
Erlang.''; version = "1.0.4";
license = stdenv.lib.licenses.isc;
homepage = "https://github.com/ninenines/cowboy"; src = fetchHex {
}; pkg = "cowboy";
version = "1.0.4";
sha256 = "6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
};
beamDeps = [ cowlib ranch ];
meta = {
description = ''
Small, fast, modular HTTP server written in Erlang
'';
license = stdenv.lib.licenses.isc;
homepage = https://github.com/ninenines/cowboy;
};
} }
</programlisting> </programlisting>
</section> </section>
<section xml:id="mix-packages"> <section xml:id="mix-packages">
<title>Mix Packages</title> <title>Mix Packages</title>
<para> <para>
Mix functions almost identically to Rebar. The only real Mix functions similarly to Rebar3, except we use
difference is that <literal>buildMix</literal> is called <literal>buildMix</literal> instead of <literal>buildRebar3</literal>.
instead of <literal>buildRebar3</literal>
</para> </para>
<programlisting> <programlisting>
{ buildMix, fetchHex, plug, absinthe }: { buildMix, fetchHex, plug, absinthe }:
buildMix { buildMix {
name = "absinthe_plug"; name = "absinthe_plug";
version = "1.0.0"; version = "1.0.0";
src = fetchHex { src = fetchHex {
pkg = "absinthe_plug"; pkg = "absinthe_plug";
version = "1.0.0"; version = "1.0.0";
sha256 = sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
"08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
}; };
beamDeps = [ plug absinthe];
beamDeps = [ plug absinthe ];
meta = { meta = {
description = ''A plug for Absinthe, an experimental GraphQL description = ''
toolkit''; A plug for Absinthe, an experimental GraphQL toolkit
'';
license = stdenv.lib.licenses.bsd3; license = stdenv.lib.licenses.bsd3;
homepage = "https://github.com/CargoSense/absinthe_plug"; homepage = https://github.com/CargoSense/absinthe_plug;
};
}
</programlisting>
<para>
Alternatively, we can use <literal>buildHex</literal> as a shortcut:
</para>
<programlisting>
{ buildHex, buildMix, plug, absinthe }:
buildHex {
name = "absinthe_plug";
version = "1.0.0";
sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
builder = buildMix;
beamDeps = [ plug absinthe ];
meta = {
description = ''
A plug for Absinthe, an experimental GraphQL toolkit
'';
license = stdenv.lib.licenses.bsd3;
homepage = https://github.com/CargoSense/absinthe_plug;
}; };
} }
</programlisting> </programlisting>
@ -185,18 +282,18 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</section> </section>
</section> </section>
<section xml:id="how-to-develop"> <section xml:id="how-to-develop">
<title>How to develop</title> <title>How to Develop</title>
<section xml:id="accessing-an-environment"> <section xml:id="accessing-an-environment">
<title>Accessing an Environment</title> <title>Accessing an Environment</title>
<para> <para>
Often, all you want to do is be able to access a valid Often, we simply want to access a valid environment that contains a
environment that contains a specific package and its specific package and its dependencies. We can accomplish that with the
dependencies. we can do that with the <literal>env</literal> <literal>env</literal> attribute of a derivation. For example, let's say
part of a derivation. For example, lets say we want to access an we want to access an Erlang REPL with <literal>ibrowse</literal> loaded
erlang repl with ibrowse loaded up. We could do the following. up. We could do the following:
</para> </para>
<programlisting> <programlisting>
~/w/nixpkgs nix-shell -A beamPackages.ibrowse.env --run "erl" $ nix-shell -A beamPackages.ibrowse.env --run "erl"
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false] Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V7.0 (abort with ^G) Eshell V7.0 (abort with ^G)
@ -237,20 +334,19 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
2> 2>
</programlisting> </programlisting>
<para> <para>
Notice the <literal>-A beamPackages.ibrowse.env</literal>.That Notice the <literal>-A beamPackages.ibrowse.env</literal>. That is the key
is the key to this functionality. to this functionality.
</para> </para>
</section> </section>
<section xml:id="creating-a-shell"> <section xml:id="creating-a-shell">
<title>Creating a Shell</title> <title>Creating a Shell</title>
<para> <para>
Getting access to an environment often isn't enough to do real Getting access to an environment often isn't enough to do real
development. Many times we need to create a development. Usually, we need to create a <literal>shell.nix</literal>
<literal>shell.nix</literal> file and do our development inside file and do our development inside of the environment specified therein.
of the environment specified by that file. This file looks a lot This file looks a lot like the packaging described above, except that
like the packaging described above. The main difference is that <literal>src</literal> points to the project root and we call the package
<literal>src</literal> points to project root and we call the directly.
package directly.
</para> </para>
<programlisting> <programlisting>
{ pkgs ? import &quot;&lt;nixpkgs&quot;&gt; {} }: { pkgs ? import &quot;&lt;nixpkgs&quot;&gt; {} }:
@ -264,18 +360,19 @@ let
name = "hex2nix"; name = "hex2nix";
version = "0.1.0"; version = "0.1.0";
src = ./.; src = ./.;
erlangDeps = [ ibrowse jsx erlware_commons ]; beamDeps = [ ibrowse jsx erlware_commons ];
}; };
drv = beamPackages.callPackage f {}; drv = beamPackages.callPackage f {};
in in
drv
drv
</programlisting> </programlisting>
<section xml:id="building-in-a-shell"> <section xml:id="building-in-a-shell">
<title>Building in a shell</title> <title>Building in a Shell (for Mix Projects)</title>
<para> <para>
We can leveral the support of the Derivation, regardless of We can leverage the support of the derivation, irrespective of the build
which build Derivation is called by calling the commands themselv.s derivation, by calling the commands themselves.
</para> </para>
<programlisting> <programlisting>
# ============================================================================= # =============================================================================
@ -335,42 +432,43 @@ analyze: build plt
</programlisting> </programlisting>
<para> <para>
If you add the <literal>shell.nix</literal> as described and Using a <literal>shell.nix</literal> as described (see <xref
user rebar as follows things should simply work. Aside from the linkend="creating-a-shell"/>) should just work. Aside from
<literal>test</literal>, <literal>plt</literal>, and <literal>test</literal>, <literal>plt</literal>, and
<literal>analyze</literal> the talks work just fine for all of <literal>analyze</literal>, the Make targets work just fine for all of the
the build Derivations. build derivations.
</para> </para>
</section> </section>
</section> </section>
</section> </section>
<section xml:id="generating-packages-from-hex-with-hex2nix"> <section xml:id="generating-packages-from-hex-with-hex2nix">
<title>Generating Packages from Hex with Hex2Nix</title> <title>Generating Packages from Hex with <literal>hex2nix</literal></title>
<para> <para>
Updating the Hex packages requires the use of the Updating the <link xlink:href="https://hex.pm">Hex</link> package set
<literal>hex2nix</literal> tool. Given the path to the Erlang requires <link
modules (usually xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>. Given the
<literal>pkgs/development/erlang-modules</literal>). It will path to the Erlang modules (usually
happily dump a file called <literal>pkgs/development/erlang-modules</literal>), it will dump a file
<literal>hex-packages.nix</literal>. That file will contain all called <literal>hex-packages.nix</literal>, containing all the packages that
the packages that use a recognized build system in Hex. However, use a recognized build system in <link
it can't know whether or not all those packages are buildable. xlink:href="https://hex.pm">Hex</link>. It can't be determined, however,
whether every package is buildable.
</para> </para>
<para> <para>
To make life easier for our users, it makes good sense to go To make life easier for our users, try to build every <link
ahead and attempt to build all those packages and remove the xlink:href="https://hex.pm">Hex</link> package and remove those that fail.
ones that don't build. To do that, simply run the command (in To do that, simply run the following command in the root of your
the root of your <literal>nixpkgs</literal> repository). that follows. <literal>nixpkgs</literal> repository:
</para> </para>
<programlisting> <programlisting>
$ nix-build -A beamPackages $ nix-build -A beamPackages
</programlisting> </programlisting>
<para> <para>
That will build every package in That will attempt to build every package in
<literal>beamPackages</literal>. Then you can go through and <literal>beamPackages</literal>. Then manually remove those that fail.
manually remove the ones that fail. Hopefully, someone will Hopefully, someone will improve <link
improve <literal>hex2nix</literal> in the future to automate xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> in the
that. future to automate the process.
</para> </para>
</section> </section>
</section> </section>

View File

@ -13,7 +13,7 @@ standard Go programs.
deis = buildGoPackage rec { deis = buildGoPackage rec {
name = "deis-${version}"; name = "deis-${version}";
version = "1.13.0"; version = "1.13.0";
goPackagePath = "github.com/deis/deis"; <co xml:id='ex-buildGoPackage-1' /> goPackagePath = "github.com/deis/deis"; <co xml:id='ex-buildGoPackage-1' />
subPackages = [ "client" ]; <co xml:id='ex-buildGoPackage-2' /> subPackages = [ "client" ]; <co xml:id='ex-buildGoPackage-2' />
@ -130,6 +130,9 @@ the following arguments are of special significance to the function:
</para> </para>
<para>To extract dependency information from a Go package in an automated way, use <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>.
It can produce a complete derivation and a <varname>goDeps</varname> file for Go programs.</para>
<para> <para>
<varname>buildGoPackage</varname> produces <xref linkend='chap-multiple-output' xrefstyle="select: title" /> <varname>buildGoPackage</varname> produces <xref linkend='chap-multiple-output' xrefstyle="select: title" />
where <varname>bin</varname> includes program binaries. You can test build a Go binary as follows: where <varname>bin</varname> includes program binaries. You can test build a Go binary as follows:
@ -160,7 +163,4 @@ done
</screen> </screen>
</para> </para>
<para>To extract dependency information from a Go package in automated way use <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>.
It can produce complete derivation and <varname>goDeps</varname> file for Go programs.</para>
</section> </section>

View File

@ -698,33 +698,6 @@ rm /nix/var/nix/manifests/*
rm /nix/var/nix/channel-cache/* rm /nix/var/nix/channel-cache/*
``` ```
### How to use the Haste Haskell-to-Javascript transpiler
Open a shell with `haste-compiler` and `haste-cabal-install` (you don't actually need
`node`, but it can be useful to test stuff):
```shell
nix-shell \
-p "haskellPackages.ghcWithPackages (self: with self; [haste-cabal-install haste-compiler])" \
-p nodejs
```
You may not need the following step but if `haste-boot` fails to compile all the
packages it needs, this might do the trick
```shell
haste-cabal update
```
`haste-boot` builds a set of core libraries so that they can be used from Javascript
transpiled programs:
```shell
haste-boot
```
Transpile and run a "Hello world" program:
```
$ echo 'module Main where main = putStrLn "Hello world"' > hello-world.hs
$ hastec --onexec hello-world.hs
$ node hello-world.js
Hello world
```
### Builds on Darwin fail with `math.h` not found ### Builds on Darwin fail with `math.h` not found
Users of GHC on Darwin have occasionally reported that builds fail, because the Users of GHC on Darwin have occasionally reported that builds fail, because the
@ -854,7 +827,7 @@ the work to be licensed" under the terms of the LGPL (including for free).
The LGPL licensing for GMP is a problem for the overall licensing of binary The LGPL licensing for GMP is a problem for the overall licensing of binary
programs compiled with GHC because most distributions (and builds) of GHC use programs compiled with GHC because most distributions (and builds) of GHC use
static libraries. (Dynamic libraries are currently distributed only for OS X.) static libraries. (Dynamic libraries are currently distributed only for macOS.)
The LGPL licensing situation may be worse: even though The LGPL licensing situation may be worse: even though
[The Glasgow Haskell Compiler License](https://www.haskell.org/ghc/license) [The Glasgow Haskell Compiler License](https://www.haskell.org/ghc/license)
is essentially a "free software" license (BSD3), according to is essentially a "free software" license (BSD3), according to
@ -912,14 +885,14 @@ nix-build -A haskell.packages.integer-simple.ghc802.scientific
- The *Journey into the Haskell NG infrastructure* series of postings - The *Journey into the Haskell NG infrastructure* series of postings
describe the new Haskell infrastructure in great detail: describe the new Haskell infrastructure in great detail:
- [Part 1](http://lists.science.uu.nl/pipermail/nix-dev/2015-January/015591.html) - [Part 1](https://nixos.org/nix-dev/2015-January/015591.html)
explains the differences between the old and the new code and gives explains the differences between the old and the new code and gives
instructions how to migrate to the new setup. instructions how to migrate to the new setup.
- [Part 2](http://lists.science.uu.nl/pipermail/nix-dev/2015-January/015608.html) - [Part 2](https://nixos.org/nix-dev/2015-January/015608.html)
looks in-depth at how to tweak and configure your setup by means of looks in-depth at how to tweak and configure your setup by means of
overrides. overrides.
- [Part 3](http://lists.science.uu.nl/pipermail/nix-dev/2015-April/016912.html) - [Part 3](https://nixos.org/nix-dev/2015-April/016912.html)
describes the infrastructure that keeps the Haskell package set in Nixpkgs describes the infrastructure that keeps the Haskell package set in Nixpkgs
up-to-date. up-to-date.

View File

@ -340,7 +340,7 @@ other packages we like to have in the environment, all specified with `propagate
Indeed, we can just add any package we like to have in our environment to `propagatedBuildInputs`. Indeed, we can just add any package we like to have in our environment to `propagatedBuildInputs`.
```nix ```nix
with import <nixpkgs>; with import <nixpkgs> {};
with pkgs.python35Packages; with pkgs.python35Packages;
buildPythonPackage rec { buildPythonPackage rec {
@ -423,7 +423,7 @@ and in this case the `python35` interpreter is automatically used.
### Interpreters ### Interpreters
Versions 2.7, 3.3, 3.4, 3.5 and 3.6 of the CPython interpreter are available as Versions 2.7, 3.3, 3.4, 3.5 and 3.6 of the CPython interpreter are available as
respectively `python27`, `python33`, `python34`, `python35` and `python36`. The PyPy interpreter respectively `python27`, `python34`, `python35` and `python36`. The PyPy interpreter
is available as `pypy`. The aliases `python2` and `python3` correspond to respectively `python27` and is available as `pypy`. The aliases `python2` and `python3` correspond to respectively `python27` and
`python35`. The default interpreter, `python`, maps to `python2`. `python35`. The default interpreter, `python`, maps to `python2`.
The Nix expressions for the interpreters can be found in The Nix expressions for the interpreters can be found in
@ -469,7 +469,6 @@ sets are
* `pkgs.python26Packages` * `pkgs.python26Packages`
* `pkgs.python27Packages` * `pkgs.python27Packages`
* `pkgs.python33Packages`
* `pkgs.python34Packages` * `pkgs.python34Packages`
* `pkgs.python35Packages` * `pkgs.python35Packages`
* `pkgs.python36Packages` * `pkgs.python36Packages`
@ -546,6 +545,35 @@ All parameters from `mkDerivation` function are still supported.
* `catchConflicts` If `true`, abort package build if a package name appears more than once in dependency tree. Default is `true`. * `catchConflicts` If `true`, abort package build if a package name appears more than once in dependency tree. Default is `true`.
* `checkInputs` Dependencies needed for running the `checkPhase`. These are added to `buildInputs` when `doCheck = true`. * `checkInputs` Dependencies needed for running the `checkPhase`. These are added to `buildInputs` when `doCheck = true`.
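
A short sketch (the package itself is hypothetical) showing `checkInputs` in use:

```nix
with import <nixpkgs> {};
with pkgs.python36Packages;

buildPythonPackage rec {
  version = "0.1.0";
  name = "my-package-${version}";   # hypothetical local package
  src = ./.;
  checkInputs = [ pytest ];         # added to buildInputs only when doCheck = true
  doCheck = true;
}
```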
##### Overriding Python packages
The `buildPythonPackage` function has an `overridePythonAttrs` method that
can be used to override the package. In the following example we create an
environment in which the `blaze` package uses an older version of `pandas`.
We first override the Python interpreter and pass
`packageOverrides`, which contains the overrides for packages in the package set.
```nix
with import <nixpkgs> {};
(let
python = let
packageOverrides = self: super: {
pandas = super.pandas.overridePythonAttrs(old: rec {
version = "0.19.1";
name = "pandas-${version}";
src = super.fetchPypi {
pname = "pandas";
inherit version;
sha256 = "08blshqj9zj1wyjhhw3kl2vas75vhhicvv72flvf1z3jvapgw295";
};
});
};
in pkgs.python3.override {inherit packageOverrides;};
in python.withPackages(ps: [ps.blaze])).env
```
#### `buildPythonApplication` function #### `buildPythonApplication` function
The `buildPythonApplication` function is practically the same as `buildPythonPackage`. The `buildPythonApplication` function is practically the same as `buildPythonPackage`.
@ -622,7 +650,7 @@ attribute. The `shell.nix` file from the previous section can thus be also writt
```nix ```nix
with import <nixpkgs> {}; with import <nixpkgs> {};
(python33.withPackages (ps: [ps.numpy ps.requests])).env (python36.withPackages (ps: [ps.numpy ps.requests])).env
``` ```
In contrast to `python.buildEnv`, `python.withPackages` does not support the more advanced options In contrast to `python.buildEnv`, `python.withPackages` does not support the more advanced options
@ -755,17 +783,17 @@ In the following example we rename the `pandas` package and build it.
```nix ```nix
with import <nixpkgs> {}; with import <nixpkgs> {};
let (let
python = let python = let
packageOverrides = self: super: { packageOverrides = self: super: {
pandas = super.pandas.override {name="foo";}; pandas = super.pandas.overridePythonAttrs(old: {name="foo";});
}; };
in pkgs.python35.override {inherit packageOverrides;}; in pkgs.python35.override {inherit packageOverrides;};
in python.pkgs.pandas in python.withPackages(ps: [ps.pandas])).env
``` ```
Using `nix-build` on this expression will build the package `pandas` Using `nix-build` on this expression will build an environment that contains the
but with the new name `foo`. package `pandas` but with the new name `foo`.
All packages in the package set will use the renamed package. All packages in the package set will use the renamed package.
A typical use case is to switch to another version of a certain package. A typical use case is to switch to another version of a certain package.

View File

@ -4,10 +4,14 @@
<title>Ruby</title> <title>Ruby</title>
<para>There currently is support to bundle applications that are packaged as Ruby gems. The utility "bundix" allows you to write a <filename>Gemfile</filename>, let bundler create a <filename>Gemfile.lock</filename>, and then convert <para>There currently is support to bundle applications that are packaged as
this into a nix expression that contains all Gem dependencies automatically.</para> Ruby gems. The utility "bundix" allows you to write a
<filename>Gemfile</filename>, let bundler create a
<filename>Gemfile.lock</filename>, and then convert this into a nix
expression that contains all Gem dependencies automatically.
</para>
<para>For example, to package sensu, we did:</para> <para>For example, to package sensu, we did:</para>
<screen> <screen>
<![CDATA[$ cd pkgs/servers/monitoring <![CDATA[$ cd pkgs/servers/monitoring
@ -16,7 +20,7 @@ $ cd sensu
$ cat > Gemfile $ cat > Gemfile
source 'https://rubygems.org' source 'https://rubygems.org'
gem 'sensu' gem 'sensu'
$ $(nix-build '<nixpkgs>' -A bundix)/bin/bundix --magic $ $(nix-build '<nixpkgs>' -A bundix --no-out-link)/bin/bundix --magic
$ cat > default.nix $ cat > default.nix
{ lib, bundlerEnv, ruby }: { lib, bundlerEnv, ruby }:
@ -38,15 +42,61 @@ bundlerEnv rec {
}]]> }]]>
</screen> </screen>
<para>Please check in the <filename>Gemfile</filename>, <filename>Gemfile.lock</filename> and the <filename>gemset.nix</filename> so future updates can be run easily. <para>Please check in the <filename>Gemfile</filename>,
<filename>Gemfile.lock</filename> and the
<filename>gemset.nix</filename> so future updates can be run easily.
</para> </para>
<para>Resulting derivations also have two helpful items, <literal>env</literal> and <literal>wrapper</literal>. The first one allows one to quickly drop into <para>For tools written in Ruby - i.e. where the desire is to install
<command>nix-shell</command> with the specified environment present. E.g. <command>nix-shell -A sensu.env</command> would give you an environment with Ruby preset a package and then execute e.g. <command>rake</command> at the command
so it has all the libraries necessary for <literal>sensu</literal> in its paths. The second one can be used to make derivations from custom Ruby scripts which have line, there is an alternative builder called <literal>bundlerApp</literal>.
<filename>Gemfile</filename>s with their dependencies specified. It is a derivation with <command>ruby</command> wrapped so it can find all the needed dependencies. Set up the <filename>gemset.nix</filename> the same way, and then, for
For example, to make a derivation <literal>my-script</literal> for a <filename>my-script.rb</filename> (which should be placed in <filename>bin</filename>) you should example:
run <command>bundix</command> as specified above and then use <literal>bundlerEnv</literal> like this:</para> </para>
<screen>
<![CDATA[{ lib, bundlerApp }:
bundlerApp {
pname = "corundum";
gemdir = ./.;
exes = [ "corundum-skel" ];
meta = with lib; {
description = "Tool and libraries for maintaining Ruby gems.";
homepage = https://github.com/nyarly/corundum;
license = licenses.mit;
maintainers = [ maintainers.nyarly ];
platforms = platforms.unix;
};
}]]>
</screen>
<para>The chief advantage of <literal>bundlerApp</literal> over
<literal>bundlerEnv</literal> is that the executables introduced into the
environment are precisely those selected in the <literal>exes</literal>
list, whereas <literal>bundlerEnv</literal> adds all the executables made
available by the gems in the gemset, which can mean e.g.
<command>rspec</command> or <command>rake</command> showing up in
unpredictable versions from various packages.
</para>
<para>Resulting derivations for both builders also have two helpful
attributes, <literal>env</literal> and <literal>wrappedRuby</literal>.
The first one allows one to quickly drop into
<command>nix-shell</command> with the specified environment present.
E.g. <command>nix-shell -A sensu.env</command> would give you an
environment with Ruby preset so it has all the libraries necessary
for <literal>sensu</literal> in its paths. The second one can be
used to make derivations from custom Ruby scripts which have
<filename>Gemfile</filename>s with their dependencies specified. It is
a derivation with <command>ruby</command> wrapped so it can find all
the needed dependencies. For example, to make a derivation
<literal>my-script</literal> for a <filename>my-script.rb</filename>
(which should be placed in <filename>bin</filename>) you should run
<command>bundix</command> as specified above and then use
<literal>bundlerEnv</literal> like this:
</para>
<programlisting> <programlisting>
<![CDATA[let env = bundlerEnv { <![CDATA[let env = bundlerEnv {
@ -60,13 +110,9 @@ run <command>bundix</command> as specified above and then use <literal>bundlerEn
in stdenv.mkDerivation { in stdenv.mkDerivation {
name = "my-script"; name = "my-script";
buildInputs = [ env.wrappedRuby ];
buildInputs = [ env.wrapper ];
script = ./my-script.rb; script = ./my-script.rb;
buildCommand = '' buildCommand = ''
mkdir -p $out/bin
install -D -m755 $script $out/bin/my-script install -D -m755 $script $out/bin/my-script
patchShebangs $out/bin/my-script patchShebangs $out/bin/my-script
''; '';
@ -74,4 +120,3 @@ in stdenv.mkDerivation {
</programlisting> </programlisting>
</section> </section>

View File

@ -8,15 +8,48 @@ date: 2016-06-25
You'll get a vim(-your-suffix) in PATH also loading the plugins you want. You'll get a vim(-your-suffix) in PATH also loading the plugins you want.
Loading can be deferred; see examples. Loading can be deferred; see examples.
VAM (=vim-addon-manager) and Pathogen plugin managers are supported. Vim packages, VAM (=vim-addon-manager) and Pathogen are supported to load
Vundle, NeoBundle could be your turn. packages.
## dependencies by Vim plugins ## Custom configuration
Adding custom .vimrc lines can be done using the following code:
```
vim_configurable.customize {
name = "vim-with-plugins";
vimrcConfig.customRC = ''
set hidden
'';
}
```
## Vim packages
To store your plugins in Vim packages, the following example can be used:
```
vim_configurable.customize {
vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
# loaded on launch
start = [ youcompleteme fugitive ];
# manually loadable by calling `:packadd $plugin-name`
opt = [ phpCompletion elm-vim ];
# To automatically load a plugin when opening a filetype, add vimrc lines like:
# autocmd FileType php :packadd phpCompletion
}
};
```
## VAM
### dependencies by Vim plugins
VAM introduced .json files supporting dependencies without versioning VAM introduced .json files supporting dependencies without versioning
assuming that "using latest version" is ok most of the time. assuming that "using latest version" is ok most of the time.
## HOWTO ### Example
First create a vim-scripts file having one plugin name per line. Example: First create a vim-scripts file having one plugin name per line. Example:

View File

@ -73,7 +73,7 @@
<varlistentry><term><varname> <varlistentry><term><varname>
$outputMan</varname></term><listitem><para> $outputMan</varname></term><listitem><para>
is for man pages (except for section 3). They go to <varname>man</varname> or <varname>doc</varname> or <varname>$outputBin</varname> by default. is for man pages (except for section 3). They go to <varname>man</varname> or <varname>$outputBin</varname> by default.
</para></listitem></varlistentry> </para></listitem></varlistentry>
<varlistentry><term><varname> <varlistentry><term><varname>
@ -83,7 +83,7 @@
<varlistentry><term><varname> <varlistentry><term><varname>
$outputInfo</varname></term><listitem><para> $outputInfo</varname></term><listitem><para>
is for info pages. They go to <varname>info</varname> or <varname>doc</varname> or <varname>$outputMan</varname> by default. is for info pages. They go to <varname>info</varname> or <varname>$outputBin</varname> by default.
</para></listitem></varlistentry> </para></listitem></varlistentry>
</variablelist> </variablelist>

View File

@ -8,59 +8,88 @@
overlays. Overlays are used to add layers in the fix-point used by Nixpkgs overlays. Overlays are used to add layers in the fix-point used by Nixpkgs
to compose the set of all packages.</para> to compose the set of all packages.</para>
<para>Nixpkgs can be configured with a list of overlays, which are
applied in order. This means that the order of the overlays can be significant
if multiple layers override the same package.</para>
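<para>
For instance (a sketch with made-up overlays), if two overlays both
override <literal>hello</literal>, the later one wins because it sees the
earlier one's result through <varname>super</varname>:
</para>
<programlisting>
import &lt;nixpkgs> {
  overlays = [
    (self: super: { hello = super.hello.overrideAttrs (old: { name = "hello-first"; }); })
    (self: super: { hello = super.hello.overrideAttrs (old: { name = "hello-second"; }); })
  ];
}
</programlisting>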
<!--============================================================--> <!--============================================================-->
<section xml:id="sec-overlays-install"> <section xml:id="sec-overlays-install">
<title>Installing Overlays</title> <title>Installing overlays</title>
<para>The set of overlays is looked for in the following places. The <para>The list of overlays is determined as follows.</para>
first one present is considered, and all the rest are ignored:
<para>If the <varname>overlays</varname> argument is not provided explicitly, we look for overlays in a path. The path
is determined as follows:
<orderedlist> <orderedlist>
<listitem> <listitem>
<para>First, if an <varname>overlays</varname> argument to the nixpkgs function itself is given,
then that is used.</para>
<para>As an argument of the imported attribute set. When importing Nixpkgs, <para>This can be passed explicitly when importing nixpkgs, for example
the <varname>overlays</varname> attribute argument can be set to a list of <literal>import &lt;nixpkgs> { overlays = [ overlay1 overlay2 ]; }</literal>.</para>
functions, which is described in <xref linkend="sec-overlays-layout"/>.</para>
</listitem> </listitem>
<listitem> <listitem>
<para>Otherwise, if the Nix path entry <literal>&lt;nixpkgs-overlays></literal> exists, we look for overlays
at that path, as described below.</para>
<para>In the directory pointed to by the Nix search path entry <para>See the section on <literal>NIX_PATH</literal> in the Nix manual for more details on how to
<literal>&lt;nixpkgs-overlays></literal>.</para> set a value for <literal>&lt;nixpkgs-overlays>.</literal></para>
</listitem> </listitem>
<listitem> <listitem>
<para>If one of <filename>~/.config/nixpkgs/overlays.nix</filename> and
<para>In the directory <filename>~/.config/nixpkgs/overlays/</filename>.</para> <filename>~/.config/nixpkgs/overlays/</filename> exists, then we look for overlays at that path, as
described below. It is an error if both exist.</para>
</listitem> </listitem>
</orderedlist> </orderedlist>
</para> </para>
<para>For the second and third options, the directory should contain Nix expressions defining the <para>If we are looking for overlays at a path, then there are two cases:
overlays. Each overlay can be a file, a directory containing a <itemizedlist>
<filename>default.nix</filename>, or a symlink to one of those. The expressions should follow <listitem>
the syntax described in <xref linkend="sec-overlays-layout"/>.</para> <para>If the path is a file, then the file is imported as a Nix expression and used as the list of
overlays.</para>
</listitem>
<para>The order of the overlay layers can influence the recipe of packages if multiple layers override <listitem>
the same recipe. In the case where overlays are loaded from a directory, they are loaded in <para>If the path is a directory, then we take the content of the directory, order it
alphabetical order.</para> lexicographically, and attempt to interpret each as an overlay by:
<itemizedlist>
<listitem>
<para>Importing the file, if it is a <literal>.nix</literal> file.</para>
</listitem>
<listitem>
<para>Importing a top-level <filename>default.nix</filename> file, if it is a directory.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</para>
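<para>For instance (the attribute added below is only illustrative), a file used this way
simply evaluates to a list of overlay functions:</para>
<programlisting>
# ~/.config/nixpkgs/overlays.nix
[
  (self: super: {
    myHello = super.hello;   # illustrative extra attribute
  })
]
</programlisting>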
<para>On a NixOS system the value of the <literal>nixpkgs.overlays</literal> option, if present,
is passed to the system Nixpkgs directly as an argument. Note that this does not affect the overlays for
non-NixOS operations (e.g. <literal>nix-env</literal>), which are looked up independently.</para>
<para>The <filename>overlays.nix</filename> option therefore provides a convenient way to use the same
overlays for a NixOS system configuration and user configuration: the same file can be used
as <filename>overlays.nix</filename> and imported as the value of <literal>nixpkgs.overlays</literal>.</para>
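<para>A sketch of that wiring in <filename>configuration.nix</filename> (the path below is illustrative):</para>
<programlisting>
{
  # reuse the user's overlays.nix for the system configuration
  nixpkgs.overlays = import /home/alice/.config/nixpkgs/overlays.nix;
}
</programlisting>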
</section>
<!--============================================================-->
<section xml:id="sec-overlays-definition">
<title>Defining overlays</title>
<para>Overlays are Nix functions which accept two arguments,
conventionally called <varname>self</varname> and <varname>super</varname>,
and return a set of packages. For example, the following is a valid overlay.</para>
<programlisting>
self: super:
@ -75,25 +104,31 @@ self: super:
}
</programlisting>
<para>The first argument (<varname>self</varname>) corresponds to the final package
set. You should use this set for the dependencies of all packages specified in your
overlay. For example, all the dependencies of <varname>rr</varname> in the example above come
from <varname>self</varname>, as well as the overridden dependencies used in the
<varname>boost</varname> override.</para>
<para>The second argument (<varname>super</varname>)
corresponds to the result of the evaluation of the previous stages of
Nixpkgs. It does not contain any of the packages added by the current
overlay, nor any of the following overlays. This set should be used either
to refer to packages you wish to override, or to access functions defined
in Nixpkgs. For example, the original recipe of <varname>boost</varname>
in the above example, comes from <varname>super</varname>, as well as the
<varname>callPackage</varname> function.</para>
<para>The value returned by this function should be a set similar to
<filename>pkgs/top-level/all-packages.nix</filename>, containing
overridden and/or new packages.</para>
<para>Overlays are similar to other methods for customizing Nixpkgs, in particular
the <literal>packageOverrides</literal> attribute described in <xref linkend="sec-modify-via-packageOverrides"/>.
Indeed, <literal>packageOverrides</literal> acts as an overlay with only the
<varname>super</varname> argument. It is therefore appropriate for basic use,
but overlays are more powerful and easier to distribute.</para>
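<para>As a sketch of that correspondence (the attribute and file names here are made up),
the same new package can be introduced either way:</para>
<programlisting>
# In ~/.config/nixpkgs/config.nix, as packageOverrides
# (only the previous package set is available):
{
  packageOverrides = super: {
    myTool = super.callPackage ./my-tool.nix { };
  };
}

# As an overlay:
self: super: {
  myTool = super.callPackage ./my-tool.nix { };
}
</programlisting>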
</section>
</chapter>
View File
@ -366,15 +366,33 @@ it. Place the resulting <filename>package.nix</filename> file into
</section>
<section xml:id="sec-shell-helpers">
<title>Interactive shell helpers</title>
<para>
Some packages provide shell integration that makes them more useful. But
unlike other systems, Nix doesn't have a standard share directory
location. This is why a number of <command>PACKAGE-share</command>
scripts are shipped that print the location of the corresponding
shared folder.
The current list of such packages is:
<itemizedlist>
<listitem>
<para>
<literal>autojump</literal>: <command>autojump-share</command>
</para>
</listitem>
<listitem>
<para>
<literal>fzf</literal>: <command>fzf-share</command>
</para>
</listitem>
</itemizedlist>
E.g. <literal>autojump</literal> can then be used in the .bashrc like this:
<screen>
source "$(autojump-share)/autojump.bash"
</screen>
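Similarly, <literal>fzf</literal>'s shell integration can be sourced from the folder printed
by <command>fzf-share</command> (a sketch; the exact file name below is an assumption, check
the contents of that folder):
<screen>
source "$(fzf-share)/key-bindings.bash"
</screen>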
View File
@ -212,7 +212,7 @@ $ nix-env -f . -iA libfoo</screen>
<listitem>
<para>Optionally commit the new package and open a pull request, or send a patch to
<literal>https://groups.google.com/forum/#!forum/nix-devel</literal>.</para>
</listitem>
View File
@ -1,3 +1,4 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-stdenv">
@ -1153,7 +1154,7 @@ makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello
</listitem>
</varlistentry>
<varlistentry xml:id='fun-substitute'>
<term><function>substitute</function>
@ -1312,7 +1313,7 @@ someVar=$(stripHash $name)
</para></listitem>
</varlistentry>
<varlistentry xml:id='fun-wrapProgram'>
<term><function>wrapProgram</function>
@ -1342,12 +1343,34 @@ someVar=$(stripHash $name)
<variablelist>
<varlistentry>
<term>CC Wrapper</term>
<listitem>
<para>
CC Wrapper wraps a C toolchain for a bunch of miscellaneous purposes.
Specifically, a C compiler (GCC or Clang), Binutils (or the CCTools + binutils mashup when targeting Darwin), and a C standard library (glibc or Darwin's libSystem) are all fed in, and dependency finding, hardening (see below), and purity checks for each are handled by CC Wrapper.
Packages typically depend on only CC Wrapper, instead of those 3 inputs directly.
</para>
<para>
Dependency finding is undoubtedly the main task of CC wrapper.
It is currently accomplished by collecting directories of host-platform dependencies (i.e. <varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>) in environment variables.
CC wrapper's setup hook causes any <filename>include</filename> subdirectory of such a dependency to be added to <envar>NIX_CFLAGS_COMPILE</envar>, and any <filename>lib</filename> and <filename>lib64</filename> subdirectories to <envar>NIX_LDFLAGS</envar>.
The setup hook itself contains some lengthy comments describing the exact convoluted mechanism by which this is accomplished.
</para>
<para>
A final task of the setup hook is defining a number of standard environment variables to tell build systems which executables fulfill which purpose.
They are defined to just be the base name of the tools, under the assumption that CC Wrapper's binaries will be on the path.
Firstly, this helps poorly-written packages, e.g. ones that look for just <command>gcc</command> when <envar>CC</envar> isn't defined even though <command>clang</command> is to be used.
Secondly, this helps packages not get confused when cross-compiling, in which case multiple CC wrappers may be in use simultaneously (targeting different platforms).
<envar>BUILD_</envar>- and <envar>TARGET_</envar>-prefixed versions of the normal environment variables are defined for the additional CC Wrappers, properly disambiguating them.
</para>
<para>
A problem with this final task is that CC Wrapper is honest and defines <envar>LD</envar> as <command>ld</command>.
Most packages, however, firstly use the C compiler for linking, secondly use <envar>LD</envar> anyway, defining it as the C compiler, and thirdly, only define <envar>LD</envar> themselves when it is undefined, as a fallback.
This triple threat means CC Wrapper will break those packages, as <envar>LD</envar> is already defined as the actual linker, which the package won't override yet doesn't want to use.
The workaround is to define, just for the problematic package, <envar>LD</envar> as the C compiler.
A good way to do this would be <command>preConfigure = "LD=$CC"</command>.
</para>
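<para>
As an illustrative sketch (the package below is hypothetical), the workaround can be applied to just the affected package like this:
</para>
<programlisting>
# someBrokenPackage is a made-up name; apply the override only where needed
someBrokenPackage.overrideAttrs (oldAttrs: {
  preConfigure = "LD=$CC";
})
</programlisting>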
</listitem>
</varlistentry>
<varlistentry>
View File
@ -51,6 +51,24 @@ rec {
else { }));
/* `makeOverridable` takes a function from attribute set to attribute set and
injects an `override` attribute which can be used to override arguments of
the function.
nix-repl> x = {a, b}: { result = a + b; }
nix-repl> y = lib.makeOverridable x { a = 1; b = 2; }
nix-repl> y
{ override = «lambda»; overrideDerivation = «lambda»; result = 3; }
nix-repl> y.override { a = 10; }
{ override = «lambda»; overrideDerivation = «lambda»; result = 12; }
Please refer to "Nixpkgs Contributors Guide" section
"<pkg>.overrideDerivation" to learn about `overrideDerivation` and caveats
related to its use.
*/
makeOverridable = f: origArgs:
let
ff = f origArgs;
View File
@ -20,8 +20,32 @@ rec {
traceXMLValMarked = str: x: trace (str + builtins.toXML x) x;
# strict trace functions (traced structure is fully evaluated and printed)
/* `builtins.trace`, but the value is `builtins.deepSeq`ed first. */
traceSeq = x: y: trace (builtins.deepSeq x x) y;
/* Like `traceSeq`, but only down to depth n.
* This is very useful because lots of `traceSeq` usages
* lead to an infinite recursion.
*/
traceSeqN = depth: x: y: with lib;
let snip = v: if isList v then noQuotes "[]" v
else if isAttrs v then noQuotes "{}" v
else v;
noQuotes = str: v: { __pretty = const str; val = v; };
modify = n: fn: v: if (n == 0) then fn v
else if isList v then map (modify (n - 1) fn) v
else if isAttrs v then mapAttrs
(const (modify (n - 1) fn)) v
else v;
in trace (generators.toPretty { allowPrettyValues = true; }
(modify depth snip x)) y;
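/* Illustrative usage (a sketch, not part of the original file):
     traceSeqN 2 { a.b.c = 3; } "result"
   prints the attribute set only two levels deep (anything deeper is
   rendered as `{}` or `[]`) and then returns "result". */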
/* `traceSeq`, but the same value is traced and returned */
traceValSeq = v: traceVal (builtins.deepSeq v v);
/* `traceValSeq` but with fixed depth */
traceValSeqN = depth: v: traceSeqN depth v v;
# this can help debug your code as well - designed to not produce thousands of lines
traceShowVal = x: trace (showVal x) x;
View File
@ -309,48 +309,6 @@ rec {
mergeAttrsByFuncDefaults = foldl mergeAttrByFunc { inherit mergeAttrBy; };
mergeAttrsByFuncDefaultsClean = list: removeAttrs (mergeAttrsByFuncDefaults list) ["mergeAttrBy"];
# merge attrs based on version key into mkDerivation args, see mergeAttrBy to learn about smart merge defaults
#
# This function is best explained by an example:
#
# {version ? "2.x"}:
#
# mkDerivation (mergeAttrsByVersion "package-name" version
# { # version specific settings
# "git" = { src = ..; preConfigre = "autogen.sh"; buildInputs = [automake autoconf libtool]; };
# "2.x" = { src = ..; };
# }
# { // shared settings
# buildInputs = [ common build inputs ];
# meta = { .. }
# }
# )
#
# Please note that e.g. Eelco Dolstra usually prefers having one file for
# each version. On the other hand there are valuable additional design goals
# - readability
# - do it once only
# - try to avoid duplication
#
# Marc Weber and Michael Raskin sometimes prefer keeping older
# versions around for testing and regression tests - as long as its cheap to
# do so.
#
# Very often it just happens that the "shared" code is the bigger part.
# Then using this function might be appropriate.
#
# Be aware that its easy to cause recompilations in all versions when using
# this function - also if derivations get too complex splitting into multiple
# files is the way to go.
#
# See misc.nix -> versionedDerivation
# discussion: nixpkgs: pull/310
mergeAttrsByVersion = name: version: attrsByVersion: base:
mergeAttrsByFuncDefaultsClean [ { name = "${name}-${version}"; }
base
(maybeAttr version (throw "bad version ${version} for ${name}") attrsByVersion)
];
# sane defaults (same name as attr name so that inherit can be used)
mergeAttrBy = # { buildInputs = concatList; [...]; passthru = mergeAttr; [..]; }
listToAttrs (map (n: nameValuePair n lib.concat)
@ -423,4 +381,12 @@ rec {
else if isInt x then "int"
else "string";
/* deprecated:
For historical reasons, imap has an index starting at 1.
But for consistency with the rest of the library we want an index
starting at zero.
*/
imap = imap1;
}
View File
@ -90,4 +90,41 @@ rec {
* parsers as well.
*/
toYAML = {}@args: toJSON args;
/* Pretty print a value, akin to `builtins.trace`.
* Should probably be a builtin as well.
*/
toPretty = {
/* If this option is true, attrsets like { __pretty = fn; val = ; }
will use fn to convert val to a pretty printed representation.
(This means fn is type Val -> String.) */
allowPrettyValues ? false
}@args: v: with builtins;
if isInt v then toString v
else if isBool v then (if v == true then "true" else "false")
else if isString v then "\"" + v + "\""
else if null == v then "null"
else if isFunction v then
let fna = functionArgs v;
showFnas = concatStringsSep "," (libAttr.mapAttrsToList
(name: hasDefVal: if hasDefVal then "(${name})" else name)
fna);
in if fna == {} then "<λ>"
else "<λ:{${showFnas}}>"
else if isList v then "[ "
+ libStr.concatMapStringsSep " " (toPretty args) v
+ " ]"
else if isAttrs v then
# apply pretty values if allowed
if attrNames v == [ "__pretty" "val" ] && allowPrettyValues
then v.__pretty v.val
# TODO: there is probably a better representation?
else if v ? type && v.type == "derivation" then "<δ>"
else "{ "
+ libStr.concatStringsSep " " (libAttr.mapAttrsToList
(name: value:
"${toPretty args name} = ${toPretty args value};") v)
+ " }"
else "toPretty: should never happen (v = ${v})";
}
View File
@ -546,12 +546,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "zlib License"; fullName = "zlib License";
}; };
zpt20 = spdx { # FIXME: why zpt* instead of zpl* zpl20 = spdx {
spdxId = "ZPL-2.0"; spdxId = "ZPL-2.0";
fullName = "Zope Public License 2.0"; fullName = "Zope Public License 2.0";
}; };
zpt21 = spdx { zpl21 = spdx {
spdxId = "ZPL-2.1"; spdxId = "ZPL-2.1";
fullName = "Zope Public License 2.1"; fullName = "Zope Public License 2.1";
}; };
View File
@ -77,15 +77,21 @@ rec {
*/
foldl' = builtins.foldl' or foldl;
/* Map with index starting from 0
Example:
imap0 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-0" "b-1" ]
*/
imap0 = f: list: genList (n: f n (elemAt list n)) (length list);
/* Map with index starting from 1
Example:
imap1 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-1" "b-2" ] => [ "a-1" "b-2" ]
*/ */
imap = f: list: genList (n: f (n + 1) (elemAt list n)) (length list); imap1 = f: list: genList (n: f (n + 1) (elemAt list n)) (length list);
/* Map and concatenate the result. /* Map and concatenate the result.
@ -471,4 +477,12 @@ rec {
*/
subtractLists = e: filter (x: !(elem x e));
/* Test if two lists have no common element.
It should be slightly more efficient than (intersectLists a b == [])
*/
mutuallyExclusive = a: b:
(builtins.length a) == 0 ||
(!(builtins.elem (builtins.head a) b) &&
mutuallyExclusive (builtins.tail a) b);
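/* Illustrative usage (not part of the original file):
     mutuallyExclusive [ 1 2 ] [ 3 4 ]  =>  true
     mutuallyExclusive [ 1 2 ] [ 2 3 ]  =>  false */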
}
View File
@ -16,6 +16,7 @@
acowley = "Anthony Cowley <acowley@gmail.com>"; acowley = "Anthony Cowley <acowley@gmail.com>";
adelbertc = "Adelbert Chang <adelbertc@gmail.com>"; adelbertc = "Adelbert Chang <adelbertc@gmail.com>";
adev = "Adrien Devresse <adev@adev.name>"; adev = "Adrien Devresse <adev@adev.name>";
adisbladis = "Adam Hose <adis@blad.is>";
Adjective-Object = "Maxwell Huang-Hobbs <mhuan13@gmail.com>"; Adjective-Object = "Maxwell Huang-Hobbs <mhuan13@gmail.com>";
adnelson = "Allen Nelson <ithinkican@gmail.com>"; adnelson = "Allen Nelson <ithinkican@gmail.com>";
adolfogc = "Adolfo E. García Castro <adolfo.garcia.cr@gmail.com>"; adolfogc = "Adolfo E. García Castro <adolfo.garcia.cr@gmail.com>";
@ -42,6 +43,7 @@
andrewrk = "Andrew Kelley <superjoe30@gmail.com>"; andrewrk = "Andrew Kelley <superjoe30@gmail.com>";
andsild = "Anders Sildnes <andsild@gmail.com>"; andsild = "Anders Sildnes <andsild@gmail.com>";
aneeshusa = "Aneesh Agrawal <aneeshusa@gmail.com>"; aneeshusa = "Aneesh Agrawal <aneeshusa@gmail.com>";
ankhers = "Justin Wood <justin.k.wood@gmail.com>";
antono = "Antono Vasiljev <self@antono.info>"; antono = "Antono Vasiljev <self@antono.info>";
apeschar = "Albert Peschar <albert@peschar.net>"; apeschar = "Albert Peschar <albert@peschar.net>";
apeyroux = "Alexandre Peyroux <alex@px.io>"; apeyroux = "Alexandre Peyroux <alex@px.io>";
@ -61,6 +63,7 @@
bachp = "Pascal Bach <pascal.bach@nextrem.ch>"; bachp = "Pascal Bach <pascal.bach@nextrem.ch>";
badi = "Badi' Abdul-Wahid <abdulwahidc@gmail.com>"; badi = "Badi' Abdul-Wahid <abdulwahidc@gmail.com>";
balajisivaraman = "Balaji Sivaraman<sivaraman.balaji@gmail.com>"; balajisivaraman = "Balaji Sivaraman<sivaraman.balaji@gmail.com>";
barrucadu = "Michael Walker <mike@barrucadu.co.uk>";
basvandijk = "Bas van Dijk <v.dijk.bas@gmail.com>"; basvandijk = "Bas van Dijk <v.dijk.bas@gmail.com>";
Baughn = "Svein Ove Aas <sveina@gmail.com>"; Baughn = "Svein Ove Aas <sveina@gmail.com>";
bcarrell = "Brandon Carrell <brandoncarrell@gmail.com>"; bcarrell = "Brandon Carrell <brandoncarrell@gmail.com>";
@ -72,6 +75,7 @@
berdario = "Dario Bertini <berdario@gmail.com>"; berdario = "Dario Bertini <berdario@gmail.com>";
bergey = "Daniel Bergey <bergey@teallabs.org>"; bergey = "Daniel Bergey <bergey@teallabs.org>";
bhipple = "Benjamin Hipple <bhipple@protonmail.com>"; bhipple = "Benjamin Hipple <bhipple@protonmail.com>";
binarin = "Alexey Lebedeff <binarin@binarin.ru>";
bjg = "Brian Gough <bjg@gnu.org>"; bjg = "Brian Gough <bjg@gnu.org>";
bjornfor = "Bjørn Forsman <bjorn.forsman@gmail.com>"; bjornfor = "Bjørn Forsman <bjorn.forsman@gmail.com>";
bluescreen303 = "Mathijs Kwik <mathijs@bluescreen303.nl>"; bluescreen303 = "Mathijs Kwik <mathijs@bluescreen303.nl>";
@ -90,6 +94,7 @@
campadrenalin = "Philip Horger <campadrenalin@gmail.com>"; campadrenalin = "Philip Horger <campadrenalin@gmail.com>";
canndrew = "Andrew Cann <shum@canndrew.org>"; canndrew = "Andrew Cann <shum@canndrew.org>";
carlsverre = "Carl Sverre <accounts@carlsverre.com>"; carlsverre = "Carl Sverre <accounts@carlsverre.com>";
casey = "Casey Rodarmor <casey@rodarmor.net>";
cdepillabout = "Dennis Gosnell <cdep.illabout@gmail.com>"; cdepillabout = "Dennis Gosnell <cdep.illabout@gmail.com>";
cfouche = "Chaddaï Fouché <chaddai.fouche@gmail.com>"; cfouche = "Chaddaï Fouché <chaddai.fouche@gmail.com>";
changlinli = "Changlin Li <mail@changlinli.com>"; changlinli = "Changlin Li <mail@changlinli.com>";
@ -132,6 +137,7 @@
dbrock = "Daniel Brockman <daniel@brockman.se>"; dbrock = "Daniel Brockman <daniel@brockman.se>";
deepfire = "Kosyrev Serge <_deepfire@feelingofgreen.ru>"; deepfire = "Kosyrev Serge <_deepfire@feelingofgreen.ru>";
demin-dmitriy = "Dmitriy Demin <demindf@gmail.com>"; demin-dmitriy = "Dmitriy Demin <demindf@gmail.com>";
derchris = "Christian Gerbrandt <derchris@me.com>";
DerGuteMoritz = "Moritz Heidkamp <moritz@twoticketsplease.de>"; DerGuteMoritz = "Moritz Heidkamp <moritz@twoticketsplease.de>";
dermetfan = "Robin Stumm <serverkorken@gmail.com>"; dermetfan = "Robin Stumm <serverkorken@gmail.com>";
DerTim1 = "Tim Digel <tim.digel@active-group.de>"; DerTim1 = "Tim Digel <tim.digel@active-group.de>";
@ -141,17 +147,20 @@
dfoxfranke = "Daniel Fox Franke <dfoxfranke@gmail.com>"; dfoxfranke = "Daniel Fox Franke <dfoxfranke@gmail.com>";
dgonyeo = "Derek Gonyeo <derek@gonyeo.com>"; dgonyeo = "Derek Gonyeo <derek@gonyeo.com>";
dipinhora = "Dipin Hora <dipinhora+github@gmail.com>"; dipinhora = "Dipin Hora <dipinhora+github@gmail.com>";
disassembler = "Samuel Leathers <disasm@gmail.com>";
dmalikov = "Dmitry Malikov <malikov.d.y@gmail.com>"; dmalikov = "Dmitry Malikov <malikov.d.y@gmail.com>";
DmitryTsygankov = "Dmitry Tsygankov <dmitry.tsygankov@gmail.com>"; DmitryTsygankov = "Dmitry Tsygankov <dmitry.tsygankov@gmail.com>";
dmjio = "David Johnson <djohnson.m@gmail.com>"; dmjio = "David Johnson <djohnson.m@gmail.com>";
dochang = "Desmond O. Chang <dochang@gmail.com>"; dochang = "Desmond O. Chang <dochang@gmail.com>";
domenkozar = "Domen Kozar <domen@dev.si>"; domenkozar = "Domen Kozar <domen@dev.si>";
dotlambda = "Robert Schütz <rschuetz17@gmail.com>";
doublec = "Chris Double <chris.double@double.co.nz>"; doublec = "Chris Double <chris.double@double.co.nz>";
dpaetzel = "David Pätzel <david.a.paetzel@gmail.com>"; dpaetzel = "David Pätzel <david.a.paetzel@gmail.com>";
drets = "Dmytro Rets <dmitryrets@gmail.com>"; drets = "Dmytro Rets <dmitryrets@gmail.com>";
drewkett = "Andrew Burkett <burkett.andrew@gmail.com>"; drewkett = "Andrew Burkett <burkett.andrew@gmail.com>";
dsferruzza = "David Sferruzza <david.sferruzza@gmail.com>"; dsferruzza = "David Sferruzza <david.sferruzza@gmail.com>";
dtzWill = "Will Dietz <nix@wdtz.org>"; dtzWill = "Will Dietz <nix@wdtz.org>";
dywedir = "Vladyslav M. <dywedir@protonmail.ch>";
e-user = "Alexander Kahl <nixos@sodosopa.io>"; e-user = "Alexander Kahl <nixos@sodosopa.io>";
ebzzry = "Rommel Martinez <ebzzry@gmail.com>"; ebzzry = "Rommel Martinez <ebzzry@gmail.com>";
edanaher = "Evan Danaher <nixos@edanaher.net>"; edanaher = "Evan Danaher <nixos@edanaher.net>";
@ -166,6 +175,7 @@
ekleog = "Leo Gaspard <leo@gaspard.io>"; ekleog = "Leo Gaspard <leo@gaspard.io>";
elasticdog = "Aaron Bull Schaefer <aaron@elasticdog.com>"; elasticdog = "Aaron Bull Schaefer <aaron@elasticdog.com>";
eleanor = "Dejan Lukan <dejan@proteansec.com>"; eleanor = "Dejan Lukan <dejan@proteansec.com>";
elijahcaine = "Elijah Caine <elijahcainemv@gmail.com>";
elitak = "Eric Litak <elitak@gmail.com>"; elitak = "Eric Litak <elitak@gmail.com>";
ellis = "Ellis Whitehead <nixos@ellisw.net>"; ellis = "Ellis Whitehead <nixos@ellisw.net>";
eperuffo = "Emanuele Peruffo <info@emanueleperuffo.com>"; eperuffo = "Emanuele Peruffo <info@emanueleperuffo.com>";
@ -181,6 +191,7 @@
fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>"; fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>";
fare = "Francois-Rene Rideau <fahree@gmail.com>"; fare = "Francois-Rene Rideau <fahree@gmail.com>";
falsifian = "James Cook <james.cook@utoronto.ca>"; falsifian = "James Cook <james.cook@utoronto.ca>";
florianjacob = "Florian Jacob <projects+nixos@florianjacob.de>";
flosse = "Markus Kohlhase <mail@markus-kohlhase.de>"; flosse = "Markus Kohlhase <mail@markus-kohlhase.de>";
fluffynukeit = "Daniel Austin <dan@fluffynukeit.com>"; fluffynukeit = "Daniel Austin <dan@fluffynukeit.com>";
fmthoma = "Franz Thoma <f.m.thoma@googlemail.com>"; fmthoma = "Franz Thoma <f.m.thoma@googlemail.com>";
@ -205,6 +216,7 @@
gilligan = "Tobias Pflug <tobias.pflug@gmail.com>"; gilligan = "Tobias Pflug <tobias.pflug@gmail.com>";
giogadi = "Luis G. Torres <lgtorres42@gmail.com>"; giogadi = "Luis G. Torres <lgtorres42@gmail.com>";
gleber = "Gleb Peregud <gleber.p@gmail.com>"; gleber = "Gleb Peregud <gleber.p@gmail.com>";
glenns = "Glenn Searby <glenn.searby@gmail.com>";
globin = "Robin Gloster <mail@glob.in>"; globin = "Robin Gloster <mail@glob.in>";
gnidorah = "Alex Ivanov <yourbestfriend@opmbx.org>"; gnidorah = "Alex Ivanov <yourbestfriend@opmbx.org>";
goibhniu = "Cillian de Róiste <cillian.deroiste@gmail.com>"; goibhniu = "Cillian de Róiste <cillian.deroiste@gmail.com>";
@ -212,6 +224,7 @@
goodrone = "Andrew Trachenko <goodrone@gmail.com>"; goodrone = "Andrew Trachenko <goodrone@gmail.com>";
gpyh = "Yacine Hmito <yacine.hmito@gmail.com>"; gpyh = "Yacine Hmito <yacine.hmito@gmail.com>";
grahamc = "Graham Christensen <graham@grahamc.com>"; grahamc = "Graham Christensen <graham@grahamc.com>";
grburst = "Julius Elias <grburst@openmailbox.org>";
gridaphobe = "Eric Seidel <eric@seidel.io>"; gridaphobe = "Eric Seidel <eric@seidel.io>";
guibert = "David Guibert <david.guibert@gmail.com>"; guibert = "David Guibert <david.guibert@gmail.com>";
guillaumekoenig = "Guillaume Koenig <guillaume.edward.koenig@gmail.com>"; guillaumekoenig = "Guillaume Koenig <guillaume.edward.koenig@gmail.com>";
@ -220,8 +233,10 @@
havvy = "Ryan Scheel <ryan.havvy@gmail.com>"; havvy = "Ryan Scheel <ryan.havvy@gmail.com>";
hbunke = "Hendrik Bunke <bunke.hendrik@gmail.com>"; hbunke = "Hendrik Bunke <bunke.hendrik@gmail.com>";
hce = "Hans-Christian Esperer <hc@hcesperer.org>"; hce = "Hans-Christian Esperer <hc@hcesperer.org>";
hectorj = "Hector Jusforgues <hector.jusforgues+nixos@gmail.com>";
heel = "Sergii Paryzhskyi <parizhskiy@gmail.com>"; heel = "Sergii Paryzhskyi <parizhskiy@gmail.com>";
henrytill = "Henry Till <henrytill@gmail.com>"; henrytill = "Henry Till <henrytill@gmail.com>";
hhm = "hhm <heehooman+nixpkgs@gmail.com>";
hinton = "Tom Hinton <t@larkery.com>"; hinton = "Tom Hinton <t@larkery.com>";
hodapp = "Chris Hodapp <hodapp87@gmail.com>"; hodapp = "Chris Hodapp <hodapp87@gmail.com>";
hrdinka = "Christoph Hrdinka <c.nix@hrdinka.at>"; hrdinka = "Christoph Hrdinka <c.nix@hrdinka.at>";
@ -237,7 +252,7 @@
jammerful = "jammerful <jammerful@gmail.com>"; jammerful = "jammerful <jammerful@gmail.com>";
jansol = "Jan Solanti <jan.solanti@paivola.fi>"; jansol = "Jan Solanti <jan.solanti@paivola.fi>";
javaguirre = "Javier Aguirre <contacto@javaguirre.net>"; javaguirre = "Javier Aguirre <contacto@javaguirre.net>";
jb55 = "William Casarin <bill@casarin.me>"; jb55 = "William Casarin <jb55@jb55.com>";
jbedo = "Justin Bedő <cu@cua0.org>"; jbedo = "Justin Bedő <cu@cua0.org>";
jcumming = "Jack Cummings <jack@mudshark.org>"; jcumming = "Jack Cummings <jack@mudshark.org>";
jdagilliland = "Jason Gilliland <jdagilliland@gmail.com>"; jdagilliland = "Jason Gilliland <jdagilliland@gmail.com>";
@ -245,6 +260,7 @@
jensbin = "Jens Binkert <jensbin@protonmail.com>"; jensbin = "Jens Binkert <jensbin@protonmail.com>";
jerith666 = "Matt McHenry <github@matt.mchenryfamily.org>"; jerith666 = "Matt McHenry <github@matt.mchenryfamily.org>";
jfb = "James Felix Black <james@yamtime.com>"; jfb = "James Felix Black <james@yamtime.com>";
jfrankenau = "Johannes Frankenau <johannes@frankenau.net>";
jgeerds = "Jascha Geerds <jascha@jgeerds.name>"; jgeerds = "Jascha Geerds <jascha@jgeerds.name>";
jgertm = "Tim Jaeger <jger.tm@gmail.com>"; jgertm = "Tim Jaeger <jger.tm@gmail.com>";
jgillich = "Jakob Gillich <jakob@gillich.me>"; jgillich = "Jakob Gillich <jakob@gillich.me>";
@ -257,12 +273,14 @@
joelmo = "Joel Moberg <joel.moberg@gmail.com>"; joelmo = "Joel Moberg <joel.moberg@gmail.com>";
joelteon = "Joel Taylor <me@joelt.io>"; joelteon = "Joel Taylor <me@joelt.io>";
johbo = "Johannes Bornhold <johannes@bornhold.name>"; johbo = "Johannes Bornhold <johannes@bornhold.name>";
johnramsden = "John Ramsden <johnramsden@riseup.net>";
joko = "Ioannis Koutras <ioannis.koutras@gmail.com>"; joko = "Ioannis Koutras <ioannis.koutras@gmail.com>";
jonafato = "Jon Banafato <jon@jonafato.com>"; jonafato = "Jon Banafato <jon@jonafato.com>";
jpbernardy = "Jean-Philippe Bernardy <jeanphilippe.bernardy@gmail.com>"; jpbernardy = "Jean-Philippe Bernardy <jeanphilippe.bernardy@gmail.com>";
jpierre03 = "Jean-Pierre PRUNARET <nix@prunetwork.fr>"; jpierre03 = "Jean-Pierre PRUNARET <nix@prunetwork.fr>";
jpotier = "Martin Potier <jpo.contributes.to.nixos@marvid.fr>"; jpotier = "Martin Potier <jpo.contributes.to.nixos@marvid.fr>";
jraygauthier = "Raymond Gauthier <jraygauthier@gmail.com>"; jraygauthier = "Raymond Gauthier <jraygauthier@gmail.com>";
jtojnar = "Jan Tojnar <jtojnar@gmail.com>";
juliendehos = "Julien Dehos <dehos@lisic.univ-littoral.fr>"; juliendehos = "Julien Dehos <dehos@lisic.univ-littoral.fr>";
jwiegley = "John Wiegley <johnw@newartisans.com>"; jwiegley = "John Wiegley <johnw@newartisans.com>";
jwilberding = "Jordan Wilberding <jwilberding@afiniate.com>"; jwilberding = "Jordan Wilberding <jwilberding@afiniate.com>";
@ -275,8 +293,10 @@
khumba = "Bryan Gardiner <bog@khumba.net>"; khumba = "Bryan Gardiner <bog@khumba.net>";
KibaFox = "Kiba Fox <kiba.fox@foxypossibilities.com>"; KibaFox = "Kiba Fox <kiba.fox@foxypossibilities.com>";
kierdavis = "Kier Davis <kierdavis@gmail.com>"; kierdavis = "Kier Davis <kierdavis@gmail.com>";
kiloreux = "Kiloreux Emperex <kiloreux@gmail.com>";
kkallio = "Karn Kallio <tierpluspluslists@gmail.com>"; kkallio = "Karn Kallio <tierpluspluslists@gmail.com>";
knedlsepp = "Josef Kemetmüller <josef.kemetmueller@gmail.com>"; knedlsepp = "Josef Kemetmüller <josef.kemetmueller@gmail.com>";
konimex = "Muhammad Herdiansyah <herdiansyah@openmailbox.org>";
koral = "Koral <koral@mailoo.org>"; koral = "Koral <koral@mailoo.org>";
kovirobi = "Kovacsics Robert <kovirobi@gmail.com>"; kovirobi = "Kovacsics Robert <kovirobi@gmail.com>";
kragniz = "Louis Taylor <louis@kragniz.eu>"; kragniz = "Louis Taylor <louis@kragniz.eu>";
@ -297,6 +317,7 @@
lihop = "Leroy Hopson <nixos@leroy.geek.nz>"; lihop = "Leroy Hopson <nixos@leroy.geek.nz>";
linquize = "Linquize <linquize@yahoo.com.hk>"; linquize = "Linquize <linquize@yahoo.com.hk>";
linus = "Linus Arver <linusarver@gmail.com>"; linus = "Linus Arver <linusarver@gmail.com>";
lluchs = "Lukas Werling <lukas.werling@gmail.com>";
lnl7 = "Daiderd Jordan <daiderd@gmail.com>"; lnl7 = "Daiderd Jordan <daiderd@gmail.com>";
loskutov = "Ignat Loskutov <ignat.loskutov@gmail.com>"; loskutov = "Ignat Loskutov <ignat.loskutov@gmail.com>";
lovek323 = "Jason O'Conal <jason@oconal.id.au>"; lovek323 = "Jason O'Conal <jason@oconal.id.au>";
@ -308,6 +329,7 @@
luispedro = "Luis Pedro Coelho <luis@luispedro.org>"; luispedro = "Luis Pedro Coelho <luis@luispedro.org>";
lukego = "Luke Gorrie <luke@snabb.co>"; lukego = "Luke Gorrie <luke@snabb.co>";
lw = "Sergey Sofeychuk <lw@fmap.me>"; lw = "Sergey Sofeychuk <lw@fmap.me>";
lyt = "Tim Liou <wheatdoge@gmail.com>";
m3tti = "Mathaeus Sander <mathaeus.peter.sander@gmail.com>"; m3tti = "Mathaeus Sander <mathaeus.peter.sander@gmail.com>";
ma27 = "Maximilian Bosch <maximilian@mbosch.me>"; ma27 = "Maximilian Bosch <maximilian@mbosch.me>";
madjar = "Georges Dubus <georges.dubus@compiletoi.net>"; madjar = "Georges Dubus <georges.dubus@compiletoi.net>";
@ -359,6 +381,7 @@
MostAwesomeDude = "Corbin Simpson <cds@corbinsimpson.com>"; MostAwesomeDude = "Corbin Simpson <cds@corbinsimpson.com>";
mounium = "Katona László <muoniurn@gmail.com>"; mounium = "Katona László <muoniurn@gmail.com>";
MP2E = "Cray Elliott <MP2E@archlinux.us>"; MP2E = "Cray Elliott <MP2E@archlinux.us>";
mpcsh = "Mark Cohen <m@mpc.sh>";
mpscholten = "Marc Scholten <marc@mpscholten.de>"; mpscholten = "Marc Scholten <marc@mpscholten.de>";
mpsyco = "Francis St-Amour <fr.st-amour@gmail.com>"; mpsyco = "Francis St-Amour <fr.st-amour@gmail.com>";
msackman = "Matthew Sackman <matthew@wellquite.org>"; msackman = "Matthew Sackman <matthew@wellquite.org>";
@ -373,11 +396,12 @@
nand0p = "Fernando Jose Pando <nando@hex7.com>"; nand0p = "Fernando Jose Pando <nando@hex7.com>";
Nate-Devv = "Nathan Moore <natedevv@gmail.com>"; Nate-Devv = "Nathan Moore <natedevv@gmail.com>";
nathan-gs = "Nathan Bijnens <nathan@nathan.gs>"; nathan-gs = "Nathan Bijnens <nathan@nathan.gs>";
nckx = "Tobias Geerinckx-Rice <tobias.geerinckx.rice@gmail.com>"; nckx = "Tobias Geerinckx-Rice <github@tobias.gr>";
ndowens = "Nathan Owens <ndowens04@gmail.com>"; ndowens = "Nathan Owens <ndowens04@gmail.com>";
neeasade = "Nathan Isom <nathanisom27@gmail.com>"; neeasade = "Nathan Isom <nathanisom27@gmail.com>";
nequissimus = "Tim Steinbach <tim@nequissimus.com>"; nequissimus = "Tim Steinbach <tim@nequissimus.com>";
nfjinjing = "Jinjing Wang <nfjinjing@gmail.com>"; nfjinjing = "Jinjing Wang <nfjinjing@gmail.com>";
nh2 = "Niklas Hambüchen <mail@nh2.me>";
nhooyr = "Anmol Sethi <anmol@aubble.com>"; nhooyr = "Anmol Sethi <anmol@aubble.com>";
nickhu = "Nick Hu <me@nickhu.co.uk>"; nickhu = "Nick Hu <me@nickhu.co.uk>";
nicknovitski = "Nick Novitski <nixpkgs@nicknovitski.com>"; nicknovitski = "Nick Novitski <nixpkgs@nicknovitski.com>";
@ -389,6 +413,7 @@
np = "Nicolas Pouillard <np.nix@nicolaspouillard.fr>"; np = "Nicolas Pouillard <np.nix@nicolaspouillard.fr>";
nslqqq = "Nikita Mikhailov <nslqqq@gmail.com>"; nslqqq = "Nikita Mikhailov <nslqqq@gmail.com>";
nthorne = "Niklas Thörne <notrupertthorne@gmail.com>"; nthorne = "Niklas Thörne <notrupertthorne@gmail.com>";
nyarly = "Judson Lester <nyarly@gmail.com>";
obadz = "obadz <obadz-nixos@obadz.com>"; obadz = "obadz <obadz-nixos@obadz.com>";
ocharles = "Oliver Charles <ollie@ocharles.org.uk>"; ocharles = "Oliver Charles <ollie@ocharles.org.uk>";
odi = "Oliver Dunkl <oliver.dunkl@gmail.com>"; odi = "Oliver Dunkl <oliver.dunkl@gmail.com>";
@ -397,6 +422,7 @@
okasu = "Okasu <oka.sux@gmail.com>"; okasu = "Okasu <oka.sux@gmail.com>";
olcai = "Erik Timan <dev@timan.info>"; olcai = "Erik Timan <dev@timan.info>";
olejorgenb = "Ole Jørgen Brønner <olejorgenb@yahoo.no>"; olejorgenb = "Ole Jørgen Brønner <olejorgenb@yahoo.no>";
olynch = "Owen Lynch <owen@olynch.me>";
orbekk = "KJ Ørbekk <kjetil.orbekk@gmail.com>"; orbekk = "KJ Ørbekk <kjetil.orbekk@gmail.com>";
orbitz = "Malcolm Matalka <mmatalka@gmail.com>"; orbitz = "Malcolm Matalka <mmatalka@gmail.com>";
orivej = "Orivej Desh <orivej@gmx.fr>"; orivej = "Orivej Desh <orivej@gmx.fr>";
@ -467,6 +493,7 @@
rob = "Rob Vermaas <rob.vermaas@gmail.com>"; rob = "Rob Vermaas <rob.vermaas@gmail.com>";
robberer = "Longrin Wischnewski <robberer@freakmail.de>"; robberer = "Longrin Wischnewski <robberer@freakmail.de>";
robbinch = "Robbin C. <robbinch33@gmail.com>"; robbinch = "Robbin C. <robbinch33@gmail.com>";
roberth = "Robert Hensing <nixpkgs@roberthensing.nl>";
robgssp = "Rob Glossop <robgssp@gmail.com>"; robgssp = "Rob Glossop <robgssp@gmail.com>";
roblabla = "Robin Lambertz <robinlambertz+dev@gmail.com>"; roblabla = "Robin Lambertz <robinlambertz+dev@gmail.com>";
roconnor = "Russell O'Connor <roconnor@theorem.ca>"; roconnor = "Russell O'Connor <roconnor@theorem.ca>";
@ -477,9 +504,11 @@
rushmorem = "Rushmore Mushambi <rushmore@webenchanter.com>"; rushmorem = "Rushmore Mushambi <rushmore@webenchanter.com>";
rvl = "Rodney Lorrimar <dev+nix@rodney.id.au>"; rvl = "Rodney Lorrimar <dev+nix@rodney.id.au>";
rvlander = "Gaëtan André <rvlander@gaetanandre.eu>"; rvlander = "Gaëtan André <rvlander@gaetanandre.eu>";
rvolosatovs = "Roman Volosatovs <rvolosatovs@riseup.net";
ryanartecona = "Ryan Artecona <ryanartecona@gmail.com>"; ryanartecona = "Ryan Artecona <ryanartecona@gmail.com>";
ryansydnor = "Ryan Sydnor <ryan.t.sydnor@gmail.com>"; ryansydnor = "Ryan Sydnor <ryan.t.sydnor@gmail.com>";
ryantm = "Ryan Mulligan <ryan@ryantm.com>"; ryantm = "Ryan Mulligan <ryan@ryantm.com>";
rybern = "Ryan Bernstein <ryan.bernstein@columbia.edu>";
rycee = "Robert Helgesson <robert@rycee.net>"; rycee = "Robert Helgesson <robert@rycee.net>";
ryneeverett = "Ryne Everett <ryneeverett@gmail.com>"; ryneeverett = "Ryne Everett <ryneeverett@gmail.com>";
rzetterberg = "Richard Zetterberg <richard.zetterberg@gmail.com>"; rzetterberg = "Richard Zetterberg <richard.zetterberg@gmail.com>";
@ -487,10 +516,12 @@
samuelrivas = "Samuel Rivas <samuelrivas@gmail.com>"; samuelrivas = "Samuel Rivas <samuelrivas@gmail.com>";
sander = "Sander van der Burg <s.vanderburg@tudelft.nl>"; sander = "Sander van der Burg <s.vanderburg@tudelft.nl>";
sargon = "Daniel Ehlers <danielehlers@mindeye.net>"; sargon = "Daniel Ehlers <danielehlers@mindeye.net>";
sauyon = "Sauyon Lee <s@uyon.co>";
schmitthenner = "Fabian Schmitthenner <development@schmitthenner.eu>"; schmitthenner = "Fabian Schmitthenner <development@schmitthenner.eu>";
schneefux = "schneefux <schneefux+nixos_pkg@schneefux.xyz>"; schneefux = "schneefux <schneefux+nixos_pkg@schneefux.xyz>";
schristo = "Scott Christopher <schristopher@konputa.com>"; schristo = "Scott Christopher <schristopher@konputa.com>";
scolobb = "Sergiu Ivanov <sivanov@colimite.fr>"; scolobb = "Sergiu Ivanov <sivanov@colimite.fr>";
sdll = "Sasha Illarionov <sasha.delly@gmail.com>";
sepi = "Raffael Mancini <raffael@mancini.lu>"; sepi = "Raffael Mancini <raffael@mancini.lu>";
seppeljordan = "Sebastian Jordan <sebastian.jordan.mail@googlemail.com>"; seppeljordan = "Sebastian Jordan <sebastian.jordan.mail@googlemail.com>";
shanemikel = "Shane Pearlman <shanemikel1@gmail.com>"; shanemikel = "Shane Pearlman <shanemikel1@gmail.com>";
@ -524,6 +555,7 @@
steveej = "Stefan Junker <mail@stefanjunker.de>"; steveej = "Stefan Junker <mail@stefanjunker.de>";
SuprDewd = "Bjarki Ágúst Guðmundsson <suprdewd@gmail.com>"; SuprDewd = "Bjarki Ágúst Guðmundsson <suprdewd@gmail.com>";
swarren83 = "Shawn Warren <shawn.w.warren@gmail.com>"; swarren83 = "Shawn Warren <shawn.w.warren@gmail.com>";
swflint = "Samuel W. Flint <swflint@flintfam.org>";
swistak35 = "Rafał Łasocha <me@swistak35.com>"; swistak35 = "Rafał Łasocha <me@swistak35.com>";
szczyp = "Szczyp <qb@szczyp.com>"; szczyp = "Szczyp <qb@szczyp.com>";
sztupi = "Attila Sztupak <attila.sztupak@gmail.com>"; sztupi = "Attila Sztupak <attila.sztupak@gmail.com>";
@ -546,19 +578,24 @@
tohl = "Tomas Hlavaty <tom@logand.com>"; tohl = "Tomas Hlavaty <tom@logand.com>";
tokudan = "Daniel Frank <git@danielfrank.net>"; tokudan = "Daniel Frank <git@danielfrank.net>";
tomberek = "Thomas Bereknyei <tomberek@gmail.com>"; tomberek = "Thomas Bereknyei <tomberek@gmail.com>";
tomsmeets = "Tom Smeets <tom@tsmeets.nl>";
travisbhartwell = "Travis B. Hartwell <nafai@travishartwell.net>"; travisbhartwell = "Travis B. Hartwell <nafai@travishartwell.net>";
trevorj = "Trevor Joynson <nix@trevor.joynson.io>";
trino = "Hubert Mühlhans <muehlhans.hubert@ekodia.de>"; trino = "Hubert Mühlhans <muehlhans.hubert@ekodia.de>";
tstrobel = "Thomas Strobel <4ZKTUB6TEP74PYJOPWIR013S2AV29YUBW5F9ZH2F4D5UMJUJ6S@hash.domains>"; tstrobel = "Thomas Strobel <4ZKTUB6TEP74PYJOPWIR013S2AV29YUBW5F9ZH2F4D5UMJUJ6S@hash.domains>";
ttuegel = "Thomas Tuegel <ttuegel@mailbox.org>"; ttuegel = "Thomas Tuegel <ttuegel@mailbox.org>";
tv = "Tomislav Viljetić <tv@shackspace.de>"; tv = "Tomislav Viljetić <tv@shackspace.de>";
tvestelind = "Tomas Vestelind <tomas.vestelind@fripost.org>"; tvestelind = "Tomas Vestelind <tomas.vestelind@fripost.org>";
tvorog = "Marsel Zaripov <marszaripov@gmail.com>"; tvorog = "Marsel Zaripov <marszaripov@gmail.com>";
tweber = "Thorsten Weber <tw+nixpkgs@360vier.de>";
twey = "James Twey Kay <twey@twey.co.uk>"; twey = "James Twey Kay <twey@twey.co.uk>";
uralbash = "Svintsov Dmitry <root@uralbash.ru>"; uralbash = "Svintsov Dmitry <root@uralbash.ru>";
utdemir = "Utku Demir <me@utdemir.com>"; utdemir = "Utku Demir <me@utdemir.com>";
#urkud = "Yury G. Kudryashov <urkud+nix@ya.ru>"; inactive since 2012 #urkud = "Yury G. Kudryashov <urkud+nix@ya.ru>"; inactive since 2012
uwap = "uwap <me@uwap.name>"; uwap = "uwap <me@uwap.name>";
vaibhavsagar = "Vaibhav Sagar <vaibhavsagar@gmail.com>";
vandenoever = "Jos van den Oever <jos@vandenoever.info>"; vandenoever = "Jos van den Oever <jos@vandenoever.info>";
vanschelven = "Klaas van Schelven <klaas@vanschelven.com>";
vanzef = "Ivan Solyankin <vanzef@gmail.com>"; vanzef = "Ivan Solyankin <vanzef@gmail.com>";
vbgl = "Vincent Laporte <Vincent.Laporte@gmail.com>"; vbgl = "Vincent Laporte <Vincent.Laporte@gmail.com>";
vbmithr = "Vincent Bernardoff <vb@luminar.eu.org>"; vbmithr = "Vincent Bernardoff <vb@luminar.eu.org>";
@ -566,6 +603,7 @@
vdemeester = "Vincent Demeester <vincent@sbr.pm>"; vdemeester = "Vincent Demeester <vincent@sbr.pm>";
veprbl = "Dmitry Kalinkin <veprbl@gmail.com>"; veprbl = "Dmitry Kalinkin <veprbl@gmail.com>";
vifino = "Adrian Pistol <vifino@tty.sh>"; vifino = "Adrian Pistol <vifino@tty.sh>";
vinymeuh = "VinyMeuh <vinymeuh@gmail.com>";
viric = "Lluís Batlle i Rossell <viric@viric.name>"; viric = "Lluís Batlle i Rossell <viric@viric.name>";
vizanto = "Danny Wilson <danny@prime.vc>"; vizanto = "Danny Wilson <danny@prime.vc>";
vklquevs = "vklquevs <vklquevs@gmail.com>"; vklquevs = "vklquevs <vklquevs@gmail.com>";
@ -577,6 +615,7 @@
volth = "Jaroslavas Pocepko <jaroslavas@volth.com>"; volth = "Jaroslavas Pocepko <jaroslavas@volth.com>";
vozz = "Oliver Hunt <oliver.huntuk@gmail.com>"; vozz = "Oliver Hunt <oliver.huntuk@gmail.com>";
vrthra = "Rahul Gopinath <rahul@gopinath.org>"; vrthra = "Rahul Gopinath <rahul@gopinath.org>";
vyp = "vyp <elisp.vim@gmail.com>";
wedens = "wedens <kirill.wedens@gmail.com>"; wedens = "wedens <kirill.wedens@gmail.com>";
willtim = "Tim Philip Williams <tim.williams.public@gmail.com>"; willtim = "Tim Philip Williams <tim.williams.public@gmail.com>";
winden = "Antonio Vargas Gonzalez <windenntw@gmail.com>"; winden = "Antonio Vargas Gonzalez <windenntw@gmail.com>";
@ -598,9 +637,11 @@
z77z = "Marco Maggesi <maggesi@math.unifi.it>"; z77z = "Marco Maggesi <maggesi@math.unifi.it>";
zagy = "Christian Zagrodnick <cz@flyingcircus.io>"; zagy = "Christian Zagrodnick <cz@flyingcircus.io>";
zalakain = "Unai Zalakain <contact@unaizalakain.info>"; zalakain = "Unai Zalakain <contact@unaizalakain.info>";
zarelit = "David Costa <david@zarel.net>";
zauberpony = "Elmar Athmer <elmar@athmer.org>"; zauberpony = "Elmar Athmer <elmar@athmer.org>";
zef = "Zef Hemel <zef@zef.me>"; zef = "Zef Hemel <zef@zef.me>";
zimbatm = "zimbatm <zimbatm@zimbatm.com>"; zimbatm = "zimbatm <zimbatm@zimbatm.com>";
Zimmi48 = "Théo Zimmermann <theo.zimmermann@univ-paris-diderot.fr>";
zohl = "Al Zohali <zohl@fmap.me>"; zohl = "Al Zohali <zohl@fmap.me>";
zoomulator = "Kim Simmons <zoomulator@gmail.com>"; zoomulator = "Kim Simmons <zoomulator@gmail.com>";
zraexy = "David Mell <zraexy@gmail.com>"; zraexy = "David Mell <zraexy@gmail.com>";
View File
@ -98,7 +98,7 @@ rec {
/* Close a set of modules under the imports relation. */
closeModules = modules: args:
let
toClosureList = file: parentKey: imap1 (n: x:
if isAttrs x || isFunction x then
let key = "${parentKey}:anon-${toString n}"; in
unifyModuleSyntax file key (unpackSubmodule (applyIfFunction key) x args)
View File
@ -33,7 +33,7 @@ rec {
concatImapStrings (pos: x: "${toString pos}-${x}") ["foo" "bar"]
=> "1-foo2-bar"
*/
concatImapStrings = f: list: concatStrings (lib.imap1 f list);
/* Place an element between each element of a list
@ -70,7 +70,7 @@ rec {
concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ] concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ]
=> "6-3-2" => "6-3-2"
*/ */
concatImapStringsSep = sep: f: list: concatStringsSep sep (lib.imap f list); concatImapStringsSep = sep: f: list: concatStringsSep sep (lib.imap1 f list);
/* Construct a Unix-style search path consisting of each `subDir" /* Construct a Unix-style search path consisting of each `subDir"
directory of the given list of packages. directory of the given list of packages.
View File
@ -1,20 +1,22 @@
with import ./parse.nix;
with import ../attrsets.nix;
with import ../lists.nix;
rec {
patterns = rec {
"32bit" = { cpu = { bits = 32; }; };
"64bit" = { cpu = { bits = 64; }; };
i686 = { cpu = cpuTypes.i686; };
x86_64 = { cpu = cpuTypes.x86_64; };
PowerPC = { cpu = cpuTypes.powerpc; };
x86 = { cpu = { family = "x86"; }; };
Arm = { cpu = { family = "arm"; }; };
Mips = { cpu = { family = "mips"; }; };
BigEndian = { cpu = { significantByte = significantBytes.bigEndian; }; };
LittleEndian = { cpu = { significantByte = significantBytes.littleEndian; }; };
BSD = { kernel = { families = { inherit (kernelFamilies) bsd; }; }; };
Unix = [ BSD Darwin Linux SunOS Hurd Cygwin ];
Darwin = { kernel = kernels.darwin; };
Linux = { kernel = kernels.linux; };
@ -27,11 +29,15 @@ rec {
Cygwin = { kernel = kernels.windows; abi = abis.cygnus; };
MinGW = { kernel = kernels.windows; abi = abis.gnu; };
Arm32 = recursiveUpdate Arm patterns."32bit";
Arm64 = recursiveUpdate Arm patterns."64bit";
};
matchAnyAttrs = patterns:
if builtins.isList patterns then attrs: any (pattern: matchAttrs pattern attrs) patterns
else matchAttrs patterns;
predicates = mapAttrs'
(name: value: nameValuePair ("is" + name) (matchAnyAttrs value))
patterns;
}
View File
@ -44,7 +44,7 @@ rec {
i686 = { bits = 32; significantByte = littleEndian; family = "x86"; };
x86_64 = { bits = 64; significantByte = littleEndian; family = "x86"; };
mips64el = { bits = 32; significantByte = littleEndian; family = "mips"; };
powerpc = { bits = 32; significantByte = bigEndian; family = "power"; };
};
isVendor = isType "vendor";
@ -68,21 +68,20 @@ rec {
isKernelFamily = isType "kernel-family";
kernelFamilies = setTypes "kernel-family" {
bsd = {};
};
isKernel = x: isType "kernel" x;
kernels = with execFormats; with kernelFamilies; setTypesAssert "kernel"
(x: isExecFormat x.execFormat && all isKernelFamily (attrValues x.families))
{
darwin = { execFormat = macho; families = { }; };
freebsd = { execFormat = elf; families = { inherit bsd; }; };
hurd = { execFormat = elf; families = { }; };
linux = { execFormat = elf; families = { }; };
netbsd = { execFormat = elf; families = { inherit bsd; }; };
none = { execFormat = unknown; families = { }; };
openbsd = { execFormat = elf; families = { inherit bsd; }; };
solaris = { execFormat = elf; families = { }; };
windows = { execFormat = pe; families = { }; };
} // { # aliases
# TODO(@Ericson2314): Handle these Darwin version suffixes more generally.
@ -164,7 +163,7 @@ rec {
mkSystemFromString = s: mkSystemFromSkeleton (mkSkeletonFromList (lib.splitString "-" s));
doubleFromSystem = { cpu, vendor, kernel, abi, ... }:
if abi == abis.cygnus
then "${cpu.name}-cygwin"
else "${cpu.name}-${kernel.name}";
View File
@ -543,6 +543,10 @@ rec {
# Cavium ThunderX stuff.
PCI_HOST_THUNDER_ECAM y
# The default (=y) forces us to have the XHCI firmware available in initrd,
# which our initrd builder can't currently do easily.
USB_XHCI_TEGRA m
'';
uboot = null;
kernelTarget = "Image";
View File
@ -285,6 +285,38 @@ runTests {
expected = builtins.toJSON val;
};
testToPretty = {
expr = mapAttrs (const (generators.toPretty {})) rec {
int = 42;
bool = true;
string = "fnord";
null_ = null;
function = x: x;
functionArgs = { arg ? 4, foo }: arg;
list = [ 3 4 function [ false ] ];
attrs = { foo = null; "foo bar" = "baz"; };
drv = derivation { name = "test"; system = builtins.currentSystem; };
};
expected = rec {
int = "42";
bool = "true";
string = "\"fnord\"";
null_ = "null";
function = "<λ>";
functionArgs = "<λ:{(arg),foo}>";
list = "[ 3 4 ${function} [ false ] ]";
attrs = "{ \"foo\" = null; \"foo bar\" = \"baz\"; }";
drv = "<δ>";
};
};
testToPrettyAllowPrettyValues = {
expr = generators.toPretty { allowPrettyValues = true; }
{ __pretty = v: "«" + v + "»"; val = "foo"; };
expected = "«foo»";
};
# MISC
testOverridableDelayableArgsTest = {
View File
@ -179,9 +179,9 @@ rec {
description = "list of ${elemType.description}s"; description = "list of ${elemType.description}s";
check = isList; check = isList;
merge = loc: defs: merge = loc: defs:
map (x: x.value) (filter (x: x ? value) (concatLists (imap (n: def: map (x: x.value) (filter (x: x ? value) (concatLists (imap1 (n: def:
if isList def.value then if isList def.value then
imap (m: def': imap1 (m: def':
(mergeDefinitions (mergeDefinitions
(loc ++ ["[definition ${toString n}-entry ${toString m}]"]) (loc ++ ["[definition ${toString n}-entry ${toString m}]"])
elemType elemType
@ -220,7 +220,7 @@ rec {
if isList def.value then
{ inherit (def) file;
value = listToAttrs (
imap1 (elemIdx: elem:
{ name = elem.name or "unnamed-${toString defIdx}.${toString elemIdx}";
value = elem;
}) def.value);
@ -233,7 +233,7 @@ rec {
name = "loaOf"; name = "loaOf";
description = "list or attribute set of ${elemType.description}s"; description = "list or attribute set of ${elemType.description}s";
check = x: isList x || isAttrs x; check = x: isList x || isAttrs x;
merge = loc: defs: attrOnly.merge loc (imap convertIfList defs); merge = loc: defs: attrOnly.merge loc (imap1 convertIfList defs);
getSubOptions = prefix: elemType.getSubOptions (prefix ++ ["<name?>"]); getSubOptions = prefix: elemType.getSubOptions (prefix ++ ["<name?>"]);
getSubModules = elemType.getSubModules; getSubModules = elemType.getSubModules;
substSubModules = m: loaOf (elemType.substSubModules m); substSubModules = m: loaOf (elemType.substSubModules m);
View File
@ -2,11 +2,11 @@
set -o pipefail
GNOME_FTP=ftp.gnome.org/pub/GNOME/sources
# projects that don't follow the GNOME major versioning, or that we don't want to
# programmatically update
NO_GNOME_MAJOR="ghex gtkhtml gdm"
usage() {
echo "Usage: $0 gnome_dir <show project>|<update project>|<update-all> [major.minor]" >&2
@ -18,10 +18,10 @@ if [ "$#" -lt 2 ]; then
usage usage
fi fi
GNOME_TOP="$1" GNOME_TOP=$1
shift shift
action="$1" action=$1
# curl -l ftp://... doesn't work from my office in HSE, and I don't want to have # curl -l ftp://... doesn't work from my office in HSE, and I don't want to have
# any conversations with sysadmin. Somehow lftp works. # any conversations with sysadmin. Somehow lftp works.
@ -36,18 +36,18 @@ else
fi fi
find_project() { find_project() {
exec find "$GNOME_TOP" -mindepth 2 -maxdepth 2 -type d $@ exec find "$GNOME_TOP" -mindepth 2 -maxdepth 2 -type d "$@"
} }
show_project() { show_project() {
local project="$1" local project=$1
local majorVersion="$2" local majorVersion=$2
local version="" local version=
if [ -z "$majorVersion" ]; then if [ -z "$majorVersion" ]; then
echo "Looking for available versions..." >&2 echo "Looking for available versions..." >&2
local available_baseversions=( `ls_ftp ftp://${GNOME_FTP}/${project} | grep '[0-9]\.[0-9]' | sort -t. -k1,1n -k 2,2n` ) local available_baseversions=$(ls_ftp ftp://${GNOME_FTP}/${project} | grep '[0-9]\.[0-9]' | sort -t. -k1,1n -k 2,2n)
if [ "$?" -ne "0" ]; then if [ "$?" -ne 0 ]; then
echo "Project $project not found" >&2 echo "Project $project not found" >&2
return 1 return 1
fi fi
@ -59,11 +59,11 @@ show_project() {
if echo "$majorVersion" | grep -q "[0-9]\+\.[0-9]\+\.[0-9]\+"; then if echo "$majorVersion" | grep -q "[0-9]\+\.[0-9]\+\.[0-9]\+"; then
# not a major version # not a major version
version="$majorVersion" version=$majorVersion
majorVersion=$(echo "$majorVersion" | cut -d '.' -f 1,2) majorVersion=$(echo "$majorVersion" | cut -d '.' -f 1,2)
fi fi
local FTPDIR="${GNOME_FTP}/${project}/${majorVersion}" local FTPDIR=${GNOME_FTP}/${project}/${majorVersion}
#version=`curl -l ${FTPDIR}/ 2>/dev/null | grep LATEST-IS | sed -e s/LATEST-IS-//` #version=`curl -l ${FTPDIR}/ 2>/dev/null | grep LATEST-IS | sed -e s/LATEST-IS-//`
# gnome's LATEST-IS is broken. Do not trust it. # gnome's LATEST-IS is broken. Do not trust it.
@ -92,7 +92,7 @@ show_project() {
esac esac
done done
echo "Found versions ${!versions[@]}" >&2 echo "Found versions ${!versions[@]}" >&2
version=`echo ${!versions[@]} | sed -e 's/ /\n/g' | sort -t. -k1,1n -k 2,2n -k 3,3n | tail -n1` version=$(echo ${!versions[@]} | sed -e 's/ /\n/g' | sort -t. -k1,1n -k 2,2n -k 3,3n | tail -n1)
if [ -z "$version" ]; then if [ -z "$version" ]; then
echo "No version available for major $majorVersion" >&2 echo "No version available for major $majorVersion" >&2
return 1 return 1
@ -103,7 +103,7 @@ show_project() {
local name=${project}-${version} local name=${project}-${version}
echo "Fetching .sha256 file" >&2 echo "Fetching .sha256 file" >&2
local sha256out=$(curl -s -f http://${FTPDIR}/${name}.sha256sum) local sha256out=$(curl -s -f http://"${FTPDIR}"/"${name}".sha256sum)
if [ "$?" -ne "0" ]; then if [ "$?" -ne "0" ]; then
echo "Version not found" >&2 echo "Version not found" >&2
@ -136,8 +136,8 @@ fetchurl: {
} }
update_project() { update_project() {
local project="$1" local project=$1
local majorVersion="$2" local majorVersion=$2
# find project in nixpkgs tree # find project in nixpkgs tree
projectPath=$(find_project -name "$project" -print) projectPath=$(find_project -name "$project" -print)
@ -150,14 +150,14 @@ update_project() {
if [ "$?" -eq "0" ]; then if [ "$?" -eq "0" ]; then
echo "Updating $projectPath/src.nix" >&2 echo "Updating $projectPath/src.nix" >&2
echo -e "$src" > "$projectPath/src.nix" echo -e "$src" > "$projectPath"/src.nix
fi fi
return 0 return 0
} }
if [ "$action" == "update-all" ]; then if [ "$action" = "update-all" ]; then
majorVersion="$2" majorVersion=$2
if [ -z "$majorVersion" ]; then if [ -z "$majorVersion" ]; then
echo "No major version specified" >&2 echo "No major version specified" >&2
usage usage
@ -170,23 +170,23 @@ if [ "$action" == "update-all" ]; then
echo "Skipping $project" echo "Skipping $project"
else else
echo "= Updating $project to $majorVersion" >&2 echo "= Updating $project to $majorVersion" >&2
update_project $project $majorVersion update_project "$project" "$majorVersion"
echo >&2 echo >&2
fi fi
done done
else else
project="$2" project=$2
majorVersion="$3" majorVersion=$3
if [ -z "$project" ]; then if [ -z "$project" ]; then
echo "No project specified, exiting" >&2 echo "No project specified, exiting" >&2
usage usage
fi fi
if [ "$action" == "show" ]; then if [ "$action" = show ]; then
show_project $project $majorVersion show_project "$project" "$majorVersion"
elif [ "$action" == "update" ]; then elif [ "$action" = update ]; then
update_project $project $majorVersion update_project "$project" "$majorVersion"
else else
echo "Unknown action $action" >&2 echo "Unknown action $action" >&2
usage usage

View File

@ -53,8 +53,8 @@ while test -n "$1"; do
nox) nox)
echo "=== Fetching Nox from binary cache" echo "=== Fetching Nox from binary cache"
# build nox silently so it's not in the log # build nox (+ a basic nix-shell env) silently so it's not in the log
nix-build "<nixpkgs>" -A nox -A stdenv nix-shell -p nox stdenv --command true
;; ;;
pr) pr)

View File

@ -91,7 +91,9 @@ def _get_latest_version_pypi(package, extension):
if release['filename'].endswith(extension): if release['filename'].endswith(extension):
# TODO: In case of wheel we need to do further checks! # TODO: In case of wheel we need to do further checks!
sha256 = release['digests']['sha256'] sha256 = release['digests']['sha256']
break
else:
sha256 = None
return version, sha256 return version, sha256

View File

@ -65,7 +65,7 @@ let
chmod -R u+w . chmod -R u+w .
ln -s ${modulesDoc} configuration/modules.xml ln -s ${modulesDoc} configuration/modules.xml
ln -s ${optionsDocBook} options-db.xml ln -s ${optionsDocBook} options-db.xml
echo "${version}" > version printf "%s" "${version}" > version
''; '';
toc = builtins.toFile "toc.xml" toc = builtins.toFile "toc.xml"
@ -94,25 +94,43 @@ let
"--stringparam chunk.toc ${toc}" "--stringparam chunk.toc ${toc}"
]; ];
manual-combined = runCommand "nixos-manual-combined"
{ inherit sources;
buildInputs = [ libxml2 libxslt ];
meta.description = "The NixOS manual as plain docbook XML";
}
''
${copySources}
xmllint --xinclude --output ./manual-combined.xml ./manual.xml
xmllint --xinclude --noxincludenode \
--output ./man-pages-combined.xml ./man-pages.xml
xmllint --debug --noout --nonet \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual-combined.xml
xmllint --debug --noout --nonet \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
man-pages-combined.xml
mkdir $out
cp manual-combined.xml $out/
cp man-pages-combined.xml $out/
'';
olinkDB = runCommand "manual-olinkdb" olinkDB = runCommand "manual-olinkdb"
{ inherit sources; { inherit sources;
buildInputs = [ libxml2 libxslt ]; buildInputs = [ libxml2 libxslt ];
} }
'' ''
${copySources}
xsltproc \ xsltproc \
${manualXsltprocOptions} \ ${manualXsltprocOptions} \
--stringparam collect.xref.targets only \ --stringparam collect.xref.targets only \
--stringparam targets.filename "$out/manual.db" \ --stringparam targets.filename "$out/manual.db" \
--nonet --xinclude \ --nonet \
${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl \ ${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl \
./manual.xml ${manual-combined}/manual-combined.xml
# Check the validity of the man pages sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
./man-pages.xml
cat > "$out/olinkdb.xml" <<EOF cat > "$out/olinkdb.xml" <<EOF
<?xml version="1.0" encoding="utf-8"?> <?xml version="1.0" encoding="utf-8"?>
@ -158,21 +176,15 @@ in rec {
allowedReferences = ["out"]; allowedReferences = ["out"];
} }
'' ''
${copySources}
# Check the validity of the manual sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual.xml
# Generate the HTML manual. # Generate the HTML manual.
dst=$out/share/doc/nixos dst=$out/share/doc/nixos
mkdir -p $dst mkdir -p $dst
xsltproc \ xsltproc \
${manualXsltprocOptions} \ ${manualXsltprocOptions} \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \ --stringparam target.database.document "${olinkDB}/olinkdb.xml" \
--nonet --xinclude --output $dst/ \ --nonet --output $dst/ \
${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl ./manual.xml ${docbook5_xsl}/xml/xsl/docbook/xhtml/chunktoc.xsl \
${manual-combined}/manual-combined.xml
mkdir -p $dst/images/callouts mkdir -p $dst/images/callouts
cp ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/images/callouts/ cp ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/images/callouts/
@ -190,13 +202,6 @@ in rec {
buildInputs = [ libxml2 libxslt zip ]; buildInputs = [ libxml2 libxslt zip ];
} }
'' ''
${copySources}
# Check the validity of the manual sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
manual.xml
# Generate the epub manual. # Generate the epub manual.
dst=$out/share/doc/nixos dst=$out/share/doc/nixos
@ -204,10 +209,11 @@ in rec {
${manualXsltprocOptions} \ ${manualXsltprocOptions} \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \ --stringparam target.database.document "${olinkDB}/olinkdb.xml" \
--nonet --xinclude --output $dst/epub/ \ --nonet --xinclude --output $dst/epub/ \
${docbook5_xsl}/xml/xsl/docbook/epub/docbook.xsl ./manual.xml ${docbook5_xsl}/xml/xsl/docbook/epub/docbook.xsl \
${manual-combined}/manual-combined.xml
mkdir -p $dst/epub/OEBPS/images/callouts mkdir -p $dst/epub/OEBPS/images/callouts
cp -r ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/epub/OEBPS/images/callouts cp -r ${docbook5_xsl}/xml/xsl/docbook/images/callouts/*.gif $dst/epub/OEBPS/images/callouts # */
echo "application/epub+zip" > mimetype echo "application/epub+zip" > mimetype
manual="$dst/nixos-manual.epub" manual="$dst/nixos-manual.epub"
zip -0Xq "$manual" mimetype zip -0Xq "$manual" mimetype
@ -227,23 +233,16 @@ in rec {
allowedReferences = ["out"]; allowedReferences = ["out"];
} }
'' ''
${copySources}
# Check the validity of the man pages sources.
xmllint --noout --nonet --xinclude --noxincludenode \
--relaxng ${docbook5}/xml/rng/docbook/docbook.rng \
./man-pages.xml
# Generate manpages. # Generate manpages.
mkdir -p $out/share/man mkdir -p $out/share/man
xsltproc --nonet --xinclude \ xsltproc --nonet \
--param man.output.in.separate.dir 1 \ --param man.output.in.separate.dir 1 \
--param man.output.base.dir "'$out/share/man/'" \ --param man.output.base.dir "'$out/share/man/'" \
--param man.endnotes.are.numbered 0 \ --param man.endnotes.are.numbered 0 \
--param man.break.after.slash 1 \ --param man.break.after.slash 1 \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \ --stringparam target.database.document "${olinkDB}/olinkdb.xml" \
${docbook5_xsl}/xml/xsl/docbook/manpages/docbook.xsl \ ${docbook5_xsl}/xml/xsl/docbook/manpages/docbook.xsl \
./man-pages.xml ${manual-combined}/man-pages-combined.xml
''; '';
} }
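
The refactor above funnels all XInclude resolution and RELAX NG validation into a single `manual-combined` derivation, so the HTML, EPUB and man-page builds consume pre-combined XML instead of re-validating the sources each time. The underlying `runCommand` pattern looks roughly like this (a self-contained sketch; the input XML is a trivial stand-in, not the actual manual sources):

```nix
# sketch.nix (hypothetical)
{ pkgs ? import <nixpkgs> { } }:

let
  # trivial stand-in for the real manual sources
  mainXml = pkgs.writeText "main.xml" ''
    <?xml version="1.0"?><book><title>demo</title></book>
  '';
in
pkgs.runCommand "xml-combined"
  { buildInputs = [ pkgs.libxml2 ]; }   # provides xmllint
  ''
    # resolve <xi:include>s into a single standalone document, then keep it
    xmllint --xinclude --output combined.xml ${mainXml}
    mkdir $out
    cp combined.xml $out/
  ''
```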

View File

@ -11,7 +11,7 @@ a USB stick. You can use the <command>dd</command> utility to write the image:
<command>dd if=<replaceable>path-to-image</replaceable> <command>dd if=<replaceable>path-to-image</replaceable>
of=<replaceable>/dev/sdb</replaceable></command>. Be careful about specifying the of=<replaceable>/dev/sdb</replaceable></command>. Be careful about specifying the
correct drive; you can use the <command>lsblk</command> command to get a list of correct drive; you can use the <command>lsblk</command> command to get a list of
block devices. If you're on OS X you can run <command>diskutil list</command> block devices. If you're on macOS you can run <command>diskutil list</command>
to see the list of devices; the device you'll use for the USB must be ejected to see the list of devices; the device you'll use for the USB must be ejected
before writing the image.</para> before writing the image.</para>

View File

@ -17,11 +17,16 @@
<refsynopsisdiv> <refsynopsisdiv>
<cmdsynopsis> <cmdsynopsis>
<command>nixos-option</command> <command>nixos-option</command>
<arg choice='plain'><replaceable>option.name</replaceable></arg> <arg>
<option>-I</option>
<replaceable>path</replaceable>
</arg>
<arg><option>--verbose</option></arg>
<arg><option>--xml</option></arg>
<arg choice="plain"><replaceable>option.name</replaceable></arg>
</cmdsynopsis> </cmdsynopsis>
</refsynopsisdiv> </refsynopsisdiv>
<refsection><title>Description</title> <refsection><title>Description</title>
<para>This command evaluates the configuration specified in <para>This command evaluates the configuration specified in
@ -33,6 +38,45 @@ attributes contained in the attribute set.</para>
</refsection> </refsection>
<refsection><title>Options</title>
<para>This command accepts the following options:</para>
<variablelist>
<varlistentry>
<term><option>-I</option> <replaceable>path</replaceable></term>
<listitem>
<para>
This option is passed to the underlying
<command>nix-instantiate</command> invocation.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--verbose</option></term>
<listitem>
<para>
This option enables verbose mode, which currently is just
the Bash <command>set</command> <option>-x</option> debug mode.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--xml</option></term>
<listitem>
<para>
This option causes the output to be rendered as XML.
</para>
</listitem>
</varlistentry>
</variablelist>
</refsection>
<refsection><title>Environment</title> <refsection><title>Environment</title>
<variablelist> <variablelist>

View File

@ -18,7 +18,7 @@
<para>If you encounter problems, please report them on the <para>If you encounter problems, please report them on the
<literal <literal
xlink:href="http://lists.science.uu.nl/mailman/listinfo/nix-dev">nix-dev@lists.science.uu.nl</literal> xlink:href="https://groups.google.com/forum/#!forum/nix-devel">nix-devel</literal>
mailing list or on the <link mailing list or on the <link
xlink:href="irc://irc.freenode.net/#nixos"> xlink:href="irc://irc.freenode.net/#nixos">
<literal>#nixos</literal> channel on Freenode</link>. Bugs should <literal>#nixos</literal> channel on Freenode</link>. Bugs should

View File

@ -28,7 +28,7 @@ has the following highlights:</para>
since version 0.0 as well as the most recent <link since version 0.0 as well as the most recent <link
xlink:href="http://www.stackage.org/">Stackage Nightly</link> xlink:href="http://www.stackage.org/">Stackage Nightly</link>
snapshot. The announcement <link snapshot. The announcement <link
xlink:href="http://lists.science.uu.nl/pipermail/nix-dev/2015-September/018138.html">&quot;Full xlink:href="https://nixos.org/nix-dev/2015-September/018138.html">&quot;Full
Stackage Support in Nixpkgs&quot;</link> gives additional Stackage Support in Nixpkgs&quot;</link> gives additional
details.</para> details.</para>
</listitem> </listitem>

View File

@ -78,13 +78,13 @@ following incompatible changes:</para>
our package set it loosely based on the latest available LTS release, i.e. our package set it loosely based on the latest available LTS release, i.e.
LTS 7.x at the time of this writing. New releases of NixOS and Nixpkgs will LTS 7.x at the time of this writing. New releases of NixOS and Nixpkgs will
drop those old names entirely. <link drop those old names entirely. <link
xlink:href="http://lists.science.uu.nl/pipermail/nix-dev/2016-June/020585.html">The xlink:href="https://nixos.org/nix-dev/2016-June/020585.html">The
motivation for this change</link> has been discussed at length on the motivation for this change</link> has been discussed at length on the
<literal>nix-dev</literal> mailing list and in <link <literal>nix-dev</literal> mailing list and in <link
xlink:href="https://github.com/NixOS/nixpkgs/issues/14897">Github issue xlink:href="https://github.com/NixOS/nixpkgs/issues/14897">Github issue
#14897</link>. Development strategies for Haskell hackers who want to rely #14897</link>. Development strategies for Haskell hackers who want to rely
on Nix and NixOS have been described in <link on Nix and NixOS have been described in <link
xlink:href="http://lists.science.uu.nl/pipermail/nix-dev/2016-June/020642.html">another xlink:href="https://nixos.org/nix-dev/2016-June/020642.html">another
nix-dev article</link>.</para> nix-dev article</link>.</para>
</listitem> </listitem>

View File

@ -315,7 +315,7 @@ following incompatible changes:</para>
let let
pkgs = import &lt;nixpkgs&gt; {}; pkgs = import &lt;nixpkgs&gt; {};
in in
import pkgs.path { overlays = [(self: super: ...)] } import pkgs.path { overlays = [(self: super: ...)]; }
</programlisting> </programlisting>
</para> </para>
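
The corrected programlisting above (note the added semicolon after the overlays list) expands into a complete, evaluable example; the overlay body here is only illustrative:

```nix
# overlay-import.nix (hypothetical)
let
  pkgs = import <nixpkgs> { };
in
  import pkgs.path {
    overlays = [
      (self: super: {
        # illustrative overlay entry; any attribute works
        myHello = super.hello;
      })
    ];
  }
```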

View File

@ -85,6 +85,10 @@ rmdir /var/lib/ipfs/.ipfs
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para>
The following changes apply if the <literal>stateVersion</literal> is changed to 17.09 or higher.
For <literal>stateVersion = "17.03"</literal> or lower, the old behavior is preserved.
</para>
<para> <para>
The <literal>postgres</literal> default version was changed from 9.5 to 9.6. The <literal>postgres</literal> default version was changed from 9.5 to 9.6.
</para> </para>
@ -94,6 +98,9 @@ rmdir /var/lib/ipfs/.ipfs
<para> <para>
The <literal>postgres</literal> default <literal>dataDir</literal> has changed from <literal>/var/db/postgres</literal> to <literal>/var/lib/postgresql/$psqlSchema</literal> where $psqlSchema is 9.6 for example. The <literal>postgres</literal> default <literal>dataDir</literal> has changed from <literal>/var/db/postgres</literal> to <literal>/var/lib/postgresql/$psqlSchema</literal> where $psqlSchema is 9.6 for example.
</para> </para>
<para>
The <literal>mysql</literal> default <literal>dataDir</literal> has changed from <literal>/var/mysql</literal> to <literal>/var/lib/mysql</literal>.
</para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
@ -113,9 +120,42 @@ rmdir /var/lib/ipfs/.ipfs
also serve as a SSH agent if <literal>enableSSHSupport</literal> is set. also serve as a SSH agent if <literal>enableSSHSupport</literal> is set.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The <literal>services.tinc.networks.&lt;name&gt;.listenAddress</literal>
option had a misleading name that did not correspond to its behavior. It
now correctly defines the IP address to listen on for incoming connections. To
keep the previous behavior, use
<literal>services.tinc.networks.&lt;name&gt;.bindToAddress</literal>
instead. Refer to the description of the options for more details.
</para>
</listitem>
<listitem>
<para>
<literal>tlsdate</literal> package and module were removed. This is due to the project
being dead and not building with openssl 1.1.
</para>
</listitem>
<listitem>
<para>
<literal>wvdial</literal> package and module were removed. This is due to the project
being dead and not building with openssl 1.1.
</para>
</listitem>
<listitem>
<para>
<literal>cc-wrapper</literal>'s setup-hook now exports a number of
environment variables corresponding to binutils binaries
(e.g. <envar>LD</envar>, <envar>STRIP</envar>, <envar>RANLIB</envar>,
etc.). This is done so that packages' build systems no longer have to guess
these tools, since guessing is hard to predict, especially when
cross-compiling. However, some packages have broken as a result, because
their build systems either do not support taking such environment variables
as parameters, or claim to support it without adequate testing.
</para>
</listitem>
</itemizedlist> </itemizedlist>
<para>Other notable improvements:</para> <para>Other notable improvements:</para>
<itemizedlist> <itemizedlist>
@ -141,6 +181,32 @@ rmdir /var/lib/ipfs/.ipfs
module where user Fontconfig settings are available. module where user Fontconfig settings are available.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
ZFS/SPL have been updated to 0.7.0, <literal>zfsUnstable, splUnstable</literal>
have therefore been removed.
</para>
</listitem>
<listitem>
<para>
The <option>time.timeZone</option> option now allows the value
<literal>null</literal> in addition to timezone strings. This value
allows changing the timezone of a system imperatively using
<command>timedatectl set-timezone</command>. The default timezone
is still UTC.
</para>
</listitem>
<listitem>
<para>
Nixpkgs overlays may now be specified with a file as well as a directory. The
value of <literal>&lt;nixpkgs-overlays&gt;</literal> may be a file, and
<filename>~/.config/nixpkgs/overlays.nix</filename> can be used instead of the
<filename>~/.config/nixpkgs/overlays</filename> directory.
</para>
<para>
See the overlays chapter of the Nixpkgs manual for more details.
</para>
</listitem>
</itemizedlist> </itemizedlist>
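
For the overlay change described in the last item, `<nixpkgs-overlays>` may now point at a single file rather than a directory; a minimal sketch of such a file (the override itself is purely illustrative):

```nix
# ~/.config/nixpkgs/overlays.nix — must evaluate to a *list* of overlays
[
  (self: super: {
    # purely illustrative override
    myEmacs = super.emacs;
  })
]
```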

View File

@ -39,6 +39,12 @@
with lib; with lib;
let let
extensions = {
qcow2 = "qcow2";
vpc = "vhd";
raw = "img";
};
# Copied from https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/installer/cd-dvd/channel.nix # Copied from https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/installer/cd-dvd/channel.nix
# TODO: factor out more cleanly # TODO: factor out more cleanly
@ -142,8 +148,8 @@ in pkgs.vmTools.runInLinuxVM (
mv $diskImage $out/nixos.img mv $diskImage $out/nixos.img
diskImage=$out/nixos.img diskImage=$out/nixos.img
'' else '' '' else ''
${pkgs.qemu}/bin/qemu-img convert -f raw -O qcow2 $diskImage $out/nixos.qcow2 ${pkgs.qemu}/bin/qemu-img convert -f raw -O ${format} $diskImage $out/nixos.${extensions.${format}}
diskImage=$out/nixos.qcow2 diskImage=$out/nixos.${extensions.${format}}
''} ''}
${postVM} ${postVM}
''; '';

View File

@ -33,7 +33,7 @@ pkgs.stdenv.mkDerivation {
echo "Creating an EXT4 image of $bytes bytes (numInodes=$numInodes, numDataBlocks=$numDataBlocks)" echo "Creating an EXT4 image of $bytes bytes (numInodes=$numInodes, numDataBlocks=$numDataBlocks)"
truncate -s $bytes $out truncate -s $bytes $out
faketime "1970-01-01 00:00:00" mkfs.ext4 -L ${volumeLabel} -U 44444444-4444-4444-8888-888888888888 $out faketime -f "1970-01-01 00:00:01" mkfs.ext4 -L ${volumeLabel} -U 44444444-4444-4444-8888-888888888888 $out
# Populate the image contents by piping a bunch of commands to the `debugfs` tool from e2fsprogs. # Populate the image contents by piping a bunch of commands to the `debugfs` tool from e2fsprogs.
# For example, to copy /nix/store/abcd...efg-coreutils-8.23/bin/sleep: # For example, to copy /nix/store/abcd...efg-coreutils-8.23/bin/sleep:
@ -76,7 +76,7 @@ pkgs.stdenv.mkDerivation {
echo sif $file gid 30000 # chgrp to nixbld echo sif $file gid 30000 # chgrp to nixbld
done done
) | faketime "1970-01-01 00:00:00" debugfs -w $out -f /dev/stdin > errorlog 2>&1 ) | faketime -f "1970-01-01 00:00:01" debugfs -w $out -f /dev/stdin > errorlog 2>&1
# The debugfs tool doesn't terminate on error nor exit with a non-zero status. Check manually. # The debugfs tool doesn't terminate on error nor exit with a non-zero status. Check manually.
if egrep -q 'Could not allocate|File not found' errorlog; then if egrep -q 'Could not allocate|File not found' errorlog; then

View File

@ -219,8 +219,8 @@ sub waitForMonitorPrompt {
sub retry { sub retry {
my ($coderef) = @_; my ($coderef) = @_;
my $n; my $n;
for ($n = 0; $n < 900; $n++) { for ($n = 899; $n >=0; $n--) {
return if &$coderef; return if &$coderef($n);
sleep 1; sleep 1;
} }
die "action timed out after $n seconds"; die "action timed out after $n seconds";
@ -518,6 +518,12 @@ sub waitUntilTTYMatches {
$self->nest("waiting for $regexp to appear on tty $tty", sub { $self->nest("waiting for $regexp to appear on tty $tty", sub {
retry sub { retry sub {
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on TTY$tty, which currently contains:");
$self->log($self->getTTYText($tty));
}
return 1 if $self->getTTYText($tty) =~ /$regexp/; return 1 if $self->getTTYText($tty) =~ /$regexp/;
} }
}); });
@ -566,6 +572,12 @@ sub waitForText {
my ($self, $regexp) = @_; my ($self, $regexp) = @_;
$self->nest("waiting for $regexp to appear on the screen", sub { $self->nest("waiting for $regexp to appear on the screen", sub {
retry sub { retry sub {
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on the screen, which currently contains:");
$self->log($self->getScreenText);
}
return 1 if $self->getScreenText =~ /$regexp/; return 1 if $self->getScreenText =~ /$regexp/;
} }
}); });
@ -600,6 +612,13 @@ sub waitForWindow {
$self->nest("waiting for a window to appear", sub { $self->nest("waiting for a window to appear", sub {
retry sub { retry sub {
my @names = $self->getWindowNames; my @names = $self->getWindowNames;
my ($retries_remaining) = @_;
if ($retries_remaining == 0) {
$self->log("Last chance to match /$regexp/ on the the window list, which currently contains:");
$self->log(join(", ", @names));
}
foreach my $n (@names) { foreach my $n (@names) {
return 1 if $n =~ /$regexp/; return 1 if $n =~ /$regexp/;
} }

View File

@ -22,15 +22,26 @@ in {
generated image. Glob patterns work. generated image. Glob patterns work.
''; '';
}; };
sizeMB = mkOption {
type = types.int;
default = if config.ec2.hvm then 2048 else 8192;
description = "The size in MB of the image";
};
format = mkOption {
type = types.enum [ "raw" "qcow2" "vpc" ];
default = "qcow2";
description = "The image format to output";
};
}; };
config.system.build.amazonImage = import ../../../lib/make-disk-image.nix { config.system.build.amazonImage = import ../../../lib/make-disk-image.nix {
inherit lib config; inherit lib config;
inherit (cfg) contents; inherit (cfg) contents format;
pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package pkgs = import ../../../.. { inherit (pkgs) system; }; # ensure we use the regular qemu-kvm package
partitioned = config.ec2.hvm; partitioned = config.ec2.hvm;
diskSize = if config.ec2.hvm then 2048 else 8192; diskSize = cfg.sizeMB;
format = "qcow2";
configFile = pkgs.writeText "configuration.nix" configFile = pkgs.writeText "configuration.nix"
'' ''
{ {
@ -41,5 +52,4 @@ in {
} }
''; '';
}; };
} }
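
With the two options introduced above, the image size and output format can be set from the configuration instead of being hard-coded. A hedged sketch, assuming (as the `cfg` references above suggest) that the options live under `amazonImage`:

```nix
# configuration.nix fragment (assumption: options sit under amazonImage)
{
  amazonImage = {
    sizeMB = 4096;   # replaces the old hard-coded 2048/8192 default
    format = "vpc";  # yields nixos.vhd via the extensions table in make-disk-image.nix
  };
}
```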

View File

@ -19,7 +19,6 @@ let
bind_policy ${config.users.ldap.bind.policy} bind_policy ${config.users.ldap.bind.policy}
${optionalString config.users.ldap.useTLS '' ${optionalString config.users.ldap.useTLS ''
ssl start_tls ssl start_tls
tls_checkpeer no
''} ''}
${optionalString (config.users.ldap.bind.distinguishedName != "") '' ${optionalString (config.users.ldap.bind.distinguishedName != "") ''
binddn ${config.users.ldap.bind.distinguishedName} binddn ${config.users.ldap.bind.distinguishedName}

View File

@ -20,12 +20,26 @@ in
options = { options = {
networking.hosts = lib.mkOption {
type = types.attrsOf ( types.listOf types.str );
default = {};
example = literalExample ''
{
"127.0.0.1" = [ "foo.bar.baz" ];
"192.168.0.2" = [ "fileserver.local" "nameserver.local" ];
};
'';
description = ''
Locally defined maps of hostnames to IP addresses.
'';
};
networking.extraHosts = lib.mkOption { networking.extraHosts = lib.mkOption {
type = types.lines; type = types.lines;
default = ""; default = "";
example = "192.168.0.1 lanlocalhost"; example = "192.168.0.1 lanlocalhost";
description = '' description = ''
Additional entries to be appended to <filename>/etc/hosts</filename>. Additional verbatim entries to be appended to <filename>/etc/hosts</filename>.
''; '';
}; };
@ -188,11 +202,22 @@ in
# /etc/hosts: Hostname-to-IP mappings. # /etc/hosts: Hostname-to-IP mappings.
"hosts".text = "hosts".text =
let oneToString = set : ip : ip + " " + concatStringsSep " " ( getAttr ip set );
allToString = set : concatMapStringsSep "\n" ( oneToString set ) ( attrNames set );
userLocalHosts = optionalString
( builtins.hasAttr "127.0.0.1" cfg.hosts )
( concatStringsSep " " ( remove "localhost" cfg.hosts."127.0.0.1" ));
userLocalHosts6 = optionalString
( builtins.hasAttr "::1" cfg.hosts )
( concatStringsSep " " ( remove "localhost" cfg.hosts."::1" ));
otherHosts = allToString ( removeAttrs cfg.hosts [ "127.0.0.1" "::1" ]);
in
'' ''
127.0.0.1 localhost 127.0.0.1 ${userLocalHosts} localhost
${optionalString cfg.enableIPv6 '' ${optionalString cfg.enableIPv6 ''
::1 localhost ::1 ${userLocalHosts6} localhost
''} ''}
${otherHosts}
${cfg.extraHosts} ${cfg.extraHosts}
''; '';
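
A short sketch of how the new `networking.hosts` option composes with the generated file (values are illustrative): entries for `127.0.0.1`/`::1` are merged onto the localhost lines, every other address becomes its own line, and `networking.extraHosts` is still appended verbatim at the end:

```nix
{
  networking.hosts = {
    "127.0.0.1"   = [ "dev.local" ];                     # appended to the 127.0.0.1 localhost line
    "192.168.0.2" = [ "fileserver.local" "nas.local" ];  # emitted as its own line
  };

  # still available for free-form, verbatim lines
  networking.extraHosts = ''
    10.0.0.5 printer.lan
  '';
}
```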
@ -223,7 +248,9 @@ in
''; '';
} // optionalAttrs config.services.resolved.enable { } // optionalAttrs config.services.resolved.enable {
"resolv.conf".source = "/run/systemd/resolve/resolv.conf"; # symlink the static version of resolv.conf as recommended by upstream:
# https://www.freedesktop.org/software/systemd/man/systemd-resolved.html#/etc/resolv.conf
"resolv.conf".source = "${pkgs.systemd}/lib/systemd/resolv.conf";
} // optionalAttrs (config.services.resolved.enable && dnsmasqResolve) { } // optionalAttrs (config.services.resolved.enable && dnsmasqResolve) {
"dnsmasq-resolv.conf".source = "/run/systemd/resolve/resolv.conf"; "dnsmasq-resolv.conf".source = "/run/systemd/resolve/resolv.conf";
}; };

View File

@ -26,7 +26,15 @@ with lib;
fonts.fontconfig.enable = false; fonts.fontconfig.enable = false;
nixpkgs.config.packageOverrides = pkgs: nixpkgs.config.packageOverrides = pkgs: {
{ dbus = pkgs.dbus.override { x11Support = false; }; }; dbus = pkgs.dbus.override { x11Support = false; };
networkmanager_fortisslvpn = pkgs.networkmanager_fortisslvpn.override { withGnome = false; };
networkmanager_l2tp = pkgs.networkmanager_l2tp.override { withGnome = false; };
networkmanager_openconnect = pkgs.networkmanager_openconnect.override { withGnome = false; };
networkmanager_openvpn = pkgs.networkmanager_openvpn.override { withGnome = false; };
networkmanager_pptp = pkgs.networkmanager_pptp.override { withGnome = false; };
networkmanager_vpnc = pkgs.networkmanager_vpnc.override { withGnome = false; };
pinentry = pkgs.pinentry.override { gtk2 = null; qt4 = null; };
};
}; };
} }

View File

@ -6,24 +6,30 @@ with lib;
let let
inherit (config.services.avahi) nssmdns; # only with nscd up and running we can load NSS modules that are not integrated in NSS
inherit (config.services.samba) nsswins; canLoadExternalModules = config.services.nscd.enable;
ldap = (config.users.ldap.enable && config.users.ldap.nsswitch); myhostname = canLoadExternalModules;
sssd = config.services.sssd.enable; mymachines = canLoadExternalModules;
resolved = config.services.resolved.enable; nssmdns = canLoadExternalModules && config.services.avahi.nssmdns;
nsswins = canLoadExternalModules && config.services.samba.nsswins;
ldap = canLoadExternalModules && (config.users.ldap.enable && config.users.ldap.nsswitch);
sssd = canLoadExternalModules && config.services.sssd.enable;
resolved = canLoadExternalModules && config.services.resolved.enable;
hostArray = [ "files" "mymachines" ] hostArray = [ "files" ]
++ optionals mymachines [ "mymachines" ]
++ optionals nssmdns [ "mdns_minimal [!UNAVAIL=return]" ] ++ optionals nssmdns [ "mdns_minimal [!UNAVAIL=return]" ]
++ optionals nsswins [ "wins" ] ++ optionals nsswins [ "wins" ]
++ optionals resolved ["resolv [!UNAVAIL=return]"] ++ optionals resolved ["resolve [!UNAVAIL=return]"]
++ [ "dns" ] ++ [ "dns" ]
++ optionals nssmdns [ "mdns" ] ++ optionals nssmdns [ "mdns" ]
++ ["myhostname" ]; ++ optionals myhostname ["myhostname" ];
passwdArray = [ "files" ] passwdArray = [ "files" ]
++ optional sssd "sss" ++ optional sssd "sss"
++ optionals ldap [ "ldap" ] ++ optionals ldap [ "ldap" ]
++ [ "mymachines" ]; ++ optionals mymachines [ "mymachines" ]
++ [ "systemd" ];
shadowArray = [ "files" ] shadowArray = [ "files" ]
++ optional sssd "sss" ++ optional sssd "sss"
@ -36,6 +42,7 @@ in {
options = { options = {
# NSS modules. Hacky! # NSS modules. Hacky!
# Only works with nscd!
system.nssModules = mkOption { system.nssModules = mkOption {
type = types.listOf types.path; type = types.listOf types.path;
internal = true; internal = true;
@ -55,6 +62,18 @@ in {
}; };
config = { config = {
assertions = [
{
# Generic catch-all in case a NixOS module that adds to nssModules does not guard it with a more specific assertion message.
assertion = config.system.nssModules.path != "" -> canLoadExternalModules;
message = "Loading NSS modules from path ${config.system.nssModules.path} requires nscd being enabled.";
}
{
# resolved does not need to add to nssModules, therefore needs an extra assertion
assertion = resolved -> canLoadExternalModules;
message = "Loading systemd-resolved's nss-resolve NSS module requires nscd being enabled.";
}
];
# Name Service Switch configuration file. Required by the C # Name Service Switch configuration file. Required by the C
# library. !!! Factor out the mdns stuff. The avahi module # library. !!! Factor out the mdns stuff. The avahi module
@ -78,7 +97,7 @@ in {
# configured IP addresses, or ::1 and 127.0.0.2 as # configured IP addresses, or ::1 and 127.0.0.2 as
# fallbacks. Systemd also provides nss-mymachines to return IP # fallbacks. Systemd also provides nss-mymachines to return IP
# addresses of local containers. # addresses of local containers.
system.nssModules = [ config.systemd.package.out ]; system.nssModules = optionals canLoadExternalModules [ config.systemd.package.out ];
}; };
} }
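
The `hosts` lookup line assembled above is just a concatenation of `lib.optionals` results gated on the individual feature flags. A tiny standalone sketch of that pattern (flags hard-coded for illustration):

```nix
# evaluate with nix-instantiate --eval --strict
let
  lib = import <nixpkgs/lib>;
  nssmdns  = true;
  resolved = false;
  hostArray = [ "files" ]
    ++ lib.optionals nssmdns  [ "mdns_minimal [!UNAVAIL=return]" ]
    ++ lib.optionals resolved [ "resolve [!UNAVAIL=return]" ]
    ++ [ "dns" ]
    ++ lib.optionals nssmdns  [ "mdns" ];
in
  "hosts: ${lib.concatStringsSep " " hostArray}"
# => "hosts: files mdns_minimal [!UNAVAIL=return] dns mdns"
```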

View File

@ -6,6 +6,7 @@ with lib;
let let
cfg = config.hardware.pulseaudio; cfg = config.hardware.pulseaudio;
alsaCfg = config.sound;
systemWide = cfg.enable && cfg.systemWide; systemWide = cfg.enable && cfg.systemWide;
nonSystemWide = cfg.enable && !cfg.systemWide; nonSystemWide = cfg.enable && !cfg.systemWide;
@ -76,6 +77,7 @@ let
ctl.!default { ctl.!default {
type pulse type pulse
} }
${alsaCfg.extraConfig}
''); '');
in { in {
@ -222,7 +224,7 @@ in {
# Allow PulseAudio to get realtime priority using rtkit. # Allow PulseAudio to get realtime priority using rtkit.
security.rtkit.enable = true; security.rtkit.enable = true;
systemd.packages = [ cfg.package ]; systemd.packages = [ overriddenPackage ];
}) })
(mkIf hasZeroconf { (mkIf hasZeroconf {

View File

@ -5,6 +5,52 @@ with lib;
let let
randomEncryptionCoerce = enable: { inherit enable; };
randomEncryptionOpts = { ... }: {
options = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Encrypt swap device with a random key. This way you won't have a persistent swap device.
WARNING: Don't try to hibernate when you have at least one swap partition with
this option enabled! We have no way to set the partition into which the
hibernation image is saved, so if your image ends up on an encrypted one you would lose it!
WARNING #2: Do not use /dev/disk/by-uuid/… or /dev/disk/by-label/… as your swap device
when using randomEncryption as the UUIDs and labels will get erased on every boot when
the partition is encrypted. Best to use /dev/disk/by-partuuid/
'';
};
cipher = mkOption {
default = "aes-xts-plain64";
example = "serpent-xts-plain64";
type = types.str;
description = ''
Use specified cipher for randomEncryption.
Hint: Run "cryptsetup benchmark" to see which one is fastest on your machine.
'';
};
source = mkOption {
default = "/dev/urandom";
example = "/dev/random";
type = types.str;
description = ''
Define the source of randomness to obtain a random key for encryption.
'';
};
};
};
swapCfg = {config, options, ...}: { swapCfg = {config, options, ...}: {
options = { options = {
@ -47,10 +93,17 @@ let
randomEncryption = mkOption { randomEncryption = mkOption {
default = false; default = false;
type = types.bool; example = {
enable = true;
cipher = "serpent-xts-plain64";
source = "/dev/random";
};
type = types.coercedTo types.bool randomEncryptionCoerce (types.submodule randomEncryptionOpts);
description = '' description = ''
Encrypt swap device with a random key. This way you won't have a persistent swap device. Encrypt swap device with a random key. This way you won't have a persistent swap device.
HINT: run "cryptsetup benchmark" to test cipher performance on your machine.
WARNING: Don't try to hibernate when you have at least one swap partition with WARNING: Don't try to hibernate when you have at least one swap partition with
this option enabled! We have no way to set the partition into which hibernation image this option enabled! We have no way to set the partition into which hibernation image
is saved, so if your image ends up on an encrypted one you would lose it! is saved, so if your image ends up on an encrypted one you would lose it!
@ -77,7 +130,7 @@ let
device = mkIf options.label.isDefined device = mkIf options.label.isDefined
"/dev/disk/by-label/${config.label}"; "/dev/disk/by-label/${config.label}";
deviceName = lib.replaceChars ["\\"] [""] (escapeSystemdPath config.device); deviceName = lib.replaceChars ["\\"] [""] (escapeSystemdPath config.device);
realDevice = if config.randomEncryption then "/dev/mapper/${deviceName}" else config.device; realDevice = if config.randomEncryption.enable then "/dev/mapper/${deviceName}" else config.device;
}; };
}; };
@ -125,14 +178,14 @@ in
createSwapDevice = sw: createSwapDevice = sw:
assert sw.device != ""; assert sw.device != "";
assert !(sw.randomEncryption && lib.hasPrefix "/dev/disk/by-uuid" sw.device); assert !(sw.randomEncryption.enable && lib.hasPrefix "/dev/disk/by-uuid" sw.device);
assert !(sw.randomEncryption && lib.hasPrefix "/dev/disk/by-label" sw.device); assert !(sw.randomEncryption.enable && lib.hasPrefix "/dev/disk/by-label" sw.device);
let realDevice' = escapeSystemdPath sw.realDevice; let realDevice' = escapeSystemdPath sw.realDevice;
in nameValuePair "mkswap-${sw.deviceName}" in nameValuePair "mkswap-${sw.deviceName}"
{ description = "Initialisation of swap device ${sw.device}"; { description = "Initialisation of swap device ${sw.device}";
wantedBy = [ "${realDevice'}.swap" ]; wantedBy = [ "${realDevice'}.swap" ];
before = [ "${realDevice'}.swap" ]; before = [ "${realDevice'}.swap" ];
path = [ pkgs.utillinux ] ++ optional sw.randomEncryption pkgs.cryptsetup; path = [ pkgs.utillinux ] ++ optional sw.randomEncryption.enable pkgs.cryptsetup;
script = script =
'' ''
@ -145,13 +198,11 @@ in
truncate --size "${toString sw.size}M" "${sw.device}" truncate --size "${toString sw.size}M" "${sw.device}"
fi fi
chmod 0600 ${sw.device} chmod 0600 ${sw.device}
${optionalString (!sw.randomEncryption) "mkswap ${sw.realDevice}"} ${optionalString (!sw.randomEncryption.enable) "mkswap ${sw.realDevice}"}
fi fi
''} ''}
${optionalString sw.randomEncryption '' ${optionalString sw.randomEncryption.enable ''
echo "secretkey" | cryptsetup luksFormat --batch-mode ${sw.device} cryptsetup plainOpen -c ${sw.randomEncryption.cipher} -d ${sw.randomEncryption.source} ${sw.device} ${sw.deviceName}
echo "secretkey" | cryptsetup luksOpen ${sw.device} ${sw.deviceName}
cryptsetup luksErase --batch-mode ${sw.device}
mkswap ${sw.realDevice} mkswap ${sw.realDevice}
''} ''}
''; '';
@ -159,12 +210,12 @@ in
unitConfig.RequiresMountsFor = [ "${dirOf sw.device}" ]; unitConfig.RequiresMountsFor = [ "${dirOf sw.device}" ];
unitConfig.DefaultDependencies = false; # needed to prevent a cycle unitConfig.DefaultDependencies = false; # needed to prevent a cycle
serviceConfig.Type = "oneshot"; serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = sw.randomEncryption; serviceConfig.RemainAfterExit = sw.randomEncryption.enable;
serviceConfig.ExecStop = optionalString sw.randomEncryption "${pkgs.cryptsetup}/bin/cryptsetup luksClose ${sw.deviceName}"; serviceConfig.ExecStop = optionalString sw.randomEncryption.enable "${pkgs.cryptsetup}/bin/cryptsetup luksClose ${sw.deviceName}";
restartIfChanged = false; restartIfChanged = false;
}; };
in listToAttrs (map createSwapDevice (filter (sw: sw.size != null || sw.randomEncryption) config.swapDevices)); in listToAttrs (map createSwapDevice (filter (sw: sw.size != null || sw.randomEncryption.enable) config.swapDevices));
}; };
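
Because the option is now declared as `types.coercedTo types.bool … (types.submodule …)`, both the old boolean spelling and the new attribute-set spelling are accepted. A sketch of the two forms (device paths are placeholders):

```nix
{
  swapDevices = [
    # old form: the bare bool is coerced to { enable = true; }
    { device = "/dev/disk/by-partuuid/00000000-0000-0000-0000-000000000001";
      randomEncryption = true;
    }

    # new form: cipher and randomness source can be set explicitly
    { device = "/dev/disk/by-partuuid/00000000-0000-0000-0000-000000000002";
      randomEncryption = {
        enable = true;
        cipher = "serpent-xts-plain64";
        source = "/dev/random";
      };
    }
  ];
}
```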

View File

@ -115,10 +115,12 @@ in
"/share/mime" "/share/mime"
"/share/nano" "/share/nano"
"/share/org" "/share/org"
"/share/terminfo"
"/share/themes" "/share/themes"
"/share/vim-plugins" "/share/vim-plugins"
"/share/vulkan" "/share/vulkan"
"/share/kservices5"
"/share/kservicetypes5"
"/share/kxmlgui5"
]; ];
system.path = pkgs.buildEnv { system.path = pkgs.buildEnv {

View File

@ -0,0 +1,33 @@
# This module manages the terminfo database
# and its integration in the system.
{ config, ... }:
{
config = {
environment.pathsToLink = [
"/share/terminfo"
];
environment.etc."terminfo" = {
source = "${config.system.path}/share/terminfo";
};
environment.profileRelativeEnvVars = {
TERMINFO_DIRS = [ "/share/terminfo" ];
};
environment.extraInit = ''
# reset TERM with new TERMINFO available (if any)
export TERM=$TERM
'';
security.sudo.extraConfig = ''
# Keep terminfo database for root and %wheel.
Defaults:root,%wheel env_keep+=TERMINFO_DIRS
Defaults:root,%wheel env_keep+=TERMINFO
'';
};
}

View File

@ -14,13 +14,16 @@ in
time = { time = {
timeZone = mkOption { timeZone = mkOption {
default = "UTC"; default = null;
type = types.str; type = types.nullOr types.str;
example = "America/New_York"; example = "America/New_York";
description = '' description = ''
The time zone used when displaying times and dates. See <link The time zone used when displaying times and dates. See <link
xlink:href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones"/> xlink:href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones"/>
for a comprehensive list of possible values for this setting. for a comprehensive list of possible values for this setting.
If null, the timezone will default to UTC and can be set imperatively
using timedatectl.
''; '';
}; };
@ -40,13 +43,14 @@ in
# This way services are restarted when tzdata changes. # This way services are restarted when tzdata changes.
systemd.globalEnvironment.TZDIR = tzdir; systemd.globalEnvironment.TZDIR = tzdir;
environment.etc.localtime = systemd.services.systemd-timedated.environment = lib.optionalAttrs (config.time.timeZone != null) { NIXOS_STATIC_TIMEZONE = "1"; };
{ source = "/etc/zoneinfo/${config.time.timeZone}";
mode = "direct-symlink"; environment.etc = {
zoneinfo.source = tzdir;
} // lib.optionalAttrs (config.time.timeZone != null) {
localtime.source = "/etc/zoneinfo/${config.time.timeZone}";
localtime.mode = "direct-symlink";
}; };
environment.etc.zoneinfo.source = tzdir;
}; };
} }
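
The `environment.etc` value above is built with `lib.optionalAttrs`, so the `localtime` entries simply disappear when `time.timeZone = null`. A minimal evaluation sketch of that merge (the zoneinfo value is a stand-in for `tzdir`):

```nix
# evaluate with nix-instantiate --eval --strict
let
  lib = import <nixpkgs/lib>;
  mkEtc = timeZone:
    { zoneinfo = "<tzdir>"; }
    // lib.optionalAttrs (timeZone != null) {
      localtime = "/etc/zoneinfo/${timeZone}";
    };
in {
  static     = mkEtc "America/New_York";  # both zoneinfo and localtime present
  imperative = mkEtc null;                # only zoneinfo; timedatectl manages /etc/localtime
}
```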

View File

@ -527,7 +527,7 @@ in {
input.gid = ids.gids.input; input.gid = ids.gids.input;
}; };
system.activationScripts.users = stringAfter [ "etc" ] system.activationScripts.users = stringAfter [ "stdio" ]
'' ''
${pkgs.perl}/bin/perl -w \ ${pkgs.perl}/bin/perl -w \
-I${pkgs.perlPackages.FileSlurp}/lib/perl5/site_perl \ -I${pkgs.perlPackages.FileSlurp}/lib/perl5/site_perl \

View File

@ -3,7 +3,7 @@
with lib; with lib;
{ {
meta.maintainers = [ maintainers.grahamc ]; meta.maintainers = with maintainers; [ grahamc ];
options = { options = {
hardware.mcelog = { hardware.mcelog = {
@ -19,19 +19,17 @@ with lib;
}; };
config = mkIf config.hardware.mcelog.enable { config = mkIf config.hardware.mcelog.enable {
systemd.services.mcelog = { systemd = {
description = "Machine Check Exception Logging Daemon"; packages = [ pkgs.mcelog ];
wantedBy = [ "multi-user.target" ];
serviceConfig = { services.mcelog = {
ExecStart = "${pkgs.mcelog}/bin/mcelog --daemon --foreground"; wantedBy = [ "multi-user.target" ];
SuccessExitStatus = [ 0 15 ]; serviceConfig = {
ProtectHome = true;
ProtectHome = true; PrivateNetwork = true;
PrivateNetwork = true; PrivateTmp = true;
PrivateTmp = true; };
}; };
}; };
}; };
} }
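
The rewrite above stops defining the unit inline and instead ships the upstream unit file via `systemd.packages`, adding only a `wantedBy` and a few hardening overrides. The same pattern, as a generic hedged sketch:

```nix
{ pkgs, ... }:
{
  systemd = {
    # install the .service files shipped by the package itself
    packages = [ pkgs.mcelog ];

    # the upstream unit is not wired into a target on NixOS,
    # so only wantedBy and extra overrides are declared here
    services.mcelog = {
      wantedBy = [ "multi-user.target" ];
      serviceConfig.PrivateTmp = true;
    };
  };
}
```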

View File

@ -1,5 +1,5 @@
{ {
x86_64-linux = "/nix/store/crqd5wmrqipl4n1fcm5kkc1zg4sj80js-nix-1.11.11"; x86_64-linux = "/nix/store/avwiw7hb1qckag864sc6ixfxr8qmf94w-nix-1.11.13";
i686-linux = "/nix/store/wsjn14xp5ja509d4dxb1c78zhirw0b5x-nix-1.11.11"; i686-linux = "/nix/store/8wv3ms0afw95hzsz4lxzv0nj4w3614z9-nix-1.11.13";
x86_64-darwin = "/nix/store/zqkqnhk85g2shxlpb04y72h1i3db3gpl-nix-1.11.11"; x86_64-darwin = "/nix/store/z21lvakv1l7lhasmv5fvaz8mlzxia8k9-nix-1.11.13";
} }

View File

@ -140,7 +140,7 @@ channel_closure="$tmpdir/channel.closure"
nix-store --export $channel_root > $channel_closure nix-store --export $channel_root > $channel_closure
# Populate the target root directory with the basics # Populate the target root directory with the basics
@prepare_root@/bin/nixos-prepare-root $mountPoint $channel_root $system_root @nixClosure@ $system_closure $channel_closure @prepare_root@/bin/nixos-prepare-root "$mountPoint" "$channel_root" "$system_root" @nixClosure@ "$system_closure" "$channel_closure"
# nixos-prepare-root doesn't currently do anything with file ownership, so we set it up here instead # nixos-prepare-root doesn't currently do anything with file ownership, so we set it up here instead
chown @root_uid@:@nixbld_gid@ $mountPoint/nix/store chown @root_uid@:@nixbld_gid@ $mountPoint/nix/store

View File

@ -250,7 +250,7 @@ trap cleanup EXIT
# If --repair is given, don't try to use the Nix daemon, because the # If --repair is given, don't try to use the Nix daemon, because the
# flag can only be used directly. # flag can only be used directly.
if [ -z "$repair" ] && systemctl show nix-daemon.socket nix-daemon.service | grep -q ActiveState=active; then if [ -z "$repair" ] && systemctl show nix-daemon.socket nix-daemon.service | grep -q ActiveState=active; then
export NIX_REMOTE=${NIX_REMOTE:-daemon} export NIX_REMOTE=${NIX_REMOTE-daemon}
fi fi

View File

@ -139,6 +139,7 @@
btsync = 113; btsync = 113;
minecraft = 114; minecraft = 114;
#monetdb = 115; # unused (not packaged), removed 2016-09-19 #monetdb = 115; # unused (not packaged), removed 2016-09-19
vault = 115;
rippled = 116; rippled = 116;
murmur = 117; murmur = 117;
foundationdb = 118; foundationdb = 118;
@ -166,7 +167,7 @@
dnsmasq = 141; dnsmasq = 141;
uhub = 142; uhub = 142;
yandexdisk = 143; yandexdisk = 143;
collectd = 144; #collectd = 144; #unused
consul = 145; consul = 145;
mailpile = 146; mailpile = 146;
redmine = 147; redmine = 147;
@ -213,7 +214,7 @@
plex = 193; plex = 193;
grafana = 196; grafana = 196;
skydns = 197; skydns = 197;
ripple-rest = 198; # ripple-rest = 198; # unused, removed 2017-08-12
nix-serve = 199; nix-serve = 199;
tvheadend = 200; tvheadend = 200;
uwsgi = 201; uwsgi = 201;
@ -295,6 +296,7 @@
aria2 = 277; aria2 = 277;
clickhouse = 278; clickhouse = 278;
rslsync = 279; rslsync = 279;
minio = 280;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399! # When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -333,7 +335,7 @@
dialout = 27; dialout = 27;
#polkituser = 28; # currently unused, polkitd doesn't need a group #polkituser = 28; # currently unused, polkitd doesn't need a group
utmp = 29; utmp = 29;
#ddclient = 30; # unused ddclient = 30;
davfs2 = 31; davfs2 = 31;
disnix = 33; disnix = 33;
osgi = 34; osgi = 34;
@ -414,6 +416,7 @@
btsync = 113; btsync = 113;
#minecraft = 114; # unused #minecraft = 114; # unused
#monetdb = 115; # unused (not packaged), removed 2016-09-19 #monetdb = 115; # unused (not packaged), removed 2016-09-19
vault = 115;
#ripped = 116; # unused #ripped = 116; # unused
#murmur = 117; # unused #murmur = 117; # unused
foundationdb = 118; foundationdb = 118;
@ -486,7 +489,7 @@
sabnzbd = 194; sabnzbd = 194;
#grafana = 196; #unused #grafana = 196; #unused
#skydns = 197; #unused #skydns = 197; #unused
#ripple-rest = 198; #unused # ripple-rest = 198; # unused, removed 2017-08-12
#nix-serve = 199; #unused #nix-serve = 199; #unused
#tvheadend = 200; #unused #tvheadend = 200; #unused
uwsgi = 201; uwsgi = 201;
@ -559,6 +562,7 @@
aria2 = 277; aria2 = 277;
clickhouse = 278; clickhouse = 278;
rslsync = 279; rslsync = 279;
minio = 280;
# When adding a gid, make sure it doesn't match an existing # When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal # uid. Users and groups with the same name should have equal

View File

@ -17,7 +17,7 @@ let
# } # }
merge = loc: defs: merge = loc: defs:
zipAttrs zipAttrs
(flatten (imap (n: def: imap (m: def': (flatten (imap1 (n: def: imap1 (m: def':
maintainer.merge (loc ++ ["[${toString n}-${toString m}]"]) maintainer.merge (loc ++ ["[${toString n}-${toString m}]"])
[{ inherit (def) file; value = def'; }]) def.value) defs)); [{ inherit (def) file; value = def'; }]) def.value) defs));
}; };

View File

@ -21,6 +21,7 @@
./config/sysctl.nix ./config/sysctl.nix
./config/system-environment.nix ./config/system-environment.nix
./config/system-path.nix ./config/system-path.nix
./config/terminfo.nix
./config/timezone.nix ./config/timezone.nix
./config/unix-odbc-drivers.nix ./config/unix-odbc-drivers.nix
./config/users-groups.nix ./config/users-groups.nix
@ -104,7 +105,6 @@
./programs/venus.nix ./programs/venus.nix
./programs/vim.nix ./programs/vim.nix
./programs/wireshark.nix ./programs/wireshark.nix
./programs/wvdial.nix
./programs/xfs_quota.nix ./programs/xfs_quota.nix
./programs/xonsh.nix ./programs/xonsh.nix
./programs/zsh/oh-my-zsh.nix ./programs/zsh/oh-my-zsh.nix
@ -115,6 +115,7 @@
./security/apparmor.nix ./security/apparmor.nix
./security/apparmor-suid.nix ./security/apparmor-suid.nix
./security/audit.nix ./security/audit.nix
./security/auditd.nix
./security/ca.nix ./security/ca.nix
./security/chromium-suid-sandbox.nix ./security/chromium-suid-sandbox.nix
./security/dhparams.nix ./security/dhparams.nix
@ -184,6 +185,7 @@
./services/databases/neo4j.nix ./services/databases/neo4j.nix
./services/databases/openldap.nix ./services/databases/openldap.nix
./services/databases/opentsdb.nix ./services/databases/opentsdb.nix
./services/databases/postage.nix
./services/databases/postgresql.nix ./services/databases/postgresql.nix
./services/databases/redis.nix ./services/databases/redis.nix
./services/databases/riak.nix ./services/databases/riak.nix
@ -235,16 +237,18 @@
./services/hardware/udisks2.nix ./services/hardware/udisks2.nix
./services/hardware/upower.nix ./services/hardware/upower.nix
./services/hardware/thermald.nix ./services/hardware/thermald.nix
./services/logging/SystemdJournal2Gelf.nix
./services/logging/awstats.nix ./services/logging/awstats.nix
./services/logging/fluentd.nix ./services/logging/fluentd.nix
./services/logging/graylog.nix ./services/logging/graylog.nix
./services/logging/heartbeat.nix
./services/logging/journalbeat.nix ./services/logging/journalbeat.nix
./services/logging/journalwatch.nix
./services/logging/klogd.nix ./services/logging/klogd.nix
./services/logging/logcheck.nix ./services/logging/logcheck.nix
./services/logging/logrotate.nix ./services/logging/logrotate.nix
./services/logging/logstash.nix ./services/logging/logstash.nix
./services/logging/rsyslogd.nix ./services/logging/rsyslogd.nix
./services/logging/SystemdJournal2Gelf.nix
./services/logging/syslog-ng.nix ./services/logging/syslog-ng.nix
./services/logging/syslogd.nix ./services/logging/syslogd.nix
./services/mail/dovecot.nix ./services/mail/dovecot.nix
@ -252,6 +256,7 @@
./services/mail/exim.nix ./services/mail/exim.nix
./services/mail/freepops.nix ./services/mail/freepops.nix
./services/mail/mail.nix ./services/mail/mail.nix
./services/mail/mailhog.nix
./services/mail/mlmmj.nix ./services/mail/mlmmj.nix
./services/mail/offlineimap.nix ./services/mail/offlineimap.nix
./services/mail/opendkim.nix ./services/mail/opendkim.nix
@ -282,6 +287,7 @@
./services/misc/emby.nix ./services/misc/emby.nix
./services/misc/errbot.nix ./services/misc/errbot.nix
./services/misc/etcd.nix ./services/misc/etcd.nix
./services/misc/exhibitor.nix
./services/misc/felix.nix ./services/misc/felix.nix
./services/misc/folding-at-home.nix ./services/misc/folding-at-home.nix
./services/misc/fstrim.nix ./services/misc/fstrim.nix
@ -318,10 +324,10 @@
./services/misc/radarr.nix ./services/misc/radarr.nix
./services/misc/redmine.nix ./services/misc/redmine.nix
./services/misc/rippled.nix ./services/misc/rippled.nix
./services/misc/ripple-rest.nix
./services/misc/ripple-data-api.nix ./services/misc/ripple-data-api.nix
./services/misc/rogue.nix ./services/misc/rogue.nix
./services/misc/siproxd.nix ./services/misc/siproxd.nix
./services/misc/snapper.nix
./services/misc/sonarr.nix ./services/misc/sonarr.nix
./services/misc/spice-vdagentd.nix ./services/misc/spice-vdagentd.nix
./services/misc/ssm-agent.nix ./services/misc/ssm-agent.nix
@ -349,6 +355,7 @@
./services/monitoring/munin.nix ./services/monitoring/munin.nix
./services/monitoring/nagios.nix ./services/monitoring/nagios.nix
./services/monitoring/netdata.nix ./services/monitoring/netdata.nix
./services/monitoring/osquery.nix
./services/monitoring/prometheus/default.nix ./services/monitoring/prometheus/default.nix
./services/monitoring/prometheus/alertmanager.nix ./services/monitoring/prometheus/alertmanager.nix
./services/monitoring/prometheus/blackbox-exporter.nix ./services/monitoring/prometheus/blackbox-exporter.nix
@ -509,7 +516,6 @@
./services/networking/teamspeak3.nix ./services/networking/teamspeak3.nix
./services/networking/tinc.nix ./services/networking/tinc.nix
./services/networking/tftpd.nix ./services/networking/tftpd.nix
./services/networking/tlsdated.nix
./services/networking/tox-bootstrapd.nix ./services/networking/tox-bootstrapd.nix
./services/networking/toxvpn.nix ./services/networking/toxvpn.nix
./services/networking/tvheadend.nix ./services/networking/tvheadend.nix
@ -554,12 +560,14 @@
./services/security/tor.nix ./services/security/tor.nix
./services/security/torify.nix ./services/security/torify.nix
./services/security/torsocks.nix ./services/security/torsocks.nix
./services/security/vault.nix
./services/system/cgmanager.nix ./services/system/cgmanager.nix
./services/system/cloud-init.nix ./services/system/cloud-init.nix
./services/system/dbus.nix ./services/system/dbus.nix
./services/system/earlyoom.nix ./services/system/earlyoom.nix
./services/system/kerberos.nix ./services/system/kerberos.nix
./services/system/nscd.nix ./services/system/nscd.nix
./services/system/saslauthd.nix
./services/system/uptimed.nix ./services/system/uptimed.nix
./services/torrent/deluge.nix ./services/torrent/deluge.nix
./services/torrent/flexget.nix ./services/torrent/flexget.nix
@ -575,6 +583,7 @@
./services/web-apps/frab.nix ./services/web-apps/frab.nix
./services/web-apps/mattermost.nix ./services/web-apps/mattermost.nix
./services/web-apps/nixbot.nix ./services/web-apps/nixbot.nix
./services/web-apps/piwik.nix
./services/web-apps/pump.io.nix ./services/web-apps/pump.io.nix
./services/web-apps/tt-rss.nix ./services/web-apps/tt-rss.nix
./services/web-apps/selfoss.nix ./services/web-apps/selfoss.nix
@ -584,9 +593,11 @@
./services/web-servers/fcgiwrap.nix ./services/web-servers/fcgiwrap.nix
./services/web-servers/jboss/default.nix ./services/web-servers/jboss/default.nix
./services/web-servers/lighttpd/cgit.nix ./services/web-servers/lighttpd/cgit.nix
./services/web-servers/lighttpd/collectd.nix
./services/web-servers/lighttpd/default.nix ./services/web-servers/lighttpd/default.nix
./services/web-servers/lighttpd/gitweb.nix ./services/web-servers/lighttpd/gitweb.nix
./services/web-servers/lighttpd/inginious.nix ./services/web-servers/lighttpd/inginious.nix
./services/web-servers/minio.nix
./services/web-servers/nginx/default.nix ./services/web-servers/nginx/default.nix
./services/web-servers/phpfpm/default.nix ./services/web-servers/phpfpm/default.nix
./services/web-servers/shellinabox.nix ./services/web-servers/shellinabox.nix

View File

@ -41,6 +41,9 @@
# Virtio (QEMU, KVM etc.) support. # Virtio (QEMU, KVM etc.) support.
"virtio_net" "virtio_pci" "virtio_blk" "virtio_scsi" "virtio_balloon" "virtio_console" "virtio_net" "virtio_pci" "virtio_blk" "virtio_scsi" "virtio_balloon" "virtio_console"
# VMware support.
"mptspi" "vmw_balloon" "vmwgfx" "vmw_vmci" "vmw_vsock_vmci_transport" "vmxnet3" "vsock"
# Hyper-V support. # Hyper-V support.
"hv_storvsc" "hv_storvsc"

View File

@ -55,8 +55,14 @@ with lib;
# same privileges as it would have inside it. This is particularly # same privileges as it would have inside it. This is particularly
# bad in the common case of running as root within the namespace. # bad in the common case of running as root within the namespace.
# #
# Setting the number of allowed userns to 0 effectively disables # Setting the number of allowed user namespaces to 0 effectively disables
# the feature at runtime. Attempting to create a user namespace # the feature at runtime. Attempting to create a user namespace
# with unshare will then fail with "no space left on device". # with unshare will then fail with "no space left on device".
boot.kernel.sysctl."user.max_user_namespaces" = mkDefault 0; boot.kernel.sysctl."user.max_user_namespaces" = mkDefault 0;
# Raise ASLR entropy for 64bit & 32bit, respectively.
#
# Note: mmap_rnd_compat_bits may not exist on 64bit.
boot.kernel.sysctl."vm.mmap_rnd_bits" = mkDefault 32;
boot.kernel.sysctl."vm.mmap_rnd_compat_bits" = mkDefault 16;
} }
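
Because both sysctls (and the user-namespace knob above them) are set with `mkDefault`, a host configuration can still override them with a plain assignment, no `mkForce` needed. A short sketch with illustrative values:

```nix
# configuration.nix fragment
{
  # the profile above uses mkDefault, so a normal assignment here wins
  boot.kernel.sysctl."vm.mmap_rnd_bits" = 28;

  # user namespaces can be re-enabled the same way, e.g. for containers
  boot.kernel.sysctl."user.max_user_namespaces" = 10000;
}
```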

View File

@ -6,21 +6,17 @@ with lib;
###### interface ###### interface
options = { options = {
programs.browserpass = { programs.browserpass.enable = mkEnableOption "the NativeMessaging configuration for Chromium, Chrome, and Vivaldi.";
enable = mkOption {
default = false;
type = types.bool;
description = ''
Whether to install the NativeMessaging configuration for installed browsers.
'';
};
};
}; };
###### implementation ###### implementation
config = mkIf config.programs.browserpass.enable { config = mkIf config.programs.browserpass.enable {
environment.systemPackages = [ pkgs.browserpass ]; environment.systemPackages = [ pkgs.browserpass ];
environment.etc."chromium/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json"; environment.etc = {
environment.etc."opt/chrome/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json"; "chromium/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json";
"chromium/policies/managed/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-policy.json";
"opt/chrome/native-messaging-hosts/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-host.json";
"opt/chrome/policies/managed/com.dannyvankooten.browserpass.json".source = "${pkgs.browserpass}/etc/chrome-policy.json";
};
}; };
} }

View File

@ -34,7 +34,6 @@ in
{ PATH = [ "/bin" ]; { PATH = [ "/bin" ];
INFOPATH = [ "/info" "/share/info" ]; INFOPATH = [ "/info" "/share/info" ];
PKG_CONFIG_PATH = [ "/lib/pkgconfig" ]; PKG_CONFIG_PATH = [ "/lib/pkgconfig" ];
TERMINFO_DIRS = [ "/share/terminfo" ];
PERL5LIB = [ "/lib/perl5/site_perl" ]; PERL5LIB = [ "/lib/perl5/site_perl" ];
KDEDIRS = [ "" ]; KDEDIRS = [ "" ];
STRIGI_PLUGIN_PATH = [ "/lib/strigi/" ]; STRIGI_PLUGIN_PATH = [ "/lib/strigi/" ];
@ -50,9 +49,6 @@ in
environment.extraInit = environment.extraInit =
'' ''
# reset TERM with new TERMINFO available (if any)
export TERM=$TERM
unset ASPELL_CONF unset ASPELL_CONF
for i in ${concatStringsSep " " (reverseList cfg.profiles)} ; do for i in ${concatStringsSep " " (reverseList cfg.profiles)} ; do
if [ -d "$i/lib/aspell" ]; then if [ -d "$i/lib/aspell" ]; then

View File

@ -55,79 +55,24 @@ in
}; };
config = mkIf cfg.agent.enable { config = mkIf cfg.agent.enable {
systemd.user.services.gpg-agent = {
serviceConfig = {
ExecStart = [
""
("${pkgs.gnupg}/bin/gpg-agent --supervised "
+ optionalString cfg.agent.enableSSHSupport "--enable-ssh-support")
];
ExecReload = "${pkgs.gnupg}/bin/gpgconf --reload gpg-agent";
};
};
systemd.user.sockets.gpg-agent = { systemd.user.sockets.gpg-agent = {
wantedBy = [ "sockets.target" ]; wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent" ];
socketConfig = {
FileDescriptorName = "std";
SocketMode = "0600";
DirectoryMode = "0700";
};
}; };
systemd.user.sockets.gpg-agent-ssh = mkIf cfg.agent.enableSSHSupport { systemd.user.sockets.gpg-agent-ssh = mkIf cfg.agent.enableSSHSupport {
wantedBy = [ "sockets.target" ]; wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.ssh" ];
socketConfig = {
FileDescriptorName = "ssh";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
}; };
systemd.user.sockets.gpg-agent-extra = mkIf cfg.agent.enableExtraSocket { systemd.user.sockets.gpg-agent-extra = mkIf cfg.agent.enableExtraSocket {
wantedBy = [ "sockets.target" ]; wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.extra" ];
socketConfig = {
FileDescriptorName = "extra";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
}; };
systemd.user.sockets.gpg-agent-browser = mkIf cfg.agent.enableBrowserSocket { systemd.user.sockets.gpg-agent-browser = mkIf cfg.agent.enableBrowserSocket {
wantedBy = [ "sockets.target" ]; wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.gpg-agent.browser" ];
socketConfig = {
FileDescriptorName = "browser";
Service = "gpg-agent.service";
SocketMode = "0600";
DirectoryMode = "0700";
};
}; };
systemd.user.services.dirmngr = { systemd.user.sockets.dirmngr = mkIf cfg.dirmngr.enable {
requires = [ "dirmngr.socket" ];
after = [ "dirmngr.socket" ];
unitConfig = {
RefuseManualStart = "true";
};
serviceConfig = {
ExecStart = "${pkgs.gnupg}/bin/dirmngr --supervised";
ExecReload = "${pkgs.gnupg}/bin/gpgconf --reload dirmngr";
};
};
systemd.user.sockets.dirmngr = {
wantedBy = [ "sockets.target" ]; wantedBy = [ "sockets.target" ];
listenStreams = [ "%t/gnupg/S.dirmngr" ];
socketConfig = {
SocketMode = "0600";
DirectoryMode = "0700";
};
}; };
systemd.packages = [ pkgs.gnupg ]; systemd.packages = [ pkgs.gnupg ];
@ -147,7 +92,7 @@ in
''); '');
assertions = [ assertions = [
{ assertion = cfg.agent.enableSSHSupport && !config.programs.ssh.startAgent; { assertion = cfg.agent.enableSSHSupport -> !config.programs.ssh.startAgent;
message = "You can't use ssh-agent and GnuPG agent with SSH support enabled at the same time!"; message = "You can't use ssh-agent and GnuPG agent with SSH support enabled at the same time!";
} }
]; ];
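A minimal sketch of a configuration that exercises the relaxed assertion above; the `programs.gnupg.agent` option path is assumed from `cfg.agent.*` in the module and is not shown in this hunk:

```
{
  # GnuPG agent with SSH support; the assertion above only fires if
  # programs.ssh.startAgent is enabled at the same time.
  programs.gnupg.agent = {        # option prefix assumed
    enable = true;
    enableSSHSupport = true;
  };
  programs.ssh.startAgent = false;
}
```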

View File

@ -0,0 +1,37 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.nylas-mail;
defaultUser = "nylas-mail";
in {
###### interface
options = {
services.nylas-mail = {
enable = mkEnableOption ''
nylas-mail - Open-source mail client built on the modern web with Electron, React, and Flux
'';
gnome3-keyring = mkOption {
type = types.bool;
default = true;
description = "Enable gnome3 keyring for nylas-mail.";
};
};
};
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.nylas-mail-bin ];
services.gnome3.gnome-keyring = mkIf cfg.gnome3-keyring {
enable = true;
};
};
}
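A minimal sketch of enabling the new module above; the option names come from the hunk, the values are illustrative:

```
{
  services.nylas-mail = {
    enable = true;
    # gnome3-keyring defaults to true and pulls in services.gnome3.gnome-keyring.
    gnome3-keyring = true;
  };
}
```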

View File

@ -26,6 +26,6 @@ with lib;
###### implementation ###### implementation
config = mkIf config.programs.qt5ct.enable { config = mkIf config.programs.qt5ct.enable {
environment.variables.QT_QPA_PLATFORMTHEME = "qt5ct"; environment.variables.QT_QPA_PLATFORMTHEME = "qt5ct";
environment.systemPackages = [ pkgs.qt5ct ]; environment.systemPackages = with pkgs; [ qt5ct libsForQt5.qtstyleplugins ];
}; };
} }

View File

@ -3,7 +3,12 @@
with lib; with lib;
let let
cfg = config.programs.thefuck; prg = config.programs;
cfg = prg.thefuck;
initScript = ''
eval $(${pkgs.thefuck}/bin/thefuck --alias ${cfg.alias})
'';
in in
{ {
options = { options = {
@ -24,8 +29,11 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ thefuck ]; environment.systemPackages = with pkgs; [ thefuck ];
environment.shellInit = '' environment.shellInit = initScript;
eval $(${pkgs.thefuck}/bin/thefuck --alias ${cfg.alias})
programs.zsh.shellInit = mkIf prg.zsh.enable initScript;
programs.fish.shellInit = mkIf prg.fish.enable ''
${pkgs.thefuck}/bin/thefuck --alias | source
''; '';
}; };
} }
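A minimal sketch of the reworked module above in use; `cfg.alias` is referenced by the init script, so an `alias` option is assumed to be defined outside this hunk:

```
{
  programs.thefuck = {
    enable = true;
    # alias = "fuck";   # assumed option; referenced as cfg.alias but not defined in this hunk
  };
}
```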

View File

@ -1,71 +0,0 @@
# Global configuration for wvdial.
{ config, lib, pkgs, ... }:
with lib;
let
configFile = ''
[Dialer Defaults]
PPPD PATH = ${pkgs.ppp}/sbin/pppd
${config.environment.wvdial.dialerDefaults}
'';
cfg = config.environment.wvdial;
in
{
###### interface
options = {
environment.wvdial = {
dialerDefaults = mkOption {
default = "";
type = types.str;
example = ''Init1 = AT+CGDCONT=1,"IP","internet.t-mobile"'';
description = ''
Contents of the "Dialer Defaults" section of
<filename>/etc/wvdial.conf</filename>.
'';
};
pppDefaults = mkOption {
default = ''
noipdefault
usepeerdns
defaultroute
persist
noauth
'';
type = types.str;
description = "Default ppp settings for wvdial.";
};
};
};
###### implementation
config = mkIf (cfg.dialerDefaults != "") {
environment = {
etc =
[
{ source = pkgs.writeText "wvdial.conf" configFile;
target = "wvdial.conf";
}
{ source = pkgs.writeText "wvdial" cfg.pppDefaults;
target = "ppp/peers/wvdial";
}
];
};
};
}

View File

@ -15,6 +15,16 @@ in
''; '';
}; };
package = mkOption {
default = pkgs.oh-my-zsh;
defaultText = "pkgs.oh-my-zsh";
description = ''
Package to install for `oh-my-zsh` usage.
'';
type = types.package;
};
plugins = mkOption { plugins = mkOption {
default = []; default = [];
type = types.listOf(types.str); type = types.listOf(types.str);
@ -42,11 +52,15 @@ in
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ oh-my-zsh ];
programs.zsh.interactiveShellInit = with pkgs; with builtins; '' # Prevent zsh from overwriting oh-my-zsh's prompt
programs.zsh.promptInit = mkDefault "";
environment.systemPackages = [ cfg.package ];
programs.zsh.interactiveShellInit = with builtins; ''
# oh-my-zsh configuration generated by NixOS # oh-my-zsh configuration generated by NixOS
export ZSH=${oh-my-zsh}/share/oh-my-zsh export ZSH=${cfg.package}/share/oh-my-zsh
${optionalString (length(cfg.plugins) > 0) ${optionalString (length(cfg.plugins) > 0)
"plugins=(${concatStringsSep " " cfg.plugins})" "plugins=(${concatStringsSep " " cfg.plugins})"

View File

@ -97,45 +97,6 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
programs.zsh = {
shellInit = ''
. ${config.system.build.setEnvironment}
${cfge.shellInit}
'';
loginShellInit = cfge.loginShellInit;
interactiveShellInit = ''
# history defaults
SAVEHIST=2000
HISTSIZE=2000
HISTFILE=$HOME/.zsh_history
setopt HIST_IGNORE_DUPS SHARE_HISTORY HIST_FCNTL_LOCK
# Tell zsh how to find installed completions
for p in ''${(z)NIX_PROFILES}; do
fpath+=($p/share/zsh/site-functions $p/share/zsh/$ZSH_VERSION/functions)
done
${if cfg.enableCompletion then "autoload -U compinit && compinit" else ""}
${optionalString (cfg.enableAutosuggestions)
"source ${pkgs.zsh-autosuggestions}/share/zsh-autosuggestions/zsh-autosuggestions.zsh"
}
${zshAliases}
${cfg.promptInit}
${cfge.interactiveShellInit}
HELPDIR="${pkgs.zsh}/share/zsh/$ZSH_VERSION/help"
'';
};
environment.etc."zshenv".text = environment.etc."zshenv".text =
'' ''
# /etc/zshenv: DO NOT EDIT -- this file has been generated automatically. # /etc/zshenv: DO NOT EDIT -- this file has been generated automatically.
@ -146,6 +107,10 @@ in
if [ -n "$__ETC_ZSHENV_SOURCED" ]; then return; fi if [ -n "$__ETC_ZSHENV_SOURCED" ]; then return; fi
export __ETC_ZSHENV_SOURCED=1 export __ETC_ZSHENV_SOURCED=1
. ${config.system.build.setEnvironment}
${cfge.shellInit}
${cfg.shellInit} ${cfg.shellInit}
# Read system-wide modifications. # Read system-wide modifications.
@ -163,6 +128,8 @@ in
if [ -n "$__ETC_ZPROFILE_SOURCED" ]; then return; fi if [ -n "$__ETC_ZPROFILE_SOURCED" ]; then return; fi
__ETC_ZPROFILE_SOURCED=1 __ETC_ZPROFILE_SOURCED=1
${cfge.loginShellInit}
${cfg.loginShellInit} ${cfg.loginShellInit}
# Read system-wide modifications. # Read system-wide modifications.
@ -182,8 +149,34 @@ in
. /etc/zinputrc . /etc/zinputrc
# history defaults
SAVEHIST=2000
HISTSIZE=2000
HISTFILE=$HOME/.zsh_history
setopt HIST_IGNORE_DUPS SHARE_HISTORY HIST_FCNTL_LOCK
HELPDIR="${pkgs.zsh}/share/zsh/$ZSH_VERSION/help"
${optionalString cfg.enableCompletion "autoload -U compinit && compinit"}
${optionalString (cfg.enableAutosuggestions)
"source ${pkgs.zsh-autosuggestions}/share/zsh-autosuggestions/zsh-autosuggestions.zsh"
}
${zshAliases}
${cfge.interactiveShellInit}
${cfg.interactiveShellInit} ${cfg.interactiveShellInit}
${cfg.promptInit}
# Tell zsh how to find installed completions
for p in ''${(z)NIX_PROFILES}; do
fpath+=($p/share/zsh/site-functions $p/share/zsh/$ZSH_VERSION/functions $p/share/zsh/vendor-completions)
done
# Read system-wide modifications. # Read system-wide modifications.
if test -f /etc/zshrc.local; then if test -f /etc/zshrc.local; then
. /etc/zshrc.local . /etc/zshrc.local

View File

@ -204,6 +204,7 @@ with lib;
"Set the option `services.xserver.displayManager.sddm.package' instead.") "Set the option `services.xserver.displayManager.sddm.package' instead.")
(mkRemovedOptionModule [ "fonts" "fontconfig" "forceAutohint" ] "") (mkRemovedOptionModule [ "fonts" "fontconfig" "forceAutohint" ] "")
(mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "") (mkRemovedOptionModule [ "fonts" "fontconfig" "renderMonoTTFAsBitmap" ] "")
(mkRemovedOptionModule [ "boot" "zfs" "enableUnstable" ] "0.7.0 is now the default")
# ZSH # ZSH
(mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ]) (mkRenamedOptionModule [ "programs" "zsh" "enableSyntaxHighlighting" ] [ "programs" "zsh" "syntaxHighlighting" "enable" ])

View File

@ -0,0 +1,27 @@
{ config, lib, pkgs, ... }:
with lib;
{
options.security.auditd.enable = mkEnableOption "the Linux Audit daemon";
config = mkIf config.security.auditd.enable {
systemd.services.auditd = {
description = "Linux Audit daemon";
wantedBy = [ "basic.target" ];
unitConfig = {
ConditionVirtualization = "!container";
ConditionSecurity = [ "audit" ];
DefaultDependencies = false;
};
path = [ pkgs.audit ];
serviceConfig = {
ExecStartPre = "${pkgs.coreutils}/bin/mkdir -p /var/log/audit";
ExecStart = "${pkgs.audit}/bin/auditd -l -n -s nochange";
};
};
};
}
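A minimal sketch of enabling the new `security.auditd` module above; the unit then only starts outside containers and when the kernel exposes audit support, per the `ConditionVirtualization` and `ConditionSecurity` settings:

```
{
  # Runs auditd with logs under /var/log/audit, as created by ExecStartPre above.
  security.auditd.enable = true;
}
```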

View File

@ -66,10 +66,6 @@ in
# Don't edit this file. Set the NixOS options security.sudo.configFile # Don't edit this file. Set the NixOS options security.sudo.configFile
# or security.sudo.extraConfig instead. # or security.sudo.extraConfig instead.
# Environment variables to keep for root and %wheel.
Defaults:root,%wheel env_keep+=TERMINFO_DIRS
Defaults:root,%wheel env_keep+=TERMINFO
# Keep SSH_AUTH_SOCK so that pam_ssh_agent_auth.so can do its magic. # Keep SSH_AUTH_SOCK so that pam_ssh_agent_auth.so can do its magic.
Defaults env_keep+=SSH_AUTH_SOCK Defaults env_keep+=SSH_AUTH_SOCK

View File

@ -171,7 +171,7 @@ in
###### setcap activation script ###### setcap activation script
system.activationScripts.wrappers = system.activationScripts.wrappers =
lib.stringAfter [ "users" ] lib.stringAfter [ "specialfs" "users" ]
'' ''
# Look in the system path and in the default profile for # Look in the system path and in the default profile for
# programs to be wrapped. # programs to be wrapped.

View File

@ -7,6 +7,8 @@ let
inherit (pkgs) alsaUtils; inherit (pkgs) alsaUtils;
pulseaudioEnabled = config.hardware.pulseaudio.enable;
in in
{ {
@ -80,7 +82,7 @@ in
environment.systemPackages = [ alsaUtils ]; environment.systemPackages = [ alsaUtils ];
environment.etc = mkIf (config.sound.extraConfig != "") environment.etc = mkIf (!pulseaudioEnabled && config.sound.extraConfig != "")
[ [
{ source = pkgs.writeText "asound.conf" config.sound.extraConfig; { source = pkgs.writeText "asound.conf" config.sound.extraConfig;
target = "asound.conf"; target = "asound.conf";

View File

@ -12,7 +12,7 @@ let
mpdConf = pkgs.writeText "mpd.conf" '' mpdConf = pkgs.writeText "mpd.conf" ''
music_directory "${cfg.musicDirectory}" music_directory "${cfg.musicDirectory}"
playlist_directory "${cfg.dataDir}/playlists" playlist_directory "${cfg.playlistDirectory}"
db_file "${cfg.dbFile}" db_file "${cfg.dbFile}"
state_file "${cfg.dataDir}/state" state_file "${cfg.dataDir}/state"
sticker_file "${cfg.dataDir}/sticker.sql" sticker_file "${cfg.dataDir}/sticker.sql"
@ -42,14 +42,34 @@ in {
''; '';
}; };
startWhenNeeded = mkOption {
type = types.bool;
default = false;
description = ''
If set, <command>mpd</command> is socket-activated; that
is, instead of having it permanently running as a daemon,
systemd will start it on the first incoming connection.
'';
};
musicDirectory = mkOption { musicDirectory = mkOption {
type = types.path; type = types.path;
default = "${cfg.dataDir}/music"; default = "${cfg.dataDir}/music";
defaultText = ''''${dataDir}/music'';
description = '' description = ''
The directory where mpd reads music from. The directory where mpd reads music from.
''; '';
}; };
playlistDirectory = mkOption {
type = types.path;
default = "${cfg.dataDir}/playlists";
defaultText = ''''${dataDir}/playlists'';
description = ''
The directory where mpd stores playlists.
'';
};
extraConfig = mkOption { extraConfig = mkOption {
type = types.lines; type = types.lines;
default = ""; default = "";
@ -108,6 +128,7 @@ in {
dbFile = mkOption { dbFile = mkOption {
type = types.str; type = types.str;
default = "${cfg.dataDir}/tag_cache"; default = "${cfg.dataDir}/tag_cache";
defaultText = ''''${dataDir}/tag_cache'';
description = '' description = ''
The path to MPD's database. The path to MPD's database.
''; '';
@ -121,16 +142,42 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
systemd.sockets.mpd = mkIf cfg.startWhenNeeded {
description = "Music Player Daemon Socket";
wantedBy = [ "sockets.target" ];
listenStreams = [
"${optionalString (cfg.network.listenAddress != "any") "${cfg.network.listenAddress}:"}${toString cfg.network.port}"
];
socketConfig = {
Backlog = 5;
KeepAlive = true;
PassCredentials = true;
};
};
systemd.services.mpd = { systemd.services.mpd = {
after = [ "network.target" "sound.target" ]; after = [ "network.target" "sound.target" ];
description = "Music Player Daemon"; description = "Music Player Daemon";
wantedBy = [ "multi-user.target" ]; wantedBy = optional (!cfg.startWhenNeeded) "multi-user.target";
preStart = "mkdir -p ${cfg.dataDir} && chown -R ${cfg.user}:${cfg.group} ${cfg.dataDir}"; preStart = ''
mkdir -p "${cfg.dataDir}" && chown -R ${cfg.user}:${cfg.group} "${cfg.dataDir}"
mkdir -p "${cfg.playlistDirectory}" && chown -R ${cfg.user}:${cfg.group} "${cfg.playlistDirectory}"
'';
serviceConfig = { serviceConfig = {
User = "${cfg.user}"; User = "${cfg.user}";
PermissionsStartOnly = true; PermissionsStartOnly = true;
ExecStart = "${pkgs.mpd}/bin/mpd --no-daemon ${mpdConf}"; ExecStart = "${pkgs.mpd}/bin/mpd --no-daemon ${mpdConf}";
Type = "notify";
LimitRTPRIO = 50;
LimitRTTIME = "infinity";
ProtectSystem = true;
NoNewPrivileges = true;
ProtectKernelTunables = true;
ProtectControlGroups = true;
ProtectKernelModules = true;
RestrictAddressFamilies = "AF_INET AF_INET6 AF_UNIX AF_NETLINK";
RestrictNamespaces = true;
}; };
}; };
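A minimal sketch combining the new `startWhenNeeded` and `playlistDirectory` options above; values are illustrative, and the `enable` option is assumed from `cfg.enable`, which is defined outside this hunk:

```
{
  services.mpd = {
    enable = true;                            # assumed; defined outside this hunk
    startWhenNeeded = true;                   # socket-activated instead of always running
    musicDirectory = "/srv/music";
    playlistDirectory = "/var/lib/mpd/playlists";
  };
}
```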

View File

@ -44,7 +44,7 @@ let
cniConfig = pkgs.buildEnv { cniConfig = pkgs.buildEnv {
name = "kubernetes-cni-config"; name = "kubernetes-cni-config";
paths = imap (i: entry: paths = imap1 (i: entry:
pkgs.writeTextDir "${toString (10+i)}-${entry.type}.conf" (builtins.toJSON entry) pkgs.writeTextDir "${toString (10+i)}-${entry.type}.conf" (builtins.toJSON entry)
) cfg.kubelet.cni.config; ) cfg.kubelet.cni.config;
}; };

View File

@ -36,9 +36,9 @@ in
package = mkOption { package = mkOption {
type = types.package; type = types.package;
default = pkgs.slurm-llnl; default = pkgs.slurm;
defaultText = "pkgs.slurm-llnl"; defaultText = "pkgs.slurm";
example = literalExample "pkgs.slurm-llnl-full"; example = literalExample "pkgs.slurm-full";
description = '' description = ''
The package to use for slurm binaries. The package to use for slurm binaries.
''; '';

View File

@ -225,11 +225,7 @@ in {
User = cfg.user; User = cfg.user;
Group = cfg.group; Group = cfg.group;
WorkingDirectory = cfg.home; WorkingDirectory = cfg.home;
Environment = "PYTHONPATH=${cfg.package}/lib/python2.7/site-packages:${pkgs.buildbot-plugins.www}/lib/python2.7/site-packages:${pkgs.buildbot-plugins.waterfall-view}/lib/python2.7/site-packages:${pkgs.buildbot-plugins.console-view}/lib/python2.7/site-packages:${pkgs.python27Packages.future}/lib/python2.7/site-packages:${pkgs.python27Packages.dateutil}/lib/python2.7/site-packages:${pkgs.python27Packages.six}/lib/python2.7/site-packages:${pkgs.python27Packages.sqlalchemy}/lib/python2.7/site-packages:${pkgs.python27Packages.jinja2}/lib/python2.7/site-packages:${pkgs.python27Packages.markupsafe}/lib/python2.7/site-packages:${pkgs.python27Packages.sqlalchemy_migrate}/lib/python2.7/site-packages:${pkgs.python27Packages.tempita}/lib/python2.7/site-packages:${pkgs.python27Packages.decorator}/lib/python2.7/site-packages:${pkgs.python27Packages.sqlparse}/lib/python2.7/site-packages:${pkgs.python27Packages.txaio}/lib/python2.7/site-packages:${pkgs.python27Packages.autobahn}/lib/python2.7/site-packages:${pkgs.python27Packages.pyjwt}/lib/python2.7/site-packages:${pkgs.python27Packages.distro}/lib/python2.7/site-packages:${pkgs.python27Packages.pbr}/lib/python2.7/site-packages:${pkgs.python27Packages.urllib3}/lib/python2.7/site-packages"; ExecStart = "${cfg.package}/bin/buildbot start --nodaemon ${cfg.buildbotDir}";
# NOTE: call twistd directly with stdout logging for systemd
#ExecStart = "${cfg.package}/bin/buildbot start --nodaemon ${cfg.buildbotDir}";
ExecStart = "${pkgs.python27Packages.twisted}/bin/twistd -n -l - -y ${cfg.buildbotDir}/buildbot.tac";
}; };
}; };

View File

@ -4,15 +4,82 @@ with lib;
let let
cfg = config.services.gitlab-runner; cfg = config.services.gitlab-runner;
configFile = pkgs.writeText "config.toml" cfg.configText; configFile =
if (cfg.configFile == null) then
(pkgs.runCommand "config.toml" {
buildInputs = [ pkgs.remarshal ];
} ''
remarshal -if json -of toml \
< ${pkgs.writeText "config.json" (builtins.toJSON cfg.configOptions)} \
> $out
'')
else
cfg.configFile;
hasDocker = config.virtualisation.docker.enable; hasDocker = config.virtualisation.docker.enable;
in in
{ {
options.services.gitlab-runner = { options.services.gitlab-runner = {
enable = mkEnableOption "Gitlab Runner"; enable = mkEnableOption "Gitlab Runner";
configText = mkOption { configFile = mkOption {
description = "Verbatim config.toml to use"; default = null;
description = ''
Configuration file for gitlab-runner.
Use this option instead of <option>configOptions</option> to avoid placing CI tokens in the Nix store.
<option>configFile</option> takes precedence over <option>configOptions</option>.
Warning: Not using <option>configFile</option> will potentially result in secrets
leaking into the WORLD-READABLE nix store.
'';
type = types.nullOr types.path;
};
configOptions = mkOption {
description = ''
Configuration for gitlab-runner
<option>configFile</option> will take precedence over this option.
Warning: all configuration, including the CI token, will be stored in a
WORLD-READABLE file in the Nix store.
If you want to protect your CI token, use <option>configFile</option> instead.
'';
type = types.attrs;
example = {
concurrent = 2;
runners = [{
name = "docker-nix-1.11";
url = "https://CI/";
token = "TOKEN";
executor = "docker";
builds_dir = "";
docker = {
host = "";
image = "nixos/nix:1.11";
privileged = true;
disable_cache = true;
cache_dir = "";
};
}];
};
};
gracefulTermination = mkOption {
default = false;
type = types.bool;
description = ''
Finish all remaining jobs before stopping, restarting or reconfiguring.
If not set, gitlab-runner will stop immediately without waiting for jobs to finish,
which will lead to failed builds.
'';
};
gracefulTimeout = mkOption {
default = "infinity";
type = types.str;
example = "5min 20s";
description = ''Time to wait until a graceful shutdown is turned into a forceful one.'';
}; };
workDir = mkOption { workDir = mkOption {
@ -45,6 +112,11 @@ in
--service gitlab-runner \ --service gitlab-runner \
--user gitlab-runner \ --user gitlab-runner \
''; '';
} // optionalAttrs (cfg.gracefulTermination) {
TimeoutStopSec = "${cfg.gracefulTimeout}";
KillSignal = "SIGQUIT";
KillMode = "process";
}; };
}; };
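A minimal sketch of the new options above, using `configFile` to keep the CI token out of the world-readable Nix store; the file path is illustrative:

```
{
  services.gitlab-runner = {
    enable = true;
    # Written out-of-band (e.g. by `gitlab-runner register`), never copied to the store.
    configFile = "/var/lib/gitlab-runner/config.toml";
    gracefulTermination = true;
    gracefulTimeout = "30min";
  };
}
```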

View File

@ -308,6 +308,7 @@ in
requires = [ "hydra-init.service" ]; requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" ]; after = [ "hydra-init.service" ];
environment = serverEnv; environment = serverEnv;
restartTriggers = [ hydraConf ];
serviceConfig = serviceConfig =
{ ExecStart = { ExecStart =
"@${cfg.package}/bin/hydra-server hydra-server -f -h '${cfg.listenHost}' " "@${cfg.package}/bin/hydra-server hydra-server -f -h '${cfg.listenHost}' "
@ -324,6 +325,7 @@ in
requires = [ "hydra-init.service" ]; requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" "network.target" ]; after = [ "hydra-init.service" "network.target" ];
path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ]; path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ];
restartTriggers = [ hydraConf ];
environment = env // { environment = env // {
PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr
IN_SYSTEMD = "1"; # to get log severity levels IN_SYSTEMD = "1"; # to get log severity levels
@ -344,7 +346,8 @@ in
{ wantedBy = [ "multi-user.target" ]; { wantedBy = [ "multi-user.target" ];
requires = [ "hydra-init.service" ]; requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" "network.target" ]; after = [ "hydra-init.service" "network.target" ];
path = [ cfg.package pkgs.nettools ]; path = with pkgs; [ cfg.package nettools jq ];
restartTriggers = [ hydraConf ];
environment = env; environment = env;
serviceConfig = serviceConfig =
{ ExecStart = "@${cfg.package}/bin/hydra-evaluator hydra-evaluator"; { ExecStart = "@${cfg.package}/bin/hydra-evaluator hydra-evaluator";

View File

@ -68,9 +68,9 @@ let
collectd = [{ collectd = [{
enabled = false; enabled = false;
typesdb = "${pkgs.collectd}/share/collectd/types.db"; typesdb = "${pkgs.collectd-data}/share/collectd/types.db";
database = "collectd_db"; database = "collectd_db";
port = 25826; bind-address = ":25826";
}]; }];
opentsdb = [{ opentsdb = [{
@ -149,7 +149,6 @@ in
type = types.attrs; type = types.attrs;
}; };
}; };
}; };

View File

@ -108,7 +108,7 @@ in
after = [ "network.target" ]; after = [ "network.target" ];
serviceConfig = { serviceConfig = {
ExecStart = "${mongodb}/bin/mongod --quiet --config ${mongoCnf} --fork --pidfilepath ${cfg.pidFile}"; ExecStart = "${mongodb}/bin/mongod --config ${mongoCnf} --fork --pidfilepath ${cfg.pidFile}";
User = cfg.user; User = cfg.user;
PIDFile = cfg.pidFile; PIDFile = cfg.pidFile;
Type = "forking"; Type = "forking";

View File

@ -20,6 +20,7 @@ let
'' ''
[mysqld] [mysqld]
port = ${toString cfg.port} port = ${toString cfg.port}
${optionalString (cfg.bind != null) "bind-address = ${cfg.bind}" }
${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "log-bin=mysql-bin"} ${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "log-bin=mysql-bin"}
${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "server-id = ${toString cfg.replication.serverId}"} ${optionalString (cfg.replication.role == "master" || cfg.replication.role == "slave") "server-id = ${toString cfg.replication.serverId}"}
${optionalString (cfg.replication.role == "slave" && !atLeast55) ${optionalString (cfg.replication.role == "slave" && !atLeast55)
@ -58,6 +59,13 @@ in
"; ";
}; };
bind = mkOption {
type = types.nullOr types.str;
default = null;
example = literalExample "0.0.0.0";
description = "Address to bind to. The default it to bind to all addresses";
};
port = mkOption { port = mkOption {
type = types.int; type = types.int;
default = 3306; default = 3306;
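A minimal sketch of the new `bind` option above; the `enable` option is assumed from the rest of the module:

```
{
  services.mysql = {
    enable = true;          # assumed; defined outside this hunk
    bind = "127.0.0.1";     # only listen on the loopback interface
  };
}
```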

View File

@ -0,0 +1,205 @@
{ lib, pkgs, config, ... } :
with lib;
let
cfg = config.services.postage;
confFile = pkgs.writeTextFile {
name = "postage.conf";
text = ''
connection_file = ${postageConnectionsFile}
allow_custom_connections = ${builtins.toJSON cfg.allowCustomConnections}
postage_port = ${toString cfg.port}
super_only = ${builtins.toJSON cfg.superOnly}
${optionalString (!isNull cfg.loginGroup) "login_group = ${cfg.loginGroup}"}
login_timeout = ${toString cfg.loginTimeout}
web_root = ${cfg.package}/etc/postage/web_root
data_root = ${cfg.dataRoot}
${optionalString (!isNull cfg.tls) ''
tls_cert = ${cfg.tls.cert}
tls_key = ${cfg.tls.key}
''}
log_level = ${cfg.logLevel}
'';
};
postageConnectionsFile = pkgs.writeTextFile {
name = "postage-connections.conf";
text = concatStringsSep "\n"
(mapAttrsToList (name : conn : "${name}: ${conn}") cfg.connections);
};
postage = "postage";
in {
options.services.postage = {
enable = mkEnableOption "PostgreSQL Administration for the web";
package = mkOption {
type = types.package;
default = pkgs.postage;
defaultText = "pkgs.postage";
description = ''
The postage package to use.
'';
};
connections = mkOption {
type = types.attrsOf types.str;
default = {};
example = {
"nuc-server" = "hostaddr=192.168.0.100 port=5432 dbname=postgres";
"mini-server" = "hostaddr=127.0.0.1 port=5432 dbname=postgres sslmode=require";
};
description = ''
Postage requires at least one PostgreSQL server to be defined.
</para><para>
Detailed information about PostgreSQL connection strings is available at:
<link xlink:href="http://www.postgresql.org/docs/current/static/libpq-connect.html"/>
</para><para>
Note that you should not specify your user name or password. That
information will be entered on the login screen. If you specify a
username or password, it will be removed by Postage before attempting to
connect to a database.
'';
};
allowCustomConnections = mkOption {
type = types.bool;
default = false;
description = ''
This tells Postage whether or not to allow anyone to use a custom
connection from the login screen.
'';
};
port = mkOption {
type = types.int;
default = 8080;
description = ''
This tells Postage what port to listen on for browser requests.
'';
};
localOnly = mkOption {
type = types.bool;
default = true;
description = ''
This tells Postage whether or not to set the listening socket to local
addresses only.
'';
};
superOnly = mkOption {
type = types.bool;
default = true;
description = ''
This tells Postage whether or not to only allow super users to
login. The recommended value is true and will restrict users who are not
super users from logging in to any PostgreSQL instance through
Postage. Note that a connection will be made to PostgreSQL in order to
test if the user is a superuser.
'';
};
loginGroup = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
This tells Postage to only allow users in a certain PostgreSQL group to
login to Postage. Note that a connection will be made to PostgreSQL in
order to test if the user is a member of the login group.
'';
};
loginTimeout = mkOption {
type = types.int;
default = 3600;
description = ''
Number of seconds of inactivity before user is automatically logged
out.
'';
};
dataRoot = mkOption {
type = types.str;
default = "/var/lib/postage";
description = ''
This tells Postage where to put the SQL file history. All tabs are saved
to this location so that if you get disconnected from Postage you
don't lose your work.
'';
};
tls = mkOption {
type = types.nullOr (types.submodule {
options = {
cert = mkOption {
type = types.str;
description = "TLS certificate";
};
key = mkOption {
type = types.str;
description = "TLS key";
};
};
});
default = null;
description = ''
These options tell Postage where the TLS Certificate and Key files
reside. If you use these options then you'll only be able to access
Postage through a secure TLS connection. These options are only
necessary if you wish to connect directly to Postage using a secure TLS
connection. As an alternative, you can set up Postage in a reverse proxy
configuration. This allows your web server to terminate the secure
connection and pass on the request to Postage. You can find help to set
up this configuration in:
<link xlink:href="https://github.com/workflowproducts/postage/blob/master/INSTALL_NGINX.md"/>
'';
};
logLevel = mkOption {
type = types.enum ["error" "warn" "notice" "info"];
default = "error";
description = ''
Verbosity of logs
'';
};
};
config = mkIf cfg.enable {
systemd.services.postage = {
description = "postage - PostgreSQL Administration for the web";
wants = [ "postgresql.service" ];
after = [ "postgresql.service" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
User = postage;
Group = postage;
ExecStart = "${pkgs.postage}/sbin/postage -c ${confFile}" +
optionalString cfg.localOnly " --local-only=true";
};
};
users = {
users."${postage}" = {
name = postage;
group = postage;
home = cfg.dataRoot;
createHome = true;
};
groups."${postage}" = {
name = postage;
};
};
};
}
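A minimal sketch of the new module above; the connection string and port are illustrative:

```
{
  services.postage = {
    enable = true;
    port = 8080;
    connections = {
      "local" = "hostaddr=127.0.0.1 port=5432 dbname=postgres";
    };
    superOnly = true;
  };
}
```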

View File

@ -0,0 +1,110 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.rethinkdb;
rethinkdb = cfg.package;
in
{
###### interface
options = {
services.rethinkdb = {
enable = mkOption {
default = false;
description = "Whether to enable the RethinkDB server.";
};
#package = mkOption {
# default = pkgs.rethinkdb;
# description = "Which RethinkDB derivation to use.";
#};
user = mkOption {
default = "rethinkdb";
description = "User account under which RethinkDB runs.";
};
group = mkOption {
default = "rethinkdb";
description = "Group which rethinkdb user belongs to.";
};
dbpath = mkOption {
default = "/var/db/rethinkdb";
description = "Location where RethinkDB stores its data, 1 data directory per instance.";
};
pidpath = mkOption {
default = "/var/run/rethinkdb";
description = "Location where each instance's pid file is located.";
};
#cfgpath = mkOption {
# default = "/etc/rethinkdb/instances.d";
# description = "Location where RethinkDB stores it config files, 1 config file per instance.";
#};
# TODO: currently not used by our implementation.
#instances = mkOption {
# type = types.attrsOf types.str;
# default = {};
# description = "List of named RethinkDB instances in our cluster.";
#};
};
};
###### implementation
config = mkIf config.services.rethinkdb.enable {
environment.systemPackages = [ rethinkdb ];
systemd.services.rethinkdb = {
description = "RethinkDB server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
# TODO: abstract away 'default', which is a per-instance directory name
# allowing end user of this nix module to provide multiple instances,
# and associated directory per instance
ExecStart = "${rethinkdb}/bin/rethinkdb -d ${cfg.dbpath}/default";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
User = cfg.user;
Group = cfg.group;
PIDFile = "${cfg.pidpath}/default.pid";
PermissionsStartOnly = true;
};
preStart = ''
if ! test -e ${cfg.dbpath}; then
install -d -m0755 -o ${cfg.user} -g ${cfg.group} ${cfg.dbpath}
install -d -m0755 -o ${cfg.user} -g ${cfg.group} ${cfg.dbpath}/default
chown -R ${cfg.user}:${cfg.group} ${cfg.dbpath}
fi
if ! test -e "${cfg.pidpath}/default.pid"; then
install -D -o ${cfg.user} -g ${cfg.group} /dev/null "${cfg.pidpath}/default.pid"
fi
'';
};
users.extraUsers.rethinkdb = mkIf (cfg.user == "rethinkdb")
{ name = "rethinkdb";
description = "RethinkDB server user";
};
users.extraGroups = optionalAttrs (cfg.group == "rethinkdb") (singleton
{ name = "rethinkdb";
});
};
}
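A minimal sketch of the new module above, sticking to the defaults shown for paths, user, and group:

```
{
  services.rethinkdb = {
    enable = true;
    dbpath = "/var/db/rethinkdb";   # default shown above; one data directory per instance
  };
}
```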

View File

@ -24,7 +24,7 @@
<para> <para>
Emacs runs within a graphical desktop environment using the X Emacs runs within a graphical desktop environment using the X
Window System, but works equally well on a text terminal. Under Window System, but works equally well on a text terminal. Under
<productname>OS X</productname>, a "Mac port" edition is <productname>macOS</productname>, a "Mac port" edition is
available, which uses Apple's native GUI frameworks. available, which uses Apple's native GUI frameworks.
</para> </para>
@ -84,7 +84,7 @@
<listitem> <listitem>
<para> <para>
Emacs 25 with the "Mac port" patches, providing a more Emacs 25 with the "Mac port" patches, providing a more
native look and feel under OS X. native look and feel under macOS.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>

View File

@ -39,7 +39,7 @@ let
admins = []; admins = [];
}; };
serverSettingsFile = pkgs.writeText "server-settings.json" (builtins.toJSON (filterAttrsRecursive (n: v: v != null) serverSettings)); serverSettingsFile = pkgs.writeText "server-settings.json" (builtins.toJSON (filterAttrsRecursive (n: v: v != null) serverSettings));
modDir = pkgs.factorio-mkModDirDrv cfg.mods; modDir = pkgs.factorio-utils.mkModDirDrv cfg.mods;
in in
{ {
options = { options = {

View File

@ -4,6 +4,8 @@ with lib;
let let
cfg = config.services.fluentd; cfg = config.services.fluentd;
pluginArgs = concatStringsSep " " (map (x: "-p ${x}") cfg.plugins);
in { in {
###### interface ###### interface
@ -28,6 +30,15 @@ in {
defaultText = "pkgs.fluentd"; defaultText = "pkgs.fluentd";
description = "The fluentd package to use."; description = "The fluentd package to use.";
}; };
plugins = mkOption {
type = types.listOf types.path;
default = [];
description = ''
A list of plugin paths to pass to fluentd. Plugins defined in Ruby files at these
paths become available for use in your config.
'';
};
}; };
}; };
@ -39,7 +50,7 @@ in {
description = "Fluentd Daemon"; description = "Fluentd Daemon";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
serviceConfig = { serviceConfig = {
ExecStart = "${cfg.package}/bin/fluentd -c ${pkgs.writeText "fluentd.conf" cfg.config}"; ExecStart = "${cfg.package}/bin/fluentd -c ${pkgs.writeText "fluentd.conf" cfg.config} ${pluginArgs}";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID"; ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
}; };
}; };
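A minimal sketch of the new `plugins` option above; the `enable` and `config` options are assumed from the rest of the module, and the plugin path is a hypothetical local file:

```
{
  services.fluentd = {
    enable = true;                              # assumed; defined outside this hunk
    config = ''
      <source>
        @type forward
      </source>
    '';
    plugins = [ ./fluent-plugin-example.rb ];   # hypothetical plugin file
  };
}
```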

View File

@ -11,9 +11,7 @@ let
password_secret = ${cfg.passwordSecret} password_secret = ${cfg.passwordSecret}
root_username = ${cfg.rootUsername} root_username = ${cfg.rootUsername}
root_password_sha2 = ${cfg.rootPasswordSha2} root_password_sha2 = ${cfg.rootPasswordSha2}
elasticsearch_cluster_name = ${cfg.elasticsearchClusterName} elasticsearch_hosts = ${concatStringsSep "," cfg.elasticsearchHosts}
elasticsearch_discovery_zen_ping_multicast_enabled = ${boolToString cfg.elasticsearchDiscoveryZenPingMulticastEnabled}
elasticsearch_discovery_zen_ping_unicast_hosts = ${cfg.elasticsearchDiscoveryZenPingUnicastHosts}
message_journal_dir = ${cfg.messageJournalDir} message_journal_dir = ${cfg.messageJournalDir}
mongodb_uri = ${cfg.mongodbUri} mongodb_uri = ${cfg.mongodbUri}
plugin_dir = /var/lib/graylog/plugins plugin_dir = /var/lib/graylog/plugins
@ -91,22 +89,10 @@ in
''; '';
}; };
elasticsearchClusterName = mkOption { elasticsearchHosts = mkOption {
type = types.str; type = types.listOf types.str;
example = "graylog"; example = literalExample ''[ "http://node1:9200" "http://user:password@node2:19200" ]'';
description = "This must be the same as for your Elasticsearch cluster"; description = "List of valid URIs of the http ports of your elastic nodes. If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that requires authentication";
};
elasticsearchDiscoveryZenPingMulticastEnabled = mkOption {
type = types.bool;
default = false;
description = "Whether to use elasticsearch multicast discovery";
};
elasticsearchDiscoveryZenPingUnicastHosts = mkOption {
type = types.str;
default = "127.0.0.1:9300";
description = "Tells Graylogs Elasticsearch client how to find other cluster members. See Elasticsearch documentation for details";
}; };
messageJournalDir = mkOption { messageJournalDir = mkOption {

View File

@ -0,0 +1,72 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.heartbeat;
heartbeatYml = pkgs.writeText "heartbeat.yml" ''
name: ${cfg.name}
tags: ${builtins.toJSON cfg.tags}
${cfg.extraConfig}
'';
in
{
options = {
services.heartbeat = {
enable = mkEnableOption "heartbeat";
name = mkOption {
type = types.str;
default = "heartbeat";
description = "Name of the beat";
};
tags = mkOption {
type = types.listOf types.str;
default = [];
description = "Tags to place on the shipped log messages";
};
stateDir = mkOption {
type = types.str;
default = "/var/lib/heartbeat";
description = "The state directory. heartbeat's own logs and other data are stored here.";
};
extraConfig = mkOption {
type = types.lines;
default = ''
heartbeat.monitors:
- type: http
urls: ["http://localhost:9200"]
schedule: '@every 10s'
'';
description = "Any other configuration options you want to add";
};
};
};
config = mkIf cfg.enable {
systemd.services.heartbeat = with pkgs; {
description = "heartbeat log shipper";
wantedBy = [ "multi-user.target" ];
preStart = ''
mkdir -p "${cfg.stateDir}"/{data,logs}
chown nobody:nogroup "${cfg.stateDir}"/{data,logs}
'';
serviceConfig = {
User = "nobody";
PermissionsStartOnly = true;
AmbientCapabilities = "cap_net_raw";
ExecStart = "${pkgs.heartbeat}/bin/heartbeat -c \"${heartbeatYml}\" -path.data \"${cfg.stateDir}/data\" -path.logs \"${cfg.stateDir}/logs\"";
};
};
};
}
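A minimal sketch of the new module above, reusing the default monitor shown in `extraConfig`; the tag is illustrative:

```
{
  services.heartbeat = {
    enable = true;
    tags = [ "webserver" ];                     # illustrative
    extraConfig = ''
      heartbeat.monitors:
      - type: http
        urls: ["http://localhost:9200"]
        schedule: '@every 10s'
    '';
  };
}
```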

View File

@ -0,0 +1,246 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.journalwatch;
user = "journalwatch";
dataDir = "/var/lib/${user}";
journalwatchConfig = pkgs.writeText "config" (''
# (File Generated by NixOS journalwatch module.)
[DEFAULT]
mail_binary = ${cfg.mailBinary}
priority = ${toString cfg.priority}
mail_from = ${cfg.mailFrom}
''
+ optionalString (cfg.mailTo != null) ''
mail_to = ${cfg.mailTo}
''
+ cfg.extraConfig);
journalwatchPatterns = pkgs.writeText "patterns" ''
# (File Generated by NixOS journalwatch module.)
${mkPatterns cfg.filterBlocks}
'';
# empty line at the end is needed to separate the blocks
mkPatterns = filterBlocks: concatStringsSep "\n" (map (block: ''
${block.match}
${block.filters}
'') filterBlocks);
in {
options = {
services.journalwatch = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
If enabled, periodically check the journal with journalwatch and report the results by mail.
'';
};
priority = mkOption {
type = types.int;
default = 6;
description = ''
Lowest priority of message to be considered.
A value between 7 ("debug") and 0 ("emerg"). Defaults to 6 ("info").
If you don't care about anything with "info" priority, you can reduce
this to e.g. 5 ("notice") to considerably reduce the amount of
messages without needing many <option>filterBlocks</option>.
'';
};
# HACK: this is a workaround for journalwatch's usage of socket.getfqdn(), which always returns localhost if
# there's an alias for localhost on a separate line in /etc/hosts, or takes ages if it's not present and
# then returns something right-ish in the direction of /etc/hostname. Just bypass it completely.
mailFrom = mkOption {
type = types.str;
default = "journalwatch@${config.networking.hostName}";
description = ''
Mail address to send journalwatch reports from.
'';
};
mailTo = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Mail address to send journalwatch reports to.
'';
};
mailBinary = mkOption {
type = types.path;
default = "/run/wrappers/bin/sendmail";
description = ''
Sendmail-compatible binary to be used to send the messages.
'';
};
extraConfig = mkOption {
type = types.str;
default = "";
description = ''
Extra lines to be added verbatim to the journalwatch/config configuration file.
You can add any commandline argument to the config, without the '--'.
See <literal>journalwatch --help</literal> for all arguments and their description.
'';
};
filterBlocks = mkOption {
type = types.listOf (types.submodule {
options = {
match = mkOption {
type = types.str;
example = "SYSLOG_IDENTIFIER = systemd";
description = ''
Syntax: <literal>field = value</literal>
Specifies the log entry <literal>field</literal> this block should apply to.
If the <literal>field</literal> of a message matches this <literal>value</literal>,
this patternBlock's <option>filters</option> are applied.
If <literal>value</literal> starts and ends with a slash, it is interpreted as
an extended python regular expression, if not, it's an exact match.
The journal fields are explained in systemd.journal-fields(7).
'';
};
filters = mkOption {
type = types.str;
example = ''
(Stopped|Stopping|Starting|Started) .*
(Reached target|Stopped target) .*
'';
description = ''
The filters to apply on all messages which satisfy <option>match</option>.
Any of those messages that match any specified filter will be removed from journalwatch's output.
Each filter is an extended Python regular expression.
You can specify multiple filters and separate them by newlines.
Lines starting with '#' are comments. Inline-comments are not permitted.
'';
};
};
});
example = [
# examples taken from upstream
{
match = "_SYSTEMD_UNIT = systemd-logind.service";
filters = ''
New session [a-z]?\d+ of user \w+\.
Removed session [a-z]?\d+\.
'';
}
{
match = "SYSLOG_IDENTIFIER = /(CROND|crond)/";
filters = ''
pam_unix\(crond:session\): session (opened|closed) for user \w+
\(\w+\) CMD .*
'';
}
];
# another example from upstream.
# very useful on priority = 6, and required as journalwatch throws an error when no pattern is defined at all.
default = [
{
match = "SYSLOG_IDENTIFIER = systemd";
filters = ''
(Stopped|Stopping|Starting|Started) .*
(Created slice|Removed slice) user-\d*\.slice\.
Received SIGRTMIN\+24 from PID .*
(Reached target|Stopped target) .*
Startup finished in \d*ms\.
'';
}
];
description = ''
filterBlocks can be defined to blacklist journal messages which are not errors.
Each block matches on a log entry field, and the filters in that block then are matched
against all messages with a matching log entry field.
All messages whose PRIORITY is at least 6 (INFO) are processed by journalwatch.
If you don't specify any filterBlocks, PRIORITY is reduced to 5 (NOTICE) by default.
All regular expressions are extended Python regular expressions, for details
see: http://doc.pyschools.com/html/regex.html
'';
};
interval = mkOption {
type = types.str;
default = "hourly";
description = ''
How often to run journalwatch.
The format is described in systemd.time(7).
'';
};
accuracy = mkOption {
type = types.str;
default = "10min";
description = ''
The time window around the interval in which the journalwatch run will be scheduled.
The format is described in systemd.time(7).
'';
};
};
};
config = mkIf cfg.enable {
users.extraUsers.${user} = {
isSystemUser = true;
createHome = true;
home = dataDir;
# for journal access
group = "systemd-journal";
};
systemd.services.journalwatch = {
environment = {
XDG_DATA_HOME = "${dataDir}/share";
XDG_CONFIG_HOME = "${dataDir}/config";
};
serviceConfig = {
User = user;
Type = "oneshot";
PermissionsStartOnly = true;
ExecStart = "${pkgs.python3Packages.journalwatch}/bin/journalwatch mail";
# lowest CPU and IO priority, but both still in best-effort class to prevent starvation
Nice = 19;
IOSchedulingPriority = 7;
};
preStart = ''
chown -R ${user}:systemd-journal ${dataDir}
chmod -R u+rwX,go-w ${dataDir}
mkdir -p ${dataDir}/config/journalwatch
ln -sf ${journalwatchConfig} ${dataDir}/config/journalwatch/config
ln -sf ${journalwatchPatterns} ${dataDir}/config/journalwatch/patterns
'';
};
systemd.timers.journalwatch = {
description = "Periodic journalwatch run";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = cfg.interval;
AccuracySec = cfg.accuracy;
Persistent = true;
};
};
};
meta = {
maintainers = with lib.maintainers; [ florianjacob ];
};
}
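A minimal sketch of the new module above; the mail address is illustrative, and the filter block reuses the upstream logind example from the option documentation:

```
{
  services.journalwatch = {
    enable = true;
    mailTo = "root@example.org";                # illustrative address
    interval = "daily";
    filterBlocks = [
      {
        match = "_SYSTEMD_UNIT = systemd-logind.service";
        filters = ''
          New session [a-z]?\d+ of user \w+\.
          Removed session [a-z]?\d+\.
        '';
      }
    ];
  };
}
```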

Some files were not shown because too many files have changed in this diff.