Merge branch 'staging' into #38486

This commit is contained in:
Vladimír Čunát 2018-08-30 18:30:32 +02:00
commit 3f80b81ece
No known key found for this signature in database
GPG Key ID: E747DF1F9575A3AA
8766 changed files with 296774 additions and 178852 deletions

.dir-locals.el Normal file

@ -0,0 +1,8 @@
;;; Directory Local Variables
;;; For more information see (info "(emacs) Directory Variables")
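;;; With bug-reference-mode enabled, references such as "#12345" become links
;;; using `bug-reference-url-format'; the nix-mode entry sets two-space indentation.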
((nil
(bug-reference-bug-regexp . "\\(\\(?:[Ii]ssue \\|[Ff]ixe[ds] \\|[Rr]esolve[ds]? \\|[Cc]lose[ds]? \\|[Pp]\\(?:ull [Rr]equest\\|[Rr]\\) \\|(\\)#\\([0-9]+\\))?\\)")
(bug-reference-url-format . "https://github.com/NixOS/nixpkgs/issues/%s"))
(nix-mode
(tab-width . 2)))


@ -13,8 +13,8 @@ charset = utf-8
# see https://nixos.org/nixpkgs/manual/#chap-conventions
# Match nix/ruby files, set indent to spaces with width of two
[*.{nix,rb}]
# Match nix/ruby/docbook files, set indent to spaces with width of two
[*.{nix,rb,xml}]
indent_style = space
indent_size = 2

.github/CODEOWNERS vendored

@ -14,13 +14,15 @@
/lib @edolstra @nbp
/lib/systems @nbp @ericson2314
/lib/generators.nix @edolstra @nbp @Profpatsch
/lib/debug.nix @edolstra @nbp @Profpatsch
# Nixpkgs Internals
/default.nix @nbp
/pkgs/top-level/default.nix @nbp @Ericson2314
/pkgs/top-level/impure.nix @nbp @Ericson2314
/pkgs/top-level/stage.nix @nbp @Ericson2314
/pkgs/stdenv
/pkgs/stdenv/generic @Ericson2314
/pkgs/stdenv/cross @Ericson2314
/pkgs/build-support/cc-wrapper @Ericson2314 @orivej
/pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/setup-hooks @Ericson2314
@ -44,17 +46,18 @@
/nixos/modules/installer/tools/nixos-option.sh @nbp
# Python-related code and docs
/maintainers/scripts/update-python-libraries @FRidh
/pkgs/top-level/python-packages.nix @FRidh
/pkgs/development/interpreters/python @FRidh
/pkgs/development/python-modules @FRidh
/doc/languages-frameworks/python.md @FRidh
# Haskell
/pkgs/development/compilers/ghc @peti
/pkgs/development/haskell-modules @peti
/pkgs/development/haskell-modules/default.nix @peti
/pkgs/development/haskell-modules/generic-builder.nix @peti
/pkgs/development/haskell-modules/hoogle.nix @peti
/pkgs/development/compilers/ghc @peti @ryantm @basvandijk
/pkgs/development/haskell-modules @peti @ryantm @basvandijk
/pkgs/development/haskell-modules/default.nix @peti @ryantm @basvandijk
/pkgs/development/haskell-modules/generic-builder.nix @peti @ryantm @basvandijk
/pkgs/development/haskell-modules/hoogle.nix @peti @ryantm @basvandijk
# R
/pkgs/applications/science/math/R @peti
@ -64,6 +67,9 @@
/pkgs/development/interpreters/ruby @zimbatm
/pkgs/development/ruby-modules @zimbatm
# Rust
/pkgs/development/compilers/rust @Mic92 @LnL7
# Darwin-related
/pkgs/stdenv/darwin @NixOS/darwin-maintainers
/pkgs/os-specific/darwin @NixOS/darwin-maintainers


@ -43,7 +43,7 @@ See the nixpkgs manual for more details on [standard meta-attributes](https://ni
## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list archives, pull request discussions or upstream changes, it may require a lot of work.
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information can usually be found by digging through code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and such, a one-line commit message is usually sufficient.


@ -5,7 +5,7 @@
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- [ ] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `build-use-sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS)
- [ ] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS)
- Built on platform(s)
- [ ] NixOS
- [ ] macOS
@ -13,6 +13,7 @@
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Determined the impact on package closure size (by running `nix path-info -S` before and after)
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).
---


@ -8,7 +8,7 @@ build daemon as so-called channels. To get channel information via git, add
[nixpkgs-channels](https://github.com/NixOS/nixpkgs-channels.git) as a remote:
```
% git remote add channels git://github.com/NixOS/nixpkgs-channels.git
% git remote add channels https://github.com/NixOS/nixpkgs-channels.git
```
For stability and maximum binary package support, it is recommended to maintain
@ -37,5 +37,5 @@ For pull-requests, please rebase onto nixpkgs `master`.
Communication:
* [Mailing list](https://groups.google.com/forum/#!forum/nix-devel)
* [Discourse Forum](https://discourse.nixos.org/)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)


@ -6,7 +6,10 @@ if ! builtins ? nixVersion || builtins.compareVersions requiredVersion builtins.
This version of Nixpkgs requires Nix >= ${requiredVersion}, please upgrade:
- If you are running NixOS, use `nixos-rebuild' to upgrade your system.
- If you are running NixOS, `nixos-rebuild' can be used to upgrade your system.
- Alternatively, with Nix > 2.0 `nix upgrade-nix' can be used to imperatively
upgrade Nix. You may use `nix-env --version' to check which version you have.
- If you installed Nix using the install script (https://nixos.org/nix/install),
it is safe to upgrade by running it again:


@ -1,12 +1,22 @@
MD_TARGETS=$(addsuffix .xml, $(basename $(wildcard ./*.md ./**/*.md)))
.PHONY: all
all: validate out/html/index.html out/epub/manual.epub
all: validate format out/html/index.html out/epub/manual.epub
.PHONY: debug
debug:
nix-shell --run "xmloscopy --docbook5 ./manual.xml ./manual-full.xml"
.PHONY: format
format:
find . -iname '*.xml' -type f -print0 | xargs -0 -I{} -n1 \
xmlformat --config-file "$$XMLFORMAT_CONFIG" -i {}
.PHONY: fix-misc-xml
fix-misc-xml:
find . -iname '*.xml' -type f \
-exec ../nixos/doc/varlistentry-fixer.rb {} ';'
.PHONY: clean
clean:
rm -f ${MD_TARGETS} .version manual-full.xml
@ -64,7 +74,7 @@ manual-full.xml: ${MD_TARGETS} .version *.xml
.version:
nix-instantiate --eval \
-E '(import ../lib).nixpkgsVersion' > .version
-E '(import ../lib).version' > .version
%.section.xml: %.section.md
pandoc $^ -w docbook+smart \

File diff suppressed because it is too large


@ -1,40 +1,51 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-packageconfig">
<title>Global configuration</title>
<para>Nix comes with certain defaults about what packages can and
cannot be installed, based on a package's metadata. By default, Nix
will prevent installation if any of the following criteria are
true:</para>
<itemizedlist>
<listitem><para>The package is thought to be broken, and has had
its <literal>meta.broken</literal> set to
<literal>true</literal>.</para></listitem>
<listitem><para>The package's <literal>meta.license</literal> is set
to a license which is considered to be unfree.</para></listitem>
<listitem><para>The package has known security vulnerabilities but
has not or can not be updated for some reason, and a list of issues
has been entered in to the package's
<literal>meta.knownVulnerabilities</literal>.</para></listitem>
</itemizedlist>
<para>Note that all this is checked during evaluation already,
and the check includes any package that is evaluated.
In particular, all build-time dependencies are checked.
<literal>nix-env -qa</literal> will (attempt to) hide any packages
that would be refused.
</para>
<para>Each of these criteria can be altered in the nixpkgs
configuration.</para>
<para>The nixpkgs configuration for a NixOS system is set in the
<literal>configuration.nix</literal>, as in the following example:
<title>Global configuration</title>
<para>
Nix comes with certain defaults about what packages can and cannot be
installed, based on a package's metadata. By default, Nix will prevent
installation if any of the following criteria are true:
</para>
<itemizedlist>
<listitem>
<para>
The package is thought to be broken, and has had its
<literal>meta.broken</literal> set to <literal>true</literal>.
</para>
</listitem>
<listitem>
<para>
The package isn't intended to run on the given system, as none of its
<literal>meta.platforms</literal> match the given system.
</para>
</listitem>
<listitem>
<para>
The package's <literal>meta.license</literal> is set to a license which is
considered to be unfree.
</para>
</listitem>
<listitem>
<para>
The package has known security vulnerabilities but has not been or cannot be
updated for some reason, and a list of issues has been entered into the
package's <literal>meta.knownVulnerabilities</literal>.
</para>
</listitem>
</itemizedlist>
<para>
Note that all this is checked during evaluation already, and the check
includes any package that is evaluated. In particular, all build-time
dependencies are checked. <literal>nix-env -qa</literal> will (attempt to)
hide any packages that would be refused.
</para>
<para>
Each of these criteria can be altered in the nixpkgs configuration.
</para>
<para>
The nixpkgs configuration for a NixOS system is set in the
<literal>configuration.nix</literal>, as in the following example:
<programlisting>
{
nixpkgs.config = {
@ -42,112 +53,156 @@ configuration.</para>
};
}
</programlisting>
However, this does not allow unfree software for individual users.
Their configurations are managed separately.</para>
<para>A user's of nixpkgs configuration is stored in a user-specific
configuration file located at
<filename>~/.config/nixpkgs/config.nix</filename>. For example:
However, this does not allow unfree software for individual users. Their
configurations are managed separately.
</para>
<para>
A user's nixpkgs configuration is stored in a user-specific configuration
file located at <filename>~/.config/nixpkgs/config.nix</filename>. For
example:
<programlisting>
{
allowUnfree = true;
}
</programlisting>
</para>
<para>Note that we are not able to test or build unfree software on Hydra
due to policy. Most unfree licenses prohibit us from either executing or
distributing the software.</para>
<section xml:id="sec-allow-broken">
</para>
<para>
Note that we are not able to test or build unfree software on Hydra due to
policy. Most unfree licenses prohibit us from either executing or
distributing the software.
</para>
<section xml:id="sec-allow-broken">
<title>Installing broken packages</title>
<para>There are two ways to try compiling a package which has been
marked as broken.</para>
<para>
There are two ways to try compiling a package which has been marked as
broken.
</para>
<itemizedlist>
<listitem><para>
<listitem>
<para>
For allowing the build of a broken package once, you can use an
environment variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_BROKEN=1</programlisting>
</para></listitem>
<listitem><para>
For permanently allowing broken packages to be built, you may
add <literal>allowBroken = true;</literal> to your user's
configuration file, like this:
<programlisting>$ export NIXPKGS_ALLOW_BROKEN=1</programlisting>
</para>
</listitem>
<listitem>
<para>
For permanently allowing broken packages to be built, you may add
<literal>allowBroken = true;</literal> to your user's configuration file,
like this:
<programlisting>
{
allowBroken = true;
}
</programlisting>
</para></listitem>
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="sec-allow-unsupported-system">
<title>Installing packages on unsupported systems</title>
<section xml:id="sec-allow-unfree">
<title>Installing unfree packages</title>
<para>There are several ways to tweak how Nix handles a package
which has been marked as unfree.</para>
<para>
There are also two ways to try compiling a package which has been marked as
unsupported for the given system.
</para>
<itemizedlist>
<listitem><para>
To temporarily allow all unfree packages, you can use an
<listitem>
<para>
To allow building a package that is unsupported on the given system once, you can use an
environment variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1</programlisting>
</para>
</listitem>
<listitem>
<para>
For permanently allowing such unsupported packages to be built, you may add
<literal>allowUnsupportedSystem = true;</literal> to your user's
configuration file, like this:
<programlisting>
{
allowUnsupportedSystem = true;
}
</programlisting>
</para>
</listitem>
</itemizedlist>
<programlisting>$ export NIXPKGS_ALLOW_UNFREE=1</programlisting>
</para></listitem>
<para>
The difference between a package being unsupported on some system and
being broken is admittedly a bit fuzzy. If a program
<emphasis>ought</emphasis> to work on a certain platform, but doesn't, the
platform should be included in <literal>meta.platforms</literal>, but marked
as broken with e.g. <literal>meta.broken =
!hostPlatform.isWindows</literal>. Of course, this begs the question of what
"ought" means exactly. That is left to the package maintainer.
</para>
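<para>
  As an illustrative sketch only (the package fragment below is hypothetical,
  not taken from Nixpkgs), such a package definition might contain a
  <literal>meta</literal> section like:
<programlisting>
meta = {
  platforms = stdenv.lib.platforms.unix;   # ought to work on these platforms...
  broken = stdenv.hostPlatform.isDarwin;   # ...but is currently known to fail on Darwin
};
</programlisting>
</para>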
</section>
<section xml:id="sec-allow-unfree">
<title>Installing unfree packages</title>
<listitem><para>
It is possible to permanently allow individual unfree packages,
while still blocking unfree packages by default using the
<literal>allowUnfreePredicate</literal> configuration
option in the user configuration file.</para>
<para>
There are several ways to tweak how Nix handles a package which has been
marked as unfree.
</para>
<para>This option is a function which accepts a package as a
parameter, and returns a boolean. The following example
configuration accepts a package and always returns false:
<itemizedlist>
<listitem>
<para>
To temporarily allow all unfree packages, you can use an environment
variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_UNFREE=1</programlisting>
</para>
</listitem>
<listitem>
<para>
It is possible to permanently allow individual unfree packages, while
still blocking unfree packages by default using the
<literal>allowUnfreePredicate</literal> configuration option in the user
configuration file.
</para>
<para>
This option is a function which accepts a package as a parameter, and
returns a boolean. The following example configuration accepts a package
and always returns false:
<programlisting>
{
allowUnfreePredicate = (pkg: false);
}
</programlisting>
</para>
<para>A more useful example, the following configuration allows
only allows flash player and visual studio code:
<para>
As a more useful example, the following configuration only allows
flash player and visual studio code:
<programlisting>
{
allowUnfreePredicate = (pkg: elem (builtins.parseDrvName pkg.name).name [ "flashplayer" "vscode" ]);
}
</programlisting>
</para></listitem>
</para>
</listitem>
<listitem>
<para>It is also possible to whitelist and blacklist licenses
that are specifically acceptable or not acceptable, using
<para>
It is also possible to whitelist and blacklist licenses that are
specifically acceptable or not acceptable, using
<literal>whitelistedLicenses</literal> and
<literal>blacklistedLicenses</literal>, respectively.
</para>
<para>The following example configuration whitelists the
licenses <literal>amd</literal> and <literal>wtfpl</literal>:
<para>
The following example configuration whitelists the licenses
<literal>amd</literal> and <literal>wtfpl</literal>:
<programlisting>
{
whitelistedLicenses = with stdenv.lib.licenses; [ amd wtfpl ];
}
</programlisting>
</para>
<para>The following example configuration blacklists the
<literal>gpl3</literal> and <literal>agpl3</literal> licenses:
<para>
The following example configuration blacklists the <literal>gpl3</literal>
and <literal>agpl3</literal> licenses:
<programlisting>
{
blacklistedLicenses = with stdenv.lib.licenses; [ agpl3 gpl3 ];
@ -157,36 +212,38 @@ distributing the software.</para>
</listitem>
</itemizedlist>
<para>A complete list of licenses can be found in the file
<filename>lib/licenses.nix</filename> of the nixpkgs tree.</para>
</section>
<para>
A complete list of licenses can be found in the file
<filename>lib/licenses.nix</filename> of the nixpkgs tree.
</para>
</section>
<section xml:id="sec-allow-insecure">
<title>Installing insecure packages</title>
<section xml:id="sec-allow-insecure">
<title>
Installing insecure packages
</title>
<para>There are several ways to tweak how Nix handles a package
which has been marked as insecure.</para>
<para>
There are several ways to tweak how Nix handles a package which has been
marked as insecure.
</para>
<itemizedlist>
<listitem><para>
To temporarily allow all insecure packages, you can use an
environment variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_INSECURE=1</programlisting>
</para></listitem>
<listitem><para>
It is possible to permanently allow individual insecure
packages, while still blocking other insecure packages by
default using the <literal>permittedInsecurePackages</literal>
configuration option in the user configuration file.</para>
<para>The following example configuration permits the
installation of the hypothetically insecure package
<literal>hello</literal>, version <literal>1.2.3</literal>:
<listitem>
<para>
To temporarily allow all insecure packages, you can use an environment
variable for a single invocation of the nix tools:
<programlisting>$ export NIXPKGS_ALLOW_INSECURE=1</programlisting>
</para>
</listitem>
<listitem>
<para>
It is possible to permanently allow individual insecure packages, while
still blocking other insecure packages by default using the
<literal>permittedInsecurePackages</literal> configuration option in the
user configuration file.
</para>
<para>
The following example configuration permits the installation of the
hypothetically insecure package <literal>hello</literal>, version
<literal>1.2.3</literal>:
<programlisting>
{
permittedInsecurePackages = [
@ -196,45 +253,42 @@ distributing the software.</para>
</programlisting>
</para>
</listitem>
<listitem><para>
It is also possible to create a custom policy around which
insecure packages to allow and deny, by overriding the
<literal>allowInsecurePredicate</literal> configuration
option.</para>
<para>The <literal>allowInsecurePredicate</literal> option is a
function which accepts a package and returns a boolean, much
like <literal>allowUnfreePredicate</literal>.</para>
<para>The following configuration example only allows insecure
packages with very short names:
<listitem>
<para>
It is also possible to create a custom policy around which insecure
packages to allow and deny, by overriding the
<literal>allowInsecurePredicate</literal> configuration option.
</para>
<para>
The <literal>allowInsecurePredicate</literal> option is a function which
accepts a package and returns a boolean, much like
<literal>allowUnfreePredicate</literal>.
</para>
<para>
The following configuration example only allows insecure packages with
very short names:
<programlisting>
{
allowInsecurePredicate = (pkg: (builtins.stringLength (builtins.parseDrvName pkg.name).name) &lt;= 5);
}
</programlisting>
</para>
<para>Note that <literal>permittedInsecurePackages</literal> is
only checked if <literal>allowInsecurePredicate</literal> is not
specified.
</para></listitem>
<para>
Note that <literal>permittedInsecurePackages</literal> is only checked if
<literal>allowInsecurePredicate</literal> is not specified.
</para>
</listitem>
</itemizedlist>
</section>
</section>
<!--============================================================-->
<section xml:id="sec-modify-via-packageOverrides">
<title>Modify packages via <literal>packageOverrides</literal></title>
<section xml:id="sec-modify-via-packageOverrides"><title>Modify
packages via <literal>packageOverrides</literal></title>
<para>You can define a function called
<varname>packageOverrides</varname> in your local
<filename>~/.config/nixpkgs/config.nix</filename> to override nix packages. It
must be a function that takes pkgs as an argument and return modified
set of packages.
<para>
You can define a function called <varname>packageOverrides</varname> in your
local <filename>~/.config/nixpkgs/config.nix</filename> to override nix
packages. It must be a function that takes pkgs as an argument and returns a
modified set of packages.
<programlisting>
{
packageOverrides = pkgs: rec {
@ -242,12 +296,9 @@ set of packages.
};
}
</programlisting>
</para>
</section>
<section xml:id="sec-declarative-package-management">
</para>
</section>
<section xml:id="sec-declarative-package-management">
<title>Declarative Package Management</title>
<section xml:id="sec-building-environment">
@ -265,7 +316,7 @@ set of packages.
use the following in <filename>~/.config/nixpkgs/config.nix</filename>:
</para>
<screen>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
@ -286,7 +337,7 @@ set of packages.
some of it isn't. Let's tell Nixpkgs to only link the stuff that we want:
</para>
<screen>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
@ -300,13 +351,12 @@ set of packages.
<para>
<literal>pathsToLink</literal> tells Nixpkgs to only link the paths listed
which gets rid of the extra stuff in the profile.
<filename>/bin</filename> and <filename>/share</filename> are good
defaults for a user environment, getting rid of the clutter. If you are
running on Nix on MacOS, you may want to add another path as well,
<filename>/Applications</filename>, that makes GUI apps available.
which gets rid of the extra stuff in the profile. <filename>/bin</filename>
and <filename>/share</filename> are good defaults for a user environment,
getting rid of the clutter. If you are running Nix on macOS, you may
want to add another path as well, <filename>/Applications</filename>, that
makes GUI apps available.
</para>
</section>
<section xml:id="sec-getting-documentation">
@ -322,13 +372,13 @@ set of packages.
section 4). Let's make Nix install those as well.
</para>
<screen>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myPackages = pkgs.buildEnv {
name = "my-packages";
paths = [ aspell bc coreutils ffmpeg nixUnstable emscripten jq nox silver-searcher ];
pathsToLink = [ "/share/man" "/share/doc" /bin" ];
pathsToLink = [ "/share/man" "/share/doc" "/bin" ];
extraOutputsToInstall = [ "man" "doc" ];
};
};
@ -338,11 +388,10 @@ set of packages.
<para>
This provides us with some useful documentation for using our packages.
However, if we actually want those manpages to be detected by man, we need
to set up our environment. This can also be managed within Nix
expressions.
to set up our environment. This can also be managed within Nix expressions.
</para>
<screen>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myProfile = writeText "my-profile" ''
@ -367,7 +416,7 @@ cp ${myProfile} $out/etc/profile.d/my-profile.sh
nox
silver-searcher
];
pathsToLink = [ "/share/man" "/share/doc" /bin" "/etc" ];
pathsToLink = [ "/share/man" "/share/doc" "/bin" "/etc" ];
extraOutputsToInstall = [ "man" "doc" ];
};
};
@ -375,12 +424,12 @@ cp ${myProfile} $out/etc/profile.d/my-profile.sh
</screen>
<para>
For this to work fully, you must also have this script sourced when you
are logged in. Try adding something like this to your
For this to work fully, you must also have this script sourced when you are
logged in. Try adding something like this to your
<filename>~/.profile</filename> file:
</para>
<screen>
<screen>
#!/bin/sh
if [ -d $HOME/.nix-profile/etc/profile.d ]; then
for i in $HOME/.nix-profile/etc/profile.d/*.sh; do
@ -395,7 +444,6 @@ fi
Now just run <literal>source $HOME/.profile</literal> and you can start
loading man pages from your environment.
</para>
</section>
<section xml:id="sec-gnu-info-setup">
@ -407,7 +455,7 @@ fi
some small modifications to our environment scripts.
</para>
<screen>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myProfile = writeText "my-profile" ''
@ -456,9 +504,6 @@ cp ${myProfile} $out/etc/profile.d/my-profile.sh
root node. Note that <literal>texinfoInteractive</literal> is added to the
environment to give the <literal>install-info</literal> command.
</para>
</section>
</section>
</section>
</chapter>


@ -1,35 +1,35 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-contributing">
<title>Contributing to this documentation</title>
<para>The DocBook sources of the Nixpkgs manual are in the <filename
<title>Contributing to this documentation</title>
<para>
The DocBook sources of the Nixpkgs manual are in the
<filename
xlink:href="https://github.com/NixOS/nixpkgs/tree/master/doc">doc</filename>
subdirectory of the Nixpkgs repository.</para>
<para>You can quickly check your edits with <command>make</command>:</para>
subdirectory of the Nixpkgs repository.
</para>
<para>
You can quickly check your edits with <command>make</command>:
</para>
<screen>
$ cd /path/to/nixpkgs/doc
$ nix-shell
[nix-shell]$ make
</screen>
<para>If you experience problems, run <command>make debug</command>
to help understand the docbook errors.</para>
<para>After making modifications to the manual, it's important to
build it before committing. You can do that as follows:
<para>
If you experience problems, run <command>make debug</command> to help
understand the docbook errors.
</para>
<para>
After making modifications to the manual, it's important to build it before
committing. You can do that as follows:
<screen>
$ cd /path/to/nixpkgs/doc
$ nix-shell
[nix-shell]$ make clean
[nix-shell]$ nix-build .
</screen>
If the build succeeds, the manual will be in
<filename>./result/share/doc/nixpkgs/manual.html</filename>.</para>
If the build succeeds, the manual will be in
<filename>./result/share/doc/nixpkgs/manual.html</filename>.
</para>
</chapter>


@ -1,153 +1,218 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-cross">
<title>Cross-compilation</title>
<section xml:id="sec-cross-intro">
<title>Cross-compilation</title>
<section xml:id="sec-cross-intro">
<title>Introduction</title>
<para>
"Cross-compilation" means compiling a program on one machine for another type of machine.
For example, a typical use of cross compilation is to compile programs for embedded devices.
These devices often don't have the computing power and memory to compile their own programs.
One might think that cross-compilation is a fairly niche concern, but there are advantages to being rigorous about distinguishing build-time vs run-time environments even when one is developing and deploying on the same machine.
Nixpkgs is increasingly adopting the opinion that packages should be written with cross-compilation in mind, and nixpkgs should evaluate in a similar way (by minimizing cross-compilation-specific special cases) whether or not one is cross-compiling.
"Cross-compilation" means compiling a program on one machine for another
type of machine. For example, a typical use of cross compilation is to
compile programs for embedded devices. These devices often don't have the
computing power and memory to compile their own programs. One might think
that cross-compilation is a fairly niche concern, but there are advantages
to being rigorous about distinguishing build-time vs run-time environments
even when one is developing and deploying on the same machine. Nixpkgs is
increasingly adopting the opinion that packages should be written with
cross-compilation in mind, and nixpkgs should evaluate in a similar way (by
minimizing cross-compilation-specific special cases) whether or not one is
cross-compiling.
</para>
<para>
This chapter will be organized in three parts.
First, it will describe the basics of how to package software in a way that supports cross-compilation.
Second, it will describe how to use Nixpkgs when cross-compiling.
Third, it will describe the internal infrastructure supporting cross-compilation.
This chapter will be organized in three parts. First, it will describe the
basics of how to package software in a way that supports cross-compilation.
Second, it will describe how to use Nixpkgs when cross-compiling. Third, it
will describe the internal infrastructure supporting cross-compilation.
</para>
</section>
</section>
<!--============================================================-->
<section xml:id="sec-cross-packaging">
<section xml:id="sec-cross-packaging">
<title>Packaging in a cross-friendly manner</title>
<section>
<title>Platform parameters</title>
<para>
Nixpkgs follows the <link xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html">common historical convention of GNU autoconf</link> of distinguishing between 3 types of platform: <wordasword>build</wordasword>, <wordasword>host</wordasword>, and <wordasword>target</wordasword>.
In summary, <wordasword>build</wordasword> is the platform on which a package is being built, <wordasword>host</wordasword> is the platform on which it is to run. The third attribute, <wordasword>target</wordasword>, is relevant only for certain specific compilers and build tools.
<para>
Nixpkgs follows the
<link xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html">common
historical convention of GNU autoconf</link> of distinguishing between 3
types of platform: <wordasword>build</wordasword>,
<wordasword>host</wordasword>, and <wordasword>target</wordasword>. In
summary, <wordasword>build</wordasword> is the platform on which a package
is being built, <wordasword>host</wordasword> is the platform on which it
is to run. The third attribute, <wordasword>target</wordasword>, is
relevant only for certain specific compilers and build tools.
</para>
<para>
In Nixpkgs, these three platforms are defined as attribute sets under the names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>, and <literal>targetPlatform</literal>.
All three are always defined as attributes in the standard environment, and at the top level. That means one can get at them just like a dependency in a function that is imported with <literal>callPackage</literal>:
<programlisting>{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...buildPlatform...</programlisting>, or just off <varname>stdenv</varname>:
<programlisting>{ stdenv, fooDep, barDep, .. }: ...stdenv.buildPlatform...</programlisting>.
In Nixpkgs, these three platforms are defined as attribute sets under the
names <literal>buildPlatform</literal>, <literal>hostPlatform</literal>,
and <literal>targetPlatform</literal>. All three are always defined as
attributes in the standard environment, and at the top level. That means
one can get at them just like a dependency in a function that is imported
with <literal>callPackage</literal>:
<programlisting>{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...buildPlatform...</programlisting>
, or just off <varname>stdenv</varname>:
<programlisting>{ stdenv, fooDep, barDep, .. }: ...stdenv.buildPlatform...</programlisting>
.
</para>
<variablelist>
<varlistentry>
<term><varname>buildPlatform</varname></term>
<listitem><para>
The "build platform" is the platform on which a package is built.
Once someone has a built package, or pre-built binary package, the build platform should not matter and be safe to ignore.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>hostPlatform</varname></term>
<listitem><para>
The "host platform" is the platform on which a package will be run.
This is the simplest platform to understand, but also the one with the worst name.
</para></listitem>
</varlistentry>
<varlistentry>
<term><varname>targetPlatform</varname></term>
<term>
<varname>buildPlatform</varname>
</term>
<listitem>
<para>
The "target platform" attribute is, unlike the other two attributes, not actually fundamental to the process of building software.
Instead, it is only relevant for compatibility with building certain specific compilers and build tools.
It can be safely ignored for all other packages.
The "build platform" is the platform on which a package is built. Once
someone has a built package, or pre-built binary package, the build
platform should not matter and be safe to ignore.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>hostPlatform</varname>
</term>
<listitem>
<para>
The "host platform" is the platform on which a package will be run. This
is the simplest platform to understand, but also the one with the worst
name.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>targetPlatform</varname>
</term>
<listitem>
<para>
The "target platform" attribute is, unlike the other two attributes, not
actually fundamental to the process of building software. Instead, it is
only relevant for compatibility with building certain specific compilers
and build tools. It can be safely ignored for all other packages.
</para>
<para>
The build process of certain compilers is written in such a way that the compiler resulting from a single build can itself only produce binaries for a single platform.
The task specifying this single "target platform" is thus pushed to build time of the compiler.
The root cause of this mistake is often that the compiler (which will be run on the host) and the the standard library/runtime (which will be run on the target) are built by a single build process.
The build process of certain compilers is written in such a way that the
compiler resulting from a single build can itself only produce binaries
for a single platform. The task specifying this single "target platform"
is thus pushed to build time of the compiler. The root cause of this
mistake is often that the compiler (which will be run on the host) and
the standard library/runtime (which will be run on the target) are
built by a single build process.
</para>
<para>
There is no fundamental need to think about a single target ahead of time like this.
If the tool supports modular or pluggable backends, both the need to specify the target at build time and the constraint of having only a single target disappear.
An example of such a tool is LLVM.
There is no fundamental need to think about a single target ahead of
time like this. If the tool supports modular or pluggable backends, both
the need to specify the target at build time and the constraint of
having only a single target disappear. An example of such a tool is
LLVM.
</para>
<para>
Although the existance of a "target platfom" is arguably a historical mistake, it is a common one: examples of tools that suffer from it are GCC, Binutils, GHC and Autoconf.
Nixpkgs tries to avoid sharing in the mistake where possible.
Still, because the concept of a target platform is so ingrained, it is best to support it as is.
Although the existence of a "target platform" is arguably a historical
mistake, it is a common one: examples of tools that suffer from it are
GCC, Binutils, GHC and Autoconf. Nixpkgs tries to avoid sharing in the
mistake where possible. Still, because the concept of a target platform
is so ingrained, it is best to support it as is.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
The exact schema these fields follow is a bit ill-defined due to a long and convoluted evolution, but this is slowly being cleaned up.
You can see examples of ones used in practice in <literal>lib.systems.examples</literal>; note how they are not all very consistent.
For now, here are few fields can count on them containing:
The exact schema these fields follow is a bit ill-defined due to a long and
convoluted evolution, but this is slowly being cleaned up. You can see
examples of ones used in practice in
<literal>lib.systems.examples</literal>; note how they are not all very
consistent. For now, here are a few fields you can count on them containing:
</para>
<variablelist>
<varlistentry>
<term><varname>system</varname></term>
<term>
<varname>system</varname>
</term>
<listitem>
<para>
This is a two-component shorthand for the platform.
Examples of this would be "x86_64-darwin" and "i686-linux"; see <literal>lib.systems.doubles</literal> for more.
This format isn't very standard, but has built-in support in Nix, such as the <varname>builtins.currentSystem</varname> impure string.
This is a two-component shorthand for the platform. Examples of this
would be "x86_64-darwin" and "i686-linux"; see
<literal>lib.systems.doubles</literal> for more. This format isn't very
standard, but has built-in support in Nix, such as the
<varname>builtins.currentSystem</varname> impure string.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>config</varname></term>
<term>
<varname>config</varname>
</term>
<listitem>
<para>
This is a 3- or 4- component shorthand for the platform.
Examples of this would be "x86_64-unknown-linux-gnu" and "aarch64-apple-darwin14".
This is a standard format called the "LLVM target triple", as they are pioneered by LLVM and traditionally just used for the <varname>targetPlatform</varname>.
This format is strictly more informative than the "Nix host double", as the previous format could analogously be termed.
This needs a better name than <varname>config</varname>!
This is a 3- or 4- component shorthand for the platform. Examples of
this would be "x86_64-unknown-linux-gnu" and "aarch64-apple-darwin14".
This is a standard format called the "LLVM target triple", as they are
pioneered by LLVM and traditionally just used for the
<varname>targetPlatform</varname>. This format is strictly more
informative than the "Nix host double", as the previous format could
analogously be termed. This needs a better name than
<varname>config</varname>!
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>parsed</varname></term>
<term>
<varname>parsed</varname>
</term>
<listitem>
<para>
This is a nix representation of a parsed LLVM target triple with white-listed components.
This can be specified directly, or actually parsed from the <varname>config</varname>.
[Technically, only one need be specified and the others can be inferred, though the precision of inference may not be very good.]
See <literal>lib.systems.parse</literal> for the exact representation.
This is a nix representation of a parsed LLVM target triple with
white-listed components. This can be specified directly, or actually
parsed from the <varname>config</varname>. [Technically, only one need
be specified and the others can be inferred, though the precision of
inference may not be very good.] See
<literal>lib.systems.parse</literal> for the exact representation.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>libc</varname></term>
<term>
<varname>libc</varname>
</term>
<listitem>
<para>
This is a string identifying the standard C library used.
Valid identifiers include "glibc" for GNU libc, "libSystem" for Darwin's Libsystem, and "uclibc" for µClibc.
It should probably be refactored to use the module system, like <varname>parse</varname>.
This is a string identifying the standard C library used. Valid
identifiers include "glibc" for GNU libc, "libSystem" for Darwin's
Libsystem, and "uclibc" for µClibc. It should probably be refactored to
use the module system, like <varname>parse</varname>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>is*</varname></term>
<term>
<varname>is*</varname>
</term>
<listitem>
<para>
These predicates are defined in <literal>lib.systems.inspect</literal>, and slapped on every platform.
They are superior to the ones in <varname>stdenv</varname> as they force the user to be explicit about which platform they are inspecting.
Please use these instead of those.
These predicates are defined in <literal>lib.systems.inspect</literal>,
and slapped on every platform. They are superior to the ones in
<varname>stdenv</varname> as they force the user to be explicit about
which platform they are inspecting. Please use these instead of those.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>platform</varname></term>
<term>
<varname>platform</varname>
</term>
<listitem>
<para>
This is, quite frankly, a dumping ground of ad-hoc settings (it's an attribute set).
See <literal>lib.systems.platforms</literal> for examples—there's hopefully one in there that will work verbatim for each platform that is working.
Please help us triage these flags and give them better homes!
This is, quite frankly, a dumping ground of ad-hoc settings (it's an
attribute set). See <literal>lib.systems.platforms</literal> for
examples—there's hopefully one in there that will work verbatim for
each platform that is working. Please help us triage these flags and
give them better homes!
</para>
</listitem>
</varlistentry>
@ -156,153 +221,258 @@
<section>
<title>Specifying Dependencies</title>
<para>
In this section we explore the relationship between both runtime and buildtime dependencies and the 3 Autoconf platforms.
In this section we explore the relationship between both runtime and
buildtime dependencies and the 3 Autoconf platforms.
</para>
<para>
A runtime dependency between 2 packages implies that between them both the host and target platforms match.
This is directly implied by the meaning of "host platform" and "runtime dependency":
The package dependency exists while both packages are running on a single host platform.
A runtime dependency between 2 packages implies that between them both the
host and target platforms match. This is directly implied by the meaning of
"host platform" and "runtime dependency": The package dependency exists
while both packages are running on a single host platform.
</para>
<para>
A build time dependency, however, implies a shift in platforms between the depending package and the depended-on package.
The meaning of a build time dependency is that to build the depending package we need to be able to run the depended-on's package.
The depending package's build platform is therefore equal to the depended-on package's host platform.
Analogously, the depending package's host platform is equal to the depended-on package's target platform.
A build time dependency, however, implies a shift in platforms between the
depending package and the depended-on package. The meaning of a build time
dependency is that to build the depending package we need to be able to run
the depended-on package. The depending package's build platform is
therefore equal to the depended-on package's host platform. Analogously,
the depending package's host platform is equal to the depended-on package's
target platform.
</para>
<para>
In this manner, given the 3 platforms for one package, we can determine the three platforms for all its transitive dependencies.
This is the most important guiding principle behind cross-compilation with Nixpkgs, and will be called the <wordasword>sliding window principle</wordasword>.
In this manner, given the 3 platforms for one package, we can determine the
three platforms for all its transitive dependencies. This is the most
important guiding principle behind cross-compilation with Nixpkgs, and will
be called the <wordasword>sliding window principle</wordasword>.
</para>
<para>
Some examples will probably make this clearer.
If a package is being built with a <literal>(build, host, target)</literal> platform triple of <literal>(foo, bar, bar)</literal>, then its build-time dependencies would have a triple of <literal>(foo, foo, bar)</literal>, and <emphasis>those packages'</emphasis> build-time dependencies would have triple of <literal>(foo, foo, foo)</literal>.
In other words, it should take two "rounds" of following build-time dependency edges before one reaches a fixed point where, by the sliding window principle, the platform triple no longer changes.
Indeed, this happens with cross compilation, where only rounds of native dependencies starting with the second necessarily coincide with native packages.
Some examples will probably make this clearer. If a package is being built
with a <literal>(build, host, target)</literal> platform triple of
<literal>(foo, bar, bar)</literal>, then its build-time dependencies would
have a triple of <literal>(foo, foo, bar)</literal>, and <emphasis>those
packages'</emphasis> build-time dependencies would have a triple of
<literal>(foo, foo, foo)</literal>. In other words, it should take two
"rounds" of following build-time dependency edges before one reaches a
fixed point where, by the sliding window principle, the platform triple no
longer changes. Indeed, this happens with cross compilation, where only
rounds of native dependencies starting with the second necessarily coincide
with native packages.
</para>
<note><para>
The depending package's target platform is unconstrained by the sliding window principle, which makes sense in that one can in principle build cross compilers targeting arbitrary platforms.
</para></note>
<note>
<para>
How does this work in practice? Nixpkgs is now structured so that build-time dependencies are taken from <varname>buildPackages</varname>, whereas run-time dependencies are taken from the top level attribute set.
For example, <varname>buildPackages.gcc</varname> should be used at build time, while <varname>gcc</varname> should be used at run time.
Now, for most of Nixpkgs's history, there was no <varname>buildPackages</varname>, and most packages have not been refactored to use it explicitly.
Instead, one can use the six (<emphasis>gasp</emphasis>) attributes used for specifying dependencies as documented in <xref linkend="ssec-stdenv-dependencies"/>.
We "splice" together the run-time and build-time package sets with <varname>callPackage</varname>, and then <varname>mkDerivation</varname> for each of four attributes pulls the right derivation out.
This splicing can be skipped when not cross compiling as the package sets are the same, but is a bit slow for cross compiling.
Because of this, a best-of-both-worlds solution is in the works with no splicing or explicit access of <varname>buildPackages</varname> needed.
For now, feel free to use either method.
The depending package's target platform is unconstrained by the sliding
window principle, which makes sense in that one can in principle build
cross compilers targeting arbitrary platforms.
</para>
<note><para>
There is also a "backlink" <varname>targetPackages</varname>, yielding a package set whose <varname>buildPackages</varname> is the current package set.
This is a hack, though, to accommodate compilers with lousy build systems.
Please do not use this unless you are absolutely sure you are packaging such a compiler and there is no other way.
</para></note>
</note>
<para>
How does this work in practice? Nixpkgs is now structured so that
build-time dependencies are taken from <varname>buildPackages</varname>,
whereas run-time dependencies are taken from the top level attribute set.
For example, <varname>buildPackages.gcc</varname> should be used at build
time, while <varname>gcc</varname> should be used at run time. Now, for
most of Nixpkgs's history, there was no <varname>buildPackages</varname>,
and most packages have not been refactored to use it explicitly. Instead,
one can use the six (<emphasis>gasp</emphasis>) attributes used for
specifying dependencies as documented in
<xref linkend="ssec-stdenv-dependencies"/>. We "splice" together the
run-time and build-time package sets with <varname>callPackage</varname>,
and then <varname>mkDerivation</varname> for each of four attributes pulls
the right derivation out. This splicing can be skipped when not cross
compiling as the package sets are the same, but is a bit slow for cross
compiling. Because of this, a best-of-both-worlds solution is in the works
with no splicing or explicit access of <varname>buildPackages</varname>
needed. For now, feel free to use either method.
</para>
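<para>
  As a minimal sketch (the package name and inputs below are hypothetical and
  chosen only to illustrate the attributes), a cross-friendly derivation might
  declare its dependencies like this:
<programlisting>
{ stdenv, pkgconfig, zlib }:

stdenv.mkDerivation {
  name = "example-1.0";              # hypothetical package, for illustration only
  src = ./.;
  nativeBuildInputs = [ pkgconfig ]; # build-time tool, taken from the build platform's packages
  buildInputs = [ zlib ];            # run-time library, built for the host platform
}
</programlisting>
</para>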
<note>
<para>
There is also a "backlink" <varname>targetPackages</varname>, yielding a
package set whose <varname>buildPackages</varname> is the current package
set. This is a hack, though, to accommodate compilers with lousy build
systems. Please do not use this unless you are absolutely sure you are
packaging such a compiler and there is no other way.
</para>
</note>
</section>
<section>
<title>Cross packagaing cookbook</title>
<title>Cross packaging cookbook</title>
<para>
Some frequently problems when packaging for cross compilation are good to just spell and answer.
Ideally the information above is exhaustive, so this section cannot provide any new information,
but its ludicrous and cruel to expect everyone to spend effort working through the interaction of many features just to figure out the same answer to the same common problem.
Some frequent problems when packaging for cross compilation are good to
just spell out and answer. Ideally the information above is exhaustive, so this
section cannot provide any new information, but it's ludicrous and cruel to
expect everyone to spend effort working through the interaction of many
features just to figure out the same answer to the same common problem.
Feel free to add to this list!
</para>
<qandaset>
<qandaentry>
<question><para>
What if my package's build system needs to build a C program to be run under the build environment?
</para></question>
<answer><para>
<programlisting>depsBuildBuild = [ buildPackages.stdenv.cc ];</programlisting>
<question>
<para>
What if my package's build system needs to build a C program to be run
under the build environment?
</para>
</question>
<answer>
<para>
<programlisting>depsBuildBuild = [ buildPackages.stdenv.cc ];</programlisting>
Add it to your <function>mkDerivation</function> invocation.
</para></answer>
</para>
</answer>
</qandaentry>
<qandaentry>
<question><para>
<question>
<para>
My package fails to find <command>ar</command>.
</para></question>
<answer><para>
Many packages assume that an unprefixed <command>ar</command> is available, but Nix doesn't provide one.
It only provides a prefixed one, just as it only does for all the other binutils programs.
It may be necessary to patch the package to fix the build system to use a prefixed `ar`.
</para></answer>
</para>
</question>
<answer>
<para>
Many packages assume that an unprefixed <command>ar</command> is
available, but Nix doesn't provide one. It only provides a prefixed one,
just as it does for all the other binutils programs. It may be
necessary to patch the package to fix the build system to use a prefixed
`ar`.
</para>
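<para>
  As a sketch of one possible workaround (not part of the original answer, and
  only applicable when the build system honors an <literal>AR</literal> make
  variable), the prefixed tool can be passed explicitly:
<programlisting>
{ stdenv }:

stdenv.mkDerivation {
  name = "example-1.0";  # hypothetical package, for illustration only
  src = ./.;
  # Point a make-based build at the prefixed ar provided by the cc-wrapper:
  makeFlags = [ "AR=${stdenv.cc.targetPrefix}ar" ];
}
</programlisting>
</para>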
</answer>
</qandaentry>
<qandaentry>
<question><para>
<question>
<para>
My package's testsuite needs to run host platform code.
</para></question>
<answer><para>
<programlisting>doCheck = stdenv.hostPlatform != stdenv.buildPlatfrom;</programlisting>
</para>
</question>
<answer>
<para>
<programlisting>doCheck = stdenv.hostPlatform == stdenv.buildPlatform;</programlisting>
Add it to your <function>mkDerivation</function> invocation.
</para></answer>
</para>
</answer>
</qandaentry>
</qandaset>
</section>
</section>
</section>
<!--============================================================-->
<section xml:id="sec-cross-usage">
<section xml:id="sec-cross-usage">
<title>Cross-building packages</title>
<note><para>
More information needs to moved from the old wiki, especially <link xlink:href="https://nixos.org/wiki/CrossCompiling" />, for this section.
</para></note>
<note>
<para>
Nixpkgs can be instantiated with <varname>localSystem</varname> alone, in which case there is no cross compiling and everything is built by and for that system,
or also with <varname>crossSystem</varname>, in which case packages run on the latter, but all building happens on the former.
Both parameters take the same schema as the 3 (build, host, and target) platforms defined in the previous section.
As mentioned above, <literal>lib.systems.examples</literal> has some platforms which are used as arguments for these parameters in practice.
You can use them programmatically, or on the command line: <programlisting>
More information needs to be moved from the old wiki, especially
<link xlink:href="https://nixos.org/wiki/CrossCompiling" />, for this
section.
</para>
</note>
<para>
Nixpkgs can be instantiated with <varname>localSystem</varname> alone, in
which case there is no cross compiling and everything is built by and for
that system, or also with <varname>crossSystem</varname>, in which case
packages run on the latter, but all building happens on the former. Both
parameters take the same schema as the 3 (build, host, and target) platforms
defined in the previous section. As mentioned above,
<literal>lib.systems.examples</literal> has some platforms which are used as
arguments for these parameters in practice. You can use them
programmatically, or on the command line:
<programlisting>
nix-build &lt;nixpkgs&gt; --arg crossSystem '(import &lt;nixpkgs/lib&gt;).systems.examples.fooBarBaz' -A whatever</programlisting>
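The same can be done programmatically when importing Nixpkgs (a sketch using
the same placeholder example name as above):
<programlisting>
import &lt;nixpkgs&gt; {
  crossSystem = (import &lt;nixpkgs/lib&gt;).systems.examples.fooBarBaz;
}</programlisting>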
</para>
<note>
<para>
Eventually we would like to make these platform examples an unnecessary convenience so that <programlisting>
Eventually we would like to make these platform examples an unnecessary
convenience so that
<programlisting>
nix-build &lt;nixpkgs&gt; --arg crossSystem.config '&lt;arch&gt;-&lt;os&gt;-&lt;vendor&gt;-&lt;abi&gt;' -A whatever</programlisting>
works in the vast majority of cases.
The problem today is dependencies on other sorts of configuration which aren't given proper defaults.
We rely on the examples to crudely to set those configuration parameters in some vaguely sane manner on the users behalf.
Issue <link xlink:href="https://github.com/NixOS/nixpkgs/issues/34274">#34274</link> tracks this inconvenience along with its root cause in crufty configuration options.
works in the vast majority of cases. The problem today is dependencies on
other sorts of configuration which aren't given proper defaults. We rely on
the examples to crudely set those configuration parameters in some
vaguely sane manner on the user's behalf. Issue
<link xlink:href="https://github.com/NixOS/nixpkgs/issues/34274">#34274</link>
tracks this inconvenience along with its root cause in crufty configuration
options.
</para>
</note>
<para>
While one is free to pass both parameters in full, there's a lot of logic to fill in missing fields.
As discussed in the previous section, only one of <varname>system</varname>, <varname>config</varname>, and <varname>parsed</varname> is needed to infer the other two.
Additionally, <varname>libc</varname> will be inferred from <varname>parse</varname>.
Finally, <literal>localSystem.system</literal> is also <emphasis>impurely</emphasis> inferred based on the platform evaluation occurs.
This means it is often not necessary to pass <varname>localSystem</varname> at all, as in the command-line example in the previous paragraph.
While one is free to pass both parameters in full, there's a lot of logic to
fill in missing fields. As discussed in the previous section, only one of
<varname>system</varname>, <varname>config</varname>, and
<varname>parsed</varname> is needed to infer the other two. Additionally,
<varname>libc</varname> will be inferred from <varname>parse</varname>.
Finally, <literal>localSystem.system</literal> is also
<emphasis>impurely</emphasis> inferred based on the platform on which
evaluation occurs. This means it is often not necessary to pass
<varname>localSystem</varname> at all, as in the command-line example in the
previous paragraph.
</para>
<note>
<para>
Many sources (manual, wiki, etc) probably mention passing <varname>system</varname>, <varname>platform</varname>, along with the optional <varname>crossSystem</varname> to nixpkgs:
<literal>import &lt;nixpkgs&gt; { system = ..; platform = ..; crossSystem = ..; }</literal>.
Passing those two instead of <varname>localSystem</varname> is still supported for compatibility, but is discouraged.
Indeed, much of the inference we do for these parameters is motivated by compatibility as much as convenience.
Many sources (manual, wiki, etc) probably mention passing
<varname>system</varname>, <varname>platform</varname>, along with the
optional <varname>crossSystem</varname> to nixpkgs: <literal>import
&lt;nixpkgs&gt; { system = ..; platform = ..; crossSystem = ..;
}</literal>. Passing those two instead of <varname>localSystem</varname> is
still supported for compatibility, but is discouraged. Indeed, much of the
inference we do for these parameters is motivated by compatibility as much
as convenience.
</para>
</note>
<para>
One would think that <varname>localSystem</varname> and <varname>crossSystem</varname> overlap horribly with the three <varname>*Platforms</varname> (<varname>buildPlatform</varname>, <varname>hostPlatform,</varname> and <varname>targetPlatform</varname>; see <varname>stage.nix</varname> or the manual).
Actually, those identifiers are purposefully not used here to draw a subtle but important distinction:
While the granularity of having 3 platforms is necessary to properly *build* packages, it is overkill for specifying the user's *intent* when making a build plan or package set.
A simple "build vs deploy" dichotomy is adequate: the sliding window principle described in the previous section shows how to interpolate between the these two "end points" to get the 3 platform triple for each bootstrapping stage.
That means for any package a given package set, even those not bound on the top level but only reachable via dependencies or <varname>buildPackages</varname>, the three platforms will be defined as one of <varname>localSystem</varname> or <varname>crossSystem</varname>, with the former replacing the latter as one traverses build-time dependencies.
A last simple difference then is <varname>crossSystem</varname> should be null when one doesn't want to cross-compile, while the <varname>*Platform</varname>s are always non-null.
One would think that <varname>localSystem</varname> and
<varname>crossSystem</varname> overlap horribly with the three
<varname>*Platforms</varname> (<varname>buildPlatform</varname>,
<varname>hostPlatform,</varname> and <varname>targetPlatform</varname>; see
<varname>stage.nix</varname> or the manual). Actually, those identifiers are
purposefully not used here to draw a subtle but important distinction: While
the granularity of having 3 platforms is necessary to properly *build*
packages, it is overkill for specifying the user's *intent* when making a
build plan or package set. A simple "build vs deploy" dichotomy is adequate:
the sliding window principle described in the previous section shows how to
interpolate between these two "end points" to get the 3 platform triple
for each bootstrapping stage. That means for any package in a given package
set, even those not bound on the top level but only reachable via
dependencies or <varname>buildPackages</varname>, the three platforms will
be defined as one of <varname>localSystem</varname> or
<varname>crossSystem</varname>, with the former replacing the latter as one
traverses build-time dependencies. A last simple difference then is that
<varname>crossSystem</varname> should be null when one doesn't want to
cross-compile, while the <varname>*Platform</varname>s are always non-null.
<varname>localSystem</varname> is always non-null.
</para>
</section>
</section>
<!--============================================================-->
<section xml:id="sec-cross-infra">
<section xml:id="sec-cross-infra">
<title>Cross-compilation infrastructure</title>
<para>To be written.</para>
<note><para>
If one explores nixpkgs, they will see derivations with names like <literal>gccCross</literal>.
Such <literal>*Cross</literal> derivations is a holdover from before we properly distinguished between the host and target platforms
—the derivation with "Cross" in the name covered the <literal>build = host != target</literal> case, while the other covered the <literal>host = target</literal>, with build platform the same or not based on whether one was using its <literal>.nativeDrv</literal> or <literal>.crossDrv</literal>.
This ugliness will disappear soon.
</para></note>
</section>
<para>
To be written.
</para>
<note>
<para>
If one explores nixpkgs, they will see derivations with names like
<literal>gccCross</literal>. Such <literal>*Cross</literal> derivations are
a holdover from before we properly distinguished between the host and
target platforms —the derivation with "Cross" in the name covered the
<literal>build = host != target</literal> case, while the other covered the
<literal>host = target</literal>, with build platform the same or not based
on whether one was using its <literal>.nativeDrv</literal> or
<literal>.crossDrv</literal>. This ugliness will disappear soon.
</para>
</note>
</section>
</chapter>

@ -1,13 +1,11 @@
let
pkgs = import ./.. { };
lib = pkgs.lib;
sources = lib.sourceFilesBySuffices ./. [".xml"];
sources-langs = ./languages-frameworks;
in
pkgs.stdenv.mkDerivation {
name = "nixpkgs-manual";
buildInputs = with pkgs; [ pandoc libxml2 libxslt zip jing ];
buildInputs = with pkgs; [ pandoc libxml2 libxslt zip jing xmlformat ];
src = ./.;
@ -16,8 +14,9 @@ pkgs.stdenv.mkDerivation {
# $ nix-shell --run "make clean all"
# otherwise they won't reapply :)
HIGHLIGHTJS = pkgs.documentation-highlighter;
XSL = "${pkgs.docbook5_xsl}/xml/xsl";
XSL = "${pkgs.docbook_xsl_ns}/xml/xsl";
RNG = "${pkgs.docbook5}/xml/rng/docbook/docbook.rng";
XMLFORMAT_CONFIG = ../nixos/doc/xmlformat.conf;
xsltFlags = lib.concatStringsSep " " [
"--param section.autolabel 1"
"--param section.label.includes.component.label 1"
@ -30,7 +29,7 @@ pkgs.stdenv.mkDerivation {
];
postPatch = ''
echo ${lib.nixpkgsVersion} > .version
echo ${lib.version} > .version
'';
installPhase = ''
@ -43,5 +42,6 @@ pkgs.stdenv.mkDerivation {
mkdir -p $out/nix-support/
echo "doc manual $dest manual.html" >> $out/nix-support/hydra-build-products
echo "doc manual $dest nixpkgs-manual.epub" >> $out/nix-support/hydra-build-products
'';
}

@ -1,20 +1,19 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-functions">
<title>Functions reference</title>
<para>
The nixpkgs repository has several utility functions to manipulate Nix expressions.
</para>
<section xml:id="sec-overrides">
<title>Functions reference</title>
<para>
The nixpkgs repository has several utility functions to manipulate Nix
expressions.
</para>
<section xml:id="sec-overrides">
<title>Overriding</title>
<para>
Sometimes one wants to override parts of
<literal>nixpkgs</literal>, e.g. derivation attributes, the results of
derivations or even the whole package set.
Sometimes one wants to override parts of <literal>nixpkgs</literal>, e.g.
derivation attributes, the results of derivations or even the whole package
set.
</para>
<section xml:id="sec-pkg-override">
@ -24,28 +23,28 @@
The function <varname>override</varname> is usually available for all the
derivations in the nixpkgs expression (<varname>pkgs</varname>).
</para>
<para>
It is used to override the arguments passed to a function.
</para>
<para>
Example usages:
<programlisting>pkgs.foo.override { arg1 = val1; arg2 = val2; ... }</programlisting>
<programlisting>import pkgs.path { overlays = [ (self: super: {
<programlisting>pkgs.foo.override { arg1 = val1; arg2 = val2; ... }</programlisting>
<programlisting>import pkgs.path { overlays = [ (self: super: {
foo = super.foo.override { barSupport = true ; };
})]};</programlisting>
<programlisting>mypkg = pkgs.callPackage ./mypkg.nix {
<programlisting>mypkg = pkgs.callPackage ./mypkg.nix {
mydep = pkgs.mydep.override { ... };
}</programlisting>
</para>
<para>
In the first example, <varname>pkgs.foo</varname> is the result of a function call
with some default arguments, usually a derivation.
Using <varname>pkgs.foo.override</varname> will call the same function with
the given new arguments.
In the first example, <varname>pkgs.foo</varname> is the result of a
function call with some default arguments, usually a derivation. Using
<varname>pkgs.foo.override</varname> will call the same function with the
given new arguments.
</para>
</section>
<section xml:id="sec-pkg-overrideAttrs">
@ -54,16 +53,15 @@
<para>
The function <varname>overrideAttrs</varname> allows overriding the
attribute set passed to a <varname>stdenv.mkDerivation</varname> call,
producing a new derivation based on the original one.
This function is available on all derivations produced by the
<varname>stdenv.mkDerivation</varname> function, which is most packages
in the nixpkgs expression <varname>pkgs</varname>.
producing a new derivation based on the original one. This function is
available on all derivations produced by the
<varname>stdenv.mkDerivation</varname> function, which is most packages in
the nixpkgs expression <varname>pkgs</varname>.
</para>
<para>
Example usage:
<programlisting>helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
<programlisting>helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
separateDebugInfo = true;
});</programlisting>
</para>
@ -84,28 +82,27 @@
<para>
Note that <varname>separateDebugInfo</varname> is processed only by the
<varname>stdenv.mkDerivation</varname> function, not the generated, raw
Nix derivation. Thus, using <varname>overrideDerivation</varname> will
not work in this case, as it overrides only the attributes of the final
Nix derivation. Thus, using <varname>overrideDerivation</varname> will not
work in this case, as it overrides only the attributes of the final
derivation. It is for this reason that <varname>overrideAttrs</varname>
should be preferred in (almost) all cases to
<varname>overrideDerivation</varname>, i.e. to allow using
<varname>stdenv.mkDerivation</varname> to process input arguments, as well
as the fact that it is easier to use (you can use the same attribute
names you see in your Nix code, instead of the ones generated (e.g.
as the fact that it is easier to use (you can use the same attribute names
you see in your Nix code, instead of the ones generated, e.g.
<varname>buildInputs</varname> vs <varname>nativeBuildInputs</varname>),
and it involves less typing.
</para>
</note>
</section>
<section xml:id="sec-pkg-overrideDerivation">
<title>&lt;pkg&gt;.overrideDerivation</title>
<warning>
<para>You should prefer <varname>overrideAttrs</varname> in almost all
cases, see its documentation for the reasons why.
<para>
You should prefer <varname>overrideAttrs</varname> in almost all cases;
see its documentation for the reasons why.
<varname>overrideDerivation</varname> is not deprecated and will continue
to work, but is less nice to use and does not have as many abilities as
<varname>overrideAttrs</varname>.
@ -113,32 +110,31 @@
</warning>
<warning>
<para>Do not use this function in Nixpkgs as it evaluates a Derivation
before modifying it, which breaks package abstraction and removes
error-checking of function arguments. In addition, this
evaluation-per-function application incurs a performance penalty,
which can become a problem if many overrides are used.
It is only intended for ad-hoc customisation, such as in
<filename>~/.config/nixpkgs/config.nix</filename>.
<para>
Do not use this function in Nixpkgs as it evaluates a Derivation before
modifying it, which breaks package abstraction and removes error-checking
of function arguments. In addition, this evaluation-per-function
application incurs a performance penalty, which can become a problem if
many overrides are used. It is only intended for ad-hoc customisation,
such as in <filename>~/.config/nixpkgs/config.nix</filename>.
</para>
</warning>
<para>
The function <varname>overrideDerivation</varname> creates a new derivation
based on an existing one by overriding the original's attributes with
the attribute set produced by the specified function.
This function is available on all
derivations defined using the <varname>makeOverridable</varname> function.
Most standard derivation-producing functions, such as
<varname>stdenv.mkDerivation</varname>, are defined using this
function, which means most packages in the nixpkgs expression,
based on an existing one by overriding the original's attributes with the
attribute set produced by the specified function. This function is
available on all derivations defined using the
<varname>makeOverridable</varname> function. Most standard
derivation-producing functions, such as
<varname>stdenv.mkDerivation</varname>, are defined using this function,
which means most packages in the nixpkgs expression,
<varname>pkgs</varname>, have this function.
</para>
<para>
Example usage:
<programlisting>mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
<programlisting>mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
name = "sed-4.2.2-pre";
src = fetchurl {
url = ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2;
@ -155,75 +151,67 @@
</para>
<para>
The argument <varname>oldAttrs</varname> is used to refer to the attribute set of
the original derivation.
The argument <varname>oldAttrs</varname> is used to refer to the attribute
set of the original derivation.
</para>
<note>
<para>
A package's attributes are evaluated *before* being modified by
the <varname>overrideDerivation</varname> function.
For example, the <varname>name</varname> attribute reference
in <varname>url = "mirror://gnu/hello/${name}.tar.gz";</varname>
is filled-in *before* the <varname>overrideDerivation</varname> function
modifies the attribute set. This means that overriding the
<varname>name</varname> attribute, in this example, *will not* change the
value of the <varname>url</varname> attribute. Instead, we need to override
both the <varname>name</varname> *and* <varname>url</varname> attributes.
A package's attributes are evaluated *before* being modified by the
<varname>overrideDerivation</varname> function. For example, the
<varname>name</varname> attribute reference in <varname>url =
"mirror://gnu/hello/${name}.tar.gz";</varname> is filled-in *before* the
<varname>overrideDerivation</varname> function modifies the attribute set.
This means that overriding the <varname>name</varname> attribute, in this
example, *will not* change the value of the <varname>url</varname>
attribute. Instead, we need to override both the <varname>name</varname>
*and* <varname>url</varname> attributes.
</para>
</note>
</section>
<section xml:id="sec-lib-makeOverridable">
<title>lib.makeOverridable</title>
<para>
The function <varname>lib.makeOverridable</varname> is used to make the result
of a function easily customizable. This utility only makes sense for functions
that accept an argument set and return an attribute set.
The function <varname>lib.makeOverridable</varname> is used to make the
result of a function easily customizable. This utility only makes sense for
functions that accept an argument set and return an attribute set.
</para>
<para>
Example usage:
<programlisting>f = { a, b }: { result = a+b; }
<programlisting>f = { a, b }: { result = a+b; }
c = lib.makeOverridable f { a = 1; b = 2; }</programlisting>
</para>
<para>
The variable <varname>c</varname> is the value of the <varname>f</varname> function
applied with some default arguments. Hence the value of <varname>c.result</varname>
is <literal>3</literal>, in this example.
The variable <varname>c</varname> is the value of the <varname>f</varname>
function applied with some default arguments. Hence the value of
<varname>c.result</varname> is <literal>3</literal>, in this example.
</para>
<para>
The variable <varname>c</varname> however also has some additional functions, like
<link linkend="sec-pkg-override">c.override</link> which can be used to
override the default arguments. In this example the value of
The variable <varname>c</varname> however also has some additional
functions, like <link linkend="sec-pkg-override">c.override</link> which
can be used to override the default arguments. In this example the value of
<varname>(c.override { a = 4; }).result</varname> is 6.
</para>
</section>
</section>
<section xml:id="sec-generators">
</section>
<section xml:id="sec-generators">
<title>Generators</title>
<para>
Generators are functions that create file formats from nix
data structures, e.g. for configuration files.
There are generators available for: <literal>INI</literal>,
<literal>JSON</literal> and <literal>YAML</literal>
Generators are functions that create file formats from nix data structures,
e.g. for configuration files. There are generators available for:
<literal>INI</literal>, <literal>JSON</literal> and <literal>YAML</literal>.
</para>
<para>
All generators follow a similar call interface: <code>generatorName
configFunctions data</code>, where <literal>configFunctions</literal> is
an attrset of user-defined functions that format nested parts of the
content.
configFunctions data</code>, where <literal>configFunctions</literal> is an
attrset of user-defined functions that format nested parts of the content.
They each have common defaults, so often they do not need to be set
manually. An example is <code>mkSectionName ? (name: libStr.escape [ "[" "]"
] name)</code> from the <literal>INI</literal> generator. It receives the
@ -233,11 +221,11 @@
</para>
<para>
Generators can be fine-tuned to produce exactly the file format required
by your application/service. One example is an INI-file format which uses
Generators can be fine-tuned to produce exactly the file format required by
your application/service. One example is an INI-file format which uses
<literal>: </literal> as separator, the strings
<literal>"yes"</literal>/<literal>"no"</literal> as boolean values
and requires all string values to be quoted:
<literal>"yes"</literal>/<literal>"no"</literal> as boolean values and
requires all string values to be quoted:
</para>
<programlisting>
@ -270,7 +258,9 @@ in customToINI {
}
</programlisting>
<para>This will produce the following INI file as nix string:</para>
<para>
This will produce the following INI file as a nix string:
</para>
<programlisting>
[main]
@ -284,89 +274,140 @@ str\:ange:"very::strange"
merge:"diff3"
</programlisting>
<note><para>Nix store paths can be converted to strings by enclosing a
derivation attribute like so: <code>"${drv}"</code>.</para></note>
<note>
<para>
Nix store paths can be converted to strings by enclosing a derivation
attribute like so: <code>"${drv}"</code>.
</para>
</note>
<para>
Detailed documentation for each generator can be found in
<literal>lib/generators.nix</literal>.
</para>
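<para>
  As a small illustration of the default behaviour (an empty attrset keeps all
  the default <literal>configFunctions</literal>; the data is made up for the
  example), an INI string can be generated like this:
</para>
<programlisting>
with import &lt;nixpkgs&gt; { };
# first argument: configFunctions ({ } keeps the defaults)
# second argument: the nested attrset to render; the result is a string
lib.generators.toINI { } {
  main = { greeting = "hello"; editor = "nano"; };
  extra = { merge = "diff3"; };
}
</programlisting>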
</section>
<section xml:id="sec-debug">
<title>Debugging Nix Expressions</title>
</section>
<para>
Nix is a unityped, dynamic language; this means every value can potentially
appear anywhere. Since it is also non-strict, evaluation order and what
ultimately is evaluated might surprise you. Therefore it is important to be
able to debug nix expressions.
</para>
<section xml:id="sec-fhs-environments">
<para>
In the <literal>lib/debug.nix</literal> file you will find a number of
functions that help with (pretty-)printing values while evaluation is running.
You can even specify how deep these values should be printed recursively,
and transform them on the fly. Please consult the docstrings in
<literal>lib/debug.nix</literal> for usage information.
</para>
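<para>
  A tiny sketch of what that looks like in practice (the helper name is taken
  from <literal>lib/debug.nix</literal>; consult its docstrings for the exact
  signatures and variants):
</para>
<programlisting>
with import &lt;nixpkgs&gt; { };
# builtins.trace prints its first argument and returns the second;
# lib.debug.traceSeq additionally forces the value first, so nested
# attributes are shown instead of unevaluated thunks.
lib.debug.traceSeq { answer = 6 * 7; } "evaluation continues here"
</programlisting>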
</section>
<section xml:id="sec-fhs-environments">
<title>buildFHSUserEnv</title>
<para>
<function>buildFHSUserEnv</function> provides a way to build and run
FHS-compatible lightweight sandboxes. It creates an isolated root with
bound <filename>/nix/store</filename>, so its footprint in terms of disk
space needed is quite small. This allows one to run software which is hard or
unfeasible to patch for NixOS -- 3rd-party source trees with FHS assumptions,
games distributed as tarballs, software with integrity checking and/or external
self-updated binaries. It uses Linux namespaces feature to create
temporary lightweight environments which are destroyed after all child
processes exit, without root user rights requirement. Accepted arguments are:
FHS-compatible lightweight sandboxes. It creates an isolated root with bound
<filename>/nix/store</filename>, so its footprint in terms of disk space
needed is quite small. This allows one to run software which is hard or
unfeasible to patch for NixOS -- 3rd-party source trees with FHS
assumptions, games distributed as tarballs, software with integrity checking
and/or external self-updated binaries. It uses the Linux namespaces feature
to create temporary lightweight environments which are destroyed after all
child processes exit, without requiring root privileges. Accepted
arguments are:
</para>
<variablelist>
<varlistentry>
<term><literal>name</literal></term>
<listitem><para>Environment name.</para></listitem>
<term>
<literal>name</literal>
</term>
<listitem>
<para>
Environment name.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>targetPkgs</literal></term>
<listitem><para>Packages to be installed for the main host's architecture
(i.e. x86_64 on x86_64 installations). Along with libraries binaries are also
installed.</para></listitem>
<term>
<literal>targetPkgs</literal>
</term>
<listitem>
<para>
Packages to be installed for the main host's architecture (i.e. x86_64 on
x86_64 installations). Along with libraries, binaries are also installed.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>multiPkgs</literal></term>
<listitem><para>Packages to be installed for all architectures supported by
a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are
installed by default.</para></listitem>
<term>
<literal>multiPkgs</literal>
</term>
<listitem>
<para>
Packages to be installed for all architectures supported by a host (i.e.
i686 and x86_64 on x86_64 installations). Only libraries are installed by
default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>extraBuildCommands</literal></term>
<listitem><para>Additional commands to be executed for finalizing the
directory structure.</para></listitem>
<term>
<literal>extraBuildCommands</literal>
</term>
<listitem>
<para>
Additional commands to be executed for finalizing the directory
structure.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>extraBuildCommandsMulti</literal></term>
<listitem><para>Like <literal>extraBuildCommands</literal>, but
executed only on multilib architectures.</para></listitem>
<term>
<literal>extraBuildCommandsMulti</literal>
</term>
<listitem>
<para>
Like <literal>extraBuildCommands</literal>, but executed only on multilib
architectures.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>extraOutputsToInstall</literal></term>
<listitem><para>Additional derivation outputs to be linked for both
target and multi-architecture packages.</para></listitem>
<term>
<literal>extraOutputsToInstall</literal>
</term>
<listitem>
<para>
Additional derivation outputs to be linked for both target and
multi-architecture packages.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>extraInstallCommands</literal></term>
<listitem><para>Additional commands to be executed for finalizing the
derivation with runner script.</para></listitem>
<term>
<literal>extraInstallCommands</literal>
</term>
<listitem>
<para>
Additional commands to be executed for finalizing the derivation with
runner script.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>runScript</literal></term>
<listitem><para>A command that would be executed inside the sandbox and
passed all the command line arguments. It defaults to
<literal>bash</literal>.</para></listitem>
<term>
<literal>runScript</literal>
</term>
<listitem>
<para>
A command that would be executed inside the sandbox and passed all the
command line arguments. It defaults to <literal>bash</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
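<para>
  A minimal sketch tying these arguments together (the package choices are
  arbitrary examples); the resulting derivation exposes an
  <literal>env</literal> attribute suitable for <command>nix-shell</command>:
</para>
<programlisting>
{ pkgs ? import &lt;nixpkgs&gt; { } }:

(pkgs.buildFHSUserEnv {
  name = "example-fhs-env";
  # binaries and libraries for the host architecture
  targetPkgs = pkgs: with pkgs; [ coreutils which ];
  # libraries installed for every supported architecture
  multiPkgs = pkgs: with pkgs; [ zlib ];
  runScript = "bash";
}).env
</programlisting>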
@ -400,47 +441,47 @@ merge:"diff3"
Running <literal>nix-shell</literal> would then drop you into a shell with
these libraries and binaries available. You can use this to run
closed-source applications which expect FHS structure without hassles:
simply change <literal>runScript</literal> to the application path,
e.g. <filename>./bin/start.sh</filename> -- relative paths are supported.
simply change <literal>runScript</literal> to the application path, e.g.
<filename>./bin/start.sh</filename> -- relative paths are supported.
</para>
</section>
</section>
<xi:include href="shell.section.xml" />
<section xml:id="sec-pkgs-dockerTools">
<title>pkgs.dockerTools</title>
<section xml:id="sec-pkgs-dockerTools">
<title>pkgs.dockerTools</title>
<para>
<para>
<varname>pkgs.dockerTools</varname> is a set of functions for creating and
manipulating Docker images according to the
<link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120">
Docker Image Specification v1.2.0
</link>. Docker itself is not used to perform any of the operations done by these
functions.
</para>
Docker Image Specification v1.2.0 </link>. Docker itself is not used to
perform any of the operations done by these functions.
</para>
<warning>
<warning>
<para>
The <varname>dockerTools</varname> API is unstable and may be subject to
backwards-incompatible changes in the future.
</para>
</warning>
</warning>
<section xml:id="ssec-pkgs-dockerTools-buildImage">
<section xml:id="ssec-pkgs-dockerTools-buildImage">
<title>buildImage</title>
<para>
This function is analogous to the <command>docker build</command> command,
in that it can be used to build a Docker-compatible repository tarball containing
a single image with one or multiple layers. As such, the result
is suitable for being loaded in Docker with <command>docker load</command>.
a single image with one or multiple layers. As such, the result is suitable
for being loaded in Docker with <command>docker load</command>.
</para>
<para>
The parameters of <varname>buildImage</varname> with relative example values are
described below:
The parameters of <varname>buildImage</varname> with relative example
values are described below:
</para>
<example xml:id='ex-dockerTools-buildImage'><title>Docker build</title>
<programlisting>
<example xml:id='ex-dockerTools-buildImage'>
<title>Docker build</title>
<programlisting>
buildImage {
name = "redis"; <co xml:id='ex-dockerTools-buildImage-1' />
tag = "latest"; <co xml:id='ex-dockerTools-buildImage-2' />
@ -466,99 +507,93 @@ merge:"diff3"
</programlisting>
</example>
<para>The above example will build a Docker image <literal>redis/latest</literal>
from the given base image. Loading and running this image in Docker results in
<literal>redis-server</literal> being started automatically.
<para>
The above example will build a Docker image <literal>redis/latest</literal>
from the given base image. Loading and running this image in Docker results
in <literal>redis-server</literal> being started automatically.
</para>
<calloutlist>
<callout arearefs='ex-dockerTools-buildImage-1'>
<para>
<varname>name</varname> specifies the name of the resulting image.
This is the only required argument for <varname>buildImage</varname>.
<varname>name</varname> specifies the name of the resulting image. This
is the only required argument for <varname>buildImage</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-2'>
<para>
<varname>tag</varname> specifies the tag of the resulting image.
By default it's <literal>latest</literal>.
<varname>tag</varname> specifies the tag of the resulting image. By
default it's <literal>null</literal>, which indicates that the nix output
hash will be used as the tag.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-3'>
<para>
<varname>fromImage</varname> is the repository tarball containing the base image.
It must be a valid Docker image, such as exported by <command>docker save</command>.
By default it's <literal>null</literal>, which can be seen as equivalent
to <literal>FROM scratch</literal> of a <filename>Dockerfile</filename>.
<varname>fromImage</varname> is the repository tarball containing the
base image. It must be a valid Docker image, such as exported by
<command>docker save</command>. By default it's <literal>null</literal>,
which can be seen as equivalent to <literal>FROM scratch</literal> of a
<filename>Dockerfile</filename>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-4'>
<para>
<varname>fromImageName</varname> can be used to further specify
the base image within the repository, in case it contains multiple images.
By default it's <literal>null</literal>, in which case
<varname>buildImage</varname> will peek the first image available
in the repository.
<varname>fromImageName</varname> can be used to further specify the base
image within the repository, in case it contains multiple images. By
default it's <literal>null</literal>, in which case
<varname>buildImage</varname> will pick the first image available in the
repository.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-5'>
<para>
<varname>fromImageTag</varname> can be used to further specify the tag
of the base image within the repository, in case an image contains multiple tags.
By default it's <literal>null</literal>, in which case
<varname>buildImage</varname> will peek the first tag available for the base image.
<varname>fromImageTag</varname> can be used to further specify the tag of
the base image within the repository, in case an image contains multiple
tags. By default it's <literal>null</literal>, in which case
<varname>buildImage</varname> will pick the first tag available for the
base image.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-6'>
<para>
<varname>contents</varname> is a derivation that will be copied in the new
layer of the resulting image. This can be similarly seen as
<varname>contents</varname> is a derivation that will be copied in the
new layer of the resulting image. This can be similarly seen as
<command>ADD contents/ /</command> in a <filename>Dockerfile</filename>.
By default it's <literal>null</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-runAsRoot'>
<para>
<varname>runAsRoot</varname> is a bash script that will run as root
in an environment that overlays the existing layers of the base image with
the new resulting layer, including the previously copied
<varname>contents</varname> derivation.
This can be similarly seen as
<varname>runAsRoot</varname> is a bash script that will run as root in an
environment that overlays the existing layers of the base image with the
new resulting layer, including the previously copied
<varname>contents</varname> derivation. This can be similarly seen as
<command>RUN ...</command> in a <filename>Dockerfile</filename>.
<note>
<para>
Using this parameter requires the <literal>kvm</literal>
device to be available.
Using this parameter requires the <literal>kvm</literal> device to be
available.
</para>
</note>
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-8'>
<para>
<varname>config</varname> is used to specify the configuration of the
containers that will be started off the built image in Docker.
The available options are listed in the
containers that will be started off the built image in Docker. The
available options are listed in the
<link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions">
Docker Image Specification v1.2.0
</link>.
Docker Image Specification v1.2.0 </link>.
</para>
</callout>
</calloutlist>
<para>
After the new layer has been created, its closure
(to which <varname>contents</varname>, <varname>config</varname> and
<varname>runAsRoot</varname> contribute) will be copied in the layer itself.
Only new dependencies that are not already in the existing layers will be copied.
After the new layer has been created, its closure (to which
<varname>contents</varname>, <varname>config</varname> and
<varname>runAsRoot</varname> contribute) will be copied in the layer
itself. Only new dependencies that are not already in the existing layers
will be copied.
</para>
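<para>
  As a usage sketch (the file name is hypothetical; the tag matches the
  example above), the tarball produced by <varname>buildImage</varname> can be
  handed straight to <command>docker load</command>:
</para>
<programlisting>
$ nix-build redis-image.nix        # ./result is the repository tarball
$ docker load --input ./result
$ docker run -it redis:latest
</programlisting>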
<para>
@ -568,58 +603,57 @@ merge:"diff3"
<para>
The resulting repository will only list the single image
<varname>image/tag</varname>. In the case of <xref linkend='ex-dockerTools-buildImage'/>
it would be <varname>redis/latest</varname>.
<varname>image/tag</varname>. In the case of
<xref linkend='ex-dockerTools-buildImage'/> it would be
<varname>redis/latest</varname>.
</para>
<para>
It is possible to inspect the arguments with which an image was built
using its <varname>buildArgs</varname> attribute.
It is possible to inspect the arguments with which an image was built using
its <varname>buildArgs</varname> attribute.
</para>
<note>
<para>
If you see errors similar to <literal>getProtocolByName: does not exist (no such protocol name: tcp)</literal>
you may need to add <literal>pkgs.iana-etc</literal> to <varname>contents</varname>.
If you see errors similar to <literal>getProtocolByName: does not exist
(no such protocol name: tcp)</literal> you may need to add
<literal>pkgs.iana-etc</literal> to <varname>contents</varname>.
</para>
</note>
<note>
<para>
If you see errors similar to <literal>Error_Protocol ("certificate has unknown CA",True,UnknownCa)</literal>
you may need to add <literal>pkgs.cacert</literal> to <varname>contents</varname>.
If you see errors similar to <literal>Error_Protocol ("certificate has
unknown CA",True,UnknownCa)</literal> you may need to add
<literal>pkgs.cacert</literal> to <varname>contents</varname>.
</para>
</note>
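<para>
  One way to address both notes at once is to bundle the extra packages with
  the main program via <varname>buildEnv</varname> and pass the result as
  <varname>contents</varname> (a sketch; the package choices are illustrative
  and <varname>pkgs</varname> is assumed to be in scope):
</para>
<programlisting>
buildImage {
  name = "redis-with-etc";
  contents = pkgs.buildEnv {
    name = "image-root";
    # iana-etc provides /etc/protocols, cacert provides the CA bundle
    paths = [ pkgs.redis pkgs.iana-etc pkgs.cacert ];
  };
}
</programlisting>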
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-fetchFromRegistry">
<section xml:id="ssec-pkgs-dockerTools-fetchFromRegistry">
<title>pullImage</title>
<para>
This function is analogous to the <command>docker pull</command> command,
in that can be used to fetch a Docker image from a Docker registry.
Currently only registry <literal>v1</literal> is supported.
By default <link xlink:href="https://hub.docker.com/">Docker Hub</link>
is used to pull images.
in that it can be used to pull a Docker image from a Docker registry. By
default <link xlink:href="https://hub.docker.com/">Docker Hub</link> is
used to pull images.
</para>
<para>
Its parameters are described in the example below:
</para>
<example xml:id='ex-dockerTools-pullImage'><title>Docker pull</title>
<programlisting>
<example xml:id='ex-dockerTools-pullImage'>
<title>Docker pull</title>
<programlisting>
pullImage {
imageName = "debian"; <co xml:id='ex-dockerTools-pullImage-1' />
imageTag = "jessie"; <co xml:id='ex-dockerTools-pullImage-2' />
imageId = null; <co xml:id='ex-dockerTools-pullImage-3' />
sha256 = "1bhw5hkz6chrnrih0ymjbmn69hyfriza2lr550xyvpdrnbzr4gk2"; <co xml:id='ex-dockerTools-pullImage-4' />
indexUrl = "https://index.docker.io"; <co xml:id='ex-dockerTools-pullImage-5' />
registryVersion = "v1";
imageName = "nixos/nix"; <co xml:id='ex-dockerTools-pullImage-1' />
imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b"; <co xml:id='ex-dockerTools-pullImage-2' />
finalImageTag = "1.11"; <co xml:id='ex-dockerTools-pullImage-3' />
sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8"; <co xml:id='ex-dockerTools-pullImage-4' />
os = "linux"; <co xml:id='ex-dockerTools-pullImage-5' />
arch = "x86_64"; <co xml:id='ex-dockerTools-pullImage-6' />
}
</programlisting>
</example>
@ -627,66 +661,72 @@ merge:"diff3"
<calloutlist>
<callout arearefs='ex-dockerTools-pullImage-1'>
<para>
<varname>imageName</varname> specifies the name of the image to be downloaded,
which can also include the registry namespace (e.g. <literal>library/debian</literal>).
<varname>imageName</varname> specifies the name of the image to be
downloaded, which can also include the registry namespace (e.g.
<literal>nixos</literal>). This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-2'>
<para>
<varname>imageDigest</varname> specifies the digest of the image to be
downloaded. Skopeo can be used to get the digest of an image, with its
<varname>inspect</varname> subcommand. Since a given
<varname>imageName</varname> may transparently refer to a manifest list
of images which support multiple architectures and/or operating systems,
supply the <literal>--override-os</literal> and <literal>--override-arch</literal> arguments to specify
exactly which image you want. By default it will match the OS and
architecture of the host the command is run on.
<programlisting>
$ nix-shell --packages skopeo jq --command "skopeo --override-os linux --override-arch x86_64 inspect docker://docker.io/nixos/nix:1.11 | jq -r '.Digest'"
sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b
</programlisting>
This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-2'>
<para>
<varname>imageTag</varname> specifies the tag of the image to be downloaded.
By default it's <literal>latest</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-3'>
<para>
<varname>imageId</varname>, if specified this exact image will be fetched, instead
of <varname>imageName/imageTag</varname>. However, the resulting repository
will still be named <varname>imageName/imageTag</varname>.
By default it's <literal>null</literal>.
<varname>finalImageTag</varname>, if specified, is the tag of the
image to be created. Note it is never used to fetch the image since we
prefer to rely on the immutable digest ID. By default it's
<literal>latest</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-4'>
<para>
<varname>sha256</varname> is the checksum of the whole fetched image.
This argument is required.
</para>
<note>
<para>The checksum is computed on the unpacked directory, not on the final tarball.</para>
</note>
</callout>
<callout arearefs='ex-dockerTools-pullImage-5'>
<para>
In the above example the default values are shown for the variables
<varname>indexUrl</varname> and <varname>registryVersion</varname>.
Hence by default the Docker.io registry is used to pull the images.
<varname>os</varname>, if specified, is the operating system of the
fetched image. By default it's <literal>linux</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-6'>
<para>
<varname>arch</varname>, if specified, is the cpu architecture of the
fetched image. By default it's <literal>x86_64</literal>.
</para>
</callout>
</calloutlist>
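<para>
  The value returned by <varname>pullImage</varname> is a repository tarball
  in the Nix store, so a common pattern is to feed it to
  <varname>buildImage</varname> as the base layer. A sketch (the fetch
  attributes repeat the example above; <varname>dockerTools</varname> and
  <varname>pkgs</varname> are assumed to be in scope):
</para>
<programlisting>
let
  nixBase = dockerTools.pullImage {
    imageName = "nixos/nix";
    imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
    finalImageTag = "1.11";
    sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  };
in dockerTools.buildImage {
  name = "nix-plus-tools";
  fromImage = nixBase;      # use the pulled image as the base layer
  contents = pkgs.gitMinimal;
}
</programlisting>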
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-exportImage">
<section xml:id="ssec-pkgs-dockerTools-exportImage">
<title>exportImage</title>
<para>
This function is analogous to the <command>docker export</command> command,
in that it can be used to flatten a Docker image that contains multiple layers.
It is in fact the result of the merge of all the layers of the image.
As such, the result is suitable for being imported in Docker
with <command>docker import</command>.
It is in fact the result of the merge of all the layers of the image. As
such, the result is suitable for being imported in Docker with
<command>docker import</command>.
</para>
<note>
<para>
Using this function requires the <literal>kvm</literal>
device to be available.
Using this function requires the <literal>kvm</literal> device to be
available.
</para>
</note>
@ -694,8 +734,9 @@ merge:"diff3"
The parameters of <varname>exportImage</varname> are the following:
</para>
<example xml:id='ex-dockerTools-exportImage'><title>Docker export</title>
<programlisting>
<example xml:id='ex-dockerTools-exportImage'>
<title>Docker export</title>
<programlisting>
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
@ -708,29 +749,31 @@ merge:"diff3"
<para>
The parameters relative to the base image have the same synopsis as
described in <xref linkend='ssec-pkgs-dockerTools-buildImage'/>, except that
<varname>fromImage</varname> is the only required argument in this case.
described in <xref linkend='ssec-pkgs-dockerTools-buildImage'/>, except
that <varname>fromImage</varname> is the only required argument in this
case.
</para>
<para>
The <varname>name</varname> argument is the name of the derivation output,
which defaults to <varname>fromImage.name</varname>.
</para>
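<para>
  A usage sketch (the file and image names are hypothetical): since the result
  is a flattened filesystem tarball, it is imported rather than loaded.
</para>
<programlisting>
$ nix-build flat-image.nix         # ./result is the flattened tarball
$ docker import ./result my-app:flattened
</programlisting>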
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-shadowSetup">
<section xml:id="ssec-pkgs-dockerTools-shadowSetup">
<title>shadowSetup</title>
<para>
This constant string is a helper for setting up the base files for managing
users and groups, only if such files don't exist already.
It is suitable for being used in a
<varname>runAsRoot</varname> <xref linkend='ex-dockerTools-buildImage-runAsRoot'/> script for cases like
users and groups, only if such files don't exist already. It is suitable
for being used in a <varname>runAsRoot</varname>
<xref linkend='ex-dockerTools-buildImage-runAsRoot'/> script for cases like
in the example below:
</para>
<example xml:id='ex-dockerTools-shadowSetup'><title>Shadow base files</title>
<programlisting>
<example xml:id='ex-dockerTools-shadowSetup'>
<title>Shadow base files</title>
<programlisting>
buildImage {
name = "shadow-basic";
@ -751,9 +794,6 @@ merge:"diff3"
<literal>/etc/login.defs</literal> are necessary for shadow-utils to
manipulate users and groups.
</para>
</section>
</section>
</section>
</section>
</chapter>

@ -30,7 +30,7 @@ Packages, including the Nix packages collection, are distributed through
distributed for users of Nix on non-NixOS distributions through the channel
`nixpkgs`. Users of NixOS generally use one of the `nixos-*` channels, e.g.
`nixos-16.03`, which includes all packages and modules for the stable NixOS
16.03. The purpose of stable NixOS releases are generally only given
16.03. Stable NixOS releases are generally only given
security updates. More up to date packages and modules are available via the
`nixos-unstable` channel.

@ -1,30 +1,34 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-beam">
<title>BEAM Languages (Erlang, Elixir &amp; LFE)</title>
<section xml:id="beam-introduction">
<title>Introduction</title>
<para>
In this document and related Nix expressions, we use the term,
<emphasis>BEAM</emphasis>, to describe the environment. BEAM is the name
of the Erlang Virtual Machine and, as far as we're concerned, from a
packaging perspective, all languages that run on the BEAM are
interchangeable. That which varies, like the build system, is transparent
to users of any given BEAM package, so we make no distinction.
<emphasis>BEAM</emphasis>, to describe the environment. BEAM is the name of
the Erlang Virtual Machine and, as far as we're concerned, from a packaging
perspective, all languages that run on the BEAM are interchangeable. That
which varies, like the build system, is transparent to users of any given
BEAM package, so we make no distinction.
</para>
</section>
<section xml:id="beam-structure">
<title>Structure</title>
<para>
All BEAM-related expressions are available via the top-level
<literal>beam</literal> attribute, which includes:
</para>
<itemizedlist>
<listitem>
<para>
<literal>interpreters</literal>: a set of compilers running on the
BEAM, including multiple Erlang/OTP versions
<literal>interpreters</literal>: a set of compilers running on the BEAM,
including multiple Erlang/OTP versions
(<literal>beam.interpreters.erlangR19</literal>, etc), Elixir
(<literal>beam.interpreters.elixir</literal>) and LFE
(<literal>beam.interpreters.lfe</literal>).
@ -32,12 +36,13 @@
</listitem>
<listitem>
<para>
<literal>packages</literal>: a set of package sets, each compiled with
a specific Erlang/OTP version, e.g.
<literal>packages</literal>: a set of package sets, each compiled with a
specific Erlang/OTP version, e.g.
<literal>beam.packages.erlangR19</literal>.
</para>
</listitem>
</itemizedlist>
<para>
The default Erlang compiler, defined by
<literal>beam.interpreters.erlang</literal>, is aliased as
@ -45,19 +50,22 @@
<literal>beam.packages.erlang</literal> and aliased at the top level as
<literal>beamPackages</literal>.
</para>
<para>
To create a package set built with a custom Erlang version, use the
lambda, <literal>beam.packagesWith</literal>, which accepts an Erlang/OTP
derivation and produces a package set similar to
To create a package set built with a custom Erlang version, use the lambda,
<literal>beam.packagesWith</literal>, which accepts an Erlang/OTP derivation
and produces a package set similar to
<literal>beam.packages.erlang</literal>.
</para>
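<para>
  A minimal sketch of <literal>beam.packagesWith</literal> (the interpreter
  and package picked here are only examples):
</para>
<programlisting>
with import &lt;nixpkgs&gt; { };
let
  # build the BEAM package set against a specific Erlang/OTP derivation
  packagesR19 = beam.packagesWith beam.interpreters.erlangR19;
in
  packagesR19.ibrowse
</programlisting>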
<para>
Many Erlang/OTP distributions available in
<literal>beam.interpreters</literal> have versions with ODBC and/or Java
enabled. For example, there's
<literal>beam.interpreters.erlangR19_odbc_javac</literal>, which
corresponds to <literal>beam.interpreters.erlangR19</literal>.
<literal>beam.interpreters.erlangR19_odbc_javac</literal>, which corresponds
to <literal>beam.interpreters.erlangR19</literal>.
</para>
<para xml:id="erlang-call-package">
We also provide the lambda,
<literal>beam.packages.erlang.callPackage</literal>, which simplifies
@ -65,10 +73,13 @@
<literal>beam.packages.erlang</literal> into the top-level context.
</para>
</section>
<section xml:id="build-tools">
<title>Build Tools</title>
<section xml:id="build-tools-rebar3">
<title>Rebar3</title>
<para>
By default, Rebar3 wants to manage its own dependencies. This is perfectly
acceptable in the normal, non-Nix setup, but in the Nix world, it is not.
@ -84,17 +95,20 @@
</listitem>
<listitem>
<para>
<literal>rebar3-open</literal>: the normal, unmodified Rebar3. It
should work exactly as would any other version of Rebar3. Any Erlang
package should rely on <literal>rebar3</literal> instead. See <xref
<literal>rebar3-open</literal>: the normal, unmodified Rebar3. It should
work exactly as would any other version of Rebar3. Any Erlang package
should rely on <literal>rebar3</literal> instead. See
<xref
linkend="rebar3-packages"/>.
</para>
</listitem>
</itemizedlist>
</para>
</section>
<section xml:id="build-tools-other">
<title>Mix &amp; Erlang.mk</title>
<para>
Both Mix and Erlang.mk work exactly as expected. There is a bootstrap
process that needs to be run for both, however, which is supported by the
@ -102,23 +116,22 @@
derivations, respectively.
</para>
</section>
</section>
</section>
<section xml:id="how-to-install-beam-packages">
<section xml:id="how-to-install-beam-packages">
<title>How to Install BEAM Packages</title>
<para>
BEAM packages are not registered at the top level, simply because they are
not relevant to the vast majority of Nix users. They are installable using
the <literal>beam.packages.erlang</literal> attribute set (aliased as
<literal>beamPackages</literal>), which points to packages built by the
default Erlang/OTP version in Nixpkgs, as defined by
<literal>beam.interpreters.erlang</literal>.
To list the available packages in
<literal>beamPackages</literal>, use the following command:
<literal>beam.interpreters.erlang</literal>. To list the available packages
in <literal>beamPackages</literal>, use the following command:
</para>
<programlisting>
<programlisting>
$ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -qaP -A beamPackages
beamPackages.esqlite esqlite-0.2.1
beamPackages.goldrush goldrush-0.1.7
@ -128,34 +141,43 @@ beamPackages.lager lager-3.0.2
beamPackages.meck meck-0.8.3
beamPackages.rebar3-pc pc-1.1.0
</programlisting>
<para>
To install any of those packages into your profile, refer to them by their
attribute path (first column):
</para>
<programlisting>
<programlisting>
$ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</programlisting>
<para>
The attribute path of any BEAM package corresponds to the name of that
particular package in <link xlink:href="https://hex.pm">Hex</link> or its
OTP Application/Release name.
</para>
</section>
<section xml:id="packaging-beam-applications">
</section>
<section xml:id="packaging-beam-applications">
<title>Packaging BEAM Applications</title>
<section xml:id="packaging-erlang-applications">
<title>Erlang Applications</title>
<section xml:id="rebar3-packages">
<title>Rebar3 Packages</title>
<para>
The Nix function, <literal>buildRebar3</literal>, defined in
<literal>beam.packages.erlang.buildRebar3</literal> and aliased at the
top level, can be used to build a derivation that understands how to
build a Rebar3 project. For example, we can build <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> as
follows:
<literal>beam.packages.erlang.buildRebar3</literal> and aliased at the top
level, can be used to build a derivation that understands how to build a
Rebar3 project. For example, we can build
<link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>
as follows:
</para>
<programlisting>
<programlisting>
{ stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 rec {
@ -172,33 +194,40 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
beamDeps = [ ibrowse jsx erlware_commons ];
}
</programlisting>
<para>
Such derivations are callable with
<literal>beam.packages.erlang.callPackage</literal> (see <xref
linkend="erlang-call-package"/>). To call this package using the normal
<literal>callPackage</literal>, refer to dependency packages via
<literal>beamPackages</literal>, e.g.
<literal>beam.packages.erlang.callPackage</literal> (see
<xref
linkend="erlang-call-package"/>). To call this package using
the normal <literal>callPackage</literal>, refer to dependency packages
via <literal>beamPackages</literal>, e.g.
<literal>beamPackages.ibrowse</literal>.
</para>
<para>
Notably, <literal>buildRebar3</literal> includes
<literal>beamDeps</literal>, while
<literal>stdenv.mkDerivation</literal> does not. BEAM dependencies added
there will be correctly handled by the system.
<literal>beamDeps</literal>, while <literal>stdenv.mkDerivation</literal>
does not. BEAM dependencies added there will be correctly handled by the
system.
</para>
<para>
If a package needs to compile native code via Rebar3's port compilation
mechanism, add <literal>compilePort = true;</literal> to the derivation.
</para>
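<para>
  As a sketch (the attributes are placeholders for a hypothetical project
  with native code under <filename>c_src/</filename>):
</para>
<programlisting>
{ buildRebar3 }:

buildRebar3 {
  name = "example-nif";
  version = "0.1.0";
  src = ./.;             # project root containing rebar.config and c_src/
  compilePort = true;    # run Rebar3's port compiler for the native code
  beamDeps = [ ];
}
</programlisting>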
</section>
<section xml:id="erlang-mk-packages">
<title>Erlang.mk Packages</title>
<para>
Erlang.mk functions similarly to Rebar3, except we use
<literal>buildErlangMk</literal> instead of
<literal>buildRebar3</literal>.
</para>
<programlisting>
<programlisting>
{ buildErlangMk, fetchHex, cowlib, ranch }:
buildErlangMk {
@ -223,13 +252,16 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
}
</programlisting>
</section>
<section xml:id="mix-packages">
<title>Mix Packages</title>
<para>
Mix functions similarly to Rebar3, except we use
<literal>buildMix</literal> instead of <literal>buildRebar3</literal>.
</para>
<programlisting>
<programlisting>
{ buildMix, fetchHex, plug, absinthe }:
buildMix {
@ -253,10 +285,12 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
};
}
</programlisting>
<para>
Alternatively, we can use <literal>buildHex</literal> as a shortcut:
</para>
<programlisting>
<programlisting>
{ buildHex, buildMix, plug, absinthe }:
buildHex {
@ -280,19 +314,23 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</programlisting>
</section>
</section>
</section>
<section xml:id="how-to-develop">
</section>
<section xml:id="how-to-develop">
<title>How to Develop</title>
<section xml:id="accessing-an-environment">
<title>Accessing an Environment</title>
<para>
Often, we simply want to access a valid environment that contains a
specific package and its dependencies. We can accomplish that with the
<literal>env</literal> attribute of a derivation. For example, let's say
we want to access an Erlang REPL with <literal>ibrowse</literal> loaded
up. We could do the following:
<literal>env</literal> attribute of a derivation. For example, let's say we
want to access an Erlang REPL with <literal>ibrowse</literal> loaded up. We
could do the following:
</para>
<programlisting>
<programlisting>
$ nix-shell -A beamPackages.ibrowse.env --run "erl"
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
@ -333,22 +371,25 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
ok
2>
</programlisting>
<para>
Notice the <literal>-A beamPackages.ibrowse.env</literal>. That is the key
to this functionality.
</para>
</section>
<section xml:id="creating-a-shell">
<title>Creating a Shell</title>
<para>
Getting access to an environment often isn't enough to do real
development. Usually, we need to create a <literal>shell.nix</literal>
file and do our development inside of the environment specified therein.
This file looks a lot like the packaging described above, except that
<literal>src</literal> points to the project root and we call the package
directly.
Getting access to an environment often isn't enough to do real development.
Usually, we need to create a <literal>shell.nix</literal> file and do our
development inside of the environment specified therein. This file looks a
lot like the packaging described above, except that <literal>src</literal>
points to the project root and we call the package directly.
</para>
<programlisting>
<programlisting>
{ pkgs ? import &lt;nixpkgs&gt; {} }:
with pkgs;
@ -368,13 +409,16 @@ in
drv
</programlisting>
<section xml:id="building-in-a-shell">
<title>Building in a Shell (for Mix Projects)</title>
<para>
We can leverage the support of the derivation, irrespective of the build
derivation, by calling the commands themselves.
</para>
<programlisting>
<programlisting>
# =============================================================================
# Variables
# =============================================================================
@ -431,44 +475,54 @@ analyze: build plt
$(NIX_SHELL) --run "mix dialyzer --no-compile"
</programlisting>
<para>
Using a <literal>shell.nix</literal> as described (see <xref
Using a <literal>shell.nix</literal> as described (see
<xref
linkend="creating-a-shell"/>) should just work. Aside from
<literal>test</literal>, <literal>plt</literal>, and
<literal>analyze</literal>, the Make targets work just fine for all of the
build derivations.
</para>
</section>
</section>
</section>
<section xml:id="generating-packages-from-hex-with-hex2nix">
</section>
</section>
<section xml:id="generating-packages-from-hex-with-hex2nix">
<title>Generating Packages from Hex with <literal>hex2nix</literal></title>
<para>
Updating the <link xlink:href="https://hex.pm">Hex</link> package set
requires <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>. Given the
path to the Erlang modules (usually
requires
<link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>.
Given the path to the Erlang modules (usually
<literal>pkgs/development/erlang-modules</literal>), it will dump a file
called <literal>hex-packages.nix</literal>, containing all the packages that
use a recognized build system in <link
xlink:href="https://hex.pm">Hex</link>. It can't be determined, however,
whether every package is buildable.
use a recognized build system in
<link
xlink:href="https://hex.pm">Hex</link>. It can't be determined,
however, whether every package is buildable.
</para>
<para>
To make life easier for our users, try to build every <link
xlink:href="https://hex.pm">Hex</link> package and remove those that fail.
To do that, simply run the following command in the root of your
To make life easier for our users, try to build every
<link
xlink:href="https://hex.pm">Hex</link> package and remove those
that fail. To do that, simply run the following command in the root of your
<literal>nixpkgs</literal> repository:
</para>
<programlisting>
<programlisting>
$ nix-build -A beamPackages
</programlisting>
<para>
That will attempt to build every package in
<literal>beamPackages</literal>. Then manually remove those that fail.
Hopefully, someone will improve <link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link> in the
future to automate the process.
That will attempt to build every package in <literal>beamPackages</literal>.
Then manually remove those that fail. Hopefully, someone will improve
<link
xlink:href="https://github.com/erlang-nix/hex2nix">hex2nix</link>
in the future to automate the process.
</para>
</section>
</section>
</section>

@ -1,40 +1,37 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-bower">
<title>Bower</title>
<title>Bower</title>
<para>
<link xlink:href="http://bower.io">Bower</link> is a package manager for web
site front-end components. Bower packages (comprising build artefacts and
sometimes sources) are stored in <command>git</command> repositories,
typically on GitHub. The package registry is run by the Bower team with
package metadata coming from the <filename>bower.json</filename> file within
each package.
</para>
<para>
<link xlink:href="http://bower.io">Bower</link> is a package manager
for web site front-end components. Bower packages (comprising of
build artefacts and sometimes sources) are stored in
<command>git</command> repositories, typically on Github. The
package registry is run by the Bower team with package metadata
coming from the <filename>bower.json</filename> file within each
package.
</para>
<para>
The end result of running Bower is a <filename>bower_components</filename>
directory which can be included in the web app's build process.
</para>
<para>
The end result of running Bower is a
<filename>bower_components</filename> directory which can be included
in the web app's build process.
</para>
<para>
<para>
Bower can be run interactively, by installing
<varname>nodePackages.bower</varname>. More interestingly, the Bower
components can be declared in a Nix derivation, with the help of
<varname>nodePackages.bower2nix</varname>.
</para>
</para>
<section xml:id="ssec-bower2nix-usage">
<section xml:id="ssec-bower2nix-usage">
<title><command>bower2nix</command> usage</title>
<para>
Suppose you have a <filename>bower.json</filename> with the following contents:
<example xml:id="ex-bowerJson"><title><filename>bower.json</filename></title>
<para>
Suppose you have a <filename>bower.json</filename> with the following
contents:
<example xml:id="ex-bowerJson">
<title><filename>bower.json</filename></title>
<programlisting language="json">
<![CDATA[{
"name": "my-web-app",
@ -44,14 +41,12 @@
}
}]]>
</programlisting>
</example>
</para>
</example>
</para>
<para>
<para>
Running <command>bower2nix</command> will produce something like the
following output:
<programlisting language="nix">
<![CDATA[{ fetchbower, buildEnv }:
buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
@ -60,31 +55,31 @@ buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
(fetchbower "jquery" "2.2.2" "1.9.1 - 2" "10sp5h98sqwk90y4k6hbdviwqzvzwqf47r3r51pakch5ii2y7js1")
]; }]]>
</programlisting>
</para>
<para>
Using the <command>bower2nix</command> command line arguments, the
output can be redirected to a file. A name like
<filename>bower-packages.nix</filename> would be fine.
</para>
<para>
The resulting derivation is a union of all the downloaded Bower
packages (and their dependencies). To use it, they still need to be
linked together by Bower, which is where
<varname>buildBowerComponents</varname> is useful.
</para>
</section>
<section xml:id="ssec-build-bower-components"><title><varname>buildBowerComponents</varname> function</title>
</para>
<para>
The function is implemented in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/bower-modules/generic/default.nix">
Using the <command>bower2nix</command> command line arguments, the output
can be redirected to a file. A name like
<filename>bower-packages.nix</filename> would be fine.
</para>
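<para>
  One plausible invocation, assuming <command>bower2nix</command> accepts the
  input and output paths as positional arguments (check
  <command>bower2nix --help</command> for the exact interface), would be:
<programlisting>
$ bower2nix bower.json bower-packages.nix
</programlisting>
</para>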
<para>
The resulting derivation is a union of all the downloaded Bower packages
(and their dependencies). To use it, they still need to be linked together
by Bower, which is where <varname>buildBowerComponents</varname> is useful.
</para>
</section>
<section xml:id="ssec-build-bower-components">
<title><varname>buildBowerComponents</varname> function</title>
<para>
The function is implemented in
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/bower-modules/generic/default.nix">
<filename>pkgs/development/bower-modules/generic/default.nix</filename></link>.
Example usage:
<example xml:id="ex-buildBowerComponents"><title>buildBowerComponents</title>
<example xml:id="ex-buildBowerComponents">
<title>buildBowerComponents</title>
<programlisting language="nix">
bowerComponents = buildBowerComponents {
name = "my-web-app";
@ -92,42 +87,42 @@ bowerComponents = buildBowerComponents {
src = myWebApp; <co xml:id="ex-buildBowerComponents-2" />
};
</programlisting>
</example>
</example>
</para>
<para>
In <xref linkend="ex-buildBowerComponents" />, the following arguments
are of special significance to the function:
<calloutlist>
<para>
In <xref linkend="ex-buildBowerComponents" />, the following arguments are
of special significance to the function:
<calloutlist>
<callout arearefs="ex-buildBowerComponents-1">
<para>
<varname>generated</varname> specifies the file which was created by <command>bower2nix</command>.
<varname>generated</varname> specifies the file which was created by
<command>bower2nix</command>.
</para>
</callout>
<callout arearefs="ex-buildBowerComponents-2">
<para>
<varname>src</varname> is your project's sources. It needs to
contain a <filename>bower.json</filename> file.
<varname>src</varname> is your project's sources. It needs to contain a
<filename>bower.json</filename> file.
</para>
</callout>
</calloutlist>
</para>
</calloutlist>
</para>
<para>
<varname>buildBowerComponents</varname> will run Bower to link
together the output of <command>bower2nix</command>, resulting in a
<para>
<varname>buildBowerComponents</varname> will run Bower to link together the
output of <command>bower2nix</command>, resulting in a
<filename>bower_components</filename> directory which can be used.
</para>
</para>
<para>
<para>
Here is an example of a web frontend build process using
<command>gulp</command>. You might use <command>grunt</command>, or
anything else.
</para>
<command>gulp</command>. You might use <command>grunt</command>, or anything
else.
</para>
<example xml:id="ex-bowerGulpFile"><title>Example build script (<filename>gulpfile.js</filename>)</title>
<example xml:id="ex-bowerGulpFile">
<title>Example build script (<filename>gulpfile.js</filename>)</title>
<programlisting language="javascript">
<![CDATA[var gulp = require('gulp');
@ -142,9 +137,9 @@ gulp.task('build', [], function () {
.pipe(gulp.dest("./gulpdist/"));
});]]>
</programlisting>
</example>
</example>
<example xml:id="ex-buildBowerComponentsDefaultNix">
<example xml:id="ex-buildBowerComponentsDefaultNix">
<title>Full example — <filename>default.nix</filename></title>
<programlisting language="nix">
{ myWebApp ? { outPath = ./.; name = "myWebApp"; }
@ -172,59 +167,51 @@ pkgs.stdenv.mkDerivation {
installPhase = "mv gulpdist $out";
}
</programlisting>
</example>
</example>
<para>
A few notes about <xref linkend="ex-buildBowerComponentsDefaultNix" />:
<calloutlist>
<para>
A few notes about <xref linkend="ex-buildBowerComponentsDefaultNix" />:
<calloutlist>
<callout arearefs="ex-buildBowerComponentsDefault-1">
<para>
The result of <varname>buildBowerComponents</varname> is an
input to the frontend build.
The result of <varname>buildBowerComponents</varname> is an input to the
frontend build.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-2">
<para>
Whether to symlink or copy the
<filename>bower_components</filename> directory depends on the
build tool in use. In this case a copy is used to avoid
<command>gulp</command> silliness with permissions.
Whether to symlink or copy the <filename>bower_components</filename>
directory depends on the build tool in use. In this case a copy is used
to avoid <command>gulp</command> silliness with permissions.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-3">
<para>
<command>gulp</command> requires <varname>HOME</varname> to
refer to a writeable directory.
<command>gulp</command> requires <varname>HOME</varname> to refer to a
writeable directory.
</para>
</callout>
<callout arearefs="ex-buildBowerComponentsDefault-4">
<para>
The actual build command. Other tools could be used.
</para>
</callout>
</calloutlist>
</para>
</section>
</calloutlist>
</para>
</section>
<section xml:id="ssec-bower2nix-troubleshooting">
<section xml:id="ssec-bower2nix-troubleshooting">
<title>Troubleshooting</title>
<variablelist>
<variablelist>
<varlistentry>
<term>
<literal>ENOCACHE</literal> errors from
<varname>buildBowerComponents</varname>
<literal>ENOCACHE</literal> errors from <varname>buildBowerComponents</varname>
</term>
<listitem>
<para>
This means that Bower was looking for a package version which
doesn't exist in the generated
<filename>bower-packages.nix</filename>.
This means that Bower was looking for a package version which doesn't
exist in the generated <filename>bower-packages.nix</filename>.
</para>
<para>
If <filename>bower.json</filename> has been updated, then run
@ -232,13 +219,11 @@ A few notes about <xref linkend="ex-buildBowerComponentsDefaultNix" />:
</para>
<para>
It could also be a bug in <command>bower2nix</command> or
<command>fetchbower</command>. If possible, try reformulating
the version specification in <filename>bower.json</filename>.
<command>fetchbower</command>. If possible, try reformulating the version
specification in <filename>bower.json</filename>.
</para>
</listitem>
</varlistentry>
</variablelist>
</section>
</variablelist>
</section>
</section>

View File

@ -1,36 +1,38 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-coq">
<title>Coq</title>
<title>Coq</title>
<para>
Coq libraries should be installed in
<literal>$(out)/lib/coq/${coq.coq-version}/user-contrib/</literal>.
Such directories are automatically added to the
<literal>$COQPATH</literal> environment variable by the hook defined
in the Coq derivation.
<literal>$(out)/lib/coq/${coq.coq-version}/user-contrib/</literal>. Such
directories are automatically added to the <literal>$COQPATH</literal>
environment variable by the hook defined in the Coq derivation.
</para>
<para>
Some libraries require OCaml and sometimes also Camlp5 or findlib.
The exact versions that were used to build Coq are saved in the
<literal>coq.ocaml</literal> and <literal>coq.camlp5</literal>
and <literal>coq.findlib</literal> attributes.
Some libraries require OCaml and sometimes also Camlp5 or findlib. The exact
versions that were used to build Coq are saved in the
<literal>coq.ocaml</literal>, <literal>coq.camlp5</literal> and
<literal>coq.findlib</literal> attributes.
</para>
<para>
Coq libraries may be compatible with some specific versions of Coq only.
The <literal>compatibleCoqVersions</literal> attribute is used to
precisely select those versions of Coq that are compatible with this
derivation.
Coq libraries may be compatible with some specific versions of Coq only. The
<literal>compatibleCoqVersions</literal> attribute is used to precisely
select those versions of Coq that are compatible with this derivation.
</para>
<para>
Here is a simple package example. It is a pure Coq library, thus it
depends on Coq. It builds on the Mathematical Components library, thus it
also takes <literal>mathcomp</literal> as <literal>buildInputs</literal>.
Its <literal>Makefile</literal> has been generated using
<literal>coq_makefile</literal> so we only have to
set the <literal>$COQLIB</literal> variable at install time.
Here is a simple package example. It is a pure Coq library, thus it depends
on Coq. It builds on the Mathematical Components library, thus it also takes
<literal>mathcomp</literal> as <literal>buildInputs</literal>. Its
<literal>Makefile</literal> has been generated using
<literal>coq_makefile</literal> so we only have to set the
<literal>$COQLIB</literal> variable at install time.
</para>
<programlisting>
<programlisting>
{ stdenv, fetchFromGitHub, coq, mathcomp }:
stdenv.mkDerivation rec {

View File

@ -1,14 +1,14 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-go">
<title>Go</title>
<title>Go</title>
<para>
The function <varname>buildGoPackage</varname> builds standard Go programs.
</para>
<para>The function <varname>buildGoPackage</varname> builds
standard Go programs.
</para>
<example xml:id='ex-buildGoPackage'><title>buildGoPackage</title>
<example xml:id='ex-buildGoPackage'>
<title>buildGoPackage</title>
<programlisting>
deis = buildGoPackage rec {
name = "deis-${version}";
@ -29,55 +29,56 @@ deis = buildGoPackage rec {
buildFlags = "--tags release"; <co xml:id='ex-buildGoPackage-4' />
}
</programlisting>
</example>
<para><xref linkend='ex-buildGoPackage'/> is an example expression using buildGoPackage,
the following arguments are of special significance to the function:
<calloutlist>
</example>
<para>
<xref linkend='ex-buildGoPackage'/> is an example expression using
buildGoPackage; the following arguments are of special significance to the
function:
<calloutlist>
<callout arearefs='ex-buildGoPackage-1'>
<para>
<varname>goPackagePath</varname> specifies the package's canonical Go import path.
<varname>goPackagePath</varname> specifies the package's canonical Go
import path.
</para>
</callout>
<callout arearefs='ex-buildGoPackage-2'>
<para>
<varname>subPackages</varname> limits the builder from building child packages that
have not been listed. If <varname>subPackages</varname> is not specified, all child
packages will be built.
<varname>subPackages</varname> restricts the builder to the listed child
packages. If <varname>subPackages</varname> is
not specified, all child packages will be built.
</para>
<para>
In this example only <literal>github.com/deis/deis/client</literal> will be built.
In this example only <literal>github.com/deis/deis/client</literal> will
be built.
</para>
</callout>
<callout arearefs='ex-buildGoPackage-3'>
<para>
<varname>goDeps</varname> is where the Go dependencies of a Go program are listed
as a list of package source identified by Go import path.
It could be imported as a separate <varname>deps.nix</varname> file for
<varname>goDeps</varname> is where the Go dependencies of a Go program are
listed as a list of package sources identified by Go import path. It could
be imported as a separate <varname>deps.nix</varname> file for
readability. The dependency data structure is described below.
</para>
</callout>
<callout arearefs='ex-buildGoPackage-4'>
<para>
<varname>buildFlags</varname> is a list of flags passed to the go build command.
<varname>buildFlags</varname> is a list of flags passed to the go build
command.
</para>
</callout>
</calloutlist>
</para>
</calloutlist>
<para>
The <varname>goDeps</varname> attribute can be imported from a separate
<varname>nix</varname> file that defines which Go libraries are needed and
should be included in <varname>GOPATH</varname> for
<varname>buildPhase</varname>.
</para>
</para>
<para>The <varname>goDeps</varname> attribute can be imported from a separate
<varname>nix</varname> file that defines which Go libraries are needed and should
be included in <varname>GOPATH</varname> for <varname>buildPhase</varname>.
</para>
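<para>
  In the package expression this usually amounts to a single attribute, as in
  this minimal sketch:
<programlisting>
goDeps = ./deps.nix;
</programlisting>
</para>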
<example xml:id='ex-goDeps'><title>deps.nix</title>
<example xml:id='ex-goDeps'>
<title>deps.nix</title>
<programlisting>
[ <co xml:id='ex-goDeps-1' />
{
@ -100,67 +101,60 @@ the following arguments are of special significance to the function:
}
]
</programlisting>
</example>
<para>
<calloutlist>
</example>
<para>
<calloutlist>
<callout arearefs='ex-goDeps-1'>
<para>
<varname>goDeps</varname> is a list of Go dependencies.
</para>
</callout>
<callout arearefs='ex-goDeps-2'>
<para>
<varname>goPackagePath</varname> specifies Go package import path.
</para>
</callout>
<callout arearefs='ex-goDeps-3'>
<para>
<varname>fetch type</varname> that needs to be used to get package source. If <varname>git</varname>
is used there should be <varname>url</varname>, <varname>rev</varname> and <varname>sha256</varname>
defined next to it.
<varname>fetch type</varname> that needs to be used to get package source.
If <varname>git</varname> is used there should be <varname>url</varname>,
<varname>rev</varname> and <varname>sha256</varname> defined next to it.
</para>
</callout>
</calloutlist>
</para>
</calloutlist>
<para>
To extract dependency information from a Go package in an automated way, use
<link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>. It can
produce complete derivation and <varname>goDeps</varname> file for Go
programs.
</para>
</para>
<para>To extract dependency information from a Go package in automated way use <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>.
It can produce complete derivation and <varname>goDeps</varname> file for Go programs.</para>
<para>
<varname>buildGoPackage</varname> produces <xref linkend='chap-multiple-output' xrefstyle="select: title" />
where <varname>bin</varname> includes program binaries. You can test build a Go binary as follows:
<screen>
<para>
<varname>buildGoPackage</varname> produces
<xref linkend='chap-multiple-output' xrefstyle="select: title" /> where
<varname>bin</varname> includes program binaries. You can test build a Go
binary as follows:
<screen>
$ nix-build -A deis.bin
</screen>
or build all outputs with:
<screen>
<screen>
$ nix-build -A deis.all
</screen>
<varname>bin</varname> output will be installed by default with
<varname>nix-env -i</varname> or <varname>systemPackages</varname>.
</para>
<varname>bin</varname> output will be installed by default with <varname>nix-env -i</varname>
or <varname>systemPackages</varname>.
</para>
<para>
You may use Go packages installed into the active Nix profiles by adding
the following to your ~/.bashrc:
<para>
You may use Go packages installed into the active Nix profiles by adding the
following to your ~/.bashrc:
<screen>
for p in $NIX_PROFILES; do
GOPATH="$p/share/go:$GOPATH"
done
</screen>
</para>
</para>
</section>

View File

@ -312,7 +312,7 @@ For example, installing the following environment
allows one to browse a module documentation index [not too dissimilar to
this](https://downloads.haskell.org/~ghc/latest/docs/html/libraries/index.html)
for all the specified packages and their dependencies by directing a browser of
choice to `~/.nix-profiles/share/doc/hoogle/index.html` (or
choice to `~/.nix-profile/share/doc/hoogle/index.html` (or
`/run/current-system/sw/share/doc/hoogle/index.html` in case you put it in
`environment.systemPackages` in NixOS).
@ -334,10 +334,29 @@ navigate there.
Finally, you can run
```shell
hoogle server -p 8080 --local
hoogle server --local -p 8080
```
and navigate to http://localhost:8080/ for your own local
[Hoogle](https://www.haskell.org/hoogle/).
[Hoogle](https://www.haskell.org/hoogle/). The `--local` flag makes the hoogle
server serve files from your nix store over http; without the flag it will use
`file://` URIs. Note, however, that Firefox and possibly other browsers
disallow navigation from `http://` to `file://` URIs for security reasons,
which might be quite an inconvenience. Versions before v5 did not have this
flag. See
[this page](http://kb.mozillazine.org/Links_to_local_pages_do_not_work) for
workarounds.
For NixOS users there's a service which runs this exact command for you.
Specify the `packages` you want documentation for and the `haskellPackages` set
you want them to come from. Add the following to `configuration.nix`.
```nix
services.hoogle = {
enable = true;
packages = (hpkgs: with hpkgs; [text cryptonite]);
haskellPackages = pkgs.haskellPackages;
};
```
### How to build a Haskell project using Stack
@ -666,6 +685,112 @@ prefer one built with GHC 7.8.x in the first place. However, for users who
cannot use GHC 7.10.x at all for some reason, the approach of downgrading to an
older version might be useful.
### How to override packages in all compiler-specific package sets
In the previous section we learned how to override a package in a single
compiler-specific package set. You may have some overrides defined that you want
to use across multiple package sets. To accomplish this you could use the
technique that we learned in the previous section by repeating the overrides for
all the compiler-specific package sets. For example:
```nix
{
packageOverrides = super: let self = super.pkgs; in
{
haskell = super.haskell // {
packages = super.haskell.packages // {
ghc784 = super.haskell.packages.ghc784.override {
overrides = self: super: {
my-package = ...;
my-other-package = ...;
};
};
ghc822 = super.haskell.packages.ghc784.override {
overrides = self: super: {
my-package = ...;
my-other-package = ...;
};
};
...
};
};
};
}
```
However there's a more convenient way to override all compiler-specific package
sets at once:
```nix
{
packageOverrides = super: let self = super.pkgs; in
{
haskell = super.haskell // {
packageOverrides = self: super: {
my-package = ...;
my-other-package = ...;
};
};
};
}
```
### How to specify source overrides for your Haskell package
When starting a Haskell project you can use `developPackage`
to define a derivation for your package at the `root` path
as well as source override versions for Hackage packages, like so:
```nix
# default.nix
{ compilerVersion ? "ghc842" }:
let
# pinning nixpkgs using new Nix 2.0 builtin `fetchGit`
pkgs = import (fetchGit (import ./version.nix)) { };
compiler = pkgs.haskell.packages."${compilerVersion}";
pkg = compiler.developPackage {
root = ./.;
source-overrides = {
# Let's say the GHC 8.4.2 haskellPackages uses 1.6.0.0 and your test suite is incompatible with >= 1.6.0.0
HUnit = "1.5.0.0";
};
};
in pkg
```
This could be used in place of a simplified `stack.yaml` defining a Nix
derivation for your Haskell package.
As you can see this allows you to specify only the source version found on
Hackage and nixpkgs will take care of the rest.
You can also specify `buildInputs` for your Haskell derivation for packages
that directly depend on external libraries like so:
```nix
# default.nix
{ compilerVersion ? "ghc842" }:
let
# pinning nixpkgs using new Nix 2.0 builtin `fetchGit`
pkgs = import (fetchGit (import ./version.nix)) { };
compiler = pkgs.haskell.packages."${compilerVersion}";
pkg = compiler.developPackage {
root = ./.;
source-overrides = {
HUnit = "1.5.0.0"; # Let's say the GHC 8.4.2 haskellPackages uses 1.6.0.0 and your test suite is incompatible with >= 1.6.0.0
};
};
# in case your package source depends on any libraries directly, not just transitively.
buildInputs = [ zlib ];
in pkg.overrideAttrs(attrs: {
buildInputs = attrs.buildInputs ++ buildInputs;
})
```
Notice that you will need to override (via `overrideAttrs` or similar) the
derivation returned by the `developPackage` Nix lambda as there is no `buildInputs`
named argument you can pass directly into the `developPackage` lambda.
### How to recover from GHC's infamous non-deterministic library ID bug
GHC and distributed build farms don't get along well:
@ -922,6 +1047,19 @@ As you can see, `packunused` finds out that although the testsuite component has
no redundant dependencies the library component of `scientific-0.3.5.1` depends
on `ghc-prim` which is unused in the library.
### Using hackage2nix with nixpkgs
Hackage package derivations are found in the
[`hackage-packages.nix`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/haskell-modules/hackage-packages.nix)
file within `nixpkgs` and are used as the initial package set for
`haskellPackages`. The `hackage-packages.nix` file is not meant to be edited
by hand, but rather autogenerated by [`hackage2nix`](https://github.com/NixOS/cabal2nix/tree/master/hackage2nix),
which by default uses the [`configuration-hackage2nix.yaml`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/haskell-modules/configuration-hackage2nix.yaml)
file to generate all the derivations.
To modify the contents of `configuration-hackage2nix.yaml`, follow the
instructions on [`hackage2nix`](https://github.com/NixOS/cabal2nix/tree/master/hackage2nix).
## Other resources
- The Youtube video [Nix Loves Haskell](https://www.youtube.com/watch?v=BsBhi_r-OeE)

View File

@ -1,36 +1,31 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-language-support">
<title>Support for specific programming languages and frameworks</title>
<para>The <link linkend="chap-stdenv">standard build
environment</link> makes it easy to build typical Autotools-based
packages with very little code. Any other kind of package can be
accomodated by overriding the appropriate phases of
<literal>stdenv</literal>. However, there are specialised functions
in Nixpkgs to easily build packages for other programming languages,
such as Perl or Haskell. These are described in this chapter.</para>
<xi:include href="beam.xml" />
<xi:include href="bower.xml" />
<xi:include href="coq.xml" />
<xi:include href="go.xml" />
<xi:include href="haskell.section.xml" />
<xi:include href="idris.section.xml" />
<xi:include href="java.xml" />
<xi:include href="lua.xml" />
<xi:include href="node.section.xml" />
<xi:include href="perl.xml" />
<xi:include href="python.section.xml" />
<xi:include href="qt.xml" />
<xi:include href="r.section.xml" />
<xi:include href="ruby.xml" />
<xi:include href="rust.section.xml" />
<xi:include href="texlive.xml" />
<xi:include href="vim.section.xml" />
<xi:include href="emscripten.section.xml" />
<title>Support for specific programming languages and frameworks</title>
<para>
The <link linkend="chap-stdenv">standard build environment</link> makes it
easy to build typical Autotools-based packages with very little code. Any
other kind of package can be accommodated by overriding the appropriate phases
of <literal>stdenv</literal>. However, there are specialised functions in
Nixpkgs to easily build packages for other programming languages, such as
Perl or Haskell. These are described in this chapter.
</para>
<xi:include href="beam.xml" />
<xi:include href="bower.xml" />
<xi:include href="coq.xml" />
<xi:include href="go.xml" />
<xi:include href="haskell.section.xml" />
<xi:include href="idris.section.xml" />
<xi:include href="java.xml" />
<xi:include href="lua.xml" />
<xi:include href="node.section.xml" />
<xi:include href="perl.xml" />
<xi:include href="python.section.xml" />
<xi:include href="qt.xml" />
<xi:include href="r.section.xml" />
<xi:include href="ruby.xml" />
<xi:include href="rust.section.xml" />
<xi:include href="texlive.xml" />
<xi:include href="vim.section.xml" />
<xi:include href="emscripten.section.xml" />
</chapter>

View File

@ -1,11 +1,10 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-java">
<title>Java</title>
<title>Java</title>
<para>Ant-based Java packages are typically built from source as follows:
<para>
Ant-based Java packages are typically built from source as follows:
<programlisting>
stdenv.mkDerivation {
name = "...";
@ -16,33 +15,36 @@ stdenv.mkDerivation {
buildPhase = "ant";
}
</programlisting>
Note that <varname>jdk</varname> is an alias for the OpenJDK (self-built
where available, or pre-built via Zulu). Platforms with OpenJDK not (yet) in
Nixpkgs (<literal>Aarch32</literal>, <literal>Aarch64</literal>) point to the
(unfree) <literal>oraclejdk</literal>.
</para>
Note that <varname>jdk</varname> is an alias for the OpenJDK.</para>
<para>JAR files that are intended to be used by other packages should
be installed in <filename>$out/share/java</filename>. The OpenJDK has
a stdenv setup hook that adds any JARs in the
<filename>share/java</filename> directories of the build inputs to the
<envar>CLASSPATH</envar> environment variable. For instance, if the
package <literal>libfoo</literal> installs a JAR named
<filename>foo.jar</filename> in its <filename>share/java</filename>
directory, and another package declares the attribute
<para>
JAR files that are intended to be used by other packages should be installed
in <filename>$out/share/java</filename>. JDKs have a stdenv setup hook that
adds any JARs in the <filename>share/java</filename> directories of the build
inputs to the <envar>CLASSPATH</envar> environment variable. For instance, if
the package <literal>libfoo</literal> installs a JAR named
<filename>foo.jar</filename> in its <filename>share/java</filename>
directory, and another package declares the attribute
<programlisting>
buildInputs = [ jdk libfoo ];
</programlisting>
then <envar>CLASSPATH</envar> will be set to
<filename>/nix/store/...-libfoo/share/java/foo.jar</filename>.
</para>
then <envar>CLASSPATH</envar> will be set to
<filename>/nix/store/...-libfoo/share/java/foo.jar</filename>.</para>
<para>Private JARs
should be installed in a location like
<filename>$out/share/<replaceable>package-name</replaceable></filename>.</para>
<para>If your Java package provides a program, you need to generate a
wrapper script to run it using the OpenJRE. You can use
<literal>makeWrapper</literal> for this:
<para>
Private JARs should be installed in a location like
<filename>$out/share/<replaceable>package-name</replaceable></filename>.
</para>
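<para>
  A minimal sketch, assuming the build step leaves a
  <filename>foo.jar</filename> in the working directory, could install such a
  private JAR like this:
<programlisting>
installPhase =
  ''
    # assumes the build produced foo.jar in the current directory
    mkdir -p $out/share/foo
    cp foo.jar $out/share/foo/
  '';
</programlisting>
</para>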
<para>
If your Java package provides a program, you need to generate a wrapper
script to run it using the OpenJRE. You can use
<literal>makeWrapper</literal> for this:
<programlisting>
buildInputs = [ makeWrapper ];
@ -53,23 +55,30 @@ installPhase =
--add-flags "-cp $out/share/java/foo.jar org.foo.Main"
'';
</programlisting>
Note the use of <literal>jre</literal>, which is the part of the OpenJDK
package that contains the Java Runtime Environment. By using
<literal>${jre}/bin/java</literal> instead of
<literal>${jdk}/bin/java</literal>, you prevent your package from depending
on the JDK at runtime.
</para>
Note the use of <literal>jre</literal>, which is the part of the
OpenJDK package that contains the Java Runtime Environment. By using
<literal>${jre}/bin/java</literal> instead of
<literal>${jdk}/bin/java</literal>, you prevent your package from
depending on the JDK at runtime.</para>
<para>It is possible to use a different Java compiler than
<command>javac</command> from the OpenJDK. For instance, to use the
GNU Java Compiler:
<para>
Note that all JDKs expose a <literal>home</literal> attribute via
<literal>passthru</literal>, so if your application requires environment
variables like <envar>JAVA_HOME</envar> to be set, that
can be done in a generic fashion with the <literal>--set</literal> argument
of <literal>makeWrapper</literal>:
<programlisting>
--set JAVA_HOME ${jdk.home}
</programlisting>
</para>
<para>
It is possible to use a different Java compiler than <command>javac</command>
from the OpenJDK. For instance, to use the GNU Java Compiler:
<programlisting>
buildInputs = [ gcj ant ];
</programlisting>
Here, Ant will automatically use <command>gij</command> (the GNU Java
Runtime) instead of the OpenJRE.</para>
Here, Ant will automatically use <command>gij</command> (the GNU Java
Runtime) instead of the OpenJRE.
</para>
</section>

View File

@ -1,24 +1,22 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-lua">
<title>Lua</title>
<title>Lua</title>
<para>
Lua packages are built by the <varname>buildLuaPackage</varname> function. This function is
implemented
in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/lua-modules/generic/default.nix">
<para>
Lua packages are built by the <varname>buildLuaPackage</varname> function.
This function is implemented in
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/lua-modules/generic/default.nix">
<filename>pkgs/development/lua-modules/generic/default.nix</filename></link>
and works similarly to <varname>buildPerlPackage</varname>. (See
<xref linkend="sec-language-perl"/> for details.)
</para>
</para>
<para>
Lua packages are defined
in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/lua-packages.nix"><filename>pkgs/top-level/lua-packages.nix</filename></link>.
<para>
Lua packages are defined in
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/lua-packages.nix"><filename>pkgs/top-level/lua-packages.nix</filename></link>.
Most of them are simple. For example:
<programlisting>
<programlisting>
fileSystem = buildLuaPackage {
name = "filesystem-1.6.2";
src = fetchurl {
@ -32,20 +30,19 @@ fileSystem = buildLuaPackage {
};
};
</programlisting>
</para>
</para>
<para>
<para>
More complicated packages, though, should be placed in a separate file in
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/lua-modules"><filename>pkgs/development/lua-modules</filename></link>.
</para>
<para>
Lua packages accept additional parameter <varname>disabled</varname>, which defines
the condition of disabling package from luaPackages. For example, if package has
<varname>disabled</varname> assigned to <literal>lua.luaversion != "5.1"</literal>,
it will not be included in any luaPackages except lua51Packages, making it
only be built for lua 5.1.
</para>
</para>
<para>
Lua packages accept an additional parameter, <varname>disabled</varname>, which
defines the condition for excluding a package from luaPackages. For example, if
a package has <varname>disabled</varname> assigned to <literal>lua.luaversion
!= "5.1"</literal>, it will not be included in any luaPackages except
lua51Packages, so it is only built for Lua 5.1.
</para>
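<para>
  As a hypothetical illustration, a module that only supports Lua 5.1 could be
  declared like this:
<programlisting>
fooPackage = buildLuaPackage {
  name = "foo-1.0";                         # hypothetical package name
  disabled = lua.luaversion != "5.1";       # skip for any other Lua version
  # src, meta, and so on
};
</programlisting>
</para>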
</section>

View File

@ -1,24 +1,27 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-perl">
<title>Perl</title>
<title>Perl</title>
<para>
Nixpkgs provides a function <varname>buildPerlPackage</varname>, a generic
package builder function for any Perl package that has a standard
<varname>Makefile.PL</varname>. It's implemented in
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/perl-modules/generic"><filename>pkgs/development/perl-modules/generic</filename></link>.
</para>
<para>Nixpkgs provides a function <varname>buildPerlPackage</varname>,
a generic package builder function for any Perl package that has a
standard <varname>Makefile.PL</varname>. Its implemented in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/perl-modules/generic"><filename>pkgs/development/perl-modules/generic</filename></link>.</para>
<para>Perl packages from CPAN are defined in <link
<para>
Perl packages from CPAN are defined in
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/perl-packages.nix"><filename>pkgs/top-level/perl-packages.nix</filename></link>,
rather than <filename>pkgs/all-packages.nix</filename>. Most Perl
packages are so straight-forward to build that they are defined here
directly, rather than having a separate function for each package
called from <filename>perl-packages.nix</filename>. However, more
complicated packages should be put in a separate file, typically in
<filename>pkgs/development/perl-modules</filename>. Here is an
example of the former:
rather than <filename>pkgs/all-packages.nix</filename>. Most Perl packages
are so straight-forward to build that they are defined here directly, rather
than having a separate function for each package called from
<filename>perl-packages.nix</filename>. However, more complicated packages
should be put in a separate file, typically in
<filename>pkgs/development/perl-modules</filename>. Here is an example of the
former:
<programlisting>
ClassC3 = buildPerlPackage rec {
name = "Class-C3-0.21";
@ -28,74 +31,72 @@ ClassC3 = buildPerlPackage rec {
};
};
</programlisting>
Note the use of <literal>mirror://cpan/</literal>, and the
<literal>${name}</literal> in the URL definition to ensure that the
name attribute is consistent with the source that were actually
downloading. Perl packages are made available in
<filename>all-packages.nix</filename> through the variable
<varname>perlPackages</varname>. For instance, if you have a package
that needs <varname>ClassC3</varname>, you would typically write
Note the use of <literal>mirror://cpan/</literal>, and the
<literal>${name}</literal> in the URL definition to ensure that the name
attribute is consistent with the source that we're actually downloading.
Perl packages are made available in <filename>all-packages.nix</filename>
through the variable <varname>perlPackages</varname>. For instance, if you
have a package that needs <varname>ClassC3</varname>, you would typically
write
<programlisting>
foo = import ../path/to/foo.nix {
inherit stdenv fetchurl ...;
inherit (perlPackages) ClassC3;
};
</programlisting>
in <filename>all-packages.nix</filename>. You can test building a
Perl package as follows:
in <filename>all-packages.nix</filename>. You can test building a Perl
package as follows:
<screen>
$ nix-build -A perlPackages.ClassC3
</screen>
<varname>buildPerlPackage</varname> adds <literal>perl-</literal> to
the start of the name attribute, so the package above is actually
called <literal>perl-Class-C3-0.21</literal>. So to install it, you
can say:
<varname>buildPerlPackage</varname> adds <literal>perl-</literal> to the
start of the name attribute, so the package above is actually called
<literal>perl-Class-C3-0.21</literal>. So to install it, you can say:
<screen>
$ nix-env -i perl-Class-C3
</screen>
(Of course you can also install using the attribute name: <literal>nix-env -i
-A perlPackages.ClassC3</literal>.)
</para>
(Of course you can also install using the attribute name:
<literal>nix-env -i -A perlPackages.ClassC3</literal>.)</para>
<para>So what does <varname>buildPerlPackage</varname> do? It does
the following:
<orderedlist>
<listitem><para>In the configure phase, it calls <literal>perl
Makefile.PL</literal> to generate a Makefile. You can set the
variable <varname>makeMakerFlags</varname> to pass flags to
<filename>Makefile.PL</filename></para></listitem>
<listitem><para>It adds the contents of the <envar>PERL5LIB</envar>
environment variable to <literal>#! .../bin/perl</literal> line of
Perl scripts as <literal>-I<replaceable>dir</replaceable></literal>
flags. This ensures that a script can find its
dependencies.</para></listitem>
<listitem><para>In the fixup phase, it writes the propagated build
inputs (<varname>propagatedBuildInputs</varname>) to the file
<para>
So what does <varname>buildPerlPackage</varname> do? It does the following:
<orderedlist>
<listitem>
<para>
In the configure phase, it calls <literal>perl Makefile.PL</literal> to
generate a Makefile. You can set the variable
<varname>makeMakerFlags</varname> to pass flags to
<filename>Makefile.PL</filename> (see the example after this list).
</para>
</listitem>
<listitem>
<para>
It adds the contents of the <envar>PERL5LIB</envar> environment variable
to the <literal>#! .../bin/perl</literal> line of Perl scripts as
<literal>-I<replaceable>dir</replaceable></literal> flags. This ensures
that a script can find its dependencies.
</para>
</listitem>
<listitem>
<para>
In the fixup phase, it writes the propagated build inputs
(<varname>propagatedBuildInputs</varname>) to the file
<filename>$out/nix-support/propagated-user-env-packages</filename>.
<command>nix-env</command> recursively installs all packages listed
in this file when you install a package that has it. This ensures
that a Perl package can find its dependencies.</para></listitem>
</orderedlist>
</para>
<para><varname>buildPerlPackage</varname> is built on top of
<varname>stdenv</varname>, so everything can be customised in the
usual way. For instance, the <literal>BerkeleyDB</literal> module has
a <varname>preConfigure</varname> hook to generate a configuration
file used by <filename>Makefile.PL</filename>:
<command>nix-env</command> recursively installs all packages listed in
this file when you install a package that has it. This ensures that a Perl
package can find its dependencies.
</para>
</listitem>
</orderedlist>
</para>
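<para>
  For example, a hypothetical package could pass an
  <literal>ExtUtils::MakeMaker</literal> option such as
  <literal>INSTALLDIRS</literal> to <filename>Makefile.PL</filename> like
  this:
<programlisting>
makeMakerFlags = "INSTALLDIRS=site"; # illustrative value only
</programlisting>
</para>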
<para>
<varname>buildPerlPackage</varname> is built on top of
<varname>stdenv</varname>, so everything can be customised in the usual way.
For instance, the <literal>BerkeleyDB</literal> module has a
<varname>preConfigure</varname> hook to generate a configuration file used by
<filename>Makefile.PL</filename>:
<programlisting>
{ buildPerlPackage, fetchurl, db }:
@ -108,23 +109,20 @@ buildPerlPackage rec {
};
preConfigure = ''
echo "LIB = ${db}/lib" > config.in
echo "INCLUDE = ${db}/include" >> config.in
echo "LIB = ${db.out}/lib" > config.in
echo "INCLUDE = ${db.dev}/include" >> config.in
'';
}
</programlisting>
</para>
</para>
<para>Dependencies on other Perl packages can be specified in the
<varname>buildInputs</varname> and
<varname>propagatedBuildInputs</varname> attributes. If something is
exclusively a build-time dependency, use
<varname>buildInputs</varname>; if its (also) a runtime dependency,
use <varname>propagatedBuildInputs</varname>. For instance, this
builds a Perl module that has runtime dependencies on a bunch of other
modules:
<para>
Dependencies on other Perl packages can be specified in the
<varname>buildInputs</varname> and <varname>propagatedBuildInputs</varname>
attributes. If something is exclusively a build-time dependency, use
<varname>buildInputs</varname>; if it's (also) a runtime dependency, use
<varname>propagatedBuildInputs</varname>. For instance, this builds a Perl
module that has runtime dependencies on a bunch of other modules:
<programlisting>
ClassC3Componentised = buildPerlPackage rec {
name = "Class-C3-Componentised-1.0004";
@ -137,24 +135,26 @@ ClassC3Componentised = buildPerlPackage rec {
];
};
</programlisting>
</para>
</para>
<section xml:id="ssec-generation-from-CPAN">
<title>Generation from CPAN</title>
<section xml:id="ssec-generation-from-CPAN"><title>Generation from CPAN</title>
<para>Nix expressions for Perl packages can be generated (almost)
automatically from CPAN. This is done by the program
<command>nix-generate-from-cpan</command>, which can be installed
as follows:</para>
<para>
Nix expressions for Perl packages can be generated (almost) automatically
from CPAN. This is done by the program
<command>nix-generate-from-cpan</command>, which can be installed as
follows:
</para>
<screen>
$ nix-env -i nix-generate-from-cpan
</screen>
<para>This program takes a Perl module name, looks it up on CPAN,
fetches and unpacks the corresponding package, and prints a Nix
expression on standard output. For example:
<para>
This program takes a Perl module name, looks it up on CPAN, fetches and
unpacks the corresponding package, and prints a Nix expression on standard
output. For example:
<screen>
$ nix-generate-from-cpan XML::Simple
XMLSimple = buildPerlPackage rec {
@ -170,26 +170,23 @@ $ nix-generate-from-cpan XML::Simple
};
};
</screen>
The output can be pasted into
<filename>pkgs/top-level/perl-packages.nix</filename> or wherever else you
need it.
</para>
</section>
The output can be pasted into
<filename>pkgs/top-level/perl-packages.nix</filename> or wherever else
you need it.</para>
<section xml:id="ssec-perl-cross-compilation">
<title>Cross-compiling modules</title>
<para>
Nixpkgs has experimental support for cross-compiling Perl modules. In many
cases, it will just work out of the box, even for modules with native
extensions. Sometimes, however, the Makefile.PL for a module may
(indirectly) import a native module. In that case, you will need to make a
stub for that module that will satisfy the Makefile.PL and install it into
<filename>lib/perl5/site_perl/cross_perl/${perl.version}</filename>. See the
<varname>postInstall</varname> for <varname>DBI</varname> for an example.
</para>
</section>
</section>
<section xml:id="ssec-perl-cross-compilation"><title>Cross-compiling modules</title>
<para>Nixpkgs has experimental support for cross-compiling Perl
modules. In many cases, it will just work out of the box, even for
modules with native extensions. Sometimes, however, the Makefile.PL
for a module may (indirectly) import a native module. In that case,
you will need to make a stub for that module that will satisfy the
Makefile.PL and install it into
<filename>lib/perl5/site_perl/cross_perl/${perl.version}</filename>.
See the <varname>postInstall</varname> for <varname>DBI</varname> for
an example.</para>
</section>
</section>

View File

@ -200,7 +200,7 @@ building Python libraries is `buildPythonPackage`. Let's see how we can build th
doCheck = false;
meta = {
homepage = "http://github.com/pytoolz/toolz/";
homepage = "https://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
@ -245,7 +245,7 @@ with import <nixpkgs> {};
doCheck = false;
meta = {
homepage = "http://github.com/pytoolz/toolz/";
homepage = "https://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
};
};
@ -328,7 +328,7 @@ when building the bindings and are therefore added as `buildInputs`.
meta = {
description = "Pythonic binding for the libxml2 and libxslt libraries";
homepage = http://lxml.de;
homepage = https://lxml.de;
license = licenses.bsd3;
maintainers = with maintainers; [ sjourdois ];
};
@ -374,7 +374,7 @@ and `CFLAGS`.
description = "A pythonic wrapper around FFTW, the FFT library, presenting a unified interface for all the supported transforms";
homepage = http://hgomersall.github.com/pyFFTW/;
license = with licenses; [ bsd2 bsd3 ];
maintainer = with maintainers; [ fridh ];
maintainers = with maintainers; [ fridh ];
};
};
}
@ -424,7 +424,7 @@ available.
At some point you'll likely have multiple packages which you would
like to be able to use in different projects. In order to minimise unnecessary
duplication we now look at how you can maintain yourself a repository with your
duplication we now look at how you can maintain a repository with your
own packages. The important functions here are `import` and `callPackage`.
### Including a derivation using `callPackage`
@ -436,7 +436,7 @@ Let's split the package definition from the environment definition.
We first create a function that builds `toolz` in `~/path/to/toolz/release.nix`
```nix
{ pkgs, buildPythonPackage }:
{ lib, pkgs, buildPythonPackage }:
buildPythonPackage rec {
pname = "toolz";
@ -447,7 +447,7 @@ buildPythonPackage rec {
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
};
meta = {
meta = with lib; {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
@ -484,7 +484,7 @@ and in this case the `python35` interpreter is automatically used.
### Interpreters
Versions 2.7, 3.3, 3.4, 3.5 and 3.6 of the CPython interpreter are available as
Versions 2.7, 3.4, 3.5, 3.6 and 3.7 of the CPython interpreter are available as
respectively `python27`, `python34`, `python35`, `python36` and `python37`. The PyPy interpreter
is available as `pypy`. The aliases `python2` and `python3` correspond to respectively `python27` and
`python35`. The default interpreter, `python`, maps to `python2`.
@ -533,6 +533,7 @@ sets are
* `pkgs.python34Packages`
* `pkgs.python35Packages`
* `pkgs.python36Packages`
* `pkgs.python37Packages`
* `pkgs.pypyPackages`
and the aliases
@ -587,30 +588,32 @@ The `buildPythonPackage` mainly does four things:
As in Perl, dependencies on other Python packages can be specified in the
`buildInputs` and `propagatedBuildInputs` attributes. If something is
exclusively a build-time dependency, use `buildInputs`; if its (also) a runtime
exclusively a build-time dependency, use `buildInputs`; if it is (also) a runtime
dependency, use `propagatedBuildInputs`.
By default tests are run because `doCheck = true`. Test dependencies, like
e.g. the test runner, should be added to `buildInputs`.
e.g. the test runner, should be added to `checkInputs`.
By default `meta.platforms` is set to the same value
as the interpreter unless overridden otherwise.
##### `buildPythonPackage` parameters
All parameters from `mkDerivation` function are still supported.
All parameters from the `stdenv.mkDerivation` function are still supported. The following are specific to `buildPythonPackage`; a short combined example follows the list:
* `namePrefix`: Prepended text to `${name}` parameter. Defaults to `"python3.3-"` for Python 3.3, etc. Set it to `""` if you're packaging an application or a command line tool.
* `disabled`: If `true`, package is not build for particular python interpreter version. Grep around `pkgs/top-level/python-packages.nix` for examples.
* `setupPyBuildFlags`: List of flags passed to `setup.py build_ext` command.
* `pythonPath`: List of packages to be added into `$PYTHONPATH`. Packages in `pythonPath` are not propagated (contrary to `propagatedBuildInputs`).
* `catchConflicts ? true`: If `true`, abort package build if a package name appears more than once in dependency tree. Default is `true`.
* `checkInputs ? []`: Dependencies needed for running the `checkPhase`. These are added to `buildInputs` when `doCheck = true`.
* `disabled ? false`: If `true`, package is not built for the particular Python interpreter version.
* `dontWrapPythonPrograms ? false`: Skip wrapping of python programs.
* `installFlags ? []`: A list of strings. Arguments to be passed to `pip install`. To pass options to `python setup.py install`, use `--install-option`. E.g., `installFlags=["--install-option='--cpp_implementation'"]`.
* `format ? "setuptools"`: Format of the source. Valid options are `"setuptools"`, `"flit"`, `"wheel"`, and `"other"`. `"setuptools"` is for when the source has a `setup.py` and `setuptools` is used to build a wheel, `flit`, in case `flit` should be used to build a wheel, and `wheel` in case a wheel is provided. Use `other` when a custom `buildPhase` and/or `installPhase` is needed.
* `makeWrapperArgs ? []`: A list of strings. Arguments to be passed to `makeWrapper`, which wraps generated binaries. By default, the arguments to `makeWrapper` set `PATH` and `PYTHONPATH` environment variables before calling the binary. Additional arguments here can allow a developer to set environment variables which will be available when the binary is run. For example, `makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]`.
* `namePrefix`: Prepends text to `${name}` parameter. In case of libraries, this defaults to `"python3.5-"` for Python 3.5, etc., and in case of applications to `""`.
* `pythonPath ? []`: List of packages to be added into `$PYTHONPATH`. Packages in `pythonPath` are not propagated (contrary to `propagatedBuildInputs`).
* `preShellHook`: Hook to execute commands before `shellHook`.
* `postShellHook`: Hook to execute commands after `shellHook`.
* `makeWrapperArgs`: A list of strings. Arguments to be passed to `makeWrapper`, which wraps generated binaries. By default, the arguments to `makeWrapper` set `PATH` and `PYTHONPATH` environment variables before calling the binary. Additional arguments here can allow a developer to set environment variables which will be available when the binary is run. For example, `makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]`.
* `installFlags`: A list of strings. Arguments to be passed to `pip install`. To pass options to `python setup.py install`, use `--install-option`. E.g., `installFlags=["--install-option='--cpp_implementation'"].
* `format`: Format of the source. Valid options are `setuptools` (default), `flit`, `wheel`, and `other`. `setuptools` is for when the source has a `setup.py` and `setuptools` is used to build a wheel, `flit`, in case `flit` should be used to build a wheel, and `wheel` in case a wheel is provided. In case you need to provide your own `buildPhase` and `installPhase` you can use `other`.
* `catchConflicts` If `true`, abort package build if a package name appears more than once in dependency tree. Default is `true`.
* `checkInputs` Dependencies needed for running the `checkPhase`. These are added to `buildInputs` when `doCheck = true`.
* `removeBinByteCode ? true`: Remove bytecode from `/bin`. Bytecode is only created when the filenames end with `.py`.
* `setupPyBuildFlags ? []`: List of flags passed to `setup.py build_ext` command.
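To illustrate a few of these parameters together, here is a hypothetical
sketch (the package name, the local `src`, and the use of `pytest` are made
up; a real package in `python-packages.nix` would normally fetch its source
with `fetchPypi` or similar):

```nix
{ buildPythonPackage, pytest }:

buildPythonPackage rec {
  pname = "example";
  version = "0.1.0";

  # Hypothetical local source; a real package would use fetchPypi or similar.
  src = ./.;

  # Test-only dependencies; these are only added when doCheck = true.
  checkInputs = [ pytest ];

  # Drop the interpreter prefix from the name, e.g. for a command line tool.
  namePrefix = "";
}
```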
##### Overriding Python packages
@ -642,11 +645,47 @@ in python.withPackages(ps: [ps.blaze])).env
#### `buildPythonApplication` function
The `buildPythonApplication` function is practically the same as `buildPythonPackage`.
The difference is that `buildPythonPackage` by default prefixes the names of the packages with the version of the interpreter.
Because with an application we're not interested in multiple version the prefix is dropped.
The `buildPythonApplication` function is practically the same as
`buildPythonPackage`. The main purpose of this function is to build a Python
package where one is interested only in the executables, and not importable
modules. For that reason, when adding this package to a `python.buildEnv`, the
modules won't be made available.
#### python.buildEnv function
Another difference is that `buildPythonPackage` by default prefixes the names of
the packages with the version of the interpreter. Because this is irrelevant for
applications, the prefix is omitted.
#### `toPythonApplication` function
A distinction is made between applications and libraries; however, sometimes a
package is used as both. In this case the package is added as a library to
`python-packages.nix` and as an application to `all-packages.nix`. To reduce
duplication the `toPythonApplication` can be used to convert a library to an
application.
The Nix expression shall use `buildPythonPackage` and be called from
`python-packages.nix`. A reference shall be created from `all-packages.nix` to
the attribute in `python-packages.nix`, and the `toPythonApplication` shall be
applied to the reference:
```nix
youtube-dl = with pythonPackages; toPythonApplication youtube-dl;
```
#### `toPythonModule` function
In some cases, such as bindings, a package is created using
`stdenv.mkDerivation` and added as attribute in `all-packages.nix`.
The Python bindings should be made available from `python-packages.nix`.
The `toPythonModule` function takes a derivation and makes certain Python-specific modifications.
```nix
opencv = toPythonModule (pkgs.opencv.override {
enablePython = true;
pythonPackages = self;
});
```
Do pay attention to passing in the right Python version!
#### `python.buildEnv` function
Python environments can be created using the low-level `pkgs.buildEnv` function.
This example shows how to create an environment that has the Pyramid Web Framework.
@ -688,7 +727,7 @@ specified packages in its path.
* `postBuild`: Shell command executed after the build of environment.
* `ignoreCollisions`: Ignore file collisions inside the environment (default is `false`).
#### python.withPackages function
#### `python.withPackages` function
The `python.withPackages` function provides a simpler interface to the `python.buildEnv` functionality.
It takes a function as an argument that is passed the set of python packages and returns the list
@ -941,7 +980,7 @@ stdenv.mkDerivation {
# the following packages are related to the dependencies of your python
# project.
# In this particular example the python modules listed in the
# requirements.tx require the following packages to be installed locally
# requirements.txt require the following packages to be installed locally
# in order to compile any binary extensions they may require.
#
taglib
@ -973,14 +1012,14 @@ folder and not downloaded again.
If you need to change a package's attribute(s) from `configuration.nix` you could do:
```nix
nixpkgs.config.packageOverrides = superP: {
pythonPackages = superP.pythonPackages.override {
overrides = self: super: {
bepasty-server = super.bepasty-server.overrideAttrs ( oldAttrs: {
src = pkgs.fetchgit {
url = "https://github.com/bepasty/bepasty-server";
sha256 = "9ziqshmsf0rjvdhhca55sm0x8jz76fsf2q4rwh4m6lpcf8wr0nps";
rev = "e2516e8cf4f2afb5185337073607eb9e84a61d2d";
nixpkgs.config.packageOverrides = super: {
python = super.python.override {
packageOverrides = python-self: python-super: {
zerobin = python-super.zerobin.overrideAttrs (oldAttrs: {
src = super.fetchgit {
url = "https://github.com/sametmax/0bin";
rev = "a344dbb18fe7a855d0742b9a1cede7ce423b34ec";
sha256 = "16d769kmnrpbdr0ph0whyf4yff5df6zi4kmwx7sz1d3r6c8p6xji";
};
});
};
@ -988,27 +1027,39 @@ If you need to change a package's attribute(s) from `configuration.nix` you coul
};
```
If you are using the `bepasty-server` package somewhere, for example in `systemPackages` or indirectly from `services.bepasty`, then a `nixos-rebuild switch` will rebuild the system but with the `bepasty-server` package using a different `src` attribute. This way one can modify `python` based software/libraries easily. Using `self` and `super` one can also alter dependencies (`buildInputs`) between the old state (`self`) and new state (`super`).
`pythonPackages.zerobin` is now globally overridden. All packages and also the
`zerobin` NixOS service use the new definition.
Note that `python-super` refers to the old package set and `python-self`
to the new, overridden version.
To modify only a Python package set instead of a whole Python derivation, use this snippet:
```nix
myPythonPackages = pythonPackages.override {
overrides = self: super: {
zerobin = ...;
};
}
```
### How to override a Python package using overlays?
To alter a python package using overlays, you would use the following approach:
Use the following overlay template:
```nix
self: super:
rec {
{
python = super.python.override {
packageOverrides = python-self: python-super: {
bepasty-server = python-super.bepasty-server.overrideAttrs ( oldAttrs: {
src = self.pkgs.fetchgit {
url = "https://github.com/bepasty/bepasty-server";
sha256 = "9ziqshmsf0rjvdhhca55sm0x8jz76fsf2q4rwh4m6lpcf8wr0nps";
rev = "e2516e8cf4f2afb5185337073607eb9e84a61d2d";
zerobin = python-super.zerobin.overrideAttrs (oldAttrs: {
src = super.fetchgit {
url = "https://github.com/sametmax/0bin";
rev = "a344dbb18fe7a855d0742b9a1cede7ce423b34ec";
sha256 = "16d769kmnrpbdr0ph0whyf4yff5df6zi4kmwx7sz1d3r6c8p6xji";
};
});
};
};
pythonPackages = python.pkgs;
}
```

View File

@ -1,58 +1,74 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-qt">
<title>Qt</title>
<title>Qt</title>
<para>
Qt is a comprehensive desktop and mobile application development toolkit for
C++. Legacy support is available for Qt 3 and Qt 4, but all current
development uses Qt 5. The Qt 5 packages in Nixpkgs are updated frequently to
take advantage of new features, but older versions are typically retained
until their support window ends. The most important consideration in
packaging Qt-based software is ensuring that each package and all its
dependencies use the same version of Qt 5; this consideration motivates most
of the tools described below.
</para>
<para>
Qt is a comprehensive desktop and mobile application development toolkit for C++.
Legacy support is available for Qt 3 and Qt 4, but all current development uses Qt 5.
The Qt 5 packages in Nixpkgs are updated frequently to take advantage of new features,
but older versions are typically retained until their support window ends.
The most important consideration in packaging Qt-based software is ensuring that each package and all its dependencies use the same version of Qt 5;
this consideration motivates most of the tools described below.
</para>
<section xml:id="ssec-qt-libraries">
<title>Packaging Libraries for Nixpkgs</title>
<section xml:id="ssec-qt-libraries"><title>Packaging Libraries for Nixpkgs</title>
<para>
Whenever possible, libraries that use Qt 5 should be built with each
available version. Packages providing libraries should be added to the
top-level function <varname>mkLibsForQt5</varname>, which is used to build a
set of libraries for every Qt 5 version. A special
<varname>callPackage</varname> function is used in this scope to ensure that
the entire dependency tree uses the same Qt 5 version. Import dependencies
unqualified, i.e., <literal>qtbase</literal> not
<literal>qt5.qtbase</literal>. <emphasis>Do not</emphasis> import a package
set such as <literal>qt5</literal> or <literal>libsForQt5</literal>.
</para>
<para>
Whenever possible, libraries that use Qt 5 should be built with each available version.
Packages providing libraries should be added to the top-level function <varname>mkLibsForQt5</varname>,
which is used to build a set of libraries for every Qt 5 version.
A special <varname>callPackage</varname> function is used in this scope to ensure that the entire dependency tree uses the same Qt 5 version.
Import dependencies unqualified, i.e., <literal>qtbase</literal> not <literal>qt5.qtbase</literal>.
<emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal>.
</para>
<para>
If a library does not support a particular version of Qt 5, it is best to
mark it as broken by setting its <literal>meta.broken</literal> attribute. A
package may be marked broken for certain versions by testing the
<literal>qtbase.version</literal> attribute, which will always give the
current Qt 5 version.
</para>
</section>
<para>
If a library does not support a particular version of Qt 5, it is best to mark it as broken by setting its <literal>meta.broken</literal> attribute.
A package may be marked broken for certain versions by testing the <literal>qtbase.version</literal> attribute, which will always give the current Qt 5 version.
</para>
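 <para>
  For instance, a hypothetical library that only builds with Qt 5.9 or newer
  could be marked broken for older versions roughly like this (a sketch, not
  taken from an actual package):
<programlisting>
meta.broken = stdenv.lib.versionOlder qtbase.version "5.9";
</programlisting>
 </para>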
<section xml:id="ssec-qt-applications">
<title>Packaging Applications for Nixpkgs</title>
<para>
Call your application expression using
<literal>libsForQt5.callPackage</literal> instead of
<literal>callPackage</literal>. Import dependencies unqualified, i.e.,
<literal>qtbase</literal> not <literal>qt5.qtbase</literal>. <emphasis>Do
not</emphasis> import a package set such as <literal>qt5</literal> or
<literal>libsForQt5</literal>.
</para>
<para>
Qt 5 maintains strict backward compatibility, so it is generally best to
build an application package against the latest version using the
<varname>libsForQt5</varname> library set. In case a package does not build
with the latest Qt version, it is possible to pick a set pinned to a
particular version, e.g. <varname>libsForQt55</varname> for Qt 5.5, if that
is the latest version the package supports. If a package must be pinned to
an older Qt version, be sure to file a bug upstream; because Qt is strictly
backwards-compatible, any incompatibility is by definition a bug in the
application.
</para>
<para>
When testing applications in Nixpkgs, it is a common practice to build the
package with <literal>nix-build</literal> and run it using the created
symbolic link. This will not work with Qt applications, however, because
they have many hard runtime requirements that can only be guaranteed if the
package is actually installed. To test a Qt application, install it with
<literal>nix-env</literal> or run it inside <literal>nix-shell</literal>.
</para>
</section>
</section>
<section xml:id="ssec-qt-applications"><title>Packaging Applications for Nixpkgs</title>
<para>
Call your application expression using <literal>libsForQt5.callPackage</literal> instead of <literal>callPackage</literal>.
Import dependencies unqualified, i.e., <literal>qtbase</literal> not <literal>qt5.qtbase</literal>.
<emphasis>Do not</emphasis> import a package set such as <literal>qt5</literal> or <literal>libsForQt5</literal>.
</para>
<para>
Qt 5 maintains strict backward compatibility, so it is generally best to build an application package against the latest version using the <varname>libsForQt5</varname> library set.
In case a package does not build with the latest Qt version, it is possible to pick a set pinned to a particular version, e.g. <varname>libsForQt55</varname> for Qt 5.5, if that is the latest version the package supports.
If a package must be pinned to an older Qt version, be sure to file a bug upstream;
because Qt is strictly backwards-compatible, any incompatibility is by definition a bug in the application.
</para>
<para>
When testing applications in Nixpkgs, it is a common practice to build the package with <literal>nix-build</literal> and run it using the created symbolic link.
This will not work with Qt applications, however, because they have many hard runtime requirements that can only be guaranteed if the package is actually installed.
To test a Qt application, install it with <literal>nix-env</literal> or run it inside <literal>nix-shell</literal>.
</para>
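 <para>
  Putting these rules together, a top-level entry and the corresponding
  application expression could look roughly like this. This is only a sketch:
  <literal>myapp</literal>, its paths, URL and hashes are placeholders, not an
  actual package.
<programlisting>
# pkgs/top-level/all-packages.nix (hypothetical entry)
myapp = libsForQt5.callPackage ../applications/misc/myapp { };

# pkgs/applications/misc/myapp/default.nix (hypothetical expression)
{ stdenv, fetchurl, qmake, qtbase }:

stdenv.mkDerivation rec {
  name = "myapp-${version}";
  version = "1.0";
  src = fetchurl {
    url = "https://example.org/myapp-${version}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };
  nativeBuildInputs = [ qmake ];
  buildInputs = [ qtbase ]; # unqualified, not qt5.qtbase
}
</programlisting>
 </para>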
</section>
</section>

View File

@ -1,17 +1,19 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-ruby">
<title>Ruby</title>
<title>Ruby</title>
<para>
There currently is support to bundle applications that are packaged as Ruby
gems. The utility "bundix" allows you to write a
<filename>Gemfile</filename>, let bundler create a
<filename>Gemfile.lock</filename>, and then convert this into a nix
expression that contains all Gem dependencies automatically.
</para>
<para>There currently is support to bundle applications that are packaged as
Ruby gems. The utility "bundix" allows you to write a
<filename>Gemfile</filename>, let bundler create a
<filename>Gemfile.lock</filename>, and then convert this into a nix
expression that contains all Gem dependencies automatically.
</para>
<para>For example, to package sensu, we did:</para>
<para>
For example, to package sensu, we did:
</para>
<screen>
<![CDATA[$ cd pkgs/servers/monitoring
@ -42,17 +44,18 @@ bundlerEnv rec {
}]]>
</screen>
<para>Please check in the <filename>Gemfile</filename>,
<filename>Gemfile.lock</filename> and the
<filename>gemset.nix</filename> so future updates can be run easily.
</para>
<para>
Please check in the <filename>Gemfile</filename>,
<filename>Gemfile.lock</filename> and the <filename>gemset.nix</filename> so
future updates can be run easily.
</para>
<para>For tools written in Ruby - i.e. where the desire is to install
a package and then execute e.g. <command>rake</command> at the command
line, there is an alternative builder called <literal>bundlerApp</literal>.
Set up the <filename>gemset.nix</filename> the same way, and then, for
example:
</para>
<para>
For tools written in Ruby - i.e. where the desire is to install a package and
then execute e.g. <command>rake</command> at the command line, there is an
alternative builder called <literal>bundlerApp</literal>. Set up the
<filename>gemset.nix</filename> the same way, and then, for example:
</para>
<screen>
<![CDATA[{ lib, bundlerApp }:
@ -72,31 +75,31 @@ bundlerApp {
}]]>
</screen>
<para>The chief advantage of <literal>bundlerApp</literal> over
<literal>bundlerEnv</literal> is the executables introduced in the
environment are precisely those selected in the <literal>exes</literal>
list, as opposed to <literal>bundlerEnv</literal> which adds all the
executables made available by gems in the gemset, which can mean e.g.
<command>rspec</command> or <command>rake</command> in unpredictable
versions available from various packages.
</para>
<para>
The chief advantage of <literal>bundlerApp</literal> over
<literal>bundlerEnv</literal> is that the executables introduced in the
environment are precisely those selected in the <literal>exes</literal> list,
as opposed to <literal>bundlerEnv</literal> which adds all the executables
made available by gems in the gemset, which can mean e.g.
<command>rspec</command> or <command>rake</command> in unpredictable versions
available from various packages.
</para>
<para>Resulting derivations for both builders also have two helpful
attributes, <literal>env</literal> and <literal>wrappedRuby</literal>.
The first one allows one to quickly drop into
<command>nix-shell</command> with the specified environment present.
E.g. <command>nix-shell -A sensu.env</command> would give you an
environment with Ruby preset so it has all the libraries necessary
for <literal>sensu</literal> in its paths. The second one can be
used to make derivations from custom Ruby scripts which have
<filename>Gemfile</filename>s with their dependencies specified. It is
a derivation with <command>ruby</command> wrapped so it can find all
the needed dependencies. For example, to make a derivation
<literal>my-script</literal> for a <filename>my-script.rb</filename>
(which should be placed in <filename>bin</filename>) you should run
<command>bundix</command> as specified above and then use
<literal>bundlerEnv</literal> like this:
</para>
<para>
Resulting derivations for both builders also have two helpful attributes,
<literal>env</literal> and <literal>wrappedRuby</literal>. The first one
allows one to quickly drop into <command>nix-shell</command> with the
specified environment present. E.g. <command>nix-shell -A sensu.env</command>
would give you an environment with Ruby preset so it has all the libraries
necessary for <literal>sensu</literal> in its paths. The second one can be
used to make derivations from custom Ruby scripts which have
<filename>Gemfile</filename>s with their dependencies specified. It is a
derivation with <command>ruby</command> wrapped so it can find all the needed
dependencies. For example, to make a derivation <literal>my-script</literal>
for a <filename>my-script.rb</filename> (which should be placed in
<filename>bin</filename>) you should run <command>bundix</command> as
specified above and then use <literal>bundlerEnv</literal> like this:
</para>
<programlisting>
<![CDATA[let env = bundlerEnv {
@ -118,5 +121,4 @@ in stdenv.mkDerivation {
'';
}]]>
</programlisting>
</section>

View File

@ -59,6 +59,11 @@ all crate sources of this package. Currently it is obtained by inserting a
fake checksum into the expression and building the package once. The correct
checksum can be then take from the failed build.
When the `Cargo.lock`, provided by upstream, is not in sync with the
`Cargo.toml`, it is possible to use `cargoPatches` to update it. All patches
added in `cargoPatches` will also be prepended to the patches in `patches` at
build-time.
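
A hedged sketch of what this can look like (the project name, source details, patch file and hashes below are only placeholders):

```nix
{ rustPlatform, fetchFromGitHub }:

rustPlatform.buildRustPackage rec {
  name = "myproject-${version}";
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "myproject";
    rev = "v${version}";
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };

  # Bring the upstream Cargo.lock back in sync with Cargo.toml;
  # these patches are prepended to the ordinary `patches` at build time.
  cargoPatches = [ ./0001-update-cargo-lock.patch ];

  cargoSha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
}
```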
To install crates with nix there is also an experimental project called
[nixcrates](https://github.com/fractalide/nixcrates).

View File

@ -1,27 +1,42 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-language-texlive">
<title>TeX Live</title>
<title>TeX Live</title>
<para>
Since release 15.09 there is a new TeX Live packaging that lives entirely
under attribute <varname>texlive</varname>.
</para>
<section>
<title>User's guide</title>
<para>Since release 15.09 there is a new TeX Live packaging that lives entirely under attribute <varname>texlive</varname>.</para>
<section><title>User's guide</title>
<itemizedlist>
<listitem><para>
For basic usage just pull <varname>texlive.combined.scheme-basic</varname> for an environment with basic LaTeX support.</para></listitem>
<listitem><para>
<listitem>
<para>
For basic usage just pull <varname>texlive.combined.scheme-basic</varname>
for an environment with basic LaTeX support.
</para>
</listitem>
<listitem>
<para>
It typically won't work to use separately installed packages together.
Instead, you can build a custom set of packages like this:
<programlisting>
<programlisting>
texlive.combine {
inherit (texlive) scheme-small collection-langkorean algorithms cm-super;
}
</programlisting>
There are all the schemes, collections and a few thousand packages, as defined upstream (perhaps with tiny differences).
</para></listitem>
<listitem><para>
By default you only get executables and files needed during runtime, and a little documentation for the core packages. To change that, you need to add <varname>pkgFilter</varname> function to <varname>combine</varname>.
<programlisting>
There are all the schemes, collections and a few thousand packages, as
defined upstream (perhaps with tiny differences).
</para>
</listitem>
<listitem>
<para>
By default you only get executables and files needed during runtime, and a
little documentation for the core packages. To change that, you need to
add a <varname>pkgFilter</varname> function to <varname>combine</varname>.
<programlisting>
texlive.combine {
# inherit (texlive) whatever-you-want;
pkgFilter = pkg:
@ -30,34 +45,55 @@ texlive.combine {
# there are also other attributes: version, name
}
</programlisting>
</para></listitem>
<listitem><para>
</para>
</listitem>
<listitem>
<para>
You can list packages e.g. by <command>nix-repl</command>.
<programlisting>
<programlisting>
$ nix-repl
nix-repl> :l &lt;nixpkgs>
nix-repl> texlive.collection-&lt;TAB>
</programlisting>
</para></listitem>
<listitem><para>
Note that the wrapper assumes that the result has a chance to be useful. For example, the core executables should be present, as well as some core data files. The supported way of ensuring this is by including some scheme, for example <varname>scheme-basic</varname>, into the combination.
</para></listitem>
</para>
</listitem>
<listitem>
<para>
Note that the wrapper assumes that the result has a chance to be useful.
For example, the core executables should be present, as well as some core
data files. The supported way of ensuring this is by including some
scheme, for example <varname>scheme-basic</varname>, into the combination.
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section>
<title>Known problems</title>
<section><title>Known problems</title>
<itemizedlist>
<listitem><para>
Some tools are still missing, e.g. luajittex;</para></listitem>
<listitem><para>
some apps aren't packaged/tested yet (asymptote, biber, etc.);</para></listitem>
<listitem><para>
feature/bug: when a package is rejected by <varname>pkgFilter</varname>, its dependencies are still propagated;</para></listitem>
<listitem><para>
in case of any bugs or feature requests, file a github issue or better a pull request and /cc @vcunat.</para></listitem>
<listitem>
<para>
Some tools are still missing, e.g. luajittex;
</para>
</listitem>
<listitem>
<para>
some apps aren't packaged/tested yet (asymptote, biber, etc.);
</para>
</listitem>
<listitem>
<para>
feature/bug: when a package is rejected by <varname>pkgFilter</varname>,
its dependencies are still propagated;
</para>
</listitem>
<listitem>
<para>
in case of any bugs or feature requests, file a github issue or better a
pull request and /cc @vcunat.
</para>
</listitem>
</itemizedlist>
</section>
</section>
</section>

View File

@ -1,14 +1,10 @@
<book xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude">
<info>
<title>Nixpkgs Contributors Guide</title>
<subtitle>Version <xi:include href=".version" parse="text" /></subtitle>
<subtitle>Version <xi:include href=".version" parse="text" />
</subtitle>
</info>
<xi:include href="introduction.chapter.xml" />
<xi:include href="quick-start.xml" />
<xi:include href="stdenv.xml" />
@ -25,5 +21,4 @@
<xi:include href="submitting-changes.xml" />
<xi:include href="reviewing-contributions.xml" />
<xi:include href="contributing.xml" />
</book>

View File

@ -1,37 +1,34 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-meta">
<title>Meta-attributes</title>
<para>Nix packages can declare <emphasis>meta-attributes</emphasis>
that contain information about a package such as a description, its
homepage, its license, and so on. For instance, the GNU Hello package
has a <varname>meta</varname> declaration like this:
<title>Meta-attributes</title>
<para>
Nix packages can declare <emphasis>meta-attributes</emphasis> that contain
information about a package such as a description, its homepage, its license,
and so on. For instance, the GNU Hello package has a <varname>meta</varname>
declaration like this:
<programlisting>
meta = {
meta = with stdenv.lib; {
description = "A program that produces a familiar, friendly greeting";
longDescription = ''
GNU Hello is a program that prints "Hello, world!" when you run it.
It is fully customizable.
'';
homepage = http://www.gnu.org/software/hello/manual/;
license = stdenv.lib.licenses.gpl3Plus;
maintainers = [ stdenv.lib.maintainers.eelco ];
platforms = stdenv.lib.platforms.all;
license = licenses.gpl3Plus;
maintainers = [ maintainers.eelco ];
platforms = platforms.all;
};
</programlisting>
</para>
<para>Meta-attributes are not passed to the builder of the package.
Thus, a change to a meta-attribute doesnt trigger a recompilation of
the package. The value of a meta-attribute must be a string.</para>
<para>The meta-attributes of a package can be queried from the
command-line using <command>nix-env</command>:
</para>
<para>
Meta-attributes are not passed to the builder of the package. Thus, a change
to a meta-attribute doesnt trigger a recompilation of the package. The
value of a meta-attribute must be a string.
</para>
<para>
The meta-attributes of a package can be queried from the command-line using
<command>nix-env</command>:
<screen>
$ nix-env -qa hello --json
{
@ -70,252 +67,311 @@ $ nix-env -qa hello --json
</screen>
<command>nix-env</command> knows about the
<varname>description</varname> field specifically:
<command>nix-env</command> knows about the <varname>description</varname>
field specifically:
<screen>
$ nix-env -qa hello --description
hello-2.3 A program that produces a familiar, friendly greeting
</screen>
</para>
<section xml:id="sec-standard-meta-attributes">
<title>Standard meta-attributes</title>
</para>
<section xml:id="sec-standard-meta-attributes"><title>Standard
meta-attributes</title>
<para>It is expected that each meta-attribute is one of the following:</para>
<variablelist>
<para>
It is expected that each meta-attribute is one of the following:
</para>
<variablelist>
<varlistentry>
<term><varname>description</varname></term>
<listitem><para>A short (one-line) description of the package.
This is shown by <command>nix-env -q --description</command> and
also on the Nixpkgs release pages.</para>
<para>Dont include a period at the end. Dont include newline
characters. Capitalise the first character. For brevity, dont
repeat the name of package — just describe what it does.</para>
<para>Wrong: <literal>"libpng is a library that allows you to decode PNG images."</literal></para>
<para>Right: <literal>"A library for decoding PNG images"</literal></para>
<term>
<varname>description</varname>
</term>
<listitem>
<para>
A short (one-line) description of the package. This is shown by
<command>nix-env -q --description</command> and also on the Nixpkgs
release pages.
</para>
<para>
Dont include a period at the end. Dont include newline characters.
Capitalise the first character. For brevity, dont repeat the name of the
package — just describe what it does.
</para>
<para>
Wrong: <literal>"libpng is a library that allows you to decode PNG
images."</literal>
</para>
<para>
Right: <literal>"A library for decoding PNG images"</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>longDescription</varname></term>
<listitem><para>An arbitrarily long description of the
package.</para></listitem>
<term>
<varname>longDescription</varname>
</term>
<listitem>
<para>
An arbitrarily long description of the package.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>branch</varname></term>
<listitem><para>Release branch. Used to specify that a package is not
going to receive updates that are not in this branch; for example, Linux
kernel 3.0 is supposed to be updated to 3.0.X, not 3.1.</para></listitem>
<term>
<varname>branch</varname>
</term>
<listitem>
<para>
Release branch. Used to specify that a package is not going to receive
updates that are not in this branch; for example, Linux kernel 3.0 is
supposed to be updated to 3.0.X, not 3.1.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>homepage</varname></term>
<listitem><para>The packages homepage. Example:
<literal>http://www.gnu.org/software/hello/manual/</literal></para></listitem>
<term>
<varname>homepage</varname>
</term>
<listitem>
<para>
The packages homepage. Example:
<literal>http://www.gnu.org/software/hello/manual/</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>downloadPage</varname></term>
<listitem><para>The page where a link to the current version can be found. Example:
<literal>http://ftp.gnu.org/gnu/hello/</literal></para></listitem>
<term>
<varname>downloadPage</varname>
</term>
<listitem>
<para>
The page where a link to the current version can be found. Example:
<literal>http://ftp.gnu.org/gnu/hello/</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>license</varname></term>
<term>
<varname>license</varname>
</term>
<listitem>
<para>
The license, or licenses, for the package. One from the attribute set
defined in <link
defined in
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/licenses.nix">
<filename>nixpkgs/lib/licenses.nix</filename></link>. At this moment
using both a list of licenses and a single license is valid. If the
license field is in the form of a list representation, then it means
that parts of the package are licensed differently. Each license
should preferably be referenced by their attribute. The non-list
attribute value can also be a space delimited string representation of
the contained attribute shortNames or spdxIds. The following are all valid
examples:
license field is in the form of a list representation, then it means that
parts of the package are licensed differently. Each license should
preferably be referenced by its attribute. The non-list attribute value
can also be a space delimited string representation of the contained
attribute shortNames or spdxIds. The following are all valid examples:
<itemizedlist>
<listitem><para>Single license referenced by attribute (preferred)
<listitem>
<para>
Single license referenced by attribute (preferred)
<literal>stdenv.lib.licenses.gpl3</literal>.
</para></listitem>
<listitem><para>Single license referenced by its attribute shortName (frowned upon)
</para>
</listitem>
<listitem>
<para>
Single license referenced by its attribute shortName (frowned upon)
<literal>"gpl3"</literal>.
</para></listitem>
<listitem><para>Single license referenced by its attribute spdxId (frowned upon)
</para>
</listitem>
<listitem>
<para>
Single license referenced by its attribute spdxId (frowned upon)
<literal>"GPL-3.0"</literal>.
</para></listitem>
<listitem><para>Multiple licenses referenced by attribute (preferred)
<literal>with stdenv.lib.licenses; [ asl20 free ofl ]</literal>.
</para></listitem>
<listitem><para>Multiple licenses referenced as a space delimited string of attribute shortNames (frowned upon)
<literal>"asl20 free ofl"</literal>.
</para></listitem>
</para>
</listitem>
<listitem>
<para>
Multiple licenses referenced by attribute (preferred) <literal>with
stdenv.lib.licenses; [ asl20 free ofl ]</literal>.
</para>
</listitem>
<listitem>
<para>
Multiple licenses referenced as a space delimited string of attribute
shortNames (frowned upon) <literal>"asl20 free ofl"</literal>.
</para>
</listitem>
</itemizedlist>
For details, see <xref linkend='sec-meta-license'/>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>maintainers</varname></term>
<listitem><para>A list of names and e-mail addresses of the
maintainers of this Nix expression. If
you would like to be a maintainer of a package, you may want to add
yourself to <link
<term>
<varname>maintainers</varname>
</term>
<listitem>
<para>
A list of names and e-mail addresses of the maintainers of this Nix
expression. If you would like to be a maintainer of a package, you may
want to add yourself to
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/maintainers/maintainer-list.nix"><filename>nixpkgs/maintainers/maintainer-list.nix</filename></link>
and write something like <literal>[ stdenv.lib.maintainers.alice
stdenv.lib.maintainers.bob ]</literal>.</para></listitem>
stdenv.lib.maintainers.bob ]</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>priority</varname></term>
<listitem><para>The <emphasis>priority</emphasis> of the package,
used by <command>nix-env</command> to resolve file name conflicts
between packages. See the Nix manual page for
<command>nix-env</command> for details. Example:
<literal>"10"</literal> (a low-priority
package).</para></listitem>
<term>
<varname>priority</varname>
</term>
<listitem>
<para>
The <emphasis>priority</emphasis> of the package, used by
<command>nix-env</command> to resolve file name conflicts between
packages. See the Nix manual page for <command>nix-env</command> for
details. Example: <literal>"10"</literal> (a low-priority package).
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>platforms</varname></term>
<listitem><para>The list of Nix platform types on which the
package is supported. Hydra builds packages according to the
platform specified. If no platform is specified, the package does
not have prebuilt binaries. An example is:
<term>
<varname>platforms</varname>
</term>
<listitem>
<para>
The list of Nix platform types on which the package is supported. Hydra
builds packages according to the platform specified. If no platform is
specified, the package does not have prebuilt binaries. An example is:
<programlisting>
meta.platforms = stdenv.lib.platforms.linux;
</programlisting>
Attribute Set <varname>stdenv.lib.platforms</varname> defines
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/systems/doubles.nix">
various common lists</link> of platforms types.</para></listitem>
various common lists</link> of platform types.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>hydraPlatforms</varname></term>
<listitem><para>The list of Nix platform types for which the Hydra
instance at <literal>hydra.nixos.org</literal> will build the
package. (Hydra is the Nix-based continuous build system.) It
defaults to the value of <varname>meta.platforms</varname>. Thus,
the only reason to set <varname>meta.hydraPlatforms</varname> is
if you want <literal>hydra.nixos.org</literal> to build the
package on a subset of <varname>meta.platforms</varname>, or not
at all, e.g.
<term>
<varname>hydraPlatforms</varname>
</term>
<listitem>
<para>
The list of Nix platform types for which the Hydra instance at
<literal>hydra.nixos.org</literal> will build the package. (Hydra is the
Nix-based continuous build system.) It defaults to the value of
<varname>meta.platforms</varname>. Thus, the only reason to set
<varname>meta.hydraPlatforms</varname> is if you want
<literal>hydra.nixos.org</literal> to build the package on a subset of
<varname>meta.platforms</varname>, or not at all, e.g.
<programlisting>
meta.platforms = stdenv.lib.platforms.linux;
meta.hydraPlatforms = [];
</programlisting>
</para></listitem>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>broken</varname></term>
<listitem><para>If set to <literal>true</literal>, the package is
marked as “broken”, meaning that it wont show up in
<literal>nix-env -qa</literal>, and cannot be built or installed.
Such packages should be removed from Nixpkgs eventually unless
they are fixed.</para></listitem>
<term>
<varname>broken</varname>
</term>
<listitem>
<para>
If set to <literal>true</literal>, the package is marked as “broken”,
meaning that it wont show up in <literal>nix-env -qa</literal>, and
cannot be built or installed. Such packages should be removed from
Nixpkgs eventually unless they are fixed.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>updateWalker</varname></term>
<listitem><para>If set to <literal>true</literal>, the package is
tested to be updated correctly by the <literal>update-walker.sh</literal>
script without additional settings. Such packages have
<varname>meta.version</varname> set and their homepage (or
the page specified by <varname>meta.downloadPage</varname>) contains
a direct link to the package tarball.</para></listitem>
<term>
<varname>updateWalker</varname>
</term>
<listitem>
<para>
If set to <literal>true</literal>, the package is tested to be updated
correctly by the <literal>update-walker.sh</literal> script without
additional settings. Such packages have <varname>meta.version</varname>
set and their homepage (or the page specified by
<varname>meta.downloadPage</varname>) contains a direct link to the
package tarball.
</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="sec-meta-license">
<title>Licenses</title>
</variablelist>
</section>
<section xml:id="sec-meta-license"><title>Licenses</title>
<para>The <varname>meta.license</varname> attribute should preferrably contain
a value from <varname>stdenv.lib.licenses</varname> defined in
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/licenses.nix">
<filename>nixpkgs/lib/licenses.nix</filename></link>,
or in-place license description of the same format if the license is
unlikely to be useful in another expression.</para>
<para>Although it's typically better to indicate the specific license,
a few generic options are available:
<variablelist>
<para>
The <varname>meta.license</varname> attribute should preferably contain a
value from <varname>stdenv.lib.licenses</varname> defined in
<link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/lib/licenses.nix">
<filename>nixpkgs/lib/licenses.nix</filename></link>, or in-place license
description of the same format if the license is unlikely to be useful in
another expression.
</para>
<para>
Although it's typically better to indicate the specific license, a few
generic options are available:
<variablelist>
<varlistentry>
<term><varname>stdenv.lib.licenses.free</varname>,
<varname>"free"</varname></term>
<listitem><para>Catch-all for free software licenses not listed
above.</para></listitem>
<term>
<varname>stdenv.lib.licenses.free</varname>, <varname>"free"</varname>
</term>
<listitem>
<para>
Catch-all for free software licenses not listed above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>stdenv.lib.licenses.unfreeRedistributable</varname>,
<varname>"unfree-redistributable"</varname></term>
<listitem><para>Unfree package that can be redistributed in binary
form. That is, its legal to redistribute the
<emphasis>output</emphasis> of the derivation. This means that
the package can be included in the Nixpkgs
channel.</para>
<para>Sometimes proprietary software can only be redistributed
unmodified. Make sure the builder doesnt actually modify the
original binaries; otherwise were breaking the license. For
instance, the NVIDIA X11 drivers can be redistributed unmodified,
but our builder applies <command>patchelf</command> to make them
work. Thus, its license is <varname>"unfree"</varname> and it
cannot be included in the Nixpkgs channel.</para></listitem>
<term>
<varname>stdenv.lib.licenses.unfreeRedistributable</varname>, <varname>"unfree-redistributable"</varname>
</term>
<listitem>
<para>
Unfree package that can be redistributed in binary form. That is, its
legal to redistribute the <emphasis>output</emphasis> of the derivation.
This means that the package can be included in the Nixpkgs channel.
</para>
<para>
Sometimes proprietary software can only be redistributed unmodified.
Make sure the builder doesnt actually modify the original binaries;
otherwise were breaking the license. For instance, the NVIDIA X11
drivers can be redistributed unmodified, but our builder applies
<command>patchelf</command> to make them work. Thus, its license is
<varname>"unfree"</varname> and it cannot be included in the Nixpkgs
channel.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>stdenv.lib.licenses.unfree</varname>,
<varname>"unfree"</varname></term>
<listitem><para>Unfree package that cannot be redistributed. You
can build it yourself, but you cannot redistribute the output of
the derivation. Thus it cannot be included in the Nixpkgs
channel.</para></listitem>
<term>
<varname>stdenv.lib.licenses.unfree</varname>, <varname>"unfree"</varname>
</term>
<listitem>
<para>
Unfree package that cannot be redistributed. You can build it yourself,
but you cannot redistribute the output of the derivation. Thus it cannot
be included in the Nixpkgs channel.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><varname>stdenv.lib.licenses.unfreeRedistributableFirmware</varname>,
<varname>"unfree-redistributable-firmware"</varname></term>
<listitem><para>This package supplies unfree, redistributable
firmware. This is a separate value from
<varname>unfree-redistributable</varname> because not everybody
cares whether firmware is free.</para></listitem>
<term>
<varname>stdenv.lib.licenses.unfreeRedistributableFirmware</varname>, <varname>"unfree-redistributable-firmware"</varname>
</term>
<listitem>
<para>
This package supplies unfree, redistributable firmware. This is a
separate value from <varname>unfree-redistributable</varname> because
not everybody cares whether firmware is free.
</para>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
</variablelist>
</para>
</section>
</chapter>

View File

@ -5,99 +5,319 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-multiple-output">
<title>Multiple-output packages</title>
<section>
<title>Introduction</title>
<title>Multiple-output packages</title>
<para>
The Nix language allows a derivation to produce multiple outputs, which is
similar to what is utilized by other Linux distribution packaging systems.
The outputs reside in separate nix store paths, so they can be mostly
handled independently of each other, including passing to build inputs,
garbage collection or binary substitution. The exception is that building
from source always produces all the outputs.
</para>
<section><title>Introduction</title>
<para>The Nix language allows a derivation to produce multiple outputs, which is similar to what is utilized by other Linux distribution packaging systems. The outputs reside in separate nix store paths, so they can be mostly handled independently of each other, including passing to build inputs, garbage collection or binary substitution. The exception is that building from source always produces all the outputs.</para>
<para>The main motivation is to save disk space by reducing runtime closure sizes; consequently also sizes of substituted binaries get reduced. Splitting can be used to have more granular runtime dependencies, for example the typical reduction is to split away development-only files, as those are typically not needed during runtime. As a result, closure sizes of many packages can get reduced to a half or even much less.</para>
<note><para>The reduction effects could be instead achieved by building the parts in completely separate derivations. That would often additionally reduce build-time closures, but it tends to be much harder to write such derivations, as build systems typically assume all parts are being built at once. This compromise approach of single source package producing multiple binary packages is also utilized often by rpm and deb.</para></note>
</section>
<para>
The main motivation is to save disk space by reducing runtime closure sizes;
consequently also sizes of substituted binaries get reduced. Splitting can
be used to have more granular runtime dependencies, for example the typical
reduction is to split away development-only files, as those are typically
not needed during runtime. As a result, closure sizes of many packages can
get reduced to a half or even much less.
</para>
<note>
<para>
The reduction effects could be instead achieved by building the parts in
completely separate derivations. That would often additionally reduce
build-time closures, but it tends to be much harder to write such
derivations, as build systems typically assume all parts are being built at
once. This compromise approach of a single source package producing multiple
binary packages is also often utilized by rpm and deb.
</para>
</note>
</section>
<section>
<title>Installing a split package</title>
<para>
When installing a package via <varname>systemPackages</varname> or
<command>nix-env</command> you have several options:
</para>
<section><title>Installing a split package</title>
<para>When installing a package via <varname>systemPackages</varname> or <command>nix-env</command> you have several options:</para>
<itemizedlist>
<listitem><para>You can install particular outputs explicitly, as each is available in the Nix language as an attribute of the package. The <varname>outputs</varname> attribute contains a list of output names.</para></listitem>
<listitem><para>You can let it use the default outputs. These are handled by <varname>meta.outputsToInstall</varname> attribute that contains a list of output names.</para>
<para>TODO: more about tweaking the attribute, etc.</para></listitem>
<listitem><para>NixOS provides configuration option <varname>environment.extraOutputsToInstall</varname> that allows adding extra outputs of <varname>environment.systemPackages</varname> atop the default ones. It's mainly meant for documentation and debug symbols, and it's also modified by specific options.</para>
<note><para>At this moment there is no similar configurability for packages installed by <command>nix-env</command>. You can still use approach from <xref linkend="sec-modify-via-packageOverrides" /> to override <varname>meta.outputsToInstall</varname> attributes, but that's a rather inconvenient way.</para></note>
<listitem>
<para>
You can install particular outputs explicitly, as each is available in the
Nix language as an attribute of the package. The
<varname>outputs</varname> attribute contains a list of output names.
</para>
</listitem>
<listitem>
<para>
You can let it use the default outputs. These are handled by the
<varname>meta.outputsToInstall</varname> attribute, which contains a list of
output names.
</para>
<para>
TODO: more about tweaking the attribute, etc.
</para>
</listitem>
<listitem>
<para>
NixOS provides configuration option
<varname>environment.extraOutputsToInstall</varname> that allows adding
extra outputs of <varname>environment.systemPackages</varname> atop the
default ones. It's mainly meant for documentation and debug symbols, and
it's also modified by specific options.
</para>
<note>
<para>
At this moment there is no similar configurability for packages installed
by <command>nix-env</command>. You can still use the approach from
<xref linkend="sec-modify-via-packageOverrides" /> to override
<varname>meta.outputsToInstall</varname> attributes, but that's a rather
inconvenient way.
</para>
</note>
</listitem>
</itemizedlist>
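 <para>
  For example, the <varname>environment.extraOutputsToInstall</varname>
  option mentioned above could be set in a NixOS configuration roughly like
  this (a sketch; which outputs are worth adding depends on your needs):
<programlisting>
environment.systemPackages = [ pkgs.openssl ];
# additionally install the "doc" and "man" outputs of the packages above
environment.extraOutputsToInstall = [ "doc" "man" ];
</programlisting>
 </para>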
</section>
</section>
<section>
<title>Using a split package</title>
<section><title>Using a split package</title>
<para>In the Nix language the individual outputs can be reached explicitly as attributes, e.g. <varname>coreutils.info</varname>, but the typical case is just using packages as build inputs.</para>
<para>When a multiple-output derivation gets into a build input of another derivation, the <varname>dev</varname> output is added if it exists, otherwise the first output is added. In addition to that, <varname>propagatedBuildOutputs</varname> of that package which by default contain <varname>$outputBin</varname> and <varname>$outputLib</varname> are also added. (See <xref linkend="multiple-output-file-type-groups" />.)</para>
</section>
<para>
In the Nix language the individual outputs can be reached explicitly as
attributes, e.g. <varname>coreutils.info</varname>, but the typical case is
just using packages as build inputs.
</para>
<para>
When a multiple-output derivation gets into a build input of another
derivation, the <varname>dev</varname> output is added if it exists,
otherwise the first output is added. In addition to that,
<varname>propagatedBuildOutputs</varname> of that package which by default
contain <varname>$outputBin</varname> and <varname>$outputLib</varname> are
also added. (See <xref linkend="multiple-output-file-type-groups" />.)
</para>
</section>
<section>
<title>Writing a split derivation</title>
<section><title>Writing a split derivation</title>
<para>Here you find how to write a derivation that produces multiple outputs.</para>
<para>In nixpkgs there is a framework supporting multiple-output derivations. It tries to cover most cases by default behavior. You can find the source separated in &lt;<filename>nixpkgs/pkgs/build-support/setup-hooks/multiple-outputs.sh</filename>&gt;; it's relatively well-readable. The whole machinery is triggered by defining the <varname>outputs</varname> attribute to contain the list of desired output names (strings).</para>
<programlisting>outputs = [ "bin" "dev" "out" "doc" ];</programlisting>
<para>Often such a single line is enough. For each output an equally named environment variable is passed to the builder and contains the path in nix store for that output. By convention, the first output should contain the executable programs provided by the package as that output is used by Nix in string conversions, allowing references to binaries like <literal>${pkgs.perl}/bin/perl</literal> to always work. Typically you also want to have the main <varname>out</varname> output, as it catches any files that didn't get elsewhere.</para>
<para>
Here you find how to write a derivation that produces multiple outputs.
</para>
<note><para>There is a special handling of the <varname>debug</varname> output, described at <xref linkend="stdenv-separateDebugInfo" />.</para></note>
<para>
In nixpkgs there is a framework supporting multiple-output derivations. It
tries to cover most cases by default behavior. You can find the source
separated in
&lt;<filename>nixpkgs/pkgs/build-support/setup-hooks/multiple-outputs.sh</filename>&gt;;
it's relatively readable. The whole machinery is triggered by defining
the <varname>outputs</varname> attribute to contain the list of desired
output names (strings).
</para>
<programlisting>outputs = [ "bin" "dev" "out" "doc" ];</programlisting>
<para>
Often such a single line is enough. For each output an equally named
environment variable is passed to the builder and contains the path in nix
store for that output. Typically you also want to have the main
<varname>out</varname> output, as it catches any files that didn't get
elsewhere.
</para>
<note>
<para>
There is a special handling of the <varname>debug</varname> output,
described at <xref linkend="stdenv-separateDebugInfo" />.
</para>
</note>
<section xml:id="multiple-output-file-binaries-first-convention">
<title><quote>Binaries first</quote></title>
<para>
A commonly adopted convention in <literal>nixpkgs</literal> is that
executables provided by the package are contained within its first output.
This convention allows the dependent packages to reference the executables
provided by packages in a uniform manner. For instance, provided with the
knowledge that the <literal>perl</literal> package contains a
<literal>perl</literal> executable it can be referenced as
<literal>${pkgs.perl}/bin/perl</literal> within a Nix derivation that needs
to execute a Perl script.
</para>
<para>
The <literal>glibc</literal> package is a deliberate single exception to
the <quote>binaries first</quote> convention. The <literal>glibc</literal>
has <literal>libs</literal> as its first output, allowing the libraries
provided by <literal>glibc</literal> to be referenced directly (e.g.
<literal>${stdenv.glibc}/lib/ld-linux-x86-64.so.2</literal>). The
executables provided by <literal>glibc</literal> can be accessed via its
<literal>bin</literal> attribute (e.g.
<literal>${stdenv.glibc.bin}/bin/ldd</literal>).
</para>
<para>
The reason why <literal>glibc</literal> deviates from the convention is
that referencing a library provided by <literal>glibc</literal> is a
very common operation among Nix packages. For instance, third-party
executables packaged by Nix are typically patched and relinked with the
relevant version of <literal>glibc</literal> libraries from Nix packages
(please see the documentation on
<link xlink:href="https://nixos.org/patchelf.html">patchelf</link> for more
details).
</para>
</section>
<section xml:id="multiple-output-file-type-groups">
<title>File type groups</title>
<para>The support code currently recognizes some particular kinds of outputs and either instructs the build system of the package to put files into their desired outputs or it moves the files during the fixup phase. Each group of file types has an <varname>outputFoo</varname> variable specifying the output name where they should go. If that variable isn't defined by the derivation writer, it is guessed &ndash; a default output name is defined, falling back to other possibilities if the output isn't defined.</para>
<para>
The support code currently recognizes some particular kinds of outputs and
either instructs the build system of the package to put files into their
desired outputs or it moves the files during the fixup phase. Each group of
file types has an <varname>outputFoo</varname> variable specifying the
output name where they should go. If that variable isn't defined by the
derivation writer, it is guessed &ndash; a default output name is defined,
falling back to other possibilities if the output isn't defined.
</para>
<variablelist>
<varlistentry><term><varname>
$outputDev</varname></term><listitem><para>
is for development-only files. These include C(++) headers, pkg-config, cmake and aclocal files. They go to <varname>dev</varname> or <varname>out</varname> by default.
</para></listitem>
<varlistentry>
<term>
<varname> $outputDev</varname>
</term>
<listitem>
<para>
is for development-only files. These include C(++) headers, pkg-config,
cmake and aclocal files. They go to <varname>dev</varname> or
<varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname> $outputBin</varname>
</term>
<listitem>
<para>
is meant for user-facing binaries, typically residing in bin/. They go
to <varname>bin</varname> or <varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname> $outputLib</varname>
</term>
<listitem>
<para>
is meant for libraries, typically residing in <filename>lib/</filename>
and <filename>libexec/</filename>. They go to <varname>lib</varname> or
<varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname> $outputDoc</varname>
</term>
<listitem>
<para>
is for user documentation, typically residing in
<filename>share/doc/</filename>. It goes to <varname>doc</varname> or
<varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname> $outputDevdoc</varname>
</term>
<listitem>
<para>
is for <emphasis>developer</emphasis> documentation. Currently we count
gtk-doc and devhelp books in there. It goes to <varname>devdoc</varname>
or is removed (!) by default. This is because e.g. gtk-doc tends to be
rather large and completely unused by nixpkgs users.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname> $outputMan</varname>
</term>
<listitem>
<para>
is for man pages (except for section 3). They go to
<varname>man</varname> or <varname>$outputBin</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname> $outputDevman</varname>
</term>
<listitem>
<para>
is for section 3 man pages. They go to <varname>devman</varname> or
<varname>$outputMan</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname> $outputInfo</varname>
</term>
<listitem>
<para>
is for info pages. They go to <varname>info</varname> or
<varname>$outputBin</varname> by default.
</para>
</listitem>
</varlistentry>
<varlistentry><term><varname>
$outputBin</varname></term><listitem><para>
is meant for user-facing binaries, typically residing in bin/. They go to <varname>bin</varname> or <varname>out</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputLib</varname></term><listitem><para>
is meant for libraries, typically residing in <filename>lib/</filename> and <filename>libexec/</filename>. They go to <varname>lib</varname> or <varname>out</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputDoc</varname></term><listitem><para>
is for user documentation, typically residing in <filename>share/doc/</filename>. It goes to <varname>doc</varname> or <varname>out</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputDevdoc</varname></term><listitem><para>
is for <emphasis>developer</emphasis> documentation. Currently we count gtk-doc and devhelp books in there. It goes to <varname>devdoc</varname> or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputMan</varname></term><listitem><para>
is for man pages (except for section 3). They go to <varname>man</varname> or <varname>$outputBin</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputDevman</varname></term><listitem><para>
is for section 3 man pages. They go to <varname>devman</varname> or <varname>$outputMan</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputInfo</varname></term><listitem><para>
is for info pages. They go to <varname>info</varname> or <varname>$outputBin</varname> by default.
</para></listitem></varlistentry>
</variablelist>
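 <para>
  As a sketch of how these variables can be used, a package that wants to
  keep its developer documentation rather than have it removed could declare
  something like the following (the output names are only illustrative):
<programlisting>
outputs = [ "bin" "dev" "out" "doc" ];
# keep developer documentation (gtk-doc/devhelp) in "doc" instead of removing it
outputDevdoc = "doc";
</programlisting>
 </para>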
</section>
<section><title>Common caveats</title>
<section>
<title>Common caveats</title>
<itemizedlist>
<listitem><para>Some configure scripts don't like some of the parameters passed by default by the framework, e.g. <literal>--docdir=/foo/bar</literal>. You can disable this by setting <literal>setOutputFlags = false;</literal>.</para></listitem>
<listitem><para>The outputs of a single derivation can retain references to each other, but note that circular references are not allowed. (And each strongly-connected component would act as a single output anyway.)</para></listitem>
<listitem><para>Most of split packages contain their core functionality in libraries. These libraries tend to refer to various kind of data that typically gets into <varname>out</varname>, e.g. locale strings, so there is often no advantage in separating the libraries into <varname>lib</varname>, as keeping them in <varname>out</varname> is easier.</para></listitem>
<listitem><para>Some packages have hidden assumptions on install paths, which complicates splitting.</para></listitem>
<listitem>
<para>
Some configure scripts don't like some of the parameters passed by
default by the framework, e.g. <literal>--docdir=/foo/bar</literal>. You
can disable this by setting <literal>setOutputFlags = false;</literal>.
</para>
</listitem>
<listitem>
<para>
The outputs of a single derivation can retain references to each other,
but note that circular references are not allowed. (And each
strongly-connected component would act as a single output anyway.)
</para>
</listitem>
<listitem>
<para>
Most split packages contain their core functionality in libraries.
These libraries tend to refer to various kinds of data that typically gets
into <varname>out</varname>, e.g. locale strings, so there is often no
advantage in separating the libraries into <varname>lib</varname>, as
keeping them in <varname>out</varname> is easier.
</para>
</listitem>
<listitem>
<para>
Some packages have hidden assumptions on install paths, which complicates
splitting.
</para>
</listitem>
</itemizedlist>
</section>
</section><!--Writing a split derivation-->
</section>
<!--Writing a split derivation-->
</chapter>

View File

@ -64,7 +64,7 @@ stdenv.mkDerivation {
sha256 = "1ian3kwh2vg6hr3ymrv48s04gijs539vzrq62xr76bxbhbwnz2np";
};
inherit noSysDirs;
configureFlags = "--target=arm-linux";
configureFlags = [ "--target=arm-linux" ];
}
---

View File

@ -1,95 +1,117 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-overlays">
<title>Overlays</title>
<para>This chapter describes how to extend and change Nixpkgs packages using
overlays. Overlays are used to add layers in the fix-point used by Nixpkgs
to compose the set of all packages.</para>
<para>Nixpkgs can be configured with a list of overlays, which are
applied in order. This means that the order of the overlays can be significant
if multiple layers override the same package.</para>
<title>Overlays</title>
<para>
This chapter describes how to extend and change Nixpkgs packages using
overlays. Overlays are used to add layers in the fix-point used by Nixpkgs to
compose the set of all packages.
</para>
<para>
Nixpkgs can be configured with a list of overlays, which are applied in
order. This means that the order of the overlays can be significant if
multiple layers override the same package.
</para>
<!--============================================================-->
<section xml:id="sec-overlays-install">
<title>Installing overlays</title>
<section xml:id="sec-overlays-install">
<title>Installing overlays</title>
<para>The list of overlays is determined as follows.</para>
<para>If the <varname>overlays</varname> argument is not provided explicitly, we look for overlays in a path. The path
is determined as follows:
<orderedlist>
<para>
The list of overlays is determined as follows.
</para>
<para>
If the <varname>overlays</varname> argument is not provided explicitly, we
look for overlays in a path. The path is determined as follows:
<orderedlist>
<listitem>
<para>First, if an <varname>overlays</varname> argument to the nixpkgs function itself is given,
then that is used.</para>
<para>This can be passed explicitly when importing nipxkgs, for example
<literal>import &lt;nixpkgs> { overlays = [ overlay1 overlay2 ]; }</literal>.</para>
<para>
First, if an <varname>overlays</varname> argument to the nixpkgs function
itself is given, then that is used.
</para>
<para>
This can be passed explicitly when importing nixpkgs, for example
<literal>import &lt;nixpkgs> { overlays = [ overlay1 overlay2 ];
}</literal>.
</para>
</listitem>
<listitem>
<para>Otherwise, if the Nix path entry <literal>&lt;nixpkgs-overlays></literal> exists, we look for overlays
at that path, as described below.</para>
<para>See the section on <literal>NIX_PATH</literal> in the Nix manual for more details on how to
set a value for <literal>&lt;nixpkgs-overlays>.</literal></para>
<para>
Otherwise, if the Nix path entry <literal>&lt;nixpkgs-overlays></literal>
exists, we look for overlays at that path, as described below.
</para>
<para>
See the section on <literal>NIX_PATH</literal> in the Nix manual for more
details on how to set a value for
<literal>&lt;nixpkgs-overlays>.</literal>
</para>
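<para>
For instance, assuming the overlays live in <filename>~/overlays</filename>
(an arbitrary example path), the entry could be set from a shell like this:
<screen>
$ export NIX_PATH=nixpkgs-overlays=$HOME/overlays:$NIX_PATH</screen>
</para>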
</listitem>
<listitem>
<para>If one of <filename>~/.config/nixpkgs/overlays.nix</filename> and
<filename>~/.config/nixpkgs/overlays/</filename> exists, then we look for overlays at that path, as
described below. It is an error if both exist.</para>
<para>
If one of <filename>~/.config/nixpkgs/overlays.nix</filename> and
<filename>~/.config/nixpkgs/overlays/</filename> exists, then we look for
overlays at that path, as described below. It is an error if both exist.
</para>
</listitem>
</orderedlist>
</para>
</orderedlist>
</para>
<para>If we are looking for overlays at a path, then there are two cases:
<itemizedlist>
<listitem>
<para>If the path is a file, then the file is imported as a Nix expression and used as the list of
overlays.</para>
</listitem>
<listitem>
<para>If the path is a directory, then we take the content of the directory, order it
lexicographically, and attempt to interpret each as an overlay by:
<para>
If we are looking for overlays at a path, then there are two cases:
<itemizedlist>
<listitem>
<para>Importing the file, if it is a <literal>.nix</literal> file.</para>
<para>
If the path is a file, then the file is imported as a Nix expression and
used as the list of overlays.
</para>
</listitem>
<listitem>
<para>Importing a top-level <filename>default.nix</filename> file, if it is a directory.</para>
<para>
If the path is a directory, then we take the content of the directory,
order it lexicographically, and attempt to interpret each as an overlay
by:
<itemizedlist>
<listitem>
<para>
Importing the file, if it is a <literal>.nix</literal> file.
</para>
</listitem>
<listitem>
<para>
Importing a top-level <filename>default.nix</filename> file, if it is
a directory.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</para>
</itemizedlist>
</para>
<para>On a NixOS system the value of the <literal>nixpkgs.overlays</literal> option, if present,
is passed to the system Nixpkgs directly as an argument. Note that this does not affect the overlays for
non-NixOS operations (e.g. <literal>nix-env</literal>), which are looked up independently.</para>
<para>The <filename>overlays.nix</filename> option therefore provides a convenient way to use the same
overlays for a NixOS system configuration and user configuration: the same file can be used
as <filename>overlays.nix</filename> and imported as the value of <literal>nixpkgs.overlays</literal>.</para>
</section>
<para>
On a NixOS system the value of the <literal>nixpkgs.overlays</literal>
option, if present, is passed to the system Nixpkgs directly as an argument.
Note that this does not affect the overlays for non-NixOS operations (e.g.
<literal>nix-env</literal>), which are looked up independently.
</para>
<para>
The <filename>overlays.nix</filename> option therefore provides a convenient
way to use the same overlays for a NixOS system configuration and user
configuration: the same file can be used as
<filename>overlays.nix</filename> and imported as the value of
<literal>nixpkgs.overlays</literal>.
</para>
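<para>
As an illustration (the file location is an assumption), a NixOS
configuration could reuse the same list of overlays like this:
<programlisting>
{ config, pkgs, ... }:

{
  # overlays.nix evaluates to a list of overlays, so it can be imported
  # directly as the value of the option.
  nixpkgs.overlays = import /etc/nixos/overlays.nix;
}
</programlisting>
</para>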
</section>
<!--============================================================-->
<section xml:id="sec-overlays-definition">
<title>Defining overlays</title>
<section xml:id="sec-overlays-definition">
<title>Defining overlays</title>
<para>Overlays are Nix functions which accept two arguments,
conventionally called <varname>self</varname> and <varname>super</varname>,
and return a set of packages. For example, the following is a valid overlay.</para>
<para>
Overlays are Nix functions which accept two arguments, conventionally called
<varname>self</varname> and <varname>super</varname>, and return a set of
packages. For example, the following is a valid overlay.
</para>
<programlisting>
self: super:
@ -104,31 +126,39 @@ self: super:
}
</programlisting>
<para>The first argument (<varname>self</varname>) corresponds to the final package
set. You should use this set for the dependencies of all packages specified in your
overlay. For example, all the dependencies of <varname>rr</varname> in the example above come
from <varname>self</varname>, as well as the overridden dependencies used in the
<varname>boost</varname> override.</para>
<para>
The first argument (<varname>self</varname>) corresponds to the final
package set. You should use this set for the dependencies of all packages
specified in your overlay. For example, all the dependencies of
<varname>rr</varname> in the example above come from
<varname>self</varname>, as well as the overridden dependencies used in the
<varname>boost</varname> override.
</para>
<para>The second argument (<varname>super</varname>)
corresponds to the result of the evaluation of the previous stages of
Nixpkgs. It does not contain any of the packages added by the current
overlay, nor any of the following overlays. This set should be used either
to refer to packages you wish to override, or to access functions defined
in Nixpkgs. For example, the original recipe of <varname>boost</varname>
in the above example, comes from <varname>super</varname>, as well as the
<varname>callPackage</varname> function.</para>
<para>
The second argument (<varname>super</varname>) corresponds to the result of
the evaluation of the previous stages of Nixpkgs. It does not contain any of
the packages added by the current overlay, nor any of the following
overlays. This set should be used either to refer to packages you wish to
override, or to access functions defined in Nixpkgs. For example, the
original recipe of <varname>boost</varname> in the above example comes from
<varname>super</varname>, as well as the <varname>callPackage</varname>
function.
</para>
<para>The value returned by this function should be a set similar to
<filename>pkgs/top-level/all-packages.nix</filename>, containing
overridden and/or new packages.</para>
<para>Overlays are similar to other methods for customizing Nixpkgs, in particular
the <literal>packageOverrides</literal> attribute described in <xref linkend="sec-modify-via-packageOverrides"/>.
Indeed, <literal>packageOverrides</literal> acts as an overlay with only the
<varname>super</varname> argument. It is therefore appropriate for basic use,
but overlays are more powerful and easier to distribute.</para>
</section>
<para>
The value returned by this function should be a set similar to
<filename>pkgs/top-level/all-packages.nix</filename>, containing overridden
and/or new packages.
</para>
<para>
Overlays are similar to other methods for customizing Nixpkgs, in particular
the <literal>packageOverrides</literal> attribute described in
<xref linkend="sec-modify-via-packageOverrides"/>. Indeed,
<literal>packageOverrides</literal> acts as an overlay with only the
<varname>super</varname> argument. It is therefore appropriate for basic
use, but overlays are more powerful and easier to distribute.
</para>
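<para>
As a further illustration, here is a minimal overlay sketch (the attribute
name <varname>myHello</varname> and the patch file are hypothetical) that
could be saved as a <literal>.nix</literal> file under
<filename>~/.config/nixpkgs/overlays/</filename>:
<programlisting>
self: super:

{
  # Reuse the existing hello recipe from super, adding a local patch.
  myHello = super.hello.overrideAttrs (oldAttrs: {
    patches = (oldAttrs.patches or []) ++ [ ./my-hello.patch ];
  });
}
</programlisting>
</para>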
</section>
</chapter>

View File

@ -1,4 +1,5 @@
.docbook .xref img[src^=images\/callouts\/],
.screen img,
.programlisting img {
width: 1em;
}

File diff suppressed because it is too large Load Diff

View File

@ -1,27 +1,25 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-platform-nodes">
<title>Platform Notes</title>
<section xml:id="sec-darwin">
<title>Darwin (macOS)</title>
<title>Platform Notes</title>
<section xml:id="sec-darwin">
<title>Darwin (macOS)</title>
<para>Some common issues when packaging software for darwin:</para>
<itemizedlist>
<listitem>
<para>
The darwin <literal>stdenv</literal> uses clang instead of gcc.
When referring to the compiler <varname>$CC</varname> or <command>cc</command>
will work in both cases. Some builds hardcode gcc/g++ in their
build scripts, that can usually be fixed with using something
like <literal>makeFlags = [ "CC=cc" ];</literal> or by patching
the build scripts.
Some common issues when packaging software for darwin:
</para>
<programlisting>
<itemizedlist>
<listitem>
<para>
The darwin <literal>stdenv</literal> uses clang instead of gcc. When
referring to the compiler, <varname>$CC</varname> or <command>cc</command>
will work in both cases. Some builds hardcode gcc/g++ in their build
scripts; that can usually be fixed by using something like
<literal>makeFlags = [ "CC=cc" ];</literal> or by patching the build
scripts.
</para>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
@ -31,18 +29,16 @@
}
</programlisting>
</listitem>
<listitem>
<para>
On darwin libraries are linked using absolute paths, libraries
are resolved by their <literal>install_name</literal> at link
time. Sometimes packages won't set this correctly causing the
library lookups to fail at runtime. This can be fixed by adding
extra linker flags or by running <command>install_name_tool -id</command>
during the <function>fixupPhase</function>.
On darwin, libraries are linked using absolute paths; libraries are
resolved by their <literal>install_name</literal> at link time. Sometimes
packages won't set this correctly, causing the library lookups to fail at
runtime. This can be fixed by adding extra linker flags or by running
<command>install_name_tool -id</command> during the
<function>fixupPhase</function>.
</para>
<programlisting>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
@ -50,16 +46,45 @@
}
</programlisting>
</listitem>
<listitem>
<para>
Even if the libraries are linked using absolute paths and resolved via
their <literal>install_name</literal> correctly, tests can sometimes fail
to run binaries. This happens because the <varname>checkPhase</varname>
runs before the libraries are installed.
</para>
<para>
This can usually be solved by running the tests after the
<varname>installPhase</varname> or alternatively by using
<varname>DYLD_LIBRARY_PATH</varname>. More information about this variable
can be found in the <citerefentry>
<refentrytitle>dyld</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> manpage.
</para>
<programlisting>
dyld: Library not loaded: /nix/store/7hnmbscpayxzxrixrgxvvlifzlxdsdir-jq-1.5-lib/lib/libjq.1.dylib
Referenced from: /private/tmp/nix-build-jq-1.5.drv-0/jq-1.5/tests/../jq
Reason: image not found
./tests/jqtest: line 5: 75779 Abort trap: 6
</programlisting>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
doInstallCheck = true;
installCheckTarget = "check";
}
</programlisting>
</listitem>
<listitem>
<para>
Some packages assume xcode is available and use <command>xcrun</command>
to resolve build tools like <command>clang</command>, etc.
This causes errors like <code>xcode-select: error: no developer tools were found at '/Applications/Xcode.app'</code>
while the build doesn't actually depend on xcode.
to resolve build tools like <command>clang</command>, etc. This causes
errors like <code>xcode-select: error: no developer tools were found at
'/Applications/Xcode.app'</code> while the build doesn't actually depend
on xcode.
</para>
<programlisting>
<programlisting>
stdenv.mkDerivation {
name = "libfoo-1.2.3";
# ...
@ -69,15 +94,12 @@
'';
}
</programlisting>
<para>
The package <literal>xcbuild</literal> can be used to build projects
that really depend on Xcode, however projects that build some kind of
graphical interface won't work without using Xcode in an impure way.
The package <literal>xcbuild</literal> can be used to build projects that
really depend on Xcode; however, projects that build some kind of graphical
interface won't work without using Xcode in an impure way.
</para>
</listitem>
</itemizedlist>
</section>
</itemizedlist>
</section>
</chapter>

View File

@ -1,223 +1,219 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-quick-start">
<title>Quick Start to Adding a Package</title>
<para>To add a package to Nixpkgs:
<orderedlist>
<title>Quick Start to Adding a Package</title>
<para>
To add a package to Nixpkgs:
<orderedlist>
<listitem>
<para>Checkout the Nixpkgs source tree:
<para>
Checkout the Nixpkgs source tree:
<screen>
$ git clone git://github.com/NixOS/nixpkgs.git
$ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs</screen>
</para>
</listitem>
<listitem>
<para>Find a good place in the Nixpkgs tree to add the Nix
expression for your package. For instance, a library package
typically goes into
<para>
Find a good place in the Nixpkgs tree to add the Nix expression for your
package. For instance, a library package typically goes into
<filename>pkgs/development/libraries/<replaceable>pkgname</replaceable></filename>,
while a web browser goes into
<filename>pkgs/applications/networking/browsers/<replaceable>pkgname</replaceable></filename>.
See <xref linkend="sec-organisation" /> for some hints on the tree
organisation. Create a directory for your package, e.g.
<screen>
$ mkdir pkgs/development/libraries/libfoo</screen>
</para>
</listitem>
<listitem>
<para>In the package directory, create a Nix expression — a piece
of code that describes how to build the package. In this case, it
should be a <emphasis>function</emphasis> that is called with the
package dependencies as arguments, and returns a build of the
package in the Nix store. The expression should usually be called
<filename>default.nix</filename>.
<para>
In the package directory, create a Nix expression — a piece of code that
describes how to build the package. In this case, it should be a
<emphasis>function</emphasis> that is called with the package dependencies
as arguments, and returns a build of the package in the Nix store. The
expression should usually be called <filename>default.nix</filename>.
<screen>
$ emacs pkgs/development/libraries/libfoo/default.nix
$ git add pkgs/development/libraries/libfoo/default.nix</screen>
</para>
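<para>
As a rough sketch (the URL, version and hash below are placeholders for
this hypothetical <literal>libfoo</literal>), such an expression might look
like this:
<programlisting>
{ stdenv, fetchurl }:

stdenv.mkDerivation rec {
  name = "libfoo-1.2.3";

  src = fetchurl {
    url = "http://example.org/libfoo-1.2.3.tar.gz";
    # placeholder hash; obtain the real one with nix-prefetch-url
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  meta = with stdenv.lib; {
    description = "A hypothetical example library";
    license = licenses.mit;
    platforms = platforms.unix;
  };
}
</programlisting>
</para>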
<para>You can have a look at the existing Nix expressions under
<filename>pkgs/</filename> to see how it's done. Here are some
good ones:
<para>
You can have a look at the existing Nix expressions under
<filename>pkgs/</filename> to see how it's done. Here are some good
ones:
<itemizedlist>
<listitem>
<para>GNU Hello: <link
<para>
GNU Hello:
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/misc/hello/default.nix"><filename>pkgs/applications/misc/hello/default.nix</filename></link>.
Trivial package, which specifies some <varname>meta</varname>
attributes which is good practice.</para>
attributes which is good practice.
</para>
</listitem>
<listitem>
<para>GNU cpio: <link
<para>
GNU cpio:
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/archivers/cpio/default.nix"><filename>pkgs/tools/archivers/cpio/default.nix</filename></link>.
Also a simple package. The generic builder in
<varname>stdenv</varname> does everything for you. It has
no dependencies beyond <varname>stdenv</varname>.</para>
Also a simple package. The generic builder in <varname>stdenv</varname>
does everything for you. It has no dependencies beyond
<varname>stdenv</varname>.
</para>
</listitem>
<listitem>
<para>GNU Multiple Precision arithmetic library (GMP): <link
<para>
GNU Multiple Precision arithmetic library (GMP):
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/gmp/5.1.x.nix"><filename>pkgs/development/libraries/gmp/5.1.x.nix</filename></link>.
Also done by the generic builder, but has a dependency on
<varname>m4</varname>.</para>
<varname>m4</varname>.
</para>
</listitem>
<listitem>
<para>Pan, a GTK-based newsreader: <link
<para>
Pan, a GTK-based newsreader:
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/networking/newsreaders/pan/default.nix"><filename>pkgs/applications/networking/newsreaders/pan/default.nix</filename></link>.
Has an optional dependency on <varname>gtkspell</varname>,
which is only built if <varname>spellCheck</varname> is
<literal>true</literal>.</para>
Has an optional dependency on <varname>gtkspell</varname>, which is
only built if <varname>spellCheck</varname> is <literal>true</literal>.
</para>
</listitem>
<listitem>
<para>Apache HTTPD: <link
<para>
Apache HTTPD:
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/http/apache-httpd/2.4.nix"><filename>pkgs/servers/http/apache-httpd/2.4.nix</filename></link>.
A bunch of optional features, variable substitutions in the
configure flags, a post-install hook, and miscellaneous
hackery.</para>
A bunch of optional features, variable substitutions in the configure
flags, a post-install hook, and miscellaneous hackery.
</para>
</listitem>
<listitem>
<para>Thunderbird: <link
<para>
Thunderbird:
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/networking/mailreaders/thunderbird/default.nix"><filename>pkgs/applications/networking/mailreaders/thunderbird/default.nix</filename></link>.
Lots of dependencies.</para>
Lots of dependencies.
</para>
</listitem>
<listitem>
<para>JDiskReport, a Java utility: <link
<para>
JDiskReport, a Java utility:
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/misc/jdiskreport/default.nix"><filename>pkgs/tools/misc/jdiskreport/default.nix</filename></link>
(and the <link
(and the
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/misc/jdiskreport/builder.sh">builder</link>).
Nixpkgs doesn't have a decent <varname>stdenv</varname> for
Java yet so this is pretty ad-hoc.</para>
Nixpkgs doesn't have a decent <varname>stdenv</varname> for Java yet
so this is pretty ad-hoc.
</para>
</listitem>
<listitem>
<para>XML::Simple, a Perl module: <link
<para>
XML::Simple, a Perl module:
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/perl-packages.nix"><filename>pkgs/top-level/perl-packages.nix</filename></link>
(search for the <varname>XMLSimple</varname> attribute).
Most Perl modules are so simple to build that they are
defined directly in <filename>perl-packages.nix</filename>;
no need to make a separate file for them.</para>
(search for the <varname>XMLSimple</varname> attribute). Most Perl
modules are so simple to build that they are defined directly in
<filename>perl-packages.nix</filename>; no need to make a separate file
for them.
</para>
</listitem>
<listitem>
<para>Adobe Reader: <link
<para>
Adobe Reader:
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/misc/adobe-reader/default.nix"><filename>pkgs/applications/misc/adobe-reader/default.nix</filename></link>.
Shows how binary-only packages can be supported. In
particular the <link
Shows how binary-only packages can be supported. In particular the
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/misc/adobe-reader/builder.sh">builder</link>
uses <command>patchelf</command> to set the RUNPATH and ELF
interpreter of the executables so that the right libraries
are found at runtime.</para>
</listitem>
</itemizedlist>
uses <command>patchelf</command> to set the RUNPATH and ELF interpreter
of the executables so that the right libraries are found at runtime.
</para>
<para>Some notes:
</listitem>
</itemizedlist>
</para>
<para>
Some notes:
<itemizedlist>
<listitem>
<para>All <varname linkend="chap-meta">meta</varname>
attributes are optional, but it's still a good idea to
provide at least the <varname>description</varname>,
<varname>homepage</varname> and <varname
linkend="sec-meta-license">license</varname>.</para>
</listitem>
<listitem>
<para>You can use <command>nix-prefetch-url</command> (or similar nix-prefetch-git, etc)
<replaceable>url</replaceable> to get the SHA-256 hash of
source distributions. There are similar commands as <command>nix-prefetch-git</command> and
<command>nix-prefetch-hg</command> available in <literal>nix-prefetch-scripts</literal> package.</para>
</listitem>
<listitem>
<para>A list of schemes for <literal>mirror://</literal>
URLs can be found in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/build-support/fetchurl/mirrors.nix"><filename>pkgs/build-support/fetchurl/mirrors.nix</filename></link>.</para>
</listitem>
</itemizedlist>
<para>
All <varname linkend="chap-meta">meta</varname> attributes are
optional, but it's still a good idea to provide at least the
<varname>description</varname>, <varname>homepage</varname> and
<varname
linkend="sec-meta-license">license</varname>.
</para>
<para>The exact syntax and semantics of the Nix expression
language, including the built-in function, are described in the
Nix manual in the <link
xlink:href="http://hydra.nixos.org/job/nix/trunk/tarball/latest/download-by-type/doc/manual/#chap-writing-nix-expressions">chapter
on writing Nix expressions</link>.</para>
</listitem>
<listitem>
<para>Add a call to the function defined in the previous step to
<para>
You can use <command>nix-prefetch-url</command>
<replaceable>url</replaceable> to get the SHA-256 hash of source
distributions. Similar commands such as
<command>nix-prefetch-git</command> and
<command>nix-prefetch-hg</command> are available in the
<literal>nix-prefetch-scripts</literal> package.
</para>
</listitem>
<listitem>
<para>
A list of schemes for <literal>mirror://</literal> URLs can be found in
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/build-support/fetchurl/mirrors.nix"><filename>pkgs/build-support/fetchurl/mirrors.nix</filename></link>.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The exact syntax and semantics of the Nix expression language, including
the built-in functions, are described in the Nix manual in the
<link
xlink:href="http://hydra.nixos.org/job/nix/trunk/tarball/latest/download-by-type/doc/manual/#chap-writing-nix-expressions">chapter
on writing Nix expressions</link>.
</para>
</listitem>
<listitem>
<para>
Add a call to the function defined in the previous step to
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/all-packages.nix"><filename>pkgs/top-level/all-packages.nix</filename></link>
with some descriptive name for the variable,
e.g. <varname>libfoo</varname>.
<screen>
with some descriptive name for the variable, e.g.
<varname>libfoo</varname>.
<screen>
$ emacs pkgs/top-level/all-packages.nix</screen>
</para>
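<para>
For the hypothetical <literal>libfoo</literal> above, the added attribute
might look like this (the relative path is an assumption):
<programlisting>
libfoo = callPackage ../development/libraries/libfoo { };
</programlisting>
</para>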
<para>The attributes in that file are sorted by category (like
“Development / Libraries”) that more-or-less correspond to the
directory structure of Nixpkgs, and then by attribute name.</para>
<para>
The attributes in that file are sorted by category (like “Development /
Libraries”), which more-or-less corresponds to the directory structure of
Nixpkgs, and then by attribute name.
</para>
</listitem>
<listitem>
<para>To test whether the package builds, run the following command
from the root of the nixpkgs source tree:
<screen>
<para>
To test whether the package builds, run the following command from the
root of the nixpkgs source tree:
<screen>
$ nix-build -A libfoo</screen>
where <varname>libfoo</varname> should be the variable name
defined in the previous step. You may want to add the flag
<option>-K</option> to keep the temporary build directory in case
something fails. If the build succeeds, a symlink
<filename>./result</filename> to the package in the Nix store is
created.</para>
</listitem>
<listitem>
<para>If you want to install the package into your profile
(optional), do
<screen>
$ nix-env -f . -iA libfoo</screen>
where <varname>libfoo</varname> should be the variable name defined in the
previous step. You may want to add the flag <option>-K</option> to keep
the temporary build directory in case something fails. If the build
succeeds, a symlink <filename>./result</filename> to the package in the
Nix store is created.
</para>
</listitem>
<listitem>
<para>Optionally commit the new package and open a pull request, or send a patch to
<literal>https://groups.google.com/forum/#!forum/nix-devel</literal>.</para>
<para>
If you want to install the package into your profile (optional), do
<screen>
$ nix-env -f . -iA libfoo</screen>
</para>
</listitem>
</orderedlist>
</para>
<listitem>
<para>
Optionally commit the new package and open a pull request, or send a patch
to <literal>https://groups.google.com/forum/#!forum/nix-devel</literal>.
</para>
</listitem>
</orderedlist>
</para>
</chapter>

File diff suppressed because it is too large Load Diff

View File

@ -3,91 +3,153 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-reviewing-contributions">
<title>Reviewing contributions</title>
<warning>
<para>The following section is a draft and reviewing policy is still being
discussed.</para>
</warning>
<para>The nixpkgs project receives a fairly high number of contributions via
GitHub pull-requests. Reviewing and approving these is an important task and a
way to contribute to the project.</para>
<para>The high change rate of nixpkgs makes any pull request that is open for
long enough subject to conflicts that will require extra work from the
submitter or the merger. Reviewing pull requests in a timely manner and being
<title>Reviewing contributions</title>
<warning>
<para>
The following section is a draft, and the policy for reviewing is still
being discussed in issues such as
<link
xlink:href="https://github.com/NixOS/nixpkgs/issues/11166">#11166
</link> and
<link
xlink:href="https://github.com/NixOS/nixpkgs/issues/20836">#20836
</link>.
</para>
</warning>
<para>
The nixpkgs project receives a fairly high number of contributions via GitHub
pull-requests. Reviewing and approving these is an important task and a way
to contribute to the project.
</para>
<para>
The high change rate of nixpkgs makes any pull request that remains open for
too long subject to conflicts that will require extra work from the submitter
or the merger. Reviewing pull requests in a timely manner and being
responsive to the comments is the key to avoid these. GitHub provides sort
filters that can be used to see the <link
filters that can be used to see the
<link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most
recently</link> and the <link
recently</link> and the
<link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-asc">least
recently</link> updated pull-requests.</para>
<para>When reviewing a pull request, please always be nice and polite.
recently</link> updated pull-requests. We highly encourage looking at
<link xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+review%3Anone+status%3Asuccess+-label%3A%222.status%3A+work-in-progress%22+no%3Aproject+no%3Aassignee+no%3Amilestone">
this list of ready to merge, unreviewed pull requests</link>.
</para>
<para>
When reviewing a pull request, please always be nice and polite.
Controversial changes can lead to controversial opinions, but it is important
to respect every community member and their work.</para>
<para>GitHub provides reactions, they are a simple and quick way to provide
feedback to pull-requests or any comments. The thumb-down reaction should be
used with care and if possible accompanied with some explanations so the
submitter has directions to improve their contribution.</para>
<para>Pull-requests reviews should include a list of what has been reviewed in a
comment, so other reviewers and mergers can know the state of the
review.</para>
<para>All the review template samples provided in this section are generic and
to respect every community member and their work.
</para>
<para>
GitHub provides reactions as a simple and quick way to provide feedback to
pull-requests or any comments. The thumb-down reaction should be used with
care and if possible accompanied with some explanation so the submitter has
directions to improve their contribution.
</para>
<para>
Pull-request reviews should include a list of what has been reviewed in a
comment, so other reviewers and mergers can know the state of the review.
</para>
<para>
All the review template samples provided in this section are generic and
meant as examples. Their usage is optional and the reviewer is free to adapt
them to their liking.</para>
them to their liking.
</para>
<section>
<title>Package updates</title>
<section><title>Package updates</title>
<para>
A package update is the most trivial and common type of pull-request. These
pull-requests mainly consist of updating the version part of the package
name and the source hash.
</para>
<para>A package update is the most trivial and common type of pull-request.
These pull-requests mainly consist in updating the version part of the package
name and the source hash.</para>
<para>It can happen that non trivial updates include patches or more complex
changes.</para>
<para>
It can happen that non-trivial updates include patches or more complex
changes.
</para>
<para>Reviewing process:</para>
<para>
Reviewing process:
</para>
<itemizedlist>
<listitem><para>Add labels to the pull-request. (Requires commit
rights)</para>
<itemizedlist>
<listitem><para><literal>8.has: package (update)</literal> and any topic
label that fit the updated package.</para></listitem>
<listitem>
<para>
Add labels to the pull-request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
<para>
<literal>8.has: package (update)</literal> and any topic label that fits
the updated package.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the package versioning is fitting the
guidelines.</para></listitem>
<listitem><para>Ensure that the commit text is fitting the
guidelines.</para></listitem>
<listitem><para>Ensure that the package maintainers are notified.</para>
<listitem>
<para>
Ensure that the package versioning fits the guidelines.
</para>
</listitem>
<listitem>
<para>
Ensure that the commit text fits the guidelines.
</para>
</listitem>
<listitem>
<para>
Ensure that the package maintainers are notified.
</para>
<itemizedlist>
<listitem><para>mention-bot usually notify GitHub users based on the
submitted changes, but it can happen that it misses some of the
package maintainers.</para></listitem>
<listitem>
<para>
<link xlink:href="https://help.github.com/articles/about-codeowners/">CODEOWNERS</link>
will make GitHub notify users based on the submitted changes, but it can
happen that it misses some of the package maintainers.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the meta field contains correct
information.</para>
<listitem>
<para>
Ensure that the meta field information is correct.
</para>
<itemizedlist>
<listitem><para>License can change with version updates, so it should be
checked to be fitting upstream license.</para></listitem>
<listitem><para>If the package has no maintainer, a maintainer must be
set. This can be the update submitter or a community member that
accepts to take maintainership of the package.</para></listitem>
<listitem>
<para>
License can change with version updates, so it should be checked to
match the upstream license.
</para>
</listitem>
<listitem>
<para>
If the package has no maintainer, a maintainer must be set. This can be
the update submitter or a community member who agrees to take
maintainership of the package.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the code contains no typos.</para></listitem>
<listitem><para>Building the package locally.</para>
<listitem>
<para>
Ensure that the code contains no typos.
</para>
</listitem>
<listitem>
<para>
Building the package locally.
</para>
<itemizedlist>
<listitem><para>Pull-requests are often targeted to the master or staging
branch so building the pull-request locally as it is submitted can
trigger a large amount of source builds.</para>
<para>It is possible to rebase the changes on nixos-unstable or
<listitem>
<para>
Pull-requests are often targeted to the master or staging branch, and
building the pull-request locally when it is submitted can trigger many
source builds.
</para>
<para>
It is possible to rebase the changes on nixos-unstable or
nixpkgs-unstable for easier review by running the following commands
from a nixpkgs clone.
<screen>
@ -100,41 +162,54 @@ $ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD <co
</screen>
<calloutlist>
<callout arearefs='reviewing-rebase-1'>
<para>This should be done only once to be able to fetch channel
branches from the nixpkgs-channels repository.</para>
<para>
This should be done only once to be able to fetch channel branches
from the nixpkgs-channels repository.
</para>
</callout>
<callout arearefs='reviewing-rebase-2'>
<para>Fetching the nixos-unstable branch.</para>
<para>
Fetching the nixos-unstable branch.
</para>
</callout>
<callout arearefs='reviewing-rebase-3'>
<para>Fetching the pull-request changes, <varname>PRNUMBER</varname>
is the number at the end of the pull-request title and
<varname>BASEBRANCH</varname> the base branch of the
pull-request.</para>
<para>
Fetching the pull-request changes, <varname>PRNUMBER</varname> is the
number at the end of the pull-request title and
<varname>BASEBRANCH</varname> the base branch of the pull-request.
</para>
</callout>
<callout arearefs='reviewing-rebase-3'>
<para>Rebasing the pull-request changes to the nixos-unstable
branch.</para>
<callout arearefs='reviewing-rebase-4'>
<para>
Rebasing the pull-request changes to the nixos-unstable branch.
</para>
</callout>
</calloutlist>
</para>
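<para>
As a sketch, the full command sequence described by the callouts above
could look like the following (the remote name <literal>channels</literal>
is an arbitrary choice, and <varname>PRNUMBER</varname> and
<varname>BASEBRANCH</varname> must be substituted):
<screen>
$ git remote add channels https://github.com/NixOS/nixpkgs-channels.git
$ git fetch channels nixos-unstable
$ git fetch origin pull/PRNUMBER/head
$ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD</screen>
</para>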
</listitem>
<listitem>
<para>The <link xlink:href="https://github.com/madjar/nox">nox</link>
tool can be used to review a pull-request content in a single command.
It doesn't rebase on a channel branch so it might trigger multiple
source builds. <varname>PRNUMBER</varname> should be replaced by the
number at the end of the pull-request title.</para>
<para>
The <link xlink:href="https://github.com/madjar/nox">nox</link> tool can
be used to review a pull-request content in a single command. It doesn't
rebase on a channel branch so it might trigger multiple source builds.
<varname>PRNUMBER</varname> should be replaced by the number at the end
of the pull-request title.
</para>
<screen>
$ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
</screen>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Running every binary.</para></listitem>
</itemizedlist>
<listitem>
<para>
Running every binary.
</para>
</listitem>
</itemizedlist>
<example><title>Sample template for a package update review</title>
<example>
<title>Sample template for a package update review</title>
<screen>
##### Reviewed points
@ -148,55 +223,105 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
##### Comments
</screen></example>
</section>
</screen>
</example>
</section>
<section>
<title>New packages</title>
<section><title>New packages</title>
<para>
New packages are a common type of pull-request. These pull requests
consist of adding a new nix-expression for a package.
</para>
<para>New packages are a common type of pull-requests. These pull requests
consists in adding a new nix-expression for a package.</para>
<para>
Reviewing process:
</para>
<para>Reviewing process:</para>
<itemizedlist>
<listitem><para>Add labels to the pull-request. (Requires commit
rights)</para>
<itemizedlist>
<listitem><para><literal>8.has: package (new)</literal> and any topic
label that fit the new package.</para></listitem>
<listitem>
<para>
Add labels to the pull-request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
<para>
<literal>8.has: package (new)</literal> and any topic label that fits the
new package.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the package versioning is fitting the
guidelines.</para></listitem>
<listitem><para>Ensure that the commit name is fitting the
guidelines.</para></listitem>
<listitem><para>Ensure that the meta field contains correct
information.</para>
<listitem>
<para>
Ensure that the package versioning fits the guidelines.
</para>
</listitem>
<listitem>
<para>
Ensure that the commit name fits the guidelines.
</para>
</listitem>
<listitem>
<para>
Ensure that the meta field contains correct information.
</para>
<itemizedlist>
<listitem><para>License must be checked to be fitting upstream
license.</para></listitem>
<listitem><para>Platforms should be set or the package will not get binary
substitutes.</para></listitem>
<listitem><para>A maintainer must be set, this can be the package
submitter or a community member that accepts to take maintainership of
the package.</para></listitem>
<listitem>
<para>
License must be checked to match the upstream license.
</para>
</listitem>
<listitem>
<para>
Platforms should be set or the package will not get binary substitutes.
</para>
</listitem>
<listitem>
<para>
A maintainer must be set; this can be the package submitter or a
community member who agrees to take maintainership of the package.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the code contains no typos.</para></listitem>
<listitem><para>Ensure the package source.</para>
<listitem>
<para>
Ensure that the code contains no typos.
</para>
</listitem>
<listitem>
<para>
Ensure the package source.
</para>
<itemizedlist>
<listitem><para>Mirrors urls should be used when
available.</para></listitem>
<listitem><para>The most appropriate function should be used (e.g.
packages from GitHub should use
<literal>fetchFromGitHub</literal>).</para></listitem>
<listitem>
<para>
Mirror URLs should be used when available.
</para>
</listitem>
<listitem>
<para>
The most appropriate function should be used (e.g. packages from GitHub
should use <literal>fetchFromGitHub</literal>).
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Building the package locally.</para></listitem>
<listitem><para>Running every binary.</para></listitem>
</itemizedlist>
<listitem>
<para>
Building the package locally.
</para>
</listitem>
<listitem>
<para>
Running every binary.
</para>
</listitem>
</itemizedlist>
<example><title>Sample template for a new package review</title>
<example>
<title>Sample template for a new package review</title>
<screen>
##### Reviewed points
@ -218,58 +343,108 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
##### Comments
</screen></example>
</section>
</screen>
</example>
</section>
<section>
<title>Module updates</title>
<section><title>Module updates</title>
<para>
Module updates are submissions changing modules in some ways. These often
contain changes to the options or introduce new options.
</para>
<para>Module updates are submissions changing modules in some ways. These often
contains changes to the options or introduce new options.</para>
<para>
Reviewing process:
</para>
<para>Reviewing process</para>
<itemizedlist>
<listitem><para>Add labels to the pull-request. (Requires commit
rights)</para>
<itemizedlist>
<listitem><para><literal>8.has: module (update)</literal> and any topic
label that fit the module.</para></listitem>
<listitem>
<para>
Add labels to the pull-request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
<para>
<literal>8.has: module (update)</literal> and any topic label that fits
the module.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the module maintainers are notified.</para>
<listitem>
<para>
Ensure that the module maintainers are notified.
</para>
<itemizedlist>
<listitem><para>Mention-bot notify GitHub users based on the submitted
changes, but it can happen that it miss some of the package
maintainers.</para></listitem>
<listitem>
<para>
<link xlink:href="https://help.github.com/articles/about-codeowners/">CODEOWNERS</link>
will make GitHub notify users based on the submitted changes, but it can
happen that it misses some of the package maintainers.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the module tests, if any, are
succeeding.</para></listitem>
<listitem><para>Ensure that the introduced options are correct.</para>
<listitem>
<para>
Ensure that the module tests, if any, are succeeding.
</para>
</listitem>
<listitem>
<para>
Ensure that the introduced options are correct.
</para>
<itemizedlist>
<listitem><para>Type should be appropriate (string related types differs
in their merging capabilities, <literal>optionSet</literal> and
<literal>string</literal> types are deprecated).</para></listitem>
<listitem><para>Description, default and example should be
provided.</para></listitem>
<listitem>
<para>
Type should be appropriate (string-related types differ in their
merging capabilities; the <literal>optionSet</literal> and
<literal>string</literal> types are deprecated).
</para>
</listitem>
<listitem>
<para>
Description, default and example should be provided.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that option changes are backward compatible.</para>
<listitem>
<para>
Ensure that option changes are backward compatible.
</para>
<itemizedlist>
<listitem><para><literal>mkRenamedOptionModule</literal> and
<listitem>
<para>
<literal>mkRenamedOptionModule</literal> and
<literal>mkAliasOptionModule</literal> functions provide a way to make
option changes backward compatible.</para></listitem>
option changes backward compatible (see the sketch after this list).
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that removed options are declared with
<literal>mkRemovedOptionModule</literal></para></listitem>
<listitem><para>Ensure that changes that are not backward compatible are
mentioned in release notes.</para></listitem>
<listitem><para>Ensure that documentations affected by the change is
updated.</para></listitem>
</itemizedlist>
<listitem>
<para>
Ensure that removed options are declared with
<literal>mkRemovedOptionModule</literal>.
</para>
</listitem>
<listitem>
<para>
Ensure that changes that are not backward compatible are mentioned in
release notes.
</para>
</listitem>
<listitem>
<para>
Ensure that documentation affected by the change is updated.
</para>
</listitem>
</itemizedlist>
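<para>
As a sketch of the two helpers mentioned above (the option paths are
hypothetical and purely for illustration), a module could declare:
<programlisting>
{ lib, ... }:

{
  imports = [
    # Renamed option: forward the old path to the new one.
    (lib.mkRenamedOptionModule
      [ "services" "foo" "enableBar" ]
      [ "services" "foo" "bar" "enable" ])
    # Removed option: fail with a helpful message if it is still used.
    (lib.mkRemovedOptionModule
      [ "services" "foo" "legacyOption" ]
      "This option no longer has any effect.")
  ];
}
</programlisting>
</para>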
<example><title>Sample template for a module update review</title>
<example>
<title>Sample template for a module update review</title>
<screen>
##### Reviewed points
@ -286,51 +461,89 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
##### Comments
</screen></example>
</section>
</screen>
</example>
</section>
<section>
<title>New modules</title>
<section><title>New modules</title>
<para>
New module submissions introduce a new module to NixOS.
</para>
<para>New modules submissions introduce a new module to NixOS.</para>
<itemizedlist>
<listitem><para>Add labels to the pull-request. (Requires commit
rights)</para>
<itemizedlist>
<listitem><para><literal>8.has: module (new)</literal> and any topic label
that fit the module.</para></listitem>
<listitem>
<para>
Add labels to the pull-request. (Requires commit rights)
</para>
<itemizedlist>
<listitem>
<para>
<literal>8.has: module (new)</literal> and any topic label that fits the
module.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the module tests, if any, are
succeeding.</para></listitem>
<listitem><para>Ensure that the introduced options are correct.</para>
<listitem>
<para>
Ensure that the module tests, if any, are succeeding.
</para>
</listitem>
<listitem>
<para>
Ensure that the introduced options are correct.
</para>
<itemizedlist>
<listitem><para>Type should be appropriate (string related types differs
in their merging capabilities, <literal>optionSet</literal> and
<literal>string</literal> types are deprecated).</para></listitem>
<listitem><para>Description, default and example should be
provided.</para></listitem>
<listitem>
<para>
Type should be appropriate (string-related types differ in their
merging capabilities; the <literal>optionSet</literal> and
<literal>string</literal> types are deprecated).
</para>
</listitem>
<listitem>
<para>
Description, default and example should be provided.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that module <literal>meta</literal> field is
present</para>
<listitem>
<para>
Ensure that the module <literal>meta</literal> field is present.
</para>
<itemizedlist>
<listitem><para>Maintainers should be declared in
<literal>meta.maintainers</literal>.</para></listitem>
<listitem><para>Module documentation should be declared with
<literal>meta.doc</literal>.</para></listitem>
<listitem>
<para>
Maintainers should be declared in <literal>meta.maintainers</literal>.
</para>
</listitem>
<listitem>
<para>
Module documentation should be declared with
<literal>meta.doc</literal>.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Ensure that the module respect other modules
functionality.</para>
<listitem>
<para>
Ensure that the module respects other modules' functionality.
</para>
<itemizedlist>
<listitem><para>For example, enabling a module should not open firewall
ports by default.</para></listitem>
<listitem>
<para>
For example, enabling a module should not open firewall ports by
default.
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
</itemizedlist>
<example><title>Sample template for a new module review</title>
<example>
<title>Sample template for a new module review</title>
<screen>
##### Reviewed points
@ -348,32 +561,41 @@ $ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
##### Comments
</screen></example>
</section>
</screen>
</example>
</section>
<section>
<title>Other submissions</title>
<section><title>Other submissions</title>
<para>
Other types of submissions require different reviewing steps.
</para>
<para>Other type of submissions requires different reviewing steps.</para>
<para>
If you consider that you have enough knowledge and experience in a topic and would
like to be a long-term reviewer for related submissions, please contact the
current reviewers for that topic. They will give you information about the
reviewing process. The main reviewers for a topic can be hard to find as
there is no list, but checking past pull-requests to see who reviewed or
git-blaming the code to see who committed to that topic can give some hints.
</para>
<para>If you consider having enough knowledge and experience in a topic and
would like to be a long-term reviewer for related submissions, please contact
the current reviewers for that topic. They will give you information about the
reviewing process.
The main reviewers for a topic can be hard to find as there is no list, but
checking past pull-requests to see who reviewed or git-blaming the code to see
who committed to that topic can give some hints.</para>
<para>
Container system, boot system and library changes are some examples of the
pull requests fitting this category.
</para>
</section>
<section>
<title>Merging pull-requests</title>
<para>Container system, boot system and library changes are some examples of the
pull requests fitting this category.</para>
<para>
It is possible for community members that have enough knowledge and
experience on a special topic to contribute by merging pull requests.
</para>
</section>
<section><title>Merging pull-requests</title>
<para>It is possible for community members that have enough knowledge and
experience on a special topic to contribute by merging pull requests.</para>
<para>TODO: add the procedure to request merging rights.</para>
<para>
TODO: add the procedure to request merging rights.
</para>
<!--
The following paragraph about how to deal with unactive contributors is just a
@ -384,10 +606,13 @@ policy.
three months will have their commit rights revoked.</para>
-->
<para>In a case a contributor leaves definitively the Nix community, he should
create an issue or notify the mailing list with references of packages and
modules he maintains so the maintainership can be taken over by other
contributors.</para>
</section>
<para>
If a contributor definitively leaves the Nix community, they should
create an issue or post on
<link
xlink:href="https://discourse.nixos.org">Discourse</link> with
references to the packages and modules they maintain so the maintainership
can be taken over by other contributors.
</para>
</section>
</chapter>

View File

@ -1,4 +1,5 @@
{ pkgs ? import ../. {} }:
(import ./default.nix).overrideAttrs (x: {
buildInputs = x.buildInputs ++ [ pkgs.xmloscopy ];
buildInputs = x.buildInputs ++ [ pkgs.xmloscopy pkgs.ruby ];
})

File diff suppressed because it is too large Load Diff

View File

@ -1,230 +1,261 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-submitting-changes">
<title>Submitting changes</title>
<section>
<title>Making patches</title>
<title>Submitting changes</title>
<section>
<title>Making patches</title>
<itemizedlist>
<listitem>
<para>Read <link xlink:href="https://nixos.org/nixpkgs/manual/">Manual (How to write packages for Nix)</link>.</para>
</listitem>
<listitem>
<para>Fork the repository on GitHub.</para>
</listitem>
<listitem>
<para>Create a branch for your future fix.
<itemizedlist>
<listitem>
<para>You can make branch from a commit of your local <command>nixos-version</command>. That will help you to avoid additional local compilations. Because you will receive packages from binary cache.
<itemizedlist>
<listitem>
<para>For example: <command>nixos-version</command> returns <command>15.05.git.0998212 (Dingo)</command>. So you can do:</para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para>
Read <link xlink:href="https://nixos.org/nixpkgs/manual/">Manual (How to
write packages for Nix)</link>.
</para>
</listitem>
<listitem>
<para>
Fork the repository on GitHub.
</para>
</listitem>
<listitem>
<para>
Create a branch for your future fix.
<itemizedlist>
<listitem>
<para>
You can make a branch from a commit of your local
<command>nixos-version</command>. That will help you to avoid
additional local compilations, because you will receive packages from
the binary cache.
<itemizedlist>
<listitem>
<para>
For example: <command>nixos-version</command> returns
<command>15.05.git.0998212 (Dingo)</command>. So you can do:
</para>
</listitem>
</itemizedlist>
<screen>
$ git checkout 0998212
$ git checkout -b 'fix/pkg-name-update'
</screen>
</para>
</listitem>
<listitem>
<para>Please avoid working directly on the <command>master</command> branch.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>Make commits of logical units.
<itemizedlist>
<listitem>
<para>If you removed pkgs, made some major NixOS changes etc., write about them in <command>nixos/doc/manual/release-notes/rl-unstable.xml</command>.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>Check for unnecessary whitespace with <command>git diff --check</command> before committing.</para>
</listitem>
<listitem>
<para>Format the commit in a following way:</para>
</para>
</listitem>
<listitem>
<para>
Please avoid working directly on the <command>master</command> branch.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
Make commits of logical units.
<itemizedlist>
<listitem>
<para>
If you removed pkgs, made some major NixOS changes etc., write about
them in
<command>nixos/doc/manual/release-notes/rl-unstable.xml</command>.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
Check for unnecessary whitespace with <command>git diff --check</command>
before committing.
</para>
</listitem>
<listitem>
<para>
Format the commit in the following way:
</para>
<programlisting>
(pkg-name | nixos/&lt;module>): (from -> to | init at version | refactor | etc)
Additional information.
</programlisting>
<itemizedlist>
<listitem>
<para>
Examples:
<itemizedlist>
<listitem>
<para>
<command>nginx: init at 2.0.1</command>
</para>
</listitem>
<listitem>
<para>
<command>firefox: 54.0.1 -> 55.0</command>
</para>
</listitem>
<listitem>
<para>
<command>nixos/hydra: add bazBaz option</command>
</para>
</listitem>
<listitem>
<para>
<command>nixos/nginx: refactor config generation</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Test your changes. If you work with
<itemizedlist>
<listitem>
<para>
nixpkgs:
<itemizedlist>
<listitem>
<para>
update pkg ->
<itemizedlist>
<listitem>
<para>
<command>nix-env -i pkg-name -f &lt;path to your local nixpkgs
folder&gt;</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
add pkg ->
<itemizedlist>
<listitem>
<para>
Make sure it's in
<command>pkgs/top-level/all-packages.nix</command>
</para>
</listitem>
<listitem>
<para>
<command>nix-env -i pkg-name -f &lt;path to your local nixpkgs
folder&gt;</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
<emphasis>If you don't want to install pkg in your
profile</emphasis>.
<itemizedlist>
<listitem>
<para>
<command>nix-build -A pkg-attribute-name &lt;path to your local
nixpkgs folder&gt;/default.nix</command> and check results in the
folder <command>result</command>. It will appear in the same
directory where you did <command>nix-build</command>.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
If you did <command>nix-env -i pkg-name</command> you can do
<command>nix-env -e pkg-name</command> to uninstall it from your
system.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
NixOS and its modules:
<itemizedlist>
<listitem>
<para>
You can add a new module to your NixOS configuration file (usually
it's <command>/etc/nixos/configuration.nix</command>) and run
<command>sudo nixos-rebuild test -I nixpkgs=&lt;path to your local
nixpkgs folder&gt; --fast</command>.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
If you have commits like <command>pkg-name: oh, forgot to insert
whitespace</command>, squash them. Use <command>git rebase
-i</command>.
</para>
</listitem>
<listitem>
<para>
Rebase your branch against the current <command>master</command>.
</para>
</listitem>
</itemizedlist>
</section>
<section>
<title>Submitting changes</title>
<itemizedlist>
<listitem>
<para>Examples:
<itemizedlist>
<listitem>
<para>
<command>nginx: init at 2.0.1</command>
</para>
</listitem>
<listitem>
<para>
<command>firefox: 54.0.1 -> 55.0</command>
</para>
</listitem>
<listitem>
<para>
<command>nixos/hydra: add bazBaz option</command>
</para>
</listitem>
<listitem>
<para>
<command>nixos/nginx: refactor config generation</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>Test your changes. If you work with
<itemizedlist>
<listitem>
<para>nixpkgs:
<itemizedlist>
<listitem>
<para>update pkg ->
<itemizedlist>
<listitem>
<para>
<command>nix-env -i pkg-name -f &lt;path to your local nixpkgs folder&gt;</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>add pkg ->
<itemizedlist>
<listitem>
<para>Make sure it's in <command>pkgs/top-level/all-packages.nix</command>
</para>
</listitem>
<listitem>
<para>
<command>nix-env -i pkg-name -f &lt;path to your local nixpkgs folder&gt;</command>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
<emphasis>If you don't want to install pkg in your profile</emphasis>.
<itemizedlist>
<listitem>
<para>
<command>nix-build -A pkg-attribute-name &lt;path to your local nixpkgs folder&gt;/default.nix</command> and check results in the folder <command>result</command>. It will appear in the same directory where you did <command>nix-build</command>.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>If you did <command>nix-env -i pkg-name</command> you can do <command>nix-env -e pkg-name</command> to uninstall it from your system.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>NixOS and its modules:
<itemizedlist>
<listitem>
<para>You can add new module to your NixOS configuration file (usually it's <command>/etc/nixos/configuration.nix</command>).
And do <command>sudo nixos-rebuild test -I nixpkgs=&lt;path to your local nixpkgs folder&gt; --fast</command>.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>If you have commits <command>pkg-name: oh, forgot to insert whitespace</command>: squash commits in this case. Use <command>git rebase -i</command>.</para>
</listitem>
<listitem>
<para>Rebase your branch against the current <command>master</command>.</para>
</listitem>
</itemizedlist>
</section>
<section>
<title>Submitting changes</title>
<itemizedlist>
<listitem>
<para>Push your changes to your fork of nixpkgs.</para>
</listitem>
<listitem>
<para>Create pull request:
<itemizedlist>
<listitem>
<para>Write the title in format <command>(pkg-name | nixos/&lt;module>): improvement</command>.
<itemizedlist>
<listitem>
<para>If you update the pkg, write versions <command>from -> to</command>.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>Write in comment if you have tested your patch. Do not rely much on <command>TravisCI</command>.</para>
</listitem>
<listitem>
<para>If you make an improvement, write about your motivation.</para>
</listitem>
<listitem>
<para>Notify maintainers of the package. For example add to the message: <command>cc @jagajaga @domenkozar</command>.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</section>
<section>
<itemizedlist>
<listitem>
<para>
     Push your changes to your fork of nixpkgs (see the sketch below).
</para>
</listitem>
<listitem>
<para>
     Create a pull request:
<itemizedlist>
<listitem>
<para>
       Write the title in the format <command>(pkg-name | nixos/&lt;module>):
improvement</command>.
<itemizedlist>
<listitem>
<para>
          If you update a package, write the versions <command>from -> to</command>.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
       State in a comment whether you have tested your patch. Do not rely too much on
<command>TravisCI</command>.
</para>
</listitem>
<listitem>
<para>
If you make an improvement, write about your motivation.
</para>
</listitem>
<listitem>
<para>
Notify maintainers of the package. For example add to the message:
<command>cc @jagajaga @domenkozar</command>.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
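   <para>
    A minimal sketch of pushing the branch, assuming your fork is the
    <literal>origin</literal> remote and your branch is called
    <literal>my-change</literal>:
   </para>
   <screen>
git push origin my-change
   </screen>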
</section>
<section>
<title>Pull Request Template</title>
<para>
The pull request template helps determine what steps have been made for a
contribution so far, and will help guide maintainers on the status of a
@ -232,168 +263,200 @@ Additional information.
the title does not address and link any existing issues related to the pull
request.
</para>
<para>When a PR is created, it will be pre-populated with some checkboxes detailed below:
<para>
When a PR is created, it will be pre-populated with some checkboxes detailed
below:
</para>
<section>
<title>Tested using sandboxing</title>
<para>
      When sandbox builds are enabled, Nix will set up an isolated environment
      for each build process. This removes hidden dependencies introduced by
      the build environment and improves reproducibility. This includes
      access to the network during the build outside of
      <function>fetch*</function> functions and files outside the Nix store.
      Depending on the operating system, access to other resources is blocked
      as well (e.g. inter-process communication is isolated on Linux); see <link
     When sandbox builds are enabled, Nix will set up an isolated environment for
     each build process. This removes hidden dependencies introduced by the
     build environment and improves reproducibility. This includes access to
     the network during the build outside of <function>fetch*</function>
     functions and files outside the Nix store. Depending on the operating
     system, access to other resources is blocked as well (e.g. inter-process
     communication is isolated on Linux); see
<link
xlink:href="https://nixos.org/nix/manual/#description-45">build-use-sandbox</link>
     in the Nix manual for details.
</para>
<para>
Sandboxing is not enabled by default in Nix due to a small performance
hit on each build. In pull requests for <link
xlink:href="https://github.com/NixOS/nixpkgs/">nixpkgs</link> people
are asked to test builds with sandboxing enabled (see <literal>Tested
using sandboxing</literal> in the pull request template) because
Sandboxing is not enabled by default in Nix due to a small performance hit
on each build. In pull requests for
<link
xlink:href="https://github.com/NixOS/nixpkgs/">nixpkgs</link>
people are asked to test builds with sandboxing enabled (see
<literal>Tested using sandboxing</literal> in the pull request template)
because
     in <link
xlink:href="https://nixos.org/hydra/">https://nixos.org/hydra/</link>
sandboxing is also used.
</para>
<para>
     Depending on whether you use NixOS or another platform, you can use one of the
following methods to enable sandboxing <emphasis role="bold">before</emphasis> building the package:
following methods to enable sandboxing
<emphasis role="bold">before</emphasis> building the package:
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Globally enable sandboxing on NixOS</emphasis>:
add the following to
<filename>configuration.nix</filename>
<screen>nix.useSandbox = true;</screen>
add the following to <filename>configuration.nix</filename>
<screen>nix.useSandbox = true;</screen>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Globally enable sandboxing on non-NixOS platforms</emphasis>:
        add the following to <filename>/etc/nix/nix.conf</filename>:
<screen>build-use-sandbox = true</screen>
<emphasis role="bold">Globally enable sandboxing on non-NixOS
       platforms</emphasis>: add the following to
<filename>/etc/nix/nix.conf</filename>
<screen>build-use-sandbox = true</screen>
</para>
</listitem>
</itemizedlist>
</para>
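    <para>
     Depending on your Nix version and setup you may also be able to enable
     sandboxing for a single invocation only; a sketch, assuming your user is
     allowed to set this option:
    </para>
    <screen>nix-build --option build-use-sandbox true -A pkg-attribute-name &lt;path to your local nixpkgs folder&gt;</screen>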
</section>
<section>
<title>Built on platform(s)</title>
<para>
Many Nix packages are designed to run on multiple
platforms. As such, it's important to let the maintainer know which
platforms your changes have been tested on. It's not always practical to
test a change on all platforms, and is not required for a pull request to
be merged. Only check the systems you tested the build on in this
section.
Many Nix packages are designed to run on multiple platforms. As such, it's
important to let the maintainer know which platforms your changes have been
tested on. It's not always practical to test a change on all platforms, and
is not required for a pull request to be merged. Only check the systems you
tested the build on in this section.
</para>
</section>
<section>
<title>Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)</title>
<para>
     Packages with automated tests are much more likely to be merged in a
     timely fashion because they don't require as much manual testing by the
     maintainer to verify the functionality of the package. If there are
existing tests for the package, they should be run to verify your changes
do not break the tests. Tests only apply to packages with NixOS modules
defined and can only be run on Linux. For more details on writing and
running tests, see the <link
Packages with automated tests are much more likely to be merged in a timely
     fashion because they don't require as much manual testing by the maintainer
to verify the functionality of the package. If there are existing tests for
the package, they should be run to verify your changes do not break the
tests. Tests only apply to packages with NixOS modules defined and can only
be run on Linux. For more details on writing and running tests, see the
<link
xlink:href="https://nixos.org/nixos/manual/index.html#sec-nixos-tests">section
in the NixOS manual</link>.
</para>
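   <para>
    A sketch of running a single existing test from your nixpkgs checkout (the
    test name is a placeholder):
   </para>
   <screen>nix-build &lt;path to your local nixpkgs folder&gt;/nixos/tests/&lt;test-name&gt;.nix</screen>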
</section>
<section>
<title>Tested compilation of all pkgs that depend on this change using <command>nox-review</command></title>
<para>
     If you are updating a package's version, you can use the
     <command>nox</command> utility to make sure all packages that depend on the
     updated package still compile correctly. The <command>nox-review</command>
     utility can look for and build all dependencies either based on
     uncommitted changes with the <literal>wip</literal> option or by specifying
     a GitHub pull request number.
     utility can look for and build all dependencies either based on uncommitted
     changes with the <literal>wip</literal> option or by specifying a GitHub
     pull request number.
</para>
<para>
review uncommitted changes:
<screen>nix-shell -p nox --run "nox-review wip"</screen>
<screen>nix-shell -p nox --run "nox-review wip"</screen>
</para>
<para>
review changes from pull request number 12345:
<screen>nix-shell -p nox --run "nox-review pr 12345"</screen>
<screen>nix-shell -p nox --run "nox-review pr 12345"</screen>
</para>
</section>
<section>
<title>Tested execution of all binary files (usually in <filename>./result/bin/</filename>)</title>
<para>
It's important to test any executables generated by a build when you
change or create a package in nixpkgs. This can be done by looking in
It's important to test any executables generated by a build when you change
or create a package in nixpkgs. This can be done by looking in
<filename>./result/bin</filename> and running any files in there, or at a
minimum, the main executable for the package. For example, if you make a change
to <package>texlive</package>, you probably would only check the binaries
associated with the change you made rather than testing all of them.
minimum, the main executable for the package. For example, if you make a
change to <package>texlive</package>, you probably would only check the
binaries associated with the change you made rather than testing all of
them.
</para>
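   <para>
    A short sketch of such a check (the attribute and binary names are
    placeholders):
   </para>
   <screen>
nix-build -A pkg-attribute-name &lt;path to your local nixpkgs folder&gt;
ls ./result/bin/
./result/bin/pkg-name --version
   </screen>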
</section>
<section>
<title>Meets nixpkgs contribution standards</title>
<para>
     The last checkbox asks whether the change fits <link
     The last checkbox asks whether the change fits
<link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md">CONTRIBUTING.md</link>.
The contributing document has detailed information on standards the Nix
community has for commit messages, reviews, licensing of contributions
     you make to the project, etc. Everyone should read and understand the
community has for commit messages, reviews, licensing of contributions you
     make to the project, etc. Everyone should read and understand the
standards the community has for contributing before submitting a pull
request.
</para>
</section>
</section>
</section>
<section>
<title>Hotfixing pull requests</title>
<section>
<title>Hotfixing pull requests</title>
<itemizedlist>
<listitem>
<para>
      Make the appropriate changes in your branch.
</para>
</listitem>
<listitem>
<para>
      Don't create additional commits; instead:
<itemizedlist>
<listitem>
<para>
<command>git rebase -i</command>
</para>
</listitem>
<listitem>
<para>
        <command>git push --force</command> to your branch (see the sketch below).
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
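   <para>
    A minimal sketch of this workflow, assuming the pull request branch is
    called <literal>my-change</literal> and your fork is the
    <literal>origin</literal> remote:
   </para>
   <screen>
git checkout my-change
# make the changes and record them as fixup commits
git add -u
git commit --fixup HEAD
# fold the fixups into the original commits, then force-push
git rebase -i --autosquash master
git push --force origin my-change
   </screen>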
</section>
<section>
<title>Commit policy</title>
<itemizedlist>
<listitem>
    <para>Make the appropriate changes in your branch.</para>
</listitem>
<itemizedlist>
<listitem>
<para>
Commits must be sufficiently tested before being merged, both for the
master and staging branches.
</para>
</listitem>
<listitem>
<para>
      Hydra builds for master and staging should not be used as a testing
      platform; it is a build farm for changes that have already been tested.
</para>
</listitem>
<listitem>
<para>
When changing the bootloader installation process, extra care must be
taken. Grub installations cannot be rolled back, hence changes may break
people's installations forever. For any non-trivial change to the
bootloader please file a PR asking for review, especially from @edolstra.
</para>
</listitem>
</itemizedlist>
<listitem>
    <para>Don't create additional commits; instead:
<itemizedlist>
<listitem>
<para><command>git rebase -i</command></para>
</listitem>
<listitem>
<para>
<command>git push --force</command> to your branch.</para>
</listitem>
</itemizedlist>
</para>
</listitem>
</itemizedlist>
</section>
<section>
<title>Commit policy</title>
<itemizedlist>
<listitem>
<para>Commits must be sufficiently tested before being merged, both for the master and staging branches.</para>
</listitem>
<listitem>
    <para>Hydra builds for master and staging should not be used as a testing platform; it is a build farm for changes that have already been tested.</para>
</listitem>
<listitem>
<para>When changing the bootloader installation process, extra care must be taken. Grub installations cannot be rolled back, hence changes may break people's installations forever. For any non-trivial change to the bootloader please file a PR asking for review, especially from @edolstra.</para>
</listitem>
</itemizedlist>
<section>
<section>
<title>Master branch</title>
<itemizedlist>
@ -403,9 +466,9 @@ Additional information.
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section>
<section>
<title>Staging branch</title>
<itemizedlist>
@ -413,23 +476,24 @@ Additional information.
<para>
It's only for non-breaking mass-rebuild commits. That means it's not to
be used for testing, and changes must have been well tested already.
<link xlink:href="http://comments.gmane.org/gmane.linux.distributions.nixos/13447">Read policy here</link>.
<link xlink:href="https://web.archive.org/web/20160528180406/http://comments.gmane.org/gmane.linux.distributions.nixos/13447">Read
policy here</link>.
</para>
</listitem>
<listitem>
<para>
If the branch is already in a broken state, please refrain from adding
extra new breakages. Stabilize it for a few days, merge into master,
then resume development on staging.
<link xlink:href="http://hydra.nixos.org/jobset/nixpkgs/staging#tabs-evaluations">Keep an eye on the staging evaluations here</link>.
If any fixes for staging happen to be already in master, then master can
be merged into staging.
extra new breakages. Stabilize it for a few days, merge into master, then
resume development on staging.
<link xlink:href="http://hydra.nixos.org/jobset/nixpkgs/staging#tabs-evaluations">Keep
an eye on the staging evaluations here</link>. If any fixes for staging
happen to be already in master, then master can be merged into staging.
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section>
<section>
<title>Stable release branches</title>
<itemizedlist>
@ -440,8 +504,10 @@ Additional information.
clear description about why this needs to be included in the stable
branch.
</para>
<para>An example of a cherry-picked commit would look like this:</para>
<screen>
<para>
        An example of a cherry-picked commit would look like this (a sketch of
        the corresponding command follows this list):
</para>
<screen>
nixos: Refactor the world.
The original commit message describing the reason why the world was torn apart.
@ -452,8 +518,6 @@ the stone age.
</screen>
</listitem>
</itemizedlist>
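     <para>
      A sketch of the corresponding command (the commit hash is a
      placeholder); the <option>-x</option> flag makes git record the original
      commit id in the message:
     </para>
     <screen>git cherry-pick -x 1234abcd</screen>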
</section>
</section>
</section>
</section>
</chapter>

View File

@ -3,9 +3,9 @@
let
inherit (builtins) head tail length;
inherit (lib.trivial) and or;
inherit (lib.trivial) and;
inherit (lib.strings) concatStringsSep;
inherit (lib.lists) fold concatMap concatLists all deepSeqList;
inherit (lib.lists) fold concatMap concatLists;
in
rec {
@ -195,8 +195,9 @@ rec {
{ x = "foo"; y = "bar"; }
=> { x = "x-foo"; y = "y-bar"; }
*/
mapAttrs = f: set:
listToAttrs (map (attr: { name = attr; value = f attr set.${attr}; }) (attrNames set));
mapAttrs = builtins.mapAttrs or
(f: set:
listToAttrs (map (attr: { name = attr; value = f attr set.${attr}; }) (attrNames set)));
/* Like `mapAttrs', but allows the name of each attribute to be
@ -383,11 +384,12 @@ rec {
recursiveUpdateUntil = pred: lhs: rhs:
let f = attrPath:
zipAttrsWith (n: values:
let here = attrPath ++ [n]; in
if tail values == []
|| pred attrPath (head (tail values)) (head values) then
|| pred here (head (tail values)) (head values) then
head values
else
f (attrPath ++ [n]) values
f here values
);
in f [] [rhs lhs];

View File

@ -1,5 +1,5 @@
{lib, pkgs}:
let inherit (lib) nv nvs; in
let inherit (lib) nvs; in
{
# composableDerivation basically mixes these features:

View File

@ -1,9 +1,4 @@
{ lib }:
let
inherit (builtins) attrNames;
in
rec {
@ -200,9 +195,10 @@ rec {
let self = f self // {
newScope = scope: newScope (self // scope);
callPackage = self.newScope {};
      # TODO(@Ericson2314): Harmonize argument order of `g` with everything else
overrideScope = g:
makeScope newScope
(self_: let super = f self_; in super // g super self_);
(lib.fixedPoints.extends (lib.flip g) f);
packages = f;
};
in self;

View File

@ -1,34 +1,67 @@
/* Collection of functions useful for debugging
broken nix expressions.
* `trace`-like functions take two values, print
the first to stderr and return the second.
* `traceVal`-like functions take one argument
     which is both printed and returned.
* `traceSeq`-like functions fully evaluate their
traced value before printing (not just to weak
head normal form like trace does by default).
* Functions that end in `-Fn` take an additional
function as their first argument, which is applied
to the traced value before it is printed.
*/
{ lib }:
let
inherit (builtins) trace attrNamesToStr isAttrs isList isInt
isString isBool head substring attrNames;
inherit (lib) all id mapAttrsFlatten elem isFunction;
inherit (builtins) trace isAttrs isList isInt
head substring attrNames;
inherit (lib) id elem isFunction;
in
rec {
inherit (builtins) addErrorContext;
# -- TRACING --
addErrorContextToAttrs = lib.mapAttrs (a: v: lib.addErrorContext "while evaluating ${a}" v);
/* Trace msg, but only if pred is true.
traceIf = p: msg: x: if p then trace msg x else x;
Example:
traceIf true "hello" 3
trace: hello
=> 3
*/
traceIf = pred: msg: x: if pred then trace msg x else x;
traceVal = x: trace x x;
traceXMLVal = x: trace (builtins.toXML x) x;
traceXMLValMarked = str: x: trace (str + builtins.toXML x) x;
/* Trace the value and also return it.
# strict trace functions (traced structure is fully evaluated and printed)
Example:
traceValFn (v: "mystring ${v}") "foo"
trace: mystring foo
=> "foo"
*/
traceValFn = f: x: trace (f x) x;
traceVal = traceValFn id;
/* `builtins.trace`, but the value is `builtins.deepSeq`ed first. */
/* `builtins.trace`, but the value is `builtins.deepSeq`ed first.
Example:
trace { a.b.c = 3; } null
trace: { a = <CODE>; }
=> null
traceSeq { a.b.c = 3; } null
trace: { a = { b = { c = 3; }; }; }
=> null
*/
traceSeq = x: y: trace (builtins.deepSeq x x) y;
/* Like `traceSeq`, but only down to depth n.
* This is very useful because lots of `traceSeq` usages
* lead to an infinite recursion.
/* Like `traceSeq`, but only evaluate down to depth n.
This is very useful because lots of `traceSeq` usages
lead to an infinite recursion.
Example:
traceSeqN 2 { a.b.c = 3; } null
trace: { a = { b = {}; }; }
=> null
*/
traceSeqN = depth: x: y: with lib;
let snip = v: if isList v then noQuotes "[]" v
@ -43,39 +76,16 @@ rec {
in trace (generators.toPretty { allowPrettyValues = true; }
(modify depth snip x)) y;
/* `traceSeq`, but the same value is traced and returned */
traceValSeq = v: traceVal (builtins.deepSeq v v);
/* `traceValSeq` but with fixed depth */
traceValSeqN = depth: v: traceSeqN depth v v;
/* A combination of `traceVal` and `traceSeq` */
traceValSeqFn = f: v: traceValFn f (builtins.deepSeq v v);
traceValSeq = traceValSeqFn id;
/* A combination of `traceVal` and `traceSeqN`. */
traceValSeqNFn = f: depth: v: traceSeqN depth (f v) v;
traceValSeqN = traceValSeqNFn id;
# this can help debug your code as well - designed to not produce thousands of lines
traceShowVal = x: trace (showVal x) x;
traceShowValMarked = str: x: trace (str + showVal x) x;
attrNamesToStr = a: lib.concatStringsSep "; " (map (x: "${x}=") (attrNames a));
showVal = x:
if isAttrs x then
if x ? outPath then "x is a derivation, name ${if x ? name then x.name else "<no name>"}, { ${attrNamesToStr x} }"
else "x is attr set { ${attrNamesToStr x} }"
else if isFunction x then "x is a function"
else if x == [] then "x is an empty list"
else if isList x then "x is a list, first element is: ${showVal (head x)}"
else if x == true then "x is boolean true"
else if x == false then "x is boolean false"
else if x == null then "x is null"
else if isInt x then "x is an integer `${toString x}'"
else if isString x then "x is a string `${substring 0 50 x}...'"
else "x is probably a path `${substring 0 50 (toString x)}...'";
# trace the arguments passed to function and its result
# maybe rewrite these functions in a traceCallXml like style. Then one function is enough
traceCall = n: f: a: let t = n2: x: traceShowValMarked "${n} ${n2}:" x; in t "result" (f (t "arg 1" a));
traceCall2 = n: f: a: b: let t = n2: x: traceShowValMarked "${n} ${n2}:" x; in t "result" (f (t "arg 1" a) (t "arg 2" b));
traceCall3 = n: f: a: b: c: let t = n2: x: traceShowValMarked "${n} ${n2}:" x; in t "result" (f (t "arg 1" a) (t "arg 2" b) (t "arg 3" c));
# FIXME: rename this?
traceValIfNot = c: x:
if c x then true else trace (showVal x) false;
# -- TESTING --
/* Evaluate a set of tests. A test is an attribute set {expr,
expected}, denoting an expression and its expected result. The
@ -99,9 +109,68 @@ rec {
# usage: { testX = allTrue [ true ]; }
testAllTrue = expr: { inherit expr; expected = map (x: true) expr; };
strict = v:
trace "Warning: strict is deprecated and will be removed in the next release"
(builtins.seq v v);
# -- DEPRECATED --
traceShowVal = x: trace (showVal x) x;
traceShowValMarked = str: x: trace (str + showVal x) x;
attrNamesToStr = a:
trace ( "Warning: `attrNamesToStr` is deprecated "
+ "and will be removed in the next release. "
+ "Please use more specific concatenation "
+ "for your uses (`lib.concat(Map)StringsSep`)." )
(lib.concatStringsSep "; " (map (x: "${x}=") (attrNames a)));
showVal = with lib;
trace ( "Warning: `showVal` is deprecated "
+ "and will be removed in the next release, "
+ "please use `traceSeqN`" )
(let
modify = v:
let pr = f: { __pretty = f; val = v; };
in if isDerivation v then pr
(drv: "<δ:${drv.name}:${concatStringsSep ","
(attrNames drv)}>")
else if [] == v then pr (const "[]")
else if isList v then pr (l: "[ ${go (head l)}, ]")
else if isAttrs v then pr
(a: "{ ${ concatStringsSep ", " (attrNames a)} }")
else v;
go = x: generators.toPretty
{ allowPrettyValues = true; }
(modify x);
in go);
traceXMLVal = x:
trace ( "Warning: `traceXMLVal` is deprecated "
+ "and will be removed in the next release. "
+ "Please use `traceValFn builtins.toXML`." )
(trace (builtins.toXML x) x);
traceXMLValMarked = str: x:
trace ( "Warning: `traceXMLValMarked` is deprecated "
+ "and will be removed in the next release. "
+ "Please use `traceValFn (x: str + builtins.toXML x)`." )
(trace (str + builtins.toXML x) x);
# trace the arguments passed to function and its result
# maybe rewrite these functions in a traceCallXml like style. Then one function is enough
traceCall = n: f: a: let t = n2: x: traceShowValMarked "${n} ${n2}:" x; in t "result" (f (t "arg 1" a));
traceCall2 = n: f: a: b: let t = n2: x: traceShowValMarked "${n} ${n2}:" x; in t "result" (f (t "arg 1" a) (t "arg 2" b));
traceCall3 = n: f: a: b: c: let t = n2: x: traceShowValMarked "${n} ${n2}:" x; in t "result" (f (t "arg 1" a) (t "arg 2" b) (t "arg 3" c));
traceValIfNot = c: x:
trace ( "Warning: `traceValIfNot` is deprecated "
+ "and will be removed in the next release. "
+ "Please use `if/then/else` and `traceValSeq 1`.")
(if c x then true else traceSeq (showVal x) false);
addErrorContextToAttrs = attrs:
trace ( "Warning: `addErrorContextToAttrs` is deprecated "
+ "and will be removed in the next release. "
+ "Please use `builtins.addErrorContext` directly." )
(lib.mapAttrs (a: v: lib.addErrorContext "while evaluating ${a}" v) attrs);
# example: (traceCallXml "myfun" id 3) will output something like
# calling myfun arg 1: 3 result: 3
@ -109,17 +178,20 @@ rec {
# note: if result doesn't evaluate you'll get no trace at all (FIXME)
# args should be printed in any case
traceCallXml = a:
if !isInt a then
trace ( "Warning: `traceCallXml` is deprecated "
+ "and will be removed in the next release. "
+ "Please complain if you use the function regularly." )
(if !isInt a then
traceCallXml 1 "calling ${a}\n"
else
let nr = a;
in (str: expr:
if isFunction expr then
(arg:
traceCallXml (builtins.add 1 nr) "${str}\n arg ${builtins.toString nr} is \n ${builtins.toXML (strict arg)}" (expr arg)
traceCallXml (builtins.add 1 nr) "${str}\n arg ${builtins.toString nr} is \n ${builtins.toXML (builtins.seq arg arg)}" (expr arg)
)
else
let r = strict expr;
let r = builtins.seq expr expr;
in trace "${str}\n result:\n${builtins.toXML r}" r
);
));
}

View File

@ -5,9 +5,11 @@
*/
let
callLibs = file: import file { inherit lib; };
inherit (import ./fixed-points.nix {}) makeExtensible;
lib = rec {
lib = makeExtensible (self: let
callLibs = file: import file { lib = self; };
in with self; {
# often used, or depending on very little
trivial = callLibs ./trivial.nix;
@ -49,15 +51,15 @@ let
# back-compat aliases
platforms = systems.forMeta;
inherit (builtins) add addErrorContext attrNames
concatLists deepSeq elem elemAt filter genericClosure genList
getAttr hasAttr head isAttrs isBool isInt isList
isString length lessThan listToAttrs pathExists readFile
replaceStrings seq stringLength sub substring tail;
inherit (trivial) id const concat or and boolToString mergeAttrs
flip mapNullable inNixShell min max importJSON warn info
nixpkgsVersion mod compare splitByAndCompare
functionArgs setFunctionArgs isFunction;
inherit (builtins) add addErrorContext attrNames concatLists
deepSeq elem elemAt filter genericClosure genList getAttr
hasAttr head isAttrs isBool isInt isList isString length
lessThan listToAttrs pathExists readFile replaceStrings seq
stringLength sub substring tail;
inherit (trivial) id const concat or and bitAnd bitOr bitXor bitNot
boolToString mergeAttrs flip mapNullable inNixShell min max
importJSON warn info nixpkgsVersion version mod compare
splitByAndCompare functionArgs setFunctionArgs isFunction;
inherit (fixedPoints) fix fix' extends composeExtensions
makeExtensible makeExtensibleWithCustomName;
@ -72,30 +74,32 @@ let
inherit (lists) singleton foldr fold foldl foldl' imap0 imap1
concatMap flatten remove findSingle findFirst any all count
optional optionals toList range partition zipListsWith zipLists
reverseList listDfs toposort sort compareLists take drop sublist
last init crossLists unique intersectLists subtractLists
mutuallyExclusive;
reverseList listDfs toposort sort naturalSort compareLists take
drop sublist last init crossLists unique intersectLists
subtractLists mutuallyExclusive groupBy groupBy';
inherit (strings) concatStrings concatMapStrings concatImapStrings
intersperse concatStringsSep concatMapStringsSep
concatImapStringsSep makeSearchPath makeSearchPathOutput
makeLibraryPath makeBinPath makePerlPath optionalString
makeLibraryPath makeBinPath makePerlPath makeFullPerlPath optionalString
hasPrefix hasSuffix stringToCharacters stringAsChars escape
escapeShellArg escapeShellArgs replaceChars lowerChars upperChars
toLower toUpper addContextFrom splitString removePrefix
removeSuffix versionOlder versionAtLeast getVersion nameFromURL
enableFeature fixedWidthString fixedWidthNumber isStorePath
escapeShellArg escapeShellArgs replaceChars lowerChars
upperChars toLower toUpper addContextFrom splitString
removePrefix removeSuffix versionOlder versionAtLeast getVersion
nameFromURL enableFeature enableFeatureAs withFeature
withFeatureAs fixedWidthString fixedWidthNumber isStorePath
toInt readPathsFromFile fileContents;
inherit (stringsWithDeps) textClosureList textClosureMap
noDepEntry fullDepEntry packEntry stringAfter;
inherit (customisation) overrideDerivation makeOverridable
callPackageWith callPackagesWith extendDerivation
hydraJob makeScope;
callPackageWith callPackagesWith extendDerivation hydraJob
makeScope;
inherit (meta) addMetaAttrs dontDistribute setName updateName
appendToName mapDerivationAttrset lowPrio lowPrioSet hiPrio
hiPrioSet;
inherit (sources) pathType pathIsDirectory cleanSourceFilter
cleanSource sourceByRegex sourceFilesBySuffices
commitIdFromGitRepo cleanSourceWith pathHasContext canCleanSource;
commitIdFromGitRepo cleanSourceWith pathHasContext
canCleanSource;
inherit (modules) evalModules closeModules unifyModuleSyntax
applyIfFunction unpackSubmodule packSubmodule mergeModules
mergeModules' mergeOptionDecls evalOptionValue mergeDefinitions
@ -113,11 +117,11 @@ let
unknownModule mkOption;
inherit (types) isType setType defaultTypeMerge defaultFunctor
isOptionType mkOptionType;
inherit (debug) addErrorContextToAttrs traceIf traceVal
inherit (debug) addErrorContextToAttrs traceIf traceVal traceValFn
traceXMLVal traceXMLValMarked traceSeq traceSeqN traceValSeq
traceValSeqN traceShowVal traceShowValMarked
showVal traceCall traceCall2 traceCall3 traceValIfNot runTests
testAllTrue strict traceCallXml attrNamesToStr;
traceValSeqFn traceValSeqN traceValSeqNFn traceShowVal
traceShowValMarked showVal traceCall traceCall2 traceCall3
traceValIfNot runTests testAllTrue traceCallXml attrNamesToStr;
inherit (misc) maybeEnv defaultMergeArg defaultMerge foldArgs
defaultOverridableDelayableArgs composedArgsAndFun
maybeAttrNullable maybeAttr ifEnable checkFlag getValue
@ -126,7 +130,7 @@ let
closePropagation mapAttrsFlatten nvs setAttr setAttrMerge
mergeAttrsWithFunc mergeAttrsConcatenateValues
mergeAttrsNoOverride mergeAttrByFunc mergeAttrsByFuncDefaults
mergeAttrsByFuncDefaultsClean mergeAttrBy
prepareDerivationArgs nixType imap overridableDelayableArgs;
};
mergeAttrsByFuncDefaultsClean mergeAttrBy prepareDerivationArgs
nixType imap overridableDelayableArgs;
});
in lib

View File

@ -19,8 +19,6 @@ let
libStr = lib.strings;
libAttr = lib.attrsets;
flipMapAttrs = flip libAttr.mapAttrs;
inherit (lib) isFunction;
in
@ -143,18 +141,13 @@ rec {
(This means fn is type Val -> String.) */
allowPrettyValues ? false
}@args: v: with builtins;
if isInt v then toString v
let isPath = v: typeOf v == "path";
in if isInt v then toString v
else if isString v then ''"${libStr.escape [''"''] v}"''
else if true == v then "true"
else if false == v then "false"
else if null == v then "null"
else if isFunction v then
let fna = lib.functionArgs v;
showFnas = concatStringsSep "," (libAttr.mapAttrsToList
(name: hasDefVal: if hasDefVal then "(${name})" else name)
fna);
in if fna == {} then "<λ>"
else "<λ:{${showFnas}}>"
else if isPath v then toString v
else if isList v then "[ "
+ libStr.concatMapStringsSep " " (toPretty args) v
+ " ]"
@ -163,12 +156,71 @@ rec {
if attrNames v == [ "__pretty" "val" ] && allowPrettyValues
then v.__pretty v.val
# TODO: there is probably a better representation?
else if v ? type && v.type == "derivation" then "<δ>"
else if v ? type && v.type == "derivation" then
"<δ:${v.name}>"
# "<δ:${concatStringsSep "," (builtins.attrNames v)}>"
else "{ "
+ libStr.concatStringsSep " " (libAttr.mapAttrsToList
(name: value:
"${toPretty args name} = ${toPretty args value};") v)
+ " }"
else if isFunction v then
let fna = lib.functionArgs v;
showFnas = concatStringsSep "," (libAttr.mapAttrsToList
(name: hasDefVal: if hasDefVal then "(${name})" else name)
fna);
in if fna == {} then "<λ>"
else "<λ:{${showFnas}}>"
else abort "generators.toPretty: should never happen (v = ${v})";
# PLIST handling
toPlist = {}: v: let
isFloat = builtins.isFloat or (x: false);
expr = ind: x: with builtins;
if isNull x then "" else
if isBool x then bool ind x else
if isInt x then int ind x else
if isString x then str ind x else
if isList x then list ind x else
if isAttrs x then attrs ind x else
if isFloat x then float ind x else
abort "generators.toPlist: should never happen (v = ${v})";
literal = ind: x: ind + x;
bool = ind: x: literal ind (if x then "<true/>" else "<false/>");
int = ind: x: literal ind "<integer>${toString x}</integer>";
str = ind: x: literal ind "<string>${x}</string>";
key = ind: x: literal ind "<key>${x}</key>";
float = ind: x: literal ind "<real>${toString x}</real>";
indent = ind: expr "\t${ind}";
item = ind: libStr.concatMapStringsSep "\n" (indent ind);
list = ind: x: libStr.concatStringsSep "\n" [
(literal ind "<array>")
(item ind x)
(literal ind "</array>")
];
attrs = ind: x: libStr.concatStringsSep "\n" [
(literal ind "<dict>")
(attr ind x)
(literal ind "</dict>")
];
attr = let attrFilter = name: value: name != "_module" && value != null;
in ind: x: libStr.concatStringsSep "\n" (lib.flatten (lib.mapAttrsToList
(name: value: lib.optional (attrFilter name value) [
(key "\t${ind}" name)
(expr "\t${ind}" value)
]) x));
in ''<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
${expr "" v}
</plist>'';
}

57
lib/kernel.nix Normal file
View File

@ -0,0 +1,57 @@
{ lib
# we pass the kernel version here to keep a nice syntax `whenOlder "4.13"`
# kernelVersion, e.g., config.boot.kernelPackages.version
, version
, mkValuePreprocess ? null
}:
with lib;
rec {
# Common patterns
when = cond: opt: if cond then opt else null;
whenAtLeast = ver: when (versionAtLeast version ver);
whenOlder = ver: when (versionOlder version ver);
whenBetween = verLow: verHigh: when (versionAtLeast version verLow && versionOlder version verHigh);
# Keeping these around in case we decide to change this horrible implementation :)
option = x: if x == null then null else "?${x}";
yes = "y";
no = "n";
module = "m";
mkValue = val:
let
isNumber = c: elem c ["0" "1" "2" "3" "4" "5" "6" "7" "8" "9"];
in
if val == "" then "\"\""
else if val == yes || val == module || val == no then val
else if all isNumber (stringToCharacters val) then val
else if substring 0 2 val == "0x" then val
else val; # FIXME: fix quoting one day
# generate nix intermediate kernel config file of the form
#
# VIRTIO_MMIO m
# VIRTIO_BLK y
# VIRTIO_CONSOLE n
# NET_9P_VIRTIO? y
#
# Use mkValuePreprocess to preprocess option values, aka mark 'modules' as
# 'yes' or vice-versa
# Borrowed from copumpkin https://github.com/NixOS/nixpkgs/pull/12158
# returns a string, expr should be an attribute set
generateNixKConf = exprs: mkValuePreprocess:
let
mkConfigLine = key: rawval:
let
val = if builtins.isFunction mkValuePreprocess then mkValuePreprocess rawval else rawval;
in
if val == null
then ""
else if hasPrefix "?" val
then "${key}? ${mkValue (removePrefix "?" val)}\n"
else "${key} ${mkValue val}\n";
mkConf = cfg: concatStrings (mapAttrsToList mkConfigLine cfg);
in mkConf exprs;
}

View File

@ -99,6 +99,16 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = ''BSD 4-clause "Original" or "Old" License'';
};
bsl10 = {
fullName = "Business Source License 1.0";
url = https://mariadb.com/bsl10;
};
bsl11 = {
fullName = "Business Source License 1.1";
url = https://mariadb.com/bsl11;
};
clArtistic = spdx {
spdxId = "ClArtistic";
fullName = "Clarified Artistic License";
@ -112,26 +122,37 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
cc-by-nc-sa-20 = spdx {
spdxId = "CC-BY-NC-SA-2.0";
fullName = "Creative Commons Attribution Non Commercial Share Alike 2.0";
free = false;
};
cc-by-nc-sa-25 = spdx {
spdxId = "CC-BY-NC-SA-2.5";
fullName = "Creative Commons Attribution Non Commercial Share Alike 2.5";
free = false;
};
cc-by-nc-sa-30 = spdx {
spdxId = "CC-BY-NC-SA-3.0";
fullName = "Creative Commons Attribution Non Commercial Share Alike 3.0";
free = false;
};
cc-by-nc-sa-40 = spdx {
spdxId = "CC-BY-NC-SA-4.0";
fullName = "Creative Commons Attribution Non Commercial Share Alike 4.0";
free = false;
};
cc-by-nc-40 = spdx {
spdxId = "CC-BY-NC-4.0";
fullName = "Creative Commons Attribution Non Commercial 4.0 International";
free = false;
};
cc-by-nd-30 = spdx {
spdxId = "CC-BY-ND-3.0";
fullName = "Creative Commons Attribution-No Derivative Works v3.00";
free = false;
};
cc-by-sa-25 = spdx {
@ -189,6 +210,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Common Public License 1.0";
};
curl = {
fullName = "MIT/X11 derivate";
url = "https://curl.haxx.se/docs/copyright.html";
};
doc = spdx {
spdxId = "DOC";
fullName = "DOC License";
@ -210,6 +236,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Eiffel Forum License v2.0";
};
elastic = {
fullName = "ELASTIC LICENSE";
url = https://github.com/elastic/elasticsearch/blob/master/licenses/ELASTIC-LICENSE.txt;
free = false;
};
epl10 = spdx {
spdxId = "EPL-1.0";
fullName = "Eclipse Public License 1.0";
@ -445,6 +477,7 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
msrla = {
fullName = "Microsoft Research License Agreement";
url = "http://research.microsoft.com/en-us/projects/pex/msr-la.txt";
free = false;
};
ncsa = spdx {
@ -585,6 +618,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "Vim License";
};
virtualbox-puel = {
fullName = "Oracle VM VirtualBox Extension Pack Personal Use and Evaluation License (PUEL)";
url = "https://www.virtualbox.org/wiki/VirtualBox_PUEL";
free = false;
};
vsl10 = spdx {
spdxId = "VSL-1.0";
fullName = "Vovida Software License v1.0";
@ -615,6 +654,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "wxWindows Library Licence, Version 3.1";
};
xfig = {
fullName = "xfig";
url = "http://mcj.sourceforge.net/authors.html#xfig";
};
zlib = spdx {
spdxId = "Zlib";
fullName = "zlib License";

View File

@ -1,7 +1,9 @@
# General list operations.
{ lib }:
with lib.trivial;
let
inherit (lib.strings) toInt;
in
rec {
inherit (builtins) head tail length isList elemAt concatLists filter elem genList;
@ -62,7 +64,6 @@ rec {
*/
foldl = op: nul: list:
let
len = length list;
foldl' = n:
if n == -1
then nul
@ -99,7 +100,7 @@ rec {
concatMap (x: [x] ++ ["z"]) ["a" "b"]
=> [ "a" "z" "b" "z" ]
*/
concatMap = f: list: concatLists (map f list);
concatMap = builtins.concatMap or (f: list: concatLists (map f list));
/* Flatten the argument into a single list; that is, nested lists are
spliced into the top-level lists.
@ -248,6 +249,42 @@ rec {
else { right = t.right; wrong = [h] ++ t.wrong; }
) { right = []; wrong = []; });
/* Splits the elements of a list into many lists, using the return value of a predicate.
     The predicate should return a string, which becomes a key of the attrset `groupBy' returns.
     `groupBy'' allows customising the combining function and initial value
Example:
groupBy (x: boolToString (x > 2)) [ 5 1 2 3 4 ]
=> { true = [ 5 3 4 ]; false = [ 1 2 ]; }
groupBy (x: x.name) [ {name = "icewm"; script = "icewm &";}
{name = "xfce"; script = "xfce4-session &";}
{name = "icewm"; script = "icewmbg &";}
{name = "mate"; script = "gnome-session &";}
]
=> { icewm = [ { name = "icewm"; script = "icewm &"; }
{ name = "icewm"; script = "icewmbg &"; } ];
mate = [ { name = "mate"; script = "gnome-session &"; } ];
xfce = [ { name = "xfce"; script = "xfce4-session &"; } ];
}
     groupBy' allows customising the combining function and initial value
Example:
groupBy' builtins.add 0 (x: boolToString (x > 2)) [ 5 1 2 3 4 ]
=> { true = 12; false = 3; }
*/
groupBy' = op: nul: pred: lst:
foldl' (r: e:
let
key = pred e;
in
r // { ${key} = op (r.${key} or nul) e; }
) {} lst;
groupBy = groupBy' (sum: e: sum ++ [e]) [];
/* Merges two lists of the same size together. If the sizes aren't the same
the merging stops at the shortest. How both lists are merged is defined
by the first argument.
@ -409,6 +446,25 @@ rec {
then compareLists cmp (tail a) (tail b)
else rel;
/* Sort list using "Natural sorting".
Numeric portions of strings are sorted in numeric order.
Example:
naturalSort ["disk11" "disk8" "disk100" "disk9"]
=> ["disk8" "disk9" "disk11" "disk100"]
naturalSort ["10.46.133.149" "10.5.16.62" "10.54.16.25"]
=> ["10.5.16.62" "10.46.133.149" "10.54.16.25"]
naturalSort ["v0.2" "v0.15" "v0.0.9"]
=> [ "v0.0.9" "v0.2" "v0.15" ]
*/
naturalSort = lst:
let
vectorise = s: map (x: if isList x then toInt (head x) else x) (builtins.split "(0|[1-9][0-9]*)" s);
prepared = map (x: [ (vectorise x) x ]) lst; # remember vectorised version for O(n) regex splits
less = a: b: (compareLists compare (head a) (head b)) < 0;
in
map (x: elemAt x 1) (sort less prepared);
/* Return the first (at most) N elements of a list.
Example:

View File

@ -86,6 +86,4 @@ rec {
then { system = elem; }
else { parsed = elem; };
in lib.matchAttrs pattern platform;
enableIfAvailable = p: if p.meta.available or true then [ p ] else [];
}

View File

@ -59,7 +59,7 @@ rec {
};
};
closed = closeModules (modules ++ [ internalModule ]) ({ inherit config options; lib = import ./.; } // specialArgs);
closed = closeModules (modules ++ [ internalModule ]) ({ inherit config options lib; } // specialArgs);
options = mergeModules prefix (reverseList (filterModules (specialArgs.modulesPath or "") closed));
@ -159,7 +159,7 @@ rec {
context = name: ''while evaluating the module argument `${name}' in "${key}":'';
extraArgs = builtins.listToAttrs (map (name: {
inherit name;
value = addErrorContext (context name)
value = builtins.addErrorContext (context name)
(args.${name} or config._module.args.${name});
}) requiredArgs);
@ -309,7 +309,8 @@ rec {
res.mergedValue;
in opt //
{ value = addErrorContext "while evaluating the option `${showOption loc}':" value;
{ value = builtins.addErrorContext "while evaluating the option `${showOption loc}':" value;
inherit (res.defsFinal') highestPrio;
definitions = map (def: def.value) res.defsFinal;
files = map (def: def.file) res.defsFinal;
inherit (res) isDefined;
@ -317,7 +318,7 @@ rec {
# Merge definitions of a value of a given type.
mergeDefinitions = loc: type: defs: rec {
defsFinal =
defsFinal' =
let
# Process mkMerge and mkIf properties.
defs' = concatMap (m:
@ -325,15 +326,20 @@ rec {
) defs;
# Process mkOverride properties.
defs'' = filterOverrides defs';
defs'' = filterOverrides' defs';
# Sort mkOrder properties.
defs''' =
# Avoid sorting if we don't have to.
if any (def: def.value._type or "" == "order") defs''
then sortProperties defs''
else defs'';
in defs''';
if any (def: def.value._type or "" == "order") defs''.values
then sortProperties defs''.values
else defs''.values;
in {
values = defs''';
inherit (defs'') highestPrio;
};
defsFinal = defsFinal'.values;
# Type-check the remaining definitions, and merge them.
mergedValue = foldl' (res: def:
@ -416,13 +422,18 @@ rec {
Note that "z" has the default priority 100.
*/
filterOverrides = defs:
filterOverrides = defs: (filterOverrides' defs).values;
filterOverrides' = defs:
let
defaultPrio = 100;
getPrio = def: if def.value._type or "" == "override" then def.value.priority else defaultPrio;
highestPrio = foldl' (prio: def: min (getPrio def) prio) 9999 defs;
strip = def: if def.value._type or "" == "override" then def // { value = def.value.content; } else def;
in concatMap (def: if getPrio def == highestPrio then [(strip def)] else []) defs;
in {
values = concatMap (def: if getPrio def == highestPrio then [(strip def)] else []) defs;
inherit highestPrio;
};
/* Sort a list of properties. The sort priority of a property is
1000 by default, but can be overridden by wrapping the property
@ -482,7 +493,7 @@ rec {
inherit priority content;
};
mkOptionDefault = mkOverride 1001; # priority of option defaults
mkOptionDefault = mkOverride 1500; # priority of option defaults
mkDefault = mkOverride 1000; # used in config sections of non-user modules to set a default
mkForce = mkOverride 50;
mkVMOverride = mkOverride 10; # used by nixos-rebuild build-vm
@ -521,9 +532,7 @@ rec {
#
mkAliasDefinitions = mkAliasAndWrapDefinitions id;
mkAliasAndWrapDefinitions = wrap: option:
mkMerge
(optional (isOption option && option.isDefined)
(wrap (mkMerge option.definitions)));
mkIf (isOption option && option.isDefined) (wrap (mkMerge option.definitions));
/* Compatibility. */
@ -658,21 +667,25 @@ rec {
};
doRename = { from, to, visible, warn, use }:
{ config, options, ... }:
let
fromOpt = getAttrFromPath from options;
toOf = attrByPath to
(abort "Renaming error: option `${showOption to}' does not exist.");
in
{ config, options, ... }:
{ options = setAttrByPath from (mkOption {
{
options = setAttrByPath from (mkOption {
inherit visible;
description = "Alias of <option>${showOption to}</option>.";
apply = x: use (toOf config);
});
config = {
warnings =
let opt = getAttrFromPath from options; in
optional (warn && opt.isDefined)
"The option `${showOption from}' defined in ${showFiles opt.files} has been renamed to `${showOption to}'.";
} // setAttrByPath to (mkAliasDefinitions (getAttrFromPath from options));
config = mkMerge [
{
warnings = optional (warn && fromOpt.isDefined)
"The option `${showOption from}' defined in ${showFiles fromOpt.files} has been renamed to `${showOption to}'.";
}
(mkAliasAndWrapDefinitions (setAttrByPath to) fromOpt)
];
};
}

View File

@ -127,7 +127,20 @@ rec {
/* Helper functions. */
showOption = concatStringsSep ".";
  # Convert an option, described as a list of the option parts, into a
  # safe, human-readable version, i.e.:
#
# (showOption ["foo" "bar" "baz"]) == "foo.bar.baz"
# (showOption ["foo" "bar.baz" "tux"]) == "foo.\"bar.baz\".tux"
showOption = parts: let
escapeOptionPart = part:
let
escaped = lib.strings.escapeNixString part;
in if escaped == "\"${part}\""
then part
else escaped;
in (concatStringsSep ".") (map escapeOptionPart parts);
showFiles = files: concatStringsSep " and " (map (f: "`${f}'") files);
unknownModule = "<unknown-file>";

View File

@ -82,7 +82,7 @@ rec {
=> "//bin"
*/
makeSearchPath = subDir: packages:
concatStringsSep ":" (map (path: path + "/" + subDir) packages);
concatStringsSep ":" (map (path: path + "/" + subDir) (builtins.filter (x: x != null) packages));
/* Construct a Unix-style search path, using given package output.
If no output is found, fallback to `.out` and then to the default.
@ -121,11 +121,20 @@ rec {
Example:
pkgs = import <nixpkgs> { }
makePerlPath [ pkgs.perlPackages.NetSMTP ]
makePerlPath [ pkgs.perlPackages.libnet ]
=> "/nix/store/n0m1fk9c960d8wlrs62sncnadygqqc6y-perl-Net-SMTP-1.25/lib/perl5/site_perl"
*/
makePerlPath = makeSearchPathOutput "lib" "lib/perl5/site_perl";
/* Construct a perl search path recursively including all dependencies (such as $PERL5LIB)
Example:
pkgs = import <nixpkgs> { }
makeFullPerlPath [ pkgs.perlPackages.CGI ]
=> "/nix/store/fddivfrdc1xql02h9q500fpnqy12c74n-perl-CGI-4.38/lib/perl5/site_perl:/nix/store/8hsvdalmsxqkjg0c5ifigpf31vc4vsy2-perl-HTML-Parser-3.72/lib/perl5/site_perl:/nix/store/zhc7wh0xl8hz3y3f71nhlw1559iyvzld-perl-HTML-Tagset-3.20/lib/perl5/site_perl"
*/
makeFullPerlPath = deps: makePerlPath (lib.misc.closePropagation deps);
/* Depending on the boolean `cond', return either the given string
or the empty string. Useful to concatenate against a bigger string.
@ -414,6 +423,39 @@ rec {
*/
enableFeature = enable: feat: "--${if enable then "enable" else "disable"}-${feat}";
/* Create an --{enable-<feat>=<value>,disable-<feat>} string that can be passed to
standard GNU Autoconf scripts.
Example:
enableFeature true "shared" "foo"
=> "--enable-shared=foo"
enableFeature false "shared" (throw "ignored")
=> "--disable-shared"
*/
enableFeatureAs = enable: feat: value: enableFeature enable feat + optionalString enable "=${value}";
/* Create an --{with,without}-<feat> string that can be passed to
standard GNU Autoconf scripts.
Example:
withFeature true "shared"
=> "--with-shared"
withFeature false "shared"
=> "--without-shared"
*/
withFeature = with_: feat: "--${if with_ then "with" else "without"}-${feat}";
/* Create an --{with-<feat>=<value>,without-<feat>} string that can be passed to
standard GNU Autoconf scripts.
Example:
with_Feature true "shared" "foo"
=> "--with-shared=foo"
with_Feature false "shared" (throw "ignored")
=> "--without-shared"
*/
withFeatureAs = with_: feat: value: withFeature with_ feat + optionalString with_ "=${value}";
/* Create a fixed width string with additional prefix to match
required width.

View File

@ -29,6 +29,7 @@ rec {
/**/ if final.isDarwin then "libSystem"
else if final.isMinGW then "msvcrt"
else if final.isMusl then "musl"
else if final.isUClibc then "uclibc"
else if final.isAndroid then "bionic"
else if final.isLinux /* default */ then "glibc"
# TODO(@Ericson2314) think more about other operating systems
@ -44,8 +45,16 @@ rec {
};
# Misc boolean options
useAndroidPrebuilt = false;
useiOSPrebuilt = false;
} // mapAttrs (n: v: v final.parsed) inspect.predicates
// args;
in assert final.useAndroidPrebuilt -> final.isAndroid;
assert lib.foldl
(pass: { assertion, message }:
if assertion final
then pass
else throw message)
true
(final.parsed.abi.assertions or []);
final;
}

View File

@ -26,7 +26,7 @@ in rec {
none = [];
arm = filterDoubles predicates.isArm;
arm = filterDoubles predicates.isAarch32;
aarch64 = filterDoubles predicates.isAarch64;
x86 = filterDoubles predicates.isx86;
i686 = filterDoubles predicates.isi686;
@ -36,7 +36,7 @@ in rec {
cygwin = filterDoubles predicates.isCygwin;
darwin = filterDoubles predicates.isDarwin;
freebsd = filterDoubles predicates.isFreeBSD;
# Should be better, but MinGW is unclear, and HURD is bit-rotted.
# Should be better, but MinGW is unclear.
gnu = filterDoubles (matchAttrs { kernel = parse.kernels.linux; abi = parse.abis.gnu; });
illumos = filterDoubles predicates.isSunOS;
linux = filterDoubles predicates.isLinux;
@ -44,5 +44,5 @@ in rec {
openbsd = filterDoubles predicates.isOpenBSD;
unix = filterDoubles predicates.isUnix;
mesaPlatforms = ["i686-linux" "x86_64-linux" "x86_64-darwin" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux"];
mesaPlatforms = ["i686-linux" "x86_64-linux" "x86_64-darwin" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux" "powerpc64le-linux"];
}

View File

@ -8,39 +8,55 @@ rec {
#
# Linux
#
powernv = {
config = "powerpc64le-unknown-linux-gnu";
platform = platforms.powernv;
};
musl-power = {
config = "powerpc64le-unknown-linux-musl";
platform = platforms.powernv;
};
sheevaplug = rec {
config = "armv5tel-unknown-linux-gnueabi";
arch = "armv5tel";
float = "soft";
platform = platforms.sheevaplug;
};
raspberryPi = rec {
config = "armv6l-unknown-linux-gnueabihf";
arch = "armv6l";
float = "hard";
fpu = "vfp";
platform = platforms.raspberrypi;
};
armv7l-hf-multiplatform = rec {
config = "arm-unknown-linux-gnueabihf";
arch = "armv7-a";
float = "hard";
fpu = "vfpv3-d16";
config = "armv7a-unknown-linux-gnueabihf";
platform = platforms.armv7l-hf-multiplatform;
};
aarch64-multiplatform = rec {
config = "aarch64-unknown-linux-gnu";
arch = "aarch64";
platform = platforms.aarch64-multiplatform;
};
armv5te-android-prebuilt = rec {
config = "armv5tel-unknown-linux-androideabi";
sdkVer = "21";
ndkVer = "10e";
platform = platforms.armv5te-android;
useAndroidPrebuilt = true;
};
armv7a-android-prebuilt = rec {
config = "armv7a-unknown-linux-androideabi";
sdkVer = "24";
ndkVer = "17";
platform = platforms.armv7a-android;
useAndroidPrebuilt = true;
};
aarch64-android-prebuilt = rec {
config = "aarch64-unknown-linux-android";
arch = "aarch64";
sdkVer = "24";
ndkVer = "17";
platform = platforms.aarch64-multiplatform;
useAndroidPrebuilt = true;
};
@ -51,16 +67,17 @@ rec {
};
pogoplug4 = rec {
arch = "armv5tel";
config = "armv5tel-unknown-linux-gnueabi";
float = "soft";
platform = platforms.pogoplug4;
};
ben-nanonote = rec {
config = "mipsel-unknown-linux-uclibc";
platform = platforms.ben_nanonote;
};
fuloongminipc = rec {
config = "mipsel-unknown-linux-gnu";
arch = "mips";
float = "hard";
platform = platforms.fuloong2f_n32;
};
@ -88,16 +105,42 @@ rec {
#
iphone64 = {
config = "aarch64-apple-darwin14";
arch = "arm64";
libc = "libSystem";
config = "aarch64-apple-ios";
# config = "aarch64-apple-darwin14";
sdkVer = "10.2";
xcodeVer = "8.2";
xcodePlatform = "iPhoneOS";
useiOSPrebuilt = true;
platform = {};
};
iphone32 = {
config = "arm-apple-darwin10";
arch = "armv7-a";
libc = "libSystem";
config = "armv7a-apple-ios";
# config = "arm-apple-darwin10";
sdkVer = "10.2";
xcodeVer = "8.2";
xcodePlatform = "iPhoneOS";
useiOSPrebuilt = true;
platform = {};
};
iphone64-simulator = {
config = "x86_64-apple-ios";
# config = "x86_64-apple-darwin14";
sdkVer = "10.2";
xcodeVer = "8.2";
xcodePlatform = "iPhoneSimulator";
useiOSPrebuilt = true;
platform = {};
};
iphone32-simulator = {
config = "i686-apple-ios";
# config = "i386-apple-darwin11";
sdkVer = "10.2";
xcodeVer = "8.2";
xcodePlatform = "iPhoneSimulator";
useiOSPrebuilt = true;
platform = {};
};
@ -108,7 +151,6 @@ rec {
# 32 bit mingw-w64
mingw32 = {
config = "i686-pc-mingw32";
arch = "x86"; # Irrelevant
libc = "msvcrt"; # This distinguishes the mingw (non posix) toolchain
platform = {};
};
@ -117,7 +159,6 @@ rec {
mingwW64 = {
# That's the triplet they use in the mingw-w64 docs.
config = "x86_64-pc-mingw32";
arch = "x86_64"; # Irrelevant
libc = "msvcrt"; # This distinguishes the mingw (non posix) toolchain
platform = {};
};

View File

@ -3,11 +3,13 @@ let
inherit (lib.systems) parse;
inherit (lib.systems.inspect) patterns;
abis = lib.mapAttrs (_: abi: builtins.removeAttrs abi [ "assertions" ]) parse.abis;
in rec {
all = [ {} ]; # `{}` matches anything
none = [];
arm = [ patterns.isArm ];
arm = [ patterns.isAarch32 ];
aarch64 = [ patterns.isAarch64 ];
x86 = [ patterns.isx86 ];
i686 = [ patterns.isi686 ];
@ -18,8 +20,12 @@ in rec {
cygwin = [ patterns.isCygwin ];
darwin = [ patterns.isDarwin ];
freebsd = [ patterns.isFreeBSD ];
# Should be better, but MinGW is unclear, and HURD is bit-rotted.
gnu = [ { kernel = parse.kernels.linux; abi = parse.abis.gnu; } ];
# Should be better, but MinGW is unclear.
gnu = [
{ kernel = parse.kernels.linux; abi = abis.gnu; }
{ kernel = parse.kernels.linux; abi = abis.gnueabi; }
{ kernel = parse.kernels.linux; abi = abis.gnueabihf; }
];
illumos = [ patterns.isSunOS ];
linux = [ patterns.isLinux ];
netbsd = [ patterns.isNetBSD ];

View File

@ -3,16 +3,21 @@ with import ./parse.nix { inherit lib; };
with lib.attrsets;
with lib.lists;
let abis_ = abis; in
let abis = lib.mapAttrs (_: abi: builtins.removeAttrs abi [ "assertions" ]) abis_; in
rec {
patterns = rec {
isi686 = { cpu = cpuTypes.i686; };
isx86_64 = { cpu = cpuTypes.x86_64; };
isPowerPC = { cpu = cpuTypes.powerpc; };
isPower = { cpu = { family = "power"; }; };
isx86 = { cpu = { family = "x86"; }; };
isArm = { cpu = { family = "arm"; }; };
isAarch64 = { cpu = { family = "aarch64"; }; };
isAarch32 = { cpu = { family = "arm"; bits = 32; }; };
isAarch64 = { cpu = { family = "arm"; bits = 64; }; };
isMips = { cpu = { family = "mips"; }; };
isRiscV = { cpu = { family = "riscv"; }; };
isSparc = { cpu = { family = "sparc"; }; };
isWasm = { cpu = { family = "wasm"; }; };
is32bit = { cpu = { bits = 32; }; };
@ -22,14 +27,13 @@ rec {
isBSD = { kernel = { families = { inherit (kernelFamilies) bsd; }; }; };
isDarwin = { kernel = { families = { inherit (kernelFamilies) darwin; }; }; };
isUnix = [ isBSD isDarwin isLinux isSunOS isHurd isCygwin ];
isUnix = [ isBSD isDarwin isLinux isSunOS isCygwin ];
isMacOS = { kernel = kernels.macos; };
isiOS = { kernel = kernels.ios; };
isLinux = { kernel = kernels.linux; };
isSunOS = { kernel = kernels.solaris; };
isFreeBSD = { kernel = kernels.freebsd; };
isHurd = { kernel = kernels.hurd; };
isNetBSD = { kernel = kernels.netbsd; };
isOpenBSD = { kernel = kernels.openbsd; };
isWindows = { kernel = kernels.windows; };
@ -38,9 +42,13 @@ rec {
isAndroid = [ { abi = abis.android; } { abi = abis.androideabi; } ];
isMusl = with abis; map (a: { abi = a; }) [ musl musleabi musleabihf ];
isUClibc = with abis; map (a: { abi = a; }) [ uclibc uclibceabi uclibceabihf ];
isEfi = map (family: { cpu.family = family; })
[ "x86" "arm" "aarch64" ];
# Deprecated after 18.03
isArm = isAarch32;
};
matchAnyAttrs = patterns:

View File

@ -18,6 +18,7 @@
with lib.lists;
with lib.types;
with lib.attrsets;
with lib.strings;
with (import ./inspect.nix { inherit lib; }).predicates;
let
@ -34,7 +35,7 @@ rec {
################################################################################
types.openSignifiantByte = mkOptionType {
types.openSignificantByte = mkOptionType {
name = "significant-byte";
description = "Endianness";
merge = mergeOneOption;
@ -42,7 +43,7 @@ rec {
types.significantByte = enum (attrValues significantBytes);
significantBytes = setTypes types.openSignifiantByte {
significantBytes = setTypes types.openSignificantByte {
bigEndian = {};
littleEndian = {};
};
@ -68,20 +69,36 @@ rec {
cpuTypes = with significantBytes; setTypes types.openCpuType {
arm = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv5tel = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv6l = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv7a = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv7l = { bits = 32; significantByte = littleEndian; family = "arm"; };
aarch64 = { bits = 64; significantByte = littleEndian; family = "aarch64"; };
armv5tel = { bits = 32; significantByte = littleEndian; family = "arm"; version = "5"; };
armv6m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "6"; };
armv6l = { bits = 32; significantByte = littleEndian; family = "arm"; version = "6"; };
armv7a = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; };
armv7r = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; };
armv7m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; };
armv7l = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; };
armv8a = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; };
armv8r = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; };
armv8m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; };
aarch64 = { bits = 64; significantByte = littleEndian; family = "arm"; version = "8"; };
i686 = { bits = 32; significantByte = littleEndian; family = "x86"; };
x86_64 = { bits = 64; significantByte = littleEndian; family = "x86"; };
mips = { bits = 32; significantByte = bigEndian; family = "mips"; };
mipsel = { bits = 32; significantByte = littleEndian; family = "mips"; };
mips64 = { bits = 64; significantByte = bigEndian; family = "mips"; };
mips64el = { bits = 64; significantByte = littleEndian; family = "mips"; };
powerpc = { bits = 32; significantByte = bigEndian; family = "power"; };
powerpc64 = { bits = 64; significantByte = bigEndian; family = "power"; };
powerpc64le = { bits = 64; significantByte = littleEndian; family = "power"; };
riscv32 = { bits = 32; significantByte = littleEndian; family = "riscv"; };
riscv64 = { bits = 64; significantByte = littleEndian; family = "riscv"; };
sparc = { bits = 32; significantByte = bigEndian; family = "sparc"; };
sparc64 = { bits = 64; significantByte = bigEndian; family = "sparc"; };
wasm32 = { bits = 32; significantByte = littleEndian; family = "wasm"; };
wasm64 = { bits = 64; significantByte = littleEndian; family = "wasm"; };
};
@ -155,7 +172,6 @@ rec {
macos = { execFormat = macho; families = { inherit darwin; }; name = "darwin"; };
ios = { execFormat = macho; families = { inherit darwin; }; };
freebsd = { execFormat = elf; families = { inherit bsd; }; };
hurd = { execFormat = elf; families = { }; };
linux = { execFormat = elf; families = { }; };
netbsd = { execFormat = elf; families = { inherit bsd; }; };
none = { execFormat = unknown; families = { }; };
@ -165,9 +181,6 @@ rec {
} // { # aliases
# 'darwin' is the kernel for all of them. We choose macOS by default.
darwin = kernels.macos;
# TODO(@Ericson2314): Handle these Darwin version suffixes more generally.
darwin10 = kernels.macos;
darwin14 = kernels.macos;
watchos = kernels.ios;
tvos = kernels.ios;
win32 = kernels.windows;
@ -184,24 +197,47 @@ rec {
types.abi = enum (attrValues abis);
abis = setTypes types.openAbi {
android = {};
cygnus = {};
gnu = {};
msvc = {};
eabi = {};
androideabi = {};
gnueabi = {};
gnueabihf = {};
musleabi = {};
musleabihf = {};
android = {
assertions = [
{ assertion = platform: !platform.isAarch32;
message = ''
The "android" ABI is not for 32-bit ARM. Use "androideabi" instead.
'';
}
];
};
gnueabi = { float = "soft"; };
gnueabihf = { float = "hard"; };
gnu = {
assertions = [
{ assertion = platform: !platform.isAarch32;
message = ''
The "gnu" ABI is ambiguous on 32-bit ARM. Use "gnueabi" or "gnueabihf" instead.
'';
}
];
};
musleabi = { float = "soft"; };
musleabihf = { float = "hard"; };
musl = {};
uclibceabihf = { float = "hard"; };
uclibceabi = { float = "soft"; };
uclibc = {};
unknown = {};
};
################################################################################
types.system = mkOptionType {
types.parsedPlatform = mkOptionType {
name = "system";
description = "fully parsed representation of llvm- or nix-style platform tuple";
merge = mergeOneOption;
@ -215,15 +251,13 @@ rec {
isSystem = isType "system";
mkSystem = components:
assert types.system.check components;
assert types.parsedPlatform.check components;
setType "system" components;
mkSkeletonFromList = l: {
"2" = # We only do 2-part hacks for things Nix already supports
if elemAt l 1 == "cygwin"
then { cpu = elemAt l 0; kernel = "windows"; abi = "cygnus"; }
else if elemAt l 1 == "gnu"
then { cpu = elemAt l 0; kernel = "hurd"; abi = "gnu"; }
else { cpu = elemAt l 0; kernel = elemAt l 1; };
"3" = # Awkwards hacks, beware!
if elemAt l 1 == "apple"
@ -232,6 +266,8 @@ rec {
then { cpu = elemAt l 0; kernel = elemAt l 1; abi = elemAt l 2; }
else if (elemAt l 2 == "mingw32") # autotools breaks on -gnu for windows
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "windows"; abi = "gnu"; }
else if hasPrefix "netbsd" (elemAt l 2)
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; }
else throw "Target specification with 3 components is ambiguous";
"4" = { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; abi = elemAt l 3; };
}.${toString (length l)}
@ -258,10 +294,17 @@ rec {
else if isDarwin parsed then vendors.apple
else if isWindows parsed then vendors.pc
else vendors.unknown;
kernel = getKernel args.kernel;
kernel = if hasPrefix "darwin" args.kernel then getKernel "darwin"
else if hasPrefix "netbsd" args.kernel then getKernel "netbsd"
else getKernel args.kernel;
abi =
/**/ if args ? abi then getAbi args.abi
else if isLinux parsed then abis.gnu
else if isLinux parsed then
if isAarch32 parsed then
if lib.versionAtLeast (parsed.cpu.version or "0") "6"
then abis.gnueabihf
else abis.gnueabi
else abis.gnu
else if isWindows parsed then abis.gnu
else abis.unknown;
};
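A hedged illustration of the new ARM ABI inference (a minimal nix-repl sketch; it assumes mkSystemFromString from this file is in scope and that setTypes injects a name attribute into each ABI, as elsewhere in lib/systems):

  nix-repl> (mkSystemFromString "armv7l-linux").abi.name
  "gnueabihf"   # cpu version "7" >= "6", so the hard-float EABI is inferred
  nix-repl> (mkSystemFromString "armv5tel-linux").abi.name
  "gnueabi"     # cpu version "5" < "6", so the soft-float EABI is inferred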

View File

@ -20,12 +20,31 @@ rec {
kernelAutoModules = false;
};
powernv = {
name = "PowerNV";
kernelArch = "powerpc";
kernelBaseConfig = "powernv_defconfig";
kernelTarget = "zImage";
kernelInstallTarget = "install";
kernelFile = "vmlinux";
kernelAutoModules = true;
# avoid driver/FS trouble arising from unusual page size
kernelExtraConfig = ''
PPC_64K_PAGES n
PPC_4K_PAGES y
IPV6 y
'';
};
##
## ARM
##
pogoplug4 = {
name = "pogoplug4";
gcc = {
arch = "armv5te";
float = "soft";
};
kernelMajor = "2.6";
@ -158,185 +177,36 @@ rec {
kernelDTB = true; # Beyond 3.10
gcc = {
arch = "armv5te";
float = "soft";
};
};
raspberrypi = {
name = "raspberrypi";
kernelMajor = "2.6";
kernelBaseConfig = "bcmrpi_defconfig";
kernelBaseConfig = "bcm2835_defconfig";
kernelDTB = true;
kernelArch = "arm";
kernelAutoModules = false;
kernelAutoModules = true;
kernelPreferBuiltin = true;
kernelExtraConfig = ''
BLK_DEV_RAM y
BLK_DEV_INITRD y
BLK_DEV_CRYPTOLOOP m
BLK_DEV_DM m
DM_CRYPT m
MD y
REISERFS_FS m
BTRFS_FS y
XFS_FS m
JFS_FS y
EXT4_FS y
IP_PNP y
IP_PNP_DHCP y
NFS_FS y
ROOT_NFS y
TUN m
NFS_V4 y
NFS_V4_1 y
NFS_FSCACHE y
NFSD m
NFSD_V2_ACL y
NFSD_V3 y
NFSD_V3_ACL y
NFSD_V4 y
NETFILTER y
IP_NF_IPTABLES y
IP_NF_FILTER y
IP_NF_MATCH_ADDRTYPE y
IP_NF_TARGET_LOG y
IP_NF_MANGLE y
IPV6 m
VLAN_8021Q m
CIFS y
CIFS_XATTR y
CIFS_POSIX y
CIFS_FSCACHE y
CIFS_ACL y
ZRAM m
# Disable OABI to have seccomp_filter (required for systemd)
# https://github.com/raspberrypi/firmware/issues/651
OABI_COMPAT n
# Fail to build
DRM n
SCSI_ADVANSYS n
USB_ISP1362_HCD n
SND_SOC n
SND_ALI5451 n
FB_SAVAGE n
SCSI_NSP32 n
ATA_SFF n
SUNGEM n
IRDA n
ATM_HE n
SCSI_ACARD n
BLK_DEV_CMD640_ENHANCED n
FUSE_FS m
# nixos mounts some cgroup
CGROUPS y
# Latencytop
LATENCYTOP y
'';
kernelTarget = "zImage";
gcc = {
arch = "armv6";
fpu = "vfp";
float = "hard";
# TODO(@Ericson2314) what is this and is it a good idea? It was
# used in some cross compilation examples but not others.
#
# abi = "aapcs-linux";
};
};
raspberrypi2 = armv7l-hf-multiplatform // {
name = "raspberrypi2";
kernelBaseConfig = "bcm2709_defconfig";
kernelDTB = true;
kernelAutoModules = false;
kernelExtraConfig = ''
BLK_DEV_RAM y
BLK_DEV_INITRD y
BLK_DEV_CRYPTOLOOP m
BLK_DEV_DM m
DM_CRYPT m
MD y
REISERFS_FS m
BTRFS_FS y
XFS_FS m
JFS_FS y
EXT4_FS y
IP_PNP y
IP_PNP_DHCP y
NFS_FS y
ROOT_NFS y
TUN m
NFS_V4 y
NFS_V4_1 y
NFS_FSCACHE y
NFSD m
NFSD_V2_ACL y
NFSD_V3 y
NFSD_V3_ACL y
NFSD_V4 y
NETFILTER y
IP_NF_IPTABLES y
IP_NF_FILTER y
IP_NF_MATCH_ADDRTYPE y
IP_NF_TARGET_LOG y
IP_NF_MANGLE y
IPV6 m
VLAN_8021Q m
CIFS y
CIFS_XATTR y
CIFS_POSIX y
CIFS_FSCACHE y
CIFS_ACL y
ZRAM m
# Disable OABI to have seccomp_filter (required for systemd)
# https://github.com/raspberrypi/firmware/issues/651
OABI_COMPAT n
# Fail to build
DRM n
SCSI_ADVANSYS n
USB_ISP1362_HCD n
SND_SOC n
SND_ALI5451 n
FB_SAVAGE n
SCSI_NSP32 n
ATA_SFF n
SUNGEM n
IRDA n
ATM_HE n
SCSI_ACARD n
BLK_DEV_CMD640_ENHANCED n
FUSE_FS m
# nixos mounts some cgroup
CGROUPS y
# Latencytop
LATENCYTOP y
# Disable the common config Xen, it doesn't build on ARM
XEN? n
'';
kernelTarget = "zImage";
};
# Legacy attribute, for compatibility with existing configs only.
raspberrypi2 = armv7l-hf-multiplatform;
scaleway-c1 = armv7l-hf-multiplatform // {
gcc = {
cpu = "cortex-a9";
fpu = "vfpv3";
float = "hard";
};
};
@ -363,7 +233,6 @@ rec {
gcc = {
cpu = "cortex-a9";
fpu = "neon";
float = "hard";
};
};
@ -376,6 +245,132 @@ rec {
kernelBaseConfig = "guruplug_defconfig";
};
beaglebone = armv7l-hf-multiplatform // {
name = "beaglebone";
kernelBaseConfig = "bb.org_defconfig";
kernelAutoModules = false;
kernelExtraConfig = ""; # TBD kernel config
kernelTarget = "zImage";
};
# https://developer.android.com/ndk/guides/abis#armeabi
armv5te-android = {
name = "armeabi";
gcc = {
arch = "armv5te";
float = "soft";
float-abi = "soft";
};
};
# https://developer.android.com/ndk/guides/abis#v7a
armv7a-android = {
name = "armeabi-v7a";
gcc = {
arch = "armv7-a";
float = "hard";
float-abi = "softfp";
fpu = "vfpv3-d16";
};
};
armv7l-hf-multiplatform = {
name = "armv7l-hf-multiplatform";
kernelMajor = "2.6"; # Using "2.6" enables 2.6 kernel syscalls in glibc.
kernelBaseConfig = "multi_v7_defconfig";
kernelArch = "arm";
kernelDTB = true;
kernelAutoModules = true;
kernelPreferBuiltin = true;
kernelTarget = "zImage";
kernelExtraConfig = ''
# Serial port for Raspberry Pi 3. Upstream forgot to add it to the ARMv7 defconfig.
SERIAL_8250_BCM2835AUX y
SERIAL_8250_EXTENDED y
SERIAL_8250_SHARE_IRQ y
# Fix broken sunxi-sid nvmem driver.
TI_CPTS y
# Hangs ODROID-XU4
ARM_BIG_LITTLE_CPUIDLE n
# Disable OABI to have seccomp_filter (required for systemd)
# https://github.com/raspberrypi/firmware/issues/651
OABI_COMPAT n
'';
gcc = {
# Some table about fpu flags:
# http://community.arm.com/servlet/JiveServlet/showImage/38-1981-3827/blogentry-103749-004812900+1365712953_thumb.png
# Cortex-A5: -mfpu=neon-fp16
# Cortex-A7 (rpi2): -mfpu=neon-vfpv4
# Cortex-A8 (beaglebone): -mfpu=neon
# Cortex-A9: -mfpu=neon-fp16
# Cortex-A15: -mfpu=neon-vfpv4
# More about FPU:
# https://wiki.debian.org/ArmHardFloatPort/VfpComparison
# vfpv3-d16 is what Debian uses and seems to be the best compromise: NEON is not supported in e.g. Scaleway or Tegra 2,
# and the above page suggests NEON is only an improvement with hand-written assembly.
arch = "armv7-a";
fpu = "vfpv3-d16";
# For the Raspberry Pi 2 the best would be:
# cpu = "cortex-a7";
# fpu = "neon-vfpv4";
};
};
aarch64-multiplatform = {
name = "aarch64-multiplatform";
kernelMajor = "2.6"; # Using "2.6" enables 2.6 kernel syscalls in glibc.
kernelBaseConfig = "defconfig";
kernelArch = "arm64";
kernelDTB = true;
kernelAutoModules = true;
kernelPreferBuiltin = true;
kernelExtraConfig = ''
# Raspberry Pi 3 stuff. Not needed for kernels >= 4.10.
ARCH_BCM2835 y
BCM2835_MBOX y
BCM2835_WDT y
RASPBERRYPI_FIRMWARE y
RASPBERRYPI_POWER y
SERIAL_8250_BCM2835AUX y
SERIAL_8250_EXTENDED y
SERIAL_8250_SHARE_IRQ y
# Cavium ThunderX stuff.
PCI_HOST_THUNDER_ECAM y
# Nvidia Tegra stuff.
PCI_TEGRA y
# The default (=y) forces us to have the XHCI firmware available in initrd,
# which our initrd builder can't currently do easily.
USB_XHCI_TEGRA m
'';
kernelTarget = "Image";
gcc = {
arch = "armv8-a";
};
};
##
## MIPS
##
ben_nanonote = {
name = "ben_nanonote";
kernelMajor = "2.6";
kernelArch = "mips";
gcc = {
arch = "mips32";
float = "soft";
};
};
fuloong2f_n32 = {
name = "fuloong2f_n32";
kernelMajor = "2.6";
@ -449,97 +444,14 @@ rec {
kernelTarget = "vmlinux";
gcc = {
arch = "loongson2f";
float = "hard";
abi = "n32";
};
};
beaglebone = armv7l-hf-multiplatform // {
name = "beaglebone";
kernelBaseConfig = "bb.org_defconfig";
kernelAutoModules = false;
kernelExtraConfig = ""; # TBD kernel config
kernelTarget = "zImage";
};
armv7l-hf-multiplatform = {
name = "armv7l-hf-multiplatform";
kernelMajor = "2.6"; # Using "2.6" enables 2.6 kernel syscalls in glibc.
kernelBaseConfig = "multi_v7_defconfig";
kernelArch = "arm";
kernelDTB = true;
kernelAutoModules = true;
kernelPreferBuiltin = true;
kernelTarget = "zImage";
kernelExtraConfig = ''
# Serial port for Raspberry Pi 3. Upstream forgot to add it to the ARMv7 defconfig.
SERIAL_8250_BCM2835AUX y
SERIAL_8250_EXTENDED y
SERIAL_8250_SHARE_IRQ y
# Fix broken sunxi-sid nvmem driver.
TI_CPTS y
# Hangs ODROID-XU4
ARM_BIG_LITTLE_CPUIDLE n
'';
gcc = {
# Some table about fpu flags:
# http://community.arm.com/servlet/JiveServlet/showImage/38-1981-3827/blogentry-103749-004812900+1365712953_thumb.png
# Cortex-A5: -mfpu=neon-fp16
# Cortex-A7 (rpi2): -mfpu=neon-vfpv4
# Cortex-A8 (beaglebone): -mfpu=neon
# Cortex-A9: -mfpu=neon-fp16
# Cortex-A15: -mfpu=neon-vfpv4
# More about FPU:
# https://wiki.debian.org/ArmHardFloatPort/VfpComparison
# vfpv3-d16 is what Debian uses and seems to be the best compromise: NEON is not supported in e.g. Scaleway or Tegra 2,
# and the above page suggests NEON is only an improvement with hand-written assembly.
arch = "armv7-a";
fpu = "vfpv3-d16";
float = "hard";
# For the Raspberry Pi 2 the best would be:
# cpu = "cortex-a7";
# fpu = "neon-vfpv4";
};
};
aarch64-multiplatform = {
name = "aarch64-multiplatform";
kernelMajor = "2.6"; # Using "2.6" enables 2.6 kernel syscalls in glibc.
kernelBaseConfig = "defconfig";
kernelArch = "arm64";
kernelDTB = true;
kernelAutoModules = true;
kernelPreferBuiltin = true;
kernelExtraConfig = ''
# Raspberry Pi 3 stuff. Not needed for kernels >= 4.10.
ARCH_BCM2835 y
BCM2835_MBOX y
BCM2835_WDT y
RASPBERRYPI_FIRMWARE y
RASPBERRYPI_POWER y
SERIAL_8250_BCM2835AUX y
SERIAL_8250_EXTENDED y
SERIAL_8250_SHARE_IRQ y
# Cavium ThunderX stuff.
PCI_HOST_THUNDER_ECAM y
# Nvidia Tegra stuff.
PCI_TEGRA y
# The default (=y) forces us to have the XHCI firmware available in initrd,
# which our initrd builder can't currently do easily.
USB_XHCI_TEGRA m
'';
kernelTarget = "Image";
gcc = {
arch = "armv8-a";
};
};
##
## Other
##
riscv-multiplatform = bits: {
name = "riscv-multiplatform";
@ -562,5 +474,6 @@ rec {
"armv7l-linux" = armv7l-hf-multiplatform;
"aarch64-linux" = aarch64-multiplatform;
"mipsel-linux" = fuloong2f_n32;
"powerpc64le-linux" = powernv;
}.${system} or pcBase;
}
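For orientation, a hedged nix-repl sketch of the new PowerNV entry (lib.systems.platforms as the attribute path, and selectBySystem as the name of the enclosing helper, are assumptions about how this file is exposed):

  nix-repl> lib = import <nixpkgs/lib>
  nix-repl> lib.systems.platforms.powernv.kernelBaseConfig
  "powernv_defconfig"
  nix-repl> (lib.systems.platforms.selectBySystem "powerpc64le-linux").name
  "PowerNV"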

View File

@ -45,6 +45,21 @@ runTests {
expected = true;
};
testBitAnd = {
expr = (bitAnd 3 10);
expected = 2;
};
testBitOr = {
expr = (bitOr 3 10);
expected = 11;
};
testBitXor = {
expr = (bitXor 3 10);
expected = 9;
};
# STRINGS
testConcatMapStrings = {
@ -198,6 +213,30 @@ runTests {
};
# ATTRSETS
# code from the example
testRecursiveUpdateUntil = {
expr = recursiveUpdateUntil (path: l: r: path == ["foo"]) {
# first attribute set
foo.bar = 1;
foo.baz = 2;
bar = 3;
} {
# second attribute set
foo.bar = 1;
foo.quz = 2;
baz = 4;
};
expected = {
foo.bar = 1; # 'foo.*' from the second set
foo.quz = 2; #
bar = 3; # 'bar' from the first set
baz = 4; # 'baz' from the second set
};
};
# GENERATORS
# these tests assume attributes are converted to lists
# in alphabetical order
@ -317,7 +356,8 @@ runTests {
expr = mapAttrs (const (generators.toPretty {})) rec {
int = 42;
bool = true;
string = "fnord";
string = ''fno"rd'';
path = /. + "/foo"; # toPath returns a string
null_ = null;
function = x: x;
functionArgs = { arg ? 4, foo }: arg;
@ -328,13 +368,14 @@ runTests {
expected = rec {
int = "42";
bool = "true";
string = "\"fnord\"";
string = ''"fno\"rd"'';
path = "/foo";
null_ = "null";
function = "<λ>";
functionArgs = "<λ:{(arg),foo}>";
list = "[ 3 4 ${function} [ false ] ]";
attrs = "{ \"foo\" = null; \"foo bar\" = \"baz\"; }";
drv = "<δ>";
drv = "<δ:test>";
};
};
@ -363,10 +404,6 @@ runTests {
resRem7 = res6.replace (a: removeAttrs a ["a"]);
resReplace6 = let x = defaultOverridableDelayableArgs id { a = 7; mergeAttrBy = { a = builtins.add; }; };
x2 = x.merge { a = 20; }; # now we have 27
in (x2.replace) { a = 10; }; # and override the value by 10
# fixed tests (delayed args): (when using them add some comments, please)
resFixed1 =
let x = defaultOverridableDelayableArgs id ( x: { a = 7; c = x.fixed.b; });

View File

@ -136,7 +136,18 @@ checkConfigOutput "true" "$@" ./define-module-check.nix
# Check coerced value.
checkConfigOutput "\"42\"" config.value ./declare-coerced-value.nix
checkConfigOutput "\"24\"" config.value ./declare-coerced-value.nix ./define-value-string.nix
checkConfigError 'The option value .* in .* is not.*string or signed integer.*' config.value ./declare-coerced-value.nix ./define-value-list.nix
checkConfigError 'The option value .* in .* is not.*string or signed integer convertible to it' config.value ./declare-coerced-value.nix ./define-value-list.nix
# Check coerced value with unsound coercion
checkConfigOutput "12" config.value ./declare-coerced-value-unsound.nix
checkConfigError 'The option value .* in .* is not.*8 bit signed integer.* or string convertible to it' config.value ./declare-coerced-value-unsound.nix ./define-value-string-bigint.nix
checkConfigError 'unrecognised JSON value' config.value ./declare-coerced-value-unsound.nix ./define-value-string-arbitrary.nix
# Check loaOf with long list.
checkConfigOutput "1 2 3 4 5 6 7 8 9 10" config.result ./loaOf-with-long-list.nix
# Check loaOf with many merges of lists.
checkConfigOutput "1 2 3 4 5 6 7 8 9 10" config.result ./loaOf-with-many-list-merges.nix
cat <<EOF
====== module tests ======

View File

@ -0,0 +1,10 @@
{ lib, ... }:
{
options = {
value = lib.mkOption {
default = "12";
type = lib.types.coercedTo lib.types.str lib.toInt lib.types.ints.s8;
};
};
}

View File

@ -0,0 +1,3 @@
{
value = "foobar";
}

View File

@ -0,0 +1,3 @@
{
value = "1000";
}

View File

@ -0,0 +1,19 @@
{ config, lib, ... }:
{
options = {
loaOfInt = lib.mkOption {
type = lib.types.loaOf lib.types.int;
};
result = lib.mkOption {
type = lib.types.str;
};
};
config = {
loaOfInt = [ 1 2 3 4 5 6 7 8 9 10 ];
result = toString (lib.attrValues config.loaOfInt);
};
}

View File

@ -0,0 +1,19 @@
{ config, lib, ... }:
{
options = {
loaOfInt = lib.mkOption {
type = lib.types.loaOf lib.types.int;
};
result = lib.mkOption {
type = lib.types.str;
};
};
config = {
loaOfInt = lib.mkMerge (map lib.singleton [ 1 2 3 4 5 6 7 8 9 10 ]);
result = toString (lib.attrValues config.loaOfInt);
};
}

View File

@ -22,7 +22,7 @@ in with lib.systems.doubles; lib.runTests {
cygwin = assertTrue (mseteq cygwin [ "i686-cygwin" "x86_64-cygwin" ]);
darwin = assertTrue (mseteq darwin [ "x86_64-darwin" ]);
freebsd = assertTrue (mseteq freebsd [ "i686-freebsd" "x86_64-freebsd" ]);
gnu = assertTrue (mseteq gnu (linux /* ++ hurd ++ kfreebsd ++ ... */));
gnu = assertTrue (mseteq gnu (linux /* ++ kfreebsd ++ ... */));
illumos = assertTrue (mseteq illumos [ "x86_64-solaris" ]);
linux = assertTrue (mseteq linux [ "i686-linux" "x86_64-linux" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux" "mipsel-linux" ]);
netbsd = assertTrue (mseteq netbsd [ "i686-netbsd" "x86_64-netbsd" ]);

View File

@ -1,6 +1,9 @@
{ lib }:
rec {
## Simple (higher order) functions
/* The identity function
For when you need a function that does nothing.
@ -22,7 +25,7 @@ rec {
## Named versions corresponding to some builtin operators.
/* Concat two strings */
/* Concatenate two lists */
concat = x: y: x ++ y;
/* boolean or */
@ -31,6 +34,24 @@ rec {
/* boolean and */
and = x: y: x && y;
/* bitwise and */
bitAnd = builtins.bitAnd
or (import ./zip-int-bits.nix
(a: b: if a==1 && b==1 then 1 else 0));
/* bitwise or */
bitOr = builtins.bitOr
or (import ./zip-int-bits.nix
(a: b: if a==1 || b==1 then 1 else 0));
/* bitwise xor */
bitXor = builtins.bitXor
or (import ./zip-int-bits.nix
(a: b: if a!=b then 1 else 0));
# Note: the fallbacks are parenthesised because `builtins.f or g x`
# parses as `(builtins.f or g) x`, so the builtin must be selected first.
/* bitwise not */
bitNot = builtins.sub (-1);
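# A hedged sanity check of the operations above (values match the tests added
# to lib/tests/misc.nix in this same change):
#   bitAnd 3 10 => 2
#   bitOr  3 10 => 11
#   bitXor 3 10 => 9
#   bitNot 0    => -1   # builtins.sub (-1) x computes -1 - x, i.e. two's-complement NOT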
/* Convert a boolean to a string.
Note that toString on a bool returns "1" for true and "" for false.
*/
@ -44,29 +65,54 @@ rec {
*/
mergeAttrs = x: y: x // y;
# Flip the order of the arguments of a binary function.
/* Flip the order of the arguments of a binary function.
Example:
flip concat [1] [2]
=> [ 2 1 ]
*/
flip = f: a: b: f b a;
# Apply function if argument is non-null
/* Apply function if argument is non-null.
Example:
mapNullable (x: x+1) null
=> null
mapNullable (x: x+1) 22
=> 23
*/
mapNullable = f: a: if isNull a then a else f a;
# Pull in some builtins not included elsewhere.
inherit (builtins)
pathExists readFile isBool
isInt add sub lessThan
isInt isFloat add sub lessThan
seq deepSeq genericClosure;
inherit (lib.strings) fileContents;
# Return the Nixpkgs version number.
nixpkgsVersion =
let suffixFile = ../.version-suffix; in
fileContents ../.version
+ (if pathExists suffixFile then fileContents suffixFile else "pre-git");
## nixpkgs version strings
# The current full nixpkgs version number.
version = release + versionSuffix;
# The current nixpkgs version number as string.
release = lib.strings.fileContents ../.version;
# The current nixpkgs version suffix as string.
versionSuffix =
let suffixFile = ../.version-suffix;
in if pathExists suffixFile
then lib.strings.fileContents suffixFile
else "pre-git";
nixpkgsVersion = builtins.trace "`lib.nixpkgsVersion` is deprecated, use `lib.version` instead!" version;
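# A hedged worked example of the split above: if ../.version contains, say,
# "18.09" (illustrative value only) and no ../.version-suffix file exists,
# then release = "18.09", versionSuffix = "pre-git", and both version and the
# deprecated nixpkgsVersion evaluate to "18.09pre-git" (the latter additionally
# prints the deprecation trace).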
# Whether we're being called by nix-shell.
inNixShell = builtins.getEnv "IN_NIX_SHELL" != "";
## Integer operations
# Return minimum/maximum of two numbers.
min = x: y: if x < y then x else y;
max = x: y: if x > y then x else y;
@ -81,6 +127,9 @@ rec {
*/
mod = base: int: base - (int * (builtins.div base int));
## Comparisons
/* C-style comparisons
a < b, compare a b => -1
@ -110,17 +159,20 @@ rec {
cmp "fooa" "a" => -1
# while
compare "fooa" "a" => 1
*/
splitByAndCompare = p: yes: no: a: b:
if p a
then if p b then yes a b else -1
else if p b then 1 else no a b;
/* Reads a JSON file. */
importJSON = path:
builtins.fromJSON (builtins.readFile path);
## Warnings and asserts
/* See https://github.com/NixOS/nix/issues/749. Eventually we'd like these
to expand to Nix builtins that carry metadata so that Nix can filter out
the INFO messages without parsing the message string.
@ -136,28 +188,36 @@ rec {
warn = msg: builtins.trace "WARNING: ${msg}";
info = msg: builtins.trace "INFO: ${msg}";
# | Add metadata about expected function arguments to a function.
# The metadata should match the format given by
# builtins.functionArgs, i.e. a set from expected argument to a bool
# representing whether that argument has a default or not.
# setFunctionArgs : (a → b) → Map String Bool → (a → b)
#
# This function is necessary because you can't dynamically create a
# function of the { a, b ? foo, ... }: format, but some facilities
# like callPackage expect to be able to query expected arguments.
## Function annotations
/* Add metadata about expected function arguments to a function.
The metadata should match the format given by
builtins.functionArgs, i.e. a set from expected argument to a bool
representing whether that argument has a default or not.
setFunctionArgs : (a → b) → Map String Bool → (a → b)
This function is necessary because you can't dynamically create a
function of the { a, b ? foo, ... }: format, but some facilities
like callPackage expect to be able to query expected arguments.
*/
setFunctionArgs = f: args:
{ # TODO: Should we add call-time "type" checking like built in?
__functor = self: f;
__functionArgs = args;
};
# | Extract the expected function arguments from a function.
# This works both with nix-native { a, b ? foo, ... }: style
# functions and functions with args set with 'setFunctionArgs'. It
# has the same return type and semantics as builtins.functionArgs.
# setFunctionArgs : (a → b) → Map String Bool.
/* Extract the expected function arguments from a function.
This works both with nix-native { a, b ? foo, ... }: style
functions and functions with args set with 'setFunctionArgs'. It
has the same return type and semantics as builtins.functionArgs.
setFunctionArgs : (a → b) → Map String Bool.
*/
functionArgs = f: f.__functionArgs or (builtins.functionArgs f);
/* Check whether something is a function or something
annotated with function args.
*/
isFunction = f: builtins.isFunction f ||
(f ? __functor && isFunction (f.__functor f));
}

View File

@ -8,7 +8,7 @@ with lib.trivial;
with lib.strings;
let
inherit (lib.modules) mergeDefinitions filterOverrides;
inherit (lib.modules) mergeDefinitions;
outer_types =
rec {
isType = type: x: (x._type or "") == type;
@ -167,6 +167,13 @@ rec {
# s32 = sign 32 4294967296;
};
float = mkOptionType rec {
name = "float";
description = "floating point number";
check = isFloat;
merge = mergeOneOption;
};
str = mkOptionType {
name = "str";
description = "string";
@ -280,24 +287,34 @@ rec {
# List or attribute set of ...
loaOf = elemType:
let
convertIfList = defIdx: def:
convertAllLists = defs:
let
padWidth = stringLength (toString (length defs));
unnamedPrefix = i: "unnamed-" + fixedWidthNumber padWidth i + ".";
in
imap1 (i: convertIfList (unnamedPrefix i)) defs;
convertIfList = unnamedPrefix: def:
if isList def.value then
let
padWidth = stringLength (toString (length def.value));
unnamed = i: unnamedPrefix + fixedWidthNumber padWidth i;
in
{ inherit (def) file;
value = listToAttrs (
imap1 (elemIdx: elem:
{ name = elem.name or "unnamed-${toString defIdx}.${toString elemIdx}";
{ name = elem.name or (unnamed elemIdx);
value = elem;
}) def.value);
}
else
def;
listOnly = listOf elemType;
attrOnly = attrsOf elemType;
in mkOptionType rec {
name = "loaOf";
description = "list or attribute set of ${elemType.description}s";
check = x: isList x || isAttrs x;
merge = loc: defs: attrOnly.merge loc (imap1 convertIfList defs);
merge = loc: defs: attrOnly.merge loc (convertAllLists defs);
getSubOptions = prefix: elemType.getSubOptions (prefix ++ ["<name?>"]);
getSubModules = elemType.getSubModules;
substSubModules = m: loaOf (elemType.substSubModules m);
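A hedged worked example of the renaming above: with a single definition whose value is a ten-element list, convertAllLists pads the element index, so the generated attribute names are

  unnamed-1.01, unnamed-1.02, ..., unnamed-1.09, unnamed-1.10

and they sort in list order (this is what the new loaOf-with-long-list.nix test exercises). With the previous unpadded scheme, "unnamed-1.10" sorted between "unnamed-1.1" and "unnamed-1.2", so long lists were merged out of order.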
@ -361,7 +378,13 @@ rec {
# This is mandatory as some option declaration might use the
# "name" attribute given as argument of the submodule and use it
# as the default of option declarations.
args.name = "&lt;name&gt;";
#
# Using lookalike unicode single angle quotation marks because
# of the docbook transformation the options receive. In all uses
# &gt; and &lt; wouldn't be encoded correctly so the encoded values
# would be used, and use of `<` and `>` would break the XML document.
# It shouldn't cause an issue since this is cosmetic for the manual.
args.name = "‹name›";
}).options;
getSubModules = opts';
substSubModules = m: submodule m;
@ -419,16 +442,13 @@ rec {
assert coercedType.getSubModules == null;
mkOptionType rec {
name = "coercedTo";
description = "${finalType.description} or ${coercedType.description}";
check = x: finalType.check x || coercedType.check x;
description = "${finalType.description} or ${coercedType.description} convertible to it";
check = x: finalType.check x || (coercedType.check x && finalType.check (coerceFunc x));
merge = loc: defs:
let
coerceVal = val:
if finalType.check val then val
else let
coerced = coerceFunc val;
in assert finalType.check coerced; coerced;
else coerceFunc val;
in finalType.merge loc (map (def: def // { value = coerceVal def.value; }) defs);
getSubOptions = finalType.getSubOptions;
getSubModules = finalType.getSubModules;
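A minimal sketch of the stricter behaviour, using the type from the new declare-coerced-value-unsound.nix test (assuming the combinators are in scope as lib.types.*):

  let t = lib.types.coercedTo lib.types.str lib.toInt lib.types.ints.s8;
  in map t.check [ 12 "12" "1000" ]
  => [ true true false ]

"1000" is a valid string, but lib.toInt "1000" falls outside the 8-bit signed range, so the definition is now rejected at check time instead of being coerced to an out-of-range value.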

39
lib/zip-int-bits.nix Normal file
View File

@ -0,0 +1,39 @@
/* Helper function to implement a fallback for the bit operators
`bitAnd`, `bitOr` and `bitXor` on older Nix versions.
See ./trivial.nix
*/
f: x: y:
let
# (intToBits 6) -> [ 0 1 1 ]
intToBits = x:
if x == 0 || x == -1 then
[]
else
let
headbit = if (x / 2) * 2 != x then 1 else 0; # x & 1
tailbits = if x < 0 then ((x + 1) / 2) - 1 else x / 2; # x >> 1
in
[headbit] ++ (intToBits tailbits);
# (bitsToInt [ 0 1 1 ] 0) -> 6
# (bitsToInt [ 0 1 0 ] 1) -> -6
bitsToInt = l: signum:
if l == [] then
(if signum == 0 then 0 else -1)
else
(builtins.head l) + (2 * (bitsToInt (builtins.tail l) signum));
xsignum = if x < 0 then 1 else 0;
ysignum = if y < 0 then 1 else 0;
zipListsWith' = fst: snd:
if fst==[] && snd==[] then
[]
else if fst==[] then
[(f xsignum (builtins.head snd))] ++ (zipListsWith' [] (builtins.tail snd))
else if snd==[] then
[(f (builtins.head fst) ysignum )] ++ (zipListsWith' (builtins.tail fst) [] )
else
[(f (builtins.head fst) (builtins.head snd))] ++ (zipListsWith' (builtins.tail fst) (builtins.tail snd));
in
assert (builtins.isInt x) && (builtins.isInt y);
bitsToInt (zipListsWith' (intToBits x) (intToBits y)) (f xsignum ysignum)
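A hedged usage sketch of this fallback, matching the testBitAnd value added in lib/tests/misc.nix:

  nix-repl> import ./lib/zip-int-bits.nix (a: b: if a==1 && b==1 then 1 else 0) 3 10
  2

i.e. the helper zips the little-endian bit representations of both integers through the supplied per-bit function and converts the result back, handling negative numbers via the sign "bit" passed as signum.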

File diff suppressed because it is too large

View File

@ -6,13 +6,11 @@
$ copy-tarballs.pl --expr 'import <nixpkgs/maintainers/scripts/all-tarballs.nix>'
*/
removeAttrs (import ../../pkgs/top-level/release.nix
import ../../pkgs/top-level/release.nix
{ # Don't apply hydraJob to jobs, because then we can't get to the
# dependency graph.
scrubJobs = false;
# No need to evaluate on i686.
supportedSystems = [ "x86_64-linux" ];
})
[ # Remove jobs whose evaluation depends on a writable Nix store.
"tarball" "unstable" "darwin-tested"
]
limitedSupportedSystems = [];
}

View File

@ -1,5 +1,5 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python -p pythonFull pythonPackages.requests pythonPackages.pyquery pythonPackages.click
#!nix-shell -i python3 -p 'python3.withPackages(ps: with ps; [ requests pyquery click ])'
# To use, just execute this script with --help to display help.
@ -16,7 +16,7 @@ maintainers_json = subprocess.check_output([
'nix-instantiate', '-E', 'import ./maintainers/maintainer-list.nix {}', '--eval', '--json'
])
maintainers = json.loads(maintainers_json)
MAINTAINERS = {v: k for k, v in maintainers.iteritems()}
MAINTAINERS = {v: k for k, v in maintainers.items()}
def get_response_text(url):
@ -45,6 +45,17 @@ def get_maintainers(attr_name):
except:
return []
def print_build(table_row):
a = pq(table_row)('a')[1]
print("- [ ] [{}]({})".format(a.text, a.get('href')), flush=True)
maintainers = get_maintainers(a.text)
if maintainers:
print(" - maintainers: {}".format(", ".join(map(lambda u: '@' + u, maintainers))))
# TODO: print last three persons that touched this file
# TODO: pinpoint the diff that broke this build, or maybe it's transient or maybe it never worked?
sys.stdout.flush()
@click.command()
@click.option(
@ -73,23 +84,17 @@ def cli(jobset):
# TODO: aborted evaluations
# TODO: dependency failed without propagated builds
print('\nFailures:')
for tr in d('img[alt="Failed"]').parents('tr'):
a = pq(tr)('a')[1]
print("- [ ] [{}]({})".format(a.text, a.get('href')))
print_build(tr)
sys.stdout.flush()
maintainers = get_maintainers(a.text)
if maintainers:
print(" - maintainers: {}".format(", ".join(map(lambda u: '@' + u, maintainers))))
# TODO: print last three persons that touched this file
# TODO: pinpoint the diff that broke this build, or maybe it's transient or maybe it never worked?
sys.stdout.flush()
print('\nDependency failures:')
for tr in d('img[alt="Dependency failed"]').parents('tr'):
print_build(tr)
if __name__ == "__main__":
try:
cli()
except:
except Exception as e:
import pdb;pdb.post_mortem()

View File

@ -4,7 +4,7 @@ stdenv.mkDerivation {
name = "nix-generate-from-cpan-3";
buildInputs = with perlPackages; [
makeWrapper perl CPANMeta GetoptLongDescriptive CPANPLUS Readonly Log4Perl
makeWrapper perl CPANMeta GetoptLongDescriptive CPANPLUS Readonly LogLog4perl
];
phases = [ "installPhase" ];

View File

@ -1,4 +1,5 @@
#! /run/current-system/sw/bin/perl -w
#! /usr/bin/env nix-shell
#! nix-shell -i perl -p perl perlPackages.XMLSimple
use strict;
use List::Util qw(min);

View File

@ -262,7 +262,7 @@ def _update_package(path, target):
if new_version == version:
logging.info("Path {}: no update available for {}.".format(path, pname))
return False
elif new_version <= version:
elif Version(new_version) <= Version(version):
raise ValueError("downgrade for {}.".format(pname))
if not new_sha256:
raise ValueError("no file available for {}.".format(pname))

2
nixos/doc/manual/.gitignore vendored Normal file
View File

@ -0,0 +1,2 @@
generated
manual-combined.xml

29
nixos/doc/manual/Makefile Normal file
View File

@ -0,0 +1,29 @@
.PHONY: all
all: manual-combined.xml format
.PHONY: debug
debug: generated manual-combined.xml
manual-combined.xml: generated *.xml
rm -f ./manual-combined.xml
nix-shell --packages xmloscopy \
--run "xmloscopy --docbook5 ./manual.xml ./manual-combined.xml"
.PHONY: format
format:
find . -iname '*.xml' -type f -print0 | xargs -0 -I{} -n1 \
xmlformat --config-file "../xmlformat.conf" -i {}
.PHONY: fix-misc-xml
fix-misc-xml:
find . -iname '*.xml' -type f \
-exec ../varlistentry-fixer.rb {} ';'
.PHONY: clean
clean:
rm -f manual-combined.xml generated
generated: ./options-to-docbook.xsl
nix-build ../../release.nix \
--attr manualGeneratedSources.x86_64-linux \
--out-link ./generated

View File

@ -3,63 +3,88 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-boot-problems">
<title>Boot Problems</title>
<title>Boot Problems</title>
<para>If NixOS fails to boot, there are a number of kernel command
line parameters that may help you to identify or fix the issue. You
can add these parameters in the GRUB boot menu by pressing “e” to
modify the selected boot entry and editing the line starting with
<literal>linux</literal>. The following are some useful kernel command
line parameters that are recognised by the NixOS boot scripts or by
systemd:
<variablelist>
<varlistentry><term><literal>boot.shell_on_fail</literal></term>
<listitem><para>Start a root shell if something goes wrong in
stage 1 of the boot process (the initial ramdisk). This is
disabled by default because there is no authentication for the
root shell.</para></listitem>
<para>
If NixOS fails to boot, there are a number of kernel command line parameters
that may help you to identify or fix the issue. You can add these parameters
in the GRUB boot menu by pressing “e” to modify the selected boot entry
and editing the line starting with <literal>linux</literal>. The following
are some useful kernel command line parameters that are recognised by the
NixOS boot scripts or by systemd:
<variablelist>
<varlistentry>
<term>
<literal>boot.shell_on_fail</literal>
</term>
<listitem>
<para>
Start a root shell if something goes wrong in stage 1 of the boot process
(the initial ramdisk). This is disabled by default because there is no
authentication for the root shell.
</para>
</listitem>
</varlistentry>
<varlistentry><term><literal>boot.debug1</literal></term>
<listitem><para>Start an interactive shell in stage 1 before
anything useful has been done. That is, no modules have been
loaded and no file systems have been mounted, except for
<filename>/proc</filename> and
<filename>/sys</filename>.</para></listitem>
<varlistentry>
<term>
<literal>boot.debug1</literal>
</term>
<listitem>
<para>
Start an interactive shell in stage 1 before anything useful has been
done. That is, no modules have been loaded and no file systems have been
mounted, except for <filename>/proc</filename> and
<filename>/sys</filename>.
</para>
</listitem>
</varlistentry>
<varlistentry><term><literal>boot.trace</literal></term>
<listitem><para>Print every shell command executed by the stage 1
and 2 boot scripts.</para></listitem>
<varlistentry>
<term>
<literal>boot.trace</literal>
</term>
<listitem>
<para>
Print every shell command executed by the stage 1 and 2 boot scripts.
</para>
</listitem>
</varlistentry>
<varlistentry><term><literal>single</literal></term>
<listitem><para>Boot into rescue mode (a.k.a. single user mode).
This will cause systemd to start nothing but the unit
<literal>rescue.target</literal>, which runs
<command>sulogin</command> to prompt for the root password and
start a root login shell. Exiting the shell causes the system to
continue with the normal boot process.</para></listitem>
<varlistentry>
<term>
<literal>single</literal>
</term>
<listitem>
<para>
Boot into rescue mode (a.k.a. single user mode). This will cause systemd
to start nothing but the unit <literal>rescue.target</literal>, which
runs <command>sulogin</command> to prompt for the root password and start
a root login shell. Exiting the shell causes the system to continue with
the normal boot process.
</para>
</listitem>
</varlistentry>
<varlistentry><term><literal>systemd.log_level=debug systemd.log_target=console</literal></term>
<listitem><para>Make systemd very verbose and send log messages to
the console instead of the journal.</para></listitem>
<varlistentry>
<term>
<literal>systemd.log_level=debug systemd.log_target=console</literal>
</term>
<listitem>
<para>
Make systemd very verbose and send log messages to the console instead of
the journal.
</para>
</listitem>
</varlistentry>
</variablelist>
For more parameters recognised by systemd, see <citerefentry>
<refentrytitle>systemd</refentrytitle>
<manvolnum>1</manvolnum></citerefentry>.
</para>
</variablelist>
For more parameters recognised by systemd, see
<citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>.</para>
<para>If no login prompts or X11 login screens appear (e.g. due to
hanging dependencies), you can press Alt+ArrowUp. If youre lucky,
this will start rescue mode (described above). (Also note that since
most units have a 90-second timeout before systemd gives up on them,
the <command>agetty</command> login prompts should appear eventually
unless something is very wrong.)</para>
<para>
If no login prompts or X11 login screens appear (e.g. due to hanging
dependencies), you can press Alt+ArrowUp. If youre lucky, this will start
rescue mode (described above). (Also note that since most units have a
90-second timeout before systemd gives up on them, the
<command>agetty</command> login prompts should appear eventually unless
something is very wrong.)
</para>
</section>

View File

@ -3,60 +3,51 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-nix-gc">
<title>Cleaning the Nix Store</title>
<para>Nix has a purely functional model, meaning that packages are
never upgraded in place. Instead new versions of packages end up in a
different location in the Nix store (<filename>/nix/store</filename>).
You should periodically run Nixs <emphasis>garbage
collector</emphasis> to remove old, unreferenced packages. This is
easy:
<title>Cleaning the Nix Store</title>
<para>
Nix has a purely functional model, meaning that packages are never upgraded
in place. Instead new versions of packages end up in a different location in
the Nix store (<filename>/nix/store</filename>). You should periodically run
Nixs <emphasis>garbage collector</emphasis> to remove old, unreferenced
packages. This is easy:
<screen>
$ nix-collect-garbage
</screen>
Alternatively, you can use a systemd unit that does the same in the
background:
Alternatively, you can use a systemd unit that does the same in the
background:
<screen>
# systemctl start nix-gc.service
</screen>
You can tell NixOS in <filename>configuration.nix</filename> to run
this unit automatically at certain points in time, for instance, every
night at 03:15:
You can tell NixOS in <filename>configuration.nix</filename> to run this unit
automatically at certain points in time, for instance, every night at 03:15:
<programlisting>
nix.gc.automatic = true;
nix.gc.dates = "03:15";
<xref linkend="opt-nix.gc.automatic"/> = true;
<xref linkend="opt-nix.gc.dates"/> = "03:15";
</programlisting>
</para>
<para>The commands above do not remove garbage collector roots, such
as old system configurations. Thus they do not remove the ability to
roll back to previous configurations. The following command deletes
old roots, removing the ability to roll back to them:
</para>
<para>
The commands above do not remove garbage collector roots, such as old system
configurations. Thus they do not remove the ability to roll back to previous
configurations. The following command deletes old roots, removing the ability
to roll back to them:
<screen>
$ nix-collect-garbage -d
</screen>
You can also do this for specific profiles, e.g.
You can also do this for specific profiles, e.g.
<screen>
$ nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
</screen>
Note that NixOS system configurations are stored in the profile
<filename>/nix/var/nix/profiles/system</filename>.</para>
<para>Another way to reclaim disk space (often as much as 40% of the
size of the Nix store) is to run Nixs store optimiser, which seeks
out identical files in the store and replaces them with hard links to
a single copy.
Note that NixOS system configurations are stored in the profile
<filename>/nix/var/nix/profiles/system</filename>.
</para>
<para>
Another way to reclaim disk space (often as much as 40% of the size of the
Nix store) is to run Nixs store optimiser, which seeks out identical files
in the store and replaces them with hard links to a single copy.
<screen>
$ nix-store --optimise
</screen>
Since this command needs to read the entire Nix store, it can take
quite a while to finish.</para>
Since this command needs to read the entire Nix store, it can take quite a
while to finish.
</para>
</chapter>

View File

@ -3,15 +3,13 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-container-networking">
<title>Container Networking</title>
<title>Container Networking</title>
<para>When you create a container using <literal>nixos-container
create</literal>, it gets it own private IPv4 address in the range
<literal>10.233.0.0/16</literal>. You can get the containers IPv4
address as follows:
<para>
When you create a container using <literal>nixos-container create</literal>,
it gets its own private IPv4 address in the range
<literal>10.233.0.0/16</literal>. You can get the containers IPv4 address
as follows:
<screen>
# nixos-container show-ip foo
10.233.4.2
@ -19,40 +17,39 @@ address as follows:
$ ping -c1 10.233.4.2
64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms
</screen>
</para>
</para>
<para>Networking is implemented using a pair of virtual Ethernet
devices. The network interface in the container is called
<literal>eth0</literal>, while the matching interface in the host is
called <literal>ve-<replaceable>container-name</replaceable></literal>
(e.g., <literal>ve-foo</literal>). The container has its own network
namespace and the <literal>CAP_NET_ADMIN</literal> capability, so it
can perform arbitrary network configuration such as setting up
firewall rules, without affecting or having access to the hosts
network.</para>
<para>By default, containers cannot talk to the outside network. If
you want that, you should set up Network Address Translation (NAT)
rules on the host to rewrite container traffic to use your external
IP address. This can be accomplished using the following configuration
on the host:
<para>
Networking is implemented using a pair of virtual Ethernet devices. The
network interface in the container is called <literal>eth0</literal>, while
the matching interface in the host is called
<literal>ve-<replaceable>container-name</replaceable></literal> (e.g.,
<literal>ve-foo</literal>). The container has its own network namespace and
the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary
network configuration such as setting up firewall rules, without affecting or
having access to the hosts network.
</para>
<para>
By default, containers cannot talk to the outside network. If you want that,
you should set up Network Address Translation (NAT) rules on the host to
rewrite container traffic to use your external IP address. This can be
accomplished using the following configuration on the host:
<programlisting>
networking.nat.enable = true;
networking.nat.internalInterfaces = ["ve-+"];
networking.nat.externalInterface = "eth0";
<xref linkend="opt-networking.nat.enable"/> = true;
<xref linkend="opt-networking.nat.internalInterfaces"/> = ["ve-+"];
<xref linkend="opt-networking.nat.externalInterface"/> = "eth0";
</programlisting>
where <literal>eth0</literal> should be replaced with the desired
external interface. Note that <literal>ve-+</literal> is a wildcard
that matches all container interfaces.</para>
<para>If you are using Network Manager, you need to explicitly prevent
it from managing container interfaces:
where <literal>eth0</literal> should be replaced with the desired external
interface. Note that <literal>ve-+</literal> is a wildcard that matches all
container interfaces.
</para>
<para>
If you are using Network Manager, you need to explicitly prevent it from
managing container interfaces:
<programlisting>
networking.networkmanager.unmanaged = [ "interface-name:ve-*" ];
</programlisting>
</para>
</para>
</section>

View File

@ -3,32 +3,32 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="ch-containers">
<title>Container Management</title>
<para>NixOS allows you to easily run other NixOS instances as
<emphasis>containers</emphasis>. Containers are a light-weight
approach to virtualisation that runs software in the container at the
same speed as in the host system. NixOS containers share the Nix store
of the host, making container creation very efficient.</para>
<warning><para>Currently, NixOS containers are not perfectly isolated
from the host system. This means that a user with root access to the
container can do things that affect the host. So you should not give
container root access to untrusted users.</para></warning>
<para>NixOS containers can be created in two ways: imperatively, using
the command <command>nixos-container</command>, and declaratively, by
specifying them in your <filename>configuration.nix</filename>. The
declarative approach implies that containers get upgraded along with
your host system when you run <command>nixos-rebuild</command>, which
is often not what you want. By contrast, in the imperative approach,
containers are configured and updated independently from the host
system.</para>
<xi:include href="imperative-containers.xml" />
<xi:include href="declarative-containers.xml" />
<xi:include href="container-networking.xml" />
<title>Container Management</title>
<para>
NixOS allows you to easily run other NixOS instances as
<emphasis>containers</emphasis>. Containers are a light-weight approach to
virtualisation that runs software in the container at the same speed as in
the host system. NixOS containers share the Nix store of the host, making
container creation very efficient.
</para>
<warning>
<para>
Currently, NixOS containers are not perfectly isolated from the host system.
This means that a user with root access to the container can do things that
affect the host. So you should not give container root access to untrusted
users.
</para>
</warning>
<para>
NixOS containers can be created in two ways: imperatively, using the command
<command>nixos-container</command>, and declaratively, by specifying them in
your <filename>configuration.nix</filename>. The declarative approach implies
that containers get upgraded along with your host system when you run
<command>nixos-rebuild</command>, which is often not what you want. By
contrast, in the imperative approach, containers are configured and updated
independently from the host system.
</para>
<xi:include href="imperative-containers.xml" />
<xi:include href="declarative-containers.xml" />
<xi:include href="container-networking.xml" />
</chapter>

View File

@ -3,20 +3,18 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-cgroups">
<title>Control Groups</title>
<para>To keep track of the processes in a running system, systemd uses
<emphasis>control groups</emphasis> (cgroups). A control group is a
set of processes used to allocate resources such as CPU, memory or I/O
bandwidth. There can be multiple control group hierarchies, allowing
each kind of resource to be managed independently.</para>
<para>The command <command>systemd-cgls</command> lists all control
groups in the <literal>systemd</literal> hierarchy, which is what
systemd uses to keep track of the processes belonging to each service
or user session:
<title>Control Groups</title>
<para>
To keep track of the processes in a running system, systemd uses
<emphasis>control groups</emphasis> (cgroups). A control group is a set of
processes used to allocate resources such as CPU, memory or I/O bandwidth.
There can be multiple control group hierarchies, allowing each kind of
resource to be managed independently.
</para>
<para>
The command <command>systemd-cgls</command> lists all control groups in the
<literal>systemd</literal> hierarchy, which is what systemd uses to keep
track of the processes belonging to each service or user session:
<screen>
$ systemd-cgls
├─user
@ -34,40 +32,34 @@ $ systemd-cgls
│ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
└─ <replaceable>...</replaceable>
</screen>
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in
the CPU hierarchy, which allows per-cgroup CPU scheduling priorities.
By default, every systemd service gets its own CPU cgroup, while all
user sessions are in the top-level CPU cgroup. This ensures, for
instance, that a thousand run-away processes in the
<literal>httpd.service</literal> cgroup cannot starve the CPU for one
process in the <literal>postgresql.service</literal> cgroup. (By
contrast, it they were in the same cgroup, then the PostgreSQL process
would get 1/1001 of the cgroups CPU time.) You can limit a services
CPU share in <filename>configuration.nix</filename>:
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU
hierarchy, which allows per-cgroup CPU scheduling priorities. By default,
every systemd service gets its own CPU cgroup, while all user sessions are in
the top-level CPU cgroup. This ensures, for instance, that a thousand
run-away processes in the <literal>httpd.service</literal> cgroup cannot
starve the CPU for one process in the <literal>postgresql.service</literal>
cgroup. (By contrast, it they were in the same cgroup, then the PostgreSQL
process would get 1/1001 of the cgroups CPU time.) You can limit a
services CPU share in <filename>configuration.nix</filename>:
<programlisting>
systemd.services.httpd.serviceConfig.CPUShares = 512;
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.CPUShares = 512;
</programlisting>
By default, every cgroup has 1024 CPU shares, so this will halve the
CPU allocation of the <literal>httpd.service</literal> cgroup.</para>
<para>There also is a <literal>memory</literal> hierarchy that
controls memory allocation limits; by default, all processes are in
the top-level cgroup, so any service or session can exhaust all
available memory. Per-cgroup memory limits can be specified in
<filename>configuration.nix</filename>; for instance, to limit
<literal>httpd.service</literal> to 512 MiB of RAM (excluding swap):
By default, every cgroup has 1024 CPU shares, so this will halve the CPU
allocation of the <literal>httpd.service</literal> cgroup.
</para>
<para>
There also is a <literal>memory</literal> hierarchy that controls memory
allocation limits; by default, all processes are in the top-level cgroup, so
any service or session can exhaust all available memory. Per-cgroup memory
limits can be specified in <filename>configuration.nix</filename>; for
instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM
(excluding swap):
<programlisting>
systemd.services.httpd.serviceConfig.MemoryLimit = "512M";
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.MemoryLimit = "512M";
</programlisting>
</para>
<para>The command <command>systemd-cgtop</command> shows a
continuously updated list of all cgroups with their CPU and memory
usage.</para>
</para>
<para>
The command <command>systemd-cgtop</command> shows a continuously updated
list of all cgroups with their CPU and memory usage.
</para>
</chapter>

View File

@ -3,58 +3,58 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-declarative-containers">
<title>Declarative Container Specification</title>
<title>Declarative Container Specification</title>
<para>You can also specify containers and their configuration in the
hosts <filename>configuration.nix</filename>. For example, the
following specifies that there shall be a container named
<literal>database</literal> running PostgreSQL:
<para>
You can also specify containers and their configuration in the hosts
<filename>configuration.nix</filename>. For example, the following specifies
that there shall be a container named <literal>database</literal> running
PostgreSQL:
<programlisting>
containers.database =
{ config =
{ config, pkgs, ... }:
{ services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql96;
{ <xref linkend="opt-services.postgresql.enable"/> = true;
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql96;
};
};
</programlisting>
If you run <literal>nixos-rebuild switch</literal>, the container will be
built. If the container was already running, it will be updated in place,
without rebooting. The container can be configured to start automatically by
setting <literal>containers.database.autoStart = true</literal> in its
configuration.
</para>
If you run <literal>nixos-rebuild switch</literal>, the container will
be built. If the container was already running, it will be
updated in place, without rebooting. The container can be configured to
start automatically by setting <literal>containers.database.autoStart = true</literal>
in its configuration.</para>
<para>By default, declarative containers share the network namespace
of the host, meaning that they can listen on (privileged)
ports. However, they cannot change the network configuration. You can
give a container its own network as follows:
<para>
By default, declarative containers share the network namespace of the host,
meaning that they can listen on (privileged) ports. However, they cannot
change the network configuration. You can give a container its own network as
follows:
<programlisting>
containers.database =
{ privateNetwork = true;
hostAddress = "192.168.100.10";
localAddress = "192.168.100.11";
};
containers.database = {
<link linkend="opt-containers._name_.privateNetwork">privateNetwork</link> = true;
<link linkend="opt-containers._name_.hostAddress">hostAddress</link> = "192.168.100.10";
<link linkend="opt-containers._name_.localAddress">localAddress</link> = "192.168.100.11";
};
</programlisting>
This gives the container a private virtual Ethernet interface with IP address
<literal>192.168.100.11</literal>, which is hooked up to a virtual Ethernet
interface on the host with IP address <literal>192.168.100.10</literal>. (See
the next section for details on container networking.)
</para>
This gives the container a private virtual Ethernet interface with IP
address <literal>192.168.100.11</literal>, which is hooked up to a
virtual Ethernet interface on the host with IP address
<literal>192.168.100.10</literal>. (See the next section for details
on container networking.)</para>
<para>To disable the container, just remove it from
<filename>configuration.nix</filename> and run <literal>nixos-rebuild
switch</literal>. Note that this will not delete the root directory of
the container in <literal>/var/lib/containers</literal>. Containers can be
destroyed using the imperative method: <literal>nixos-container destroy
foo</literal>.</para>
<para>Declarative containers can be started and stopped using the
corresponding systemd service, e.g. <literal>systemctl start
container@database</literal>.</para>
<para>
To disable the container, just remove it from
<filename>configuration.nix</filename> and run <literal>nixos-rebuild
switch</literal>. Note that this will not delete the root directory of the
container in <literal>/var/lib/containers</literal>. Containers can be
destroyed using the imperative method: <literal>nixos-container destroy
foo</literal>.
</para>
<para>
Declarative containers can be started and stopped using the corresponding
systemd service, e.g. <literal>systemctl start container@database</literal>.
</para>
</section>

View File

@ -3,131 +3,114 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-imperative-containers">
<title>Imperative Container Management</title>
<title>Imperative Container Management</title>
<para>Well cover imperative container management using
<command>nixos-container</command> first.
Be aware that container management is currently only possible
as <literal>root</literal>.</para>
<para>You create a container with
identifier <literal>foo</literal> as follows:
<para>
Well cover imperative container management using
<command>nixos-container</command> first. Be aware that container management
is currently only possible as <literal>root</literal>.
</para>
<para>
You create a container with identifier <literal>foo</literal> as follows:
<screen>
# nixos-container create foo
</screen>
This creates the containers root directory in
<filename>/var/lib/containers/foo</filename> and a small configuration
file in <filename>/etc/containers/foo.conf</filename>. It also builds
the containers initial system configuration and stores it in
<filename>/nix/var/nix/profiles/per-container/foo/system</filename>. You
can modify the initial configuration of the container on the command
line. For instance, to create a container that has
<command>sshd</command> running, with the given public key for
<literal>root</literal>:
This creates the containers root directory in
<filename>/var/lib/containers/foo</filename> and a small configuration file
in <filename>/etc/containers/foo.conf</filename>. It also builds the
containers initial system configuration and stores it in
<filename>/nix/var/nix/profiles/per-container/foo/system</filename>. You can
modify the initial configuration of the container on the command line. For
instance, to create a container that has <command>sshd</command> running,
with the given public key for <literal>root</literal>:
<screen>
# nixos-container create foo --config '
<xref linkend="opt-services.openssh.enable"/> = true;
<link linkend="opt-users.users._name__.openssh.authorizedKeys.keys">users.users.root.openssh.authorizedKeys.keys</link> = ["ssh-dss AAAAB3N…"];
'
</screen>
</para>
<para>
Creating a container does not start it. To start the container, run:
<screen>
# nixos-container start foo
</screen>
This command will return as soon as the container has booted and has reached
<literal>multi-user.target</literal>. On the host, the container runs within
a systemd unit called
<literal>container@<replaceable>container-name</replaceable>.service</literal>.
Thus, if something went wrong, you can get status info using
<command>systemctl</command>:
<screen>
# systemctl status container@foo
</screen>
</para>
<para>
If the container has started successfully, you can log in as root using the
<command>root-login</command> operation:
<screen>
# nixos-container root-login foo
[root@foo:~]#
</screen>
Note that only root on the host can do this (since there is no
authentication). You can also get a regular login prompt using the
<command>login</command> operation, which is available to all users on the
host:
<screen>
# nixos-container login foo
foo login: alice
Password: ***
</screen>
With <command>nixos-container run</command>, you can execute arbitrary
commands in the container:
<screen>
# nixos-container run foo -- uname -a
Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
</screen>
</para>
<para>
There are several ways to change the configuration of the container. First,
on the host, you can edit
<literal>/var/lib/containers/<replaceable>name</replaceable>/etc/nixos/configuration.nix</literal>,
and run
<screen>
# nixos-container update foo
</screen>
This will build and activate the new configuration. You can also specify a
new configuration on the command line:
<screen>
# nixos-container update foo --config '
<xref linkend="opt-services.httpd.enable"/> = true;
<xref linkend="opt-services.httpd.adminAddr"/> = "foo@example.org";
<xref linkend="opt-networking.firewall.allowedTCPPorts"/> = [ 80 ];
'
# curl http://$(nixos-container show-ip foo)/
&lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">…
</screen>
However, note that this will overwrite the container’s
<filename>/etc/nixos/configuration.nix</filename>.
</para>
<para>
Alternatively, you can change the configuration from within the container
itself by running <command>nixos-rebuild switch</command> inside the
container. Note that the container by default does not have a copy of the
NixOS channel, so you should run <command>nix-channel --update</command>
first.
</para>
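  <para>
   Put together, an update from inside the <literal>foo</literal> container
   created above might look like this (a sketch only; the exact prompts and
   output will differ):
<screen>
# nixos-container root-login foo
[root@foo:~]# nix-channel --update
[root@foo:~]# nixos-rebuild switch
</screen>
  </para>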
<para>
Containers can be stopped and started using <literal>nixos-container
stop</literal> and <literal>nixos-container start</literal>, respectively, or
by using <command>systemctl</command> on the container’s service unit. To
destroy a container, including its file system, do
<screen>
# nixos-container destroy foo
</screen>
</para>
</section>
@ -3,26 +3,20 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-logging">
<title>Logging</title>
<para>
System-wide logging is provided by systemd’s <emphasis>journal</emphasis>,
which subsumes traditional logging daemons such as syslogd and klogd. Log
entries are kept in binary files in <filename>/var/log/journal/</filename>.
The command <literal>journalctl</literal> allows you to see the contents of
the journal. For example,
<screen>
$ journalctl -b
</screen>
shows all journal entries since the last reboot. (The output of
<command>journalctl</command> is piped into <command>less</command> by
default.) You can use various options and match operators to restrict output
to messages of interest. For instance, to get all messages from PostgreSQL:
<screen>
$ journalctl -u postgresql.service
-- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. --
@ -32,21 +26,18 @@ Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG: database system is shut down
Jan 07 15:45:10 hagbard postgres[2532]: [1-1] LOG: database system was shut down at 2013-01-07 15:44:14 CET
Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG: database system is ready to accept connections
</screen>
Or to get all messages since the last reboot that have at least a
“critical” severity level:
<screen>
$ journalctl -b -p crit
Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice]
Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1)
</screen>
</para>
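  <para>
   The journal can also be followed as new entries arrive, which is handy when
   debugging a service; the <option>-f</option> flag works like <command>tail
   -f</command>. For example:
<screen>
$ journalctl -f -u postgresql.service
</screen>
  </para>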
<para>
The system journal is readable by root and by users in the
<literal>wheel</literal> and <literal>systemd-journal</literal> groups. All
users have a private journal that can be read using
<command>journalctl</command>.
</para>
</chapter>
@ -3,16 +3,14 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-maintenance-mode">
<title>Maintenance Mode</title>
<para>
You can enter rescue mode by running:
<screen>
# systemctl rescue</screen>
This will eventually give you a single-user root shell. Systemd will stop
(almost) all system services. To get out of maintenance mode, just exit from
the rescue shell.
</para>
</section>
@ -3,31 +3,25 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-nix-network-issues">
<title>Network Problems</title>
<para>
Nix uses a so-called <emphasis>binary cache</emphasis> to avoid building a
package from source when it can instead be downloaded pre-built. That is,
whenever a command like <command>nixos-rebuild</command> needs a path in the
Nix store, Nix will try to download that path from the Internet rather than
build it from source. The default binary cache is
<uri>https://cache.nixos.org/</uri>. If this cache is unreachable, Nix
operations may take a long time due to HTTP connection timeouts. You can
disable the use of the binary cache by adding <option>--option
use-binary-caches false</option>, e.g.
<screen>
# nixos-rebuild switch --option use-binary-caches false
</screen>
If you have an alternative binary cache at your disposal, you can use it
instead:
<screen>
# nixos-rebuild switch --option binary-caches http://my-cache.example.org/
</screen>
</para>
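  <para>
   If the alternative cache should be used for every operation, it can also be
   set declaratively in <filename>configuration.nix</filename> (a sketch,
   assuming the <option>nix.binaryCaches</option> module option):
<programlisting>
nix.binaryCaches = [ "http://my-cache.example.org/" ];
</programlisting>
  </para>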
</section>
@ -3,42 +3,33 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-rebooting">
<title>Rebooting and Shutting Down</title>
<para>
The system can be shut down (and automatically powered off) by doing:
<screen>
# shutdown
</screen>
This is equivalent to running <command>systemctl poweroff</command>.
</para>
<para>
To reboot the system, run
<screen>
# reboot
</screen>
which is equivalent to <command>systemctl reboot</command>. Alternatively,
you can quickly reboot the system using <literal>kexec</literal>, which
bypasses the BIOS by directly loading the new kernel into memory:
<screen>
# systemctl kexec
</screen>
</para>
<para>
The machine can be suspended to RAM (if supported) using <command>systemctl
suspend</command>, and suspended to disk using <command>systemctl
hibernate</command>.
</para>
<para>
These commands can be run by any user who is logged in locally, i.e. on a
virtual console or in X11; otherwise, the user is asked for authentication.
</para>
</chapter>
@ -3,46 +3,39 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-rollback">
<title>Rolling Back Configuration Changes</title>
<para>
After running <command>nixos-rebuild</command> to switch to a new
configuration, you may find that the new configuration doesn’t work very
well. In that case, there are several ways to return to a previous
configuration.
</para>
<para>
First, the GRUB boot manager allows you to boot into any previous
configuration that hasn’t been garbage-collected. These configurations can
be found under the GRUB submenu “NixOS - All configurations”. This is
especially useful if the new configuration fails to boot. After the system
has booted, you can make the selected configuration the default for
subsequent boots:
<screen>
# /run/current-system/bin/switch-to-configuration boot</screen>
</para>
<para>
Second, you can switch to the previous configuration in a running system:
<screen>
# nixos-rebuild switch --rollback</screen>
This is equivalent to running:
<screen>
# /nix/var/nix/profiles/system-<replaceable>N</replaceable>-link/bin/switch-to-configuration switch</screen>
where <replaceable>N</replaceable> is the number of the NixOS system
configuration. To get a list of the available configurations, do:
<screen>
$ ls -l /nix/var/nix/profiles/system-*-link
<replaceable>...</replaceable>
lrwxrwxrwx 1 root root 78 Aug 12 13:54 /nix/var/nix/profiles/system-268-link -> /nix/store/202b...-nixos-13.07pre4932_5a676e4-4be1055
</screen>
</para>
</section>
@ -3,22 +3,19 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="ch-running">
<title>Administration</title>
<partintro>
<para>
This chapter describes various aspects of managing a running NixOS system,
such as how to use the <command>systemd</command> service manager.
</para>
</partintro>
<xi:include href="service-mgmt.xml" />
<xi:include href="rebooting.xml" />
<xi:include href="user-sessions.xml" />
<xi:include href="control-groups.xml" />
<xi:include href="logging.xml" />
<xi:include href="cleaning-store.xml" />
<xi:include href="containers.xml" />
<xi:include href="troubleshooting.xml" />
</part>
@ -3,26 +3,23 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-systemctl">
<title>Service Management</title>
<para>
In NixOS, all system services are started and monitored using the systemd
program. Systemd is the “init” process of the system (i.e. PID 1), the
parent of all other processes. It manages a set of so-called “units”,
which can be things like system services (programs), but also mount points,
swap files, devices, targets (groups of units) and more. Units can have
complex dependencies; for instance, one unit can require that another unit
must be successfully started before the first unit can be started. When the
system boots, it starts a unit named <literal>default.target</literal>; the
dependencies of this unit cause all system services to be started, file
systems to be mounted, swap files to be activated, and so on.
</para>
<para>
The command <command>systemctl</command> is the main way to interact with
<command>systemd</command>. Without any arguments, it shows the status of
active units:
<screen>
$ systemctl
-.mount loaded active mounted /
@ -31,12 +28,10 @@ sshd.service loaded active running SSH Daemon
graphical.target loaded active active Graphical Interface
<replaceable>...</replaceable>
</screen>
</para>
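  <para>
   <command>systemctl</command> can also show how the unit dependencies
   mentioned above fit together for a given unit (a small aside; any unit name
   can be substituted for <literal>multi-user.target</literal>):
<screen>
$ systemctl list-dependencies multi-user.target
</screen>
  </para>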
<para>
You can ask for detailed status information about a unit, for instance, the
PostgreSQL database service:
<screen>
$ systemctl status postgresql.service
postgresql.service - PostgreSQL Server
@ -56,28 +51,22 @@ Jan 07 15:55:57 hagbard postgres[2390]: [1-1] LOG: database system is ready to
Jan 07 15:55:57 hagbard postgres[2420]: [1-1] LOG: autovacuum launcher started
Jan 07 15:55:57 hagbard systemd[1]: Started PostgreSQL Server.
</screen>
Note that this shows the status of the unit (active and running), all the
processes belonging to the service, as well as the most recent log messages
from the service.
</para>
<para>
Units can be stopped, started or restarted:
<screen>
# systemctl stop postgresql.service
# systemctl start postgresql.service
# systemctl restart postgresql.service
</screen>
These operations are synchronous: they wait until the service has finished
starting or stopping (or has failed). Starting a unit will cause the
dependencies of that unit to be started as well (if necessary).
</para>
<!-- - cgroups: each service and user session is a cgroup
- cgroup resource management -->
</chapter>
@ -3,35 +3,34 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-nix-store-corruption">
<title>Nix Store Corruption</title>
<para>
After a system crash, it’s possible for files in the Nix store to become
corrupted. (For instance, the Ext4 file system has the tendency to replace
un-synced files with zero bytes.) NixOS tries hard to prevent this from
happening: it performs a <command>sync</command> before switching to a new
configuration, and Nix’s database is fully transactional. If corruption
still occurs, you may be able to fix it automatically.
</para>
<para>
If the corruption is in a path in the closure of the NixOS system
configuration, you can fix it by doing
<screen>
# nixos-rebuild switch --repair
</screen>
This will cause Nix to check every path in the closure, and if its
cryptographic hash differs from the hash recorded in Nix’s database, the
path is rebuilt or redownloaded.
</para>
<para>
You can also scan the entire Nix store for corrupt paths:
<screen>
# nix-store --verify --check-contents --repair
</screen>
Any corrupt paths will be redownloaded if they’re available in a binary
cache; otherwise, they cannot be repaired.
</para>
</section>
@ -3,16 +3,14 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="ch-troubleshooting">
<title>Troubleshooting</title>
<para>
This chapter describes solutions to common problems you might encounter when
you manage your NixOS system.
</para>
<xi:include href="boot-problems.xml" />
<xi:include href="maintenance-mode.xml" />
<xi:include href="rollback.xml" />
<xi:include href="store-corruption.xml" />
<xi:include href="network-problems.xml" />
</chapter>
@ -3,14 +3,12 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-user-sessions">
<title>User Sessions</title>
<para>
Systemd keeps track of all users who are logged into the system (e.g. on a
virtual console or remotely via SSH). The command <command>loginctl</command>
allows querying and manipulating user sessions. For instance, to list all
user sessions:
<screen>
$ loginctl
SESSION UID USER SEAT
@ -18,12 +16,10 @@ $ loginctl
c3 0 root seat0
c4 500 alice
</screen>
This shows that two users are logged in locally, while another is logged in
remotely. (“Seats” are essentially the combinations of displays and input
devices attached to the system; usually, there is only one seat.) To get
information about a session:
<screen>
$ loginctl session-status c3
c3 - root (0)
@ -38,16 +34,12 @@ c3 - root (0)
├─10339 -bash
└─10355 w3m nixos.org
</screen>
This shows that the user is logged in on virtual console 3. It also lists the
processes belonging to this session. Since systemd keeps track of this, you
can terminate a session in a way that ensures that all the session’s
processes are gone:
<screen>
# loginctl terminate-session c3
</screen>
</para>
</chapter>
@ -3,15 +3,14 @@
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-module-abstractions">
<title>Abstractions</title>
<para>
If you find yourself repeating yourself over and over, it’s time to
abstract. Take, for instance, this Apache HTTP Server configuration:
<programlisting>
{
<xref linkend="opt-services.httpd.virtualHosts"/> =
[ { hostName = "example.org";
documentRoot = "/webroot";
adminAddr = "alice@example.org";
@ -28,11 +27,9 @@ to abstract. Take, for instance, this Apache HTTP Server configuration:
];
}
</programlisting>
It defines two virtual hosts with nearly identical configuration; the only
difference is that the second one has SSL enabled. To prevent this
duplication, we can use a <literal>let</literal>:
<programlisting>
let
exampleOrgCommon =
@ -43,7 +40,7 @@ let
};
in
{
<xref linkend="opt-services.httpd.virtualHosts"/> =
[ exampleOrgCommon
(exampleOrgCommon // {
enableSSL = true;
@ -53,40 +50,38 @@ in
];
}
</programlisting>
The <literal>let exampleOrgCommon = <replaceable>...</replaceable></literal>
defines a variable named <literal>exampleOrgCommon</literal>. The
<literal>//</literal> operator merges two attribute sets, so the
configuration of the second virtual host is the set
<literal>exampleOrgCommon</literal> extended with the SSL options.
</para>
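  <para>
   Evaluated on its own, the <literal>//</literal> operator behaves like this
   (a minimal illustration, independent of the Apache options above; attributes
   from the right-hand set win on conflicts):
<programlisting>
{ enableSSL = false; hostName = "example.org"; } // { enableSSL = true; }
# evaluates to { enableSSL = true; hostName = "example.org"; }
</programlisting>
  </para>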
<para>
You can write a <literal>let</literal> wherever an expression is allowed.
Thus, you also could have written:
<programlisting>
{
<xref linkend="opt-services.httpd.virtualHosts"/> =
let exampleOrgCommon = <replaceable>...</replaceable>; in
[ exampleOrgCommon
(exampleOrgCommon // { <replaceable>...</replaceable> })
];
}
</programlisting>
but not <literal>{ let exampleOrgCommon = <replaceable>...</replaceable>; in
<replaceable>...</replaceable>; }</literal> since attributes (as opposed to
attribute values) are not expressions.
</para>
<para>
<emphasis>Functions</emphasis> provide another method of abstraction. For
instance, suppose that we want to generate lots of different virtual hosts,
all with identical configuration except for the host name. This can be done
as follows:
<programlisting>
{
<xref linkend="opt-services.httpd.virtualHosts"/> =
let
makeVirtualHost = name:
{ hostName = name;
@ -101,38 +96,36 @@ the host name. This can be done as follows:
];
}
</programlisting>
Here, <varname>makeVirtualHost</varname> is a function that takes a single
argument <literal>name</literal> and returns the configuration for a virtual
host. That function is then called for several names to produce the list of
virtual host configurations.
</para>
<para>
We can further improve on this by using the function <varname>map</varname>,
which applies another function to every element in a list:
<programlisting>
{
<xref linkend="opt-services.httpd.virtualHosts"/> =
let
makeVirtualHost = <replaceable>...</replaceable>;
in map makeVirtualHost
[ "example.org" "example.com" "example.gov" "example.nl" ];
}
</programlisting>
(The function <literal>map</literal> is called a <emphasis>higher-order
function</emphasis> because it takes another function as an argument.)
</para>
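  <para>
   On its own, <varname>map</varname> works like this (a standalone
   illustration, unrelated to the virtual host example):
<programlisting>
map (name: "www." + name) [ "example.org" "example.com" ]
# evaluates to [ "www.example.org" "www.example.com" ]
</programlisting>
  </para>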
<para>
What if you need more than one argument, for instance, if we want to use a
different <literal>documentRoot</literal> for each virtual host? Then we can
make <varname>makeVirtualHost</varname> a function that takes a
<emphasis>set</emphasis> as its argument, like this:
<programlisting>
{
<xref linkend="opt-services.httpd.virtualHosts"/> =
let
makeVirtualHost = { name, root }:
{ hostName = name;
@ -147,10 +140,9 @@ function that takes a <emphasis>set</emphasis> as its argument, like this:
];
}
</programlisting>
But in this case (where every root is a subdirectory of
<filename>/sites</filename> named after the virtual host), it would have been
shorter to define <varname>makeVirtualHost</varname> as
<programlisting>
makeVirtualHost = name:
{ hostName = name;
@ -158,9 +150,7 @@ makeVirtualHost = name:
adminAddr = "alice@example.org";
};
</programlisting>
Here, the construct <literal>${<replaceable>...</replaceable>}</literal>
allows the result of an expression to be spliced into a string.
</para>
</section>
Some files were not shown because too many files have changed in this diff.