Not all systems need to be online to boot up, so don't pull
network-online.target into multi-user.target. Services that need the
network to be online can still require it.
This decreases my boot time from ~9s to ~5s.
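A unit that genuinely needs the network up can still pull the target in
itself. A minimal sketch (the service is hypothetical):

``` nix
{ pkgs, ... }: {
  # Hypothetical service that needs a working network connection.
  systemd.services.my-sync = {
    wants = [ "network-online.target" ];
    after = [ "network-online.target" ];
    serviceConfig.ExecStart = "${pkgs.hello}/bin/hello";
  };
}
```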
Commit 1d61efb7f177f7b70c467ab4940fde0a3481d4dc accidentally changed the
restartTriggers of `systemd-networkd.service` to point to the attribute
name (in this case, a location relative to `/etc`) instead of the
location of the network-related unit files in the Nix store.
This caused systemd-networkd not to be restarted on activation of a new
networking config if the file name hadn't changed.
Fix this by pointing the triggers back to the location in the Nix store.
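The gist of the fix, as a hedged sketch (names invented; assumes the
generated file is exposed via `environment.etc`):

``` nix
{ config, ... }: {
  systemd.services.systemd-networkd.restartTriggers = [
    # Hash the store path of the generated file, not its name
    # relative to /etc, so content changes trigger a restart.
    config.environment.etc."systemd/network/10-lan.network".source
  ];
}
```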
Add a distinctive `unit-script` prefix to systemd unit scripts to make
them easier to find in the store directory. Do not add this prefix to
the actual script file name, as that clutters logs.
Current journal output from services started by `script` rather than
`ExecStart` is unreadable, because the name of the file (which journalctl
records and outputs) quite literally takes up a third of the screen (on
smaller screens).
Make it shorter. In particular:
* Drop the `unit-script` prefix as it is not very useful.
* Use `writeShellScriptBin` to write them (see the sketch below) because:
  * It has a `checkPhase`, which is better than no checkPhase.
  * The script itself ends up having a short name.
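Roughly, the pattern looks like this (unit and script names invented):

``` nix
{ pkgs, ... }:
let
  # writeShellScriptBin puts the script at $out/bin/<name>, so the
  # basename that ends up in the journal is just "my-unit-start".
  # Its checkPhase runs a shell syntax check on the script.
  start = pkgs.writeShellScriptBin "my-unit-start" ''
    echo "starting up"
  '';
in {
  systemd.services.my-unit.serviceConfig.ExecStart =
    "${start}/bin/my-unit-start";
}
```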
systemd-tmpfiles loads all files in lexicographic order and ignores rules
for the same path in later files, with a warning. Since we apply the default
rules provided by systemd, we should load user-defined rules first so that
users have a chance to override the defaults.
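For instance, a user rule like the following (a sketch; the age values
assume systemd's stock tmp.conf) can only win over the default rule for
the same path if the user-defined file is loaded first, which is exactly
what this change ensures:

``` nix
{
  # Clean /var/tmp after 7d instead of the default from tmp.conf.
  systemd.tmpfiles.rules = [
    "d /var/tmp 1777 root root 7d"
  ];
}
```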
Commit 1d2c3279311e4f03fcf164e1366f2fda9f4bfccf in the upstream kernel
repository removed support for the scalar x86_64 and i586 AES
assembly implementations, since the generic AES implementation generated
by the compiler is faster on both platforms. Remove the modules from
the cryptoModules list; the upstream removal causes a regression for
kernel versions >= 5.4, which no longer include these modules. This
should have no negative impact on AES performance on older kernels
either: the generic implementation should be faster there as well, since
the assembly implementation was hardly touched after its initial
submission.
Fixes #84842
In contrast to `.service` units, it's not possible to declare an
`overrides.conf` for `.nspawn` units; however, `generateUnits` creates
one for `.nspawn` units as well. This breaks the build if you have two
derivations configuring one nspawn unit.
This will happen in a case like this:
``` nix
{ pkgs, ... }: {
  systemd.packages = [
    (pkgs.writeTextDir "etc/systemd/nspawn/container0.nspawn" ''
      [Files]
      Bind=/tmp
    '')
  ];
  systemd.nspawn.container0 = {
    /* ... */
  };
}
```
This option allows replacing the tmpfs mounted on / by
the live CD's init script with a physical device.
Since NixOS symlinks everything, there's no trouble at all.
This enables the user to easily use a NixOS live CD
as a portable installation.
Note that due to some limitations in how the store is mounted,
currently only non-store paths are persisted.
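Usage would look something along these lines (the option path here is
purely hypothetical, for illustration; see the module for the real name):

``` nix
{
  # Hypothetical option name: mount this device over the live CD's
  # tmpfs root so changes survive reboots.
  isoImage.persistentRoot = "/dev/disk/by-label/nixos-persist";
}
```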
Dropbear lags significantly behind OpenSSH in support for modern key
formats like `ssh-ed25519`, let alone the recently introduced
U2F/FIDO2-based `sk-ssh-ed25519@openssh.com` (as I found when I switched
my `authorizedKeys` over to it and promptly locked myself out of my
server's initrd SSH, breaking reboots), as well as in security features
like multiprocess isolation. Using the same SSH daemon for stage 1 and
the main system ensures key formats will always remain compatible, and
more conveniently allows sharing configuration and host keys.
The main reason to use Dropbear over OpenSSH would be initrd space
concerns, but NixOS initrds are already large (17 MiB currently on my
server), and the size difference between the two isn't huge (the test's
initrd goes from 9.7 MiB to 12 MiB with this change). If the size is
still a problem, then it would be easy to shrink sshd down to a few
hundred kilobytes by using an initrd-specific build that uses musl and
disables things like Kerberos support.
This passes the test and works on my server, but more rigorous testing
and review from people who use initrd SSH would be appreciated!
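For reference, sharing the main system's host keys with the stage-1
OpenSSH might look like this (a sketch; paths assume standard host key
locations):

``` nix
{ config, ... }: {
  boot.initrd.network.enable = true;
  boot.initrd.network.ssh = {
    enable = true;
    # Reuse the main system's ed25519 host key and root's
    # authorized keys.
    hostKeys = [ "/etc/ssh/ssh_host_ed25519_key" ];
    authorizedKeys = config.users.users.root.openssh.authorizedKeys.keys;
  };
}
```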
`$toplevel/system` of a system closure with an `x86_64` kernel and `i686` userland should contain "x86_64-linux".
If `$toplevel/system` contains "i686-linux", the closure will be run using `qemu-system-i386`, which is able to run an `x86_64` kernel on most Intel CPUs, but fails on AMD.
So this fix is for the rare case of an `x86_64` kernel + `i686` userland + an AMD CPU.
This mirrors the behaviour of systemd: it's udev that parses `.link`
files, not `systemd-networkd`.
This was originally applied in 36ef112a477034fc6d1d9170bf1bcda0140a8d1d,
but was reverted because 1115959a8d4d73ad73341563dc8bbf52230a281e caused
evaluation errors on Hydra.
...even when networkd is disabled
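For example, a `.link` file defined like this is applied by udev at
device detection time, regardless of whether networkd is enabled (names
invented):

``` nix
{
  systemd.network.links."10-wan" = {
    matchConfig.MACAddress = "00:11:22:33:44:55";
    linkConfig.Name = "wan0";
  };
}
```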
This reverts commit ce78f3ac701017008aa7f1db387b871b7ae65e01, reversing
changes made to dc34da0755b3c36469965659c0ee4a1337e81c05.
I'm sorry; Hydra has been unable to evaluate, always returning
> error: unexpected EOF reading a line
and I've been unable to reproduce the problem locally. Bisecting
pointed to this merge, but I still can't see what exactly was wrong.
This is to facilitate units that should _only_ be started manually and
not activated when a configuration is switched to.
More specifically, this is to be used by the new NixOps deploy-*
targets created in https://github.com/NixOS/nixops/pull/1245, which are
triggered by NixOps before/after switch-to-configuration is called.
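A hedged sketch of such a unit; with no `wantedBy`, nothing pulls it in,
so it only runs when started explicitly (the target name is invented):

``` nix
{
  systemd.targets.deploy-pre-switch = {
    description = "Tasks to run before switch-to-configuration";
    # No wantedBy: this target is never activated on switch,
    # only started manually, e.g. by NixOps.
  };
}
```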
This avoids a possible surprise if the user is using `nixpkgs.system`
and `nesting.children`. `nesting.children` is expected to ignore all
parent configuration, so we shouldn't propagate the user-facing option
`nixpkgs.system`. To avoid doing so, we introduce a new internal
option for holding the value passed to eval-config.nix, and use that
when recursing for nesting.
The current behavior lets `system` default to
`builtins.currentSystem`. The system value specified to
`eval-config.nix` has very low precedence, so this should compose
properly.
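A sketch of the surprising combination:

``` nix
{
  nixpkgs.system = "aarch64-linux";
  # Children ignore parent configuration, so nixpkgs.system above
  # must not leak into them.
  nesting.children = [
    { services.openssh.enable = true; }
  ];
}
```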
Fixes #80806
Depending on the network management backend being used, if the interface
configuration from stage 1 is not cleared, some old addresses or routes
from stage 1 might still be present in stage 2 after network
configuration has finished.
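Hence, clear the stage-1 configuration before stage 2's network
management takes over, presumably via an option along these lines (the
option name is my assumption):

``` nix
{
  boot.initrd.network.enable = true;
  # Flush addresses/routes configured in the initrd before
  # handing the interfaces over to stage 2.
  boot.initrd.network.flushBeforeStage2 = true;
}
```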