zfs-mount-generator - generate systemd mount units for ZFS filesystems


zfs-mount-generator is a systemd.generator(7) that generates native systemd.mount(5) units for configured ZFS datasets.

mountpoint=
Skipped if legacy or none.
canmount=
Skipped if off. Skipped if only noauto datasets exist for a given mountpoint and there's more than one. Datasets with on take precedence over ones with noauto for the same mountpoint. Sets logical noauto flag if noauto. Encryption roots always generate zfs-load-key@root.service, even if off.
atime=, relatime=, devices=, exec=, readonly=, setuid=, nbmand=
Used to generate mount options equivalent to zfs mount.
encroot=, keylocation=
If the dataset is an encryption root, its mount unit will bind to zfs-load-key@root.service, with additional dependencies as follows:
keylocation=prompt
None, uses systemd-ask-password(1)
keylocation=https://URL (et al.)
Wants=, After=: network-online.target
The service also uses the same Wants=, After=, Requires=, and RequiresMountsFor= as the mount unit.
org.openzfs.systemd:requires=path[ path]…
Requires= for the mount- and key-loading unit.
org.openzfs.systemd:requires-mounts-for=path[ path]…
RequiresMountsFor= for the mount- and key-loading unit.
org.openzfs.systemd:before=unit[ unit]…
Before= for the mount unit.
org.openzfs.systemd:after=unit[ unit]…
After= for the mount unit.
org.openzfs.systemd:wanted-by=unit[ unit]…
Sets logical noauto flag (see below). If not none, sets WantedBy= for the mount unit.
org.openzfs.systemd:required-by=unit[ unit]…
Sets logical noauto flag (see below). If not none, sets RequiredBy= for the mount unit.
org.openzfs.systemd:nofail=(unset)|on|off
Waxes or wanes strength of default reverse dependencies of the mount unit, see below.
org.openzfs.systemd:ignore=on|off
Skip if on. Defaults to off.
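The first group of per-dataset properties above is turned into ordinary mount options, mirroring what zfs mount itself would pass. As a rough sketch (not the generator's actual code), the correspondence for the boolean properties can be written as a simple lookup:

```shell
# Sketch only: map one ZFS property/value pair to the equivalent
# Linux mount option, as "zfs mount" would. The real generator handles
# more cases (inheritance, temporary mount properties, zfsutil).
prop_to_opt() {
  case "$1=$2" in
    atime=on)     echo atime      ;;
    atime=off)    echo noatime    ;;
    relatime=on)  echo relatime   ;;
    relatime=off) echo norelatime ;;
    devices=on)   echo dev        ;;
    devices=off)  echo nodev      ;;
    exec=on)      echo exec       ;;
    exec=off)     echo noexec     ;;
    readonly=on)  echo ro         ;;
    readonly=off) echo rw         ;;
    setuid=on)    echo suid       ;;
    setuid=off)   echo nosuid     ;;
    nbmand=on)    echo mand       ;;
    nbmand=off)   echo nomand     ;;
  esac
}

prop_to_opt atime off     # noatime
prop_to_opt readonly on   # ro
```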

Additionally, unless the pool the dataset resides on is imported at generation time, both units gain Wants=zfs-import.target and After=zfs-import.target.

Additionally, unless the logical noauto flag is set, the mount unit gains a reverse-dependency for local-fs.target of strength

(unset)
WantedBy= + Before=
on
WantedBy=
off
RequiredBy= + Before=
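Putting the pieces together, the generated unit for a dataset mounted at /home might resemble the following. This is illustrative only, assuming a dataset named poolname/home with default properties; the exact fields and their values vary by OpenZFS version:

```ini
# Illustrative, not verbatim generator output.
[Unit]
SourcePath=/etc/zfs/zfs-list.cache/poolname
Before=local-fs.target
Wants=zfs-import.target
After=zfs-import.target

[Mount]
Where=/home
What=poolname/home
Type=zfs
Options=defaults,atime,relatime,dev,exec,rw,suid,nomand,zfsutil
```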

Because ZFS pools may not be available very early in the boot process, information on ZFS mountpoints must be stored separately. The output of

zfs list -Ho name,⟨every property above in order⟩
for datasets that should be mounted by systemd should be kept at @sysconfdir@/zfs/zfs-list.cache/poolname, and, if writeable, will be kept synchronized for the entire pool by the history_event-zfs-list-cacher.sh ZEDLET, if enabled (see zed(8)).
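Each line of the cache file is therefore one tab-separated record of zfs list -H output, with the columns following the property order given above. A hypothetical (truncated) record can be picked apart with standard tools; the dataset name and mountpoint are always the first two fields:

```shell
# Hypothetical cache-file record (truncated to three columns for the
# example); real files carry every property listed above, tab-separated.
line="$(printf 'poolname/home\t/home\ton')"

# Fields 1 and 2 are the dataset name and its mountpoint:
printf '%s\n' "$line" | awk -F '\t' '{ print $1 " -> " $2 }'
# prints: poolname/home -> /home
```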

If the ZFS_DEBUG environment variable is nonzero (or unset and /proc/cmdline contains "debug"), print summary accounting information at the end.

To begin, enable tracking for the pool:

# touch @sysconfdir@/zfs/zfs-list.cache/poolname
Then enable the tracking ZEDLET:
# ln -s @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh @sysconfdir@/zfs/zed.d
# systemctl enable zfs-zed.service
# systemctl restart zfs-zed.service

If no history event is in the queue, inject one to ensure the ZEDLET runs to refresh the cache file by setting a monitored property somewhere on the pool:

# zfs set relatime=off poolname/dset
# zfs inherit relatime poolname/dset

To test the generator output:

$ mkdir /tmp/zfs-mount-generator
$ @systemdgeneratordir@/zfs-mount-generator /tmp/zfs-mount-generator
If the generated units are satisfactory, instruct systemd to re-run all generators:
# systemctl daemon-reload
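To locate the unit a given mountpoint produced, note that systemd names mount units after the escaped mountpoint path. For simple paths containing no characters that need escaping, the rule can be sketched as below; in practice, prefer systemd-escape -p --suffix=mount, which implements the full escaping rules of systemd.unit(5):

```shell
# Simplified sketch of systemd's path-to-mount-unit naming; does NOT
# handle paths containing characters that need \xXX escaping.
unit_for_mountpoint() {
  case "$1" in
    /) echo '-.mount' ;;
    *) echo "$1" | sed 's|^/||; s|/|-|g; s|$|.mount|' ;;
  esac
}

unit_for_mountpoint /home      # home.mount
unit_for_mountpoint /var/lib   # var-lib.mount
```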

systemd.mount(5), zfs(5), systemd.generator(7), zed(8), zpool-events(8)

May 31, 2021 Debian