properties of ZFS storage pools
Each pool has several properties associated with it. Some properties are
read-only statistics while others are configurable and change the behavior of
the pool.
The following are read-only properties:
- allocated: Amount of storage used within the pool. See the
fragmentation and free properties for more information.
- capacity: Percentage of pool space used. This property can also be referred to by
its shortened column name, cap.
- expandsize: Amount of uninitialized space within the pool or device that can be used
to increase the total capacity of the pool. On whole-disk vdevs, this is
the space beyond the end of the GPT – typically occurring when a
LUN is dynamically expanded or a disk replaced with a larger one. On
partition vdevs, this is the space appended to the partition after it was
added to the pool – most likely by resizing it in-place. The space
can be claimed for the pool by bringing it online with
autoexpand=on or using zpool online -e.
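For example, the unused space can be claimed like this (a sketch; the pool
name tank and device sda are assumptions):

```shell
# Either let the pool grow automatically as devices expand...
zpool set autoexpand=on tank
# ...or expand one device in place without changing the pool property:
zpool online -e tank sda
```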
- fragmentation: The amount of fragmentation in the pool. As the amount of space
allocated increases, it becomes more
difficult to locate free space. This may
result in lower write performance compared to pools with more unfragmented
free space.
- free: The amount of free space available in the pool. By contrast, the
zfs(8) available property describes how much new data can be written to ZFS
filesystems/volumes. The zpool free property
is not generally useful for this purpose, and can be substantially more
than the zfs available space. This
discrepancy is due to several factors, including raidz parity; zfs
reservation, quota, refreservation, and refquota properties; and space set
aside by spa_slop_shift (see
zfs(4) for more information).
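The difference between the two views can be seen directly (the pool name
tank is an assumption):

```shell
# Raw space left in the pool (before parity overhead, reservations, slop):
zpool get -H -o value free tank
# Space actually writable by datasets:
zfs get -H -o value available tank
```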
- freeing: After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously. freeing
is the amount of space remaining to be reclaimed. Over time
freeing will decrease while free increases.
- health: The current health of the pool. Health can be one of
ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
- guid: A unique identifier for the pool.
- load_guid: A unique identifier for the pool. Unlike the
guid property, this identifier is generated
every time we load the pool (i.e. does not persist across imports/exports)
and never changes while the pool is loaded (even if a
reguid operation takes place).
- size: Total size of the storage pool.
- unsupported@feature_guid: Information about unsupported features that are enabled on the pool. See
zpool-features(7) for details.
The space usage properties report actual physical space available to the storage
pool. The physical space can be different from the total amount of space that
any contained datasets can actually use. The amount of space used in a raidz
configuration depends on the characteristics of the data being written. In
addition, ZFS reserves some space for internal accounting that the
zfs(8) command takes into account, but the
zpool(8) command does not. For non-full
pools of a reasonable size, these effects should be invisible. For small
pools, or pools that are close to being completely full, these discrepancies
may become more noticeable.
The following property can be set at creation time and import time:
- altroot: Alternate root directory. If set, this directory is prepended to any mount
points within the pool. This can be used when examining an unknown pool
where the mount points cannot be trusted, or in an alternate boot
environment, where the typical paths are not valid.
altroot is not a persistent property. It is
valid only while the system is up. Setting
altroot defaults to using cachefile=none,
though this may be overridden using an explicit setting.
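A typical use is importing a foreign pool under a scratch directory
(the pool name and mount point are assumptions):

```shell
# Mount points inside the pool are prepended with /mnt instead of /:
zpool import -o altroot=/mnt tank
```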
The following property can be set only at import time:
- readonly: If set to on, the pool will be imported in
read-only mode. This property can also be referred to by its shortened
column name, rdonly.
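For instance, a possibly damaged pool can be inspected without risking
further writes (the pool name is an assumption):

```shell
zpool import -o readonly=on tank
```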
The following properties can be set at creation time and import time, and later
changed with the zpool set command:
- ashift: Pool sector size exponent, to the power of 2
(internally referred to as ashift). Values
from 9 to 16, inclusive, are valid; also, the value 0 (the default) means
to auto-detect using the kernel's block layer and a ZFS internal exception
list. I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk) write size will be set to the specified
size, so this represents a space vs. performance trade-off. For optimal
performance, the pool sector size should be greater than or equal to the
sector size of the underlying disks. The typical case for setting this
property is when performance is important and the underlying disks use
4KiB sectors but report 512B sectors to the OS (for compatibility
reasons); in that case, set ashift=12 (which
is 1<<12 =
4096). When set, this property is used as the
default hint value in subsequent vdev operations (add, attach and
replace). Changing this value will not modify any existing vdev, not even
on disk replacement; however it can be used, for instance, to replace a
dying 512B sectors disk with a newer 4KiB sectors device: this will
probably result in bad performance but at the same time could prevent loss
of data.
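As a sketch (pool and device names are assumptions):

```shell
# Force 4 KiB sectors (2^12) on disks that report 512 B for compatibility:
zpool create -o ashift=12 tank mirror sda sdb
# The property only provides a default hint for later vdev operations, so
# it can also be given explicitly when attaching a new device:
zpool attach -o ashift=12 tank sda sdc
```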
- autoexpand: Controls automatic pool expansion when the underlying LUN is grown. If set
to on, the pool will be resized according to
the size of the expanded device. If the device is part of a mirror or
raidz then all devices within that mirror/raidz group must be expanded
before the new space is made available to the pool. The default behavior
is off. This property can also be referred to
by its shortened column name, expand.
- autoreplace: Controls automatic device replacement. If set to
off, device replacement must be initiated by
the administrator by using the
zpool replace command. If set to
on, any new device, found in the same
physical location as a device that previously belonged to the pool, is
automatically formatted and replaced. The default behavior is
off. This property can also be referred to by
its shortened column name, replace.
Autoreplace can also be used with virtual disks (like device mapper)
provided that you use the /dev/disk/by-vdev paths setup by vdev_id.conf.
See the vdev_id(8) manual page for more
details. Autoreplace and autoonline require the ZFS Event Daemon be
configured and running. See the zed(8) manual
page for more details.
- autotrim: When set to on, space which has been recently
freed, and is no longer allocated by the pool, will be periodically
trimmed. This allows block device vdevs which support BLKDISCARD, such as
SSDs, or file vdevs on which the underlying file system supports
hole-punching, to reclaim unused blocks. The default value for this
property is off.
Automatic TRIM does not immediately reclaim blocks after a free. Instead, it
will optimistically delay allowing smaller ranges to be aggregated into a
few larger ones. These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
l2arc_trim_ahead > 0.
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices. This will vary
depending of how well the specific device handles these commands. For
lower-end devices it is often possible to achieve most of the benefits of
automatic trimming by running an on-demand (manual) TRIM periodically
using the zpool trim command.
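For example (the pool name is an assumption):

```shell
# Enable periodic trimming of freed blocks:
zpool set autotrim=on tank
# On lower-end devices, an on-demand TRIM (e.g. from cron) may be preferable:
zpool trim tank
zpool status -t tank    # report per-vdev TRIM progress
```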
- bootfs: Identifies the default bootable dataset for the root pool. This property
is expected to be set mainly by the installation and upgrade programs. Not
all Linux distribution boot processes use the bootfs property.
- cachefile: Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system. All pools in
this cache are automatically imported when the system boots. Some
environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported. Setting this property caches the pool configuration in a
different location that can later be imported with
zpool import -c. Setting it to the value
none creates a temporary pool that is never
cached, and the special value “” (empty string) uses the default location.
Multiple pools can share the same cache file. Because the kernel destroys
and recreates this file when pools are added and removed, care should be
taken when attempting to access this file. When the last pool using a
cachefile is exported or destroyed, the file
will be empty.
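A sketch of the common cases (pool names, devices, and the cache path are
assumptions):

```shell
# Cache the configuration in a non-default location:
zpool create -o cachefile=/etc/zfs/alternate.cache tank sda
# Later, import every pool recorded in that file:
zpool import -c /etc/zfs/alternate.cache -a
# Create a temporary pool that is never cached:
zpool create -o cachefile=none scratch sdb
```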
- comment: A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted. An
administrator can provide additional information about a pool using this
property.
- compatibility: Specifies that the pool maintain compatibility with specific feature sets.
When set to off (or unset) compatibility is
disabled (all features may be enabled); when set to
legacy, no features may be enabled. When set to
a comma-separated list of filenames (each filename may either be an
absolute path, or relative to
/etc/zfs/compatibility.d or /usr/share/zfs/compatibility.d),
lists of requested features are read from those files, separated by
whitespace and/or commas. Only features present in all files may be
enabled. See zpool-features(7) and
zpool-upgrade(8) for more information on the
operation of compatibility feature sets.
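For example, a boot pool can be restricted to features GRUB2 understands,
using the grub2 feature-set file shipped with OpenZFS (pool and partition
names are assumptions):

```shell
zpool create -o compatibility=grub2 bpool sda1
```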
- dedupditto: This property is deprecated and no longer has any effect.
- delegation: Controls whether a non-privileged user is granted access based on the
dataset permissions defined on the dataset. See
zfs(8) for more information on ZFS delegated
administration.
- failmode: Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the
underlying storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
- wait: Blocks all I/O access until the device connectivity is recovered and
the errors are cleared. This is the default behavior.
- continue: Returns EIO to any new write I/O
requests but allows reads to any of the remaining healthy devices. Any
write requests that have yet to be committed to disk would be blocked.
- panic: Prints out a message to the console and generates a system crash
dump.
- feature@feature_name: The value of this property is the current state of
feature_name. The only valid value when
setting this property is enabled which moves
feature_name to the enabled state. See
zpool-features(7) for details on feature
states.
- listsnapshots: Controls whether information about snapshots associated with this pool is
output when zfs list is run without the
-t option. The default value is
off. This property can also be referred to by
its shortened name, listsnaps.
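For example (the pool name is an assumption):

```shell
zpool set listsnapshots=on tank
zfs list    # snapshot rows now appear without needing -t snapshot
```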
- multihost: Controls whether a pool activity check should be performed during
zpool import. When a pool is determined to be
active it cannot be imported, even with the
-f option. This property is intended to
be used in failover configurations where multiple hosts have access to a
pool on shared storage.
Multihost provides protection on import only. It does not protect against an
individual device being used in multiple pools, regardless of the type of
vdev. See the discussion under zpool create.
When this property is on, periodic writes to storage occur to show the pool
is in use. See zfs_multihost_interval in the
zfs(4) manual page. In order to enable this
property each host must set a unique hostid. See
spl(4) for additional details. The default
value is off.
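Enabling it might look like this (the pool name is an assumption):

```shell
# Generate a persistent, unique hostid first (no-op if /etc/hostid exists):
zgenhostid
zpool set multihost=on tank
```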
- version: The current on-disk version of the pool. This can be increased, but never
decreased. The preferred method of updating pools is with the
zpool upgrade command, though this property
can be used when a specific version is needed for backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have
any meaning.
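For example (the pool name is an assumption):

```shell
# List pools whose on-disk format is out of date, then upgrade one:
zpool upgrade
zpool upgrade tank
```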