ZFSPROPS(7)            Miscellaneous Information Manual            ZFSPROPS(7)
zfsprops - native and user-defined properties of ZFS datasets

Properties are divided into two types: native properties and user-defined (or “user”) properties. Native properties either export internal statistics or control ZFS behavior. User properties have no effect on ZFS behavior, but can be used to annotate datasets. For more information about user properties, see the User Properties section, below.
The values of numeric properties can be specified using human-readable suffixes (for example, k, KB, M, Gb, and so forth, up to Z for zettabyte). The following are all valid (and equal) specifications: 1536M, 1.5g, 1.50GB. The values of non-numeric properties are case sensitive and must be lowercase, except for mountpoint, sharenfs, and sharesmb. The following native properties consist of read-only statistics about the dataset. These properties can be neither set, nor inherited. Native properties apply to all dataset types unless otherwise noted.
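The equivalence of those three spellings can be checked with plain shell arithmetic (a sketch; the suffixes are binary powers, M = 2^20 bytes, G = 2^30 bytes):

```shell
# 1536M and 1.5g denote the same byte count.
m_bytes=$((1536 * 1024 * 1024))            # 1536M = 1536 * 2^20
g_bytes=$((3 * 1024 * 1024 * 1024 / 2))    # 1.5g  = 1.5  * 2^30
echo "$m_bytes $g_bytes"                   # both are 1610612736
```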
- The amount of space available to the dataset and all its children, assuming that there is no other activity in the pool. Because space is shared within a pool, availability can be limited by any number of factors, including physical pool size, quotas, reservations, or other datasets within the pool. This property can also be referred to by its shortened column name, avail.
- For non-snapshots, the compression ratio achieved for the used space of this dataset, expressed as a multiplier. The used property includes descendant datasets, and, for clones, does not include the space shared with the origin snapshot. For snapshots, the compressratio is the same as the refcompressratio property. Compression can be turned on by running zfs set compression=on dataset. The default value is off.
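A ratio like this is simply logical (uncompressed) size divided by physical (allocated) size. The numbers below are hypothetical, chosen only to illustrate how a multiplier such as 3.50x comes about:

```shell
# Hypothetical sizes, in bytes, for one dataset.
logical=3670016    # uncompressed data, e.g. as reported by logicalused
physical=1048576   # allocated on disk, e.g. as reported by used
# compressratio = logical / physical, printed as a multiplier
ratio=$(awk -v l="$logical" -v p="$physical" 'BEGIN { printf "%.2fx", l / p }')
echo "$ratio"      # 3.50x
```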
- The transaction group (txg) in which the dataset was created. Bookmarks have the same createtxg as the snapshot they are initially tied to. This property is suitable for ordering a list of snapshots, e.g. for incremental send and receive.
- The time this dataset was created.
- For snapshots, this property is a comma-separated list of filesystems or volumes which are clones of this snapshot. The clones' origin property is this snapshot. If the clones property is not empty, then this snapshot cannot be destroyed (even with the -f option). The roles of origin and clone can be swapped by promoting the clone with the zfs promote command.
- This property is on if the snapshot has been marked for deferred destroy by using the zfs destroy -d command. Otherwise, the property is off.
- For encrypted datasets, indicates where the dataset is currently inheriting its encryption key from. Loading or unloading a key for the encryptionroot will implicitly load or unload the key for any inheriting datasets (see zfs load-key and zfs unload-key for details). Clones will always share an encryption key with their origin. See the Encryption section of zfs-load-key(8) for details.
- The total number of filesystems and volumes that exist under this location in the dataset tree. This value is only available when a filesystem_limit has been set somewhere in the tree under which the dataset resides.
- Indicates if an encryption key is currently loaded into ZFS. The possible values are none, available, and unavailable.
- The 64 bit GUID of this dataset or bookmark which does not change over its entire lifetime. When a snapshot is sent to another pool, the received snapshot has the same GUID. Thus, the guid is suitable to identify a snapshot across pools.
- The amount of space that is “logically” accessible by this dataset. See the referenced property. The logical space ignores the effect of the compression and copies properties, giving a quantity closer to the amount of data that applications see. However, it does include space consumed by metadata. This property can also be referred to by its shortened column name, lrefer.
- The amount of space that is “logically” consumed by this dataset and all its descendents. See the used property. The logical space ignores the effect of the compression and copies properties, giving a quantity closer to the amount of data that applications see. However, it does include space consumed by metadata. This property can also be referred to by its shortened column name, lused.
- For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.
- A unique identifier for this dataset within the pool. Unlike the dataset's guid, the objsetid of a dataset is not transferred to other pools when the snapshot is copied with a send/receive operation. The objsetid can be reused (for a new dataset) after the dataset is deleted.
- For cloned file systems or volumes, the snapshot from which the clone was created. See also the clones property.
- For filesystems or volumes which have saved partially-completed state from zfs receive -s, this opaque token can be provided to zfs send -t to resume and complete the zfs receive.
- For bookmarks, this is the list of snapshot guids the bookmark contains a redaction list for. For snapshots, this is the list of snapshot guids the snapshot is redacted with respect to.
- The amount of data that is accessible by this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, since its contents are identical. This property can also be referred to by its shortened column name, refer.
- The compression ratio achieved for the referenced space of this dataset, expressed as a multiplier. See also the compressratio property.
- The total number of snapshots that exist under this location in the dataset tree. This value is only available when a snapshot_limit has been set somewhere in the tree under which the dataset resides.
- The type of dataset: filesystem, volume, snapshot, or bookmark.
- The amount of space consumed by this dataset and all its descendents. This is the value that is checked against this dataset's quota and reservation. The space used does not include this dataset's reservation, but does take into account the reservations of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that is freed if this dataset is recursively destroyed, is the greater of its space used and its reservation. The used space of a snapshot (see the Snapshots section of zfsconcepts(7)) is space that is referenced exclusively by this snapshot. If this snapshot is destroyed, the amount of used space will be freed. Space that is shared by multiple snapshots isn't accounted for in this metric. When a snapshot is destroyed, space that was previously shared with this snapshot can become unique to snapshots adjacent to it, thus changing the used space of those snapshots. The used space of the latest snapshot can also be affected by changes in the file system. Note that the used space of a snapshot is a subset of the written space of the snapshot. The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(2) or O_SYNC does not necessarily guarantee that the space usage information is updated immediately.
- The usedby* properties decompose the used property into the various reasons that space is used. Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool “version 13” pools.
- The amount of space used by children of this dataset, which would be freed if all the dataset's children were destroyed.
- The amount of space used by this dataset itself, which would be freed if the dataset were destroyed (after first removing any refreservation and destroying any necessary snapshots or descendents).
- The amount of space used by a refreservation set on this dataset, which would be freed if the refreservation was removed.
- The amount of space consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties because space can be shared by multiple snapshots.
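The decomposition described above can be sketched with hypothetical byte counts (the values below are illustrative, not from a real pool):

```shell
# Hypothetical accounting for one dataset, in bytes.
usedbydataset=1073741824        # data in the dataset itself (1 GiB)
usedbychildren=536870912        # consumed by child datasets (512 MiB)
usedbysnapshots=268435456       # held only by snapshots (256 MiB)
usedbyrefreservation=0          # unused portion of a refreservation
# used is the sum of the four usedby* components
used=$((usedbydataset + usedbychildren + usedbysnapshots + usedbyrefreservation))
echo "used=$used"
```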
- The amount of space consumed by the specified user in this dataset. Space is charged to the owner of each file, as displayed by ls -l. The amount of space charged is displayed by du and ls -s. See the zfs userspace command for more information. Unprivileged users can access only their own space usage. The root user, or a user who has been granted the userused privilege with zfs allow, can access everyone's usage. The userused@... properties are not displayed by zfs get all. The user's name must be appended after the @ symbol, using one of the following forms:
- POSIX name (“joe”)
- POSIX numeric ID (“789”)
- SID name (“joe.smith@mydomain”)
- SID numeric ID (“S-1-123-456-789”)
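A typical query takes one of the forms above after the @ sign. The pool name "pool/home" and user "joe" below are hypothetical, and the zfs line is shown only as a comment since it needs a live pool; the awk pipeline parses a sample line in the tab-separated format that zfs get -Hp emits (name, property, value, source):

```shell
# Illustration only (requires a real pool):
#   zfs get -Hp -o value userused@joe pool/home
# Parsing a hypothetical `zfs get -Hp` output line:
sample=$(printf 'pool/home\tuserused@joe\t1310720\t-')
bytes=$(printf '%s\n' "$sample" | awk -F'\t' '{print $3}')
echo "$bytes bytes charged to joe"
```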
- The userobjused property is similar to userused but instead it counts the number of objects consumed by a user. This property counts all objects allocated on behalf of the user; it may differ from the results of system tools such as ls -i. When the property xattr=on is set on a file system, additional objects will be created per-file to store extended attributes. These additional objects are reflected in the userobjused value and are counted against the user's userobjquota. When a file system is configured to use xattr=sa, no additional internal objects are normally required.
- This property is set to the number of user holds on this snapshot. User holds are set by using the zfs hold command.
- The amount of space consumed by the specified group in this dataset. Space is charged to the group of each file, as displayed by ls -l. See the userused@user property for more information. Unprivileged users can only access their own groups' space usage. The root user, or a user who has been granted the groupused privilege with zfs allow, can access all groups' usage.
- The number of objects consumed by the specified group in this dataset. Multiple objects may be charged to the group for each file when extended attributes are in use. See the userobjused property for more information. Unprivileged users can only access their own groups' space usage. The root user, or a user who has been granted the groupobjused privilege with zfs allow, can access all groups' usage.
- The amount of space consumed by the specified project in this dataset. The project is identified via the project identifier (ID), which is a numerical attribute of an object. An object can inherit the project ID from its parent object when being created, if the parent has the inherit flag for the project ID (which can be set and changed via zfs project -s). A privileged user can set and change an object's project ID via zfs project -s at any time. Space is charged to the project of each file, as displayed by zfs project. See the userused@user property for more information. The root user, or a user who has been granted the projectused privilege with zfs allow, can access all projects' usage.
- The projectobjused property is similar to projectused but instead it counts the number of objects consumed by the project. When the property xattr=on is set on a file system, ZFS will create additional objects per-file to store extended attributes. These additional objects are reflected in the projectobjused value and are counted against the project's projectobjquota. When a file system is configured to use xattr=sa, no additional internal objects are required. See the userobjused property for more information. The root user, or a user who has been granted the projectobjused privilege with zfs allow, can access all projects' object usage.
- For volumes, specifies the block size of the volume. The blocksize cannot be changed once the volume has been written, so it should be set at volume creation time. The default blocksize for volumes is 16 Kbytes. Any power of 2 from 512 bytes to 128 Kbytes is valid. This property can also be referred to by its shortened column name, volblock.
- The amount of space referenced by this dataset, that was written since the previous snapshot (i.e. that is not referenced by the previous snapshot).
- The amount of referenced space written to this dataset since the specified snapshot. This is the space that is referenced by this dataset but was not referenced by the specified snapshot. The snapshot may be specified as a short snapshot name (just the part after the @), in which case it will be interpreted as a snapshot in the same filesystem as this dataset. The snapshot may be a full snapshot name (filesystem@snapshot), which for clones may be a snapshot in the origin's filesystem (or the origin of the origin's filesystem, etc.)
- Controls how ACEs are inherited when files and directories are created.
- discard does not inherit any ACEs.
- noallow only inherits inheritable ACEs that specify “deny” permissions.
- restricted (the default) removes the write_acl and write_owner permissions when the ACE is inherited.
- passthrough inherits all inheritable ACEs without any modifications.
- passthrough-x has the same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
- Controls how an ACL is modified during chmod(2) and how inherited ACEs are modified by the file creation mode:
- discard (the default) deletes all ACEs except for those representing the mode of the file or directory requested by chmod(2).
- groupmask reduces permissions granted in all ALLOW entries found in the ACL such that they are no greater than the group permissions specified by chmod(2).
- passthrough indicates that no changes are made to the ACL other than creating or updating the necessary ACL entries to represent the new mode of the file or directory.
- restricted will cause the chmod(2) operation to return an error when used on any file or directory which has a non-trivial ACL whose entries cannot be represented by a mode. chmod(2) is required to change the set user ID, set group ID, or sticky bits on a file or directory, as they do not have equivalent ACL entries. In order to use chmod(2) on a file or directory with a non-trivial ACL when aclmode is set to restricted, you must first remove all ACL entries which do not represent the current mode.
- Controls whether ACLs are enabled and if so what type of ACL to use. When this property is set to a type of ACL not supported by the current platform, the behavior is the same as if it were set to off.
- off (the default on Linux): when a file system has the acltype property set to off, ACLs are disabled.
- noacl is an alias for off.
- nfsv4 (the default on FreeBSD) indicates that NFSv4-style ZFS ACLs should be used. These ACLs can be managed with getfacl(1) and setfacl(1). The nfsv4 ZFS ACL type is not yet supported on Linux.
- posix indicates POSIX ACLs should be used. POSIX ACLs are specific to Linux and are not functional on other platforms. POSIX ACLs are stored as an extended attribute and therefore will not overwrite any existing NFSv4 ACLs which may be set.
- posixacl is an alias for posix.
- Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities. The values on and off are equivalent to the atime and noatime mount options. The default value is on. See also relatime below.
- If this property is set to off, the file system cannot be mounted, and is ignored by zfs mount -a. Setting this property to off is similar to setting the mountpoint property to none, except that the dataset still has a normal mountpoint property, which can be inherited. Setting this property to off allows datasets to be used solely as a mechanism to inherit properties. One example of setting canmount=off is to have two datasets with the same mountpoint, so that the children of both datasets appear in the same directory, but might have different inherited characteristics. When set to noauto, a dataset can only be mounted and unmounted explicitly. The dataset is not mounted automatically when the dataset is created or imported, nor is it mounted by the zfs mount -a command or unmounted by the zfs unmount -a command. This property is not inherited.
- Controls the checksum used to verify data integrity. The default value is on, which automatically selects an appropriate algorithm (currently, fletcher4, but this may change in future releases). The value off disables integrity checking on user data. The value noparity not only disables integrity but also disables maintaining parity for user data. This setting is used internally by a dump device residing on a RAID-Z pool and should not be used by any other dataset. Disabling checksums is NOT a recommended practice. The sha512, skein, and edonr checksum algorithms require enabling the appropriate features on the pool. Please see zpool-features(7) for more information on these algorithms. Changing this property affects only newly-written data.
- Controls the compression algorithm used for this dataset. Setting compression to on indicates that the current default compression algorithm should be used. The default balances compression and decompression speed, with compression ratio and is expected to work well on a wide variety of workloads. Unlike all other settings for this property, on does not select a fixed compression type. As new compression algorithms are added to ZFS and enabled on a pool, the default compression algorithm may change. The current default compression algorithm is either lzjb or, if the lz4_compress feature is enabled, lz4. The lz4 compression algorithm is a high-performance replacement for the lzjb algorithm. It features significantly faster compression and decompression, as well as a moderately higher compression ratio than lzjb, but can only be used on pools with the lz4_compress feature set to enabled. See zpool-features(7) for details on ZFS feature flags and the lz4_compress feature. The lzjb compression algorithm is optimized for performance while providing decent data compression. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). The zstd compression algorithm provides both high compression ratios and good performance. You can specify the zstd level by using the value zstd-N, where N is an integer from 1 (fastest) to 19 (best compression ratio). zstd is equivalent to zstd-3. Faster speeds at the cost of the compression ratio can be requested by setting a negative zstd level. This is done using zstd-fast-N, where N is an integer in [1-9,10,20,30,...,100,500,1000] which maps to a negative zstd level. The lower the level the faster the compression - 1000 provides the fastest compression and lowest compression ratio. 
zstd-fast is equivalent to zstd-fast-1. The zle compression algorithm compresses runs of zeros. This property can also be referred to by its shortened column name compress. Changing this property affects only newly-written data. When any setting except off is selected, compression will explicitly check for blocks consisting of only zeroes (the NUL byte). When a zero-filled block is detected, it is stored as a hole and not compressed using the indicated compression algorithm. Any block being compressed must be no larger than 7/8 of its original size after compression, otherwise the compression will not be considered worthwhile and the block saved uncompressed. Note that when the logical block is less than 8 times the disk sector size this effectively reduces the necessary compression ratio; for example, 8kB blocks on disks with 4kB disk sectors must compress to 1/2 or less of their original size.
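The gzip-N level tradeoff described above mirrors gzip(1) itself and can be felt locally without a pool: on the same compressible input, a higher level should not produce larger output than a lower one (a sketch; the input text is arbitrary):

```shell
# Compare gzip level 1 (fastest) with level 9 (best ratio) on
# repetitive sample data, analogous to ZFS's gzip-1 .. gzip-9.
tmp=$(mktemp)
yes "some highly repetitive sample text" | head -n 10000 > "$tmp"
s1=$(gzip -1 -c "$tmp" | wc -c)   # bytes at level 1
s9=$(gzip -9 -c "$tmp" | wc -c)   # bytes at level 9
echo "gzip-1: $s1 bytes, gzip-9: $s9 bytes"
rm -f "$tmp"
```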
- This flag sets the SELinux context for all files in the file system under a mount point for that file system. See selinux(8) for more information.
- This flag sets the SELinux context for the file system being mounted. See selinux(8) for more information.
- This flag sets the SELinux default context for unlabeled files. See selinux(8) for more information.
- This flag sets the SELinux context for the root inode of the file system. See selinux(8) for more information.
- Controls the number of copies of data stored for this dataset. These copies are in addition to any redundancy provided by the pool, for example, mirroring or RAID-Z. The copies are stored on different disks, if possible. The space used by multiple copies is charged to the associated file and dataset, changing the used property and counting against quotas and reservations. Changing this property only affects newly-written data. Therefore, set this property at file system creation time by using the -o copies=N option of zfs create. Remember that ZFS will not import a pool with a missing top-level vdev. Do NOT create, for example, a two-disk striped pool and set copies=2 on some datasets thinking you have set up redundancy for them. When a disk fails you will not be able to import the pool and will have lost all of your data. Encrypted datasets may not have copies=3, since the implementation stores some encryption metadata where the third copy would normally be.
- Controls whether device nodes can be opened on this file system. The default value is on. The values on and off are equivalent to the dev and nodev mount options.
- Configures deduplication for a dataset. The default value is off. The default deduplication checksum is sha256 (this may change in the future). When dedup is enabled, the checksum defined here overrides the checksum property. Setting the value to verify has the same effect as the setting sha256,verify. If set to verify, ZFS will do a byte-to-byte comparison in case of two blocks having the same signature to make sure the block contents are identical. Specifying verify is mandatory for the edonr algorithm. Unless necessary, deduplication should not be enabled on a system. See the Deduplication section of zfsconcepts(7).
- Specifies a compatibility mode or literal value for the size of dnodes in the file system. The default value is legacy. Setting this property to a value other than legacy requires the large_dnode pool feature to be enabled. Consider setting dnodesize to auto if the dataset uses the xattr=sa property setting and the workload makes heavy use of extended attributes. This may be applicable to SELinux-enabled systems, Lustre servers, and Samba servers, for example. Literal values are supported for cases where the optimal size is known in advance and for performance testing. Leave dnodesize set to legacy if you need to receive a send stream of this dataset on a pool that doesn't enable the large_dnode feature, or if you need to import this pool on a system that doesn't support the large_dnode feature. This property can also be referred to by its shortened column name, dnsize.
- Controls the encryption cipher suite (block cipher, key length, and mode) used for this dataset. Requires the encryption feature to be enabled on the pool. Requires a keyformat to be set at dataset creation time. Selecting encryption=on when creating a dataset indicates that the default encryption suite will be selected, which is currently aes-256-gcm. In order to provide consistent data protection, encryption must be specified at dataset creation time and it cannot be changed afterwards. For more details and caveats about encryption see the Encryption section of zfs-load-key(8).
- Controls what format the user's encryption key will be provided as. This property is only set when the dataset is encrypted. Raw keys and hex keys must be 32 bytes long (regardless of the chosen encryption suite) and must be randomly generated. A raw key can be generated with the following command:

dd if=/dev/urandom bs=32 count=1 of=/path/to/output/key

Passphrases must be between 8 and 512 bytes long and will be processed through PBKDF2 before being used (see the pbkdf2iters property). Even though the encryption suite cannot be changed after dataset creation, the keyformat can be changed with zfs change-key.
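The dd invocation above is safe to try anywhere; a quick sanity check confirms that the key it produces is exactly the 32 bytes that keyformat=raw requires (the temporary file path is generated, not a real key location):

```shell
# Generate a raw key into a throwaway file and verify its length.
keyfile=$(mktemp)
dd if=/dev/urandom bs=32 count=1 of="$keyfile" 2>/dev/null
size=$(wc -c < "$keyfile")   # must be exactly 32 bytes for keyformat=raw
echo "$size"
rm -f "$keyfile"
```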
- Controls where the user's encryption key will be loaded from by default for commands such as zfs load-key and zfs mount -l. This property is only set for encrypted datasets which are encryption roots. If unspecified, the default is prompt. Even though the encryption suite cannot be changed after dataset creation, the keylocation can be changed with either zfs set or zfs change-key. If prompt is selected, ZFS will ask for the key at the command prompt when it is required to access the encrypted data (see zfs load-key for details). This setting will also allow the key to be passed in via the standard input stream, but users should be careful not to place keys which should be kept secret on the command line. If a file URI is selected, the key will be loaded from the specified absolute file path. If an HTTPS or HTTP URL is selected, it will be GETted using fetch(3), libcurl, or nothing, depending on compile-time configuration and run-time availability. The SSL_CA_CERT_FILE environment variable can be set to set the location of the concatenated certificate store. The SSL_CA_CERT_PATH environment variable can be set to override the location of the directory containing the certificate authority bundle. The SSL_CLIENT_CERT_FILE and SSL_CLIENT_KEY_FILE environment variables can be set to configure the path to the client certificate and its key.
- Controls the number of PBKDF2 iterations that a passphrase encryption key should be run through when processing it into an encryption key. This property is only defined when encryption is enabled and a keyformat of passphrase is selected. The goal of PBKDF2 is to significantly increase the computational difficulty needed to brute force a user's passphrase. This is accomplished by forcing the attacker to run each passphrase through a computationally expensive hashing function many times before they arrive at the resulting key. A user who actually knows the passphrase will only have to pay this cost once. As CPUs become better at processing, this number should be raised to ensure that a brute force attack is still not possible. The current default is 350000 and the minimum is 100000. This property may be changed with zfs change-key.
- Controls whether processes can be executed from within this file system. The default value is on. The values on and off are equivalent to the exec and noexec mount options.
- Limits the number of filesystems and volumes that can exist under this point in the dataset tree. The limit is not enforced if the user is allowed to change the limit. Setting a filesystem_limit on a descendent of a filesystem that already has a filesystem_limit does not override the ancestor's filesystem_limit, but rather imposes an additional limit. This feature must be enabled to be used (see zpool-features(7)).
- This value represents the threshold block size for including small file blocks into the special allocation class. Blocks smaller than or equal to this value will be assigned to the special allocation class while greater blocks will be assigned to the regular class. Valid values are zero or a power of two from 512B up to 1M. The default size is 0 which means no small file blocks will be allocated in the special class. Before setting this property, a special class vdev must be added to the pool. See zpoolconcepts(7) for more details on the special allocation class.
- Controls the mount point used for this file system. See the Mount Points section of zfsconcepts(7) for more information on how this property is used. When the mountpoint property is changed for a file system, the file system and any children that inherit the mount point are unmounted. If the new value is legacy, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously legacy or none, or if they were mounted before the property was changed. In addition, any shared file systems are unshared and shared in the new location.
- Controls whether the file system should be mounted with nbmand (Non-blocking mandatory locks). This is used for SMB clients. Changes to this property only take effect when the file system is unmounted and remounted. Support for these locks is scarce and not described by POSIX.
- Allow mounting on a busy directory or a directory which already contains files or directories. This is the default mount behavior for Linux and FreeBSD file systems. On these platforms the property is on by default. Set to off to disable overlay mounts for consistency with OpenZFS on other platforms.
- Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata is cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
- Limits the amount of space a dataset and its descendents can consume. This property enforces a hard limit on the amount of space used. This includes all space consumed by descendents, including file systems and snapshots. Setting a quota on a descendent of a dataset that already has a quota does not override the ancestor's quota, but rather imposes an additional limit. Quotas cannot be set on volumes, as the volsize property acts as an implicit quota.
- Limits the number of snapshots that can be created on a dataset and its descendents. Setting a snapshot_limit on a descendent of a dataset that already has a snapshot_limit does not override the ancestor's snapshot_limit, but rather imposes an additional limit. The limit is not enforced if the user is allowed to change the limit. For example, this means that recursive snapshots taken from the global zone are counted against each delegated dataset within a zone. This feature must be enabled to be used (see zpool-features(7)).
- Limits the amount of space consumed by the specified user. User space consumption is identified by the userused@user property. Enforcement of user quotas may be delayed by several seconds. This delay means that a user might exceed their quota before the system notices that they are over quota and begins to refuse additional writes with the EDQUOT error message. See the zfs userspace command for more information. Unprivileged users can only access their own space usage. The root user, or a user who has been granted the userquota privilege with zfs allow, can get and set everyone's quota. This property is not available on volumes, on file systems before version 4, or on pools before version 15. The userquota@... properties are not displayed by zfs get all. The user's name must be appended after the @ symbol, using one of the following forms:
- POSIX name (“joe”)
- POSIX numeric ID (“789”)
- SID name (“joe.smith@mydomain”)
- SID numeric ID (“S-1-123-456-789”)
- The userobjquota is similar to userquota but it limits the number of objects a user can create. Please refer to userobjused for more information about how objects are counted.
- Limits the amount of space consumed by the specified group. Group space consumption is identified by the groupused@group property. Unprivileged users can access only their own groups' space usage. The root user, or a user who has been granted the groupquota privilege with zfs allow, can get and set all groups' quotas.
- The groupobjquota is similar to groupquota but it limits the number of objects a group can consume. Please refer to userobjused for more information about how objects are counted.
- Limits the amount of space consumed by the specified project. Project space consumption is identified by the projectused@project property. Please refer to projectused for more information about how a project is identified and set or changed. The root user, or a user who has been granted the projectquota privilege with zfs allow, can access all projects' quota.
- The projectobjquota is similar to projectquota but it limits the number of objects a project can consume. Please refer to userobjused for more information about how objects are counted.
- Controls whether this dataset can be modified. The default value is off. The values on and off are equivalent to the ro and rw mount options. This property can also be referred to by its shortened column name, rdonly.
- Specifies a suggested block size for files in the file system. This property is designed solely for use with database workloads that access files in fixed-size records. ZFS automatically tunes block sizes according to internal algorithms optimized for typical access patterns. For databases that create very large files but access them in small random chunks, these algorithms may be suboptimal. Specifying a recordsize greater than or equal to the record size of the database can result in significant performance gains. Use of this property for general purpose file systems is strongly discouraged, and may adversely affect performance. The size specified must be a power of two greater than or equal to 512B and less than or equal to 128kB. If the large_blocks feature is enabled on the pool, the size may be up to 1MB. See zpool-features(7) for details on ZFS feature flags. Changing the file system's recordsize affects only files created afterward; existing files are unaffected. This property can also be referred to by its shortened column name, recsize.
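The power-of-two constraint above is easy to check before setting the property. The helper below is a sketch (the function name is not a ZFS tool); it validates only the lower bound and the power-of-two requirement, not the pool-dependent upper bound:

```shell
# Check whether a candidate recordsize is >= 512 and a power of two.
# A power of two n satisfies n & (n - 1) == 0.
is_pow2() { [ "$1" -ge 512 ] && [ $(( $1 & ($1 - 1) )) -eq 0 ]; }

is_pow2 8192  && echo "8192 is a valid recordsize candidate"
is_pow2 12288 || echo "12288 is not a power of two"
```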
- Controls what types of metadata are stored redundantly. ZFS stores an extra copy of metadata, so that if a single block is corrupted, the amount of user data lost is limited. This extra copy is in addition to any redundancy provided at the pool level (e.g. by mirroring or RAID-Z), and is in addition to an extra copy specified by the copies property (up to a total of 3 copies). For example if the pool is mirrored, copies=2, and redundant_metadata=most, then ZFS stores 6 copies of most metadata, and 4 copies of data and some metadata. When set to all, ZFS stores an extra copy of all metadata. If a single on-disk block is corrupt, at worst a single block of user data (which is recordsize bytes long) can be lost. When set to most, ZFS stores an extra copy of most types of metadata. This can improve performance of random writes, because less metadata must be written. In practice, at worst about 100 blocks (of recordsize bytes each) of user data can be lost if a single on-disk block is corrupt. The exact behavior of which metadata blocks are stored redundantly may change in future releases. The default value is all.
- Limits the amount of space a dataset can consume. This property enforces a hard limit on the amount of space used. This hard limit does not include space used by descendents, including file systems and snapshots.
- The minimum amount of space guaranteed to a dataset, not including its descendents. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations. If refreservation is set, a snapshot is only allowed if there is enough free pool space outside of this reservation to accommodate the current number of “referenced” bytes in the dataset. If refreservation is set to auto, a volume is thick provisioned (or “not sparse”). refreservation=auto is only supported on volumes. See volsize in the Native Properties section for more information about sparse volumes. This property can also be referred to by its shortened column name, refreserv.
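For illustration, refquota and refreservation can be combined on a hypothetical dataset to guarantee and cap the space it references itself, excluding descendents and snapshots (names and sizes below are assumptions):

```shell
# Guarantee 10G to the dataset itself (not counting descendents or snapshots),
# and cap what it can reference at 20G.
zfs set refreservation=10G tank/home/alice   # hypothetical dataset name
zfs set refquota=20G tank/home/alice
zfs get refreservation,refquota tank/home/alice
```
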
- Controls the manner in which the access time is updated when atime=on is set. Turning this property on causes the access time to be updated relative to the modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time or if the existing access time hasn't been updated within the past 24 hours. The default value is off. The values on and off are equivalent to the relatime and norelatime mount options.
- The minimum amount of space guaranteed to a dataset and its descendants. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by its reservation. Reservations are accounted for in the parent datasets' space used, and count against the parent datasets' quotas and reservations. This property can also be referred to by its shortened column name, reserv.
- Controls what is cached in the secondary cache (L2ARC). If this property is set to all, then both user data and metadata is cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.
- Controls whether the setuid bit is respected for the file system. The default value is on. The values on and off are equivalent to the suid and nosuid mount options.
- Controls whether the file system is shared by using Samba USERSHARES, and what options are to be used. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the net(8) command is invoked to create a USERSHARE. Because SMB shares require a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name, except that any characters that would be invalid in the resource name are replaced with underscore (_) characters. Linux does not currently support additional options which might be available on Solaris. If the sharesmb property is set to off, the file systems are unshared. By default, the share is created with the ACL (Access Control List) "Everyone:F" ("F" stands for "full permissions", i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user via system passwd/shadow, LDAP, or smbpasswd). This means that any additional access control (e.g. disallowing specific users access) must be done on the underlying file system.
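A minimal sketch of the SMB sharing workflow described above, assuming a hypothetical dataset and a configured Samba installation:

```shell
# Share a hypothetical dataset over SMB; the USERSHARE resource name is
# derived from the dataset name, with invalid characters replaced by "_".
zfs set sharesmb=on tank/media
net usershare list                 # the constructed USERSHARE should appear here
zfs set sharesmb=off tank/media    # unshare again
```
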
- Controls whether the file system is shared via NFS, and what options are to be used. A file system with a sharenfs property of off is managed with the exportfs(8) command and entries in the /etc/exports file. Otherwise, the file system is automatically shared and unshared with the zfs share and zfs unshare commands. If the property is set to on, the dataset is shared using the default options. Note that the options are comma-separated, unlike those found in exports(5). This is done to negate the need for quoting, as well as to make parsing with scripts easier. See exports(5) for the meaning of the default options. Otherwise, the exportfs(8) command is invoked with options equivalent to the contents of this property. When the sharenfs property is changed for a dataset, the dataset and any children inheriting the property are re-shared with the new options, but only if the property was previously off, or if they were shared before the property was changed. If the new property value is off, the file systems are unshared.
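As a sketch of the NFS case, assuming a Linux host with the NFS server running (dataset name and options are illustrative, not defaults from this manual):

```shell
# Export a hypothetical dataset over NFS with explicit options.
# Note the comma-separated syntax, unlike the format of exports(5).
zfs set sharenfs=rw,crossmnt,no_subtree_check tank/export
showmount -e localhost      # list active NFS exports
zfs set sharenfs=off tank/export
```
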
- Provide a hint to ZFS about handling of synchronous requests in this dataset. If logbias is set to latency (the default), ZFS will use pool log devices (if configured) to handle the requests at low latency. If logbias is set to throughput, ZFS will not use configured pool log devices. ZFS will instead optimize synchronous operations for global pool throughput and efficient use of resources.
- Controls whether the volume snapshot devices under /dev/zvol/⟨pool⟩ are hidden or visible. The default value is hidden.
- Controls whether the .zfs directory is hidden or visible in the root of the file system as discussed in the Snapshots section of zfsconcepts(7). The default value is hidden.
- Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC). standard is the POSIX-specified behavior of ensuring all synchronous requests are written to stable storage and all devices are flushed to ensure data is not cached by device controllers (this is the default). always causes every file system transaction to be written and flushed before its system call returns. This has a large performance penalty. disabled disables synchronous requests. File system transactions are only committed to stable storage periodically. This option will give the highest performance. However, it is very dangerous as ZFS would be ignoring the synchronous transaction demands of applications such as databases or NFS. Administrators should only use this option when the risks are understood.
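For example, the trade-off above might be applied to a hypothetical scratch dataset whose contents are disposable, while leaving the POSIX default in force elsewhere:

```shell
# Dangerous: sync=disabled ignores applications' synchronous write demands.
zfs set sync=disabled tank/scratch   # hypothetical throwaway dataset
zfs get -r sync tank                 # inspect effective values across the tree
zfs inherit sync tank/scratch        # revert to the inherited/default setting
```
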
- The on-disk version of this file system, which is independent of the pool version. This property can only be set to later supported versions. See the zfs upgrade command.
- For volumes, specifies the logical size of the volume. By default,
creating a volume establishes a reservation of equal size. For storage
pools with a version number of 9 or higher, a
refreservation is set instead. Any changes to
volsize are reflected in an equivalent change
to the reservation (or refreservation). The
volsize can only be set to a multiple of
volblocksize, and cannot be zero.
The reservation is kept equal to the volume's logical size to prevent
unexpected behavior for consumers. Without the reservation, the volume
could run out of space, resulting in undefined behavior or data
corruption, depending on how the volume is used. These effects can also
occur when the volume size is changed while it is in use (particularly
when shrinking the size). Extreme care should be used when adjusting the volume size.
Though not recommended, a “sparse volume” (also known as
“thin provisioned”) can be created by specifying the
-s option to the zfs create -V command, or by changing the value of the refreservation property (or reservation property on pool version 8 or earlier) after the volume has been created. A “sparse volume” is a volume where the value of refreservation is less than the size of the volume plus the space required to store its metadata. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the refreservation. A volume that is not sparse is said to be “thick provisioned”. A sparse volume can become thick provisioned by setting refreservation to auto.
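A minimal sketch of the sparse-volume lifecycle described above (pool, path, and size are assumptions):

```shell
# Create a 100G thin-provisioned ("sparse") volume, then thicken it later.
zfs create -s -V 100G tank/vols/scratch        # -s skips the refreservation
zfs get volsize,refreservation tank/vols/scratch
zfs set refreservation=auto tank/vols/scratch  # convert to thick provisioning
```
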
- This property specifies how volumes should be exposed to the OS. Setting it to full exposes volumes as fully fledged block devices, providing maximal functionality. The value geom is just an alias for full and is kept for compatibility. Setting it to dev hides the volume's partitions. Volumes with the property set to none are not exposed outside ZFS, but can still be snapshotted, cloned, replicated, etc.; this can be suitable for backup purposes. The value default means that volume exposure is controlled by the system-wide tunable zvol_volmode, where full, dev, and none are encoded as 1, 2, and 3 respectively. The default value is full.
- Controls whether regular files should be scanned for viruses when a file is opened and closed. In addition to enabling this property, the virus scan service must also be enabled for virus scanning to occur. The default value is off. This property is not used by OpenZFS.
- Controls whether extended attributes are enabled for this file system. Two styles of extended attributes are supported: either directory based or system attribute based. The default value of on enables directory based extended attributes. This style of extended attribute imposes no practical limit on either the size or number of attributes which can be set on a file. Although under Linux the getxattr(2) and setxattr(2) system calls limit the maximum size to 64K. This is the most compatible style of extended attribute and is supported by all ZFS implementations. System attribute based xattrs can be enabled by setting the value to sa. The key advantage of this type of xattr is improved performance. Storing extended attributes as system attributes significantly decreases the amount of disk IO required. Up to 64K of data may be stored per-file in the space reserved for system attributes. If there is not enough space available for an extended attribute then it will be automatically written as a directory based xattr. System attribute based extended attributes are not accessible on platforms which do not support the xattr=sa feature. OpenZFS supports xattr=sa on both FreeBSD and Linux. The use of system attribute based xattrs is strongly encouraged for users of SELinux or POSIX ACLs. Both of these features heavily rely on extended attributes and benefit significantly from the reduced access time. The values on and off are equivalent to the xattr and noxattr mount options.
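As a sketch of the xattr=sa recommendation above, assuming a Linux host with the attr tools installed (dataset and attribute names are hypothetical):

```shell
# Store extended attributes as system attributes to reduce xattr disk I/O,
# e.g. for SELinux- or POSIX-ACL-heavy workloads.
zfs set xattr=sa tank/containers
setfattr -n user.note -v hello /tank/containers/file   # stored as a system attribute
getfattr -n user.note /tank/containers/file
```
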
- Controls whether the dataset is managed from a jail. See zfs-jail(8) for more information. Jails are a FreeBSD feature and are not relevant on other platforms. The default value is off.
- Controls whether the dataset is managed from a non-global zone. Zones are a Solaris feature and are not relevant on other platforms. The default value is off.
If these properties are not set with the zfs create or zpool create commands, they are inherited from the parent dataset. If the parent dataset lacks these properties due to having been created prior to these features being supported, the new file system will have the default values for these properties.
- Indicates whether the file name matching algorithm used by the file system should be case-sensitive, case-insensitive, or allow a combination of both styles of matching. The default value for the casesensitivity property is sensitive. Traditionally, UNIX and POSIX file systems have case-sensitive file names. The mixed value for the casesensitivity property indicates that the file system can support requests for both case-sensitive and case-insensitive matching behavior. Currently, case-insensitive matching behavior on a file system that supports mixed behavior is limited to the SMB server product. For more information about the mixed value behavior, see the "ZFS Administration Guide".
- Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified, names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.
- Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.
When a file system is mounted, either through mount(8) for legacy mounts or the zfs mount command for normal file systems, its mount options are set according to its properties. The correlation between properties and mount options is as follows:
In addition, these options can be set on a per-mount basis using the -o option, without affecting the property that is stored on disk. The values specified on the command line override the values stored in the dataset. The nosuid option is an alias for nodevices,nosetuid. These properties are reported as “temporary” by the zfs get command. If the properties are changed while the dataset is mounted, the new setting overrides any temporary settings.
set, and so forth) can be used to manipulate both native properties and user properties. Use the
zfs inherit command to clear a user property. If the property is not defined in any parent dataset, it is removed entirely. Property values are limited to 8192 bytes.
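A minimal sketch of the user-property workflow, with an illustrative namespaced property name (user properties must contain a colon; names, values, and datasets below are assumptions):

```shell
# Set, read, and clear a user property; the "com.example:" prefix is an
# illustrative reversed-DNS namespace, not one defined by ZFS itself.
zfs set com.example:backup-policy=daily tank/home
zfs get com.example:backup-policy tank/home
zfs inherit com.example:backup-policy tank/home   # removed entirely if no parent defines it
```
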
|May 24, 2021||Debian|