ZPOOL-EVENTS(8) | System Manager's Manual | ZPOOL-EVENTS(8) |
NAME
zpool-events — list recent events generated by kernel
SYNOPSIS
zpool events [-vHf] [pool]
zpool events -c
DESCRIPTION
Lists all recent events generated by the ZFS kernel modules. These events are consumed by zed(8) and used to automate administrative tasks such as replacing a failed device with a hot spare. For more information about the subclasses and event payloads that can be generated, see the EVENTS section and the sections that follow.
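As a quick illustration of the synopsis above (the pool name tank is hypothetical), a typical inspection session might look like the following:

    # Show recent events; -v also prints the entire payload of each event.
    $ zpool events
    $ zpool events -v

    # Follow new events for a single pool as they are generated.
    $ zpool events -f tank

    # Clear the event buffer once the events have been reviewed.
    $ zpool events -c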
OPTIONS
- -c
- Clear all previous events.
- -f
- Follow mode.
- -H
- Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.
- -v
- Print the entire payload for each event.
EVENTS
These are the different event subclasses. The full event name would be ereport.fs.zfs.SUBCLASS, but only the last part is listed here.
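For example, a checksum error on a pool member is reported under the full class name ereport.fs.zfs.checksum, so a simple, hypothetical way to isolate such reports is to filter the event log on that string:

    # List only checksum error reports from the event log.
    $ zpool events | grep ereport.fs.zfs.checksum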
- checksum
- Issued when a checksum error has been detected.
- io
- Issued when there is an I/O error in a vdev in the pool.
- data
- Issued when there have been data errors in the pool.
- deadman
- Issued when an I/O request is determined to be "hung"; this can be caused by lost completion events due to flaky hardware or drivers. See zfs_deadman_failmode in zfs(4) for additional information regarding "hung" I/O detection and configuration.
- delay
- Issued when a completed I/O request exceeds the maximum allowed time specified by the zio_slow_io_ms module parameter. This can be an indicator of problems with the underlying storage device. The number of delay events is ratelimited by the zfs_slow_io_events_per_second module parameter.
- dio_verify
- Issued when a checksum verification error has been detected after a Direct I/O write has been issued. This event can only take place if the module parameter zfs_vdev_direct_write_verify is not set to zero. See zfs(4) for more details on the zfs_vdev_direct_write_verify module parameter.
- config
- Issued every time a vdev change has been made to the pool.
- zpool
- Issued when a pool cannot be imported.
- zpool.destroy
- Issued when a pool is destroyed.
- zpool.export
- Issued when a pool is exported.
- zpool.import
- Issued when a pool is imported.
- zpool.reguid
- Issued when a REGUID (a new unique identifier for the pool) has been generated.
- vdev.unknown
- Issued when the vdev is unknown, such as when trying to clear device errors on a vdev that has failed or been removed from the system/pool and is no longer available.
- vdev.open_failed
- Issued when a vdev could not be opened (for example, because it did not exist).
- vdev.corrupt_data
- Issued when corrupt data has been detected on a vdev.
- vdev.no_replicas
- Issued when there are no more replicas to sustain the pool. This would lead to the pool being DEGRADED.
- vdev.bad_guid_sum
- Issued when a missing device in the pool has been detected.
- vdev.too_small
- Issued when the system (kernel) has removed a device, and ZFS notices that the device is no longer there. This is usually followed by a probe_failure event.
- vdev.bad_label
- Issued when the label is OK but invalid.
- vdev.bad_ashift
- Issued when the ashift alignment requirement has increased.
- vdev.remove
- Issued when a vdev is detached from a mirror (or a spare is detached from a vdev where it has been used to replace a failed drive; only works if the original drive has been re-added).
- vdev.clear
- Issued when clearing device errors in a pool, such as running zpool clear on a device in the pool.
- vdev.check
- Issued when a check to see if a given vdev could be opened is started.
- vdev.spare
- Issued when a spare has kicked in to replace a failed device.
- vdev.autoexpand
- Issued when a vdev can be automatically expanded.
- io_failure
- Issued when there is an I/O failure in a vdev in the pool.
- probe_failure
- Issued when a probe fails on a vdev. This would occur if a vdev has been removed from the system outside of ZFS (such as when the kernel has removed the device).
- log_replay
- Issued when the intent log cannot be replayed. This can occur in the case of a missing or damaged log device.
- resilver.start
- Issued when a resilver is started.
- resilver.finish
- Issued when the running resilver has finished.
- scrub.start
- Issued when a scrub is started on a pool.
- scrub.finish
- Issued when a pool has finished scrubbing.
- scrub.abort
- Issued when a scrub is aborted on a pool.
- scrub.resume
- Issued when a scrub is resumed on a pool.
- scrub.paused
- Issued when a scrub is paused on a pool.
- bootfs.vdev.attach
PAYLOADS
This is the payload (data, information) that accompanies an event.
For zed(8), these names are uppercased and prefixed with ZEVENT_.
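As a minimal sketch only (the log path is hypothetical, and the variables shown correspond to the pool, vdev_path, and vdev_state payloads listed below), a zed(8) ZEDLET could consume these environment variables like this:

    #!/bin/sh
    # Hypothetical ZEDLET: append the pool, vdev path, and vdev state
    # carried by the triggering event to a local log file.
    echo "pool=${ZEVENT_POOL:-unknown} vdev=${ZEVENT_VDEV_PATH:-n/a} state=${ZEVENT_VDEV_STATE:-n/a}" \
        >> /var/log/zed-payload-demo.log

The individual payloads are described below.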
- pool
- Pool name.
- pool_failmode
- Failmode - wait, continue, or panic. See the failmode property in zpoolprops(7) for more information.
- pool_guid
- The GUID of the pool.
- pool_context
- The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover, 5=error).
- vdev_guid
- The GUID of the vdev in question (the vdev failing or operated upon with zpool clear, etc.).
- vdev_type
- Type of vdev - disk, file, mirror, etc. See the Virtual Devices section of zpoolconcepts(7) for more information on possible values.
- vdev_path
- Full path of the vdev, including any -partX.
- vdev_devid
- ID of vdev (if any).
- vdev_fru
- Physical FRU location.
- vdev_state
- State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to open, 5=faulted, 6=degraded, 7=healthy).
- vdev_ashift
- The ashift value of the vdev.
- vdev_complete_ts
- The time the last I/O request completed for the specified vdev.
- vdev_delta_ts
- The time since the last I/O request completed for the specified vdev.
- vdev_spare_paths
- List of spares, including full path and any -partX.
- vdev_spare_guids
- GUID(s) of spares.
- vdev_read_errors
- The number of read errors that have been detected on the vdev.
- vdev_write_errors
- The number of write errors that have been detected on the vdev.
- vdev_cksum_errors
- The number of checksum errors that have been detected on the vdev.
- parent_guid
- GUID of the vdev parent.
- parent_type
- Type of parent. See vdev_type.
- parent_path
- Path of the vdev parent (if any).
- parent_devid
- ID of the vdev parent (if any).
- zio_objset
- The object set number for a given I/O request.
- zio_object
- The object number for a given I/O request.
- zio_level
- The indirect level for the block. Level 0 is the lowest level and includes data blocks. Values > 0 indicate metadata blocks at the appropriate level.
- zio_blkid
- The block ID for a given I/O request.
- zio_err
- The error number for a failure when handling a given I/O request, compatible with errno(3) with the value of EBADE used to indicate a ZFS checksum error.
- zio_offset
- The offset in bytes of where to write the I/O request for the specified vdev.
- zio_size
- The size in bytes of the I/O request.
- zio_flags
- The current flags describing how the I/O request should be handled. See the I/O FLAGS section for the full list of I/O flags.
- zio_stage
- The current stage of the I/O in the pipeline. See the I/O STAGES section for a full list of all the I/O stages.
- zio_pipeline
- The valid pipeline stages for the I/O. See the I/O STAGES section for a full list of all the I/O stages.
- zio_delay
- The time elapsed (in nanoseconds) waiting for the block layer to complete the I/O request. Unlike zio_delta, this does not include any vdev queuing time and is therefore solely a measure of the block layer performance.
- zio_timestamp
- The time when a given I/O request was submitted.
- zio_delta
- The time required to service a given I/O request.
- prev_state
- The previous state of the vdev.
- cksum_algorithm
- Checksum algorithm used. See zfsprops(7) for more information on the available checksum algorithms.
- cksum_byteswap
- Whether or not the data is byteswapped.
- bad_ranges
- [start, end) pairs of corruption offsets. Offsets are always aligned on a 64-bit boundary, and can include some gaps of non-corruption. (See bad_ranges_min_gap)
- bad_ranges_min_gap
- In order to bound the size of the bad_ranges array, gaps of non-corruption less than or equal to bad_ranges_min_gap bytes have been merged with adjacent corruption. Always at least 8 bytes, since corruption is detected on a 64-bit word basis.
- bad_range_sets
- This array has one element per range in bad_ranges. Each element contains the count of bits in that range which were clear in the good data and set in the bad data.
- bad_range_clears
- This array has one element per range in bad_ranges. Each element contains the count of bits for that range which were set in the good data and clear in the bad data.
- bad_set_bits
- If this field exists, it is an array of (bad data & ~(good data)); that is, the bits set in the bad data which are cleared in the good data. Each element corresponds to a byte whose offset is in a range in bad_ranges, and the array is ordered by offset. Thus, the first element is the first byte in the first bad_ranges range, and the last element is the last byte in the last bad_ranges range.
- bad_cleared_bits
- Like bad_set_bits, but contains (good data & ~(bad data)); that is, the bits set in the good data which are cleared in the bad data.
I/O STAGES
The ZFS I/O pipeline is composed of various stages, which are defined below. The individual stages are used to construct these basic I/O operations: Read, Write, Free, Claim, Flush, and Trim (abbreviated R, W, F, C, X, and T in the Operations column). These stages may be set on an event to describe the life cycle of a given I/O request.
Stage | Bit Mask | Operations |
ZIO_STAGE_OPEN | 0x00000001 | RWFCXT |
ZIO_STAGE_READ_BP_INIT | 0x00000002 | R----- |
ZIO_STAGE_WRITE_BP_INIT | 0x00000004 | -W---- |
ZIO_STAGE_FREE_BP_INIT | 0x00000008 | --F--- |
ZIO_STAGE_ISSUE_ASYNC | 0x00000010 | -WF--T |
ZIO_STAGE_WRITE_COMPRESS | 0x00000020 | -W---- |
ZIO_STAGE_ENCRYPT | 0x00000040 | -W---- |
ZIO_STAGE_CHECKSUM_GENERATE | 0x00000080 | -W---- |
ZIO_STAGE_NOP_WRITE | 0x00000100 | -W---- |
ZIO_STAGE_BRT_FREE | 0x00000200 | --F--- |
ZIO_STAGE_DDT_READ_START | 0x00000400 | R----- |
ZIO_STAGE_DDT_READ_DONE | 0x00000800 | R----- |
ZIO_STAGE_DDT_WRITE | 0x00001000 | -W---- |
ZIO_STAGE_DDT_FREE | 0x00002000 | --F--- |
ZIO_STAGE_GANG_ASSEMBLE | 0x00004000 | RWFC-- |
ZIO_STAGE_GANG_ISSUE | 0x00008000 | RWFC-- |
ZIO_STAGE_DVA_THROTTLE | 0x00010000 | -W---- |
ZIO_STAGE_DVA_ALLOCATE | 0x00020000 | -W---- |
ZIO_STAGE_DVA_FREE | 0x00040000 | --F--- |
ZIO_STAGE_DVA_CLAIM | 0x00080000 | ---C-- |
ZIO_STAGE_READY | 0x00100000 | RWFCXT |
ZIO_STAGE_VDEV_IO_START | 0x00200000 | RW--XT |
ZIO_STAGE_VDEV_IO_DONE | 0x00400000 | RW--XT |
ZIO_STAGE_VDEV_IO_ASSESS | 0x00800000 | RW--XT |
ZIO_STAGE_CHECKSUM_VERIFY | 0x01000000 | R----- |
ZIO_STAGE_DIO_CHECKSUM_VERIFY | 0x02000000 | -W---- |
ZIO_STAGE_DONE | 0x04000000 | RWFCXT |
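The zio_stage payload holds one of the mask values above, and zio_pipeline is effectively a bit mask over them, so both can be tested with ordinary shell arithmetic. A small sketch follows (the zio_pipeline value is made up for illustration):

    # Hypothetical zio_pipeline value taken from a zpool events -v payload.
    pipeline=0x00e01110

    # Test whether the pipeline includes ZIO_STAGE_VDEV_IO_START (0x00200000).
    if [ $(( pipeline & 0x00200000 )) -ne 0 ]; then
        echo "pipeline includes ZIO_STAGE_VDEV_IO_START"
    fi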
I/O FLAGS
Every I/O request in the pipeline contains a set of flags which describe its function and are used to govern its behavior. These flags will be set in an event as a zio_flags payload entry.
Flag | Bit Mask |
ZIO_FLAG_DONT_AGGREGATE | 0x00000001 |
ZIO_FLAG_IO_REPAIR | 0x00000002 |
ZIO_FLAG_SELF_HEAL | 0x00000004 |
ZIO_FLAG_RESILVER | 0x00000008 |
ZIO_FLAG_SCRUB | 0x00000010 |
ZIO_FLAG_SCAN_THREAD | 0x00000020 |
ZIO_FLAG_PHYSICAL | 0x00000040 |
ZIO_FLAG_CANFAIL | 0x00000080 |
ZIO_FLAG_SPECULATIVE | 0x00000100 |
ZIO_FLAG_CONFIG_WRITER | 0x00000200 |
ZIO_FLAG_DONT_RETRY | 0x00000400 |
ZIO_FLAG_NODATA | 0x00001000 |
ZIO_FLAG_INDUCE_DAMAGE | 0x00002000 |
ZIO_FLAG_IO_ALLOCATING | 0x00004000 |
ZIO_FLAG_IO_RETRY | 0x00008000 |
ZIO_FLAG_PROBE | 0x00010000 |
ZIO_FLAG_TRYHARD | 0x00020000 |
ZIO_FLAG_OPTIONAL | 0x00040000 |
ZIO_FLAG_DONT_QUEUE | 0x00080000 |
ZIO_FLAG_DONT_PROPAGATE | 0x00100000 |
ZIO_FLAG_IO_BYPASS | 0x00200000 |
ZIO_FLAG_IO_REWRITE | 0x00400000 |
ZIO_FLAG_RAW_COMPRESS | 0x00800000 |
ZIO_FLAG_RAW_ENCRYPT | 0x01000000 |
ZIO_FLAG_GANG_CHILD | 0x02000000 |
ZIO_FLAG_DDT_CHILD | 0x04000000 |
ZIO_FLAG_GODFATHER | 0x08000000 |
ZIO_FLAG_NOPWRITE | 0x10000000 |
ZIO_FLAG_REEXECUTED | 0x20000000 |
ZIO_FLAG_DELEGATED | 0x40000000 |
ZIO_FLAG_FASTWRITE | 0x80000000 |
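The zio_flags payload is likewise a bit mask over the flag values above. A minimal sketch that prints the set flags (the flags value is hypothetical, and only a few flags from the table are checked for brevity):

    # Hypothetical zio_flags value taken from a zpool events -v payload.
    flags=0x00080090

    # Check a subset of the flags from the table above.
    for entry in ZIO_FLAG_SCRUB:0x00000010 \
                 ZIO_FLAG_CANFAIL:0x00000080 \
                 ZIO_FLAG_DONT_QUEUE:0x00080000; do
        name=${entry%%:*}
        mask=${entry##*:}
        [ $(( flags & mask )) -ne 0 ] && echo "$name"
    done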
SEE ALSO
zfs(4), zed(8), zpool-wait(8)
February 28, 2024 | Debian |