
ZPOOL-STATUS(8) System Manager's Manual ZPOOL-STATUS(8)

NAME
zpool-status - show detailed health status for ZFS storage pools

SYNOPSIS
zpool status [--power] [-j [--json-int, --json-flat-vdevs, --json-pool-key-guid]] [-c [SCRIPT1[,SCRIPT2]…]] [-dDegiLpPstvx] [-T u|d] [pool]… [interval [count]]

DESCRIPTION
Displays the detailed health status for the given pools. If no pool is specified, then the status of each pool in the system is displayed. For more information on pool and device health, see the Device Failure and Recovery section of zpoolconcepts(7).

If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change.
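
For example, while a scrub is running, the scan line reports the amount of data scanned and issued so far, the percentage done, and the estimated time remaining. The invocation and output below are purely illustrative; the pool name, rates, and times are hypothetical and the rest of the output is elided:

# zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub in progress since Mon Feb 12 10:14:03 2024
        1.21T / 7.31T scanned at 412M/s, 852G / 7.31T issued at 283M/s
        0B repaired, 11.38% done, 06:39:23 to go
...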

--power
Display vdev enclosure slot power status (on or off).
-c [SCRIPT1[,SCRIPT2]…]
Run a script (or scripts) on each vdev and include the output as a new column in the zpool status output. See the -c option of zpool iostat for complete details.
-j [--json-int, --json-flat-vdevs, --json-pool-key-guid]
Display the status for ZFS pools in JSON format. Specify --json-int to display numbers in integer format instead of strings. Specify --json-flat-vdevs to display vdevs in a flat hierarchy instead of nested vdev objects. Specify --json-pool-key-guid to set the pool GUID as the key for pool objects instead of pool names.
-d
Display the number of Direct I/O write checksum verify errors that have occurred on a top-level VDEV. See zfs_vdev_direct_write_verify in zfs(4) for details about the conditions that can cause Direct I/O write checksum verify failures to occur.
-D
Display a histogram of deduplication statistics, showing the allocated (physically present on disk) and referenced (logically referenced in the pool) block counts and sizes by reference count. If repeated (-DD), also shows statistics on how much of the DDT is resident in the ARC.
-e
Only show unhealthy vdevs (not-ONLINE or with errors).
-g
Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands.
-i
Display vdev initialization status.
-L
Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it.
-p
Display numbers in parsable (exact) values.
-P
Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the -L flag.
-s
Display the number of leaf vdev slow I/O operations. This is the number of I/O operations that didn't complete within zio_slow_io_ms milliseconds (30 seconds by default). This does not necessarily mean the I/O operations failed to complete, just that they took an unreasonably long amount of time. This may indicate a problem with the underlying storage.
-t
Display vdev TRIM status.
-T u|d
Display a time stamp. Specify u for a printed representation of the internal representation of time. See time(1). Specify d for standard date format. See date(1).
-v
Displays verbose data error information, printing out a complete list of all data errors since the last complete pool scrub. If the head_errlog feature is enabled and files containing errors have been removed, then the respective filenames will not be reported in subsequent runs of this command.
-x
Only display status for pools that are exhibiting errors or are otherwise unavailable. Warnings about pools not using the latest on-disk format will not be included.
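
For example, -x makes a quick health check easy to script: when every imported pool is healthy, the command prints a single summary line instead of the full status. The following is an illustrative invocation on a system with only healthy pools:

# zpool status -x
all pools are healthy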

EXAMPLES
Additional columns can be added to the zpool status and zpool iostat output with -c.

# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----

zpool status can output in JSON format if -j is specified. -c can be used to run a script on each VDEV.

# zpool status -j -c vendor,model,size | jq
{
  "output_version": {
    "command": "zpool status",
    "vers_major": 0,
    "vers_minor": 1
  },
  "pools": {
    "tank": {
      "name": "tank",
      "state": "ONLINE",
      "guid": "3920273586464696295",
      "txg": "16597",
      "spa_version": "5000",
      "zpl_version": "5",
      "status": "OK",
      "vdevs": {
        "tank": {
          "name": "tank",
          "alloc_space": "62.6G",
          "total_space": "15.0T",
          "def_space": "11.3T",
          "read_errors": "0",
          "write_errors": "0",
          "checksum_errors": "0",
          "vdevs": {
            "raidz1-0": {
              "name": "raidz1-0",
              "vdev_type": "raidz",
              "guid": "763132626387621737",
              "state": "HEALTHY",
              "alloc_space": "62.5G",
              "total_space": "10.9T",
              "def_space": "7.26T",
              "rep_dev_size": "10.9T",
              "read_errors": "0",
              "write_errors": "0",
              "checksum_errors": "0",
              "vdevs": {
                "ca1eb824-c371-491d-ac13-37637e35c683": {
                  "name": "ca1eb824-c371-491d-ac13-37637e35c683",
                  "vdev_type": "disk",
                  "guid": "12841765308123764671",
                  "path": "/dev/disk/by-partuuid/ca1eb824-c371-491d-ac13-37637e35c683",
                  "state": "HEALTHY",
                  "rep_dev_size": "3.64T",
                  "phys_space": "3.64T",
                  "read_errors": "0",
                  "write_errors": "0",
                  "checksum_errors": "0",
                  "vendor": "ATA",
                  "model": "WDC WD40EFZX-68AWUN0",
                  "size": "3.6T"
                },
                "97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7": {
                  "name": "97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7",
                  "vdev_type": "disk",
                  "guid": "1527839927278881561",
                  "path": "/dev/disk/by-partuuid/97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7",
                  "state": "HEALTHY",
                  "rep_dev_size": "3.64T",
                  "phys_space": "3.64T",
                  "read_errors": "0",
                  "write_errors": "0",
                  "checksum_errors": "0",
                  "vendor": "ATA",
                  "model": "WDC WD40EFZX-68AWUN0",
                  "size": "3.6T"
                },
                "e9ddba5f-f948-4734-a472-cb8aa5f0ff65": {
                  "name": "e9ddba5f-f948-4734-a472-cb8aa5f0ff65",
                  "vdev_type": "disk",
                  "guid": "6982750226085199860",
                  "path": "/dev/disk/by-partuuid/e9ddba5f-f948-4734-a472-cb8aa5f0ff65",
                  "state": "HEALTHY",
                  "rep_dev_size": "3.64T",
                  "phys_space": "3.64T",
                  "read_errors": "0",
                  "write_errors": "0",
                  "checksum_errors": "0",
                  "vendor": "ATA",
                  "model": "WDC WD40EFZX-68AWUN0",
                  "size": "3.6T"
                }
              }
            }
          }
        }
      },
      "dedup": {
        "mirror-2": {
          "name": "mirror-2",
          "vdev_type": "mirror",
          "guid": "2227766268377771003",
          "state": "HEALTHY",
          "alloc_space": "89.1M",
          "total_space": "3.62T",
          "def_space": "3.62T",
          "rep_dev_size": "3.62T",
          "read_errors": "0",
          "write_errors": "0",
          "checksum_errors": "0",
          "vdevs": {
            "db017360-d8e9-4163-961b-144ca75293a3": {
              "name": "db017360-d8e9-4163-961b-144ca75293a3",
              "vdev_type": "disk",
              "guid": "17880913061695450307",
              "path": "/dev/disk/by-partuuid/db017360-d8e9-4163-961b-144ca75293a3",
              "state": "HEALTHY",
              "rep_dev_size": "3.63T",
              "phys_space": "3.64T",
              "read_errors": "0",
              "write_errors": "0",
              "checksum_errors": "0",
              "vendor": "ATA",
              "model": "WDC WD40EFZX-68AWUN0",
              "size": "3.6T"
            },
            "952c3baf-b08a-4a8c-b7fa-33a07af5fe6f": {
              "name": "952c3baf-b08a-4a8c-b7fa-33a07af5fe6f",
              "vdev_type": "disk",
              "guid": "10276374011610020557",
              "path": "/dev/disk/by-partuuid/952c3baf-b08a-4a8c-b7fa-33a07af5fe6f",
              "state": "HEALTHY",
              "rep_dev_size": "3.63T",
              "phys_space": "3.64T",
              "read_errors": "0",
              "write_errors": "0",
              "checksum_errors": "0",
              "vendor": "ATA",
              "model": "WDC WD40EFZX-68AWUN0",
              "size": "3.6T"
            }
          }
        }
      },
      "special": {
        "25d418f8-92bd-4327-b59f-7ef5d5f50d81": {
          "name": "25d418f8-92bd-4327-b59f-7ef5d5f50d81",
          "vdev_type": "disk",
          "guid": "3935742873387713123",
          "path": "/dev/disk/by-partuuid/25d418f8-92bd-4327-b59f-7ef5d5f50d81",
          "state": "HEALTHY",
          "alloc_space": "37.4M",
          "total_space": "444G",
          "def_space": "444G",
          "rep_dev_size": "444G",
          "phys_space": "447G",
          "read_errors": "0",
          "write_errors": "0",
          "checksum_errors": "0",
          "vendor": "ATA",
          "model": "Micron_5300_MTFDDAK480TDS",
          "size": "447.1G"
        }
      },
      "error_count": "0"
    }
  }
}
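
Because the JSON output is machine-readable, it can be post-processed with standard tools such as jq(1). The pipeline below is an illustrative sketch that summarizes each pool from the example above; with --json-int, numeric fields such as error_count are emitted as integers rather than strings:

# zpool status -j --json-int | jq '.pools[] | {name, state, error_count}'
{
  "name": "tank",
  "state": "ONLINE",
  "error_count": 0
}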

SEE ALSO
zpool-events(8), zpool-history(8), zpool-iostat(8), zpool-list(8), zpool-resilver(8), zpool-scrub(8), zpool-wait(8)

February 14, 2024 Debian