[OmniOS-discuss] CKSUM error

Martin Truhlář martin.truhlar at archcon.cz
Thu Apr 28 11:44:43 UTC 2016


root at archnas:/root# zpool status -x
  pool: dpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 15h7m with 0 errors on Sun Apr 24 10:07:38 2016
config:

        NAME                       STATE     READ WRITE CKSUM
        dpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c1t50014EE00400FA16d0  ONLINE       0     0     0
            c1t50014EE2B40F14DBd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c1t50014EE05950B131d0  ONLINE       0     0     0
            c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0
          mirror-2                 ONLINE       0     0     0
            c1t50014EE05958C51Bd0  ONLINE       0     0     0
            c1t50014EE0595617ACd0  ONLINE       0     0     0
          mirror-3                 ONLINE       0     0     0
            c1t50014EE0596B5DF9d0  ONLINE       0     0     1
            c1t50014EE0AEAE9B65d0  ONLINE       0     0     0
          mirror-5                 ONLINE       0     0     0
            c1t50014EE0AEABB8E7d0  ONLINE       0     0     0
            c1t50014EE0AEB44327d0  ONLINE       0     0     0
        logs
          c1t55CD2E4000050AC9d0    ONLINE       0     0     0
        cache
          c1t55CD2E4000339A59d0    ONLINE       0     0     0
        spares
          c2t2d0                   AVAIL

errors: No known data errors
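Per the "action" text above, a single checksum error on an otherwise healthy mirror is usually handled by clearing the counter and re-scrubbing to see whether it recurs. A hedged sketch (pool and device names taken from the output above; run only if you have decided not to replace the disk yet):

```shell
# Reset the error counters for the affected device in dpool.
zpool clear dpool c1t50014EE0596B5DF9d0

# Re-verify every block on the pool; errors that reappear suggest
# failing hardware rather than a one-off transient fault.
zpool scrub dpool

# Watch scrub progress and any new READ/WRITE/CKSUM counts.
zpool status -v dpool
```

If the CKSUM count climbs again after the scrub, `zpool replace dpool c1t50014EE0596B5DF9d0 <new-device>` (with a real spare device name) is the next step, as the ZFS-8000-9P message page describes.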



root at archnas:/root# zpool iostat -v
                              capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
dpool                      2.48T  2.05T    567    214  4.42M  1.74M
  mirror                    533G   395G    112     20   896K   137K
    c1t50014EE00400FA16d0      -      -     20      7   746K   139K
    c1t50014EE2B40F14DBd0      -      -     16      6   743K   139K
  mirror                    519G   409G    110     34   885K   243K
    c1t50014EE05950B131d0      -      -     19     10   774K   245K
    c1t50014EE2B5E5A6B8d0      -      -     17     10   774K   245K
  mirror                    518G   410G    112     35   896K   251K
    c1t50014EE05958C51Bd0      -      -     19     10   776K   252K
    c1t50014EE0595617ACd0      -      -     20     10   778K   252K
  mirror                    519G   409G    112     38   897K   265K
    c1t50014EE0596B5DF9d0      -      -     19     11   779K   267K
    c1t50014EE0AEAE9B65d0      -      -     19     11   777K   266K
  mirror                    454G   474G    119     39   956K   274K
    c1t50014EE0AEABB8E7d0      -      -     20     10   762K   276K
    c1t50014EE0AEB44327d0      -      -     20     10   763K   276K
logs                           -      -      -      -      -      -
  c1t55CD2E4000050AC9d0    52.7M   222G      0     45      1   614K
cache                          -      -      -      -      -      -
  c1t55CD2E4000339A59d0     147G  21.1G     19      2   159K   287K
-------------------------  -----  -----  -----  -----  -----  -----
epool                      2.37T  8.50T    162    181  1.27M  3.35M
  raidz1                   2.37T  8.50T    162    181  1.27M  3.35M
    c1t50014EE20CA7D920d0      -      -     36     20   714K  1.35M
    c1t50014EE20CF9CAD6d0      -      -     18     17   352K  1.02M
    c1t50014EE20CF9E0D8d0      -      -     36     20   714K  1.35M
    c1t50014EE2B7A5FDF7d0      -      -     18     17   354K  1.03M
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      18.8G   445G      0      0  19.9K  2.28K
  mirror                   18.8G   445G      0      0  19.9K  2.28K
    c2t5d0s0                   -      -      0      0  19.8K  2.89K
    c2t0d0s0                   -      -      0      0    352  21.7K
-------------------------  -----  -----  -----  -----  -----  -----


From: Jozsef Brogyanyi [mailto:brogyi at gmail.com] 
Sent: Thursday, April 28, 2016 1:06 PM
To: Martin Truhlář <martin.truhlar at archcon.cz>
Subject: Re: [OmniOS-discuss] CKSUM error

Hi Martin
Can you try running a scrub on your system? Then please run zpool status -x and send me the output, along with the output of zpool iostat -v. I'm curious about something. Thanks.
If the scrub doesn't help, it is most likely a disk error or a cable error.
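The check sequence suggested here can be sketched as follows (a minimal outline, assuming the pool name dpool from the thread):

```shell
# 1. Kick off a full scrub of the pool.
zpool scrub dpool

# 2. After it completes, ask ZFS to report only unhealthy pools;
#    no output here means every pool is clean.
zpool status -x

# 3. Per-vdev I/O statistics, useful for spotting a device whose
#    throughput or error pattern differs from its mirror partner.
zpool iostat -v
```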

Brogyi

2016-04-28 10:09 GMT+02:00 Martin Truhlář <martin.truhlar at archcon.cz>:
Hello,

Should I be worried that one of my mirrored disks (in mirror-3) reported a checksum error? Is any action required?

NAME                       STATE     READ WRITE CKSUM      CAP            Product /napp-it   IOstat mess
        dpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c1t50014EE00400FA16d0  ONLINE       0     0     0      1 TB           WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE       0     0     0      1 TB           WDC WD1003FBYX-0   S:0 H:0 T:0
          mirror-1                 ONLINE       0     0     0
            c1t50014EE05950B131d0  ONLINE       0     0     0      1 TB           WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0      1 TB           WDC WD1003FBYZ-0   S:0 H:0 T:0
          mirror-2                 ONLINE       0     0     0
            c1t50014EE05958C51Bd0  ONLINE       0     0     0      1 TB           WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE       0     0     0      1 TB           WDC WD1002F9YZ-0   S:0 H:0 T:0
          mirror-3                 ONLINE       0     0     0
            c1t50014EE0596B5DF9d0  ONLINE       0     0     1      1 TB           WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE       0     0     0      1 TB           WDC WD1002F9YZ-0   S:0 H:0 T:0
          mirror-5                 ONLINE       0     0     0
            c1t50014EE0AEABB8E7d0  ONLINE       0     0     0      1 TB           WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE0AEB44327d0  ONLINE       0     0     0      1 TB           WDC WD1002F9YZ-0   S:0 H:0 T:0
        logs
          c1t55CD2E4000050AC9d0    ONLINE       0     0     0      240.1 GB       INTEL SSDSC2CW24   S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE       0     0     0      180 GB         INTEL SSDSC2BW18   S:0 H:0 T:0
        spares
          c2t2d0                   AVAIL         1 TB           WDC WD10EFRX-68F   S:0 H:0 T:0
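The S:0 H:0 T:0 columns above are the soft/hard/transport error counters that napp-it reads from the OS. They can be cross-checked directly on illumos; a sketch, using the affected mirror-3 device name from the table (output format varies by driver):

```shell
# Show per-device error counters and SMART-reported details (illumos).
# All-zero Soft/Hard/Transport counts on this disk would point away
# from the drive itself and toward cabling, backplane, or controller.
iostat -En c1t50014EE0596B5DF9d0
```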

Thank you in advance for any advice
Martin
_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss at lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


