[OmniOS-discuss] ZFS data corruption

wuffers moo at wuffers.net
Wed Sep 2 15:18:33 UTC 2015


Hmm, today I did another zpool status and noticed another ZFS corruption
error:

  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 184h28m with 0 errors on Wed Aug  5 06:38:32 2015
config:

[config snipped]

errors: Permanent errors have been detected in the following files:

        tank/vmware-64k-5tb-6:<0x1>
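
The "<0x1>" in that error refers to object 1 of the dataset, which for a
zvol is the volume's data object itself, so there is no individual file to
restore. As a rough sketch (assuming the dataset name above, and that a
read-only zdb walk is acceptable on the live pool), the object can be
inspected like this:

```shell
# Dump metadata for object 1 (the zvol's data object) of the affected
# dataset; the repeated -d flags raise verbosity enough to show the
# indirect block pointers. Read-only, but can be slow on a 5TB volume.
zdb -ddddd tank/vmware-64k-5tb-6 1 | head -40
```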

I have not had any PSODs, and the SAN has not been rebooted since the
last time I ran the scrub.

# uptime
 10:52am  up 50 days  7:55,  2 users,  load average: 2.32, 2.10, 1.96

I am now migrating the VMs on that storage to the previously vacated
datastore (the one whose earlier reported ZFS corruption disappeared
after the scrub). After moving the VMs, I'll run another scrub.
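
For the follow-up scrub, a minimal sketch of starting it and waiting for
completion (pool name from above; the grep pattern matches the status
wording here and may differ on other releases):

```shell
# Start a scrub and poll until it is no longer in progress, then print
# verbose status including any files with permanent errors.
zpool scrub tank
while zpool status tank | grep -q 'scrub in progress'; do
    sleep 600    # poll every 10 minutes; the last scrub ran ~184h
done
zpool status -v tank
```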

Is there anything I should be checking? The only thing that jumps out at me
right now is that the ZeusRAMs are reporting some illegal requests in
iostat, but AFAIK they've always done that whenever I've checked, so I
assumed that was normal for a log device.
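
On the illegal-request question: `iostat -En` separates the usually benign
"Illegal Request" counter (often just unsupported mode-page probes, common
on SSDs and log devices) from the hard and transport errors that actually
matter. A small sketch of flagging only the latter — the sample lines here
are made up so the filter can be tried anywhere; on the SAN you'd pipe
`iostat -En` output in directly:

```shell
# Flag devices whose Hard or Transport error counters are non-zero in the
# per-device summary lines of `iostat -En`. Sample input stands in for
# the real output here.
sample='c4t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
c4t1d0 Soft Errors: 0 Hard Errors: 2 Transport Errors: 1'
printf '%s\n' "$sample" |
  awk '/Soft Errors:/ && ($7 > 0 || $10 > 0) {print $1, "hard:", $7, "transport:", $10}'
```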

I also did a dump from the STMF trace buffer, as per Stephan's post quoted
below (the read failures look almost identical except for the resid):

#  echo '*stmf_trace_buf/s'  | mdb -k | more
0xffffff431dcd8000:             :0005385: Imported the LU 600144f09084e25100005191a7f50001
:0005385: sbd_lp_cb: import_lu failed, ret = 149, err_ret = 4
:0005387: Imported the LU 600144f09084e251000051ec2f0f0002
:0005389: Imported the LU 600144f09084e25100005286cef70001
:0005390: Imported the LU 600144f09084e25100005286cf160002
:0005392: Imported the LU 600144f09084e25100005286cf240003
:0005393: Imported the LU 600144f09084e25100005286cf310004
:0005394: Imported the LU 600144f09084e25100005286cf3b0005
:0005396: Imported the LU 600144f09084e251000052919b220001
:0005397: Imported the LU 600144f09084e251000052919b340002
:0005398: Imported the LU 600144f09084e251000052919b460003
:0005399: Imported the LU 600144f09084e251000052919b560004
:0005401: Imported the LU 600144f09084e251000052919b630005
:0005402: Imported the LU 600144f09084e2510000550331e50001
:0005403: Imported the LU 600144f09084e2510000550331ea0002
:338370566: UIO_READ failed, ret = 5, resid = 65536
:338370566: UIO_READ failed, ret = 5, resid = 65536

Are we dealing with some sort of new bug? I am on r14.


On Mon, Aug 24, 2015 at 5:54 AM, Stephan Budach <stephan.budach at jvm.de>
wrote:

> Am 22.08.15 um 19:02 schrieb Doug Hughes:
>
> I've been experiencing spontaneous checksum failure/corruption on read at
> the zvol level recently on a box running r12 as well. None of the disks
> show any errors. All of the errors show up at the zvol level until all the
> disks in the vol get marked as degraded and then a reboot clears it up.
> Repeated scrubs find files to delete, but then after additional heavy read
> I/O activity, more checksum on read errors occur, and more files need to be
> removed. So far on r14 I haven't seen this, but I'm keeping an eye on it.
>
> The write activity on this server is very low. I'm currently trying to
> evacuate it with zfs send | mbuffer to another host over 10g, so the read
> activity is very high and consistent over a long period of time since I
> have to move about 10TB.
>
> This morning, I received another of these zvol errors, which was also
> reported up to my RAC cluster. I haven't  fully checked that yet, but I
> think the ASM/ADVM simply issued a re-read and was happy with the result.
> Otherwise ASM would have issued a read against the mirror side and probably
> have taken the "faulty" failure group offline, which it didn't.
>
> However, I was wondering how to get some more information from the STMF
> framework and found a post, how to read from the STMF trace buffer…
>
> root at nfsvmpool07:/root#  echo '*stmf_trace_buf/s'  | mdb -k | more
> 0xffffff090f828000:             :0002579: Imported the LU 600144f090860e6b0000550c3a290001
> :0002580: Imported the LU 600144f090860e6b0000550c3e240002
> :0002581: Imported the LU 600144f090860e6b0000550c3e270003
> :0002603: Imported the LU 600144f090860e6b000055925a120001
> :0002604: Imported the LU 600144f090860e6b000055a50ebf0002
> :0002604: Imported the LU 600144f090860e6b000055a8f7d70003
> :0002605: Imported the LU 600144f090860e6b000055a8f7e30004
> :150815416: UIO_READ failed, ret = 5, resid = 131072
> :224314824: UIO_READ failed, ret = 5, resid = 131072
>
> So, this basically shows two read errors, which is consistent with the
> incidents I had on this system. Unfortunately, this doesn't buy me much
> more, since I don't know how to track that further down, but it seems that
> COMSTAR had issues reading from the zvol.
>
> Is it possible to debug this further?
>
>
> On 8/21/2015 2:06 AM, wuffers wrote:
>
> Oh, the PSOD is not caused by the corruption in ZFS - I suspect it was the
> other way around (VMware host PSOD -> ZFS corruption). I've experienced the
> PSOD before, it may be related to IO issues which I outlined in another
> post here:
> http://lists.omniti.com/pipermail/omnios-discuss/2015-June/005222.html
>
> Nobody chimed in, but it's an ongoing issue. I need to dedicate more time
> to troubleshoot but other projects are taking my attention right now
> (coupled with a personal house move, time is at a premium!).
>
> Also, I've had many improper shutdowns of the hosts and VMs, and this was
> the first time I've seen a ZFS corruption.
>
> I know I'm repeating myself, but my question is still:
> - Can I safely use this block device again now that it reports no errors?
> Again, I've moved all data off of it, and there are no other signs of
> hardware issues. Recreate it?
>
> On Wed, Aug 19, 2015 at 12:49 PM, Stephan Budach <stephan.budach at jvm.de>
> wrote:
>
>> Hi Joerg,
>>
>> Am 19.08.15 um 14:59 schrieb Joerg Goltermann:
>>
>> Hi,
>>>
>>> the PSOD you got can cause the problems on your exchange database.
>>>
>>> Can you check the ESXi logs for the root cause of the PSOD?
>>>
>>> I never got a PSOD on such a "corruption". I still think this is
>>> a "cosmetic" bug, but this should be verified by one of the ZFS
>>> developers ...
>>>
>>>  - Joerg
>>
>>
>
>
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>
>

