<div dir="ltr">Hmm, today I ran another zpool status and noticed another ZFS corruption error:<div><br></div><div><div> pool: tank</div><div> state: ONLINE</div><div>status: One or more devices has experienced an error resulting in data</div><div> corruption. Applications may be affected.</div><div>action: Restore the file in question if possible. Otherwise restore the</div><div> entire pool from backup.</div><div> see: <a href="http://illumos.org/msg/ZFS-8000-8A">http://illumos.org/msg/ZFS-8000-8A</a></div><div> scan: scrub repaired 0 in 184h28m with 0 errors on Wed Aug 5 06:38:32 2015</div><div>config:</div><div><br></div><div>[config snipped]</div><div><br></div><div>errors: Permanent errors have been detected in the following files:</div><div><br></div><div> tank/vmware-64k-5tb-6:<0x1></div></div><div><br></div><div>I have not had any PSODs, and the SAN has not been rebooted since the last time I ran the scrub.</div><div><br></div><div><div># uptime</div><div> 10:52am up 50 days 7:55, 2 users, load average: 2.32, 2.10, 1.96</div></div><div><br></div><div>I am now migrating the VMs on that storage to the previously vacated datastore (the one that had reported ZFS corruption, which disappeared after the scrub). After moving the VMs, I'll run another scrub.</div><div><br></div><div>Is there anything I should be checking? 
The only thing that jumps out at me right now is that the ZeusRAMs are reporting some illegal requests in iostat, but AFAIK they've always done that whenever I've checked iostat, so I assumed that was normal for a log device.</div><div><br></div><div>I also did a dump from the STMF trace buffer as per above (the read failures look almost identical except for the resid):</div><div><br></div><div><div># echo '*stmf_trace_buf/s' | mdb -k | more</div><div>0xffffff431dcd8000: :0005385: Imported the LU 600144f09084e25100005191a7f50001</div><div>:0005385: sbd_lp_cb: import_lu failed, ret = 149, err_ret = 4</div><div>:0005387: Imported the LU 600144f09084e251000051ec2f0f0002</div><div>:0005389: Imported the LU 600144f09084e25100005286cef70001</div><div>:0005390: Imported the LU 600144f09084e25100005286cf160002</div><div>:0005392: Imported the LU 600144f09084e25100005286cf240003</div><div>:0005393: Imported the LU 600144f09084e25100005286cf310004</div><div>:0005394: Imported the LU 600144f09084e25100005286cf3b0005</div><div>:0005396: Imported the LU 600144f09084e251000052919b220001</div><div>:0005397: Imported the LU 600144f09084e251000052919b340002</div><div>:0005398: Imported the LU 600144f09084e251000052919b460003</div><div>:0005399: Imported the LU 600144f09084e251000052919b560004</div><div>:0005401: Imported the LU 600144f09084e251000052919b630005</div><div>:0005402: Imported the LU 600144f09084e2510000550331e50001</div><div>:0005403: Imported the LU 600144f09084e2510000550331ea0002</div><div>:338370566: UIO_READ failed, ret = 5, resid = 65536</div><div>:338370566: UIO_READ failed, ret = 5, resid = 65536</div></div><div><br></div><div>Are we dealing with some sort of new bug? 
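</div><div><br></div><div>For what it's worth, the ret = 5 in those UIO_READ lines looks like a plain errno, which would make it EIO. A quick way to decode it (assuming a Python interpreter is handy; the 5 is copied from the trace above):</div><div><br></div>

```shell
# decode the "ret" value from the STMF trace as an errno name and message
# (5 should come back as EIO, i.e. a low-level I/O error)
python3 -c 'import errno, os; print(errno.errorcode[5], "-", os.strerror(5))'
```

<div><br></div><div>And a resid of 65536 on the 64k volume would suggest the whole read came back empty, not just a short read.</div><div><br></div><div>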
I am on r14.</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 24, 2015 at 5:54 AM, Stephan Budach <span dir="ltr"><<a href="mailto:stephan.budach@jvm.de" target="_blank">stephan.budach@jvm.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<div>Am 22.08.15 um 19:02 schrieb Doug
Hughes:<br>
</div><span class="">
<blockquote type="cite">
I've been experiencing spontaneous checksum failure/corruption on
read at the zvol level recently on a box running r12 as well. None
of the disks show any errors. All of the errors show up at the
zvol level until all the disks in the vol get marked as degraded
and then a reboot clears it up. Repeated scrubs find files to
delete, but then after additional heavy read I/O activity, more
checksum-on-read errors occur, and more files need to be removed.
So far on r14 I haven't seen this, but I'm keeping an eye on it.<br>
<br>
The write activity on this server is very low. I'm currently
trying to evacuate it with zfs send | mbuffer to another host over
10g, so the read activity is very high and consistent over a long
period of time since I have to move about 10TB.<br>
<br>
</blockquote></span>
This morning, I received another of these zvol errors, which was
also reported up to my RAC cluster. I haven't fully checked that
yet, but I think the ASM/ADVM simply issued a re-read and was happy
with the result. Otherwise ASM would have issued a read against the
mirror side and probably have taken the "faulty" failure group
offline, which it didn't.<br>
<br>
However, I was wondering how to get some more information from the
STMF framework and found a post describing how to read from the STMF trace
buffer…<br>
<br>
<tt>root@nfsvmpool07:/root# echo '*stmf_trace_buf/s' | mdb -k |
more</tt><tt><br>
</tt><tt>0xffffff090f828000: :0002579: Imported the LU 600144f090860e6b0000550c3a290001</tt><tt><br>
</tt><tt>:0002580: Imported the LU 600144f090860e6b0000550c3e240002</tt><tt><br>
</tt><tt>:0002581: Imported the LU 600144f090860e6b0000550c3e270003</tt><tt><br>
</tt><tt>:0002603: Imported the LU 600144f090860e6b000055925a120001</tt><tt><br>
</tt><tt>:0002604: Imported the LU 600144f090860e6b000055a50ebf0002</tt><tt><br>
</tt><tt>:0002604: Imported the LU 600144f090860e6b000055a8f7d70003</tt><tt><br>
</tt><tt>:0002605: Imported the LU 600144f090860e6b000055a8f7e30004</tt><tt><br>
</tt><tt>:150815416: UIO_READ failed, ret = 5, resid = 131072</tt><tt><br>
</tt><tt>:224314824: UIO_READ failed, ret = 5, resid = 131072</tt><tt><br>
</tt><br>
So, this basically shows two read errors, which is consistent with
the incidents I had on this system. Unfortunately, this doesn't buy
me much more, since I don't know how to track it down further, but
it seems that COMSTAR had issues reading from the zvol.<br>
<br>
Is it possible to debug this further?<span class=""><br>
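One avenue might be to watch the zvol read path with DTrace while the
error reproduces. A rough sketch (the fbt probe and the zvol_read
function name are taken from the illumos source and are unverified on
this release; in an fbt return probe, arg1 is the function's return
value, so the predicate keys on a non-zero errno):<br>
<br>

```dtrace
/* print the errno whenever a zvol read fails */
fbt::zvol_read:return
/arg1 != 0/
{
    printf("zvol_read returned errno %d\n", arg1);
}
```

<br>
Running that with dtrace -s while re-reading the affected LU and seeing
a correlated errno 5 would at least confirm the EIO originates below
COMSTAR.<br>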
<br>
<blockquote type="cite"> <br>
<div>On 8/21/2015 2:06 AM, wuffers wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Oh, the PSOD is not caused by the corruption in
ZFS - I suspect it was the other way around (VMware host PSOD
-> ZFS corruption). I've experienced the PSOD before; it
may be related to IO issues, which I outlined in another post
here:
<div><a href="http://lists.omniti.com/pipermail/omnios-discuss/2015-June/005222.html" target="_blank">http://lists.omniti.com/pipermail/omnios-discuss/2015-June/005222.html</a></div>
<div><br>
</div>
<div>Nobody chimed in, but it's an ongoing issue. I need to
dedicate more time to troubleshoot but other projects are
taking my attention right now (coupled with a personal house
move, time is at a premium!).<br>
<div><br>
</div>
<div>Also, I've had many improper shutdowns of the hosts and
VMs, and this was the first time I've seen a ZFS
corruption. </div>
<div><br>
</div>
<div>I know I'm repeating myself, but my question is still:</div>
<div>- Can I safely use this block device again now that it
reports no errors? Again, I've moved all data off of it,
and there are no other signs of hardware issues. Or should I recreate
it? <br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Wed, Aug 19, 2015 at 12:49
PM, Stephan Budach <span dir="ltr"><<a href="mailto:stephan.budach@jvm.de" target="_blank">stephan.budach@jvm.de</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi
Joerg,<br>
<br>
Am 19.08.15 um 14:59 schrieb Joerg Goltermann:
<div>
<div><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> Hi,<br>
<br>
The PSOD you got can cause the problems on
your Exchange database.<br>
<br>
Can you check the ESXi logs for the root cause
of the PSOD?<br>
<br>
I never got a PSOD on such a "corruption". I
still think this is<br>
a "cosmetic" bug, but this should be verified
by one of the ZFS<br>
developers ...<br>
<br>
- Joerg</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</blockquote>
<br>
<br>
</span></div>
<br>_______________________________________________<br>
OmniOS-discuss mailing list<br>
<a href="mailto:OmniOS-discuss@lists.omniti.com">OmniOS-discuss@lists.omniti.com</a><br>
<a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss" rel="noreferrer" target="_blank">http://lists.omniti.com/mailman/listinfo/omnios-discuss</a><br>
<br></blockquote></div><br></div>