<div dir="ltr"><div><div>When it pours, it rains. With  r151006y, I had two kernel panics in quick succession while trying to create some zero thick eager disks (4 at the same time) in ESXi. They are now "kernel heap corruption detected" instead of anon_decref.<br>
<br></div><div>Kernel panic 2 (dump info: <a href="https://drive.google.com/file/d/0B7mCJnZUzJPKMHhqZHJnaDEzYkk">https://drive.google.com/file/d/0B7mCJnZUzJPKMHhqZHJnaDEzYkk</a>)<br><a href="http://i.imgur.com/eIssxmc.png?1">http://i.imgur.com/eIssxmc.png?1</a><br>
<a href="http://i.imgur.com/MXJy4zP.png?1">http://i.imgur.com/MXJy4zP.png?1</a><br><br></div>TIME                           UUID                                 SUNW-MSG-ID<br>Nov 16 2013 00:51:24.912170000 5998ba1e-3aa5-ccac-e885-be4897cfcfe8 SUNOS-8000-KL<br>
<br>  TIME                 CLASS                                 ENA<br>  Nov 16 00:51:24.8638 ireport.os.sunos.panic.dump_available 0x0000000000000000<br>  Nov 16 00:49:58.8671 ireport.os.sunos.panic.dump_pending_on_device 0x0000000000000000<br>
<br>nvlist version: 0<br>        version = 0x0<br>        class = list.suspect<br>        uuid = 5998ba1e-3aa5-ccac-e885-be4897cfcfe8<br>        code = SUNOS-8000-KL<br>        diag-time = 1384581084 866703<br>        de = fmd:///module/software-diagnosis<br>
        fault-list-sz = 0x1<br>        fault-list = (array of embedded nvlists)<br>        (start fault-list[0])<br>        nvlist version: 0<br>                version = 0x0<br>                class = defect.sunos.kernel.panic<br>
                certainty = 0x64<br>                asru = sw:///:path=/var/crash/unknown/.5998ba1e-3aa5-ccac-e885-be4897cfcfe8<br>                resource = sw:///:path=/var/crash/unknown/.5998ba1e-3aa5-ccac-e885-be4897cfcfe8<br>
                savecore-succcess = 1<br>                dump-dir = /var/crash/unknown<br>                dump-files = vmdump.1<br>                os-instance-uuid = 5998ba1e-3aa5-ccac-e885-be4897cfcfe8<br>                panicstr = kernel heap corruption detected<br>
                panicstack = fffffffffba49c04 () | genunix:kmem_slab_free+c1 () | genunix:kmem_magazine_destroy+6e () | genunix:kmem_depot_ws_reap+5d () | genunix:kmem_cache_magazine_purge+118 () | genunix:kmem_cache_magazine_resize+40 () | genunix:taskq_thread+2d0 () | unix:thread_start+8 () |<br>
                crashtime = 1384577735<br>                panic-time = Fri Nov 15 23:55:35 2013 EST<br>        (end fault-list[0])<br><br>        fault-status = 0x1<br>        severity = Major<br>        __ttl = 0x1<br>        __tod = 0x528707dc 0x365e9c10<br>
Kernel panic 3 (dump info: https://drive.google.com/file/d/0B7mCJnZUzJPKbnZIeWZzQjhUOTQ):
(looked the same on the console, no screenshots this time)
TIME                           UUID                                 SUNW-MSG-ID
Nov 16 2013 01:44:43.327489000 a6592c60-199f-ead5-9586-ff013bf5ab2d SUNOS-8000-KL

  TIME                 CLASS                                 ENA
  Nov 16 01:44:43.2941 ireport.os.sunos.panic.dump_available 0x0000000000000000
  Nov 16 01:44:03.5356 ireport.os.sunos.panic.dump_pending_on_device 0x0000000000000000

nvlist version: 0
        version = 0x0
        class = list.suspect
        uuid = a6592c60-199f-ead5-9586-ff013bf5ab2d
        code = SUNOS-8000-KL
        diag-time = 1384584283 296816
        de = fmd:///module/software-diagnosis
        fault-list-sz = 0x1
        fault-list = (array of embedded nvlists)
        (start fault-list[0])
        nvlist version: 0
                version = 0x0
                class = defect.sunos.kernel.panic
                certainty = 0x64
                asru = sw:///:path=/var/crash/unknown/.a6592c60-199f-ead5-9586-ff013bf5ab2d
                resource = sw:///:path=/var/crash/unknown/.a6592c60-199f-ead5-9586-ff013bf5ab2d
                savecore-succcess = 1
                dump-dir = /var/crash/unknown
                dump-files = vmdump.2
                os-instance-uuid = a6592c60-199f-ead5-9586-ff013bf5ab2d
                panicstr = kernel heap corruption detected
                panicstack = fffffffffba49c04 () | genunix:kmem_slab_free+c1 () | genunix:kmem_magazine_destroy+6e () | genunix:kmem_cache_magazine_purge+dc () | genunix:kmem_cache_magazine_resize+40 () | genunix:taskq_thread+2d0 () | unix:thread_start+8 () |
                crashtime = 1384582658
                panic-time = Sat Nov 16 01:17:38 2013 EST
        (end fault-list[0])

        fault-status = 0x1
        severity = Major
        __ttl = 0x1
        __tod = 0x5287145b 0x138515e8
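(For reference, both reports above came straight out of fmdump, roughly like this, with the UUID taken from the console message, so the formatting is just whatever fmdump emits:

    # verbose fault report for a single panic event
    fmdump -Vv -u 5998ba1e-3aa5-ccac-e885-be4897cfcfe8

)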
---

Now, having looked through all 3 dumps, I can see the first two contained warnings like this:

WARNING: /pci@0,0/pci8086,3c08@3/pci1000,3030@0 (mpt_sas1):
        mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31120303

/var/adm/messages also had a sprinkling of these:

Nov 15 23:36:43 san1 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3c08@3/pci1000,3030@0 (mpt_sas1):
Nov 15 23:36:43 san1    mptsas_handle_event: IOCStatus=0x8000, IOCLogInfo=0x31120303
Nov 15 23:36:43 san1 scsi: [ID 365881 kern.info] /pci@0,0/pci8086,3c08@3/pci1000,3030@0 (mpt_sas1):
Nov 15 23:36:43 san1    Log info 0x31120303 received for target 10.
Nov 15 23:36:43 san1    scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc

Following http://lists.omniti.com/pipermail/omnios-discuss/2013-March/000544.html to map the target to a disk, target 10 works out to be my Stec ZeusRAM ZIL drive, which is configured as a mirror (if I've done the mapping right). I didn't see these errors around the 3rd panic, so I don't know whether they're contributing. I may run a memtest on the system tomorrow just in case it's a hardware issue.
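For what it's worth, a quick way to tally which targets those warnings are naming, in case it's more than one device (this is just grep/awk against the messages file, not something from the thread above):

    # count how many of the 0x31120303 log-info messages name each target
    grep 'received for target' /var/adm/messages | awk '{print $NF}' | tr -d '.' | sort | uniq -c

The tr -d '.' just strips the trailing period the driver prints after the target number.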
My zpool status shows all my drives okay, with no known data errors.

Not sure how to proceed from here. My Hyper-V hosts have been using the SAN with no issues for the 2+ months it's been up and configured, over SRP and IB. I'd expect the VM hosts to crash before my SAN does.
<br></div><div class="gmail_extra">Of course, I can make the vmdump.x files available to anyone who wants to look at them (7GB, 8GB, 4GB).<br></div><div class="gmail_extra"><br></div></div>