<font face="Default Sans Serif,Verdana,Arial,Helvetica,sans-serif" size="2"><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><br></div><font color="#990099" style="font-family: Verdana, Arial, Helvetica, sans-serif;">-----"OmniOS-discuss" <omnios-discuss-bounces@lists.omniti.com> wrote: -----</font><div class="iNotesHistory" style="padding-left: 5px;"><div style="padding-right: 0px; padding-left: 5px; border-left-style: solid; border-left-color: black; border-left-width: 2px;"><font face="Verdana, Arial, Helvetica, sans-serif">To: omnios-discuss@lists.omniti.com</font><br><font face="Verdana, Arial, Helvetica, sans-serif">From: Saso Kiselkov </font><skiselkov.ml@gmail.com><br><font face="Verdana, Arial, Helvetica, sans-serif">Sent by: "OmniOS-discuss" </font><omnios-discuss-bounces@lists.omniti.com><br><font face="Verdana, Arial, Helvetica, sans-serif">Date: 2014-04-07 11:38</font><br><font face="Verdana, Arial, Helvetica, sans-serif">Subject: Re: [OmniOS-discuss] crash</font><br><br><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3">On 4/7/14, 11:19 AM, Johan Kragsterman wrote:<br>> <br>> Hi!<br>> <br>> <br>> Got a crash here that I would like someone to have a look at.<br>> <br>> [..snip..]<br>> <br>>> ::stack<br>> vpanic()<br>> vdev_deadman+0x10b(ffffff0a277f0540)<br>> vdev_deadman+0x4a(ffffff0a1eea6040)<br>> vdev_deadman+0x4a(ffffff0a1dfea580)<br>> spa_deadman+0xad(ffffff0a1cd8a580)<br>> cyclic_softint+0xf3(fffffffffbc30d20, 0)<br>> cbe_low_level+0x14()<br>> av_dispatch_softvect+0x78(2)<br>> dispatch_softint+0x39(0, 0)<br>> switch_sp_and_call+0x13()<br>> dosoftint+0x44(ffffff0045805a50)<br>> do_interrupt+0xba(ffffff0045805a50, 1)<br>> _interrupt+0xba()<br>> acpi_cpu_cstate+0x11b(ffffff0a1ce9e670)<br>> cpu_acpi_idle+0x8d()<br>> cpu_idle_adaptive+0x13()<br>> idle+0xa7()<br>> thread_start+8()<br>> [..snip..]<br>> WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a5a545088 timed 
out<br>> <br>> WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a5dc38160 timed out<br>> <br>> WARNING: ahci0: watchdog port 0 satapkt 0xffffff0a5dc642e0 timed out<br>> <br>> WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a57020388 timed out<br>> <br>> WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a57020388 timed out<br>> <br>> WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a57020388 timed out<br>> <br>> WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a57020388 timed out<br>> <br>> WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a57020388 timed out<br>> <br>> WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a57020388 timed out<br>> <br>> WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a57020388 timed out<br>> <br>> WARNING: ahci0: watchdog port 0 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> WARNING: ahci0: watchdog port 0 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> WARNING: ahci0: watchdog port 0 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a5fe32b90 timed out<br>> <br>> NOTICE: SUNW-MSG-ID: SUNOS-8000-0G, TYPE: Error, VER: 1, SEVERITY: Major<br>> <br>> <br>> panic[cpu0]/thread=ffffff00458cbc40: <br>> I/O to pool 'mainpool' appears to be hung.<br>> <br>> <br>> ffffff00458cba20 zfs:vdev_deadman+10b ()<br>> ffffff00458cba70 zfs:vdev_deadman+4a ()<br>> ffffff00458cbac0 zfs:vdev_deadman+4a ()<br>> ffffff00458cbaf0 zfs:spa_deadman+ad ()<br>> ffffff00458cbb90 genunix:cyclic_softint+f3 ()<br>> ffffff00458cbba0 unix:cbe_low_level+14 ()<br>> ffffff00458cbbf0 unix:av_dispatch_softvect+78 ()<br>> ffffff00458cbc20 
unix:dispatch_softint+39 ()<br>> ffffff00458059a0 unix:switch_sp_and_call+13 ()<br>> ffffff00458059e0 unix:dosoftint+44 ()<br>> ffffff0045805a40 unix:do_interrupt+ba ()<br>> ffffff0045805a50 unix:cmnint+ba ()<br>> ffffff0045805bc0 unix:acpi_cpu_cstate+11b ()<br>> ffffff0045805bf0 unix:cpu_acpi_idle+8d ()<br>> ffffff0045805c00 unix:cpu_idle_adaptive+13 ()<br>> ffffff0045805c20 unix:idle+a7 () <br>> ffffff0045805c30 unix:thread_start+8 ()<br>> <br>> syncing file systems... <br>> done<br>> dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel<br>> NOTICE: ahci0: ahci_tran_reset_dport port 0 reset port<br>> <br>> Would be nice to get some info about this from someone who has more clues than I do...<br><br>Essentially, this says that your SATA controller hung in a bad state<br>that isn't recoverable:<br><a href="https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/fs/zfs/spa_misc.c#L256-L261">https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/fs/zfs/spa_misc.c#L256-L261</a><br><br>I'd suspect the SATA controller. 
If this panic comes with any<br>regularity, try working around the SATA controller by using a substitute<br>HBA and disabling the old one to see if it goes away.<br><br>Cheers,<br>-- <br>Saso</font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3">Thanks, Saso!!</font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3">Yeah, that was my instinct too (though I don't have the knowledge to interpret it). It is the motherboard's integrated SATA controller; I thought I'd use it for simplicity.</font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3">I'll see if it comes back, and if it does, I'll change to an HBA, at least for the "mainpool".</font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3">Perhaps I can let the os/rpool stay on the SATA controller?</font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3">This server/workstation 
is for my new home server, not ready for work yet, so right now it isn't critical.</font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3">I've included some more mdb output from the crash, in case anyone is interested:</font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div><font face="Courier New,Courier,monospace" size="3"><div>> ::cpuinfo -v</div><div> ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC</div><div> 0 fffffffffbc3b620 1b 0 0 101 no no t-2 ffffff00458cbc40 sched</div><div> | |</div><div> RUNNING <--+ +--> PIL THREAD</div><div> READY 2 ffffff00458cbc40</div><div> EXISTS - ffffff0045805c40 (idle)</div><div> ENABLE </div><div><br></div><div> ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC</div><div> 1 ffffff0a1dd5f540 1f 0 0 -1 no no t-0 ffffff0045bf4c40</div><div> (idle)</div><div> | </div><div> RUNNING <--+ </div><div> READY </div><div> QUIESCED </div><div> EXISTS </div><div> ENABLE </div><div><br></div><div> ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC</div><div> 2 ffffff0a1dd5e040 1f 0 0 -1 no no t-0 ffffff0045c90c40</div><div> (idle)</div><div> | </div><div> RUNNING <--+ </div><div> READY </div><div> QUIESCED </div><div> EXISTS </div><div> ENABLE </div><div><br></div><div> ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC</div><div> 3 ffffff0a1dd56b00 1f 1 0 -1 no no t-2 ffffff0045cf5c40</div><div> (idle)</div><div> | |</div><div> RUNNING <--+ +--> PRI THREAD 
PROC</div><div> READY 60 ffffff0045bb8c40 sched</div><div> QUIESCED </div><div> EXISTS </div><div> ENABLE </div><div><br></div><div> ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC</div><div> 4 ffffff0a1dd51500 1f 1 0 -1 no no t-7 ffffff0046139c40</div><div> (idle)</div><div> | |</div><div> RUNNING <--+ +--> PRI THREAD PROC</div><div> READY 60 ffffff00459cdc40 sched</div><div> QUIESCED </div><div> EXISTS </div><div> ENABLE </div><div><br></div><div> ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC</div><div> 5 ffffff0a1de86540 1f 0 0 -1 no no t-0 ffffff00463b9c40</div><div> (idle)</div><div> | </div><div> RUNNING <--+ </div><div> READY </div><div> QUIESCED </div><div> EXISTS </div><div> ENABLE </div><div><br></div><div> ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC</div><div> 6 ffffff0a1de85040 1f 0 0 -1 no no t-1 ffffff004641ec40</div><div> (idle)</div><div> | </div><div> RUNNING <--+ </div><div> READY </div><div> QUIESCED </div><div> EXISTS </div><div> ENABLE </div><div><br></div><div> ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC</div><div> 7 ffffff0a1de7e500 1f 0 0 -1 no no t-0 ffffff0046580c40</div><div> (idle)</div><div> | </div><div> RUNNING <--+ </div><div> READY </div><div> QUIESCED </div><div> EXISTS </div><div> ENABLE </div><div><br></div><div>> ::ps</div><div>S PID PPID PGID SID UID FLAGS ADDR NAME</div><div>R 0 0 0 0 0 0x00000001 fffffffffbc2eb80 sched</div><div>R 143 0 0 0 0 0x00020001 ffffff0a1ea74080 zpool-mainpool</div><div>R 3 0 0 0 0 0x00020001 ffffff0a1dfb8028 fsflush</div><div>R 2 0 0 0 0 0x00020001 ffffff0a1dfbb020 pageout</div><div>R 1 0 0 0 0 0x4a004000 ffffff0a1dfbf018 init</div><div>R 620 1 620 620 1 0x42000000 ffffff0a2ae00030 nfsd</div><div>R 618 1 618 618 0 0x42000000 ffffff0a2ae05028 mountd</div><div>R 577 1 577 577 0 0x42000000 ffffff0a2748e0a8 fmd</div><div>R 600 1 597 597 0 0x4a004000 ffffff0a2aa6d0b0 perl</div><div>R 575 1 573 573 0 0x42000000 ffffff0a2ae17010 sshd</div><div>R 634 575 573 573 0 
0x42010000 ffffff0a2ae0f018 sshd</div><div>R 635 634 573 573 100 0x52010000 ffffff0a2ad13008 sshd</div><div>R 638 635 638 638 100 0x42014000 ffffff0a2ad14000 ksh93</div><div>R 644 638 644 638 0 0x4a014000 ffffff0a2981f050 bash</div><div>R 650 644 650 638 0 0x4a014000 ffffff0a356d6048 vmedubuntu.sh</div><div>R 655 650 650 638 0 0x4a004000 ffffff0a297fd028 </div><div>qemu-system-x86_</div><div>R 569 1 569 569 0 0x42000000 ffffff0a1e9d6070 syslogd</div><div>R 554 1 553 553 0 0x42000000 ffffff0a297070c0 in.routed</div><div>R 519 1 519 519 1 0x42000000 ffffff0a2a9ed098 lockd</div><div>R 495 1 495 495 0 0x42000000 ffffff0a1e7bb050 automountd</div><div>R 498 495 495 495 0 0x42000000 ffffff0a2aa480a0 automountd</div><div>R 489 1 489 489 1 0x52000000 ffffff0a2aa670c8 nfsmapid</div><div>R 476 1 476 476 0 0x42000000 ffffff0a2aa6c0b8 inetd</div><div>R 479 1 478 478 0 0x42000000 ffffff0a29960090 in.ndpd</div><div>R 477 1 477 477 1 0x42000000 ffffff0a2988d068 statd</div><div>R 455 1 455 455 1 0x42000000 ffffff0a2aa310a8 rpcbind</div><div>R 436 1 436 436 0 0x42000000 ffffff0a29833048 reparsed</div><div>R 424 1 424 424 0 0x42010000 ffffff0a29879078 cron</div><div>R 361 1 361 361 0 0x42000000 ffffff0a29704000 hald</div><div>R 362 361 361 361 0 0x4a004000 ffffff0a1e887058 hald-runner</div><div>R 382 362 361 361 0 0x4a004000 ffffff0a2984b058 hald-addon-acpi</div><div>R 380 362 361 361 0 0x4a004000 ffffff0a28feb0b8 </div><div>hald-addon-cpufr</div><div>R 373 362 361 361 0 0x4a004000 ffffff0a29876080 </div><div>hald-addon-netwo</div><div>R 393 1 393 393 0 0x42000000 ffffff0a1ee580a0 utmpd</div><div>R 368 1 363 363 0 0x42000000 ffffff0a29806018 iscsid</div><div>R 294 1 294 294 0 0x42000000 ffffff0a1e9e0068 picld</div><div>R 291 1 291 291 0 0x42000000 ffffff0a1e6dc040 nscd</div><div>R 230 1 230 230 0 0x42000000 ffffff0a2973c010 powerd</div><div>R 222 1 222 222 0 0x42000000 ffffff0a29890060 dbus-daemon</div><div>R 218 1 218 218 0 0x42000000 ffffff0a29702008 devfsadm</div><div>R 202 1 202 
202 0 0x42000000 ffffff0a29834040 syseventd</div><div>R 208 1 208 208 0 0x42000000 ffffff0a2748d0b0 zonestatd</div><div>R 166 1 161 161 0 0x42000000 ffffff0a1ebe6088 pfexecd</div><div>R 95 1 94 94 0 0x42020000 ffffff0a1e9d5078 dhcpagent</div><div>R 49 1 49 49 16 0x42000000 ffffff0a1e87e060 ipmgmtd</div><div>R 47 1 47 47 15 0x52000000 ffffff0a1ec29098 dlmgmtd</div><div>R 44 1 43 43 17 0x42020000 ffffff0a1e32c030 netcfgd</div><div>R 12 1 12 12 0 0x42000000 ffffff0a1e723048 svc.configd</div><div>R 10 1 10 10 0 0x42000000 ffffff0a1e675038 svc.startd</div><div>R 421 10 421 421 0 0x4a014000 ffffff0a1ec02090 sac</div><div>R 427 421 421 421 0 0x4a014000 ffffff0a297f2030 ttymon</div><div>R 394 10 394 394 0 0x4a004000 ffffff0a297ff020 ttymon</div><div>R 6 0 0 0 0 0x00020001 ffffff0a1ca7b010 zpool-rpool</div><div>R 4 0 0 0 0 0x00020001 ffffff0a14012008 kcfpoold</div><div>> </div><div>> ::panicinfo</div><div> cpu 0</div><div> thread ffffff00458cbc40</div><div> message I/O to pool 'mainpool' appears to be hung.</div><div> rdi fffffffff7a52a00</div><div> rsi ffffff00458cb960</div><div> rdx fffffffff7a52a00</div><div> rcx ffffff0a36bcd0f0</div><div> r8 20</div><div> r9 a</div><div> rax ffffff00458cb980</div><div> rbx fffffffff7a52a00</div><div> rbp ffffff00458cb9d0</div><div> r10 0</div><div> r11 ffffff00458cb870</div><div> r12 ffffff0a1cd8a580</div><div> r13 e9fea51f56</div><div> r14 ffffff0a277f0540</div><div> r15 ffffff0a57ac93b0</div><div> fsbase fffffd7fff084a40</div><div> gsbase fffffffffbc30d20</div><div> ds 38</div><div> es 38</div><div> fs 0</div><div> gs 0</div><div> trapno 0</div><div> err 0</div><div> rip fffffffffb85e8a0</div><div> cs 30</div><div> rflags 246</div><div> rsp ffffff00458cb958</div><div> ss 38</div><div> gdt_hi 0</div><div> gdt_lo e000ffff</div><div> idt_hi 0</div><div> idt_lo d000ffff</div><div> ldt 0</div><div> task 70</div><div> cr0 8005003b</div><div> cr2 187056769680</div><div> cr3 bc00000</div><div> cr4 26f8</div><div>> </div><div 
style="font-family: Verdana, Arial, Helvetica, sans-serif;"><br></div></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br></font></div><div style="font-family: Verdana, Arial, Helvetica, sans-serif;"><font face="Courier New,Courier,monospace" size="3"><br>_______________________________________________<br>OmniOS-discuss mailing list<br>OmniOS-discuss@lists.omniti.com<br><a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss">http://lists.omniti.com/mailman/listinfo/omnios-discuss</a><br><br></font></div></omnios-discuss-bounces@lists.omniti.com></skiselkov.ml@gmail.com></div></div><div></div></font>
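A footnote on the panic discussed in this thread: the "I/O to pool ... appears to be hung" vpanic comes from the ZFS deadman timer (the spa_misc.c lines Saso linked), and its behaviour is tunable. Below is a minimal /etc/system sketch. The tunable names are taken from illumos spa_misc.c as I understand it (`zfs_deadman_enabled`, `zfs_deadman_synctime_ms`); older releases may spell the threshold `zfs_deadman_synctime`, so verify the exact names in your release's spa_misc.c before applying anything.

```shell
* /etc/system fragment -- sketch only, not a definitive recipe.
* Verify tunable names against your release's spa_misc.c, then
* reboot for the settings to take effect.

* Stop the deadman from panicking when pool I/O appears hung.
* (This only hides the symptom -- the ahci watchdog timeouts in the
* log above still point at the SATA controller itself.)
set zfs:zfs_deadman_enabled = 0

* Alternatively, keep the deadman but raise the hang threshold
* (milliseconds; the illumos default is 1000000, i.e. 1000 seconds):
* set zfs:zfs_deadman_synctime_ms = 2000000
```

On a live system the current value can be read with mdb, e.g. `echo "zfs_deadman_enabled/D" | mdb -k`. Either way this is a stopgap: swapping in a separate HBA, as Saso suggests, addresses the actual fault rather than silencing the watchdog.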