[OmniOS-discuss] crash

Johan Kragsterman johan.kragsterman at capvert.se
Mon Apr 7 09:19:05 UTC 2014


Hi!


Got a crash here that I would like someone to have a look at.

Hardware is a Dell T5500 workstation with dual Xeon L5520 CPUs and 36 GB RAM. The OS/rpool lives on an Intel SLC SSD; "mainpool" is a mirror of two new Seagate ST4000VN000 drives with a new Samsung 840 EVO SSD as L2ARC.
The onboard bge0 NIC is disabled; a quad-port Intel gigabit NIC provides the working interfaces.
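
For clarity, that gives a pool layout corresponding to something like the following (the device names here are placeholders, not the actual ones):

zpool create mainpool mirror c3t0d0 c3t1d0
zpool add mainpool cache c3t2d0

i.e. a single two-way mirror for the data vdev, with the 840 EVO attached as a cache device.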


I run a single KVM VM, Edubuntu 13.10, on the machine. The crash came while I was building a new chroot environment for the LTSP thin client system.



Here is the info about the crash and what I did to make it visible:
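
For completeness, the unix.0/vmcore.0 pair below was extracted from the compressed vmdump.0 with savecore, roughly:

savecore -vf /var/crash/unknown/vmdump.0

(Adjust the path if dumpadm points your crash directory elsewhere.)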


OmniOS 5.11     omnios-6de5e81  2013.11.27

OmniOS v11 r151008

root@omni:/var/crash/unknown# ls
bounds  unix.0  vmcore.0  vmdump.0
root@omni:/var/crash/unknown# mdb -k unix.0 vmcore.0
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp scsi_vhci zfs sata sd ip hook neti sockfs arp usba uhci stmf stmf_sbd md lofs random idm nfs crypto ptm kvm cpc smbsrv ufs logindmux nsmb ]
> ::status
debugging crash dump vmcore.0 (64-bit) from omni
operating system: 5.11 omnios-6de5e81 (i86pc)
image uuid: a5e10116-5ed1-68ce-eba1-86f6ade3d5f5
panic message: I/O to pool 'mainpool' appears to be hung.
dump content: kernel pages only
> ::stack
vpanic()
vdev_deadman+0x10b(ffffff0a277f0540)
vdev_deadman+0x4a(ffffff0a1eea6040)
vdev_deadman+0x4a(ffffff0a1dfea580)
spa_deadman+0xad(ffffff0a1cd8a580)
cyclic_softint+0xf3(fffffffffbc30d20, 0)
cbe_low_level+0x14()
av_dispatch_softvect+0x78(2)
dispatch_softint+0x39(0, 0)
switch_sp_and_call+0x13()
dosoftint+0x44(ffffff0045805a50)
do_interrupt+0xba(ffffff0045805a50, 1)
_interrupt+0xba()
acpi_cpu_cstate+0x11b(ffffff0a1ce9e670)
cpu_acpi_idle+0x8d()
cpu_idle_adaptive+0x13()
idle+0xa7()
thread_start+8()
> ::msgbuf
MESSAGE                                                               
NOTICE: vnic1001 link down
NOTICE: e1000g3 link up, 1000 Mbps, full duplex
NOTICE: vnic1001 link up, 1000 Mbps, unknown duplex
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x526849 data 8
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x526849 data 8
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0
unhandled wrmsr: 0x0 data 0           
vcpu 1 received sipi with vector # 10
kvm_lapic_reset: vcpu=ffffff0a36c8e000, id=1, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 2 received sipi with vector # 10
kvm_lapic_reset: vcpu=ffffff0a36c86000, id=2, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 3 received sipi with vector # 10
kvm_lapic_reset: vcpu=ffffff0a36c7e000, id=3, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 4 received sipi with vector # 10
vcpu 7 received sipi with vector # 10
vcpu 6 received sipi with vector # 10
kvm_lapic_reset: vcpu=ffffff0a36cbe000, id=7, base_msr= fee00800 PRIx64 base_address=fee00000
kvm_lapic_reset: vcpu=ffffff0a36cc6000, id=6, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 5 received sipi with vector # 10
kvm_lapic_reset: vcpu=ffffff0a36c76000, id=4, base_msr= fee00800 PRIx64 base_address=fee00000
kvm_lapic_reset: vcpu=ffffff0a36cce000, id=5, base_msr= fee00800 PRIx64 base_address=fee00000
unhandled wrmsr: 0x0 data 0
vcpu 1 received sipi with vector # 98 
kvm_lapic_reset: vcpu=ffffff0a36c8e000, id=1, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 2 received sipi with vector # 98
kvm_lapic_reset: vcpu=ffffff0a36c86000, id=2, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 3 received sipi with vector # 98
kvm_lapic_reset: vcpu=ffffff0a36c7e000, id=3, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 4 received sipi with vector # 98
kvm_lapic_reset: vcpu=ffffff0a36c76000, id=4, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 5 received sipi with vector # 98
kvm_lapic_reset: vcpu=ffffff0a36cce000, id=5, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 6 received sipi with vector # 98
kvm_lapic_reset: vcpu=ffffff0a36cc6000, id=6, base_msr= fee00800 PRIx64 base_address=fee00000
vcpu 7 received sipi with vector # 98
kvm_lapic_reset: vcpu=ffffff0a36cbe000, id=7, base_msr= fee00800 PRIx64 base_address=fee00000
unhandled rdmsr: 0xfe89f030
unhandled wrmsr: 0x525f43 data 2000000001
unhandled rdmsr: 0xfe89f030           
unhandled wrmsr: 0x525f43 data 2000000001
unhandled rdmsr: 0xfe89f030
unhandled wrmsr: 0x525f43 data 2000000001
unhandled rdmsr: 0xfe89f030
unhandled wrmsr: 0x525f43 data 2000000001
unhandled rdmsr: 0xfe89f030
unhandled wrmsr: 0x525f43 data 2000000001
unhandled rdmsr: 0xfe89f030
unhandled wrmsr: 0x525f43 data 2000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
unhandled rdmsr: 0xff31ca8c
unhandled wrmsr: 0x525f43 data 10000000001
NOTICE: e1000g3 link down
NOTICE: vnic1001 link down
NOTICE: e1000g3 link up, 100 Mbps, full duplex
NOTICE: vnic1001 link up, 100 Mbps, unknown duplex
WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a5a545088 timed out

WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a5dc38160 timed out

WARNING: ahci0: watchdog port 0 satapkt 0xffffff0a5dc642e0 timed out

WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a57020388 timed out

WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a57020388 timed out

WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a57020388 timed out

WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a57020388 timed out

WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a57020388 timed out

WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a57020388 timed out

WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a57020388 timed out

WARNING: ahci0: watchdog port 0 satapkt 0xffffff0a5fe32b90 timed out

WARNING: ahci0: watchdog port 0 satapkt 0xffffff0a5fe32b90 timed out

WARNING: ahci0: watchdog port 0 satapkt 0xffffff0a5fe32b90 timed out

WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a5fe32b90 timed out

WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a5fe32b90 timed out

WARNING: ahci0: watchdog port 1 satapkt 0xffffff0a5fe32b90 timed out

WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a5fe32b90 timed out

WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a5fe32b90 timed out

WARNING: ahci0: watchdog port 2 satapkt 0xffffff0a5fe32b90 timed out

NOTICE: SUNW-MSG-ID: SUNOS-8000-0G, TYPE: Error, VER: 1, SEVERITY: Major


panic[cpu0]/thread=ffffff00458cbc40: 
I/O to pool 'mainpool' appears to be hung.


ffffff00458cba20 zfs:vdev_deadman+10b ()
ffffff00458cba70 zfs:vdev_deadman+4a ()
ffffff00458cbac0 zfs:vdev_deadman+4a ()
ffffff00458cbaf0 zfs:spa_deadman+ad ()
ffffff00458cbb90 genunix:cyclic_softint+f3 ()
ffffff00458cbba0 unix:cbe_low_level+14 ()
ffffff00458cbbf0 unix:av_dispatch_softvect+78 ()
ffffff00458cbc20 unix:dispatch_softint+39 ()
ffffff00458059a0 unix:switch_sp_and_call+13 ()
ffffff00458059e0 unix:dosoftint+44 ()
ffffff0045805a40 unix:do_interrupt+ba ()
ffffff0045805a50 unix:cmnint+ba ()
ffffff0045805bc0 unix:acpi_cpu_cstate+11b ()
ffffff0045805bf0 unix:cpu_acpi_idle+8d ()
ffffff0045805c00 unix:cpu_idle_adaptive+13 ()
ffffff0045805c20 unix:idle+a7 ()      
ffffff0045805c30 unix:thread_start+8 ()

syncing file systems...               
 done
dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
NOTICE: ahci0: ahci_tran_reset_dport port 0 reset port
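
My tentative reading of all this: the ahci watchdog timeouts show SATA commands on ports 0, 1 and 2 (presumably the two mirror disks plus the L2ARC SSD) hanging long before the panic, and the ZFS deadman timer then panicked the box deliberately once I/O to 'mainpool' had been outstanding past its threshold. If that is right, the deadman could be disabled from /etc/system while the underlying disk/controller problem is chased down, though that would only suppress the panic, not the hang:

set zfs:zfs_deadman_enabled = 0

If memory serves, the threshold itself is the zfs_deadman_synctime tunable, 1000 seconds by default on bits of this vintage.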



It would be nice to get some input on this from someone who has more clues than I do...



Best regards from

Johan Kragsterman

Capvert


