From venture37 at gmail.com Tue Sep 1 11:02:15 2015 From: venture37 at gmail.com (Sevan / Venture37) Date: Tue, 1 Sep 2015 12:02:15 +0100 Subject: [OmniOS-discuss] pkgsrc-current OmniOS 170cea2/i386 2015-07-09 21:35 In-Reply-To: References: <811B3E16-0565-4F36-94F3-AACC0CBB2D59@omniti.com> <650A82F9-E07E-44CF-AE9C-4EAA4FFE87F5@omniti.com> <3CB02866-6686-4D80-869E-7D5ED0A6D088@omniti.com> Message-ID: <55E585B7.7000005@gmail.com>

> On 27 July 2015 at 15:26, Dan McDonald wrote:
>> Where is libgomp pulled from in gettext? I checked all the
>> binaries from the gnu-gettext package, and none seek libgomp,
>> unless they do so by ldload().

I managed to look into this again over the bank holiday. It turns out the issue is actually in the pkgsrc devel/gettext-tools package. As all the necessary bits for OpenMP are present in the system-supplied compiler, gettext links against libgomp by default in its test cases, but can't run the resulting binaries because /opt/gcc-4.8.1/lib is not in the runtime linker's search path. The reason this is not an issue with your bundled version of gettext is that you disable OpenMP support. The following change resolves the issue in the pkgsrc tree (it has not been committed to the tree yet; I'm not sure if it's a good idea to just switch it off or whether it needs something more intricate).
Index: Makefile =================================================================== RCS file: /cvsroot/pkgsrc/devel/gettext-tools/Makefile,v retrieving revision 1.29 diff -u -r1.29 Makefile --- Makefile 14 Jun 2015 21:39:20 -0000 1.29 +++ Makefile 1 Sep 2015 10:02:36 -0000 @@ -18,6 +18,7 @@ CONFIGURE_ARGS+= --with-xz CONFIGURE_ARGS+= --without-included-gettext CONFIGURE_ARGS+= --without-emacs +CONFIGURE_ARGS+= --disable-openmp CONFIGURE_ENV+= GCJ= ac_cv_prog_GCJ= ac_cv_prog_JAR= CONFIGURE_ENV+= HAVE_GCJ_IN_PATH= CONFIGURE_ENV+= HAVE_JAVAC_IN_PATH= It wasn't until I moved the OS binaries out of the search path that it became apparent the issues were with the generated binaries. I misunderstood the configure stage output and assumed that the error was on the previously logged steps. checking for gmsgfmt... /usr/bin/gmsgfmt ld.so.1: xgettext: fatal: libgomp.so.1: open failed: No such file or directory ld.so.1: msgmerge: fatal: libgomp.so.1: open failed: No such file or directory ld.so.1: msgfmt: fatal: libgomp.so.1: open failed: No such file or directory configure: error: GNU gettext tools not found; required for intltool Hopefully, the necessary change will be in soon to make it in for the 2015Q3 pkgsrc release next month. I'll post an update if anything happens. Regards Sevan / Venture37 From venture37 at gmail.com Tue Sep 1 13:45:29 2015 From: venture37 at gmail.com (Sevan / Venture37) Date: Tue, 1 Sep 2015 14:45:29 +0100 Subject: [OmniOS-discuss] pkgsrc-current OmniOS 170cea2/i386 2015-07-09 21:35 In-Reply-To: <55E585B7.7000005@gmail.com> References: <811B3E16-0565-4F36-94F3-AACC0CBB2D59@omniti.com> <650A82F9-E07E-44CF-AE9C-4EAA4FFE87F5@omniti.com> <3CB02866-6686-4D80-869E-7D5ED0A6D088@omniti.com> <55E585B7.7000005@gmail.com> Message-ID: <55E5ABF9.2000402@gmail.com> On 1 September 2015 at 12:02, Sevan / Venture37 wrote: > Hopefully, the necessary change will be in soon to make it in for the > 2015Q3 pkgsrc release next month. 
I'll post an update if anything happens.

The change is in; we now disable the use of OpenMP libraries in gettext by default. http://mail-index.netbsd.org/pkgsrc-changes/2015/09/01/msg129268.html

2015Q3 should be a great release for pkgsrc on OmniOS, with more than 13300 packages building natively on r151014 for those that choose to build their own :) http://mail-index.netbsd.org/pkgsrc-bulk/2015/08/29/msg011956.html

Sevan / Venture37

From danmcd at omniti.com Tue Sep 1 13:46:25 2015 From: danmcd at omniti.com (Dan McDonald) Date: Tue, 1 Sep 2015 09:46:25 -0400 Subject: [OmniOS-discuss] pkgsrc-current OmniOS 170cea2/i386 2015-07-09 21:35 In-Reply-To: <55E5ABF9.2000402@gmail.com> References: <811B3E16-0565-4F36-94F3-AACC0CBB2D59@omniti.com> <650A82F9-E07E-44CF-AE9C-4EAA4FFE87F5@omniti.com> <3CB02866-6686-4D80-869E-7D5ED0A6D088@omniti.com> <55E585B7.7000005@gmail.com> <55E5ABF9.2000402@gmail.com> Message-ID: <4893F6CE-10AE-4B13-BCB0-CD6E5D093F2F@omniti.com>

> On Sep 1, 2015, at 9:45 AM, Sevan / Venture37 wrote:
>
> 2015Q3 should be a great release for pkgsrc on OmniOS, with more than
> 13300 packages building natively on r151014 for those that choose to
> build their own :)
> http://mail-index.netbsd.org/pkgsrc-bulk/2015/08/29/msg011956.html

Thank you VERY much for your efforts here, Sevan.
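An aside for anyone bitten by the libgomp failures above before the fix propagates: the missing dependency is easy to spot with ldd. A minimal sketch, assuming an unprivileged bootstrap under ~/pkg (adjust the path to your own PREFIX; the tool names are the three that failed in the configure output earlier in the thread):

```shell
# Check whether the pkgsrc-built gettext tools still request libgomp at
# run time. A libgomp line that the runtime linker cannot resolve means
# the tool will die on startup, exactly as seen in the configure log.
for tool in msgfmt msgmerge xgettext; do
    echo "== $tool =="
    ldd ~/pkg/bin/$tool 2>/dev/null | grep gomp || echo "no libgomp dependency"
done
```

If a tool does list an unresolvable libgomp, adding /opt/gcc-4.8.1/lib to the default search path with crle(1) is one workaround; building gettext-tools with --disable-openmp, as the patch above does, avoids the dependency altogether.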
Dan

From richard at netbsd.org Tue Sep 1 15:01:24 2015 From: richard at netbsd.org (Richard PALO) Date: Tue, 01 Sep 2015 17:01:24 +0200 Subject: [OmniOS-discuss] pkgsrc-current OmniOS 170cea2/i386 2015-07-09 21:35 In-Reply-To: <55E5ABF9.2000402@gmail.com> References: <811B3E16-0565-4F36-94F3-AACC0CBB2D59@omniti.com> <650A82F9-E07E-44CF-AE9C-4EAA4FFE87F5@omniti.com> <3CB02866-6686-4D80-869E-7D5ED0A6D088@omniti.com> <55E585B7.7000005@gmail.com> <55E5ABF9.2000402@gmail.com> Message-ID: <55E5BDC4.9050700@netbsd.org>

On 01/09/15 15:45, Sevan / Venture37 wrote:
>
> On 1 September 2015 at 12:02, Sevan / Venture37 wrote:
>> Hopefully, the necessary change will be in soon to make it in for the
>> 2015Q3 pkgsrc release next month. I'll post an update if anything happens.
>
> The change is in; we now disable the use of OpenMP libraries in gettext by
> default.
> http://mail-index.netbsd.org/pkgsrc-changes/2015/09/01/msg129268.html
>
> 2015Q3 should be a great release for pkgsrc on OmniOS, with more than
> 13300 packages building natively on r151014 for those that choose to
> build their own :)
> http://mail-index.netbsd.org/pkgsrc-bulk/2015/08/29/msg011956.html
>
> Sevan / Venture37

Unfortunately, I don't feel this change is correct, even though it may help your builds and may seem not really important... There is still a fishy problem in your config, and I certainly don't see this issue after a proper bootstrap.

Can you post the output from the following (substituting your bootstrapped PREFIX)?

> $ grep compiler_lib_search <$PREFIX>/bin/libtool

As I once mentioned to you, these should point to the right base for gcc (native or pkgsrc compiler).
-- Richard PALO

From venture37 at gmail.com Tue Sep 1 16:10:19 2015 From: venture37 at gmail.com (Sevan / Venture37) Date: Tue, 1 Sep 2015 17:10:19 +0100 Subject: [OmniOS-discuss] pkgsrc-current OmniOS 170cea2/i386 2015-07-09 21:35 In-Reply-To: <55E5BDC4.9050700@netbsd.org> References: <811B3E16-0565-4F36-94F3-AACC0CBB2D59@omniti.com> <650A82F9-E07E-44CF-AE9C-4EAA4FFE87F5@omniti.com> <3CB02866-6686-4D80-869E-7D5ED0A6D088@omniti.com> <55E585B7.7000005@gmail.com> <55E5ABF9.2000402@gmail.com> <55E5BDC4.9050700@netbsd.org> Message-ID:

Hi Richard,

On 1 September 2015 at 16:01, Richard PALO wrote:
> Unfortunately, I don't feel this change is correct, even though it may help your builds
> and may seem not really important...
>
> There is still a fishy problem in your config, and I certainly don't see this issue after a proper bootstrap.

It would be good if you could clarify what a proper bootstrap is.

> Can you post the output from the following (substituting your bootstrapped PREFIX)?
>
>> $ grep compiler_lib_search <$PREFIX>/bin/libtool
>
> As I once mentioned to you, these should point to the right base for gcc (native or pkgsrc compiler).

from an unprivileged bootstrap, made yesterday by running pkgsrc/bootstrap/bootstrap --unprivileged:

-bash-4.3$ grep compiler_lib_search ~/pkg/bin/libtool
compiler_lib_search_dirs=""
compiler_lib_search_path=""
libs="$predeps $libs $compiler_lib_search_path $postdeps"
searchdirs="$newlib_search_path $lib_search_path $compiler_lib_search_dirs $sys_lib_search_path $shlib_search_path"
case " $predeps $postdeps $compiler_lib_search_path " in
compiler_lib_search_dirs="/opt/gcc-4.8.1/lib/gcc/i386-pc-solaris2.11/4.8.1 /opt/gcc-4.8.1/lib/gcc/i386-pc-solaris2.11/4.8.1/../../.."
compiler_lib_search_path="-L/opt/gcc-4.8.1/lib/gcc/i386-pc-solaris2.11/4.8.1 -L/opt/gcc-4.8.1/lib/gcc/i386-pc-solaris2.11/4.8.1/../../.."
compiler_lib_search_dirs=""
compiler_lib_search_path=""
compiler_lib_search_dirs=""
compiler_lib_search_path=""
compiler_lib_search_dirs=""
compiler_lib_search_path=""
compiler_lib_search_dirs=""
compiler_lib_search_path=""
compiler_lib_search_dirs=""
compiler_lib_search_path=""

Without disabling OpenMP support, you should find that libgomp is a dependency of the version of msgfmt from devel/gettext-tools when using the native version of GCC; on r151014 that's GCC 4.8.1. But the path to libgomp is unresolvable because its location is not in the search path (I previously kludged around this by adding /opt/gcc-4.8.1/lib to the ld search path using crle(1)). It's not a compile-time issue, it's a run-time issue.

Sevan

From venture37 at gmail.com Tue Sep 1 16:15:54 2015 From: venture37 at gmail.com (Sevan / Venture37) Date: Tue, 1 Sep 2015 17:15:54 +0100 Subject: [OmniOS-discuss] pkgsrc-current OmniOS 170cea2/i386 2015-07-09 21:35 In-Reply-To: References: <811B3E16-0565-4F36-94F3-AACC0CBB2D59@omniti.com> <650A82F9-E07E-44CF-AE9C-4EAA4FFE87F5@omniti.com> <3CB02866-6686-4D80-869E-7D5ED0A6D088@omniti.com> <55E585B7.7000005@gmail.com> <55E5ABF9.2000402@gmail.com> <55E5BDC4.9050700@netbsd.org> Message-ID:

On 1 September 2015 at 17:10, Sevan / Venture37 wrote:
> Without disabling OpenMP support, you should find that libgomp is a
> dependency of the version of msgfmt from devel/gettext-tools when
> using the native version of GCC; on r151014 that's GCC 4.8.1.

Using ldd ~/pkg/bin/msgfmt in my case.

From prasadhk at gmail.com Tue Sep 1 12:13:55 2015 From: prasadhk at gmail.com (prasad) Date: Tue, 1 Sep 2015 12:13:55 +0000 (UTC) Subject: [OmniOS-discuss] Dell vs. Supermicro and any recommendations.. References: <81F2A38D-5298-4FB8-B0BB-24E4506D6040@omniti.com> <201503121415.t2CEFBcC022831@elvis.arl.psu.edu> Message-ID:

Andy,

We are also facing the same issue on a Dell R730xd with the latest OmniOS stable. We tried LSI driver versions 6-607-02-00 and 6-605-01-00, but no luck.
We have enabled HBA mode on the RAID controller. Lots of I/O errors are being reported in iostat. Need some help.

Regards,
Prasad

From danmcd at omniti.com Wed Sep 2 11:46:03 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 2 Sep 2015 07:46:03 -0400 Subject: [OmniOS-discuss] Dell vs. Supermicro and any recommendations.. In-Reply-To: References: <81F2A38D-5298-4FB8-B0BB-24E4506D6040@omniti.com> <201503121415.t2CEFBcC022831@elvis.arl.psu.edu> Message-ID: <31BC9EC0-9548-4486-B7C2-01E0298CAE3C@omniti.com>

Brought this thread back from cold storage...

> On Sep 1, 2015, at 8:13 AM, prasad wrote:
>
> Andy,
>
> We are also facing the same issue on a Dell R730xd with the latest OmniOS
> stable. We tried LSI driver versions 6-607-02-00 and 6-605-01-00, but no
> luck.
> We have enabled HBA mode on the RAID controller. Lots of I/O errors are
> being reported in iostat.

Which controller are you using again? And furthermore, unless LSI's drivers are for illumos specifically, you may be encountering other problems. I'd recommend getting an mpt_sas controller, and make sure the IT firmware is NOT v20, but v18 or v19.

Dan

From moo at wuffers.net Wed Sep 2 15:18:33 2015 From: moo at wuffers.net (wuffers) Date: Wed, 2 Sep 2015 11:18:33 -0400 Subject: [OmniOS-discuss] ZFS data corruption In-Reply-To: <55DAE9D5.2020908@jvm.de> References: <20150814182127.13a8a2a3@sleipner.datanom.net> <55D0C453.60703@jvm.de> <55D1CDB5.1040309@osn.de> <55D47DA9.5030907@osn.de> <55D4B381.30504@jvm.de> <55D8AB14.3010705@will.to> <55DAE9D5.2020908@jvm.de> Message-ID:

Hmm, today I did another zpool status and noticed another ZFS corruption error:

  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 184h28m with 0 errors on Wed Aug 5 06:38:32 2015
config: [config snipped]
errors: Permanent errors have been detected in the following files:

        tank/vmware-64k-5tb-6:<0x1>

I have not had any PSODs, and the SAN has not been rebooted since the last time I ran the scrub.

# uptime
10:52am up 50 days 7:55, 2 users, load average: 2.32, 2.10, 1.96

I am now migrating the VMs on that storage to the previously vacated datastore that reported ZFS corruption, which disappeared after the scrub. After moving the VMs, I'll run another scrub. Is there anything I should be checking? The only thing that jumps out at me right now is that the ZeusRAMs are reporting some illegal requests in iostat, but AFAIK they've always done that whenever I've checked iostat, so I thought that was normal for a log device.

I also did a dump from the STMF trace buffer as per above (the read failures look almost identical except for the resid):

# echo '*stmf_trace_buf/s' | mdb -k | more
0xffffff431dcd8000: :0005385: Imported the LU 600144f09084e25100005191a7f50001
:0005385: sbd_lp_cb: import_lu failed, ret = 149, err_ret = 4
:0005387: Imported the LU 600144f09084e251000051ec2f0f0002
:0005389: Imported the LU 600144f09084e25100005286cef70001
:0005390: Imported the LU 600144f09084e25100005286cf160002
:0005392: Imported the LU 600144f09084e25100005286cf240003
:0005393: Imported the LU 600144f09084e25100005286cf310004
:0005394: Imported the LU 600144f09084e25100005286cf3b0005
:0005396: Imported the LU 600144f09084e251000052919b220001
:0005397: Imported the LU 600144f09084e251000052919b340002
:0005398: Imported the LU 600144f09084e251000052919b460003
:0005399: Imported the LU 600144f09084e251000052919b560004
:0005401: Imported the LU 600144f09084e251000052919b630005
:0005402: Imported the LU 600144f09084e2510000550331e50001
:0005403: Imported the LU 600144f09084e2510000550331ea0002
:338370566: UIO_READ failed, ret = 5, resid = 65536
:338370566: UIO_READ failed, ret = 5, resid = 65536

Are we dealing with some sort of new bug? I am on r14..

On Mon, Aug 24, 2015 at 5:54 AM, Stephan Budach wrote:
> On 22.08.15 at 19:02, Doug Hughes wrote:
>
> I've been experiencing spontaneous checksum failure/corruption on read at
> the zvol level recently on a box running r12 as well. None of the disks
> show any errors. All of the errors show up at the zvol level until all the
> disks in the vol get marked as degraded and then a reboot clears it up.
> Repeated scrubs find files to delete, but then after additional heavy read
> I/O activity, more checksum on read errors occur, and more files need to be
> removed. So far on r14 I haven't seen this, but I'm keeping an eye on it.
>
> The write activity on this server is very low. I'm currently trying to
> evacuate it with zfs send | mbuffer to another host over 10g, so the read
> activity is very high and consistent over a long period of time since I
> have to move about 10TB.
>
> This morning, I received another of these zvol errors, which was also
> reported up to my RAC cluster. I haven't fully checked that yet, but I
> think the ASM/ADVM simply issued a re-read and was happy with the result.
> Otherwise ASM would have issued a read against the mirror side and probably
> have taken the "faulty" failure group offline, which it didn't.
>
> However, I was wondering how to get some more information from the STMF
> framework and found a post on how to read from the STMF trace buffer:
>
> root at nfsvmpool07:/root# echo '*stmf_trace_buf/s' | mdb -k | more
> 0xffffff090f828000: :0002579: Imported the LU 600144f090860e6b0000550c3a290001
> :0002580: Imported the LU 600144f090860e6b0000550c3e240002
> :0002581: Imported the LU 600144f090860e6b0000550c3e270003
> :0002603: Imported the LU 600144f090860e6b000055925a120001
> :0002604: Imported the LU 600144f090860e6b000055a50ebf0002
> :0002604: Imported the LU 600144f090860e6b000055a8f7d70003
> :0002605: Imported the LU 600144f090860e6b000055a8f7e30004
> :150815416: UIO_READ failed, ret = 5, resid = 131072
> :224314824: UIO_READ failed, ret = 5, resid = 131072
>
> So, this basically shows two read errors, which is consistent with the
> incidents I had on this system. Unfortunately, this doesn't buy me much
> more, since I don't know how to track that further down, but it seems that
> COMSTAR had issues reading from the zvol.
>
> Is it possible to debug this further?
>
> On 8/21/2015 2:06 AM, wuffers wrote:
>
> Oh, the PSOD is not caused by the corruption in ZFS - I suspect it was the
> other way around (VMware host PSOD -> ZFS corruption). I've experienced the
> PSOD before, it may be related to IO issues which I outlined in another
> post here:
> http://lists.omniti.com/pipermail/omnios-discuss/2015-June/005222.html
>
> Nobody chimed in, but it's an ongoing issue. I need to dedicate more time
> to troubleshoot, but other projects are taking my attention right now
> (coupled with a personal house move, time is at a premium!).
>
> Also, I've had many improper shutdowns of the hosts and VMs, and this was
> the first time I've seen a ZFS corruption.
>
> I know I'm repeating myself, but my question is still:
> - Can I safely use this block device again now that it reports no errors?
> Again, I've moved all data off of it.. and there are no other signs of
> hardware issues. Recreate it?
> On Wed, Aug 19, 2015 at 12:49 PM, Stephan Budach wrote:
>> Hi Joerg,
>>
>> On 19.08.15 at 14:59, Joerg Goltermann wrote:
>>>
>>> Hi,
>>>
>>> the PSOD you got can cause the problems on your exchange database.
>>>
>>> Can you check the ESXi logs for the root cause of the PSOD?
>>>
>>> I never got a PSOD on such a "corruption". I still think this is
>>> a "cosmetic" bug, but this should be verified by one of the ZFS
>>> developers ...
>>>
>>> - Joerg

> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nagele at wildbit.com Wed Sep 2 15:33:21 2015 From: nagele at wildbit.com (Chris Nagele) Date: Wed, 2 Sep 2015 11:33:21 -0400 Subject: [OmniOS-discuss] 2.5" JBOD enclosures Message-ID:

I'm looking into a new build for a storage server, possibly using rsf-1 with two head nodes and a JBOD. I'd like to move to all-SSD storage this time. Does anyone have advice on JBOD enclosures that might work? I am looking for:

* More than 24 2.5" bays
* Hot-swap JBOD components (if possible)
* SATA support (since I am using SSDs)

I'm not used to externally attached storage, so any advice on HBAs (9300-8e, etc.) and backplanes with the SSDs is appreciated.

Thanks,
Chris

From mail at steffenwagner.com Thu Sep 3 11:13:44 2015 From: mail at steffenwagner.com (Steffen Wagner) Date: Thu, 03 Sep 2015 13:13:44 +0200 Subject: [OmniOS-discuss] OmniOS / Nappit slow iscsi / ZFS performance with Proxmox In-Reply-To: <21987.8122.276207.700528@glaurung.bb-c.de> References: <001a01d0e32e$951a1660$bf4e4320$@steffenwagner.com> <21987.8122.276207.700528@glaurung.bb-c.de> Message-ID: <2d2796eb5468784618611b1f3c4d788b@steffenwagner.com>

Hi Volker,

thanks for this information! I am now just running the connection for storage with one NIC (10GbE).
Regards,
Steffen

--
Steffen Wagner
August-Bebel-Straße 61
D-68199 Mannheim

M +49 (0) 1523 3544688
E mail at steffenwagner.com
I http://wagnst.de

Get my public GnuPG key: mail steffenwagner com http://http-keys.gnupg.net/pks/lookup?op=get&search=0x8A3406FB4688EE99

On 2015-08-30 17:22, vab at bb-c.de wrote:
>> The systems are currently connected through a 1 GBit link for
>> general WAN and LAN communication and a 20 GBit link (two 10 GBit
>> links aggregated) for the iSCSI communication.
>
> This may or may not make a difference but if you do link aggregation
> and then use the link only from one client IP then you will only get
> one connection on one of the two aggregated links. In case of iSCSI
> I think it is better to configure the two links separately and then
> use multipathing.
>
> Regards -- Volker

From mail at steffenwagner.com Thu Sep 3 11:18:14 2015 From: mail at steffenwagner.com (Steffen Wagner) Date: Thu, 03 Sep 2015 13:18:14 +0200 Subject: [OmniOS-discuss] OmniOS / Nappit slow iscsi / ZFS performance with Proxmox In-Reply-To: <58169405-2CCA-4C66-92DE-52B1192FFADA@lji.org> References: <001a01d0e32e$951a1660$bf4e4320$@steffenwagner.com> <58169405-2CCA-4C66-92DE-52B1192FFADA@lji.org> Message-ID: <3ca0a91b611c0c1e9539f94d0e9135f1@steffenwagner.com>

Hi Michael,

- I am running several VLANs to split them up. I have one VLAN for cluster communication (all nodes are connected through an LACP channel) and one VLAN (also an LACP channel) for VM traffic. The storage and app servers are directly attached with 10GbE, so all networks are definitely separated.

- I have set the MTU to 9000 and enabled jumbo frames on my HP ProCurve switch.

Meanwhile I got some performance improvements by setting the recordsize for the pool to 64k and enabling the write-back cache for all LUs... I have about 250MB/s with random tests (the load on tank is then around 40-50%), which is quite good and okay for me.

If someone has more helpful advice to tune comstar / ZFS / ...
parameters, I would appreciate it highly!

Thank you very much,
Steffen

--
Steffen Wagner
August-Bebel-Straße 61
D-68199 Mannheim

M +49 (0) 1523 3544688
E mail at steffenwagner.com
I http://wagnst.de

Get my public GnuPG key: mail steffenwagner com http://http-keys.gnupg.net/pks/lookup?op=get&search=0x8A3406FB4688EE99

On 2015-08-31 00:45, Michael Talbott wrote:
> This may be a given, but, since you didn't mention this in your
> network topology.. Make sure the 1g LAN link is on a different subnet
> than the 20g iscsi link. Otherwise iscsi traffic might be flowing
> through the 1g link. Also jumbo frames can help with iscsi.
>
> Additionally, dd speed tests from /dev/zero to a zfs disk are highly
> misleading if you have any compression enabled on the zfs disk (since
> only 512 bytes of disk is actually written for nearly any amount of
> consecutive zeros)
>
> Michael
> Sent from my iPhone
>
> On Aug 30, 2015, at 7:17 AM, Steffen Wagner wrote:
>
>> Hi everyone!
>>
>> I just set up a small network with 2 nodes:
>>
>> * 1 Proxmox host on Debian Wheezy hosting KVM VMs
>> * 1 napp-it host on OmniOS stable
>>
>> The systems are currently connected through a 1 GBit link for
>> general WAN and LAN communication and a 20 GBit link (two 10 GBit
>> links aggregated) for the iSCSI communication.
>>
>> Both connections' bandwidth was confirmed using iperf.
>>
>> The napp-it system currently has one pool (tank) consisting of 2
>> mirror vdevs. The 4 disks are SAS3 disks connected to a SAS2
>> backplane and directly attached (no expander) to the LSI SAS3008
>> (9300-8i) HBA.
>>
>> Comstar is running on that machine with 1 target (vm-storage) in 1
>> target group (vm-storage-group).
>>
>> Proxmox has this iSCSI target configured as a "ZFS over iSCSI"
>> storage using a block size of 8k and the "Write cache" option
>> enabled.
>> This is where the problem starts:
>>
>> dd if=/dev/zero of=/tank/test bs=1G count=20 conv=fdatasync
>>
>> This dd test yields around 300 MB/s directly on the napp-it system.
>>
>> dd if=/dev/zero of=/home/test bs=1G count=20 conv=fdatasync
>>
>> This dd test yields around 100 MB/s on a VM with its disk on the
>> napp-it system connected via iSCSI.
>>
>> The problem here is not the absolute numbers, as these tests do not
>> provide accurate numbers; the problem is the difference between the
>> two values. I expected at least something around 80% of the local
>> bandwidth, but this is usually around 30% or less.
>>
>> What I noticed during the tests: when running the test locally on
>> the napp-it system, all disks will be fully utilized (read using
>> iostat -x 1). When running the test inside a VM, the disk
>> utilization barely reaches 30% (which seems to reflect the results
>> of the bandwidth displayed by dd).
>>
>> These 30% are only reached if the logical unit of the VM disk has
>> the writeback cache enabled. Disabling it results in 20-30 MB/s with
>> the dd test mentioned above. Enabling it also increases the disk
>> utilization.
>>
>> These values are also seen during disk migration. Migrating one
>> disk results in slow speed and low disk utilization. Migrating
>> several disks in parallel will eventually cause 100% disk
>> utilization.
>>
>> I also tested an NFS share as VM storage in Proxmox. Running the same
>> test inside a VM on the NFS share yields results around 200-220
>> MB/s. This is better (and shows that the traffic is going over the
>> fast link between the servers), but not really there yet, as I still
>> lose a third.
>>
>> I am fairly new to the Solaris and ZFS world, so any help is greatly
>> appreciated.
>>
>> Thanks in advance!
>> >> Steffen > >> _______________________________________________ >> OmniOS-discuss mailing list >> OmniOS-discuss at lists.omniti.com >> http://lists.omniti.com/mailman/listinfo/omnios-discuss [1] > > > Links: > ------ > [1] http://lists.omniti.com/mailman/listinfo/omnios-discuss From henson at acm.org Thu Sep 3 21:37:28 2015 From: henson at acm.org (Paul B. Henson) Date: Thu, 03 Sep 2015 14:37:28 -0700 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> References: <20150811024922.GM3405@bender.unx.cpp.edu> <20150811063335.GD7722@gutsman.lotheac.fi> <20150811141051.GA9505@gutsman.lotheac.fi> <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> Message-ID: <080301d0e690$bb5eb690$321c23b0$@acm.org> > From: Dan McDonald > Sent: Tuesday, August 11, 2015 7:16 AM > > I think the packaging update may be a bit more complicated than just pushing > out openssh, but I don't think it's untenable. Just wondering if there was any further news on this. Joyent just pushed out a change to their illumos branch that removes SunSSH completely and replaces it with OpenSSH which made me think about it :). They actually tweaked upstream OpenSSH to accept some of the SunSSH specific options and in a couple other minor ways be more compatible as a drop in replacement. Personally I don't care about that, vanilla openssh would work for me :), but their changes might be of interest to other omnios users, or if openssh becomes the default omnios ssh implementation rather than an optional after install replacement. Thanks. 
From danmcd at omniti.com Thu Sep 3 21:55:59 2015 From: danmcd at omniti.com (Dan McDonald) Date: Thu, 3 Sep 2015 17:55:59 -0400 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: <080301d0e690$bb5eb690$321c23b0$@acm.org> References: <20150811024922.GM3405@bender.unx.cpp.edu> <20150811063335.GD7722@gutsman.lotheac.fi> <20150811141051.GA9505@gutsman.lotheac.fi> <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> <080301d0e690$bb5eb690$321c23b0$@acm.org> Message-ID: <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> I knew Joyent was working on this. I hope they upstream it soon. I have 7.1p1 in the upcoming bloody, with only the light patching already in omnios-build, plus the recent Lauri T changes. Dan Sent from my iPhone (typos, autocorrect, and all) On Sep 3, 2015, at 5:37 PM, Paul B. Henson wrote: >> From: Dan McDonald >> Sent: Tuesday, August 11, 2015 7:16 AM >> >> I think the packaging update may be a bit more complicated than just > pushing >> out openssh, but I don't think it's untenable. > > Just wondering if there was any further news on this. Joyent just pushed out > a change to their illumos branch that removes SunSSH completely and replaces > it with OpenSSH which made me think about it :). They actually tweaked > upstream OpenSSH to accept some of the SunSSH specific options and in a > couple other minor ways be more compatible as a drop in replacement. > Personally I don't care about that, vanilla openssh would work for me :), > but their changes might be of interest to other omnios users, or if openssh > becomes the default omnios ssh implementation rather than an optional after > install replacement. > > Thanks. > From henson at acm.org Thu Sep 3 22:12:57 2015 From: henson at acm.org (Paul B. 
Henson) Date: Thu, 03 Sep 2015 15:12:57 -0700 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> References: <20150811024922.GM3405@bender.unx.cpp.edu> <20150811063335.GD7722@gutsman.lotheac.fi> <20150811141051.GA9505@gutsman.lotheac.fi> <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> <080301d0e690$bb5eb690$321c23b0$@acm.org> <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> Message-ID: <080401d0e695$b099a800$11ccf800$@acm.org> > From: Dan McDonald > Sent: Thursday, September 03, 2015 2:56 PM > > I knew Joyent was working on this. I hope they upstream it soon. I have 7.1p1 > in the upcoming bloody, with only the light patching already in omnios-build, > plus the recent Lauri T changes. Is upstream going to be amenable to ditching SunSSH? As I recall from the last time the topic was broached, there were a fair number of people who did not want to lose the SunSSH specific changes (RBAC, a couple of other things I don't recall offhand). Perhaps as SunSSH gets more and more obsolete, with only an occasional interoperability bandaid back ported there will be less resistance. From richard at netbsd.org Tue Sep 8 04:32:59 2015 From: richard at netbsd.org (Richard PALO) Date: Tue, 08 Sep 2015 06:32:59 +0200 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <55E2C3E9.9000702@netbsd.org> References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> Message-ID: Thought I would try snoop with port 22. 
From omnios, in one window I issued:

> pfexec snoop -rv -d e1000g0 port 22 |& tee snoop.out

From another I connected to the OI machine and did nothing further (as it hangs in that direction too):

> ssh xx.xx.xxx.xx

In the attached snoop.output, I edited snoop.out to put in a comment after the initial connection (search for "pause after connection"), just before the traffic where things seemingly go sour... I notice a window change to 1024??

At the moment I'm running with the gate @ 2ed96329a073f74bd33f766ab982be14f3205bc9

-- Richard PALO

-------------- next part -------------- A non-text attachment was scrubbed... Name: snoop.output.gz Type: application/gzip Size: 3764 bytes Desc: not available URL:

From danmcd at omniti.com Tue Sep 8 12:12:39 2015 From: danmcd at omniti.com (Dan McDonald) Date: Tue, 8 Sep 2015 08:12:39 -0400 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> Message-ID: <5EE7303C-7920-4087-9B0F-5FB15E9315C7@omniti.com>

> On Sep 8, 2015, at 12:32 AM, Richard PALO wrote:
>
> just before the traffic where things seemingly go sour... I notice a window change to 1024??

Which side is advertising the window change again? And which side is running -gate from 2ed96329a073f74bd33f766ab982be14f3205bc9? This thread has been paged out, so to speak, for long enough. Can you give me the context of which machine is running what to explain the context of the snoop file?

Thanks,
Dan

From stephan.budach at JVM.DE Wed Sep 9 14:23:03 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Wed, 9 Sep 2015 16:23:03 +0200 Subject: [OmniOS-discuss] dladm hanging when creating an aggr after ramping of mtu Message-ID: <55F040C7.5020304@jvm.de>

Hi,

a couple of months ago I was having some issues with dladm and setting the MTU size to 9000 on some ixgbe interfaces.
At that time, dladm became somewhat unresponsive after I had set the MTU on such an interface. This was back in the 006 days, and today I wanted to do the same on some 014 systems. This time, I was able to set the MTU to 9216, as it is set on my Nexus FEXes, but upon creating a link aggregation using

dladm create-aggr -l ixgbe1 -l ixgbe3 aggr0

dladm is hanging again, and the network traffic from ixgbe0 was suspended for some length of time, so much so that the iSCSI initiator on the other end of ixgbe0 started to throw iSCSI connection errors. Luckily the connection resumed before the iSCSI timeout was due, but dladm is still hanging around, waiting for whatever?

Is there any option to get rid of dladm without taking down the whole box?

Thanks,
Stephan

From danmcd at omniti.com Wed Sep 9 14:30:13 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 9 Sep 2015 10:30:13 -0400 Subject: [OmniOS-discuss] dladm hanging when creating an aggr after ramping of mtu In-Reply-To: <55F040C7.5020304@jvm.de> References: <55F040C7.5020304@jvm.de> Message-ID: <090C40BE-F732-4AAB-A6F0-22F6355CDD12@omniti.com>

> On Sep 9, 2015, at 10:23 AM, Stephan Budach wrote:
>
> Is there any option to get rid of dladm without taking down the whole box?

First, see what it's locked on:

pstack `pgrep dladm`

If you know the PID of the hung process, substitute it for the argument to pstack. I'm assuming you've tried killing this process?

Dan

From martin.truhlar at archcon.cz Wed Sep 9 16:04:22 2015 From: martin.truhlar at archcon.cz (Martin Truhlář) Date: Wed, 9 Sep 2015 18:04:22 +0200 Subject: [OmniOS-discuss] data gone ...? In-Reply-To: References: Message-ID:

Hi Ben,

Actually, I have a KVM above the virtualized Windows server, and KVM provides all necessary services, like iSCSI. I suspect the problem here was a lack of free space on the pool, because I had it more than 90% filled with data and the OmniOS pool signalled 0% free space.
The fastest solution has been deletion of this disk, restoration from backup and cleaning up some useless data (there weren't any databases or system disks). Now I unfortunately have a different problem, for which I have started a new thread. Thank you for your time. Martin Truhlar -----Original Message----- From: Ben Kitching [mailto:narratorben at icloud.com] Sent: Saturday, August 15, 2015 12:37 PM To: Martin Truhlář Cc: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] data gone ...? Hi Martin, You say that you are exporting a volume over iSCSI to your windows server. I assume that means you have an NTFS (or other windows filesystem) sitting on top of the iSCSI volume? It might be worth using windows tools to check the integrity of that filesystem, as it may be that rather than ZFS that is causing problems. Are you using the built-in Windows iSCSI initiator? I've had problems with this in the past on versions of windows older than windows 8 / server 2012, due to it not supporting iSCSI unmap commands and therefore being unable to tell ZFS to free blocks when files are deleted. You can see if you are having this problem by comparing the free space reported by both windows and ZFS. If there is a disparity then you are likely experiencing this problem, and could ultimately end up in a situation where ZFS will stop allowing writes because it thinks the volume is full, no matter how many files you delete from the windows end. I saw this manifest as errors with the NTFS filesystem on the windows end, as from Windows' point of view it has free space and can't understand why it isn't allowed to write; it sees it as an error. On 15 Aug 2015, at 00:38, Martin Truhlář wrote: Hello everyone, I have a little problem here. I'm using OmniOS v11 r151014 with nappit 0.9f5 and 3 pools (2 data pools and a system pool). There is a problem with epool, which I'm sharing by iSCSI to a Windows 2008 SBS server. This pool is a few days old, but the used disks are about 5 years old.
Obviously something happened with one 500GB disk (S:0 H:106 T:12), but data on epool seems to be in good condition. But I had a problem accessing some data on that pool, and today most of it (roughly 2/3) has disappeared. Yet ZFS seems to be ok, and the available space epool indicates is the same as the day before. I welcome any advice. Martin Truhlar

pool: dpool
 state: ONLINE
  scan: scrub repaired 0 in 14h11m with 0 errors on Thu Aug 13 14:34:21 2015
config:

        NAME                       STATE   READ WRITE CKSUM  CAP       Product           /napp-it IOstat mess
        dpool                      ONLINE     0     0     0
          mirror-0                 ONLINE     0     0     0
            c1t50014EE00400FA16d0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE     0     0     0  1 TB      WDC WD1003FBYX-0  S:0 H:0 T:0
          mirror-1                 ONLINE     0     0     0
            c1t50014EE05950B131d0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE     0     0     0  1 TB      WDC WD1003FBYZ-0  S:0 H:0 T:0
          mirror-2                 ONLINE     0     0     0
            c1t50014EE05958C51Bd0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-3                 ONLINE     0     0     0
            c1t50014EE0AEAE7540d0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
        logs
          mirror-4                 ONLINE     0     0     0
            c1t55CD2E404B88ABE1d0  ONLINE     0     0     0  120 GB    INTEL SSDSC2BW12  S:0 H:0 T:0
            c1t55CD2E404B88E4CFd0  ONLINE     0     0     0  120 GB    INTEL SSDSC2BW12  S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE     0     0     0  180 GB    INTEL SSDSC2BW18  S:0 H:0 T:0

errors: No known data errors

pool: epool
 state: ONLINE
  scan: scrub repaired 0 in 6h26m with 0 errors on Fri Aug 14 07:17:03 2015
config:

        NAME                       STATE   READ WRITE CKSUM  CAP       Product           /napp-it IOstat mess
        epool                      ONLINE     0     0     0
          raidz1-0                 ONLINE     0     0     0
            c1t50014EE1578AC0B5d0  ONLINE     0     0     0  500.1 GB  WDC WD5002ABYS-0  S:0 H:0 T:0
            c1t50014EE1578B1091d0  ONLINE     0     0     0  500.1 GB  WDC WD5002ABYS-0  S:0 H:106 T:12
            c1t50014EE1ACD9A82Bd0  ONLINE     0     0     0  500.1 GB  WDC WD5002ABYS-0  S:0 H:1 T:0
            c1t50014EE1ACD9AC4Ed0  ONLINE     0     0     0  500.1 GB  WDC WD5002ABYS-0  S:0 H:1 T:0

errors: No known data errors
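Ben's suggestion earlier in the thread, comparing the free space reported by Windows against what ZFS reports, can be sketched numerically. All figures below are hypothetical; on a real system the ZFS side would come from 'zfs get -Hp -o value referenced' on the zvol backing the LUN, and the NTFS side from the Windows volume properties:

```shell
# Hypothetical figures; substitute real ones:
#   ZFS side:   zfs get -Hp -o value referenced epool/<zvol-name>
#   NTFS side:  used bytes from the Windows volume properties
zfs_referenced=480000000000   # bytes the zvol still references in ZFS
ntfs_used=310000000000        # bytes NTFS believes are in use

# Without iSCSI UNMAP, blocks freed by NTFS are never released back
# to ZFS, so the difference approximates the leaked space.
leak=$((zfs_referenced - ntfs_used))
echo "freed by NTFS but never reclaimed by ZFS: $leak bytes"
```

A large and growing difference is the symptom Ben describes for initiators without UNMAP support (older than Windows 8 / Server 2012).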
_______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss From martin.truhlar at archcon.cz Wed Sep 9 16:24:03 2015 From: martin.truhlar at archcon.cz (=?iso-8859-2?Q?Martin_Truhl=E1=F8?=) Date: Wed, 9 Sep 2015 18:24:03 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance Message-ID: Hello everybody, I have a problem here that I can't get past. My Windows server runs as a virtual machine under KVM. I'm using a 10Gb network card. On this hw configuration I expect much better performance than I'm getting. Two less important disks use the KVM cache, which improves performance a bit. But I don't want to use KVM's cache for the system and database disks, and there I'm getting 6MB/s for writing. Also, 4K write performance is low even with the KVM cache. This is the pool disk configuration.

pool: dpool
 state: ONLINE
  scan: scrub repaired 0 in 14h11m with 0 errors on Thu Aug 13 14:34:21 2015
config:

        NAME                       STATE   READ WRITE CKSUM  CAP       Product           /napp-it IOstat mess
        dpool                      ONLINE     0     0     0
          mirror-0                 ONLINE     0     0     0
            c1t50014EE00400FA16d0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE     0     0     0  1 TB      WDC WD1003FBYX-0  S:0 H:0 T:0
          mirror-1                 ONLINE     0     0     0
            c1t50014EE05950B131d0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE     0     0     0  1 TB      WDC WD1003FBYZ-0  S:0 H:0 T:0
          mirror-2                 ONLINE     0     0     0
            c1t50014EE05958C51Bd0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-3                 ONLINE     0     0     0
            c1t50014EE0AEAE7540d0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE     0     0     0  1 TB      WDC WD1002F9YZ-0  S:0 H:0 T:0
        logs
          mirror-4                 ONLINE     0     0     0
            c1t55CD2E404B88ABE1d0  ONLINE     0     0     0  120 GB    INTEL SSDSC2BW12  S:0 H:0 T:0
            c1t55CD2E404B88E4CFd0  ONLINE     0     0     0  120 GB    INTEL SSDSC2BW12  S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE     0     0     0  180 GB    INTEL SSDSC2BW18  S:0 H:0 T:0

errors: No known data errors

Any advice for performance improvement appreciated. Martin Truhlar -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pic.PNG Type: image/png Size: 148736 bytes Desc: pic.PNG URL: From danmcd at omniti.com Wed Sep 9 16:32:23 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 9 Sep 2015 12:32:23 -0400 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: References: Message-ID: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> > On Sep 9, 2015, at 12:24 PM, Martin Truhlář wrote: > > Hello everybody, > > I have a problem here that I can't get past. My Windows server runs as a virtual machine under KVM. I'm using a 10Gb network card. On this hw configuration I expect much better performance than I'm getting. Two less important disks use the KVM cache, which improves performance a bit. But I don't want to use KVM's cache for the system and database disks, and there I'm getting 6MB/s for writing. Also, 4K write performance is low even with the KVM cache. So you have windows on KVM, and KVM is using iSCSI to speak to OmniOS? That's a lot of indirection... Question: What's the MTU on the 10Gig Link? Dan From jtyocum at uw.edu Wed Sep 9 16:58:27 2015 From: jtyocum at uw.edu (John Yocum) Date: Wed, 9 Sep 2015 09:58:27 -0700 Subject: [OmniOS-discuss] ipfilter refresh bug? Message-ID: <55F06533.9040204@uw.edu> I'm running omnios-8c08411. Yep, it's old. A new system running the latest LTS will be set up in the next couple of months, and this system will be redone. Anyway... I'm running into a possible bug in the processing of ipfilter's ippools. I added some new addresses to an existing pool, and ran 'svcadm refresh ipfilter:default', only to find my ippool didn't change. The logs showed: [ Sep 2 14:00:39 Rereading configuration. ] [ Sep 2 14:00:39 Executing refresh method ("/lib/svc/method/ipfilter reload").
] 0 objects flushed load_pool:SIOCLOOKUPADDTABLE: File exists load_pool:SIOCLOOKUPADDTABLE: File exists Set 1 now inactive filter sync'd 0 entries flushed from NAT table 4 entries flushed from NAT list [ Sep 2 14:00:39 Method "refresh" exited with status 0. ] So, is this just an intended behavior or a bug? I've found a few mailing list posts from 10 years ago about it, but no explanation. http://unix.derkeiler.com/Newsgroups/comp.unix.solaris/2005-12/msg00652.html Thanks! -- John Yocum, Systems Administrator, DEOHS From danmcd at omniti.com Wed Sep 9 17:29:06 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 9 Sep 2015 13:29:06 -0400 Subject: [OmniOS-discuss] ipfilter refresh bug? In-Reply-To: <55F06533.9040204@uw.edu> References: <55F06533.9040204@uw.edu> Message-ID: > On Sep 9, 2015, at 12:58 PM, John Yocum wrote: > > ran 'svcadm refresh ipfilter:default', only to find my ippool didn't change. Try "svcadm restart" instead. The "file exists" may indicate rule changes that collide with older ones. Check afterwards, in case you malformed the inputs as well. Dan From stephan.budach at JVM.DE Wed Sep 9 19:01:49 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Wed, 9 Sep 2015 21:01:49 +0200 Subject: [OmniOS-discuss] dladm hanging when creating an aggr after ramping of mtu In-Reply-To: <090C40BE-F732-4AAB-A6F0-22F6355CDD12@omniti.com> References: <55F040C7.5020304@jvm.de> <090C40BE-F732-4AAB-A6F0-22F6355CDD12@omniti.com> Message-ID: <55F0821D.5090200@jvm.de> Am 09.09.15 um 16:30 schrieb Dan McDonald: >> On Sep 9, 2015, at 10:23 AM, Stephan Budach wrote: >> >> Is there any option to get rid of dladm without taking down the whole box? > First, see what it's locked on: > > pstack `pgrep dladm` > > if you know the PID of the hung process, substitute it for the argument to pstack. > > I'm assuming you've tried killing this process? > > Dan > Yeah, I tried that and guess what, just right after hitting the send button dladm vanished. 
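For reference, the pstack recipe quoted above as a transcript; the commands are real illumos utilities, but no actual output from the affected box is reproduced here:

```
# pstack `pgrep dladm`          (or: pstack <PID> if the PID is known)
...                             thread stacks print here; the topmost
                                user-level frames show which call the
                                hung dladm is blocked in
# pkill dladm                   worth trying, though a process stuck on
                                a kernel lock often cannot be killed
```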
However, it had done something already, leaving me unable to create aggr0 again, since it's already present, albeit not visible. Somehow, this seems to strike if I try to create an aggregation on a card that already has one port in use; in this case ixgbe0 is already up and running. So, I just tried to create aggr0 only from ixgbe3, which went smoothly; however, adding ixgbe1 to aggr0 resulted in dladm not returning, so I am going for a pstack now... well, that was that. Unfortunately, right when I got the pstack, which basically only showed the command line I called dladm with... dladm add-aggr -l ixgbe0 aggr0 ...my targets on the other end went nuts and I had to perform some work on the clusters behind those. It actually ended up with me rebooting the 014 node, and after the reboot aggr0 had been configured. I do have some more of those boxes left where I want to ramp up the mtu size and configure link aggregation afterwards, so I am sure I can make that happen again... ;) Cheers, Stephan From richard at netbsd.org Thu Sep 10 08:13:07 2015 From: richard at netbsd.org (Richard PALO) Date: Thu, 10 Sep 2015 10:13:07 +0200 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <5EE7303C-7920-4087-9B0F-5FB15E9315C7@omniti.com> References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5EE7303C-7920-4087-9B0F-5FB15E9315C7@omniti.com> Message-ID: <55F13B93.2020702@netbsd.org> On 08/09/15 14:12, Dan McDonald wrote: > >> On Sep 8, 2015, at 12:32 AM, Richard PALO wrote: >> >> before the traffic seemingly when things go sour... I notice a Window changed to 1024?? > > Which side is advertising the window change again? And which side is running -gate from 2ed96329a073f74bd33f766ab982be14f3205bc9 ? > > This thread has been paged out, so to speak, for long enough.
Can you give me the context of which machine is running what to explain the context of the snoop file? > > Thanks, > Dan > snoop is running on the omnios machine (omnis) with [near]latest gate *and* is the initiator of the ssh session (having address 192.168.0.6) to the OI target (xx.xx.xxx.xx) on the LAN, looking at 'arp -an' >e1000g0 192.168.0.1 255.255.255.255 00:24:d4:78:eb:ac >e1000g0 192.168.0.6 255.255.255.255 SPLA 00:25:90:f3:5c:8c omnis is 192.168.0.6 and 192.168.0.1 is my freebox adsl router. from what I can gather, the window change request is in a packet "arriving" so it's probably not requested by the local omnis machine. From omnios at citrus-it.net Thu Sep 10 08:40:14 2015 From: omnios at citrus-it.net (Andy Fiddaman) Date: Thu, 10 Sep 2015 08:40:14 +0000 (UTC) Subject: [OmniOS-discuss] Dell vs. Supermicro and any recommendations.. In-Reply-To: References: <81F2A38D-5298-4FB8-B0BB-24E4506D6040@omniti.com> <201503121415.t2CEFBcC022831@elvis.arl.psu.edu> Message-ID: Hi Prasad, We're running OmniOS r151014 on some Dell R730s with PERC H730 Mini HBAs. Even though this is not a recommended combination, with the updates we made to the driver a few months ago, we're not seeing any problems. We have some more driver updates that haven't yet been upstreamed but these mostly concentrate on further improving performance and fixing some smaller bugs and we aren't yet running these on production systems. Unfortunately the driver's version string wasn't changed with the updates but here's the checksum of the one we're running: reaper# (188) modinfo | grep sas 55 fffffffff7ac2000 1d070 172 1 mr_sas (6.503.00.00ILLUMOS) reaper# (189) digest -a sha1 /kernel/drv/amd64/mr_sas 2c81e48297a06a585cf5ae31ee92a537e7563962 reaper# (190) Here are the firmware versions from one of our HBAs. Are yours the same? reaper# (185) megacli -AdpAllInfo -a0 Versions ================ Product Name : PERC H730 Mini Serial No : XXXXX FW Package Build: 25.2.2-0004 Mfg. 
Data ================ Mfg. Date : 12/18/14 Rework Date : 12/18/14 Revision No : A00 Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 6.18.03.0_4.16.07.00_0x06070400 Ctrl-R Version : 5.03-0010 FW Version : 4.241.00-4163 NVDATA Version : 3.1310.00-0084 Boot Block Version : 3.02.00.00-0000 and the enclosure (which I expect will be different on the xd): reaper# (198) megacli -EncInfo -aALL Number of enclosures on adapter 0 -- 1 Enclosure type : SES Inquiry data : Vendor Identification : DP Product Identification : BP13G+EXP Product Revision Level : 1.09 Vendor Specific : Check these configuration parameters too: reaper# (192) megacli -AdpAllInfo -a0 | egrep 'JBOD|Direct PD' Enable JBOD : Yes Direct PD Mapping : Yes Enable JBOD : Yes reaper# (195) echo '::mr_sas -dtv' | mdb -k mrsas_t inst max_fw_cmds intr_type =========================================== fffff00902f9d000 0 927 MSI-X /pci at 0,0/pci8086,2f02 at 1/pci1028,1f49 at 0 Physical/Logical Target ----------------------- Physical sd 0 Physical sd 1 ... Physical sd 15 vendor_id device_id subsysvid subsysid -------------------------------------- 0x1000 0x5d 0x1028 0x1f49 and, finally, check the adapter log file: reaper# (207) megacli -AdpAlILog -a0 We only see a handful of log entries a day with the current driver - with the old one the log was being hammered (which is one of the things that caused the performance issues). Hope that helps, Andy On Tue, 1 Sep 2015, prasad wrote: ; Andy, ; ; We are also facing same issue in Dell R730xd with OmniOS latest stable. ; We tried with LSI drivers version 6-607-02-00 and 6-605-01-00. But no ; luck. ; We have enabled HBA mode is raid controller. Lot ioerrors are reporting ; in iostat. ; ; Need some help. 
; ; Regards, ; Prasad ; ; ; ; ; _______________________________________________ ; OmniOS-discuss mailing list ; OmniOS-discuss at lists.omniti.com ; http://lists.omniti.com/mailman/listinfo/omnios-discuss ; -- Citrus IT Limited | +44 (0)870 199 8000 | enquiries at citrus-it.co.uk Rock House Farm | Green Moor | Wortley | Sheffield | S35 7DQ Registered in England and Wales | Company number 4899123 From richard at netbsd.org Thu Sep 10 09:28:12 2015 From: richard at netbsd.org (Richard PALO) Date: Thu, 10 Sep 2015 11:28:12 +0200 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <5EE7303C-7920-4087-9B0F-5FB15E9315C7@omniti.com> References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5EE7303C-7920-4087-9B0F-5FB15E9315C7@omniti.com> Message-ID: <55F14D2C.3010403@netbsd.org> On 08/09/15 14:12, Dan McDonald wrote: > >> On Sep 8, 2015, at 12:32 AM, Richard PALO wrote: >> >> before the traffic seemingly when things go sour... I notice a Window changed to 1024?? > > Which side is advertising the window change again? And which side is running -gate from 2ed96329a073f74bd33f766ab982be14f3205bc9 ? > > This thread has been paged out, so to speak, for long enough. Can you give me the context of which machine is running what to explain the context of the snoop file? > > Thanks, > Dan > Just for completeness, the same story from the OI side; snoop and ssh run here, and 192.168.1.2 is smicro (oi_151a9): > e1000g0 192.168.1.1 255.255.255.255 00:12:ef:21:9c:f8 > e1000g0 192.168.1.2 255.255.255.255 SPLA 00:30:48:f4:33:f0 and 192.168.1.1 is an Orange Business Services SDSL router. -------------- next part -------------- A non-text attachment was scrubbed...
Name: snoop-OI.output.gz Type: application/gzip Size: 3310 bytes Desc: not available URL: From danmcd at omniti.com Thu Sep 10 11:53:00 2015 From: danmcd at omniti.com (Dan McDonald) Date: Thu, 10 Sep 2015 07:53:00 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC Message-ID: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> If you are using a zpool with r151014 and you have an L2ARC ("cache") vdev, I recommend at this time disabling it. You may disable it by uttering: zpool remove <pool> <cache-vdev> For example: zpool remove data c2t2d0 The bug in question has a good analysis here: https://www.illumos.org/issues/6214 This bug can lead to problems ranging from false positives on zpool scrub all the way up to actual pool corruption. We will be updating the package repo AND the install media once 6214 is upstreamed to illumos-gate, and pulled back into the r151014 branch of illumos-omnios. The fix is undergoing some tests from ZFS experts right now to verify its correctness. So please disable your L2ARC/cache devices for maximum data safety. You can add them back after we update r151014 by uttering: zpool add <pool> cache <cache-vdev> PLEASE NOTE the "cache" indicator when you add back. If you omit this, the vdev is ADDED to your pool, an operation one can't reverse. zpool add data cache c2t2d0 Thanks, Dan From danmcd at omniti.com Thu Sep 10 11:53:47 2015 From: danmcd at omniti.com (Dan McDonald) Date: Thu, 10 Sep 2015 07:53:47 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> Message-ID: <83D851CF-DA50-4AE9-B43C-5FE8539F4C5F@omniti.com> > On Sep 10, 2015, at 7:53 AM, Dan McDonald wrote: > > If you are using a zpool with r151014 and you have an L2ARC ("cache") vdev, I recommend at this time disabling it. You may disable it by uttering: This also affects bloody as well.
Dan From chip at innovates.com Thu Sep 10 13:36:58 2015 From: chip at innovates.com (Schweiss, Chip) Date: Thu, 10 Sep 2015 08:36:58 -0500 Subject: [OmniOS-discuss] Periodic SSH connect failures Message-ID: On OmniOS r151014 I use ssh with rsa-keys to allow my storage systems to communicate and launch things like 'zfs receive'. Periodically the connection fails with "ssh_exchange_identification: Connection closed by remote host". When this happens, about half of the connection attempts fail this way for about 10-20 minutes, then things return to normal.

root at mir-dr-zfs01:/root# ssh -v mirpool02
OpenSSH_6.6, OpenSSL 1.0.1p 9 Jul 2015
debug1: Reading configuration data /etc/opt/csw/ssh/ssh_config
debug1: Connecting to mirpool02 [10.28.125.130] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type 2
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6
ssh_exchange_identification: Connection closed by remote host
root at mir-dr-zfs01:/root# echo $?
255

I've not been able to get logs out of the SunSSH server; turning things on in /etc/syslog.conf doesn't seem to work. What am I missing in trying to get more information out of the ssh server? I use the OpenSSH client from OpenCSW; with the SunSSH client the problem happens nearly twice as often. Any suggestions on how to make these connections robust? Thanks! -Chip -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stephan.budach at JVM.DE Thu Sep 10 16:09:05 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Thu, 10 Sep 2015 18:09:05 +0200 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <83D851CF-DA50-4AE9-B43C-5FE8539F4C5F@omniti.com> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <83D851CF-DA50-4AE9-B43C-5FE8539F4C5F@omniti.com> Message-ID: <55F1AB21.7040201@jvm.de> On 10.09.15 at 13:53, Dan McDonald wrote: >> On Sep 10, 2015, at 7:53 AM, Dan McDonald wrote: >> >> If you are using a zpool with r151014 and you have an L2ARC ("cache") vdev, I recommend at this time disabling it. You may disable it by uttering: > This also affects bloody as well. > > Dan > Hi Dan, thanks for the heads-up! I will disable/remove my cache devices right away. This will also be a good test of how much of a hit we will take by that, performance-wise... ;) Thanks, Stephan From chip at innovates.com Thu Sep 10 16:15:12 2015 From: chip at innovates.com (Schweiss, Chip) Date: Thu, 10 Sep 2015 11:15:12 -0500 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <83D851CF-DA50-4AE9-B43C-5FE8539F4C5F@omniti.com> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <83D851CF-DA50-4AE9-B43C-5FE8539F4C5F@omniti.com> Message-ID: Is this limited to r151014 and bloody? I was under the impression this bug went back to the introduction of L2ARC compression. -Chip On Thu, Sep 10, 2015 at 6:53 AM, Dan McDonald wrote: > > > On Sep 10, 2015, at 7:53 AM, Dan McDonald wrote: > > > > If you are using a zpool with r151014 and you have an L2ARC ("cache") > vdev, I recommend at this time disabling it. You may disable it by > uttering: > > This also affects bloody as well.
> > Dan > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danmcd at omniti.com Thu Sep 10 16:43:40 2015 From: danmcd at omniti.com (Dan McDonald) Date: Thu, 10 Sep 2015 12:43:40 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <83D851CF-DA50-4AE9-B43C-5FE8539F4C5F@omniti.com> Message-ID: > On Sep 10, 2015, at 12:15 PM, Schweiss, Chip wrote: > > Is this limited to r151014 and bloody? > > I was under the impression this bug went back to the introduction of L2ARC compression. Did you read the analysis of 6214? It calls out this commit as the cause: Author: Chris Williamson Date: Mon Dec 29 19:12:23 2014 -0800 5408 managing ZFS cache devices requires lots of RAM Reviewed by: Christopher Siden Reviewed by: George Wilson Reviewed by: Matthew Ahrens Reviewed by: Don Brady Reviewed by: Josef 'Jeff' Sipek Approved by: Garrett D'Amore That wasn't in '012, just '014 and later. Dan From chip at innovates.com Thu Sep 10 17:19:38 2015 From: chip at innovates.com (Schweiss, Chip) Date: Thu, 10 Sep 2015 12:19:38 -0500 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <83D851CF-DA50-4AE9-B43C-5FE8539F4C5F@omniti.com> Message-ID: On Thu, Sep 10, 2015 at 11:43 AM, Dan McDonald wrote: > > > On Sep 10, 2015, at 12:15 PM, Schweiss, Chip wrote: > > > > Is this limited to r151014 and bloody? > > > > I was under the impression this bug went back to the introduction of > L2ARC compression. > > Did you read the analysis of 6214? 
It calls out this commit as the cause: > > Author: Chris Williamson > Date: Mon Dec 29 19:12:23 2014 -0800 > > 5408 managing ZFS cache devices requires lots of RAM > Reviewed by: Christopher Siden > Reviewed by: George Wilson > Reviewed by: Matthew Ahrens > Reviewed by: Don Brady > Reviewed by: Josef 'Jeff' Sipek > Approved by: Garrett D'Amore > > That wasn't in '012, just '014 and later. > Sorry, I missed that. I was going off assumptions from other communications. -Chip > > Dan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtyocum at uw.edu Thu Sep 10 18:24:58 2015 From: jtyocum at uw.edu (John Yocum) Date: Thu, 10 Sep 2015 11:24:58 -0700 Subject: [OmniOS-discuss] ipfilter refresh bug? In-Reply-To: References: <55F06533.9040204@uw.edu> Message-ID: <55F1CAFA.1000207@uw.edu> On 09/09/2015 10:29 AM, Dan McDonald wrote: > >> On Sep 9, 2015, at 12:58 PM, John Yocum wrote: >> >> ran 'svcadm refresh ipfilter:default', only to find my ippool didn't change. > > Try "svcadm restart" instead. The "file exists" may indicate rule changes that collide with older ones. Check afterwards, in case you malformed the inputs as well. > > Dan > Thanks Dan, that worked. -- John Yocum, Systems Administrator, DEOHS From basil.crow at delphix.com Thu Sep 10 23:47:55 2015 From: basil.crow at delphix.com (Basil Crow) Date: Thu, 10 Sep 2015 16:47:55 -0700 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> References: <20150811024922.GM3405@bender.unx.cpp.edu> <20150811063335.GD7722@gutsman.lotheac.fi> <20150811141051.GA9505@gutsman.lotheac.fi> <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> <080301d0e690$bb5eb690$321c23b0$@acm.org> <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> Message-ID: Hi Dan and Lauri, On Thu, Sep 3, 2015 at 2:55 PM, Dan McDonald wrote: > I knew Joyent was working on this. I hope they upstream it soon. 
I have 7.1p1 in the upcoming bloody, with only the light patching already in omnios-build, plus the recent Lauri T changes. Joyent's patches to OpenSSH are here: https://github.com/joyent/illumos-extra/tree/master/openssh/Patches These patches make OpenSSH play nicer with the illumos PAM implementation and privilege model and add backwards compatibility with SunSSH, among other things. I recently upgraded Delphix's illumos distribution to use the OpenSSH package in OmniOS bloody. The transition hasn't been without some pain. For example, we realized that older SunSSH clients can't connect to modern OpenSSH servers with default settings (illumos issue #5283). Joyent has a patch that uses the key exchange compatibility mechanism to recognize old SunSSH versions and present a key exchange proposal that always includes the dh-group14 and dh-group1 algorithms (0031-Compatibility-for-SunSSH_1.5-should-include-old-DH-K.patch). We also realized that some of our tests were relying on the old SunSSH locale negotiation behavior to propagate locale settings from the SSH client to the SSH server. Joyent has a patch that preserves most of the old SunSSH locale negotiation behavior (0032-Accept-LANG-and-LC_-environment-variables-from-clien.patch). It would be great if some or all of Joyent's patches could be added to the OpenSSH build scripts in bloody. The various PAM- and privilege- related patches seem critical. While we can live without the backwards compatibility patches (and have been fixing our ecosystem to not rely on any SunSSH-specific functionality), having them would probably significantly ease the migration for most users. 
Basil From danmcd at omniti.com Thu Sep 10 23:49:09 2015 From: danmcd at omniti.com (Dan McDonald) Date: Thu, 10 Sep 2015 19:49:09 -0400 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: References: <20150811024922.GM3405@bender.unx.cpp.edu> <20150811063335.GD7722@gutsman.lotheac.fi> <20150811141051.GA9505@gutsman.lotheac.fi> <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> <080301d0e690$bb5eb690$321c23b0$@acm.org> <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> Message-ID: We take pull requests... :) Thanks! Dan From henson at acm.org Fri Sep 11 02:26:08 2015 From: henson at acm.org (Paul B. Henson) Date: Thu, 10 Sep 2015 19:26:08 -0700 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> Message-ID: <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> > From: Dan McDonald > Sent: Thursday, September 10, 2015 4:53 AM > > This bug can lead to problems ranging from false-positives on zpool scrub all the > way up to actual pool corruption. Ouch 8-/, thanks for the heads up. I removed my cache devices for now; are these problems something that is immediately visible, or could I possibly still have some latent corruption hiding somewhere that might not be noticed for months :(? My last scrub ran last Friday and said it repaired 0 with 0 errors and there were no known data errors. Now that the cache devices have been removed, would another successful scrub prove there has been no damage? Thanks. From danmcd at omniti.com Fri Sep 11 02:31:37 2015 From: danmcd at omniti.com (Dan McDonald) Date: Thu, 10 Sep 2015 22:31:37 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> Message-ID: I'm not sure about latent corruption. 
I do know that the worst-case corruption won't be caught by zpool scrub because the corrupt data checksums nicely. Dan Sent from my iPhone (typos, autocorrect, and all) On Sep 10, 2015, at 10:26 PM, Paul B. Henson wrote: >> From: Dan McDonald >> Sent: Thursday, September 10, 2015 4:53 AM >> >> This bug can lead to problems ranging from false-positives on zpool scrub > all the >> way up to actual pool corruption. > > Ouch 8-/, thanks for the heads up. I removed my cache devices for now; are > these problems something that is immediately visible, or could I possibly > still have some latent corruption hiding somewhere that might not be noticed > for months :(? > > My last scrub ran last Friday and said it repaired 0 with 0 errors and there > were no known data errors. Now that the cache devices have been removed, > would another successful scrub prove there has been no damage? > > Thanks. > > From henson at acm.org Fri Sep 11 02:36:37 2015 From: henson at acm.org (Paul B. Henson) Date: Thu, 10 Sep 2015 19:36:37 -0700 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> Message-ID: <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> > From: Dan McDonald > Sent: Thursday, September 10, 2015 7:32 PM > > I'm not sure about latent corruption. I do know that the worst-case corruption > won't be caught by zpool scrub because the corrupt data checksums nicely. Blerg 8-/. So any pool running an illumos version with this bug and cache devices might have an undetectable timebomb hiding somewhere that could cause a failure at some later point? That's a bit disturbing . Maybe I'll ask on the zfs list and see if they know of any way to check whether a given pool was hit. Thanks. 
From danmcd at omniti.com Fri Sep 11 03:25:57 2015 From: danmcd at omniti.com (Dan McDonald) Date: Thu, 10 Sep 2015 23:25:57 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> Message-ID: <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> Run zdb(1M) on your pool. If that doesn't trip, that's better. Dan Sent from my iPhone (typos, autocorrect, and all) On Sep 10, 2015, at 10:36 PM, Paul B. Henson wrote: >> From: Dan McDonald >> Sent: Thursday, September 10, 2015 7:32 PM >> >> I'm not sure about latent corruption. I do know that the worst-case > corruption >> won't be caught by zpool scrub because the corrupt data checksums nicely. > > Blerg 8-/. So any pool running an illumos version with this bug and cache > devices might have an undetectable timebomb hiding somewhere that could > cause a failure at some later point? That's a bit disturbing. Maybe > I'll ask on the zfs list and see if they know of any way to check whether a > given pool was hit. > > Thanks. > From henson at acm.org Fri Sep 11 05:11:15 2015 From: henson at acm.org (Paul B. Henson) Date: Thu, 10 Sep 2015 22:11:15 -0700 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> Message-ID: > On Sep 10, 2015, at 8:25 PM, Dan McDonald wrote: > > Run zdb(1M) on your pool. If that doesn't trip, that's better. Any suggestion as to the best invocation to check with? I looked through the man page but couldn't quite decipher which of the many options to use :). 
Or just run it with just the pool name and no specific options? Thanks... From danmcd at omniti.com Fri Sep 11 10:12:42 2015 From: danmcd at omniti.com (Dan McDonald) Date: Fri, 11 Sep 2015 06:12:42 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> Message-ID: Pool + no options is good to start. Most of the other options are there for follow up runs if something goes wrong. Dan Sent from my iPhone (typos, autocorrect, and all) > On Sep 11, 2015, at 1:11 AM, Paul B. Henson wrote: > > >> On Sep 10, 2015, at 8:25 PM, Dan McDonald wrote: >> >> Run zdb(1M) on your pool. If that doesn't trip, that's better. > > Any suggestion as to the best invocation to check with? I looked through the man page but couldn't quite decipher which of the many options to use :). Or just run it with just the pool name and no specific options? > > Thanks... From alex at cooperi.net Fri Sep 11 16:49:32 2015 From: alex at cooperi.net (Alex Wilson) Date: Fri, 11 Sep 2015 09:49:32 -0700 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: References: <20150811024922.GM3405@bender.unx.cpp.edu> <20150811063335.GD7722@gutsman.lotheac.fi> <20150811141051.GA9505@gutsman.lotheac.fi> <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> <080301d0e690$bb5eb690$321c23b0$@acm.org> <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> Message-ID: <5DA8DDE0-B03E-4AC5-A2BE-B8BF1F3106B3@cooperi.net> Basil Crow wrote: > These patches make OpenSSH play nicer with the illumos PAM > implementation and privilege model and add backwards compatibility > with SunSSH, among other things. > > I recently upgraded Delphix's illumos distribution to use the OpenSSH > package in OmniOS bloody. The transition hasn't been without some > pain. For example,... 
> > It would be great if some or all of Joyent's patches could be added to > the OpenSSH build scripts in bloody. While OmniOS is perfectly welcome to grab our patches, I want to make sure I share this warning with everyone: we are still ironing out all the problems with these patches at the moment, and they can change fairly rapidly. If you don't want to come along for the whole ride with us (and update it a lot), I'd probably recommend holding off for a SmartOS release cycle or two (i.e., about a month or so). At that time we'd like to start a conversation about upstreaming with Illumos anyway, and what the picture going forwards should be for SSH in the Illumos gate. But I want to head into that conversation with a patched OpenSSH that works and does what we need it to do. As noted in the README in illumos-extra, there are a few of the patches that I would like to clean up and propose to upstream OpenSSH for integration, too (such as dropping Illumos/Solaris privileges where appropriate). > The various PAM- and privilege-related patches seem critical. While we can > live without the backwards compatibility patches (and have been fixing our > ecosystem to not rely on any SunSSH-specific functionality), having them > would probably significantly ease the migration for most users. In a lot of ways the other distros have an easier time with these compatibility problems than SmartOS. Because we boot as a read-only live image we don't currently have any means to perform config migration or give users information while they upgrade -- and it's unclear what upgrade actually means, because users are largely used to being able to just boot onto whatever platform image they downloaded, whether older or newer. OmniOS probably doesn't necessarily need as strict a religion of config backwards-compat as that which we're subscribing to at the moment. 
I think the rest of the distro maintainers are also going to have other opinions about which parts of the compatibility problem they want to deal with and which parts they do not -- and this is why I think the model going forwards should be distros providing SSH and not the Illumos-gate. But I think we can have more of that conversation after everything is working and well-tested. From moo at wuffers.net Fri Sep 11 17:24:10 2015 From: moo at wuffers.net (wuffers) Date: Fri, 11 Sep 2015 13:24:10 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> Message-ID: My pool is definitely slower without the cache devices even though I have 256GB of ARC. Hope the patch comes in soon and I can re-enable them. Does this bug have anything to do with the ZFS corruption I've been seeing as discussed in this thread: http://lists.omniti.com/pipermail/omnios-discuss/2015-August/005449.html My scrub is still running, but seems slower for some reason right now. On Fri, Sep 11, 2015 at 6:12 AM, Dan McDonald wrote: > Pool + no options is good to start. Most of the other options are there > for follow up runs if something goes wrong. > > Dan > > Sent from my iPhone (typos, autocorrect, and all) > > > On Sep 11, 2015, at 1:11 AM, Paul B. Henson wrote: > > > > > >> On Sep 10, 2015, at 8:25 PM, Dan McDonald wrote: > >> > >> Run zdb(1M) on your pool. If that doesn't trip, that's better. > > > > Any suggestion as to the best invocation to check with? I looked through > the man page but couldn't quite decipher which of the many options to use > :). Or just run it with just the pool name and no specific options? > > > > Thanks... 
> _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan at syneto.net Fri Sep 11 17:32:50 2015 From: dan at syneto.net (Dan Vatca) Date: Fri, 11 Sep 2015 17:32:50 +0000 Subject: [OmniOS-discuss] Compiling Illumos-joyent Message-ID: Now that we can compile Illumos gate on Omnios I was thinking about what it would take to compile Illumos-joyent on OmniOS. Did anyone attempt that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary at genashor.com Fri Sep 11 17:12:11 2015 From: gary at genashor.com (Gary Gendel) Date: Fri, 11 Sep 2015 13:12:11 -0400 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: <5DA8DDE0-B03E-4AC5-A2BE-B8BF1F3106B3@cooperi.net> References: <20150811024922.GM3405@bender.unx.cpp.edu> <20150811063335.GD7722@gutsman.lotheac.fi> <20150811141051.GA9505@gutsman.lotheac.fi> <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> <080301d0e690$bb5eb690$321c23b0$@acm.org> <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> <5DA8DDE0-B03E-4AC5-A2BE-B8BF1F3106B3@cooperi.net> Message-ID: <55F30B6B.40304@genashor.com> Alex, A very sane response. Thanks. Gary On 09/11/2015 12:49 PM, Alex Wilson wrote: > Basil Crow wrote: >> These patches make OpenSSH play nicer with the illumos PAM >> implementation and privilege model and add backwards compatibility >> with SunSSH, among other things. >> >> I recently upgraded Delphix's illumos distribution to use the OpenSSH >> package in OmniOS bloody. The transition hasn't been without some >> pain. For example,... 
> While OmniOS is perfectly welcome to grab our patches, I want to make sure I > share this warning with everyone: we are still ironing out all the problems > with these patches at the moment, and they can change fairly rapidly. If you > don't want to come along for the whole ride with us (and update it a lot), > I'd probably recommend holding off for a SmartOS release cycle or two (i.e., > about a month or so). > > At that time we'd like to start a conversation about upstreaming with > Illumos anyway, and what the picture going forwards should be for SSH in the > Illumos gate. But I want to head into that conversation with a patched > OpenSSH that works and does what we need it to do. > > As noted in the README in illumos-extra, there are a few of the patches that > I would like to clean up and propose to upstream OpenSSH for integration, > too (such as dropping Illumos/Solaris privileges where appropriate). > >> The various PAM- and privilege-related patches seem critical. While we can >> live without the backwards compatibility patches (and have been fixing our >> ecosystem to not rely on any SunSSH-specific functionality), having them >> would probably significantly ease the migration for most users. > In a lot of ways the other distros have an easier time with these > compatibility problems than SmartOS. Because we boot as a read-only live > image we don't currently have any means to perform config migration or give > users information while they upgrade -- and it's unclear what upgrade > actually means, because users are largely used to being able to just boot > onto whatever platform image they downloaded, whether older or newer. > > OmniOS probably doesn't necessarily need as strict a religion of config > backwards-compat as that which we're subscribing to at the moment. I think > the rest of the distro maintainers are also going to have other opinions > about which parts of the compatibility problem they want to deal with and > which parts they do not -- 
and this is why I think the model going forwards > should be distros providing SSH and not the Illumos-gate. > > But I think we can have more of that conversation after everything is > working and well-tested. > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3699 bytes Desc: S/MIME Cryptographic Signature URL: From paul.jochum at alcatel-lucent.com Fri Sep 11 18:24:50 2015 From: paul.jochum at alcatel-lucent.com (Paul Jochum) Date: Fri, 11 Sep 2015 13:24:50 -0500 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> Message-ID: <55F31C72.4000806@alcatel-lucent.com> Hi All: I had a lot of servers (each with more than 1 zpool on them), so I wrote a quick and dirty little script to find all of the cache drives and remove them. It doesn't have any type of error checking in it, but unless you remove the three hash marks on the line "### zpool remove $i $j", it won't do anything either (other than save a copy of your current zpool status under /root/zpool_status-15-09-11.with_cache). 
Hope this helps some of you:

export DATE=`date '+%y-%m-%d'`
# Start by getting a current copy of the status
zpool status > /root/zpool_status-$DATE.with_cache
# find all of the pools on the server
for i in `zpool list -H -o name`
do
    # now, get all of the cache drives from that pool
    # (the sed script grabs the lines from "cache" to "spares",
    # the greps remove the "cache" and "spares" lines,
    # and the awk prints the first item from each line, which is
    # the name of the drive)
    for j in `zpool status $i | sed -n '/cache/,/spares/p' | \
        grep -v cache | grep -v spares | \
        awk '{print $1}'`
    do
        echo "zpool remove $i $j"
        # uncomment the next line, when you are ready to really run this
        ### zpool remove $i $j
    done
done

regards, Paul

On 09/10/2015 06:53 AM, Dan McDonald wrote: > If you are using a zpool with r151014 and you have an L2ARC ("cache") vdev, I recommend at this time disabling it. You may disable it by uttering: > > zpool remove <pool> <device> > > For example: > > zpool remove data c2t2d0 > > The bug in question has a good analysis here: > > https://www.illumos.org/issues/6214 > > This bug can lead to problems ranging from false-positives on zpool scrub all the way up to actual pool corruption. > > We will be updating the package repo AND the install media once 6214 is upstreamed to illumos-gate, and pulled back into the r151014 branch of illumos-omnios. The fix is undergoing some tests from ZFS experts right now to verify its correctness. > > So please disable your L2ARC/cache devices for maximum data safety. You can add them back after we update r151014 by uttering: > > zpool add <pool> cache <device> > > PLEASE NOTE the "cache" indicator when you add back. If you omit this, the vdev is ADDED to your pool, an operation one can't reverse. 
> > zpool add data cache c2t2d0 > > Thanks, > Dan > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From mir at miras.org Fri Sep 11 18:33:50 2015 From: mir at miras.org (Michael Rasmussen) Date: Fri, 11 Sep 2015 20:33:50 +0200 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> Message-ID: <20150911203350.63dc8cba@sleipner.datanom.net> On Fri, 11 Sep 2015 06:12:42 -0400 Dan McDonald wrote: > Pool + no options is good to start. Most of the other options are there for follow up runs if something goes wrong. > What should one look for from the zdb output to identify any errors? If zdb finds errors will it be written at the end of the output, eg. some status? -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: Hummingbirds never remember the words to songs. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 181 bytes Desc: OpenPGP digital signature URL: From mir at miras.org Fri Sep 11 18:36:37 2015 From: mir at miras.org (Michael Rasmussen) Date: Fri, 11 Sep 2015 20:36:37 +0200 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> Message-ID: <20150911203637.1e9aa063@sleipner.datanom.net> On Fri, 11 Sep 2015 06:12:42 -0400 Dan McDonald wrote: > Pool + no options is good to start. Most of the other options are there for follow up runs if something goes wrong. > I am noticing output like this:

13.1G completed ( 5MB/s) estimated time remaining: 10hr 57min 48sec
zdb_blkptr_cb: Got error 50 reading <51, 46437, 0, 11ad8> DVA[0]=<0:2dfb9e7000:f400> [L0 ZFS plain file] fletcher4 lz4 LE contiguous unique single size=20000L/f400P birth=14298739L/14298739P fill=1 cksum=10a5ddf8a0ea:1feec8299be1f76:96c8cc2c0803c6b0:aa5ac7b6c6b75f06 -- skipping

Something to worry about?? -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: You should go home. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 181 bytes Desc: OpenPGP digital signature URL: From danmcd at omniti.com Fri Sep 11 18:37:41 2015 From: danmcd at omniti.com (Dan McDonald) Date: Fri, 11 Sep 2015 14:37:41 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <20150911203350.63dc8cba@sleipner.datanom.net> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> <20150911203350.63dc8cba@sleipner.datanom.net> Message-ID: <45D9B1A0-C2BB-4666-9194-7CCC58389214@omniti.com> > On Sep 11, 2015, at 2:33 PM, Michael Rasmussen wrote: > > What should one look for from the zdb output to identify any errors? Look for assertion failures, or other non-0 exits. Basically, zdb has the kernel zfs implementation in userspace. If a kernel would panic, it will also "panic" zdb, leaving a 'core' around. Dan From richard.elling at richardelling.com Fri Sep 11 18:47:02 2015 From: richard.elling at richardelling.com (Richard Elling) Date: Fri, 11 Sep 2015 11:47:02 -0700 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <45D9B1A0-C2BB-4666-9194-7CCC58389214@omniti.com> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> <20150911203350.63dc8cba@sleipner.datanom.net> <45D9B1A0-C2BB-4666-9194-7CCC58389214@omniti.com> Message-ID: > On Sep 11, 2015, at 11:37 AM, Dan McDonald wrote: > > >> On Sep 11, 2015, at 2:33 PM, Michael Rasmussen wrote: >> >> What should one look for from the zdb output to identify any errors? > > Look for assertion failures, or other non-0 exits. > > Basically, zdb has the kernel zfs implementation in userspace. If a kernel would panic, it will also "panic" zdb, leaving a 'core' around. 
Also recall that zdb reads from disks. Therefore if a pool is imported, it can get out of sync with current reality. That said, it should be reasonably ok for older data that is not being overwritten (deleted in the current pool with no snapshot) -- richard From henson at acm.org Fri Sep 11 20:58:36 2015 From: henson at acm.org (Paul B. Henson) Date: Fri, 11 Sep 2015 13:58:36 -0700 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> Message-ID: <20150911205835.GW3405@bender.unx.cpp.edu> On Fri, Sep 11, 2015 at 06:12:42AM -0400, Dan McDonald wrote: > Pool + no options is good to start. Most of the other options are > there for follow up runs if something goes wrong. Crap. It seg faults :(. I tried it with -A, -AA, and -AAA, and they all died at the same spot: [...] Dataset export/user/henson [ZPL], ID 129, cr_txg 1682568, 46.7G, 2603 objects [...] 6061 1 16K 3.00K 8K 3.00K 100.00 ZFS plain file 6062 1 16K 3.00K 8K 3.00K 100.00 ZFS plain file 6063 1 16K 3.00K 8K 3.00K 10 So that means there's something corrupt in that dataset? From danmcd at omniti.com Fri Sep 11 21:10:16 2015 From: danmcd at omniti.com (Dan McDonald) Date: Fri, 11 Sep 2015 17:10:16 -0400 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <20150911205835.GW3405@bender.unx.cpp.edu> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> <20150911205835.GW3405@bender.unx.cpp.edu> Message-ID: Get the corefile and share it. I have to disappear for now, but knowing where it crashed can help tell us why. Dan Sent from my iPhone (typos, autocorrect, and all) > On Sep 11, 2015, at 4:58 PM, Paul B. 
Henson wrote: > >> On Fri, Sep 11, 2015 at 06:12:42AM -0400, Dan McDonald wrote: >> Pool + no options is good to start. Most of the other options are >> there for follow up runs if something goes wrong. > > Crap. It seg faults :(. I tried it with -A, -AA, and -AAA, and they all > died at the same spot: > > [...] > Dataset export/user/henson [ZPL], ID 129, cr_txg 1682568, 46.7G, 2603 objects > [...] > 6061 1 16K 3.00K 8K 3.00K 100.00 ZFS plain file > 6062 1 16K 3.00K 8K 3.00K 100.00 ZFS plain file > 6063 1 16K 3.00K 8K 3.00K 10 > > So that means there's something corrupt in that dataset? > From henson at acm.org Fri Sep 11 21:48:53 2015 From: henson at acm.org (Paul B. Henson) Date: Fri, 11 Sep 2015 14:48:53 -0700 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> <20150911205835.GW3405@bender.unx.cpp.edu> Message-ID: <20150911214852.GZ3405@bender.unx.cpp.edu> On Fri, Sep 11, 2015 at 05:10:16PM -0400, Dan McDonald wrote: > Get the corefile and share it. I have to disappear for now, but > knowing where it crashed can help tell us why. I'm going to follow up on this on the zfs mailing list where I started a similar thread, I believe you're on that one too. Arne (who reported the bug) asked for a core dump too. Thanks much. From henson at acm.org Fri Sep 11 23:11:50 2015 From: henson at acm.org (Paul B. 
Henson) Date: Fri, 11 Sep 2015 16:11:50 -0700 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: <5DA8DDE0-B03E-4AC5-A2BE-B8BF1F3106B3@cooperi.net> References: <20150811024922.GM3405@bender.unx.cpp.edu> <20150811063335.GD7722@gutsman.lotheac.fi> <20150811141051.GA9505@gutsman.lotheac.fi> <54AAAD81-AEC9-4620-B837-5E0D1382ED18@omniti.com> <080301d0e690$bb5eb690$321c23b0$@acm.org> <2929293F-0BAF-4A1D-A175-0CD7C0B5E747@omniti.com> <5DA8DDE0-B03E-4AC5-A2BE-B8BF1F3106B3@cooperi.net> Message-ID: <0ce801d0ece7$3d9c5340$b8d4f9c0$@acm.org> > From: Alex Wilson > Sent: Friday, September 11, 2015 9:50 AM > > OmniOS probably doesn't necessarily need as strict a religion of config > backwards-compat as that which we're subscribing to at the moment. As an omnios user, I would just as soon have an openssh as close to vanilla upstream as possible, knowing to get rid of legacy configuration that is no longer supported is what release notes are for :). Ideally the non-legacy patches could get upstreamed and reduce the workload on importing new versions of openssh into omnios. Thanks... From danmcd at omniti.com Sat Sep 12 01:46:33 2015 From: danmcd at omniti.com (Dan McDonald) Date: Fri, 11 Sep 2015 21:46:33 -0400 Subject: [OmniOS-discuss] Fwd: [illumos-commits] [illumos/illumos-gate] d4cd03: 6214 zpools going south References: <55f35f01c0a24_4e663fc7dd77d2bc11232c@hookshot-fe5-cp1-prd.iad.github.net.mail> Message-ID: I hope to build r151014 and bloody with this change over the weekend. This should make l2arc usage safe again. Please understand that some have seen POSSIBLE latent corruption, so be careful. 
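Once the fixed build is out and cache devices go back in, the earlier warning in this thread still applies: leaving out the "cache" keyword on `zpool add` attaches the device as a top-level data vdev, an operation that can't be reversed. A small guard along those lines, sketched as a hypothetical helper (the pool and device names come from the example earlier in the thread); it only prints the command, so nothing runs by accident:

```shell
# Hypothetical helper: safe_cache_add POOL DEVICE
# Always inserts the "cache" keyword so the device can't be added as a
# top-level data vdev by mistake. Prints the command instead of running it.
safe_cache_add() {
    echo "zpool add $1 cache $2"
}

safe_cache_add data c2t2d0    # -> zpool add data cache c2t2d0
```

Dropping the echo (or piping the output to sh) actually performs the re-add, once you are satisfied the printed command is right.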
Dan Sent from my iPhone (typos, autocorrect, and all) Begin forwarded message: > From: "GitHub" > Date: September 11, 2015 at 7:08:49 PM EDT > To: illumos-commits at lists.illumos.org > Subject: [illumos-commits] [illumos/illumos-gate] d4cd03: 6214 zpools going south > Reply-To: illumos-commits at lists.illumos.org > > Branch: refs/heads/master > Home: https://github.com/illumos/illumos-gate > Commit: d4cd038c92c36fd0ae35945831a8fc2975b5272c > https://github.com/illumos/illumos-gate/commit/d4cd038c92c36fd0ae35945831a8fc2975b5272c > Author: Arne Jansen > Date: 2015-09-11 (Fri, 11 Sep 2015) > > Changed paths: > M usr/src/uts/common/fs/zfs/arc.c > M usr/src/uts/common/fs/zfs/sys/arc.h > > Log Message: > ----------- > 6214 zpools going south > Reviewed by: Dan McDonald > Reviewed by: Igor Kozhukhov > Reviewed by: George Wilson > Reviewed by: Saso Kiselkov > Approved by: Matthew Ahrens > > > > > > ------------------------------------------- > illumos-commits > Modify Your Subscription: https://www.listbox.com/member/?member_id=21176600&id_secret=21176600-c6432c55 > Unsubscribe Now: https://www.listbox.com/unsubscribe/?member_id=21176600&id_secret=21176600-708d5a56&post_id=20150911190857:12852DB2-58DA-11E5-B3D7-AAF5EF10038B > Powered by Listbox: http://www.listbox.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mir at miras.org Sat Sep 12 07:51:20 2015 From: mir at miras.org (Michael Rasmussen) Date: Sat, 12 Sep 2015 09:51:20 +0200 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 and L2ARC In-Reply-To: <20150911214852.GZ3405@bender.unx.cpp.edu> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <0bd401d0ec39$37d9dc60$a78d9520$@acm.org> <0be001d0ec3a$af18f0d0$0d4ad270$@acm.org> <0A2DC60D-D715-4F96-B2E2-540B9BE48EAB@omniti.com> <20150911205835.GW3405@bender.unx.cpp.edu> <20150911214852.GZ3405@bender.unx.cpp.edu> Message-ID: <20150912095120.096c1067@sleipner.datanom.net> On Fri, 11 Sep 2015 14:48:53 -0700 "Paul B. Henson" wrote: > > I'm going to follow up on this on the zfs mailing list where I started a > similar thread, I believe you're on that one too. Arne (who reported the > bug) asked for a core dump too. > This was the last output before a core dump as well: bp count: 9714726 ganged count: 0 bp logical: 195145003520 avg: 20087 bp physical: 98523881984 avg: 10141 compression: 1.98 bp allocated: 99671800832 avg: 10259 compression: 1.96 bp deduped: 0 ref>1: 0 deduplication: 1.00 SPA allocated: 254987610112 used: 12.80% additional, non-pointer bps of type 0: 909373 Dittoed blocks on same vdev: 103472 Blocks LSIZE PSIZE ASIZE avg comp %Total Type - - - - - - - unallocated 2 32K 4.50K 13.5K 6.75K 7.11 0.00 object directory 6 3.00K 2K 6.00K 1K 1.50 0.00 object array 2 32K 16K 48.0K 24.0K 2.00 0.00 packed nvlist - - - - - - - packed nvlist size 2 32K 1.50K 4.50K 2.25K 21.33 0.00 bpobj - - - - - - - bpobj header - - - - - - - SPA space map header 99.4K 411M 313M 940M 9.45K 1.31 0.99 SPA space map 10 1.07M 1.07M 1.07M 110K 1.00 0.00 ZIL intent log 437 6.83M 588K 1.23M 2.87K 11.90 0.00 DMU dnode 11 22.0K 5.50K 11.5K 1.04K 4.00 0.00 DMU objset - - - - - - - DSL directory 55 31.0K 14.5K 43.5K 809 2.14 0.00 DSL directory child map 53 26.5K 10.0K 30.0K 579 2.65 0.00 DSL dataset snap map 106 1.60M 197K 591K 5.58K 8.29 0.00 DSL props - - - - - - - 
DSL dataset - - - - - - - ZFS znode - - - - - - - ZFS V0 ACL 926K 114G 56.2G 56.3G 62.2K 2.04 60.62 ZFS plain file 1.07K 4.64M 1.02M 2.05M 1.92K 4.53 0.00 ZFS directory 2 2K 1K 2K 1K 2.00 0.00 ZFS master node 2 127K 1K 2K 1K 126.50 0.00 ZFS delete queue 8.26M 66.8G 35.2G 35.6G 4.31K 1.90 38.39 zvol object 8 4K 3.00K 6.00K 768 1.33 0.00 zvol prop - - - - - - - other uint8[] - - - - - - - other uint64[] - - - - - - - other ZAP - - - - - - - persistent error log 11 1.27M 306K 917K 83.3K 4.24 0.00 SPA history - - - - - - - SPA history offsets - - - - - - - Pool properties - - - - - - - DSL permissions - - - - - - - ZFS ACL - - - - - - - ZFS SYSACL - - - - - - - FUID table - - - - - - - FUID table size 1 4K 1K 3.00K 3.00K 4.00 0.00 DSL dataset next clones - - - - - - - scan work queue - - - - - - - ZFS user/group used - - - - - - - ZFS user/group quota - - - - - - - snapshot refcount tags - - - - - - - DDT ZAP algorithm - - - - - - - DDT statistics - - - - - - - System attributes 2 1K 1K 2K 1K 1.00 0.00 SA master node 2 3.00K 1K 2K 1K 3.00 0.00 SA attr registration 4 64K 7.00K 14.0K 3.50K 9.14 0.00 SA attr layouts - - - - - - - scan translations - - - - - - - deduplicated block 54 27.0K 10.5K 31.5K 597 2.57 0.00 DSL deadlist map - - - - - - - DSL deadlist map hdr 9 8K 4K 12.0K 1.33K 2.00 0.00 DSL dir clones - - - - - - - bpobj subobj 26 272K 204K 611K 23.5K 1.34 0.00 deferred free - - - - - - - dedup ditto 5 33.5K 6.00K 18.0K 3.60K 5.58 0.00 other 9.26M 182G 91.8G 92.8G 10.0K 1.98 100.00 Total capacity operations bandwidth ---- errors ---- description used avail read write read write read write cksum vMotion 237G 1.58T 1001 0 10.5M 0 0 0 7.35K mirror 117G 811G 453 0 5.03M 0 0 0 15.1K /dev/dsk/c2t1d0s0 176 0 3.94M 0 0 0 15.1K /dev/dsk/c2t0d0s0 176 0 3.93M 0 0 0 15.1K mirror 121G 807G 547 0 5.50M 0 0 0 52 /dev/dsk/c2t2d0s0 237 0 4.26M 0 0 0 52 /dev/dsk/c2t3d0s0 237 0 4.27M 0 0 0 52 log /dev/dsk/c5t1d0p1 5.39M 7.12G 0 0 206 0 0 0 0 Segmentation Fault (core dumped) I have 
the core file but it is 4.3G in size. Does anyone have an interest in this core file? -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: In charity there is no excess. -- Francis Bacon -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 181 bytes Desc: OpenPGP digital signature URL: From alka at hfg-gmuend.de Mon Sep 14 16:20:47 2015 From: alka at hfg-gmuend.de (Guenther Alka) Date: Mon, 14 Sep 2015 18:20:47 +0200 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 - steps to check and repair... In-Reply-To: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> Message-ID: <55F6F3DF.3060404@hfg-gmuend.de> What is the recommended action on OmniOS 151014 about the L2Arc problem?

1. what is the recommended way to detect possible problems
   a. run scrub? seems useless
   b. run zdb pool and check for what you said ("Look for assertion failures, or other non-0 exits."). Is this the key for a corrupt pool?

2. when using an L2Arc and there is no obvious error detected by scrub or zdb
   a. trash the pool and restore from backup via rsync with possible file corruptions but ZFS structure is 100% ok then
   b. keep the pool and hope that there is no metadata corruption?
   c. some action to verify that at least the pool is ok: ....

3. when using an L2Arc and there is an error detected by scrub or zdb
   a. trash the pool and restore from backup with possible file corruption but pool is 100% ok
   b. keep the pool and hope that there is no metadata corruption
   c. some action to verify that at least the pool is ok: .... 
Is there an alert page on the OmniOS wiki about this? Gea On 10.09.2015 at 13:53, Dan McDonald wrote: > If you are using a zpool with r151014 and you have an L2ARC ("cache") vdev, I recommend at this time disabling it. You may disable it by uttering: > > zpool remove <pool> <vdev> > > For example: > > zpool remove data c2t2d0 > > The bug in question has a good analysis here: > > https://www.illumos.org/issues/6214 > > This bug can lead to problems ranging from false positives on zpool scrub all the way up to actual pool corruption. > > We will be updating the package repo AND the install media once 6214 is upstreamed to illumos-gate, and pulled back into the r151014 branch of illumos-omnios. The fix is undergoing some tests from ZFS experts right now to verify its correctness. > > So please disable your L2ARC/cache devices for maximum data safety. You can add them back after we update r151014 by uttering: > > zpool add <pool> cache <vdev> > > PLEASE NOTE the "cache" indicator when you add back. If you omit this, the vdev is ADDED to your pool, an operation one can't reverse. > > zpool add data cache c2t2d0 > > Thanks, > Dan > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From doug at will.to Mon Sep 14 19:05:40 2015 From: doug at will.to (Doug Hughes) Date: Mon, 14 Sep 2015 15:05:40 -0400 Subject: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec Message-ID: Probably something for Illumos, but you guys may have seen this or may like to know. I've got a 10g-connected Xyratex box running OmniOS, and I noticed that no matter how many streams (1, 2, 3) I only get 100MB/sec write throughput and it just tops out. Even with 1 stream. This is with the default lzjb compression on (fast option). I turned off compression and have 2 streams running now and am getting about 250-600MB/sec in aggregate. Much better!
The compress ratio was only 1.02x - 1.03x so it's no great loss on this data. I just thought the 100MB/sec speed limit was interesting. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skiselkov.ml at gmail.com Mon Sep 14 19:08:30 2015 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Mon, 14 Sep 2015 21:08:30 +0200 Subject: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec In-Reply-To: References: Message-ID: <55F71B2E.4090305@gmail.com> On 9/14/15 9:05 PM, Doug Hughes wrote: > Probably something for Illumos, but you guys may have seen this or may > like to know. > > I've got a 10g connected Xyratex box running OmniOS, and I noticed that > no matter how many streams (1, 2, 3) I only get 100MB/sec write > throughput and it just tops out. Even with 1 stream. This is with the > default lzjb compression on (fast option). > > I turned off compression and have 2 streams running now and am getting > about 250-600MB/sec in aggregate. Much better! > > The compress ratio was only 1.02x - 1.03x so it's no great loss on this > data. I just thought the 100MB/sec speed limit was interesting. Try setting compression=lz4. It should perform much, much better than lzjb on incompressible data. -- Saso From doug at will.to Mon Sep 14 19:18:47 2015 From: doug at will.to (Doug Hughes) Date: Mon, 14 Sep 2015 15:18:47 -0400 Subject: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec In-Reply-To: <55F71B2E.4090305@gmail.com> References: <55F71B2E.4090305@gmail.com> Message-ID: That does seem to keep performance at much closer to parity. It still seems about 70-80% of peak vs what I was seeing before, but not that 100MB/sec bottleneck. On Mon, Sep 14, 2015 at 3:08 PM, Saso Kiselkov wrote: > On 9/14/15 9:05 PM, Doug Hughes wrote: > > Probably something for Illumos, but you guys may have seen this or may > > like to know. 
> > > > I've got a 10g connected Xyratex box running OmniOS, and I noticed that > > no matter how many streams (1, 2, 3) I only get 100MB/sec write > > throughput and it just tops out. Even with 1 stream. This is with the > > default lzjb compression on (fast option). > > > > I turned off compression and have 2 streams running now and am getting > > about 250-600MB/sec in aggregate. Much better! > > > > The compress ratio was only 1.02x - 1.03x so it's no great loss on this > > data. I just thought the 100MB/sec speed limit was interesting. > > Try setting compression=lz4. It should perform much, much better than > lzjb on incompressible data. > > -- > Saso > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skiselkov.ml at gmail.com Mon Sep 14 19:34:09 2015 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Mon, 14 Sep 2015 21:34:09 +0200 Subject: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec In-Reply-To: References: <55F71B2E.4090305@gmail.com> Message-ID: <55F72131.5060605@gmail.com> On 9/14/15 9:18 PM, Doug Hughes wrote: > That does seem to keep performance at much closer to parity. It still > seems about 70-80% of peak vs what I was seeing before, but not that > 100MB/sec bottleneck. Well, that's the reality of compression. Even the compressibility check is not free, but it's a lot less of an impact with lz4 than with lzjb. 
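[Saso's point, that a compressor burns CPU on incompressible data for essentially no gain, is easy to demonstrate outside ZFS. A minimal sketch with gzip -1 standing in for lzjb/lz4, since ZFS's own codecs aren't exposed as command-line tools:]

```shell
# Compress 1MB of random data (incompressible) and 1MB of zeros
# (highly compressible) with gzip -1 as a stand-in codec.
random_file=$(mktemp)
zeros_file=$(mktemp)
dd if=/dev/urandom of="$random_file" bs=65536 count=16 2>/dev/null
dd if=/dev/zero of="$zeros_file" bs=65536 count=16 2>/dev/null

orig=$(wc -c < "$random_file" | tr -d ' ')
rand_comp=$(gzip -1 -c "$random_file" | wc -c | tr -d ' ')
zero_comp=$(gzip -1 -c "$zeros_file" | wc -c | tr -d ' ')

# Random data does not shrink at all (a ~1.00x ratio, like Doug's 1.02x),
# yet the compressor still did a full pass over every byte.
echo "random: $orig -> $rand_comp bytes"
echo "zeros:  $orig -> $zero_comp bytes"
rm -f "$random_file" "$zeros_file"
```

On the real dataset, `zfs get compressratio <dataset>` shows whether the codec is earning its keep; at 1.02x it mostly is not.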
Cheers, -- Saso From matthew.lagoe at subrigo.net Mon Sep 14 19:40:29 2015 From: matthew.lagoe at subrigo.net (Matthew Lagoe) Date: Mon, 14 Sep 2015 12:40:29 -0700 Subject: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec In-Reply-To: <55F72131.5060605@gmail.com> References: <55F71B2E.4090305@gmail.com> <55F72131.5060605@gmail.com> Message-ID: <008f01d0ef25$38b72720$aa257560$@subrigo.net> Also I believe the compression is not threaded as well as it could be so you may be limited by the single core performance of your machine. -----Original Message----- From: OmniOS-discuss [mailto:omnios-discuss-bounces at lists.omniti.com] On Behalf Of Saso Kiselkov Sent: Monday, September 14, 2015 12:34 PM To: Doug Hughes Cc: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec On 9/14/15 9:18 PM, Doug Hughes wrote: > That does seem to keep performance at much closer to parity. It still > seems about 70-80% of peak vs what I was seeing before, but not that > 100MB/sec bottleneck. Well, that's the reality of compression. Even the compressibility check is not free, but it's a lot less of an impact with lz4 than with lzjb. 
Cheers, -- Saso _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss From skiselkov.ml at gmail.com Mon Sep 14 19:46:00 2015 From: skiselkov.ml at gmail.com (Saso Kiselkov) Date: Mon, 14 Sep 2015 21:46:00 +0200 Subject: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec In-Reply-To: <008f01d0ef25$38b72720$aa257560$@subrigo.net> References: <55F71B2E.4090305@gmail.com> <55F72131.5060605@gmail.com> <008f01d0ef25$38b72720$aa257560$@subrigo.net> Message-ID: <55F723F8.5040901@gmail.com> On 9/14/15 9:40 PM, Matthew Lagoe wrote: > Also I believe the compression is not threaded as well as it could be so you > may be limited by the single core performance of your machine. It is multi-threaded. Cheers, -- Saso From matthew.lagoe at subrigo.net Mon Sep 14 19:47:05 2015 From: matthew.lagoe at subrigo.net (Matthew Lagoe) Date: Mon, 14 Sep 2015 12:47:05 -0700 Subject: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec In-Reply-To: <55F723F8.5040901@gmail.com> References: <55F71B2E.4090305@gmail.com> <55F72131.5060605@gmail.com> <008f01d0ef25$38b72720$aa257560$@subrigo.net> <55F723F8.5040901@gmail.com> Message-ID: <009401d0ef26$2284f940$678eebc0$@subrigo.net> I know it is multithreaded just in my experience (at least historically) it wasn't completely multi-threaded and you could run into bottlenecks with spare cpu cores sitting idle. -----Original Message----- From: Saso Kiselkov [mailto:skiselkov.ml at gmail.com] Sent: Monday, September 14, 2015 12:46 PM To: Matthew Lagoe; 'Doug Hughes' Cc: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] zfs compression limits write throughput to 100MB/sec On 9/14/15 9:40 PM, Matthew Lagoe wrote: > Also I believe the compression is not threaded as well as it could be > so you may be limited by the single core performance of your machine. It is multi-threaded. 
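[Whether compression can use multiple cores matters here because each ZFS record compresses independently, so the work can fan out across CPUs. A hedged illustration of that independence, again using gzip as a stand-in and plain background jobs, not anything ZFS actually does internally:]

```shell
# Four independent 256K chunks, one background gzip per chunk.
# Nothing here is ZFS code; it only shows that per-record compression
# has no cross-block dependencies and so parallelizes naturally.
work=$(mktemp -d)
for i in 1 2 3 4; do
  dd if=/dev/urandom of="$work/chunk$i" bs=65536 count=4 2>/dev/null
done
for i in 1 2 3 4; do
  gzip -1 "$work/chunk$i" &   # each job can land on its own core
done
wait
ls "$work"
```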
Cheers, -- Saso From danmcd at omniti.com Mon Sep 14 21:57:58 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 14 Sep 2015 17:57:58 -0400 Subject: [OmniOS-discuss] OmniOS Bloody update Message-ID: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> Hello again! With one week left until Surge & illumos Day, I wanted to make sure an update to bloody happened. With recent illumos bugs (e.g. 6214) taking high priority, I wanted to make sure their fixes ended up in the bloody release (as well as appropriate ones making it back into r151014).

New with this update out of omnios-build (now at master revision f01dd5c):
- Mozilla NSS up to version 3.20. (Includes ca-bundle update.)
- OpenSSH is now at version 7.1p1.
- The kayak images now include previously missing bits.

And highlights of illumos-omnios progress (now at master revision 23b18eb, meaning uname -v == omnios-23b18eb) are:
- A fix to illumos 6214, which will prevent the existence of l2arc/cache devices from potentially corrupting data.
- An additional pair of ZFS fixes from Delphix not yet upstreamed in illumos-gate.
- Updated ses connector lists.
- An htable_reap() fix from Joyent, which may prevent memory hogging and reap-related slowdowns.
- New kstats for the NFS server (see illumos 6090).

There will be one, possibly two, more bloody updates before I freeze for r151016. '016 will be a bit late this time (late October/early November), and one more bloody update will contain a potentially numerous upgrade of various omnios-build packages. I have updated the .iso, .usb-dd, and the kayak images as well. Happy updating! Dan From danmcd at omniti.com Mon Sep 14 21:58:27 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 14 Sep 2015 17:58:27 -0400 Subject: [OmniOS-discuss] OmniOS r151014 update - needs reboot! Message-ID: <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> I have updated release media as well as the IPS server. omnios-build branch r151014 is now on revision 437bddb.
illumos-omnios branch r151014 is now on revision cffff65, which means "uname -v" now shows omnios-cffff65. Most importantly, this update fixes illumos 6214 for OmniOS. You should be able to restore your L2ARC devices using the method I mentioned in my last e-mail: zpool add <pool> cache <vdev> PLEASE MAKE SURE YOU SPECIFY "cache" when adding the vdev, or else you will append <vdev>'s space to your pool. Special thanks to illumos community member Arne "sensille" Jansen for both finding 6214 and fixing it.

Additional fixes in this update include:
- An additional set of ZFS fixes from Delphix.
- Mozilla NSS up to version 3.20 (including ca-bundle).
- Kernel htable fixes, which should improve kernel memory behavior in the face of reaping (illumos 6202).
- Fault management topology changes for ses brought up to date with illumos-gate.
- Small bug with zpool import fixed (illumos 1778).

Because of the changes to zfs, this update requires that you reboot your system. Thank you! Dan From omen.wild at gmail.com Mon Sep 14 22:09:30 2015 From: omen.wild at gmail.com (Omen Wild) Date: Mon, 14 Sep 2015 15:09:30 -0700 Subject: [OmniOS-discuss] (no subject) Message-ID: <20150914220930.GA30739@mandarb.com> [ I originally posted this to the Illumos ZFS list but got no responses. ] We have an up-to-date OmniOS system that panics every time we try to unlink a specific file. We have a kernel pages-only crashdump and can reproduce easily. I can make the panic files available to an interested party. A zpool scrub turned up no errors or repairs. Mostly we are wondering how to clear the corruption off disk, and we are worried about what else might be corrupt since the scrub turns up no issues. Details below. When we first encountered the issue we were running with a version from mid-July: zfs at 0.5.11,5.11-0.151014:20150417T182430Z . After the first couple of panics we upgraded to the newest (as of a couple days ago, zfs at 0.5.11,5.11-0.151014:20150818T161042Z), which still panics.
# uname -a
SunOS zaphod 5.11 omnios-d08e0e5 i86pc i386 i86pc

The error looks like this:

BAD TRAP: type=e (#pf Page fault) rp=ffffff002ed54b00 addr=e8 occurred in module "zfs" due to a NULL pointer dereference

The panic stack looks like this in every case:

param_preset
die+0xdf
trap+0xdb3
0xfffffffffb8001d6
zfs_remove+0x395
fop_remove+0x5b
vn_removeat+0x382
unlinkat+0x59
_sys_sysenter_post_swapgs+0x149

It is triggered by trying to rm a specific file. ls'ing the file gives the error "Operation not applicable", ls'ing the directory shows ? in place of the data:

?????????? ? ? ? ? ? filename.html

I have attached the output of:

echo '::panicinfo\n::cpuinfo -v\n::threadlist -v 10\n::msgbuf\n*panic_thread::findstack -v\n::stacks' | mdb 7

I am a Solaris/OI/OmniOS debugging neophyte, but will happily run any commands recommended. Thanks Omen -------------- next part -------------- cpu 0 thread ffffff0730c5f840 message BAD TRAP: type=e (#pf Page fault) rp=ffffff002ed54b00 addr=e8 occurred in module "zfs" due to a NULL pointer dereference rdi ffffff0c8e5a5d80 rsi ffffff088f8e9900 rdx 0 rcx 1 r8 4df5181bb11fe1 r9 ffffff002ed549e8 rax 0 rbx 0 rbp ffffff002ed54d20 r10 fffffffffb85430c r11 fffffffffb800983 r12 ffffff0724340800 r13 ffffff0c8e5a5d80 r14 ffffff0c510a4980 r15 ffffff0c523303e0 fsbase 0 gsbase fffffffffbc30c40 ds 4b es 4b fs 0 gs 1c3 trapno e err 0 rip fffffffff7a4b805 cs 30 rflags 10246 rsp ffffff002ed54bf0 ss 38 gdt_hi 0 gdt_lo e00001ef idt_hi 0 idt_lo d0000fff ldt 0 task 70 cr0 8005003b cr2 e8 cr3 5fffc4000 cr4 26f8 ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 0 fffffffffbc3b540 1b 0 0 60 no no t-0 ffffff0730c5f840 rm | RUNNING <--+ READY EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 1 ffffff072349e500 1f 1 0 -1 no no t-0 ffffff002e493c40 (idle) | | RUNNING <--+ +--> PRI THREAD PROC READY 60 ffffff002f348c40 sched QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 2 ffffff072349d000 1f 0 0 -1 no no t-0
ffffff002e50ec40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 3 ffffff0723497ac0 1f 0 0 -1 no no t-0 ffffff002e5a6c40 (idle) | RUNNING <--+ READY QUIESCED EXISTS ENABLE ADDR PROC LWP CLS PRI WCHAN fffffffffbc2f9e0 fffffffffbc2ea80 fffffffffbc31500 0 96 0 PC: _resume_from_idle+0xf4 CMD: sched stack pointer for thread fffffffffbc2f9e0: fffffffffbc72130 [ fffffffffbc72130 _resume_from_idle+0xf4() ] swtch+0x141() sched+0x835() main+0x46c() _locore_start+0x90() ffffff002e005c40 fffffffffbc2ea80 0 0 -1 0 PC: _resume_from_idle+0xf4 THREAD: idle() stack pointer for thread ffffff002e005c40: ffffff002e005bd0 [ ffffff002e005bd0 _resume_from_idle+0xf4() ] swtch+0x141() idle+0xbc() thread_start+8() ffffff002e00bc40 fffffffffbc2ea80 0 0 60 fffffffffbcca91c PC: _resume_from_idle+0xf4 THREAD: thread_reaper() stack pointer for thread ffffff002e00bc40: ffffff002e00bb60 [ ffffff002e00bb60 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(fffffffffbcca91c, fffffffffbcfb458) thread_reaper+0xb9() thread_start+8() ffffff002e011c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ee70 PC: _resume_from_idle+0xf4 TASKQ: kmem_move_taskq stack pointer for thread ffffff002e011c40: ffffff002e011a80 [ ffffff002e011a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ee70, ffffff06f6a2ee60) taskq_thread_wait+0xbe(ffffff06f6a2ee40, ffffff06f6a2ee60, ffffff06f6a2ee70 , ffffff002e011bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2ee40) thread_start+8() ffffff002e017c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ed58 PC: _resume_from_idle+0xf4 TASKQ: kmem_taskq stack pointer for thread ffffff002e017c40: ffffff002e017a80 [ ffffff002e017a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ed58, ffffff06f6a2ed48) taskq_thread_wait+0xbe(ffffff06f6a2ed28, ffffff06f6a2ed48, ffffff06f6a2ed58 , ffffff002e017bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2ed28) thread_start+8() ffffff002e01dc40 fffffffffbc2ea80 0 0 60 
ffffff06f6a2ec40 PC: _resume_from_idle+0xf4 TASKQ: pseudo_nexus_enum_tq stack pointer for thread ffffff002e01dc40: ffffff002e01da80 [ ffffff002e01da80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ec40, ffffff06f6a2ec30) taskq_thread_wait+0xbe(ffffff06f6a2ec10, ffffff06f6a2ec30, ffffff06f6a2ec40 , ffffff002e01dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2ec10) thread_start+8() ffffff002e023c40 fffffffffbc2ea80 0 0 60 fffffffffbd17ca0 PC: _resume_from_idle+0xf4 THREAD: scsi_hba_barrier_daemon() stack pointer for thread ffffff002e023c40: ffffff002e023b20 [ ffffff002e023b20 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(fffffffffbd17ca0, fffffffffbd17c98) scsi_hba_barrier_daemon+0xd6(0) thread_start+8() ffffff002e029c40 fffffffffbc2ea80 0 0 60 fffffffffbd17cb8 PC: _resume_from_idle+0xf4 THREAD: scsi_lunchg1_daemon() stack pointer for thread ffffff002e029c40: ffffff002e029630 [ ffffff002e029630 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(fffffffffbd17cb8, fffffffffbd17cb0) scsi_lunchg1_daemon+0x1de(0) thread_start+8() ffffff002e02fc40 fffffffffbc2ea80 0 0 60 fffffffffbd17cd0 PC: _resume_from_idle+0xf4 THREAD: scsi_lunchg2_daemon() stack pointer for thread ffffff002e02fc40: ffffff002e02fb30 [ ffffff002e02fb30 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(fffffffffbd17cd0, fffffffffbd17cc8) scsi_lunchg2_daemon+0x121(0) thread_start+8() ffffff002e035c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2eb28 PC: _resume_from_idle+0xf4 TASKQ: scsi_vhci_nexus_enum_tq stack pointer for thread ffffff002e035c40: ffffff002e035a80 [ ffffff002e035a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2eb28, ffffff06f6a2eb18) taskq_thread_wait+0xbe(ffffff06f6a2eaf8, ffffff06f6a2eb18, ffffff06f6a2eb28 , ffffff002e035bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2eaf8) thread_start+8() ffffff002e08fc40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ea10 PC: _resume_from_idle+0xf4 TASKQ: mdi_taskq stack pointer for thread 
ffffff002e08fc40: ffffff002e08fa80 [ ffffff002e08fa80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ea10, ffffff06f6a2ea00) taskq_thread_wait+0xbe(ffffff06f6a2e9e0, ffffff06f6a2ea00, ffffff06f6a2ea10 , ffffff002e08fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e9e0) thread_start+8() ffffff002e07dc40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ea10 PC: _resume_from_idle+0xf4 TASKQ: mdi_taskq stack pointer for thread ffffff002e07dc40: ffffff002e07da80 [ ffffff002e07da80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ea10, ffffff06f6a2ea00) taskq_thread_wait+0xbe(ffffff06f6a2e9e0, ffffff06f6a2ea00, ffffff06f6a2ea10 , ffffff002e07dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e9e0) thread_start+8() ffffff002e06bc40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ea10 PC: _resume_from_idle+0xf4 TASKQ: mdi_taskq stack pointer for thread ffffff002e06bc40: ffffff002e06ba80 [ ffffff002e06ba80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ea10, ffffff06f6a2ea00) taskq_thread_wait+0xbe(ffffff06f6a2e9e0, ffffff06f6a2ea00, ffffff06f6a2ea10 , ffffff002e06bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e9e0) thread_start+8() ffffff002e05fc40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ea10 PC: _resume_from_idle+0xf4 TASKQ: mdi_taskq stack pointer for thread ffffff002e05fc40: ffffff002e05fa80 [ ffffff002e05fa80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ea10, ffffff06f6a2ea00) taskq_thread_wait+0xbe(ffffff06f6a2e9e0, ffffff06f6a2ea00, ffffff06f6a2ea10 , ffffff002e05fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e9e0) thread_start+8() ffffff002e053c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ea10 PC: _resume_from_idle+0xf4 TASKQ: mdi_taskq stack pointer for thread ffffff002e053c40: ffffff002e053a80 [ ffffff002e053a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ea10, ffffff06f6a2ea00) taskq_thread_wait+0xbe(ffffff06f6a2e9e0, ffffff06f6a2ea00, ffffff06f6a2ea10 , ffffff002e053bc0, 
ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e9e0) thread_start+8() ffffff002e047c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ea10 PC: _resume_from_idle+0xf4 TASKQ: mdi_taskq stack pointer for thread ffffff002e047c40: ffffff002e047a80 [ ffffff002e047a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ea10, ffffff06f6a2ea00) taskq_thread_wait+0xbe(ffffff06f6a2e9e0, ffffff06f6a2ea00, ffffff06f6a2ea10 , ffffff002e047bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e9e0) thread_start+8() ffffff002e041c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ea10 PC: _resume_from_idle+0xf4 TASKQ: mdi_taskq stack pointer for thread ffffff002e041c40: ffffff002e041a80 [ ffffff002e041a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ea10, ffffff06f6a2ea00) taskq_thread_wait+0xbe(ffffff06f6a2e9e0, ffffff06f6a2ea00, ffffff06f6a2ea10 , ffffff002e041bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e9e0) thread_start+8() ffffff002e03bc40 fffffffffbc2ea80 0 0 60 ffffff06f6a2ea10 PC: _resume_from_idle+0xf4 TASKQ: mdi_taskq stack pointer for thread ffffff002e03bc40: ffffff002e03ba80 [ ffffff002e03ba80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2ea10, ffffff06f6a2ea00) taskq_thread_wait+0xbe(ffffff06f6a2e9e0, ffffff06f6a2ea00, ffffff06f6a2ea10 , ffffff002e03bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e9e0) thread_start+8() ffffff002e04dc40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e8f8 PC: _resume_from_idle+0xf4 TASKQ: vhci_taskq stack pointer for thread ffffff002e04dc40: ffffff002e04da80 [ ffffff002e04da80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e8f8, ffffff06f6a2e8e8) taskq_thread_wait+0xbe(ffffff06f6a2e8c8, ffffff06f6a2e8e8, ffffff06f6a2e8f8 , ffffff002e04dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e8c8) thread_start+8() ffffff002e167c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e7e0 PC: _resume_from_idle+0xf4 TASKQ: vhci_update_pathstates stack pointer for thread ffffff002e167c40: 
ffffff002e167a80 [ ffffff002e167a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e7e0, ffffff06f6a2e7d0) taskq_thread_wait+0xbe(ffffff06f6a2e7b0, ffffff06f6a2e7d0, ffffff06f6a2e7e0 , ffffff002e167bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e7b0) thread_start+8() ffffff002e143c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e7e0 PC: _resume_from_idle+0xf4 TASKQ: vhci_update_pathstates stack pointer for thread ffffff002e143c40: ffffff002e143a80 [ ffffff002e143a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e7e0, ffffff06f6a2e7d0) taskq_thread_wait+0xbe(ffffff06f6a2e7b0, ffffff06f6a2e7d0, ffffff06f6a2e7e0 , ffffff002e143bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e7b0) thread_start+8() ffffff002e101c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e7e0 PC: _resume_from_idle+0xf4 TASKQ: vhci_update_pathstates stack pointer for thread ffffff002e101c40: ffffff002e101a80 [ ffffff002e101a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e7e0, ffffff06f6a2e7d0) taskq_thread_wait+0xbe(ffffff06f6a2e7b0, ffffff06f6a2e7d0, ffffff06f6a2e7e0 , ffffff002e101bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e7b0) thread_start+8() ffffff002e095c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e7e0 PC: _resume_from_idle+0xf4 TASKQ: vhci_update_pathstates stack pointer for thread ffffff002e095c40: ffffff002e095a80 [ ffffff002e095a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e7e0, ffffff06f6a2e7d0) taskq_thread_wait+0xbe(ffffff06f6a2e7b0, ffffff06f6a2e7d0, ffffff06f6a2e7e0 , ffffff002e095bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e7b0) thread_start+8() ffffff002e083c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e7e0 PC: _resume_from_idle+0xf4 TASKQ: vhci_update_pathstates stack pointer for thread ffffff002e083c40: ffffff002e083a80 [ ffffff002e083a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e7e0, ffffff06f6a2e7d0) taskq_thread_wait+0xbe(ffffff06f6a2e7b0, ffffff06f6a2e7d0, 
ffffff06f6a2e7e0 , ffffff002e083bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e7b0) thread_start+8() ffffff002e071c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e7e0 PC: _resume_from_idle+0xf4 TASKQ: vhci_update_pathstates stack pointer for thread ffffff002e071c40: ffffff002e071a80 [ ffffff002e071a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e7e0, ffffff06f6a2e7d0) taskq_thread_wait+0xbe(ffffff06f6a2e7b0, ffffff06f6a2e7d0, ffffff06f6a2e7e0 , ffffff002e071bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e7b0) thread_start+8() ffffff002e065c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e7e0 PC: _resume_from_idle+0xf4 TASKQ: vhci_update_pathstates stack pointer for thread ffffff002e065c40: ffffff002e065a80 [ ffffff002e065a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e7e0, ffffff06f6a2e7d0) taskq_thread_wait+0xbe(ffffff06f6a2e7b0, ffffff06f6a2e7d0, ffffff06f6a2e7e0 , ffffff002e065bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e7b0) thread_start+8() ffffff002e059c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e7e0 PC: _resume_from_idle+0xf4 TASKQ: vhci_update_pathstates stack pointer for thread ffffff002e059c40: ffffff002e059a80 [ ffffff002e059a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e7e0, ffffff06f6a2e7d0) taskq_thread_wait+0xbe(ffffff06f6a2e7b0, ffffff06f6a2e7d0, ffffff06f6a2e7e0 , ffffff002e059bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e7b0) thread_start+8() ffffff002e077c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e6c8 PC: _resume_from_idle+0xf4 TASKQ: npe_nexus_enum_tq stack pointer for thread ffffff002e077c40: ffffff002e077a80 [ ffffff002e077a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e6c8, ffffff06f6a2e6b8) taskq_thread_wait+0xbe(ffffff06f6a2e698, ffffff06f6a2e6b8, ffffff06f6a2e6c8 , ffffff002e077bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e698) thread_start+8() ffffff002e089c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e5b0 PC: _resume_from_idle+0xf4 
TASKQ: isa_nexus_enum_tq stack pointer for thread ffffff002e089c40: ffffff002e089a80 [ ffffff002e089a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e5b0, ffffff06f6a2e5a0) taskq_thread_wait+0xbe(ffffff06f6a2e580, ffffff06f6a2e5a0, ffffff06f6a2e5b0 , ffffff002e089bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e580) thread_start+8() ffffff002e0e9c40 fffffffffbc2ea80 0 0 99 ffffff06f6a2e498 PC: _resume_from_idle+0xf4 TASKQ: ddi_periodic_taskq stack pointer for thread ffffff002e0e9c40: ffffff002e0e9a80 [ ffffff002e0e9a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e498, ffffff06f6a2e488) taskq_thread_wait+0xbe(ffffff06f6a2e468, ffffff06f6a2e488, ffffff06f6a2e498 , ffffff002e0e9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e468) thread_start+8() ffffff002e0e3c40 fffffffffbc2ea80 0 0 99 ffffff06f6a2e498 PC: _resume_from_idle+0xf4 TASKQ: ddi_periodic_taskq stack pointer for thread ffffff002e0e3c40: ffffff002e0e3a80 [ ffffff002e0e3a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e498, ffffff06f6a2e488) taskq_thread_wait+0xbe(ffffff06f6a2e468, ffffff06f6a2e488, ffffff06f6a2e498 , ffffff002e0e3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e468) thread_start+8() ffffff002e0ddc40 fffffffffbc2ea80 0 0 99 ffffff06f6a2e498 PC: _resume_from_idle+0xf4 TASKQ: ddi_periodic_taskq stack pointer for thread ffffff002e0ddc40: ffffff002e0dda80 [ ffffff002e0dda80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e498, ffffff06f6a2e488) taskq_thread_wait+0xbe(ffffff06f6a2e468, ffffff06f6a2e488, ffffff06f6a2e498 , ffffff002e0ddbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff06f6a2e468) thread_start+8() ffffff002e0d7c40 fffffffffbc2ea80 0 0 99 ffffff06f6a2e498 PC: _resume_from_idle+0xf4 TASKQ: ddi_periodic_taskq stack pointer for thread ffffff002e0d7c40: ffffff002e0d7a80 [ ffffff002e0d7a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f6a2e498, ffffff06f6a2e488) 
    taskq_thread_wait+0xbe(ffffff06f6a2e468, ffffff06f6a2e488, ffffff06f6a2e498, ffffff002e0d7bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06f6a2e468)
    thread_start+8()
ffffff002e0f5c40 fffffffffbc2ea80 0 0 99 ffffff06f6a2e380
  PC: _resume_from_idle+0xf4    TASKQ: callout_taskq
  stack pointer for thread ffffff002e0f5c40: ffffff002e0f5a80
  [ ffffff002e0f5a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f6a2e380, ffffff06f6a2e370)
    taskq_thread_wait+0xbe(ffffff06f6a2e350, ffffff06f6a2e370, ffffff06f6a2e380, ffffff002e0f5bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06f6a2e350)
    thread_start+8()
ffffff002e0efc40 fffffffffbc2ea80 0 0 99 ffffff06f6a2e380
  PC: _resume_from_idle+0xf4    TASKQ: callout_taskq
  stack pointer for thread ffffff002e0efc40: ffffff002e0efa80
  [ ffffff002e0efa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f6a2e380, ffffff06f6a2e370)
    taskq_thread_wait+0xbe(ffffff06f6a2e350, ffffff06f6a2e370, ffffff06f6a2e380, ffffff002e0efbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06f6a2e350)
    thread_start+8()
ffffff002f110c40 fffffffffbc2ea80 0 0 60 ffffff0723543930
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_2_
  stack pointer for thread ffffff002f110c40: ffffff002f110a80
  [ ffffff002f110a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723543930, ffffff0723543920)
    taskq_thread_wait+0xbe(ffffff0723543900, ffffff0723543920, ffffff0723543930, ffffff002f110bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543900)
    thread_start+8()
ffffff002f10ac40 fffffffffbc2ea80 0 0 60 ffffff0723543930
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_2_
  stack pointer for thread ffffff002f10ac40: ffffff002f10aa80
  [ ffffff002f10aa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723543930, ffffff0723543920)
    taskq_thread_wait+0xbe(ffffff0723543900, ffffff0723543920, ffffff0723543930, ffffff002f10abc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543900)
    thread_start+8()
ffffff002f104c40 fffffffffbc2ea80 0 0 60 ffffff0723543930
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_2_
  stack pointer for thread ffffff002f104c40: ffffff002f104a80
  [ ffffff002f104a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723543930, ffffff0723543920)
    taskq_thread_wait+0xbe(ffffff0723543900, ffffff0723543920, ffffff0723543930, ffffff002f104bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543900)
    thread_start+8()
ffffff002f0fec40 fffffffffbc2ea80 0 0 60 ffffff0723543930
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_2_
  stack pointer for thread ffffff002f0fec40: ffffff002f0fea80
  [ ffffff002f0fea80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723543930, ffffff0723543920)
    taskq_thread_wait+0xbe(ffffff0723543900, ffffff0723543920, ffffff0723543930, ffffff002f0febc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543900)
    thread_start+8()
ffffff002f342c40 fffffffffbc2ea80 0 0 60 ffffff0723543700
  PC: _resume_from_idle+0xf4    TASKQ: hubd_nexus_enum_tq
  stack pointer for thread ffffff002f342c40: ffffff002f342a80
  [ ffffff002f342a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723543700, ffffff07235436f0)
    taskq_thread_wait+0xbe(ffffff07235436d0, ffffff07235436f0, ffffff0723543700, ffffff002f342bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07235436d0)
    thread_start+8()
ffffff002f544c40 fffffffffbc2ea80 0 0 60 ffffff07235434d0
  PC: _resume_from_idle+0xf4    TASKQ: USB_hubd_81_pipehndl_tq_0
  stack pointer for thread ffffff002f544c40: ffffff002f544a80
  [ ffffff002f544a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235434d0, ffffff07235434c0)
    taskq_thread_wait+0xbe(ffffff07235434a0, ffffff07235434c0, ffffff07235434d0, ffffff002f544bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07235434a0)
    thread_start+8()
ffffff002f53ec40 fffffffffbc2ea80 0 0 60 ffffff07235434d0
  PC: _resume_from_idle+0xf4    TASKQ: USB_hubd_81_pipehndl_tq_0
  stack pointer for thread ffffff002f53ec40: ffffff002f53ea80
  [ ffffff002f53ea80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235434d0, ffffff07235434c0)
    taskq_thread_wait+0xbe(ffffff07235434a0, ffffff07235434c0, ffffff07235434d0, ffffff002f53ebc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07235434a0)
    thread_start+8()
ffffff002f68fc40 fffffffffbc2ea80 0 0 99 ffffff0722feabd0
  PC: _resume_from_idle+0xf4    THREAD: squeue_worker()
  stack pointer for thread ffffff002f68fc40: ffffff002f68fb40
  [ ffffff002f68fb40 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722feabd0, ffffff0722feab90)
    squeue_worker+0x104(ffffff0722feab80)
    thread_start+8()
ffffff002f695c40 fffffffffbc2ea80 0 0 99 ffffff0722feabd2
  PC: _resume_from_idle+0xf4    THREAD: squeue_polling_thread()
  stack pointer for thread ffffff002f695c40: ffffff002f695b00
  [ ffffff002f695b00 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722feabd2, ffffff0722feab90)
    squeue_polling_thread+0xa9(ffffff0722feab80)
    thread_start+8()
ffffff002f69bc40 fffffffffbc2ea80 0 0 99 ffffff0722feab10
  PC: _resume_from_idle+0xf4    THREAD: squeue_worker()
  stack pointer for thread ffffff002f69bc40: ffffff002f69bb40
  [ ffffff002f69bb40 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722feab10, ffffff0722feaad0)
    squeue_worker+0x104(ffffff0722feaac0)
    thread_start+8()
ffffff002f6a1c40 fffffffffbc2ea80 0 0 99 ffffff0722feab12
  PC: _resume_from_idle+0xf4    THREAD: squeue_polling_thread()
  stack pointer for thread ffffff002f6a1c40: ffffff002f6a1b00
  [ ffffff002f6a1b00 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722feab12, ffffff0722feaad0)
    squeue_polling_thread+0xa9(ffffff0722feaac0)
    thread_start+8()
ffffff002f0acc40 fffffffffbc2ea80 0 0 99 ffffff0724a24d28
  PC: _resume_from_idle+0xf4    THREAD: mac_srs_worker()
  stack pointer for thread ffffff002f0acc40: ffffff002f0acb30
  [ ffffff002f0acb30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724a24d28, ffffff0724a24d00)
    mac_srs_worker+0x141(ffffff0724a24d00)
    thread_start+8()
ffffff002f0b2c40 fffffffffbc2ea80 0 0 99 ffffff0724a259e8
  PC: _resume_from_idle+0xf4    THREAD: mac_srs_worker()
  stack pointer for thread ffffff002f0b2c40: ffffff002f0b2b30
  [ ffffff002f0b2b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724a259e8, ffffff0724a259c0)
    mac_srs_worker+0x141(ffffff0724a259c0)
    thread_start+8()
ffffff002f0b8c40 fffffffffbc2ea80 0 0 99 ffffff0724a259ea
  PC: _resume_from_idle+0xf4    THREAD: mac_rx_srs_poll_ring()
  stack pointer for thread ffffff002f0b8c40: ffffff002f0b8b10
  [ ffffff002f0b8b10 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724a259ea, ffffff0724a259c0)
    mac_rx_srs_poll_ring+0xad(ffffff0724a259c0)
    thread_start+8()
ffffff002f0bec40 fffffffffbc2ea80 0 0 99 ffffff0724a266a8
  PC: _resume_from_idle+0xf4    THREAD: mac_srs_worker()
  stack pointer for thread ffffff002f0bec40: ffffff002f0beb30
  [ ffffff002f0beb30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724a266a8, ffffff0724a26680)
    mac_srs_worker+0x141(ffffff0724a26680)
    thread_start+8()
ffffff002f0c4c40 fffffffffbc2ea80 0 0 99 ffffff0727662d20
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f0c4c40: ffffff002f0c4b30
  [ ffffff002f0c4b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0727662d20, ffffff0727662c80)
    mac_soft_ring_worker+0xb1(ffffff0727662c80)
    thread_start+8()
ffffff002f0cac40 fffffffffbc2ea80 0 0 99 ffffff0727662ea0
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f0cac40: ffffff002f0cab30
  [ ffffff002f0cab30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0727662ea0, ffffff0727662e00)
    mac_soft_ring_worker+0xb1(ffffff0727662e00)
    thread_start+8()
ffffff002f0d0c40 fffffffffbc2ea80 0 0 99 ffffff07276670e0
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f0d0c40: ffffff002f0d0b30
  [ ffffff002f0d0b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07276670e0, ffffff0727667040)
    mac_soft_ring_worker+0xb1(ffffff0727667040)
    thread_start+8()
ffffff002f677c40 fffffffffbc2ea80 0 0 99 ffffff07276676e0
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f677c40: ffffff002f677b30
  [ ffffff002f677b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07276676e0, ffffff0727667640)
    mac_soft_ring_worker+0xb1(ffffff0727667640)
    thread_start+8()
ffffff002f67dc40 fffffffffbc2ea80 0 0 99 ffffff0727667860
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f67dc40: ffffff002f67db30
  [ ffffff002f67db30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0727667860, ffffff07276677c0)
    mac_soft_ring_worker+0xb1(ffffff07276677c0)
    thread_start+8()
ffffff002f683c40 fffffffffbc2ea80 0 0 99 ffffff07276679e0
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f683c40: ffffff002f683b30
  [ ffffff002f683b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07276679e0, ffffff0727667940)
    mac_soft_ring_worker+0xb1(ffffff0727667940)
    thread_start+8()
ffffff002f338c40 fffffffffbc2ea80 0 0 99 ffffff0724a27368
  PC: _resume_from_idle+0xf4    THREAD: mac_srs_worker()
  stack pointer for thread ffffff002f338c40: ffffff002f338b30
  [ ffffff002f338b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724a27368, ffffff0724a27340)
    mac_srs_worker+0x141(ffffff0724a27340)
    thread_start+8()
ffffff002f14dc40 fffffffffbc2ea80 0 0 99 ffffff0724a24068
  PC: _resume_from_idle+0xf4    THREAD: mac_srs_worker()
  stack pointer for thread ffffff002f14dc40: ffffff002f14db30
  [ ffffff002f14db30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724a24068, ffffff0724a24040)
    mac_srs_worker+0x141(ffffff0724a24040)
    thread_start+8()
ffffff002f153c40 fffffffffbc2ea80 0 0 99 ffffff0724a2406a
  PC: _resume_from_idle+0xf4    THREAD: mac_rx_srs_poll_ring()
  stack pointer for thread ffffff002f153c40: ffffff002f153b10
  [ ffffff002f153b10 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724a2406a, ffffff0724a24040)
    mac_rx_srs_poll_ring+0xad(ffffff0724a24040)
    thread_start+8()
ffffff002f159c40 fffffffffbc2ea80 0 0 99 ffffff072a327328
  PC: _resume_from_idle+0xf4    THREAD: mac_srs_worker()
  stack pointer for thread ffffff002f159c40: ffffff002f159b30
  [ ffffff002f159b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a327328, ffffff072a327300)
    mac_srs_worker+0x141(ffffff072a327300)
    thread_start+8()
ffffff002f15fc40 fffffffffbc2ea80 0 0 99 ffffff0727662420
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f15fc40: ffffff002f15fb30
  [ ffffff002f15fb30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0727662420, ffffff0727662380)
    mac_soft_ring_worker+0xb1(ffffff0727662380)
    thread_start+8()
ffffff002f165c40 fffffffffbc2ea80 0 0 99 ffffff07276625a0
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f165c40: ffffff002f165b30
  [ ffffff002f165b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07276625a0, ffffff0727662500)
    mac_soft_ring_worker+0xb1(ffffff0727662500)
    thread_start+8()
ffffff002f16bc40 fffffffffbc2ea80 0 0 99 ffffff0727662720
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f16bc40: ffffff002f16bb30
  [ ffffff002f16bb30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0727662720, ffffff0727662680)
    mac_soft_ring_worker+0xb1(ffffff0727662680)
    thread_start+8()
ffffff002f171c40 fffffffffbc2ea80 0 0 99 ffffff0727667b60
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f171c40: ffffff002f171b30
  [ ffffff002f171b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0727667b60, ffffff0727667ac0)
    mac_soft_ring_worker+0xb1(ffffff0727667ac0)
    thread_start+8()
ffffff002f177c40 fffffffffbc2ea80 0 0 99 ffffff0727667ce0
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f177c40: ffffff002f177b30
  [ ffffff002f177b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0727667ce0, ffffff0727667c40)
    mac_soft_ring_worker+0xb1(ffffff0727667c40)
    thread_start+8()
ffffff002f17dc40 fffffffffbc2ea80 0 0 99 ffffff0727667e60
  PC: _resume_from_idle+0xf4    THREAD: mac_soft_ring_worker()
  stack pointer for thread ffffff002f17dc40: ffffff002f17db30
  [ ffffff002f17db30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0727667e60, ffffff0727667dc0)
    mac_soft_ring_worker+0xb1(ffffff0727667dc0)
    thread_start+8()
ffffff002f183c40 fffffffffbc2ea80 0 0 99 ffffff0722feaa50
  PC: _resume_from_idle+0xf4    THREAD: squeue_worker()
  stack pointer for thread ffffff002f183c40: ffffff002f183b40
  [ ffffff002f183b40 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722feaa50, ffffff0722feaa10)
    squeue_worker+0x104(ffffff0722feaa00)
    thread_start+8()
ffffff002f189c40 fffffffffbc2ea80 0 0 99 ffffff0722feaa52
  PC: _resume_from_idle+0xf4    THREAD: squeue_polling_thread()
  stack pointer for thread ffffff002f189c40: ffffff002f189b00
  [ ffffff002f189b00 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722feaa52, ffffff0722feaa10)
    squeue_polling_thread+0xa9(ffffff0722feaa00)
    thread_start+8()
ffffff002f18fc40 fffffffffbc2ea80 0 0 99 ffffff0722fea990
  PC: _resume_from_idle+0xf4    THREAD: squeue_worker()
  stack pointer for thread ffffff002f18fc40: ffffff002f18fb40
  [ ffffff002f18fb40 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722fea990, ffffff0722fea950)
    squeue_worker+0x104(ffffff0722fea940)
    thread_start+8()
ffffff002f195c40 fffffffffbc2ea80 0 0 99 ffffff0722fea992
  PC: _resume_from_idle+0xf4    THREAD: squeue_polling_thread()
  stack pointer for thread ffffff002f195c40: ffffff002f195b00
  [ ffffff002f195b00 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722fea992, ffffff0722fea950)
    squeue_polling_thread+0xa9(ffffff0722fea940)
    thread_start+8()
ffffff002f473c40 fffffffffbc2ea80 0 0 60 ffffff07233e1cf0
  PC: _resume_from_idle+0xf4    TASKQ: system_taskq
  stack pointer for thread ffffff002f473c40: ffffff002f473a30
  [ ffffff002f473a30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff07233e1cf0, ffffff06f6b1ca00, 45d964b800, 989680, 0)
    cv_reltimedwait+0x51(ffffff07233e1cf0, ffffff06f6b1ca00, 7530, 4)
    taskq_thread_wait+0x64(ffffff06f6a2e238, ffffff06f6b1ca00, ffffff07233e1cf0, ffffff002f473bc0, 7530)
    taskq_d_thread+0x145(ffffff07233e1cc0)
    thread_start+8()
ffffff002e107c40 fffffffffbc2ea80 0 0 60 ffffff0730f7c458
  PC: _resume_from_idle+0xf4    TASKQ: system_taskq
  stack pointer for thread ffffff002e107c40: ffffff002e107a30
  [ ffffff002e107a30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff0730f7c458, ffffff06f6b1ca80, 45d964b800, 989680, 0)
    cv_reltimedwait+0x51(ffffff0730f7c458, ffffff06f6b1ca80, 7530, 4)
    taskq_thread_wait+0x64(ffffff06f6a2e238, ffffff06f6b1ca80, ffffff0730f7c458, ffffff002e107bc0, 7530)
    taskq_d_thread+0x145(ffffff0730f7c428)
    thread_start+8()
ffffff00307e8c40 fffffffffbc2ea80 0 0 60 ffffff072a7db7c8
  PC: _resume_from_idle+0xf4    TASKQ: system_taskq
  stack pointer for thread ffffff00307e8c40: ffffff00307e8a30
  [ ffffff00307e8a30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff072a7db7c8, ffffff06f6b1cb00, 45d964b800, 989680, 0)
    cv_reltimedwait+0x51(ffffff072a7db7c8, ffffff06f6b1cb00, 7530, 4)
    taskq_thread_wait+0x64(ffffff06f6a2e238, ffffff06f6b1cb00, ffffff072a7db7c8, ffffff00307e8bc0, 7530)
    taskq_d_thread+0x145(ffffff072a7db798)
    thread_start+8()
ffffff002f348c40 fffffffffbc2ea80 0 0 60 0
  PC: _resume_from_idle+0xf4    TASKQ: system_taskq
  stack pointer for thread ffffff002f348c40: ffffff002f348a30
  [ ffffff002f348a30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff072a7db988, ffffff06f6b1cb80, 45d964b800, 989680, 0)
    cv_reltimedwait+0x51(ffffff072a7db988, ffffff06f6b1cb80, 7530, 4)
    taskq_thread_wait+0x64(ffffff06f6a2e238, ffffff06f6b1cb80, ffffff072a7db988, ffffff002f348bc0, 7530)
    taskq_d_thread+0x145(ffffff072a7db958)
    thread_start+8()
ffffff00307cac40 fffffffffbc2ea80 0 0 60 ffffff0730f7c538
  PC: _resume_from_idle+0xf4    TASKQ: system_taskq
  stack pointer for thread ffffff00307cac40: ffffff00307caa30
  [ ffffff00307caa30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff0730f7c538, ffffff06f6b1ca80, 45d964b800, 989680, 0)
    cv_reltimedwait+0x51(ffffff0730f7c538, ffffff06f6b1ca80, 7530, 4)
    taskq_thread_wait+0x64(ffffff06f6a2e238, ffffff06f6b1ca80, ffffff0730f7c538, ffffff00307cabc0, 7530)
    taskq_d_thread+0x145(ffffff0730f7c508)
    thread_start+8()
ffffff002e0fbc40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e268
  PC: _resume_from_idle+0xf4    TASKQ: system_taskq
  stack pointer for thread ffffff002e0fbc40: ffffff002e0fba80
  [ ffffff002e0fba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f6a2e268, ffffff06f6a2e258)
    taskq_thread_wait+0xbe(ffffff06f6a2e238, ffffff06f6a2e258, ffffff06f6a2e268, ffffff002e0fbbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06f6a2e238)
    thread_start+8()
ffffff002f369c40 fffffffffbc2ea80 0 0 60 ffffff07235433b8
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_4_
  stack pointer for thread ffffff002f369c40: ffffff002f369a80
  [ ffffff002f369a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235433b8, ffffff07235433a8)
    taskq_thread_wait+0xbe(ffffff0723543388, ffffff07235433a8, ffffff07235433b8, ffffff002f369bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543388)
    thread_start+8()
ffffff002f0ecc40 fffffffffbc2ea80 0 0 60 ffffff07235433b8
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_4_
  stack pointer for thread ffffff002f0ecc40: ffffff002f0eca80
  [ ffffff002f0eca80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235433b8, ffffff07235433a8)
    taskq_thread_wait+0xbe(ffffff0723543388, ffffff07235433a8, ffffff07235433b8, ffffff002f0ecbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543388)
    thread_start+8()
ffffff002f0e6c40 fffffffffbc2ea80 0 0 60 ffffff07235433b8
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_4_
  stack pointer for thread ffffff002f0e6c40: ffffff002f0e6a80
  [ ffffff002f0e6a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235433b8, ffffff07235433a8)
    taskq_thread_wait+0xbe(ffffff0723543388, ffffff07235433a8, ffffff07235433b8, ffffff002f0e6bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543388)
    thread_start+8()
ffffff002f0f8c40 fffffffffbc2ea80 0 0 60 ffffff07235433b8
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_4_
  stack pointer for thread ffffff002f0f8c40: ffffff002f0f8a80
  [ ffffff002f0f8a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235433b8, ffffff07235433a8)
    taskq_thread_wait+0xbe(ffffff0723543388, ffffff07235433a8, ffffff07235433b8, ffffff002f0f8bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543388)
    thread_start+8()
ffffff002e5fac40 fffffffffbc2ea80 0 0 60 ffffff07235432a0
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_6_
  stack pointer for thread ffffff002e5fac40: ffffff002e5faa80
  [ ffffff002e5faa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235432a0, ffffff0723543290)
    taskq_thread_wait+0xbe(ffffff0723543270, ffffff0723543290, ffffff07235432a0, ffffff002e5fabc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543270)
    thread_start+8()
ffffff002e5f4c40 fffffffffbc2ea80 0 0 60 ffffff07235432a0
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_6_
  stack pointer for thread ffffff002e5f4c40: ffffff002e5f4a80
  [ ffffff002e5f4a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235432a0, ffffff0723543290)
    taskq_thread_wait+0xbe(ffffff0723543270, ffffff0723543290, ffffff07235432a0, ffffff002e5f4bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543270)
    thread_start+8()
ffffff002e5eec40 fffffffffbc2ea80 0 0 60 ffffff07235432a0
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_6_
  stack pointer for thread ffffff002e5eec40: ffffff002e5eea80
  [ ffffff002e5eea80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235432a0, ffffff0723543290)
    taskq_thread_wait+0xbe(ffffff0723543270, ffffff0723543290, ffffff07235432a0, ffffff002e5eebc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543270)
    thread_start+8()
ffffff002e572c40 fffffffffbc2ea80 0 0 60 ffffff07235432a0
  PC: _resume_from_idle+0xf4    TASKQ: USB_device_0_pipehndl_tq_6_
  stack pointer for thread ffffff002e572c40: ffffff002e572a80
  [ ffffff002e572a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07235432a0, ffffff0723543290)
    taskq_thread_wait+0xbe(ffffff0723543270, ffffff0723543290, ffffff07235432a0, ffffff002e572bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543270)
    thread_start+8()
ffffff002e72ac40 fffffffffbc2ea80 0 0 60 ffffff0723543188
  PC: _resume_from_idle+0xf4    TASKQ: usb_mid_nexus_enum_tq
  stack pointer for thread ffffff002e72ac40: ffffff002e72aa80
  [ ffffff002e72aa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723543188, ffffff0723543178)
    taskq_thread_wait+0xbe(ffffff0723543158, ffffff0723543178, ffffff0723543188, ffffff002e72abc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543158)
    thread_start+8()
ffffff002f5a4c40 fffffffffbc2ea80 0 0 60 ffffff0723543070
  PC: _resume_from_idle+0xf4    TASKQ: USB_hid_81_pipehndl_tq_0
  stack pointer for thread ffffff002f5a4c40: ffffff002f5a4a80
  [ ffffff002f5a4a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723543070, ffffff0723543060)
    taskq_thread_wait+0xbe(ffffff0723543040, ffffff0723543060, ffffff0723543070, ffffff002f5a4bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543040)
    thread_start+8()
ffffff002f59ec40 fffffffffbc2ea80 0 0 60 ffffff0723543070
  PC: _resume_from_idle+0xf4    TASKQ: USB_hid_81_pipehndl_tq_0
  stack pointer for thread ffffff002f59ec40: ffffff002f59ea80
  [ ffffff002f59ea80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723543070, ffffff0723543060)
    taskq_thread_wait+0xbe(ffffff0723543040, ffffff0723543060, ffffff0723543070, ffffff002f59ebc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0723543040)
    thread_start+8()
ffffff002f640c40 fffffffffbc2ea80 0 0 60 ffffff0724078eb0
  PC: _resume_from_idle+0xf4    TASKQ: USB_hid_81_pipehndl_tq_1
  stack pointer for thread ffffff002f640c40: ffffff002f640a80
  [ ffffff002f640a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078eb0, ffffff0724078ea0)
    taskq_thread_wait+0xbe(ffffff0724078e80, ffffff0724078ea0, ffffff0724078eb0, ffffff002f640bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078e80)
    thread_start+8()
ffffff002f63ac40 fffffffffbc2ea80 0 0 60 ffffff0724078eb0
  PC: _resume_from_idle+0xf4    TASKQ: USB_hid_81_pipehndl_tq_1
  stack pointer for thread ffffff002f63ac40: ffffff002f63aa80
  [ ffffff002f63aa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078eb0, ffffff0724078ea0)
    taskq_thread_wait+0xbe(ffffff0724078e80, ffffff0724078ea0, ffffff0724078eb0, ffffff002f63abc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078e80)
    thread_start+8()
ffffff002e11fc40 fffffffffbc2ea80 0 0 60 fffffffffbcfae8c
  PC: _resume_from_idle+0xf4    THREAD: streams_bufcall_service()
  stack pointer for thread ffffff002e11fc40: ffffff002e11fb70
  [ ffffff002e11fb70 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(fffffffffbcfae8c, fffffffffbcfaf10)
    streams_bufcall_service+0x8d()
    thread_start+8()
ffffff002e125c40 fffffffffbc2ea80 0 0 60 fffffffffbcca058
  PC: _resume_from_idle+0xf4    THREAD: streams_qbkgrnd_service()
  stack pointer for thread ffffff002e125c40: ffffff002e125b50
  [ ffffff002e125b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(fffffffffbcca058, fffffffffbcca050)
    streams_qbkgrnd_service+0x151()
    thread_start+8()
ffffff002e12bc40 fffffffffbc2ea80 0 0 60 fffffffffbcca05a
  PC: _resume_from_idle+0xf4    THREAD: streams_sqbkgrnd_service()
  stack pointer for thread ffffff002e12bc40: ffffff002e12bb60
  [ ffffff002e12bb60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(fffffffffbcca05a, fffffffffbcca050)
    streams_sqbkgrnd_service+0xe5()
    thread_start+8()
ffffff002e131c40 fffffffffbc2ea80 0 0 60 fffffffffbc31de0
  PC: _resume_from_idle+0xf4    THREAD: page_capture_thread()
  stack pointer for thread ffffff002e131c40: ffffff002e131ae0
  [ ffffff002e131ae0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(fffffffffbc31de0, fffffffffbc2d938, 117a12daa00, 989680, 0)
    cv_reltimedwait+0x51(fffffffffbc31de0, fffffffffbc2d938, 1d524, 4)
    page_capture_thread+0xb1()
    thread_start+8()
ffffff002f65cc40 ffffff06f6b44008 ffffff06fe21da40 0 60 ffffff06f6b4de90
  PC: _resume_from_idle+0xf4    CMD: kcfpoold
  stack pointer for thread ffffff002f65cc40: ffffff002f65c9f0
  [ ffffff002f65c9f0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff06f6b4de90, ffffff06f6b4de98, df8475800, 989680, 0)
    cv_reltimedwait+0x51(ffffff06f6b4de90, ffffff06f6b4de98, 1770, 4)
    kcfpool_svc+0x84(0)
    thread_start+8()
ffffff002e2efc40 ffffff06f6b44008 ffffff06fe21a880 0 60 ffffff06f6b4de90
  PC: _resume_from_idle+0xf4    CMD: kcfpoold
  stack pointer for thread ffffff002e2efc40: ffffff002e2ef9f0
  [ ffffff002e2ef9f0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff06f6b4de90, ffffff06f6b4de98, df8475800, 989680, 0)
    cv_reltimedwait+0x51(ffffff06f6b4de90, ffffff06f6b4de98, 1770, 4)
    kcfpool_svc+0x84(0)
    thread_start+8()
ffffff002fc6bc40 ffffff06f6b44008 ffffff06fe212900 0 60 ffffff06f6b4de90
  PC: _resume_from_idle+0xf4    CMD: kcfpoold
  stack pointer for thread ffffff002fc6bc40: ffffff002fc6b9f0
  [ ffffff002fc6b9f0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff06f6b4de90, ffffff06f6b4de98, df8475800, 989680, 0)
    cv_reltimedwait+0x51(ffffff06f6b4de90, ffffff06f6b4de98, 1770, 4)
    kcfpool_svc+0x84(0)
    thread_start+8()
ffffff002fbffc40 ffffff06f6b44008 ffffff06fe2168c0 0 60 ffffff06f6b4de90
  PC: _resume_from_idle+0xf4    CMD: kcfpoold
  stack pointer for thread ffffff002fbffc40: ffffff002fbff9f0
  [ ffffff002fbff9f0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff06f6b4de90, ffffff06f6b4de98, df8475800, 989680, 0)
    cv_reltimedwait+0x51(ffffff06f6b4de90, ffffff06f6b4de98, 1770, 4)
    kcfpool_svc+0x84(0)
    thread_start+8()
ffffff002e137c40 ffffff06f6b44008 ffffff06f134e840 0 60 ffffff06f69df3f4
  PC: _resume_from_idle+0xf4    CMD: kcfpoold
  stack pointer for thread ffffff002e137c40: ffffff002e1379d0
  [ ffffff002e1379d0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff06f69df3f4, ffffff06f69df3f8, df8475800, 989680, 0)
    cv_reltimedwait+0x51(ffffff06f69df3f4, ffffff06f69df3f8, 1770, 4)
    kcfpoold+0xf6(0)
    thread_start+8()
ffffff002e13dc40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e150
  PC: _resume_from_idle+0xf4    TASKQ: dbu_evict
  stack pointer for thread ffffff002e13dc40: ffffff002e13da80
  [ ffffff002e13da80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f6a2e150, ffffff06f6a2e140)
    taskq_thread_wait+0xbe(ffffff06f6a2e120, ffffff06f6a2e140, ffffff06f6a2e150, ffffff002e13dbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06f6a2e120)
    thread_start+8()
ffffff002e149c40 fffffffffbc2ea80 0 0 60 fffffffffbd1e450
  PC: _resume_from_idle+0xf4    THREAD: arc_reclaim_thread()
  stack pointer for thread ffffff002e149c40: ffffff002e149aa0
  [ ffffff002e149aa0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(fffffffffbd1e450, fffffffffbd1e448, 3b9aca00, 989680, 0)
    cv_timedwait+0x5c(fffffffffbd1e450, fffffffffbd1e448, 90a783)
    arc_reclaim_thread+0x13e()
    thread_start+8()
ffffff002e14fc40 fffffffffbc2ea80 0 0 60 fffffffffbd1e468
  PC: _resume_from_idle+0xf4    THREAD: arc_user_evicts_thread()
  stack pointer for thread ffffff002e14fc40: ffffff002e14fac0
  [ ffffff002e14fac0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(fffffffffbd1e468, fffffffffbd1e460, 3b9aca00, 989680, 0)
    cv_timedwait+0x5c(fffffffffbd1e468, fffffffffbd1e460, 90a783)
    arc_user_evicts_thread+0xd9()
    thread_start+8()
ffffff002e155c40 fffffffffbc2ea80 0 0 60 fffffffffbd22880
  PC: _resume_from_idle+0xf4    THREAD: l2arc_feed_thread()
  stack pointer for thread ffffff002e155c40: ffffff002e155a90
  [ ffffff002e155a90 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(fffffffffbd22880, fffffffffbd22878, 3b9aca00, 989680, 0)
    cv_timedwait+0x5c(fffffffffbd22880, fffffffffbd22878, 90a764)
    l2arc_feed_thread+0xad()
    thread_start+8()
ffffff002e15bc40 fffffffffbc2ea80 0 0 60 fffffffffbcf0730
  PC: _resume_from_idle+0xf4    THREAD: pm_dep_thread()
  stack pointer for thread ffffff002e15bc40: ffffff002e15bb60
  [ ffffff002e15bb60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(fffffffffbcf0730, fffffffffbcf1db8)
    pm_dep_thread+0xbd()
    thread_start+8()
ffffff002e161c40 fffffffffbc2ea80 0 0 60 ffffff06f6a2e038
  PC: _resume_from_idle+0xf4    TASKQ: ppm_nexus_enum_tq
  stack pointer for thread ffffff002e161c40: ffffff002e161a80
  [ ffffff002e161a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f6a2e038, ffffff06f6a2e028)
    taskq_thread_wait+0xbe(ffffff06f6a2e008, ffffff06f6a2e028, ffffff06f6a2e038, ffffff002e161bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06f6a2e008)
    thread_start+8()
ffffff002e16dc40 fffffffffbc2ea80 0 0 60 ffffff06fdf76e78
  PC: _resume_from_idle+0xf4    TASKQ: ahci_nexus_enum_tq
  stack pointer for thread ffffff002e16dc40: ffffff002e16da80
  [ ffffff002e16da80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76e78, ffffff06fdf76e68)
    taskq_thread_wait+0xbe(ffffff06fdf76e48, ffffff06fdf76e68, ffffff06fdf76e78, ffffff002e16dbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76e48)
    thread_start+8()
ffffff002e179c40 fffffffffbc2ea80 0 0 60 ffffff06fdf76d60
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port0
  stack pointer for thread ffffff002e179c40: ffffff002e179a80
  [ ffffff002e179a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76d60, ffffff06fdf76d50)
    taskq_thread_wait+0xbe(ffffff06fdf76d30, ffffff06fdf76d50, ffffff06fdf76d60, ffffff002e179bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76d30)
    thread_start+8()
ffffff002e173c40 fffffffffbc2ea80 0 0 60 ffffff06fdf76d60
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port0
  stack pointer for thread ffffff002e173c40: ffffff002e173a80
  [ ffffff002e173a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76d60, ffffff06fdf76d50)
    taskq_thread_wait+0xbe(ffffff06fdf76d30, ffffff06fdf76d50, ffffff06fdf76d60, ffffff002e173bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76d30)
    thread_start+8()
ffffff002e185c40 fffffffffbc2ea80 0 0 60 ffffff06fdf76c48
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port1
  stack pointer for thread ffffff002e185c40: ffffff002e185a80
  [ ffffff002e185a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76c48, ffffff06fdf76c38)
    taskq_thread_wait+0xbe(ffffff06fdf76c18, ffffff06fdf76c38, ffffff06fdf76c48, ffffff002e185bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76c18)
    thread_start+8()
ffffff002e17fc40 fffffffffbc2ea80 0 0 60 ffffff06fdf76c48
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port1
  stack pointer for thread ffffff002e17fc40: ffffff002e17fa80
  [ ffffff002e17fa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76c48, ffffff06fdf76c38)
    taskq_thread_wait+0xbe(ffffff06fdf76c18, ffffff06fdf76c38, ffffff06fdf76c48, ffffff002e17fbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76c18)
    thread_start+8()
ffffff002e191c40 fffffffffbc2ea80 0 0 60 ffffff06fdf76b30
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port2
  stack pointer for thread ffffff002e191c40: ffffff002e191a80
  [ ffffff002e191a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76b30, ffffff06fdf76b20)
    taskq_thread_wait+0xbe(ffffff06fdf76b00, ffffff06fdf76b20, ffffff06fdf76b30, ffffff002e191bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76b00)
    thread_start+8()
ffffff002e18bc40 fffffffffbc2ea80 0 0 60 ffffff06fdf76b30
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port2
  stack pointer for thread ffffff002e18bc40: ffffff002e18ba80
  [ ffffff002e18ba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76b30, ffffff06fdf76b20)
    taskq_thread_wait+0xbe(ffffff06fdf76b00, ffffff06fdf76b20, ffffff06fdf76b30, ffffff002e18bbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76b00)
    thread_start+8()
ffffff002e19dc40 fffffffffbc2ea80 0 0 60 ffffff06fdf76a18
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port3
  stack pointer for thread ffffff002e19dc40: ffffff002e19da80
  [ ffffff002e19da80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76a18, ffffff06fdf76a08)
    taskq_thread_wait+0xbe(ffffff06fdf769e8, ffffff06fdf76a08, ffffff06fdf76a18, ffffff002e19dbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf769e8)
    thread_start+8()
ffffff002e197c40 fffffffffbc2ea80 0 0 60 ffffff06fdf76a18
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port3
  stack pointer for thread ffffff002e197c40: ffffff002e197a80
  [ ffffff002e197a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76a18, ffffff06fdf76a08)
    taskq_thread_wait+0xbe(ffffff06fdf769e8, ffffff06fdf76a08, ffffff06fdf76a18, ffffff002e197bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf769e8)
    thread_start+8()
ffffff002e1a9c40 fffffffffbc2ea80 0 0 60 ffffff06fdf76900
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port4
  stack pointer for thread ffffff002e1a9c40: ffffff002e1a9a80
  [ ffffff002e1a9a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76900, ffffff06fdf768f0)
    taskq_thread_wait+0xbe(ffffff06fdf768d0, ffffff06fdf768f0, ffffff06fdf76900, ffffff002e1a9bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf768d0)
    thread_start+8()
ffffff002e1a3c40 fffffffffbc2ea80 0 0 60 ffffff06fdf76900
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port4
  stack pointer for thread ffffff002e1a3c40: ffffff002e1a3a80
  [ ffffff002e1a3a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76900, ffffff06fdf768f0)
    taskq_thread_wait+0xbe(ffffff06fdf768d0, ffffff06fdf768f0, ffffff06fdf76900, ffffff002e1a3bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf768d0)
    thread_start+8()
ffffff002e1b5c40 fffffffffbc2ea80 0 0 60 ffffff06fdf767e8
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port5
  stack pointer for thread ffffff002e1b5c40: ffffff002e1b5a80
  [ ffffff002e1b5a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf767e8, ffffff06fdf767d8)
    taskq_thread_wait+0xbe(ffffff06fdf767b8, ffffff06fdf767d8, ffffff06fdf767e8, ffffff002e1b5bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf767b8)
    thread_start+8()
ffffff002e1afc40 fffffffffbc2ea80 0 0 60 ffffff06fdf767e8
  PC: _resume_from_idle+0xf4    TASKQ: ahci_event_handle_taskq_port5
  stack pointer for thread ffffff002e1afc40: ffffff002e1afa80
  [ ffffff002e1afa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf767e8, ffffff06fdf767d8)
    taskq_thread_wait+0xbe(ffffff06fdf767b8, ffffff06fdf767d8, ffffff06fdf767e8, ffffff002e1afbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf767b8)
    thread_start+8()
ffffff002e1bbc40 fffffffffbc2ea80 0 0 60 ffffff06fdf766d0
  PC: _resume_from_idle+0xf4    TASKQ: pci15d9_f580_0
  stack pointer for thread ffffff002e1bbc40: ffffff002e1bba80
  [ ffffff002e1bba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf766d0, ffffff06fdf766c0)
    taskq_thread_wait+0xbe(ffffff06fdf766a0, ffffff06fdf766c0, ffffff06fdf766d0, ffffff002e1bbbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf766a0)
    thread_start+8()
ffffff002e1c1c40 fffffffffbc2ea80 0 0 60 fffffffffbd34440
  PC: _resume_from_idle+0xf4    THREAD: sata_event_daemon()
  stack pointer for thread ffffff002e1c1c40: ffffff002e1c1b00
  [ ffffff002e1c1b00 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(fffffffffbd34440, fffffffffbd34438, 2faf080, 989680, 0)
    cv_reltimedwait+0x51(fffffffffbd34440, fffffffffbd34438, 5, 4)
    sata_event_daemon+0xff(fffffffffbd34428)
    thread_start+8()
ffffff002e1f1c40 fffffffffbc2ea80 0 0 97 ffffff06fdf765b8
  PC: _resume_from_idle+0xf4    TASKQ: sd_drv_taskq
  stack pointer for thread ffffff002e1f1c40: ffffff002e1f1a80
  [ ffffff002e1f1a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf765b8, ffffff06fdf765a8)
    taskq_thread_wait+0xbe(ffffff06fdf76588, ffffff06fdf765a8, ffffff06fdf765b8, ffffff002e1f1bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76588)
    thread_start+8()
ffffff002e1ebc40 fffffffffbc2ea80 0 0 97 ffffff06fdf765b8
  PC: _resume_from_idle+0xf4    TASKQ: sd_drv_taskq
  stack pointer for thread ffffff002e1ebc40: ffffff002e1eba80
  [ ffffff002e1eba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf765b8, ffffff06fdf765a8)
    taskq_thread_wait+0xbe(ffffff06fdf76588, ffffff06fdf765a8, ffffff06fdf765b8, ffffff002e1ebbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76588)
    thread_start+8()
ffffff002e1e5c40 fffffffffbc2ea80 0 0 97 ffffff06fdf765b8
  PC: _resume_from_idle+0xf4    TASKQ: sd_drv_taskq
  stack pointer for thread ffffff002e1e5c40: ffffff002e1e5a80
  [ ffffff002e1e5a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf765b8, ffffff06fdf765a8)
    taskq_thread_wait+0xbe(ffffff06fdf76588, ffffff06fdf765a8, ffffff06fdf765b8, ffffff002e1e5bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76588)
    thread_start+8()
ffffff002e1dfc40 fffffffffbc2ea80 0 0 97 ffffff06fdf765b8
  PC: _resume_from_idle+0xf4    TASKQ: sd_drv_taskq
  stack pointer for thread ffffff002e1dfc40: ffffff002e1dfa80
  [ ffffff002e1dfa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf765b8, ffffff06fdf765a8)
    taskq_thread_wait+0xbe(ffffff06fdf76588, ffffff06fdf765a8, ffffff06fdf765b8, ffffff002e1dfbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76588)
    thread_start+8()
ffffff002e1d9c40 fffffffffbc2ea80 0 0 97 ffffff06fdf765b8
  PC: _resume_from_idle+0xf4    TASKQ: sd_drv_taskq
  stack pointer for thread ffffff002e1d9c40: ffffff002e1d9a80
  [ ffffff002e1d9a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf765b8, ffffff06fdf765a8)
    taskq_thread_wait+0xbe(ffffff06fdf76588, ffffff06fdf765a8, ffffff06fdf765b8, ffffff002e1d9bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76588)
    thread_start+8()
ffffff002e1d3c40 fffffffffbc2ea80 0 0 97 ffffff06fdf765b8
  PC: _resume_from_idle+0xf4    TASKQ: sd_drv_taskq
  stack pointer for thread ffffff002e1d3c40: ffffff002e1d3a80
  [ ffffff002e1d3a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf765b8, ffffff06fdf765a8)
    taskq_thread_wait+0xbe(ffffff06fdf76588, ffffff06fdf765a8, ffffff06fdf765b8, ffffff002e1d3bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76588)
    thread_start+8()
ffffff002e1cdc40 fffffffffbc2ea80 0 0 97 ffffff06fdf765b8
  PC: _resume_from_idle+0xf4    TASKQ: sd_drv_taskq
  stack pointer for thread ffffff002e1cdc40: ffffff002e1cda80
  [ ffffff002e1cda80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf765b8, ffffff06fdf765a8)
    taskq_thread_wait+0xbe(ffffff06fdf76588, ffffff06fdf765a8, ffffff06fdf765b8, ffffff002e1cdbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76588)
    thread_start+8()
ffffff002e1c7c40 fffffffffbc2ea80 0 0 97 ffffff06fdf765b8
  PC: _resume_from_idle+0xf4    TASKQ: sd_drv_taskq
  stack pointer for thread ffffff002e1c7c40: ffffff002e1c7a80
  [ ffffff002e1c7a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf765b8, ffffff06fdf765a8)
    taskq_thread_wait+0xbe(ffffff06fdf76588, ffffff06fdf765a8, ffffff06fdf765b8, ffffff002e1c7bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76588)
    thread_start+8()
ffffff002e1f7c40 fffffffffbc2ea80 0 0 97 ffffff06fdf764a0
  PC: _resume_from_idle+0xf4    TASKQ: sd_rmw_taskq
  stack pointer for thread ffffff002e1f7c40: ffffff002e1f7a80
  [ ffffff002e1f7a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf764a0, ffffff06fdf76490)
    taskq_thread_wait+0xbe(ffffff06fdf76470, ffffff06fdf76490, ffffff06fdf764a0, ffffff002e1f7bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76470)
    thread_start+8()
ffffff002e203c40 fffffffffbc2ea80 0 0 97 ffffff06fdf76388
  PC: _resume_from_idle+0xf4    TASKQ: xbuf_taskq
  stack pointer for thread ffffff002e203c40: ffffff002e203a80
  [ ffffff002e203a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76388, ffffff06fdf76378)
    taskq_thread_wait+0xbe(ffffff06fdf76358, ffffff06fdf76378, ffffff06fdf76388, ffffff002e203bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76358)
    thread_start+8()
ffffff002e250c40 ffffff06fe1fd010 ffffff06f134d340 2 0 ffffff06fe1e7398
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002e250c40: ffffff002e250990
  [ ffffff002e250990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1e7398, ffffff06fe1e7388)
    taskq_thread_wait+0xbe(ffffff06fe1e7368, ffffff06fe1e7388, ffffff06fe1e7398, ffffff002e250ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1e7368)
    thread_start+8()
ffffff002e256c40 ffffff06fe1fd010 ffffff06f134cc40 2 99 ffffff06fe1e74b0
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002e256c40: ffffff002e256990
  [ ffffff002e256990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1e74b0, ffffff06fe1e74a0)
    taskq_thread_wait+0xbe(ffffff06fe1e7480, ffffff06fe1e74a0, ffffff06fe1e74b0, ffffff002e256ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1e7480)
    thread_start+8()
ffffff002e221c40 ffffff06fe1fd010 ffffff06f1349380 2 99 ffffff06fe1e75c8
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002e221c40: ffffff002e221990
  [ ffffff002e221990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1e75c8, ffffff06fe1e75b8)
    taskq_thread_wait+0xbe(ffffff06fe1e7598, ffffff06fe1e75b8, ffffff06fe1e75c8, ffffff002e221ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1e7598)
    thread_start+8()
ffffff002e21bc40 ffffff06fe1fd010 ffffff06f1349a80 2 99 ffffff06fe1e75c8
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002e21bc40: ffffff002e21b990
  [ ffffff002e21b990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1e75c8, ffffff06fe1e75b8)
    taskq_thread_wait+0xbe(ffffff06fe1e7598, ffffff06fe1e75b8, ffffff06fe1e75c8,
ffffff002e21bad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7598) thread_start+8() ffffff002e20fc40 ffffff06fe1fd010 ffffff06f134a180 2 0 ffffff06fe1e75c8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e20fc40: ffffff002e20f990 [ ffffff002e20f990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e75c8, ffffff06fe1e75b8) taskq_thread_wait+0xbe(ffffff06fe1e7598, ffffff06fe1e75b8, ffffff06fe1e75c8 , ffffff002e20fad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7598) thread_start+8() ffffff002e27ac40 ffffff06fe1fd010 ffffff06f134a880 2 0 ffffff06fe1e75c8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e27ac40: ffffff002e27a990 [ ffffff002e27a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e75c8, ffffff06fe1e75b8) taskq_thread_wait+0xbe(ffffff06fe1e7598, ffffff06fe1e75b8, ffffff06fe1e75c8 , ffffff002e27aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7598) thread_start+8() ffffff002e268c40 ffffff06fe1fd010 ffffff06f134b040 2 0 ffffff06fe1e75c8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e268c40: ffffff002e268990 [ ffffff002e268990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e75c8, ffffff06fe1e75b8) taskq_thread_wait+0xbe(ffffff06fe1e7598, ffffff06fe1e75b8, ffffff06fe1e75c8 , ffffff002e268ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7598) thread_start+8() ffffff002e26ec40 ffffff06fe1fd010 ffffff06f134b740 2 0 ffffff06fe1e75c8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e26ec40: ffffff002e26e990 [ ffffff002e26e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e75c8, ffffff06fe1e75b8) taskq_thread_wait+0xbe(ffffff06fe1e7598, ffffff06fe1e75b8, ffffff06fe1e75c8 , ffffff002e26ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7598) thread_start+8() ffffff002e262c40 ffffff06fe1fd010 ffffff06f134be40 2 0 ffffff06fe1e75c8 PC: 
_resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e262c40: ffffff002e262990 [ ffffff002e262990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e75c8, ffffff06fe1e75b8) taskq_thread_wait+0xbe(ffffff06fe1e7598, ffffff06fe1e75b8, ffffff06fe1e75c8 , ffffff002e262ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7598) thread_start+8() ffffff002e25cc40 ffffff06fe1fd010 ffffff06f134c540 2 0 ffffff06fe1e75c8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e25cc40: ffffff002e25c990 [ ffffff002e25c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e75c8, ffffff06fe1e75b8) taskq_thread_wait+0xbe(ffffff06fe1e7598, ffffff06fe1e75b8, ffffff06fe1e75c8 , ffffff002e25cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7598) thread_start+8() ffffff002e7dec40 ffffff06fe1fd010 ffffff06fe60f5c0 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7dec40: ffffff002e7de990 [ ffffff002e7de990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7dead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7d8c40 ffffff06fe1fd010 ffffff06fe60fcc0 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7d8c40: ffffff002e7d8990 [ ffffff002e7d8990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7d8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7d2c40 ffffff06fe1fd010 ffffff06fe6103c0 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7d2c40: ffffff002e7d2990 [ ffffff002e7d2990 _resume_from_idle+0xf4() ] swtch+0x141() 
cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7d2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7ccc40 ffffff06fe1fd010 ffffff06fe610ac0 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7ccc40: ffffff002e7cc990 [ ffffff002e7cc990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7ccad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7c6c40 ffffff06fe1fd010 ffffff06fe6111c0 2 99 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7c6c40: ffffff002e7c6990 [ ffffff002e7c6990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7c6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7c0c40 ffffff06fe1fd010 ffffff06fe6118c0 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7c0c40: ffffff002e7c0990 [ ffffff002e7c0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7c0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7bac40 ffffff06fe1fd010 ffffff06fe612080 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7bac40: ffffff002e7ba990 [ ffffff002e7ba990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7baad0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7b4c40 ffffff06fe1fd010 ffffff06fe612780 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7b4c40: ffffff002e7b4990 [ ffffff002e7b4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7b4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7aec40 ffffff06fe1fd010 ffffff06fe612e80 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7aec40: ffffff002e7ae990 [ ffffff002e7ae990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e7aead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e233c40 ffffff06fe1fd010 ffffff06f1347780 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e233c40: ffffff002e233990 [ ffffff002e233990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e233ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e22dc40 ffffff06fe1fd010 ffffff06f1348c80 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e22dc40: ffffff002e22d990 [ ffffff002e22d990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e22dad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e227c40 ffffff06fe1fd010 ffffff06f1348580 2 0 ffffff06fe1e76e0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for 
thread ffffff002e227c40: ffffff002e227990 [ ffffff002e227990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e76e0, ffffff06fe1e76d0) taskq_thread_wait+0xbe(ffffff06fe1e76b0, ffffff06fe1e76d0, ffffff06fe1e76e0 , ffffff002e227ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e76b0) thread_start+8() ffffff002e7a8c40 ffffff06fe1fd010 ffffff06fe613580 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7a8c40: ffffff002e7a8990 [ ffffff002e7a8990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e7a8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e7a2c40 ffffff06fe1fd010 ffffff06fe613c80 2 99 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7a2c40: ffffff002e7a2990 [ ffffff002e7a2990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e7a2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e79cc40 ffffff06fe1fd010 ffffff06fe614380 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e79cc40: ffffff002e79c990 [ ffffff002e79c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e79cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e796c40 ffffff06fe1fd010 ffffff06fe614a80 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e796c40: ffffff002e796990 [ ffffff002e796990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) 
taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e796ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e790c40 ffffff06fe1fd010 ffffff06fe615180 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e790c40: ffffff002e790990 [ ffffff002e790990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e790ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e78ac40 ffffff06fe1fd010 ffffff06fe615880 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e78ac40: ffffff002e78a990 [ ffffff002e78a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e78aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e784c40 ffffff06fe1fd010 ffffff06fe26c040 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e784c40: ffffff002e784990 [ ffffff002e784990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e784ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e77ec40 ffffff06fe1fd010 ffffff06fe26c740 2 99 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e77ec40: ffffff002e77e990 [ ffffff002e77e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e77ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e778c40 
ffffff06fe1fd010 ffffff06fe26ce40 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e778c40: ffffff002e778990 [ ffffff002e778990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e778ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e772c40 ffffff06fe1fd010 ffffff06fe26d540 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e772c40: ffffff002e772990 [ ffffff002e772990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e772ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e23fc40 ffffff06fe1fd010 ffffff06fe26dc40 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e23fc40: ffffff002e23f990 [ ffffff002e23f990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e23fad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e239c40 ffffff06fe1fd010 ffffff06fe26e340 2 0 ffffff06fe1e77f8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e239c40: ffffff002e239990 [ ffffff002e239990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e77f8, ffffff06fe1e77e8) taskq_thread_wait+0xbe(ffffff06fe1e77c8, ffffff06fe1e77e8, ffffff06fe1e77f8 , ffffff002e239ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e77c8) thread_start+8() ffffff002e826c40 ffffff06fe1fd010 ffffff06fe60a100 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e826c40: ffffff002e826990 [ ffffff002e826990 
_resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e826ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e820c40 ffffff06fe1fd010 ffffff06fe60a800 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e820c40: ffffff002e820990 [ ffffff002e820990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e820ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e81ac40 ffffff06fe1fd010 ffffff06fe60af00 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e81ac40: ffffff002e81a990 [ ffffff002e81a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e81aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e814c40 ffffff06fe1fd010 ffffff06fe60b600 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e814c40: ffffff002e814990 [ ffffff002e814990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e814ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e80ec40 ffffff06fe1fd010 ffffff06fe60bd00 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e80ec40: ffffff002e80e990 [ ffffff002e80e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e80ead0, 
ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e808c40 ffffff06fe1fd010 ffffff06fe60c400 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e808c40: ffffff002e808990 [ ffffff002e808990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e808ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e802c40 ffffff06fe1fd010 ffffff06fe60cb00 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e802c40: ffffff002e802990 [ ffffff002e802990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e802ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e7fcc40 ffffff06fe1fd010 ffffff06fe60d200 2 99 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7fcc40: ffffff002e7fc990 [ ffffff002e7fc990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e7fcad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e7f6c40 ffffff06fe1fd010 ffffff06fe60d900 2 99 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7f6c40: ffffff002e7f6990 [ ffffff002e7f6990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e7f6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e7f0c40 ffffff06fe1fd010 ffffff06fe60e0c0 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: 
zpool-rpool stack pointer for thread ffffff002e7f0c40: ffffff002e7f0990 [ ffffff002e7f0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e7f0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e7eac40 ffffff06fe1fd010 ffffff06fe60e7c0 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7eac40: ffffff002e7ea990 [ ffffff002e7ea990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e7eaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e7e4c40 ffffff06fe1fd010 ffffff06fe60eec0 2 0 ffffff06fe1e7910 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e7e4c40: ffffff002e7e4990 [ ffffff002e7e4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7910, ffffff06fe1e7900) taskq_thread_wait+0xbe(ffffff06fe1e78e0, ffffff06fe1e7900, ffffff06fe1e7910 , ffffff002e7e4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e78e0) thread_start+8() ffffff002e86ec40 ffffff06fe1fd010 ffffff06fe644a40 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e86ec40: ffffff002e86e990 [ ffffff002e86e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e86ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e868c40 ffffff06fe1fd010 ffffff06fe645140 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e868c40: ffffff002e868990 [ ffffff002e868990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) 
taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e868ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e862c40 ffffff06fe1fd010 ffffff06fe645840 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e862c40: ffffff002e862990 [ ffffff002e862990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e862ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e85cc40 ffffff06fe1fd010 ffffff06fe646000 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e85cc40: ffffff002e85c990 [ ffffff002e85c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e85cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e856c40 ffffff06fe1fd010 ffffff06fe646700 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e856c40: ffffff002e856990 [ ffffff002e856990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e856ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e850c40 ffffff06fe1fd010 ffffff06fe646e00 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e850c40: ffffff002e850990 [ ffffff002e850990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e850ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e84ac40 
ffffff06fe1fd010 ffffff06fe647500 2 99 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e84ac40: ffffff002e84a990 [ ffffff002e84a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e84aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e844c40 ffffff06fe1fd010 ffffff06fe647c00 2 99 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e844c40: ffffff002e844990 [ ffffff002e844990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e844ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e83ec40 ffffff06fe1fd010 ffffff06fe648300 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e83ec40: ffffff002e83e990 [ ffffff002e83e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e83ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e838c40 ffffff06fe1fd010 ffffff06fe648a00 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e838c40: ffffff002e838990 [ ffffff002e838990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e838ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e832c40 ffffff06fe1fd010 ffffff06fe649100 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e832c40: ffffff002e832990 [ ffffff002e832990 
_resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e832ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e82cc40 ffffff06fe1fd010 ffffff06fe649800 2 0 ffffff06fe1e7a28 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e82cc40: ffffff002e82c990 [ ffffff002e82c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7a28, ffffff06fe1e7a18) taskq_thread_wait+0xbe(ffffff06fe1e79f8, ffffff06fe1e7a18, ffffff06fe1e7a28 , ffffff002e82cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e79f8) thread_start+8() ffffff002e8b6c40 ffffff06fe1fd010 ffffff06fe63f580 2 0 ffffff06fe1e7b40 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e8b6c40: ffffff002e8b6990 [ ffffff002e8b6990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7b40, ffffff06fe1e7b30) taskq_thread_wait+0xbe(ffffff06fe1e7b10, ffffff06fe1e7b30, ffffff06fe1e7b40 , ffffff002e8b6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7b10) thread_start+8() ffffff002e8b0c40 ffffff06fe1fd010 ffffff06fe63fc80 2 0 ffffff06fe1e7b40 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e8b0c40: ffffff002e8b0990 [ ffffff002e8b0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7b40, ffffff06fe1e7b30) taskq_thread_wait+0xbe(ffffff06fe1e7b10, ffffff06fe1e7b30, ffffff06fe1e7b40 , ffffff002e8b0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1e7b10) thread_start+8() ffffff002e8aac40 ffffff06fe1fd010 ffffff06fe640380 2 0 ffffff06fe1e7b40 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002e8aac40: ffffff002e8aa990 [ ffffff002e8aa990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1e7b40, ffffff06fe1e7b30) taskq_thread_wait+0xbe(ffffff06fe1e7b10, ffffff06fe1e7b30, ffffff06fe1e7b40 , ffffff002e8aaad0, 
                                        ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1e7b10)
    thread_start+8()

[Identical stacks elided: every thread below is a zpool-rpool taskq worker
(PROC ffffff06fe1fd010, PC _resume_from_idle+0xf4) parked in cv_wait() with
the following stack, differing only in the thread, stack-pointer, taskq and
condvar addresses:]

    swtch+0x141()
    cv_wait+0x70(<cv>, <lock>)
    taskq_thread_wait+0xbe(<taskq>, <lock>, <cv>, <arg>, ffffffffffffffff)
    taskq_thread+0x37c(<taskq>)
    thread_start+8()

Idle worker threads, grouped by the taskq they are waiting on:

  taskq ffffff06fe1e7b10 (cv ffffff06fe1e7b40):
    ffffff002e8a4c40 ffffff002e89ec40 ffffff002e898c40 ffffff002e892c40
    ffffff002e88cc40 ffffff002e886c40 ffffff002e880c40 ffffff002e87ac40
    ffffff002e874c40
  taskq ffffff06fe1e7c28 (cv ffffff06fe1e7c58):
    ffffff002e8fec40 ffffff002e8f8c40 ffffff002e8f2c40 ffffff002e8ecc40
    ffffff002e8e6c40 ffffff002e8e0c40 ffffff002e8dac40 ffffff002e8d4c40
    ffffff002e8cec40 ffffff002e8c8c40 ffffff002e8c2c40 ffffff002e8bcc40
  taskq ffffff06fe1e7d40 (cv ffffff06fe1e7d70):
    ffffff002e946c40 ffffff002e940c40 ffffff002e93ac40 ffffff002e934c40
    ffffff002e92ec40 ffffff002e928c40 ffffff002e922c40 ffffff002e91cc40
    ffffff002e916c40 ffffff002e910c40 ffffff002e90ac40 ffffff002e904c40
  taskq ffffff06fe1e7e58 (cv ffffff06fe1e7e88):
    ffffff002e98ec40 ffffff002e988c40 ffffff002e982c40 ffffff002e97cc40
    ffffff002e976c40 ffffff002e970c40 ffffff002e96ac40 ffffff002e964c40
    ffffff002e95ec40 ffffff002e958c40 ffffff002e952c40 ffffff002e94cc40
  taskq ffffff06fe1fb018 (cv ffffff06fe1fb048):
    ffffff002e64ec40 ffffff002e648c40 ffffff002e994c40
  taskq ffffff06fe1fb130 (cv ffffff06fe1fb160):
    ffffff002e9b2c40 ffffff002e9acc40 ffffff002e9a6c40 ffffff002e9a0c40
    ffffff002e99ac40
  taskq ffffff06fe1fb248 (cv ffffff06fe1fb278):
    ffffff002e9e2c40 ffffff002e9dcc40 ffffff002e9d6c40 ffffff002e9d0c40
    ffffff002e9cac40 ffffff002e9c4c40 ffffff002e9bec40 ffffff002e9b8c40
  taskq ffffff06fe1fb360 (cv ffffff06fe1fb390):
    ffffff002ea00c40 ffffff002e9fac40 ffffff002e9f4c40 ffffff002e9eec40
    ffffff002e9e8c40
  taskq ffffff06fe1fb478 (cv ffffff06fe1fb4a8):
    ffffff002ea48c40 ffffff002ea42c40 ffffff002ea3cc40 ffffff002ea36c40
    ffffff002ea30c40 ffffff002ea2ac40 ffffff002ea24c40 ffffff002ea1ec40
    ffffff002ea18c40 ffffff002ea12c40 ffffff002ea0cc40 ffffff002ea06c40
  taskq ffffff06fe1fb590 (cv ffffff06fe1fb5c0):
    ffffff002ea90c40 ffffff002ea8ac40 ffffff002ea84c40 ffffff002ea7ec40
    ffffff002ea78c40 ffffff002ea72c40 ffffff002ea6cc40 ffffff002ea66c40
    ffffff002ea60c40 ffffff002ea5ac40 ffffff002ea54c40 ffffff002ea4ec40
  taskq ffffff06fe1fb6a8 (cv ffffff06fe1fb6d8):
    ffffff002ead8c40 ffffff002ead2c40
ffffff002eaccc40 ffffff06fe1fd010 ffffff06fe677c80 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eaccc40: ffffff002eacc990 [ ffffff002eacc990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002eaccad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002eac6c40 ffffff06fe1fd010 ffffff06fe678380 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eac6c40: ffffff002eac6990 [ ffffff002eac6990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002eac6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002eac0c40 ffffff06fe1fd010 ffffff06fe678a80 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eac0c40: ffffff002eac0990 [ ffffff002eac0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002eac0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002eabac40 ffffff06fe1fd010 ffffff06fe679180 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eabac40: ffffff002eaba990 [ ffffff002eaba990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002eabaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002eab4c40 ffffff06fe1fd010 ffffff06fe679880 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eab4c40: ffffff002eab4990 [ 
ffffff002eab4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002eab4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002eaaec40 ffffff06fe1fd010 ffffff06fe67a040 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eaaec40: ffffff002eaae990 [ ffffff002eaae990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002eaaead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002eaa8c40 ffffff06fe1fd010 ffffff06fe67a740 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eaa8c40: ffffff002eaa8990 [ ffffff002eaa8990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002eaa8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002eaa2c40 ffffff06fe1fd010 ffffff06fe67ae40 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eaa2c40: ffffff002eaa2990 [ ffffff002eaa2990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002eaa2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002ea9cc40 ffffff06fe1fd010 ffffff06fe67b540 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ea9cc40: ffffff002ea9c990 [ ffffff002ea9c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 
, ffffff002ea9cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002ea96c40 ffffff06fe1fd010 ffffff06fe67bc40 2 99 ffffff06fe1fb6d8 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ea96c40: ffffff002ea96990 [ ffffff002ea96990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb6d8, ffffff06fe1fb6c8) taskq_thread_wait+0xbe(ffffff06fe1fb6a8, ffffff06fe1fb6c8, ffffff06fe1fb6d8 , ffffff002ea96ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb6a8) thread_start+8() ffffff002eb20c40 ffffff06fe1fd010 ffffff06fe671900 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb20c40: ffffff002eb20990 [ ffffff002eb20990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eb20ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eb1ac40 ffffff06fe1fd010 ffffff06fe6720c0 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb1ac40: ffffff002eb1a990 [ ffffff002eb1a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eb1aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eb14c40 ffffff06fe1fd010 ffffff06fe6727c0 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb14c40: ffffff002eb14990 [ ffffff002eb14990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eb14ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eb0ec40 ffffff06fe1fd010 ffffff06fe672ec0 2 99 ffffff06fe1fb7f0 PC: 
_resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb0ec40: ffffff002eb0e990 [ ffffff002eb0e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eb0ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eb08c40 ffffff06fe1fd010 ffffff06fe6735c0 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb08c40: ffffff002eb08990 [ ffffff002eb08990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eb08ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eb02c40 ffffff06fe1fd010 ffffff06fe673cc0 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb02c40: ffffff002eb02990 [ ffffff002eb02990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eb02ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eafcc40 ffffff06fe1fd010 ffffff06fe6743c0 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eafcc40: ffffff002eafc990 [ ffffff002eafc990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eafcad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eaf6c40 ffffff06fe1fd010 ffffff06fe674ac0 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eaf6c40: ffffff002eaf6990 [ ffffff002eaf6990 _resume_from_idle+0xf4() ] swtch+0x141() 
cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eaf6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eaf0c40 ffffff06fe1fd010 ffffff06fe6751c0 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eaf0c40: ffffff002eaf0990 [ ffffff002eaf0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eaf0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eaeac40 ffffff06fe1fd010 ffffff06fe6758c0 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eaeac40: ffffff002eaea990 [ ffffff002eaea990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eaeaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eae4c40 ffffff06fe1fd010 ffffff06fe676080 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eae4c40: ffffff002eae4990 [ ffffff002eae4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eae4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eadec40 ffffff06fe1fd010 ffffff06fe676780 2 99 ffffff06fe1fb7f0 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eadec40: ffffff002eade990 [ ffffff002eade990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb7f0, ffffff06fe1fb7e0) taskq_thread_wait+0xbe(ffffff06fe1fb7c0, ffffff06fe1fb7e0, ffffff06fe1fb7f0 , ffffff002eadead0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff06fe1fb7c0) thread_start+8() ffffff002eb68c40 ffffff06fe1fd010 ffffff06fe66c300 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb68c40: ffffff002eb68990 [ ffffff002eb68990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb68ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb62c40 ffffff06fe1fd010 ffffff06fe66ca00 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb62c40: ffffff002eb62990 [ ffffff002eb62990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb62ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb5cc40 ffffff06fe1fd010 ffffff06fe66d100 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb5cc40: ffffff002eb5c990 [ ffffff002eb5c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb5cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb56c40 ffffff06fe1fd010 ffffff06fe66d800 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb56c40: ffffff002eb56990 [ ffffff002eb56990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb56ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb50c40 ffffff06fe1fd010 ffffff06fe66e100 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack 
pointer for thread ffffff002eb50c40: ffffff002eb50990 [ ffffff002eb50990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb50ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb4ac40 ffffff06fe1fd010 ffffff06fe66e800 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb4ac40: ffffff002eb4a990 [ ffffff002eb4a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb4aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb44c40 ffffff06fe1fd010 ffffff06fe66ef00 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb44c40: ffffff002eb44990 [ ffffff002eb44990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb44ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb3ec40 ffffff06fe1fd010 ffffff06fe66f600 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb3ec40: ffffff002eb3e990 [ ffffff002eb3e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb3ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb38c40 ffffff06fe1fd010 ffffff06fe66fd00 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb38c40: ffffff002eb38990 [ ffffff002eb38990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) 
taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb38ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb32c40 ffffff06fe1fd010 ffffff06fe670400 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb32c40: ffffff002eb32990 [ ffffff002eb32990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb32ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb2cc40 ffffff06fe1fd010 ffffff06fe670b00 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb2cc40: ffffff002eb2c990 [ ffffff002eb2c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb2cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002eb26c40 ffffff06fe1fd010 ffffff06fe671200 2 99 ffffff06fe1fb908 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb26c40: ffffff002eb26990 [ ffffff002eb26990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fb908, ffffff06fe1fb8f8) taskq_thread_wait+0xbe(ffffff06fe1fb8d8, ffffff06fe1fb8f8, ffffff06fe1fb908 , ffffff002eb26ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb8d8) thread_start+8() ffffff002ebb0c40 ffffff06fe1fd010 ffffff06fe6a6e40 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebb0c40: ffffff002ebb0990 [ ffffff002ebb0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002ebb0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() 
ffffff002ebaac40 ffffff06fe1fd010 ffffff06fe6a7540 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebaac40: ffffff002ebaa990 [ ffffff002ebaa990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002ebaaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eba4c40 ffffff06fe1fd010 ffffff06fe6a7c40 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eba4c40: ffffff002eba4990 [ ffffff002eba4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eba4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb9ec40 ffffff06fe1fd010 ffffff06fe6a8340 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb9ec40: ffffff002eb9e990 [ ffffff002eb9e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eb9ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb98c40 ffffff06fe1fd010 ffffff06fe6a8a40 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb98c40: ffffff002eb98990 [ ffffff002eb98990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eb98ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb92c40 ffffff06fe1fd010 ffffff06fe6a9140 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb92c40: ffffff002eb92990 [ 
ffffff002eb92990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eb92ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb8cc40 ffffff06fe1fd010 ffffff06fe6a9840 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb8cc40: ffffff002eb8c990 [ ffffff002eb8c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eb8cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb86c40 ffffff06fe1fd010 ffffff06fe66a000 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb86c40: ffffff002eb86990 [ ffffff002eb86990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eb86ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb80c40 ffffff06fe1fd010 ffffff06fe66a700 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb80c40: ffffff002eb80990 [ ffffff002eb80990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eb80ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb7ac40 ffffff06fe1fd010 ffffff06fe66ae00 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb7ac40: ffffff002eb7a990 [ ffffff002eb7a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 
, ffffff002eb7aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb74c40 ffffff06fe1fd010 ffffff06fe66b500 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb74c40: ffffff002eb74990 [ ffffff002eb74990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eb74ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002eb6ec40 ffffff06fe1fd010 ffffff06fe66bc00 2 99 ffffff06fe1fba20 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002eb6ec40: ffffff002eb6e990 [ ffffff002eb6e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fba20, ffffff06fe1fba10) taskq_thread_wait+0xbe(ffffff06fe1fb9f0, ffffff06fe1fba10, ffffff06fe1fba20 , ffffff002eb6ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fb9f0) thread_start+8() ffffff002ebf8c40 ffffff06fe1fd010 ffffff06fe6a18c0 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebf8c40: ffffff002ebf8990 [ ffffff002ebf8990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebf8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebf2c40 ffffff06fe1fd010 ffffff06fe6a2080 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebf2c40: ffffff002ebf2990 [ ffffff002ebf2990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebf2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebecc40 ffffff06fe1fd010 ffffff06fe6a2780 2 99 ffffff06fe1fbb38 PC: 
_resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebecc40: ffffff002ebec990 [ ffffff002ebec990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebecad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebe6c40 ffffff06fe1fd010 ffffff06fe6a2e80 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebe6c40: ffffff002ebe6990 [ ffffff002ebe6990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebe6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebe0c40 ffffff06fe1fd010 ffffff06fe6a3580 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebe0c40: ffffff002ebe0990 [ ffffff002ebe0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebe0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebdac40 ffffff06fe1fd010 ffffff06fe6a3c80 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebdac40: ffffff002ebda990 [ ffffff002ebda990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebdaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebd4c40 ffffff06fe1fd010 ffffff06fe6a4380 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebd4c40: ffffff002ebd4990 [ ffffff002ebd4990 _resume_from_idle+0xf4() ] swtch+0x141() 
cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebd4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebcec40 ffffff06fe1fd010 ffffff06fe6a4a80 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebcec40: ffffff002ebce990 [ ffffff002ebce990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebcead0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebc8c40 ffffff06fe1fd010 ffffff06fe6a5180 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebc8c40: ffffff002ebc8990 [ ffffff002ebc8990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebc8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebc2c40 ffffff06fe1fd010 ffffff06fe6a5880 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebc2c40: ffffff002ebc2990 [ ffffff002ebc2990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebc2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff06fe1fbb08) thread_start+8() ffffff002ebbcc40 ffffff06fe1fd010 ffffff06fe6a6040 2 99 ffffff06fe1fbb38 PC: _resume_from_idle+0xf4 CMD: zpool-rpool stack pointer for thread ffffff002ebbcc40: ffffff002ebbc990 [ ffffff002ebbc990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28) taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38 , ffffff002ebbcad0, ffffffffffffffff) 
    taskq_thread+0x37c(ffffff06fe1fbb08)
    thread_start+8()
ffffff002ebb6c40 ffffff06fe1fd010 ffffff06fe6a6740 2 99 ffffff06fe1fbb38
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ebb6c40: ffffff002ebb6990
  [ ffffff002ebb6990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbb38, ffffff06fe1fbb28)
    taskq_thread_wait+0xbe(ffffff06fe1fbb08, ffffff06fe1fbb28, ffffff06fe1fbb38, ffffff002ebb6ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbb08)
    thread_start+8()
ffffff002ec40c40 ffffff06fe1fd010 ffffff06fe69c400 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec40c40: ffffff002ec40990
  [ ffffff002ec40990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec40ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec3ac40 ffffff06fe1fd010 ffffff06fe69cb00 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec3ac40: ffffff002ec3a990
  [ ffffff002ec3a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec3aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec34c40 ffffff06fe1fd010 ffffff06fe69d200 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec34c40: ffffff002ec34990
  [ ffffff002ec34990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec34ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec2ec40 ffffff06fe1fd010 ffffff06fe69d900 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec2ec40: ffffff002ec2e990
  [ ffffff002ec2e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec2ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec28c40 ffffff06fe1fd010 ffffff06fe69e0c0 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec28c40: ffffff002ec28990
  [ ffffff002ec28990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec28ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec22c40 ffffff06fe1fd010 ffffff06fe69e7c0 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec22c40: ffffff002ec22990
  [ ffffff002ec22990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec22ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec1cc40 ffffff06fe1fd010 ffffff06fe69eec0 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec1cc40: ffffff002ec1c990
  [ ffffff002ec1c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec1cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec16c40 ffffff06fe1fd010 ffffff06fe69f5c0 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec16c40: ffffff002ec16990
  [ ffffff002ec16990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec16ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec10c40 ffffff06fe1fd010 ffffff06fe69fcc0 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec10c40: ffffff002ec10990
  [ ffffff002ec10990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec10ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec0ac40 ffffff06fe1fd010 ffffff06fe6a03c0 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec0ac40: ffffff002ec0a990
  [ ffffff002ec0a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec0aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec04c40 ffffff06fe1fd010 ffffff06fe6a0ac0 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec04c40: ffffff002ec04990
  [ ffffff002ec04990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ec04ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ebfec40 ffffff06fe1fd010 ffffff06fe6a11c0 2 99 ffffff06fe1fbc50
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ebfec40: ffffff002ebfe990
  [ ffffff002ebfe990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbc50, ffffff06fe1fbc40)
    taskq_thread_wait+0xbe(ffffff06fe1fbc20, ffffff06fe1fbc40, ffffff06fe1fbc50, ffffff002ebfead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbc20)
    thread_start+8()
ffffff002ec46c40 ffffff06fe1fd010 ffffff06fe69bd00 2 99 ffffff06fe1fbd68
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec46c40: ffffff002ec46990
  [ ffffff002ec46990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbd68, ffffff06fe1fbd58)
    taskq_thread_wait+0xbe(ffffff06fe1fbd38, ffffff06fe1fbd58, ffffff06fe1fbd68, ffffff002ec46ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbd38)
    thread_start+8()
ffffff002ec4cc40 ffffff06fe1fd010 ffffff06fe69b600 2 99 ffffff06fe1fbe80
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec4cc40: ffffff002ec4c990
  [ ffffff002ec4c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1fbe80, ffffff06fe1fbe70)
    taskq_thread_wait+0xbe(ffffff06fe1fbe50, ffffff06fe1fbe70, ffffff06fe1fbe80, ffffff002ec4cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1fbe50)
    thread_start+8()
ffffff002ec52c40 ffffff06fe1fd010 ffffff06fe69af00 2 99 ffffff06fdf76040
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec52c40: ffffff002ec52990
  [ ffffff002ec52990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76040, ffffff06fdf76030)
    taskq_thread_wait+0xbe(ffffff06fdf76010, ffffff06fdf76030, ffffff06fdf76040, ffffff002ec52ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76010)
    thread_start+8()
ffffff002ec58c40 ffffff06fe1fd010 ffffff06fe69a800 2 99 ffffff06fdf76158
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec58c40: ffffff002ec58990
  [ ffffff002ec58990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76158, ffffff06fdf76148)
    taskq_thread_wait+0xbe(ffffff06fdf76128, ffffff06fdf76148, ffffff06fdf76158, ffffff002ec58ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76128)
    thread_start+8()
ffffff002ec5ec40 ffffff06fe1fd010 ffffff06fe69a100 2 99 ffffff06fdf76270
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002ec5ec40: ffffff002ec5e990
  [ ffffff002ec5e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fdf76270, ffffff06fdf76260)
    taskq_thread_wait+0xbe(ffffff06fdf76240, ffffff06fdf76260, ffffff06fdf76270, ffffff002ec5ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fdf76240)
    thread_start+8()
ffffff002e24ac40 ffffff06fe1fd010 ffffff06f134da40 2 99 ffffff06f6b4f8f8
  PC: _resume_from_idle+0xf4    CMD: zpool-rpool
  stack pointer for thread ffffff002e24ac40: ffffff002e24aa50
  [ ffffff002e24aa50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f6b4f8f8, ffffff06f6b4f8f0)
    spa_thread+0x1db(ffffff06f6b4f000)
    thread_start+8()
ffffff002e642c40 fffffffffbc2ea80 0 0 60 ffffff06fe1e7168
  PC: _resume_from_idle+0xf4    TASKQ: metaslab_group_taskq
  stack pointer for thread ffffff002e642c40: ffffff002e642a80
  [ ffffff002e642a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1e7168, ffffff06fe1e7158)
    taskq_thread_wait+0xbe(ffffff06fe1e7138, ffffff06fe1e7158, ffffff06fe1e7168, ffffff002e642bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1e7138)
    thread_start+8()
ffffff002ec64c40 fffffffffbc2ea80 0 0 60 ffffff06fe1e7168
  PC: _resume_from_idle+0xf4    TASKQ: metaslab_group_taskq
  stack pointer for thread ffffff002ec64c40: ffffff002ec64a80
  [ ffffff002ec64a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1e7168, ffffff06fe1e7158)
    taskq_thread_wait+0xbe(ffffff06fe1e7138, ffffff06fe1e7158, ffffff06fe1e7168, ffffff002ec64bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1e7138)
    thread_start+8()
ffffff002ec7cc40 fffffffffbc2ea80 0 0 60 ffffff06fe1e7280
  PC: _resume_from_idle+0xf4    TASKQ: zfs_vn_rele_taskq
  stack pointer for thread ffffff002ec7cc40: ffffff002ec7ca80
  [ ffffff002ec7ca80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1e7280, ffffff06fe1e7270)
    taskq_thread_wait+0xbe(ffffff06fe1e7250, ffffff06fe1e7270, ffffff06fe1e7280, ffffff002ec7cbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1e7250)
    thread_start+8()
ffffff002e760c40 fffffffffbc2ea80 0 0 60 ffffff06f676e544
  PC: _resume_from_idle+0xf4    THREAD: txg_quiesce_thread()
  stack pointer for thread ffffff002e760c40: ffffff002e760ad0
  [ ffffff002e760ad0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f676e544, ffffff06f676e500)
    txg_thread_wait+0xaf(ffffff06f676e4f8, ffffff002e760bc0, ffffff06f676e544, 0)
    txg_quiesce_thread+0x106(ffffff06f676e380)
    thread_start+8()
ffffff002e283c40 fffffffffbc2ea80 0 0 60 ffffff06f676e540
  PC: _resume_from_idle+0xf4    THREAD: txg_sync_thread()
  stack pointer for thread ffffff002e283c40: ffffff002e283a10
  [ ffffff002e283a10 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff06f676e540, ffffff06f676e500, 12a05f200, 989680, 0)
    cv_timedwait+0x5c(ffffff06f676e540, ffffff06f676e500, 90a898)
    txg_thread_wait+0x5f(ffffff06f676e4f8, ffffff002e283bc0, ffffff06f676e540, 1f4)
    txg_sync_thread+0x111(ffffff06f676e380)
    thread_start+8()
ffffff002e766c40 fffffffffbc2ea80 0 0 60 ffffff06fe1e7050
  PC: _resume_from_idle+0xf4    TASKQ: zil_clean
  stack pointer for thread ffffff002e766c40: ffffff002e766a80
  [ ffffff002e766a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06fe1e7050, ffffff06fe1e7040)
    taskq_thread_wait+0xbe(ffffff06fe1e7020, ffffff06fe1e7040, ffffff06fe1e7050, ffffff002e766bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff06fe1e7020)
    thread_start+8()
ffffff002e215c40 fffffffffbc2ea80 0 0 60 ffffff0722f8ee90
  PC: _resume_from_idle+0xf4    TASKQ: acpinex_nexus_enum_tq
  stack pointer for thread ffffff002e215c40: ffffff002e215a80
  [ ffffff002e215a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8ee90, ffffff0722f8ee80)
    taskq_thread_wait+0xbe(ffffff0722f8ee60, ffffff0722f8ee80, ffffff0722f8ee90, ffffff002e215bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8ee60)
    thread_start+8()
ffffff002ec76c40 fffffffffbc2ea80 0 0 60 ffffff0722f8ed78
  PC: _resume_from_idle+0xf4    TASKQ: cpudrv_cpudrv_monitor
  stack pointer for thread ffffff002ec76c40: ffffff002ec76a80
  [ ffffff002ec76a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8ed78, ffffff0722f8ed68)
    taskq_thread_wait+0xbe(ffffff0722f8ed48, ffffff0722f8ed68, ffffff0722f8ed78, ffffff002ec76bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8ed48)
    thread_start+8()
ffffff002ec70c40 fffffffffbc2ea80 0 0 60 fffffffffbd48b20
  PC: _resume_from_idle+0xf4    THREAD: dld_taskq_dispatch()
  stack pointer for thread ffffff002ec70c40: ffffff002ec70b60
  [ ffffff002ec70b60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(fffffffffbd48b20, fffffffffbd48b18)
    dld_taskq_dispatch+0x115()
    thread_start+8()
ffffff002ec6ac40 fffffffffbc2ea80 0 0 60 ffffff0722f8ec60
  PC: _resume_from_idle+0xf4    TASKQ: IP_INJECT_QUEUE_OUT
  stack pointer for thread ffffff002ec6ac40: ffffff002ec6aa80
  [ ffffff002ec6aa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8ec60, ffffff0722f8ec50)
    taskq_thread_wait+0xbe(ffffff0722f8ec30, ffffff0722f8ec50, ffffff0722f8ec60, ffffff002ec6abc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8ec30)
    thread_start+8()
ffffff002ec82c40 fffffffffbc2ea80 0 0 60 ffffff0722f8eb48
  PC: _resume_from_idle+0xf4    TASKQ: IP_INJECT_QUEUE_IN
  stack pointer for thread ffffff002ec82c40: ffffff002ec82a80
  [ ffffff002ec82a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8eb48, ffffff0722f8eb38)
    taskq_thread_wait+0xbe(ffffff0722f8eb18, ffffff0722f8eb38, ffffff0722f8eb48, ffffff002ec82bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8eb18)
    thread_start+8()
ffffff002e274c40 fffffffffbc2ea80 0 0 60 ffffff0722f8ea30
  PC: _resume_from_idle+0xf4    TASKQ: IP_NIC_EVENT_QUEUE
  stack pointer for thread ffffff002e274c40: ffffff002e274a80
  [ ffffff002e274a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8ea30, ffffff0722f8ea20)
    taskq_thread_wait+0xbe(ffffff0722f8ea00, ffffff0722f8ea20, ffffff0722f8ea30, ffffff002e274bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8ea00)
    thread_start+8()
ffffff002e289c40 fffffffffbc2ea80 0 0 99 ffffff06f7e46808
  PC: _resume_from_idle+0xf4    THREAD: ipsec_loader()
  stack pointer for thread ffffff002e289c40: ffffff002e289b30
  [ ffffff002e289b30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f7e46808, ffffff06f7e467f0)
    ipsec_loader+0x149(ffffff06f7e45000)
    thread_start+8()
ffffff002e28fc40 fffffffffbc2ea80 0 0 99 ffffff0722feaed0
  PC: _resume_from_idle+0xf4    THREAD: squeue_worker()
  stack pointer for thread ffffff002e28fc40: ffffff002e28fb40
  [ ffffff002e28fb40 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722feaed0, ffffff0722feae90)
    squeue_worker+0x104(ffffff0722feae80)
    thread_start+8()
ffffff002e295c40 fffffffffbc2ea80 0 0 99 ffffff0722feaed2
  PC: _resume_from_idle+0xf4    THREAD: squeue_polling_thread()
  stack pointer for thread ffffff002e295c40: ffffff002e295b00
  [ ffffff002e295b00 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722feaed2, ffffff0722feae90)
    squeue_polling_thread+0xa9(ffffff0722feae80)
    thread_start+8()
ffffff002e29bc40 fffffffffbc2ea80 0 0 60 ffffffffc0091010
  PC: _resume_from_idle+0xf4    THREAD: dce_reclaim_worker()
  stack pointer for thread ffffff002e29bc40: ffffff002e29bab0
  [ ffffff002e29bab0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffffffc0091010, ffffffffc0091008, df8475800, 989680, 0)
    cv_timedwait+0x5c(ffffffffc0091010, ffffffffc0091008, 90a9a7)
    dce_reclaim_worker+0xab(0)
    thread_start+8()
ffffff002e2a1c40 fffffffffbc2ea80 0 0 60 ffffff06f7e41e70
  PC: _resume_from_idle+0xf4    THREAD: ill_taskq_dispatch()
  stack pointer for thread ffffff002e2a1c40: ffffff002e2a1af0
  [ ffffff002e2a1af0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff06f7e41e70, ffffff06f7e41e68)
    ill_taskq_dispatch+0x155(ffffff06f7e41000)
    thread_start+8()
ffffff002e2a7c40 fffffffffbc2ea80 0 0 60 ffffff0722f8e918
  PC: _resume_from_idle+0xf4    TASKQ: ilb_rule_taskq_ffffff06f10bbe0
  stack pointer for thread ffffff002e2a7c40: ffffff002e2a7a80
  [ ffffff002e2a7a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e918, ffffff0722f8e908)
    taskq_thread_wait+0xbe(ffffff0722f8e8e8, ffffff0722f8e908, ffffff0722f8e918, ffffff002e2a7bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e8e8)
    thread_start+8()
ffffff002e2c5c40 fffffffffbc2ea80 0 0 60 ffffff0722f8e800
  PC: _resume_from_idle+0xf4    TASKQ: sof_close_deferred_taskq
  stack pointer for thread ffffff002e2c5c40: ffffff002e2c5a80
  [ ffffff002e2c5a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e800, ffffff0722f8e7f0)
    taskq_thread_wait+0xbe(ffffff0722f8e7d0, ffffff0722f8e7f0, ffffff0722f8e800, ffffff002e2c5bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e7d0)
    thread_start+8()
ffffff002e2cbc40 fffffffffbc2ea80 0 0 60 ffffff0722f8e6e8
  PC: _resume_from_idle+0xf4    TASKQ: pcieb_nexus_enum_tq
  stack pointer for thread ffffff002e2cbc40: ffffff002e2cba80
  [ ffffff002e2cba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e6e8, ffffff0722f8e6d8)
    taskq_thread_wait+0xbe(ffffff0722f8e6b8, ffffff0722f8e6d8, ffffff0722f8e6e8, ffffff002e2cbbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e6b8)
    thread_start+8()
ffffff002e2d1c40 fffffffffbc2ea80 0 0 60 ffffff0722f8e5d0
  PC: _resume_from_idle+0xf4    TASKQ: pcieb_nexus_enum_tq
  stack pointer for thread ffffff002e2d1c40: ffffff002e2d1a80
  [ ffffff002e2d1a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e5d0, ffffff0722f8e5c0)
    taskq_thread_wait+0xbe(ffffff0722f8e5a0, ffffff0722f8e5c0, ffffff0722f8e5d0, ffffff002e2d1bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e5a0)
    thread_start+8()
ffffff002e2d7c40 fffffffffbc2ea80 0 0 60 ffffff0722f8e4b8
  PC: _resume_from_idle+0xf4    TASKQ: pcieb_nexus_enum_tq
  stack pointer for thread ffffff002e2d7c40: ffffff002e2d7a80
  [ ffffff002e2d7a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e4b8, ffffff0722f8e4a8)
    taskq_thread_wait+0xbe(ffffff0722f8e488, ffffff0722f8e4a8, ffffff0722f8e4b8, ffffff002e2d7bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e488)
    thread_start+8()
ffffff002e2ddc40 fffffffffbc2ea80 0 0 60 ffffff0722f8e3a0
  PC: _resume_from_idle+0xf4    TASKQ: pcieb_nexus_enum_tq
  stack pointer for thread ffffff002e2ddc40: ffffff002e2dda80
  [ ffffff002e2dda80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e3a0, ffffff0722f8e390)
    taskq_thread_wait+0xbe(ffffff0722f8e370, ffffff0722f8e390, ffffff0722f8e3a0, ffffff002e2ddbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e370)
    thread_start+8()
ffffff002e2e3c40 fffffffffbc2ea80 0 0 60 ffffff0722f8e288
  PC: _resume_from_idle+0xf4    TASKQ: pcieb_nexus_enum_tq
  stack pointer for thread ffffff002e2e3c40: ffffff002e2e3a80
  [ ffffff002e2e3a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e288, ffffff0722f8e278)
    taskq_thread_wait+0xbe(ffffff0722f8e258, ffffff0722f8e278, ffffff0722f8e288, ffffff002e2e3bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e258)
    thread_start+8()
ffffff002e2e9c40 fffffffffbc2ea80 0 0 60 ffffff0722f8e170
  PC: _resume_from_idle+0xf4    TASKQ: pci_pci_nexus_enum_tq
  stack pointer for thread ffffff002e2e9c40: ffffff002e2e9a80
  [ ffffff002e2e9a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e170, ffffff0722f8e160)
    taskq_thread_wait+0xbe(ffffff0722f8e140, ffffff0722f8e160, ffffff0722f8e170, ffffff002e2e9bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e140)
    thread_start+8()
ffffff002e2f5c40 fffffffffbc2ea80 0 0 60 ffffff0722f8e058
  PC: _resume_from_idle+0xf4    TASKQ: ehci_nexus_enum_tq
  stack pointer for thread ffffff002e2f5c40: ffffff002e2f5a80
  [ ffffff002e2f5a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0722f8e058, ffffff0722f8e048)
    taskq_thread_wait+0xbe(ffffff0722f8e028, ffffff0722f8e048, ffffff0722f8e058, ffffff002e2f5bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0722f8e028)
    thread_start+8()
ffffff002e30dc40 fffffffffbc2ea80 0 0 60 ffffff07233e2e98
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_0_pipehndl_tq_0
  stack pointer for thread ffffff002e30dc40: ffffff002e30da80
  [ ffffff002e30da80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2e98, ffffff07233e2e88)
    taskq_thread_wait+0xbe(ffffff07233e2e68, ffffff07233e2e88, ffffff07233e2e98, ffffff002e30dbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2e68)
    thread_start+8()
ffffff002e307c40 fffffffffbc2ea80 0 0 60 ffffff07233e2e98
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_0_pipehndl_tq_0
  stack pointer for thread ffffff002e307c40: ffffff002e307a80
  [ ffffff002e307a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2e98, ffffff07233e2e88)
    taskq_thread_wait+0xbe(ffffff07233e2e68, ffffff07233e2e88, ffffff07233e2e98, ffffff002e307bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2e68)
    thread_start+8()
ffffff002e301c40 fffffffffbc2ea80 0 0 60 ffffff07233e2e98
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_0_pipehndl_tq_0
  stack pointer for thread ffffff002e301c40: ffffff002e301a80
  [ ffffff002e301a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2e98, ffffff07233e2e88)
    taskq_thread_wait+0xbe(ffffff07233e2e68, ffffff07233e2e88, ffffff07233e2e98, ffffff002e301bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2e68)
    thread_start+8()
ffffff002e2fbc40 fffffffffbc2ea80 0 0 60 ffffff07233e2e98
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_0_pipehndl_tq_0
  stack pointer for thread ffffff002e2fbc40: ffffff002e2fba80
  [ ffffff002e2fba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2e98, ffffff07233e2e88)
    taskq_thread_wait+0xbe(ffffff07233e2e68, ffffff07233e2e88, ffffff07233e2e98, ffffff002e2fbbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2e68)
    thread_start+8()
ffffff002e319c40 fffffffffbc2ea80 0 0 60 ffffff07233e2d80
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_81_pipehndl_tq_0
  stack pointer for thread ffffff002e319c40: ffffff002e319a80
  [ ffffff002e319a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2d80, ffffff07233e2d70)
    taskq_thread_wait+0xbe(ffffff07233e2d50, ffffff07233e2d70, ffffff07233e2d80, ffffff002e319bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2d50)
    thread_start+8()
ffffff002e313c40 fffffffffbc2ea80 0 0 60 ffffff07233e2d80
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_81_pipehndl_tq_0
  stack pointer for thread ffffff002e313c40: ffffff002e313a80
  [ ffffff002e313a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2d80, ffffff07233e2d70)
    taskq_thread_wait+0xbe(ffffff07233e2d50, ffffff07233e2d70, ffffff07233e2d80, ffffff002e313bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2d50)
    thread_start+8()
ffffff002e31fc40 fffffffffbc2ea80 0 0 60 ffffff07233e2c68
  PC: _resume_from_idle+0xf4    TASKQ: ehci_nexus_enum_tq
  stack pointer for thread ffffff002e31fc40: ffffff002e31fa80
  [ ffffff002e31fa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2c68, ffffff07233e2c58)
    taskq_thread_wait+0xbe(ffffff07233e2c38, ffffff07233e2c58, ffffff07233e2c68, ffffff002e31fbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2c38)
    thread_start+8()
ffffff002e33dc40 fffffffffbc2ea80 0 0 60 ffffff07233e2b50
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_0_pipehndl_tq_1
  stack pointer for thread ffffff002e33dc40: ffffff002e33da80
  [ ffffff002e33da80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2b50, ffffff07233e2b40)
    taskq_thread_wait+0xbe(ffffff07233e2b20, ffffff07233e2b40, ffffff07233e2b50, ffffff002e33dbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2b20)
    thread_start+8()
ffffff002e337c40 fffffffffbc2ea80 0 0 60 ffffff07233e2b50
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_0_pipehndl_tq_1
  stack pointer for thread ffffff002e337c40: ffffff002e337a80
  [ ffffff002e337a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2b50, ffffff07233e2b40)
    taskq_thread_wait+0xbe(ffffff07233e2b20, ffffff07233e2b40, ffffff07233e2b50, ffffff002e337bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2b20)
    thread_start+8()
ffffff002e331c40 fffffffffbc2ea80 0 0 60 ffffff07233e2b50
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_0_pipehndl_tq_1
  stack pointer for thread ffffff002e331c40: ffffff002e331a80
  [ ffffff002e331a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2b50, ffffff07233e2b40)
    taskq_thread_wait+0xbe(ffffff07233e2b20, ffffff07233e2b40, ffffff07233e2b50, ffffff002e331bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2b20)
    thread_start+8()
ffffff002e32bc40 fffffffffbc2ea80 0 0 60 ffffff07233e2b50
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_0_pipehndl_tq_1
  stack pointer for thread ffffff002e32bc40: ffffff002e32ba80
  [ ffffff002e32ba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2b50, ffffff07233e2b40)
    taskq_thread_wait+0xbe(ffffff07233e2b20, ffffff07233e2b40, ffffff07233e2b50, ffffff002e32bbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2b20)
    thread_start+8()
ffffff002e349c40 fffffffffbc2ea80 0 0 60 ffffff07233e2a38
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_81_pipehndl_tq_1
  stack pointer for thread ffffff002e349c40: ffffff002e349a80
  [ ffffff002e349a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2a38, ffffff07233e2a28)
    taskq_thread_wait+0xbe(ffffff07233e2a08, ffffff07233e2a28, ffffff07233e2a38, ffffff002e349bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2a08)
    thread_start+8()
ffffff002e343c40 fffffffffbc2ea80 0 0 60 ffffff07233e2a38
  PC: _resume_from_idle+0xf4    TASKQ: USB_ehci_81_pipehndl_tq_1
  stack pointer for thread ffffff002e343c40: ffffff002e343a80
  [ ffffff002e343a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2a38, ffffff07233e2a28)
    taskq_thread_wait+0xbe(ffffff07233e2a08, ffffff07233e2a28, ffffff07233e2a38, ffffff002e343bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2a08)
    thread_start+8()
ffffff002e34fc40 fffffffffbc2ea80 0 0 60 ffffff07233e2920
  PC: _resume_from_idle+0xf4    TASKQ: uhci_nexus_enum_tq
  stack pointer for thread ffffff002e34fc40: ffffff002e34fa80
  [ ffffff002e34fa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2920, ffffff07233e2910)
    taskq_thread_wait+0xbe(ffffff07233e28f0, ffffff07233e2910, ffffff07233e2920, ffffff002e34fbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e28f0)
    thread_start+8()
ffffff002e367c40 fffffffffbc2ea80 0 0 60 ffffff07233e2808
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_0
  stack pointer for thread ffffff002e367c40: ffffff002e367a80
  [ ffffff002e367a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2808, ffffff07233e27f8)
    taskq_thread_wait+0xbe(ffffff07233e27d8, ffffff07233e27f8, ffffff07233e2808, ffffff002e367bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e27d8)
    thread_start+8()
ffffff002e361c40 fffffffffbc2ea80 0 0 60 ffffff07233e2808
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_0
  stack pointer for thread ffffff002e361c40: ffffff002e361a80
  [ ffffff002e361a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2808, ffffff07233e27f8)
    taskq_thread_wait+0xbe(ffffff07233e27d8, ffffff07233e27f8, ffffff07233e2808, ffffff002e361bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e27d8)
    thread_start+8()
ffffff002e35bc40 fffffffffbc2ea80 0 0 60 ffffff07233e2808
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_0
  stack pointer for thread ffffff002e35bc40: ffffff002e35ba80
  [ ffffff002e35ba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2808, ffffff07233e27f8)
    taskq_thread_wait+0xbe(ffffff07233e27d8, ffffff07233e27f8, ffffff07233e2808, ffffff002e35bbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e27d8)
    thread_start+8()
ffffff002e355c40 fffffffffbc2ea80 0 0 60 ffffff07233e2808
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_0
  stack pointer for thread ffffff002e355c40: ffffff002e355a80
  [ ffffff002e355a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2808, ffffff07233e27f8)
    taskq_thread_wait+0xbe(ffffff07233e27d8, ffffff07233e27f8, ffffff07233e2808, ffffff002e355bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e27d8)
    thread_start+8()
ffffff002e373c40 fffffffffbc2ea80 0 0 60 ffffff07233e26f0
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_81_pipehndl_tq_0
  stack pointer for thread ffffff002e373c40: ffffff002e373a80
  [ ffffff002e373a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e26f0, ffffff07233e26e0)
    taskq_thread_wait+0xbe(ffffff07233e26c0, ffffff07233e26e0, ffffff07233e26f0, ffffff002e373bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e26c0)
    thread_start+8()
ffffff002e36dc40 fffffffffbc2ea80 0 0 60 ffffff07233e26f0
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_81_pipehndl_tq_0
  stack pointer for thread ffffff002e36dc40: ffffff002e36da80
  [ ffffff002e36da80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e26f0, ffffff07233e26e0)
    taskq_thread_wait+0xbe(ffffff07233e26c0, ffffff07233e26e0, ffffff07233e26f0, ffffff002e36dbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e26c0)
    thread_start+8()
ffffff002e379c40 fffffffffbc2ea80 0 0 60 ffffff07233e25d8
  PC: _resume_from_idle+0xf4    TASKQ: uhci_nexus_enum_tq
  stack pointer for thread ffffff002e379c40: ffffff002e379a80
  [ ffffff002e379a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e25d8, ffffff07233e25c8)
    taskq_thread_wait+0xbe(ffffff07233e25a8, ffffff07233e25c8, ffffff07233e25d8, ffffff002e379bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e25a8)
    thread_start+8()
ffffff002e391c40 fffffffffbc2ea80 0 0 60 ffffff07233e24c0
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_1
  stack pointer for thread ffffff002e391c40: ffffff002e391a80
  [ ffffff002e391a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e24c0, ffffff07233e24b0)
    taskq_thread_wait+0xbe(ffffff07233e2490, ffffff07233e24b0, ffffff07233e24c0, ffffff002e391bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2490)
    thread_start+8()
ffffff002e38bc40 fffffffffbc2ea80 0 0 60 ffffff07233e24c0
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_1
  stack pointer for thread ffffff002e38bc40: ffffff002e38ba80
  [ ffffff002e38ba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e24c0, ffffff07233e24b0)
    taskq_thread_wait+0xbe(ffffff07233e2490, ffffff07233e24b0, ffffff07233e24c0, ffffff002e38bbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2490)
    thread_start+8()
ffffff002e385c40 fffffffffbc2ea80 0 0 60 ffffff07233e24c0
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_1
  stack pointer for thread ffffff002e385c40: ffffff002e385a80
  [ ffffff002e385a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e24c0, ffffff07233e24b0)
    taskq_thread_wait+0xbe(ffffff07233e2490, ffffff07233e24b0, ffffff07233e24c0, ffffff002e385bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2490)
    thread_start+8()
ffffff002e37fc40 fffffffffbc2ea80 0 0 60 ffffff07233e24c0
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_1
  stack pointer for thread ffffff002e37fc40: ffffff002e37fa80
  [ ffffff002e37fa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e24c0, ffffff07233e24b0)
    taskq_thread_wait+0xbe(ffffff07233e2490, ffffff07233e24b0, ffffff07233e24c0, ffffff002e37fbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2490)
    thread_start+8()
ffffff002e39dc40 fffffffffbc2ea80 0 0 60 ffffff07233e23a8
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_81_pipehndl_tq_1
  stack pointer for thread ffffff002e39dc40: ffffff002e39da80
  [ ffffff002e39da80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e23a8, ffffff07233e2398)
    taskq_thread_wait+0xbe(ffffff07233e2378, ffffff07233e2398, ffffff07233e23a8, ffffff002e39dbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2378)
    thread_start+8()
ffffff002e397c40 fffffffffbc2ea80 0 0 60 ffffff07233e23a8
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_81_pipehndl_tq_1
  stack pointer for thread ffffff002e397c40: ffffff002e397a80
  [ ffffff002e397a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e23a8, ffffff07233e2398)
    taskq_thread_wait+0xbe(ffffff07233e2378, ffffff07233e2398, ffffff07233e23a8, ffffff002e397bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2378)
    thread_start+8()
ffffff002e3a3c40 fffffffffbc2ea80 0 0 60 ffffff07233e2290
  PC: _resume_from_idle+0xf4    TASKQ: uhci_nexus_enum_tq
  stack pointer for thread ffffff002e3a3c40: ffffff002e3a3a80
  [ ffffff002e3a3a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2290, ffffff07233e2280)
    taskq_thread_wait+0xbe(ffffff07233e2260, ffffff07233e2280, ffffff07233e2290, ffffff002e3a3bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2260)
    thread_start+8()
ffffff002e3bbc40 fffffffffbc2ea80 0 0 60 ffffff07233e2178
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_2
  stack pointer for thread ffffff002e3bbc40: ffffff002e3bba80
  [ ffffff002e3bba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2178, ffffff07233e2168)
    taskq_thread_wait+0xbe(ffffff07233e2148, ffffff07233e2168, ffffff07233e2178, ffffff002e3bbbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2148)
    thread_start+8()
ffffff002e3b5c40 fffffffffbc2ea80 0 0 60 ffffff07233e2178
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_2
  stack pointer for thread ffffff002e3b5c40: ffffff002e3b5a80
  [ ffffff002e3b5a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2178, ffffff07233e2168)
    taskq_thread_wait+0xbe(ffffff07233e2148, ffffff07233e2168, ffffff07233e2178, ffffff002e3b5bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2148)
    thread_start+8()
ffffff002e3afc40 fffffffffbc2ea80 0 0 60 ffffff07233e2178
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_2
  stack pointer for thread ffffff002e3afc40: ffffff002e3afa80
  [ ffffff002e3afa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2178, ffffff07233e2168)
    taskq_thread_wait+0xbe(ffffff07233e2148, ffffff07233e2168, ffffff07233e2178, ffffff002e3afbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2148)
    thread_start+8()
ffffff002e3a9c40 fffffffffbc2ea80 0 0 60 ffffff07233e2178
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_2
  stack pointer for thread ffffff002e3a9c40: ffffff002e3a9a80
  [ ffffff002e3a9a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2178, ffffff07233e2168)
    taskq_thread_wait+0xbe(ffffff07233e2148, ffffff07233e2168, ffffff07233e2178, ffffff002e3a9bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2148)
    thread_start+8()
ffffff002e3c7c40 fffffffffbc2ea80 0 0 60 ffffff07233e2060
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_81_pipehndl_tq_2
  stack pointer for thread ffffff002e3c7c40: ffffff002e3c7a80
  [ ffffff002e3c7a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2060, ffffff07233e2050)
    taskq_thread_wait+0xbe(ffffff07233e2030, ffffff07233e2050, ffffff07233e2060, ffffff002e3c7bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2030)
    thread_start+8()
ffffff002e3c1c40 fffffffffbc2ea80 0 0 60 ffffff07233e2060
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_81_pipehndl_tq_2
  stack pointer for thread ffffff002e3c1c40: ffffff002e3c1a80
  [ ffffff002e3c1a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233e2060, ffffff07233e2050)
    taskq_thread_wait+0xbe(ffffff07233e2030, ffffff07233e2050, ffffff07233e2060, ffffff002e3c1bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233e2030)
    thread_start+8()
ffffff002e3cdc40 fffffffffbc2ea80 0 0 60 ffffff07233d4ea0
  PC: _resume_from_idle+0xf4    TASKQ: uhci_nexus_enum_tq
  stack pointer for thread ffffff002e3cdc40: ffffff002e3cda80
  [ ffffff002e3cda80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4ea0, ffffff07233d4e90)
    taskq_thread_wait+0xbe(ffffff07233d4e70, ffffff07233d4e90, ffffff07233d4ea0, ffffff002e3cdbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4e70)
    thread_start+8()
ffffff002e3e5c40 fffffffffbc2ea80 0 0 60 ffffff07233d4d88
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_3
  stack pointer for thread ffffff002e3e5c40: ffffff002e3e5a80
  [ ffffff002e3e5a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4d88, ffffff07233d4d78)
    taskq_thread_wait+0xbe(ffffff07233d4d58, ffffff07233d4d78, ffffff07233d4d88, ffffff002e3e5bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4d58)
    thread_start+8()
ffffff002e3dfc40 fffffffffbc2ea80 0 0 60 ffffff07233d4d88
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_3
  stack pointer for thread ffffff002e3dfc40: ffffff002e3dfa80
  [ ffffff002e3dfa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4d88, ffffff07233d4d78)
    taskq_thread_wait+0xbe(ffffff07233d4d58, ffffff07233d4d78, ffffff07233d4d88, ffffff002e3dfbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4d58)
    thread_start+8()
ffffff002e3d9c40 fffffffffbc2ea80 0 0 60 ffffff07233d4d88
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_3
  stack pointer for thread ffffff002e3d9c40: ffffff002e3d9a80
  [ ffffff002e3d9a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4d88, ffffff07233d4d78)
    taskq_thread_wait+0xbe(ffffff07233d4d58, ffffff07233d4d78, ffffff07233d4d88, ffffff002e3d9bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4d58)
    thread_start+8()
ffffff002e3d3c40 fffffffffbc2ea80 0 0 60 ffffff07233d4d88
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_3
  stack pointer for thread ffffff002e3d3c40: ffffff002e3d3a80
  [ ffffff002e3d3a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4d88, ffffff07233d4d78)
    taskq_thread_wait+0xbe(ffffff07233d4d58, ffffff07233d4d78, ffffff07233d4d88, ffffff002e3d3bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4d58)
    thread_start+8()
ffffff002e3f1c40 fffffffffbc2ea80 0 0 60 ffffff07233d4c70
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_81_pipehndl_tq_3
  stack pointer for thread ffffff002e3f1c40: ffffff002e3f1a80
  [ ffffff002e3f1a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4c70, ffffff07233d4c60)
    taskq_thread_wait+0xbe(ffffff07233d4c40, ffffff07233d4c60, ffffff07233d4c70, ffffff002e3f1bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4c40)
    thread_start+8()
ffffff002e3ebc40 fffffffffbc2ea80 0 0 60 ffffff07233d4c70
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_81_pipehndl_tq_3
  stack pointer for thread ffffff002e3ebc40: ffffff002e3eba80
  [ ffffff002e3eba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4c70, ffffff07233d4c60)
    taskq_thread_wait+0xbe(ffffff07233d4c40, ffffff07233d4c60, ffffff07233d4c70, ffffff002e3ebbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4c40)
    thread_start+8()
ffffff002e3f7c40 fffffffffbc2ea80 0 0 60 ffffff07233d4b58
  PC: _resume_from_idle+0xf4    TASKQ: uhci_nexus_enum_tq
  stack pointer for thread ffffff002e3f7c40: ffffff002e3f7a80
  [ ffffff002e3f7a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4b58, ffffff07233d4b48)
    taskq_thread_wait+0xbe(ffffff07233d4b28, ffffff07233d4b48, ffffff07233d4b58, ffffff002e3f7bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4b28)
    thread_start+8()
ffffff002e40fc40 fffffffffbc2ea80 0 0 60 ffffff07233d4a40
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_4
  stack pointer for thread ffffff002e40fc40: ffffff002e40fa80
  [ ffffff002e40fa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4a40, ffffff07233d4a30)
    taskq_thread_wait+0xbe(ffffff07233d4a10, ffffff07233d4a30, ffffff07233d4a40, ffffff002e40fbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4a10)
    thread_start+8()
ffffff002e409c40 fffffffffbc2ea80 0 0 60 ffffff07233d4a40
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_4
  stack pointer for thread ffffff002e409c40: ffffff002e409a80
  [ ffffff002e409a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07233d4a40, ffffff07233d4a30)
    taskq_thread_wait+0xbe(ffffff07233d4a10, ffffff07233d4a30, ffffff07233d4a40, ffffff002e409bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07233d4a10)
    thread_start+8()
ffffff002e403c40 fffffffffbc2ea80 0 0 60 ffffff07233d4a40
  PC: _resume_from_idle+0xf4    TASKQ: USB_uhci_0_pipehndl_tq_4
  stack pointer for
thread ffffff002e403c40: ffffff002e403a80 [ ffffff002e403a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d4a40, ffffff07233d4a30) taskq_thread_wait+0xbe(ffffff07233d4a10, ffffff07233d4a30, ffffff07233d4a40 , ffffff002e403bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d4a10) thread_start+8() ffffff002e3fdc40 fffffffffbc2ea80 0 0 60 ffffff07233d4a40 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_0_pipehndl_tq_4 stack pointer for thread ffffff002e3fdc40: ffffff002e3fda80 [ ffffff002e3fda80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d4a40, ffffff07233d4a30) taskq_thread_wait+0xbe(ffffff07233d4a10, ffffff07233d4a30, ffffff07233d4a40 , ffffff002e3fdbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d4a10) thread_start+8() ffffff002e41bc40 fffffffffbc2ea80 0 0 60 ffffff07233d4928 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_81_pipehndl_tq_4 stack pointer for thread ffffff002e41bc40: ffffff002e41ba80 [ ffffff002e41ba80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d4928, ffffff07233d4918) taskq_thread_wait+0xbe(ffffff07233d48f8, ffffff07233d4918, ffffff07233d4928 , ffffff002e41bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d48f8) thread_start+8() ffffff002e415c40 fffffffffbc2ea80 0 0 60 ffffff07233d4928 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_81_pipehndl_tq_4 stack pointer for thread ffffff002e415c40: ffffff002e415a80 [ ffffff002e415a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d4928, ffffff07233d4918) taskq_thread_wait+0xbe(ffffff07233d48f8, ffffff07233d4918, ffffff07233d4928 , ffffff002e415bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d48f8) thread_start+8() ffffff002e421c40 fffffffffbc2ea80 0 0 60 ffffff07233d4810 PC: _resume_from_idle+0xf4 TASKQ: uhci_nexus_enum_tq stack pointer for thread ffffff002e421c40: ffffff002e421a80 [ ffffff002e421a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d4810, ffffff07233d4800) 
taskq_thread_wait+0xbe(ffffff07233d47e0, ffffff07233d4800, ffffff07233d4810 , ffffff002e421bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d47e0) thread_start+8() ffffff002e439c40 fffffffffbc2ea80 0 0 60 ffffff07233d46f8 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_0_pipehndl_tq_5 stack pointer for thread ffffff002e439c40: ffffff002e439a80 [ ffffff002e439a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d46f8, ffffff07233d46e8) taskq_thread_wait+0xbe(ffffff07233d46c8, ffffff07233d46e8, ffffff07233d46f8 , ffffff002e439bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d46c8) thread_start+8() ffffff002e433c40 fffffffffbc2ea80 0 0 60 ffffff07233d46f8 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_0_pipehndl_tq_5 stack pointer for thread ffffff002e433c40: ffffff002e433a80 [ ffffff002e433a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d46f8, ffffff07233d46e8) taskq_thread_wait+0xbe(ffffff07233d46c8, ffffff07233d46e8, ffffff07233d46f8 , ffffff002e433bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d46c8) thread_start+8() ffffff002e42dc40 fffffffffbc2ea80 0 0 60 ffffff07233d46f8 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_0_pipehndl_tq_5 stack pointer for thread ffffff002e42dc40: ffffff002e42da80 [ ffffff002e42da80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d46f8, ffffff07233d46e8) taskq_thread_wait+0xbe(ffffff07233d46c8, ffffff07233d46e8, ffffff07233d46f8 , ffffff002e42dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d46c8) thread_start+8() ffffff002e427c40 fffffffffbc2ea80 0 0 60 ffffff07233d46f8 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_0_pipehndl_tq_5 stack pointer for thread ffffff002e427c40: ffffff002e427a80 [ ffffff002e427a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d46f8, ffffff07233d46e8) taskq_thread_wait+0xbe(ffffff07233d46c8, ffffff07233d46e8, ffffff07233d46f8 , ffffff002e427bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d46c8) thread_start+8() 
ffffff002e445c40 fffffffffbc2ea80 0 0 60 ffffff07233d45e0 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_81_pipehndl_tq_5 stack pointer for thread ffffff002e445c40: ffffff002e445a80 [ ffffff002e445a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d45e0, ffffff07233d45d0) taskq_thread_wait+0xbe(ffffff07233d45b0, ffffff07233d45d0, ffffff07233d45e0 , ffffff002e445bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d45b0) thread_start+8() ffffff002e43fc40 fffffffffbc2ea80 0 0 60 ffffff07233d45e0 PC: _resume_from_idle+0xf4 TASKQ: USB_uhci_81_pipehndl_tq_5 stack pointer for thread ffffff002e43fc40: ffffff002e43fa80 [ ffffff002e43fa80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d45e0, ffffff07233d45d0) taskq_thread_wait+0xbe(ffffff07233d45b0, ffffff07233d45d0, ffffff07233d45e0 , ffffff002e43fbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d45b0) thread_start+8() ffffff002e44bc40 fffffffffbc2ea80 0 0 60 ffffff07233d44c8 PC: _resume_from_idle+0xf4 TASKQ: ibmf_saa_event_taskq stack pointer for thread ffffff002e44bc40: ffffff002e44ba80 [ ffffff002e44ba80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d44c8, ffffff07233d44b8) taskq_thread_wait+0xbe(ffffff07233d4498, ffffff07233d44b8, ffffff07233d44c8 , ffffff002e44bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d4498) thread_start+8() ffffff002e469c40 fffffffffbc2ea80 0 0 60 ffffff07233d43b0 PC: _resume_from_idle+0xf4 TASKQ: ibmf_taskq stack pointer for thread ffffff002e469c40: ffffff002e469a80 [ ffffff002e469a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d43b0, ffffff07233d43a0) taskq_thread_wait+0xbe(ffffff07233d4380, ffffff07233d43a0, ffffff07233d43b0 , ffffff002e469bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d4380) thread_start+8() ffffff002f4bfc40 fffffffffbc2ea80 0 0 60 ffffffffc00f5e88 PC: _resume_from_idle+0xf4 TASKQ: STMF_SVC_TASKQ stack pointer for thread ffffff002f4bfc40: ffffff002f4bf9a0 [ ffffff002f4bf9a0 
_resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_hires+0xec(ffffffffc00f5e88, ffffffffc00f5e80, 1312d00, 989680 , 0) cv_reltimedwait+0x51(ffffffffc00f5e88, ffffffffc00f5e80, 2, 4) stmf_svc_timeout+0x112(ffffff002f4bfb00) stmf_svc+0x1c0(0) taskq_thread+0x2d0(ffffff07235435b8) thread_start+8() ffffff002f5b4c40 fffffffffbc2ea80 0 0 60 ffffffffc0195fc2 PC: _resume_from_idle+0xf4 THREAD: ibcm_process_tlist() stack pointer for thread ffffff002f5b4c40: ffffff002f5b4b50 [ ffffff002f5b4b50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc0195fc2, ffffffffc01962e8) ibcm_process_tlist+0x1e1() thread_start+8() ffffff002e475c40 fffffffffbc2ea80 0 0 60 fffffffffbcca820 PC: _resume_from_idle+0xf4 THREAD: task_commit() stack pointer for thread ffffff002e475c40: ffffff002e475b60 [ ffffff002e475b60 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(fffffffffbcca820, fffffffffbcca818) task_commit+0xd9() thread_start+8() ffffff002e47bc40 fffffffffbc2ea80 0 0 60 ffffff06f13fbea0 PC: _resume_from_idle+0xf4 THREAD: evch_delivery_thr() stack pointer for thread ffffff002e47bc40: ffffff002e47ba90 [ ffffff002e47ba90 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f13fbea0, ffffff06f13fbe98) evch_delivery_hold+0x70(ffffff06f13fbe78, ffffff002e47bbc0) evch_delivery_thr+0x29e(ffffff06f13fbe78) thread_start+8() ffffff002e481c40 fffffffffbc2ea80 0 0 60 ffffff06f13fbe30 PC: _resume_from_idle+0xf4 THREAD: evch_delivery_thr() stack pointer for thread ffffff002e481c40: ffffff002e481a90 [ ffffff002e481a90 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff06f13fbe30, ffffff06f13fbe28) evch_delivery_hold+0x70(ffffff06f13fbe08, ffffff002e481bc0) evch_delivery_thr+0x29e(ffffff06f13fbe08) thread_start+8() ffffff002e487c40 fffffffffbc2ea80 0 0 109 0 PC: _resume_from_idle+0xf4 THREAD: cpu_pause() stack pointer for thread ffffff002e487c40: ffffff002e487bb0 [ ffffff002e487bb0 _resume_from_idle+0xf4() ] swtch+0x141() cpu_pause+0x80(0) thread_start+8() 
ffffff002e4dbc40 fffffffffbc2ea80 0 0 99 ffffff0722feae10 PC: _resume_from_idle+0xf4 THREAD: squeue_worker() stack pointer for thread ffffff002e4dbc40: ffffff002e4dbb40 [ ffffff002e4dbb40 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0722feae10, ffffff0722feadd0) squeue_worker+0x104(ffffff0722feadc0) thread_start+8() ffffff002e4e1c40 fffffffffbc2ea80 0 0 99 ffffff0722feae12 PC: _resume_from_idle+0xf4 THREAD: squeue_polling_thread() stack pointer for thread ffffff002e4e1c40: ffffff002e4e1b00 [ ffffff002e4e1b00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0722feae12, ffffff0722feadd0) squeue_polling_thread+0xa9(ffffff0722feadc0) thread_start+8() ffffff002e59ac40 fffffffffbc2ea80 0 0 60 ffffff07233d4298 PC: _resume_from_idle+0xf4 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff002e59ac40: ffffff002e59aa80 [ ffffff002e59aa80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d4298, ffffff07233d4288) taskq_thread_wait+0xbe(ffffff07233d4268, ffffff07233d4288, ffffff07233d4298 , ffffff002e59abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d4268) thread_start+8() ffffff002e493c40 fffffffffbc2ea80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff002e493c40: ffffff002e4936a0 0xffffff072349e500() do_splx+0x65(b) xc_common+0x221(20, 0, ffffff072349e500, 0, fffffffff791891a, ffffff002e493900) apic_setspl+0x5a(2) 0x10() 0xf() acpi_cpu_cstate+0x11b(ffffff0723378630) cpu_acpi_idle+0x8d() cpu_idle_adaptive+0x13() idle+0xa7() thread_start+8() ffffff002e4d5c40 fffffffffbc2ea80 0 0 109 0 PC: _resume_from_idle+0xf4 THREAD: cpu_pause() stack pointer for thread ffffff002e4d5c40: ffffff002e4d5bb0 [ ffffff002e4d5bb0 _resume_from_idle+0xf4() ] swtch+0x141() cpu_pause+0x80(1) thread_start+8() ffffff002e583c40 fffffffffbc2ea80 0 0 99 ffffff0722fead50 PC: _resume_from_idle+0xf4 THREAD: squeue_worker() stack pointer for thread ffffff002e583c40: ffffff002e583b40 [ ffffff002e583b40 _resume_from_idle+0xf4() ] 
swtch+0x141() cv_wait+0x70(ffffff0722fead50, ffffff0722fead10) squeue_worker+0x104(ffffff0722fead00) thread_start+8() ffffff002e589c40 fffffffffbc2ea80 0 0 99 ffffff0722fead52 PC: _resume_from_idle+0xf4 THREAD: squeue_polling_thread() stack pointer for thread ffffff002e589c40: ffffff002e589b00 [ ffffff002e589b00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0722fead52, ffffff0722fead10) squeue_polling_thread+0xa9(ffffff0722fead00) thread_start+8() ffffff002e63cc40 fffffffffbc2ea80 0 0 60 ffffff07233d4180 PC: _resume_from_idle+0xf4 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff002e63cc40: ffffff002e63ca80 [ ffffff002e63ca80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d4180, ffffff07233d4170) taskq_thread_wait+0xbe(ffffff07233d4150, ffffff07233d4170, ffffff07233d4180 , ffffff002e63cbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d4150) thread_start+8() ffffff002e50ec40 fffffffffbc2ea80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff002e50ec40: ffffff002e50e660 apic_intr_exit+0x45(a0, f0) apic_intr_exit+0x45(b, f0) hilevel_intr_epilog+0xc8(ffffff072349d000, f, b, f0) do_interrupt+0xff(ffffff002e50e7b0, b) _sys_rtt_ints_disabled+8() splr+0x6a(a0) apic_setspl+0x5a(90) apic_setspl+0x5a(a) 0x10() 0xf() acpi_cpu_cstate+0x11b(ffffff0723378510) cpu_acpi_idle+0x8d() cpu_idle_adaptive+0x13() idle+0xa7() thread_start+8() ffffff002e550c40 fffffffffbc2ea80 0 0 109 0 PC: _resume_from_idle+0xf4 THREAD: cpu_pause() stack pointer for thread ffffff002e550c40: ffffff002e550bb0 [ ffffff002e550bb0 _resume_from_idle+0xf4() ] swtch+0x141() cpu_pause+0x80(2) thread_start+8() ffffff002e625c40 fffffffffbc2ea80 0 0 99 ffffff0722feac90 PC: _resume_from_idle+0xf4 THREAD: squeue_worker() stack pointer for thread ffffff002e625c40: ffffff002e625b40 [ ffffff002e625b40 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0722feac90, ffffff0722feac50) squeue_worker+0x104(ffffff0722feac40) thread_start+8() 
ffffff002e62bc40 fffffffffbc2ea80 0 0 99 ffffff0722feac92 PC: _resume_from_idle+0xf4 THREAD: squeue_polling_thread() stack pointer for thread ffffff002e62bc40: ffffff002e62bb00 [ ffffff002e62bb00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0722feac92, ffffff0722feac50) squeue_polling_thread+0xa9(ffffff0722feac40) thread_start+8() ffffff002e654c40 fffffffffbc2ea80 0 0 60 ffffff07233d4068 PC: _resume_from_idle+0xf4 TASKQ: cpudrv_cpudrv_monitor stack pointer for thread ffffff002e654c40: ffffff002e654a80 [ ffffff002e654a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07233d4068, ffffff07233d4058) taskq_thread_wait+0xbe(ffffff07233d4038, ffffff07233d4058, ffffff07233d4068 , ffffff002e654bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07233d4038) thread_start+8() ffffff002e5a6c40 fffffffffbc2ea80 0 0 -1 0 PC: panic_idle+0x20 THREAD: idle() stack pointer for thread ffffff002e5a6c40: ffffff002e5a67f0 xc_serv+0x247(b, 0) xc_common+0x221(20, 0, ffffff0723497ac0, 0, fffffffff791891a, ffffff002e5a6900) apic_setspl+0x5a(2) dosoftint_prolog+0x9d(ffffff0723497ac0, ffffff002e5a6a50, fffffffffb82953d, ffffff002e5a69e0) 0xf() acpi_cpu_cstate+0x11b(ffffff0723378330) cpu_acpi_idle+0x8d() cpu_idle_adaptive+0x13() idle+0xa7() thread_start+8() ffffff002e5e8c40 fffffffffbc2ea80 0 0 109 0 PC: _resume_from_idle+0xf4 THREAD: cpu_pause() stack pointer for thread ffffff002e5e8c40: ffffff002e5e8bb0 [ ffffff002e5e8bb0 _resume_from_idle+0xf4() ] swtch+0x141() cpu_pause+0x80(3) thread_start+8() ffffff002e5a0c40 fffffffffbc2ea80 0 0 99 ffffff0723543ea8 PC: _resume_from_idle+0xf4 TASKQ: callout_taskq stack pointer for thread ffffff002e5a0c40: ffffff002e5a0a80 [ ffffff002e5a0a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543ea8, ffffff0723543e98) taskq_thread_wait+0xbe(ffffff0723543e78, ffffff0723543e98, ffffff0723543ea8 , ffffff002e5a0bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0723543e78) thread_start+8() ffffff002e508c40 fffffffffbc2ea80 0 0 
99 ffffff0723543ea8 PC: _resume_from_idle+0xf4 TASKQ: callout_taskq stack pointer for thread ffffff002e508c40: ffffff002e508a80 [ ffffff002e508a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543ea8, ffffff0723543e98) taskq_thread_wait+0xbe(ffffff0723543e78, ffffff0723543e98, ffffff0723543ea8 , ffffff002e508bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0723543e78) thread_start+8() ffffff002e6dec40 fffffffffbc2ea80 0 0 99 ffffff0723543d90 PC: _resume_from_idle+0xf4 TASKQ: callout_taskq stack pointer for thread ffffff002e6dec40: ffffff002e6dea80 [ ffffff002e6dea80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543d90, ffffff0723543d80) taskq_thread_wait+0xbe(ffffff0723543d60, ffffff0723543d80, ffffff0723543d90 , ffffff002e6debc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0723543d60) thread_start+8() ffffff002e57dc40 fffffffffbc2ea80 0 0 99 ffffff0723543d90 PC: _resume_from_idle+0xf4 TASKQ: callout_taskq stack pointer for thread ffffff002e57dc40: ffffff002e57da80 [ ffffff002e57da80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543d90, ffffff0723543d80) taskq_thread_wait+0xbe(ffffff0723543d60, ffffff0723543d80, ffffff0723543d90 , ffffff002e57dbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0723543d60) thread_start+8() ffffff002e6eac40 fffffffffbc2ea80 0 0 99 ffffff0723543c78 PC: _resume_from_idle+0xf4 TASKQ: callout_taskq stack pointer for thread ffffff002e6eac40: ffffff002e6eaa80 [ ffffff002e6eaa80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543c78, ffffff0723543c68) taskq_thread_wait+0xbe(ffffff0723543c48, ffffff0723543c68, ffffff0723543c78 , ffffff002e6eabc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0723543c48) thread_start+8() ffffff002e6e4c40 fffffffffbc2ea80 0 0 99 ffffff0723543c78 PC: _resume_from_idle+0xf4 TASKQ: callout_taskq stack pointer for thread ffffff002e6e4c40: ffffff002e6e4a80 [ ffffff002e6e4a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543c78, 
ffffff0723543c68) taskq_thread_wait+0xbe(ffffff0723543c48, ffffff0723543c68, ffffff0723543c78 , ffffff002e6e4bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0723543c48) thread_start+8() ffffff002f260c40 fffffffffbc2ea80 0 0 98 ffffff0723543818 PC: _resume_from_idle+0xf4 TASKQ: console_taskq stack pointer for thread ffffff002f260c40: ffffff002f260a80 [ ffffff002f260a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543818, ffffff0723543808) taskq_thread_wait+0xbe(ffffff07235437e8, ffffff0723543808, ffffff0723543818 , ffffff002f260bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07235437e8) thread_start+8() ffffff0723f9cb40 ffffff0723cfb040 ffffff06f1345ac0 1 59 ffffff0723cfb4b8 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff0723f9cb40: ffffff002e48dd80 [ ffffff002e48dd80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig+0x185(ffffff0723cfb4b8, fffffffffbd11010) door_unref+0x94() doorfs32+0xf5(0, 0, 0, 0, 0, 8) sys_syscall32+0xff() ffffff07246daba0 ffffff0723cfb040 ffffff06fe2400c0 1 59 ffffff07246dad8e PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07246daba0: ffffff002e6ccc60 [ ffffff002e6ccc60 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff07246dad8e, ffffff07246dad90, 2540bd574, 1, 4) cv_waituntil_sig+0xfa(ffffff07246dad8e, ffffff07246dad90, ffffff002e6cce10, 3) lwp_park+0x15e(fe270f18, 0) syslwp_park+0x63(0, fe270f18, 0) sys_syscall32+0xff() ffffff07242b1860 ffffff0723cfb040 ffffff06fe23a300 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07242b1860: ffffff002ee2cd20 [ ffffff002ee2cd20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0730c5f100) shuttle_resume+0x2af(ffffff0730c5f100, fffffffffbd11010) door_return+0x3e0(fde74db4, 4, 0, 0, fde74e00, f5f00) doorfs32+0x180(fde74db4, 4, 0, fde74e00, f5f00, a) sys_syscall32+0xff() ffffff0724235be0 ffffff0723cfb040 ffffff072a83f200 1 59 0 PC: 
_resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff0724235be0: ffffff002f31ad20 [ ffffff002f31ad20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072428f0c0) shuttle_resume+0x2af(ffffff072428f0c0, fffffffffbd11010) door_return+0x3e0(fc294db4, 4, 0, 0, fc294e00, f5f00) doorfs32+0x180(fc294db4, 4, 0, fc294e00, f5f00, a) sys_syscall32+0xff() ffffff072441e740 ffffff0723cfb040 ffffff06fe277200 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff072441e740: ffffff002e67ed20 [ ffffff002e67ed20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072428f460) shuttle_resume+0x2af(ffffff072428f460, fffffffffbd11010) door_return+0x3e0(fc591db4, 4, 0, 0, fc591e00, f5f00) doorfs32+0x180(fc591db4, 4, 0, fc591e00, f5f00, a) sys_syscall32+0xff() ffffff0724229500 ffffff0723cfb040 ffffff072a78fe00 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff0724229500: ffffff002f32cd20 [ ffffff002f32cd20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07247ee780) shuttle_resume+0x2af(ffffff07247ee780, fffffffffbd11010) door_return+0x3e0(fbf97dac, 4, 0, 0, fbf97e00, f5f00) doorfs32+0x180(fbf97dac, 4, 0, fbf97e00, f5f00, a) sys_syscall32+0xff() ffffff072441eae0 ffffff0723cfb040 ffffff072a847180 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff072441eae0: ffffff002f326d20 [ ffffff002f326d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07247ee780) shuttle_resume+0x2af(ffffff07247ee780, fffffffffbd11010) door_return+0x3e0(fc096db0, 4, 0, 0, fc096e00, f5f00) doorfs32+0x180(fc096db0, 4, 0, fc096e00, f5f00, a) sys_syscall32+0xff() ffffff07242b14c0 ffffff0723cfb040 ffffff072a83e400 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07242b14c0: ffffff002f302d20 [ ffffff002f302d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072fbec140) shuttle_resume+0x2af(ffffff072fbec140, fffffffffbd11010) 
door_return+0x3e0(fca0fdb8, 4, 0, 0, fca0fe00, f5f00) doorfs32+0x180(fca0fdb8, 4, 0, fca0fe00, f5f00, a) sys_syscall32+0xff() ffffff07241eb480 ffffff0723cfb040 ffffff072a794d00 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07241eb480: ffffff002f2f6d20 [ ffffff002f2f6d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0724281ae0) shuttle_resume+0x2af(ffffff0724281ae0, fffffffffbd11010) door_return+0x3e0(fcc3bdb4, 4, 0, 0, fcc3be00, f5f00) doorfs32+0x180(fcc3bdb4, 4, 0, fcc3be00, f5f00, a) sys_syscall32+0xff() ffffff07242bb4a0 ffffff0723cfb040 ffffff06fe280740 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07242bb4a0: ffffff002ee44d20 [ ffffff002ee44d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07242a3b60) shuttle_resume+0x2af(ffffff07242a3b60, fffffffffbd11010) door_return+0x3e0(fdb77db4, 4, 0, 0, fdb77e00, f5f00) doorfs32+0x180(fdb77db4, 4, 0, fdb77e00, f5f00, a) sys_syscall32+0xff() ffffff07242bb840 ffffff0723cfb040 ffffff06fe280e40 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07242bb840: ffffff002f2a8d20 [ ffffff002f2a8d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072428f800) shuttle_resume+0x2af(ffffff072428f800, fffffffffbd11010) door_return+0x3e0(fd631ce0, e, 0, 0, fd631e00, f5f00) doorfs32+0x180(fd631ce0, e, 0, fd631e00, f5f00, a) sys_syscall32+0xff() ffffff07241ebbc0 ffffff0723cfb040 ffffff06fe231c80 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07241ebbc0: ffffff002ee62d20 [ ffffff002ee62d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0730c8b3a0) shuttle_resume+0x2af(ffffff0730c8b3a0, fffffffffbd11010) door_return+0x3e0(fd730db4, 4, 0, 0, fd730e00, f5f00) doorfs32+0x180(fd730db4, 4, 0, fd730e00, f5f00, a) sys_syscall32+0xff() ffffff07242970a0 ffffff0723cfb040 ffffff072a793f00 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer 
for thread ffffff07242970a0: ffffff002ee56d20 [ ffffff002ee56d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0725124b00) shuttle_resume+0x2af(ffffff0725124b00, fffffffffbd11010) door_return+0x3e0(fd92edb8, 4, 0, 0, fd92ee00, f5f00) doorfs32+0x180(fd92edb8, 4, 0, fd92ee00, f5f00, a) sys_syscall32+0xff() ffffff07241eb820 ffffff0723cfb040 ffffff06fe282a40 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07241eb820: ffffff002f2ccd20 [ ffffff002f2ccd20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0730c5d120) shuttle_resume+0x2af(ffffff0730c5d120, fffffffffbd11010) door_return+0x3e0(fd334db4, 4, 0, 0, fd334e00, f5f00) doorfs32+0x180(fd334db4, 4, 0, fd334e00, f5f00, a) sys_syscall32+0xff() ffffff07242f8500 ffffff0723cfb040 ffffff06fe281540 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07242f8500: ffffff002ef0ad20 [ ffffff002ef0ad20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0724775140) shuttle_resume+0x2af(ffffff0724775140, fffffffffbd11010) door_return+0x3e0(fc690db8, 4, 0, 0, fc690e00, f5f00) doorfs32+0x180(fc690db8, 4, 0, fc690e00, f5f00, a) sys_syscall32+0xff() ffffff0723ba93e0 ffffff0723cfb040 ffffff072a8423c0 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff0723ba93e0: ffffff002f320d50 [ ffffff002f320d50 _resume_from_idle+0xf4() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd11010) door_return+0x214(0, 0, 0, 0, fc195e00, f5f00) doorfs32+0x180(0, 0, 0, fc195e00, f5f00, a) sys_syscall32+0xff() ffffff072441e3a0 ffffff0723cfb040 ffffff072a845580 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff072441e3a0: ffffff002f314d20 [ ffffff002f314d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07242a37c0) shuttle_resume+0x2af(ffffff07242a37c0, fffffffffbd11010) door_return+0x3e0(fc393db4, 4, 0, 0, fc393e00, f5f00) doorfs32+0x180(fc393db4, 4, 0, fc393e00, f5f00, a) sys_syscall32+0xff() 
ffffff07242fec20 ffffff0723cfb040 ffffff072a83c800 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07242fec20: ffffff002f30ed20 [ ffffff002f30ed20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072428f800) shuttle_resume+0x2af(ffffff072428f800, fffffffffbd11010) door_return+0x3e0(fc492ce0, e, 0, 0, fc492e00, f5f00) doorfs32+0x180(fc492ce0, e, 0, fc492e00, f5f00, a) sys_syscall32+0xff() ffffff07242fe4e0 ffffff0723cfb040 ffffff072a8431c0 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07242fe4e0: ffffff002f308d20 [ ffffff002f308d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072428f800) shuttle_resume+0x2af(ffffff072428f800, fffffffffbd11010) door_return+0x3e0(fc78fce0, e, 0, 0, fc78fe00, f5f00) doorfs32+0x180(fc78fce0, e, 0, fc78fe00, f5f00, a) sys_syscall32+0xff() ffffff072353e3c0 ffffff0723cfb040 ffffff06fe27a3c0 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff072353e3c0: ffffff002f2f0d20 [ ffffff002f2f0d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07242977e0) shuttle_resume+0x2af(ffffff07242977e0, fffffffffbd11010) door_return+0x3e0(fcd3adb4, 4, 0, 0, fcd3ae00, f5f00) doorfs32+0x180(fcd3adb4, 4, 0, fcd3ae00, f5f00, a) sys_syscall32+0xff() ffffff072353e020 ffffff0723cfb040 ffffff06fe27f880 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff072353e020: ffffff002f2ded20 [ ffffff002f2ded20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072428f800) shuttle_resume+0x2af(ffffff072428f800, fffffffffbd11010) door_return+0x3e0(fd037ce0, e, 0, 0, fd037e00, f5f00) doorfs32+0x180(fd037ce0, e, 0, fd037e00, f5f00, a) sys_syscall32+0xff() ffffff0724235840 ffffff0723cfb040 ffffff06fe232a80 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff0724235840: ffffff002e6a2d20 [ ffffff002e6a2d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072428f800) 
shuttle_resume+0x2af(ffffff072428f800, fffffffffbd11010) door_return+0x3e0(fc88ece0, e, 0, 0, fc88ee00, f5f00) doorfs32+0x180(fc88ece0, e, 0, fc88ee00, f5f00, a) sys_syscall32+0xff() ffffff07240aeb60 ffffff0723cfb040 ffffff072a795b00 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07240aeb60: ffffff002f2fcd20 [ ffffff002f2fcd20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0730c64bc0) shuttle_resume+0x2af(ffffff0730c64bc0, fffffffffbd11010) door_return+0x3e0(fcb0edb8, 4, 0, 0, fcb0ee00, f5f00) doorfs32+0x180(fcb0edb8, 4, 0, fcb0ee00, f5f00, a) sys_syscall32+0xff() ffffff07247df400 ffffff0723cfb040 ffffff06fe245c80 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07247df400: ffffff002f2ead20 [ ffffff002f2ead20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072426c060) shuttle_resume+0x2af(ffffff072426c060, fffffffffbd11010) door_return+0x3e0(fce39db4, 4, 0, 0, fce39e00, f5f00) doorfs32+0x180(fce39db4, 4, 0, fce39e00, f5f00, a) sys_syscall32+0xff() ffffff07241ec0c0 ffffff0723cfb040 ffffff072a83f900 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07241ec0c0: ffffff002f2d2d20 [ ffffff002f2d2d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07247a6500) shuttle_resume+0x2af(ffffff07247a6500, fffffffffbd11010) door_return+0x3e0(fd235db8, 4, 0, 0, fd235e00, f5f00) doorfs32+0x180(fd235db8, 4, 0, fd235e00, f5f00, a) sys_syscall32+0xff() ffffff07241f60a0 ffffff0723cfb040 ffffff06fe2795c0 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07241f60a0: ffffff002f2c6d20 [ ffffff002f2c6d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0730c8b3a0) shuttle_resume+0x2af(ffffff0730c8b3a0, fffffffffbd11010) door_return+0x3e0(fd433db8, 4, 0, 0, fd433e00, f5f00) doorfs32+0x180(fd433db8, 4, 0, fd433e00, f5f00, a) sys_syscall32+0xff() ffffff0723f9c060 ffffff0723cfb040 ffffff072a83d600 1 59 0 PC: 
_resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff0723f9c060: ffffff002f2e4d20 [ ffffff002f2e4d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff072428f0c0) shuttle_resume+0x2af(ffffff072428f0c0, fffffffffbd11010) door_return+0x3e0(fcf38d38, 4, 0, 0, fcf38e00, f5f00) doorfs32+0x180(fcf38d38, 4, 0, fcf38e00, f5f00, a) sys_syscall32+0xff() ffffff07241ecba0 ffffff0723cfb040 ffffff072a793100 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07241ecba0: ffffff002f2d8d20 [ ffffff002f2d8d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0730c54160) shuttle_resume+0x2af(ffffff0730c54160, fffffffffbd11010) door_return+0x3e0(fd136db4, 4, 0, 0, fd136e00, f5f00) doorfs32+0x180(fd136db4, 4, 0, fd136e00, f5f00, a) sys_syscall32+0xff() ffffff07247f3ae0 ffffff0723cfb040 ffffff06fe275d00 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07247f3ae0: ffffff002f2c0d20 [ ffffff002f2c0d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0725124760) shuttle_resume+0x2af(ffffff0725124760, fffffffffbd11010) door_return+0x3e0(fd532db8, 4, 0, 0, fd532e00, f5f00) doorfs32+0x180(fd532db8, 4, 0, fd532e00, f5f00, a) sys_syscall32+0xff() ffffff07242b1c00 ffffff0723cfb040 ffffff06fe27aac0 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff07242b1c00: ffffff002ee5cd20 [ ffffff002ee5cd20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0730c5d4c0) shuttle_resume+0x2af(ffffff0730c5d4c0, fffffffffbd11010) door_return+0x3e0(fd82fdb4, 4, 0, 0, fd82fe00, f5f00) doorfs32+0x180(fd82fdb4, 4, 0, fd82fe00, f5f00, a) sys_syscall32+0xff() ffffff0724286820 ffffff0723cfb040 ffffff06fe275600 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/bin/svc.configd stack pointer for thread ffffff0724286820: ffffff002ee38d20 [ ffffff002ee38d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07247f33a0) shuttle_resume+0x2af(ffffff07247f33a0, fffffffffbd11010) 
    door_return+0x3e0(fdd75db8, 4, 0, 0, fdd75e00, f5f00)
    doorfs32+0x180(fdd75db8, 4, 0, fdd75e00, f5f00, a)
    sys_syscall32+0xff()

ffffff0724281740 ffffff0723cfb040 ffffff06fe22c0c0   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff0724281740: ffffff002ee26d20
  [ ffffff002ee26d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff072465d060)
    shuttle_resume+0x2af(ffffff072465d060, fffffffffbd11010)
    door_return+0x3e0(fe072db8, 4, 0, 0, fe072e00, f5f00)
    doorfs32+0x180(fe072db8, 4, 0, fe072e00, f5f00, a)
    sys_syscall32+0xff()

ffffff07242f8160 ffffff0723cfb040 ffffff06fe2407c0   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff07242f8160: ffffff002e6a8d20
  [ ffffff002e6a8d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff073151dc60)
    shuttle_resume+0x2af(ffffff073151dc60, fffffffffbd11010)
    door_return+0x3e0(fe591ce0, c, 0, 0, fe591e00, f5f00)
    doorfs32+0x180(fe591ce0, c, 0, fe591e00, f5f00, a)
    sys_syscall32+0xff()

ffffff072464c7c0 ffffff0723cfb040 ffffff06fe234e40   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff072464c7c0: ffffff002f4aad20
  [ ffffff002f4aad20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff072473fc00)
    shuttle_resume+0x2af(ffffff072473fc00, fffffffffbd11010)
    door_return+0x3e0(fe36fdb8, 4, 0, 0, fe36fe00, f5f00)
    doorfs32+0x180(fe36fdb8, 4, 0, fe36fe00, f5f00, a)
    sys_syscall32+0xff()

ffffff0724229c40 ffffff0723cfb040 ffffff06fe220500   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff0724229c40: ffffff002e71ad20
  [ ffffff002e71ad20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07246d2480)
    shuttle_resume+0x2af(ffffff07246d2480, fffffffffbd11010)
    door_return+0x3e0(fe690db8, 4, 0, 0, fe690e00, f5f00)
    doorfs32+0x180(fe690db8, 4, 0, fe690e00, f5f00, a)
    sys_syscall32+0xff()

ffffff072422e120 ffffff0723cfb040 ffffff06fe224d00   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff072422e120: ffffff002f556d20
  [ ffffff002f556d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff0724951b60)
    shuttle_resume+0x2af(ffffff0724951b60, fffffffffbd11010)
    door_return+0x3e0(fe88edb8, 4, 0, 0, fe88ee00, f5f00)
    doorfs32+0x180(fe88edb8, 4, 0, fe88ee00, f5f00, a)
    sys_syscall32+0xff()

ffffff0723f9c400 ffffff0723cfb040 ffffff06f13437c0   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff0723f9c400: ffffff002f54ad20
  [ ffffff002f54ad20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff0730c5cc20)
    shuttle_resume+0x2af(ffffff0730c5cc20, fffffffffbd11010)
    door_return+0x3e0(fea7edb4, 4, 0, 0, fea7ee00, f5f00)
    doorfs32+0x180(fea7edb4, 4, 0, fea7ee00, f5f00, a)
    sys_syscall32+0xff()

ffffff072422bc20 ffffff0723cfb040 ffffff06fe225400   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff072422bc20: ffffff002e556d20
  [ ffffff002e556d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07242f3180)
    shuttle_resume+0x2af(ffffff07242f3180, fffffffffbd11010)
    door_return+0x3e0(fe78fdb8, 4, 0, 0, fe78fe00, f5f00)
    doorfs32+0x180(fe78fdb8, 4, 0, fe78fe00, f5f00, a)
    sys_syscall32+0xff()

ffffff072464cb60 ffffff0723cfb040 ffffff06fe208c40   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff072464cb60: ffffff002e6c6d20
  [ ffffff002e6c6d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff0730c5f4a0)
    shuttle_resume+0x2af(ffffff0730c5f4a0, fffffffffbd11010)
    door_return+0x3e0(fe46edb4, 4, 0, 0, fe46ee00, f5f00)
    doorfs32+0x180(fe46edb4, 4, 0, fe46ee00, f5f00, a)
    sys_syscall32+0xff()

ffffff07247f3000 ffffff0723cfb040 ffffff06fe276b00   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff07247f3000: ffffff002f44fd20
  [ ffffff002f44fd20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07246d20e0)
    shuttle_resume+0x2af(ffffff07246d20e0, fffffffffbd11010)
    door_return+0x3e0(fe171db8, 4, 0, 0, fe171e00, f5f00)
    doorfs32+0x180(fe171db8, 4, 0, fe171e00, f5f00, a)
    sys_syscall32+0xff()

ffffff07241ec800 ffffff0723cfb040 ffffff06fe282340   1  59 ffffff07241ec9ee
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff07241ec800: ffffff002ee4ac50
  [ ffffff002ee4ac50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff07241ec9ee, ffffff07241ec9f0, 0)
    cv_wait_sig_swap+0x17(ffffff07241ec9ee, ffffff07241ec9f0)
    cv_waituntil_sig+0xbd(ffffff07241ec9ee, ffffff07241ec9f0, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    sys_syscall32+0xff()

ffffff072426cb40 ffffff0723cfb040 ffffff072a847880   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff072426cb40: ffffff002ee3ed20
  [ ffffff002ee3ed20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff0730c5f840)
    shuttle_resume+0x2af(ffffff0730c5f840, fffffffffbd11010)
    door_return+0x3e0(fdc76db4, 4, 0, 0, fdc76e00, f5f00)
    doorfs32+0x180(fdc76db4, 4, 0, fdc76e00, f5f00, a)
    sys_syscall32+0xff()

ffffff0724286bc0 ffffff0723cfb040 ffffff06fe286300   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff0724286bc0: ffffff002ee32d20
  [ ffffff002ee32d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff0730c5dc00)
    shuttle_resume+0x2af(ffffff0730c5dc00, fffffffffbd11010)
    door_return+0x3e0(fdf73db4, 4, 0, 0, fdf73e00, f5f00)
    doorfs32+0x180(fdf73db4, 4, 0, fdf73e00, f5f00, a)
    sys_syscall32+0xff()

ffffff0723f9c7a0 ffffff0723cfb040 ffffff06fe226900   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff0723f9c7a0: ffffff002f0f2d20
  [ ffffff002f0f2d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff072353e760)
    shuttle_resume+0x2af(ffffff072353e760, fffffffffbd11010)
    door_return+0x3e0(fec9fdb8, 4, 0, 0, fec9fe00, f5f00)
    doorfs32+0x180(fec9fdb8, 4, 0, fec9fe00, f5f00, a)
    sys_syscall32+0xff()

ffffff0723ba9040 ffffff0723cfb040 ffffff06f13430c0   1  59 ffffff0723ba922e
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.configd
  stack pointer for thread ffffff0723ba9040: ffffff002f5aec40
  [ ffffff002f5aec40 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0723ba922e, ffffff06f6a8e200, 0)
    cv_wait_sig_swap+0x17(ffffff0723ba922e, ffffff06f6a8e200)
    cv_waituntil_sig+0xbd(ffffff0723ba922e, ffffff06f6a8e200, 0, 0)
    sigtimedwait+0x19c(8047e4c, 8046d30, 0)
    sys_syscall32+0xff()

ffffff0723ba9780 ffffff0723aab038 ffffff06f13461c0   1  59 ffffff06fe1fa574
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.startd -s
  stack pointer for thread ffffff0723ba9780: ffffff002ec88ae0
  [ ffffff002ec88ae0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig+0x185(ffffff06fe1fa574, ffffff06fe1e80b0)
    cte_get_event+0xb3(ffffff06fe1fa540, 0, 80bafa8, 0, 0, 1)
    ctfs_endpoint_ioctl+0xf9(ffffff06fe1fa538, 63746502, 80bafa8, ffffff06f69d5cf8, fffffffffbcefb80, 0)
    ctfs_bu_ioctl+0x4b(ffffff0723fa0900, 63746502, 80bafa8, 102001, ffffff06f69d5cf8, ffffff002ec88e68, 0)
    fop_ioctl+0x55(ffffff0723fa0900, 63746502, 80bafa8, 102001, ffffff06f69d5cf8, ffffff002ec88e68, 0)
    ioctl+0x9b(3, 63746502, 80bafa8)
    sys_syscall32+0xff()

ffffff07242354a0 ffffff0723aab038 ffffff06fe221a00   1  59 ffffff072423568e
  PC: _resume_from_idle+0xf4   CMD: /lib/svc/bin/svc.startd -s
  stack pointer for thread ffffff07242354a0: ffffff002f561c50
  [ ffffff002f561c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072423568e, ffffff0724235690, 0)
    cv_wait_sig_swap+0x17(ffffff072423568e, ffffff0724235690)
    cv_waituntil_sig+0xbd(ffffff072423568e, ffffff0724235690, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    sys_syscall32+0xff()

ffffff0724746100 ffffff07241f1060 ffffff06fe234740   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/inet/ipmgmtd
  stack pointer for thread ffffff0724746100: ffffff002f4f2d50
  [ ffffff002f4f2d50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    shuttle_swtch+0x203(fffffffffbd11010)
    door_return+0x214(0, 0, 0, 0, fe93fe00, f5f00)
    doorfs32+0x180(0, 0, 0, fe93fe00, f5f00, a)
    sys_syscall32+0xff()

ffffff07240ae420 ffffff07241f1060 ffffff06fe22c7c0   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/inet/ipmgmtd
  stack pointer for thread ffffff07240ae420: ffffff002f46dd20
  [ ffffff002f46dd20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07315587a0)
    shuttle_resume+0x2af(ffffff07315587a0, fffffffffbd11010)
    door_return+0x3e0(fea3e5d0, 304, 0, 0, fea3ee00, f5f00)
    doorfs32+0x180(fea3e5d0, 304, 0, fea3ee00, f5f00, a)
    sys_syscall32+0xff()

ffffff07246da0c0 ffffff07241f1060 ffffff06fe236340   1  59 ffffff07246da2ae
  PC: _resume_from_idle+0xf4   CMD: /lib/inet/ipmgmtd
  stack pointer for thread ffffff07246da0c0: ffffff002f60fdd0
  [ ffffff002f60fdd0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff07246da2ae, ffffff07246da2b0, 0)
    cv_wait_sig_swap+0x17(ffffff07246da2ae, ffffff07246da2b0)
    pause+0x45()
    sys_syscall32+0xff()

ffffff07241f6440 ffffff0724652088 ffffff06fe235c40   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /lib/inet/netcfgd
  stack pointer for thread ffffff07241f6440: ffffff002f36fd50
  [ ffffff002f36fd50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    shuttle_swtch+0x203(fffffffffbd11010)
    door_return+0x214(0, 0, 0, 0, fed8ee00, f5f00)
    doorfs32+0x180(0, 0, 0, fed8ee00, f5f00, a)
    sys_syscall32+0xff()

ffffff072465d400 ffffff0724652088 ffffff06fe238000   1  59 ffffff072465d5ee
  PC: _resume_from_idle+0xf4   CMD: /lib/inet/netcfgd
  stack pointer for thread ffffff072465d400: ffffff002f4b0dd0
  [ ffffff002f4b0dd0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072465d5ee, ffffff072465d5f0, 0)
    cv_wait_sig_swap+0x17(ffffff072465d5ee, ffffff072465d5f0)
    pause+0x45()
    sys_syscall32+0xff()

ffffff0724991740 ffffff072423c070 ffffff06fe287800   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /sbin/dlmgmtd
  stack pointer for thread ffffff0724991740: ffffff002efdfd50
  [ ffffff002efdfd50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    shuttle_swtch+0x203(fffffffffbd11010)
    door_return+0x214(0, 0, 0, 0, fe8afe00, f5f00)
    doorfs32+0x180(0, 0, 0, fe8afe00, f5f00, a)
    sys_syscall32+0xff()

ffffff0724991ae0 ffffff072423c070 ffffff06fe228100   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /sbin/dlmgmtd
  stack pointer for thread ffffff0724991ae0: ffffff002efd9d20
  [ ffffff002efd9d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07246ec440)
    shuttle_resume+0x2af(ffffff07246ec440, fffffffffbd11010)
    door_return+0x3e0(fe9ae960, 410, 0, 0, fe9aee00, f5f00)
    doorfs32+0x180(fe9ae960, 410, 0, fe9aee00, f5f00, a)
    sys_syscall32+0xff()

ffffff07249913a0 ffffff072423c070 ffffff06fe22b900   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /sbin/dlmgmtd
  stack pointer for thread ffffff07249913a0: ffffff002efe5d20
  [ ffffff002efe5d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07249378c0)
    shuttle_resume+0x2af(ffffff07249378c0, fffffffffbd11010)
    door_return+0x3e0(fe7b0960, 410, 0, 0, fe7b0e00, f5f00)
    doorfs32+0x180(fe7b0960, 410, 0, fe7b0e00, f5f00, a)
    sys_syscall32+0xff()

ffffff072473f4c0 ffffff072423c070 ffffff06fe22cec0   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /sbin/dlmgmtd
  stack pointer for thread ffffff072473f4c0: ffffff002f4fed20
  [ ffffff002f4fed20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07246ec440)
    shuttle_resume+0x2af(ffffff07246ec440, fffffffffbd11010)
    door_return+0x3e0(feb8e960, 410, 0, 0, feb8ee00, f5f00)
    doorfs32+0x180(feb8e960, 410, 0, feb8ee00, f5f00, a)
    sys_syscall32+0xff()

ffffff07246ec440 ffffff072423c070 ffffff06fe22f1c0   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /sbin/dlmgmtd
  stack pointer for thread ffffff07246ec440: ffffff002f387d20
  [ ffffff002f387d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07276517a0)
    shuttle_resume+0x2af(ffffff07276517a0, fffffffffbd11010)
    door_return+0x3e0(fecee960, 410, 0, 0, feceee00, f5f00)
    doorfs32+0x180(fecee960, 410, 0, feceee00, f5f00, a)
    sys_syscall32+0xff()

ffffff002e46fc40 fffffffffbc2ea80 0                   0  60 ffffffffc0196f28
  PC: _resume_from_idle+0xf4   THREAD: softmac_taskq_dispatch()
  stack pointer for thread ffffff002e46fc40: ffffff002e46fb60
  [ ffffff002e46fb60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffffffc0196f28, ffffffffc0196f20)
    softmac_taskq_dispatch+0x11d()
    thread_start+8()

ffffff072465d7a0 ffffff072423c070 ffffff06fe237140   1  59 ffffff072465d98e
  PC: _resume_from_idle+0xf4   CMD: /sbin/dlmgmtd
  stack pointer for thread ffffff072465d7a0: ffffff002f483dd0
  [ ffffff002f483dd0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072465d98e, ffffff072465d990, 0)
    cv_wait_sig_swap+0x17(ffffff072465d98e, ffffff072465d990)
    pause+0x45()
    sys_syscall32+0xff()

ffffff002ef8ac40 fffffffffbc2ea80 0                   0  60 ffffff07249bc7c0
  PC: _resume_from_idle+0xf4   THREAD: i_mac_notify_thread()
  stack pointer for thread ffffff002ef8ac40: ffffff002ef8ab00
  [ ffffff002ef8ab00 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07249bc7c0, ffffff07249bc7b0)
    i_mac_notify_thread+0xee(ffffff07249bc6c0)
    thread_start+8()

ffffff002f2bac40 fffffffffbc2ea80 0                   0  60 ffffff07249b9238
  PC: _resume_from_idle+0xf4   THREAD: i_mac_notify_thread()
  stack pointer for thread ffffff002f2bac40: ffffff002f2bab00
  [ ffffff002f2bab00 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07249b9238, ffffff07249b9228)
    i_mac_notify_thread+0xee(ffffff07249b9138)
    thread_start+8()

ffffff002edb7c40 fffffffffbc2ea80 0                   0  60 ffffff072a5b4600
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002edb7c40: ffffff002edb7a80
  [ ffffff002edb7a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4600, ffffff072a5b45f0)
    taskq_thread_wait+0xbe(ffffff072a5b45d0, ffffff072a5b45f0, ffffff072a5b4600, ffffff002edb7bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b45d0)
    thread_start+8()

ffffff002edc9c40 fffffffffbc2ea80 0                   0  60 ffffff072a5b42b8
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_mptsas_event_taskq
  stack pointer for thread ffffff002edc9c40: ffffff002edc9a80
  [ ffffff002edc9a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b42b8, ffffff072a5b42a8)
    taskq_thread_wait+0xbe(ffffff072a5b4288, ffffff072a5b42a8, ffffff072a5b42b8, ffffff002edc9bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4288)
    thread_start+8()

ffffff002f767c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1c98
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_mptsas_dr_taskq
  stack pointer for thread ffffff002f767c40: ffffff002f767a80
  [ ffffff002f767a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1c98, ffffff072a7e1c88)
    taskq_thread_wait+0xbe(ffffff072a7e1c68, ffffff072a7e1c88, ffffff072a7e1c98, ffffff002f767bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1c68)
    thread_start+8()

ffffff002f76dc40 fffffffffbc2ea80 0                   0  60 ffffff072a4f8018
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f76dc40: ffffff002f76db50
  [ ffffff002f76db50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a4f8018, ffffff072a4f8020)
    mptsas_doneq_thread+0x103(ffffff072a4f8030)
    thread_start+8()

ffffff002f77fc40 fffffffffbc2ea80 0                   0  60 ffffff072a4f8058
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f77fc40: ffffff002f77fb50
  [ ffffff002f77fb50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a4f8058, ffffff072a4f8060)
    mptsas_doneq_thread+0x103(ffffff072a4f8070)
    thread_start+8()

ffffff002f791c40 fffffffffbc2ea80 0                   0  60 ffffff072a4f8098
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f791c40: ffffff002f791b50
  [ ffffff002f791b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a4f8098, ffffff072a4f80a0)
    mptsas_doneq_thread+0x103(ffffff072a4f80b0)
    thread_start+8()

ffffff002f7a3c40 fffffffffbc2ea80 0                   0  60 ffffff072a4f80d8
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7a3c40: ffffff002f7a3b50
  [ ffffff002f7a3b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a4f80d8, ffffff072a4f80e0)
    mptsas_doneq_thread+0x103(ffffff072a4f80f0)
    thread_start+8()

ffffff002f7b5c40 fffffffffbc2ea80 0                   0  60 ffffff072a4f8118
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7b5c40: ffffff002f7b5b50
  [ ffffff002f7b5b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a4f8118, ffffff072a4f8120)
    mptsas_doneq_thread+0x103(ffffff072a4f8130)
    thread_start+8()

ffffff002f7c7c40 fffffffffbc2ea80 0                   0  60 ffffff072a4f8158
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7c7c40: ffffff002f7c7b50
  [ ffffff002f7c7b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a4f8158, ffffff072a4f8160)
    mptsas_doneq_thread+0x103(ffffff072a4f8170)
    thread_start+8()

ffffff002f7d9c40 fffffffffbc2ea80 0                   0  60 ffffff072a4f8198
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7d9c40: ffffff002f7d9b50
  [ ffffff002f7d9b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a4f8198, ffffff072a4f81a0)
    mptsas_doneq_thread+0x103(ffffff072a4f81b0)
    thread_start+8()

ffffff002f7ebc40 fffffffffbc2ea80 0                   0  60 ffffff072a4f81d8
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7ebc40: ffffff002f7ebb50
  [ ffffff002f7ebb50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a4f81d8, ffffff072a4f81e0)
    mptsas_doneq_thread+0x103(ffffff072a4f81f0)
    thread_start+8()

ffffff002ef04c40 fffffffffbc2ea80 0                   0  60 ffffff072ab14db8
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002ef04c40: ffffff002ef04a80
  [ ffffff002ef04a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14db8, ffffff072ab14da8)
    taskq_thread_wait+0xbe(ffffff072ab14d88, ffffff072ab14da8, ffffff072ab14db8, ffffff002ef04bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14d88)
    thread_start+8()

ffffff002e6bac40 fffffffffbc2ea80 0                   0  60 ffffff072ab14840
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002e6bac40: ffffff002e6baa80
  [ ffffff002e6baa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14840, ffffff072ab14830)
    taskq_thread_wait+0xbe(ffffff072ab14810, ffffff072ab14830, ffffff072ab14840, ffffff002e6babc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14810)
    thread_start+8()

ffffff002edbdc40 fffffffffbc2ea80 0                   0  60 ffffff072a5b44e8
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002edbdc40: ffffff002edbda80
  [ ffffff002edbda80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b44e8, ffffff072a5b44d8)
    taskq_thread_wait+0xbe(ffffff072a5b44b8, ffffff072a5b44d8, ffffff072a5b44e8, ffffff002edbdbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b44b8)
    thread_start+8()

ffffff002f74fc40 fffffffffbc2ea80 0                   0  60 ffffff072a5b41a0
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_mptsas_event_taskq
  stack pointer for thread ffffff002f74fc40: ffffff002f74fa80
  [ ffffff002f74fa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b41a0, ffffff072a5b4190)
    taskq_thread_wait+0xbe(ffffff072a5b4170, ffffff072a5b4190, ffffff072a5b41a0, ffffff002f74fbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4170)
    thread_start+8()

ffffff002f761c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1db0
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_mptsas_dr_taskq
  stack pointer for thread ffffff002f761c40: ffffff002f761a80
  [ ffffff002f761a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1db0, ffffff072a7e1da0)
    taskq_thread_wait+0xbe(ffffff072a7e1d80, ffffff072a7e1da0, ffffff072a7e1db0, ffffff002f761bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1d80)
    thread_start+8()

ffffff002f773c40 fffffffffbc2ea80 0                   0  60 ffffff0723ab5a18
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f773c40: ffffff002f773b50
  [ ffffff002f773b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723ab5a18, ffffff0723ab5a20)
    mptsas_doneq_thread+0x103(ffffff0723ab5a30)
    thread_start+8()

ffffff002f78bc40 fffffffffbc2ea80 0                   0  60 ffffff0723ab5a58
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f78bc40: ffffff002f78bb50
  [ ffffff002f78bb50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723ab5a58, ffffff0723ab5a60)
    mptsas_doneq_thread+0x103(ffffff0723ab5a70)
    thread_start+8()

ffffff002f79dc40 fffffffffbc2ea80 0                   0  60 ffffff0723ab5a98
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f79dc40: ffffff002f79db50
  [ ffffff002f79db50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723ab5a98, ffffff0723ab5aa0)
    mptsas_doneq_thread+0x103(ffffff0723ab5ab0)
    thread_start+8()

ffffff002f7afc40 fffffffffbc2ea80 0                   0  60 ffffff0723ab5ad8
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7afc40: ffffff002f7afb50
  [ ffffff002f7afb50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723ab5ad8, ffffff0723ab5ae0)
    mptsas_doneq_thread+0x103(ffffff0723ab5af0)
    thread_start+8()

ffffff002f7c1c40 fffffffffbc2ea80 0                   0  60 ffffff0723ab5b18
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7c1c40: ffffff002f7c1b50
  [ ffffff002f7c1b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723ab5b18, ffffff0723ab5b20)
    mptsas_doneq_thread+0x103(ffffff0723ab5b30)
    thread_start+8()

ffffff002f7d3c40 fffffffffbc2ea80 0                   0  60 ffffff0723ab5b58
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7d3c40: ffffff002f7d3b50
  [ ffffff002f7d3b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723ab5b58, ffffff0723ab5b60)
    mptsas_doneq_thread+0x103(ffffff0723ab5b70)
    thread_start+8()

ffffff002f7e5c40 fffffffffbc2ea80 0                   0  60 ffffff0723ab5b98
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7e5c40: ffffff002f7e5b50
  [ ffffff002f7e5b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723ab5b98, ffffff0723ab5ba0)
    mptsas_doneq_thread+0x103(ffffff0723ab5bb0)
    thread_start+8()

ffffff002f7f7c40 fffffffffbc2ea80 0                   0  60 ffffff0723ab5bd8
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7f7c40: ffffff002f7f7b50
  [ ffffff002f7f7b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0723ab5bd8, ffffff0723ab5be0)
    mptsas_doneq_thread+0x103(ffffff0723ab5bf0)
    thread_start+8()

ffffff002f803c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1a68
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f803c40: ffffff002f803a80
  [ ffffff002f803a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1a68, ffffff072a7e1a58)
    taskq_thread_wait+0xbe(ffffff072a7e1a38, ffffff072a7e1a58, ffffff072a7e1a68, ffffff002f803bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1a38)
    thread_start+8()

ffffff002edd8c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1838
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002edd8c40: ffffff002edd8a80
  [ ffffff002edd8a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1838, ffffff072a7e1828)
    taskq_thread_wait+0xbe(ffffff072a7e1808, ffffff072a7e1828, ffffff072a7e1838, ffffff002edd8bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1808)
    thread_start+8()

ffffff002f5f0c40 fffffffffbc2ea80 0                   0  60 ffffff072ab14958
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f5f0c40: ffffff002f5f0a80
  [ ffffff002f5f0a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14958, ffffff072ab14948)
    taskq_thread_wait+0xbe(ffffff072ab14928, ffffff072ab14948, ffffff072ab14958, ffffff002f5f0bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14928)
    thread_start+8()

ffffff002e6aec40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1090
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002e6aec40: ffffff002e6aea80
  [ ffffff002e6aea80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1090, ffffff072a7e1080)
    taskq_thread_wait+0xbe(ffffff072a7e1060, ffffff072a7e1080, ffffff072a7e1090, ffffff002e6aebc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1060)
    thread_start+8()

ffffff002ee9ac40 fffffffffbc2ea80 0                   0  60 ffffff072ab14ed0
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002ee9ac40: ffffff002ee9aa80
  [ ffffff002ee9aa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14ed0, ffffff072ab14ec0)
    taskq_thread_wait+0xbe(ffffff072ab14ea0, ffffff072ab14ec0, ffffff072ab14ed0, ffffff002ee9abc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14ea0)
    thread_start+8()

ffffff002ef5fc40 fffffffffbc2ea80 0                   0  60 ffffff072ab14ca0
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002ef5fc40: ffffff002ef5fa80
  [ ffffff002ef5fa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14ca0, ffffff072ab14c90)
    taskq_thread_wait+0xbe(ffffff072ab14c70, ffffff072ab14c90, ffffff072ab14ca0, ffffff002ef5fbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14c70)
    thread_start+8()

ffffff002edf6c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1608
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002edf6c40: ffffff002edf6a80
  [ ffffff002edf6a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1608, ffffff072a7e15f8)
    taskq_thread_wait+0xbe(ffffff072a7e15d8, ffffff072a7e15f8, ffffff072a7e1608, ffffff002edf6bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e15d8)
    thread_start+8()

ffffff002f634c40 fffffffffbc2ea80 0                   0  60 ffffff072ab14a70
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f634c40: ffffff002f634a80
  [ ffffff002f634a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14a70, ffffff072ab14a60)
    taskq_thread_wait+0xbe(ffffff072ab14a40, ffffff072ab14a60, ffffff072ab14a70, ffffff002f634bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14a40)
    thread_start+8()

ffffff002f38dc40 fffffffffbc2ea80 0                   0  60 ffffff072a7e14f0
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f38dc40: ffffff002f38da80
  [ ffffff002f38da80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e14f0, ffffff072a7e14e0)
    taskq_thread_wait+0xbe(ffffff072a7e14c0, ffffff072a7e14e0, ffffff072a7e14f0, ffffff002f38dbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e14c0)
    thread_start+8()

ffffff002f4f8c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e12c0
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f4f8c40: ffffff002f4f8a80
  [ ffffff002f4f8a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e12c0, ffffff072a7e12b0)
    taskq_thread_wait+0xbe(ffffff072a7e1290, ffffff072a7e12b0, ffffff072a7e12c0, ffffff002f4f8bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1290)
    thread_start+8()

ffffff002edc3c40 fffffffffbc2ea80 0                   0  60 ffffff072a5b43d0
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002edc3c40: ffffff002edc3a80
  [ ffffff002edc3a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b43d0, ffffff072a5b43c0)
    taskq_thread_wait+0xbe(ffffff072a5b43a0, ffffff072a5b43c0, ffffff072a5b43d0, ffffff002edc3bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b43a0)
    thread_start+8()

ffffff002f755c40 fffffffffbc2ea80 0                   0  60 ffffff072a5b4088
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_mptsas_event_taskq
  stack pointer for thread ffffff002f755c40: ffffff002f755a80
  [ ffffff002f755a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4088, ffffff072a5b4078)
    taskq_thread_wait+0xbe(ffffff072a5b4058, ffffff072a5b4078, ffffff072a5b4088, ffffff002f755bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4058)
    thread_start+8()

ffffff002f75bc40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1ec8
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_mptsas_dr_taskq
  stack pointer for thread ffffff002f75bc40: ffffff002f75ba80
  [ ffffff002f75ba80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1ec8, ffffff072a7e1eb8)
    taskq_thread_wait+0xbe(ffffff072a7e1e98, ffffff072a7e1eb8, ffffff072a7e1ec8, ffffff002f75bbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1e98)
    thread_start+8()

ffffff002f779c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e0e18
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f779c40: ffffff002f779b50
  [ ffffff002f779b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e0e18, ffffff072a7e0e20)
    mptsas_doneq_thread+0x103(ffffff072a7e0e30)
    thread_start+8()

ffffff002f785c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e0e58
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f785c40: ffffff002f785b50
  [ ffffff002f785b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e0e58, ffffff072a7e0e60)
    mptsas_doneq_thread+0x103(ffffff072a7e0e70)
    thread_start+8()

ffffff002f797c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e0e98
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f797c40: ffffff002f797b50
  [ ffffff002f797b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e0e98, ffffff072a7e0ea0)
    mptsas_doneq_thread+0x103(ffffff072a7e0eb0)
    thread_start+8()

ffffff002f7a9c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e0ed8
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7a9c40: ffffff002f7a9b50
  [ ffffff002f7a9b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e0ed8, ffffff072a7e0ee0)
    mptsas_doneq_thread+0x103(ffffff072a7e0ef0)
    thread_start+8()

ffffff002f7bbc40 fffffffffbc2ea80 0                   0  60 ffffff072a7e0f18
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7bbc40: ffffff002f7bbb50
  [ ffffff002f7bbb50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e0f18, ffffff072a7e0f20)
    mptsas_doneq_thread+0x103(ffffff072a7e0f30)
    thread_start+8()

ffffff002f7cdc40 fffffffffbc2ea80 0                   0  60 ffffff072a7e0f58
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7cdc40: ffffff002f7cdb50
  [ ffffff002f7cdb50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e0f58, ffffff072a7e0f60)
    mptsas_doneq_thread+0x103(ffffff072a7e0f70)
    thread_start+8()

ffffff002f7dfc40 fffffffffbc2ea80 0                   0  60 ffffff072a7e0f98
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7dfc40: ffffff002f7dfb50
  [ ffffff002f7dfb50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e0f98, ffffff072a7e0fa0)
    mptsas_doneq_thread+0x103(ffffff072a7e0fb0)
    thread_start+8()

ffffff002f7f1c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e0fd8
  PC: _resume_from_idle+0xf4   THREAD: mptsas_doneq_thread()
  stack pointer for thread ffffff002f7f1c40: ffffff002f7f1b50
  [ ffffff002f7f1b50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e0fd8, ffffff072a7e0fe0)
    mptsas_doneq_thread+0x103(ffffff072a7e0ff0)
    thread_start+8()

ffffff002f7fdc40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1b80
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f7fdc40: ffffff002f7fda80
  [ ffffff002f7fda80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1b80, ffffff072a7e1b70)
    taskq_thread_wait+0xbe(ffffff072a7e1b50, ffffff072a7e1b70, ffffff072a7e1b80, ffffff002f7fdbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1b50)
    thread_start+8()

ffffff002e714c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e11a8
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002e714c40: ffffff002e714a80
  [ ffffff002e714a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e11a8, ffffff072a7e1198)
    taskq_thread_wait+0xbe(ffffff072a7e1178, ffffff072a7e1198, ffffff072a7e11a8, ffffff002e714bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1178)
    thread_start+8()

ffffff002f64cc40 fffffffffbc2ea80 0                   0  60 ffffff072a7e13d8
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f64cc40: ffffff002f64ca80
  [ ffffff002f64ca80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e13d8, ffffff072a7e13c8)
    taskq_thread_wait+0xbe(ffffff072a7e13a8, ffffff072a7e13c8, ffffff072a7e13d8, ffffff002f64cbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e13a8)
    thread_start+8()

ffffff002e6c0c40 fffffffffbc2ea80 0                   0  60 ffffff072ab14728
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002e6c0c40: ffffff002e6c0a80
  [ ffffff002e6c0a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14728, ffffff072ab14718)
    taskq_thread_wait+0xbe(ffffff072ab146f8, ffffff072ab14718, ffffff072ab14728, ffffff002e6c0bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab146f8)
    thread_start+8()

ffffff002f809c40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1950
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f809c40: ffffff002f809a80
  [ ffffff002f809a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1950, ffffff072a7e1940)
    taskq_thread_wait+0xbe(ffffff072a7e1920, ffffff072a7e1940, ffffff072a7e1950, ffffff002f809bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e1920)
    thread_start+8()

ffffff002f550c40 fffffffffbc2ea80 0                   0  60 ffffff072ab14b88
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002f550c40: ffffff002f550a80
  [ ffffff002f550a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14b88, ffffff072ab14b78)
    taskq_thread_wait+0xbe(ffffff072ab14b58, ffffff072ab14b78, ffffff072ab14b88, ffffff002f550bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14b58)
    thread_start+8()

ffffff002edeac40 fffffffffbc2ea80 0                   0  60 ffffff072a7e1720
  PC: _resume_from_idle+0xf4   TASKQ: mpt_sas_nexus_enum_tq
  stack pointer for thread ffffff002edeac40: ffffff002edeaa80
  [ ffffff002edeaa80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a7e1720, ffffff072a7e1710)
    taskq_thread_wait+0xbe(ffffff072a7e16f0, ffffff072a7e1710, ffffff072a7e1720, ffffff002edeabc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a7e16f0)
    thread_start+8()

ffffff002f1ddc40 fffffffffbc2ea80 0                   0  60 fffffffffbd27a78
  PC: _resume_from_idle+0xf4   THREAD: crypto_bufcall_service()
  stack pointer for thread ffffff002f1ddc40: ffffff002f1ddb70
  [ ffffff002f1ddb70 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(fffffffffbd27a78, fffffffffbd27a60)
    crypto_bufcall_service+0x8d()
    thread_start+8()

ffffff002fc17c40 fffffffffbc2ea80 0                   0  60 ffffff0724078c80
  PC: _resume_from_idle+0xf4   TASKQ: zil_clean
  stack pointer for thread ffffff002fc17c40: ffffff002fc17a80
  [ ffffff002fc17a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078c80, ffffff0724078c70)
    taskq_thread_wait+0xbe(ffffff0724078c50, ffffff0724078c70, ffffff0724078c80, ffffff002fc17bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078c50)
    thread_start+8()

ffffff002fc29c40 fffffffffbc2ea80 0                   0  60 ffffff072ab141b0
  PC: _resume_from_idle+0xf4   TASKQ: zil_clean
  stack pointer for thread ffffff002fc29c40: ffffff002fc29a80
  [ ffffff002fc29a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab141b0, ffffff072ab141a0)
    taskq_thread_wait+0xbe(ffffff072ab14180, ffffff072ab141a0, ffffff072ab141b0, ffffff002fc29bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14180)
    thread_start+8()

ffffff072493e880 ffffff072d461010 ffffff06fe20f100   1  59 ffffff072493ea6e
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/sysevent/syseventd
  stack pointer for thread ffffff072493e880: ffffff002f431c50
  [ ffffff002f431c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072493ea6e, ffffff072493ea70, 0)
    cv_wait_sig_swap+0x17(ffffff072493ea6e, ffffff072493ea70)
    cv_waituntil_sig+0xbd(ffffff072493ea6e, ffffff072493ea70, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff072493e140 ffffff072d461010 ffffff06fe214cc0   1  59 ffffff072493e32e
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/sysevent/syseventd
  stack pointer for thread ffffff072493e140: ffffff002f45bc50
  [ ffffff002f45bc50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072493e32e, ffffff072493e330, 0)
    cv_wait_sig_swap+0x17(ffffff072493e32e, ffffff072493e330)
    cv_waituntil_sig+0xbd(ffffff072493e32e, ffffff072493e330, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff072493ac40 ffffff072d461010 ffffff06fe217e80   1  59 ffffff072493ae2e
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/sysevent/syseventd
  stack pointer for thread ffffff072493ac40: ffffff002f449c40
  [ ffffff002f449c40 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072493ae2e, ffffff06f6a8e600, 0)
    cv_wait_sig_swap+0x17(ffffff072493ae2e, ffffff06f6a8e600)
    cv_waituntil_sig+0xbd(ffffff072493ae2e, ffffff06f6a8e600, 0, 0)
    sigtimedwait+0x19c(febaffac, febafea0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff072493a8a0 ffffff072d461010 ffffff06fe274100   1  59 ffffff072493aa8e
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/sysevent/syseventd
  stack pointer for thread ffffff072493a8a0: ffffff002e6b4c50
  [ ffffff002e6b4c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072493aa8e, ffffff072493aa90, 0)
    cv_wait_sig_swap+0x17(ffffff072493aa8e, ffffff072493aa90)
    cv_waituntil_sig+0xbd(ffffff072493aa8e, ffffff072493aa90, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff072493a500 ffffff072d461010 ffffff06fe2137c0   1  59 ffffff072493a6ee
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/sysevent/syseventd
  stack pointer for thread ffffff072493a500: ffffff002f455c50
  [ ffffff002f455c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072493a6ee, ffffff072493a6f0, 0)
    cv_wait_sig_swap+0x17(ffffff072493a6ee, ffffff072493a6f0)
    cv_waituntil_sig+0xbd(ffffff072493a6ee, ffffff072493a6f0, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff07247a08c0 ffffff072d461010 ffffff06fe219a80   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/sysevent/syseventd
  stack pointer for thread ffffff07247a08c0: ffffff002f489d50
  [ ffffff002f489d50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    shuttle_swtch+0x203(fffffffffbd11010)
    door_return+0x214(0, 0, 0, 0, fd6c1e00, f5f00)
    doorfs32+0x180(0, 0, 0, fd6c1e00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()

ffffff07276d00c0 ffffff072d461010 ffffff06fe21be40   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/sysevent/syseventd
  stack pointer for thread ffffff07276d00c0: ffffff002edded20
  [ ffffff002edded20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff002fc35c40)
    shuttle_resume+0x2af(ffffff002fc35c40, fffffffffbd11010)
    door_return+0x3e0(fd7c0d04, 4, 0, 0, fd7c0e00, f5f00)
    doorfs32+0x180(fd7c0d04, 4, 0, fd7c0e00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()

ffffff072494c440 ffffff072d461010 ffffff06fe20e800   1  59 0
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/sysevent/syseventd
  stack pointer for thread ffffff072494c440: ffffff002f443d20
  [ ffffff002f443d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff002fc35c40)
    shuttle_resume+0x2af(ffffff002fc35c40, fffffffffbd11010)
    door_return+0x3e0(fe80ed64, 4, 0, 0, fe80ee00, f5f00)
    doorfs32+0x180(fe80ed64, 4, 0, fe80ee00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()

ffffff0724273040 ffffff072d409000 ffffff06fe247180   1  59 ffffff072427322e
  PC: _resume_from_idle+0xf4   CMD: devfsadmd
  stack pointer for thread ffffff0724273040: ffffff002e720c50
  [ ffffff002e720c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072427322e, ffffff0724273230, 0)
    cv_wait_sig_swap+0x17(ffffff072427322e, ffffff0724273230)
    cv_waituntil_sig+0xbd(ffffff072427322e, ffffff0724273230, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff07242a3080 ffffff072d409000 ffffff06fe246a80   1  59 ffffff07242a326e
  PC: _resume_from_idle+0xf4   CMD: devfsadmd
  stack pointer
for thread ffffff07242a3080: ffffff002f479c60 [ ffffff002f479c60 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff07242a326e, ffffff07242a3270, 769b4e14, 1 , 4) cv_waituntil_sig+0xfa(ffffff07242a326e, ffffff07242a3270, ffffff002f479e10, 3) lwp_park+0x15e(fec3ef38, 0) syslwp_park+0x63(0, fec3ef38, 0) _sys_sysenter_post_swapgs+0x149() ffffff0724937180 ffffff072d409000 ffffff06fe23aa00 1 59 0 PC: _resume_from_idle+0xf4 CMD: devfsadmd stack pointer for thread ffffff0724937180: ffffff002e684d50 [ ffffff002e684d50 _resume_from_idle+0xf4() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd11010) door_return+0x214(0, 0, 0, 0, fd95ee00, f5f00) doorfs32+0x180(0, 0, 0, fd95ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff072426c7a0 ffffff072d409000 ffffff06fe231580 1 59 0 PC: _resume_from_idle+0xf4 CMD: devfsadmd stack pointer for thread ffffff072426c7a0: ffffff002e666d50 [ ffffff002e666d50 _resume_from_idle+0xf4() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd11010) door_return+0x214(fe7a0cfc, 4, 0, 0, fe7a0e00, f5f00) doorfs32+0x180(fe7a0cfc, 4, 0, fe7a0e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff002e113c40 fffffffffbc2ea80 0 0 60 ffffffffc019b8a8 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002e113c40: ffffff002e113b00 [ ffffff002e113b00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019b8a8, ffffffffc019b8b0) md_daemon+0xd4(0, ffffffffc019b880) start_daemon+0x16(ffffffffc019b880) thread_start+8() ffffff002f3dec40 fffffffffbc2ea80 0 0 60 ffffffffc019b868 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002f3dec40: ffffff002f3deb00 [ ffffff002f3deb00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019b868, ffffffffc019b870) md_daemon+0xd4(0, ffffffffc019b840) start_daemon+0x16(ffffffffc019b840) thread_start+8() ffffff002f3eac40 fffffffffbc2ea80 0 0 60 ffffffffc019a0c8 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for 
thread ffffff002f3eac40: ffffff002f3eab00 [ ffffff002f3eab00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019a0c8, ffffffffc019a0d0) md_daemon+0xd4(0, ffffffffc019a0a0) start_daemon+0x16(ffffffffc019a0a0) thread_start+8() ffffff002ef7dc40 fffffffffbc2ea80 0 0 60 ffffffffc019b7e8 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002ef7dc40: ffffff002ef7db00 [ ffffff002ef7db00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019b7e8, ffffffffc019b7f0) md_daemon+0xd4(0, ffffffffc019b7c0) start_daemon+0x16(ffffffffc019b7c0) thread_start+8() ffffff002f034c40 fffffffffbc2ea80 0 0 60 ffffffffc019b7e8 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002f034c40: ffffff002f034b00 [ ffffff002f034b00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019b7e8, ffffffffc019b7f0) md_daemon+0xd4(0, ffffffffc019b7c0) start_daemon+0x16(ffffffffc019b7c0) thread_start+8() ffffff002f48fc40 fffffffffbc2ea80 0 0 60 ffffffffc019b7e8 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002f48fc40: ffffff002f48fb00 [ ffffff002f48fb00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019b7e8, ffffffffc019b7f0) md_daemon+0xd4(0, ffffffffc019b7c0) start_daemon+0x16(ffffffffc019b7c0) thread_start+8() ffffff002f34ec40 fffffffffbc2ea80 0 0 60 ffffffffc019a048 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002f34ec40: ffffff002f34eb00 [ ffffff002f34eb00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019a048, ffffffffc019a050) md_daemon+0xd4(0, ffffffffc019a020) start_daemon+0x16(ffffffffc019a020) thread_start+8() ffffff002e1fdc40 fffffffffbc2ea80 0 0 60 ffffffffc019b768 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002e1fdc40: ffffff002e1fdb00 [ ffffff002e1fdb00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019b768, ffffffffc019b770) md_daemon+0xd4(0, 
ffffffffc019b740) start_daemon+0x16(ffffffffc019b740) thread_start+8() ffffff002e209c40 fffffffffbc2ea80 0 0 60 ffffffffc019b7a8 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002e209c40: ffffff002e209b00 [ ffffff002e209b00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019b7a8, ffffffffc019b7b0) md_daemon+0xd4(0, ffffffffc019b780) start_daemon+0x16(ffffffffc019b780) thread_start+8() ffffff002f461c40 fffffffffbc2ea80 0 0 60 ffffffffc019a088 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002f461c40: ffffff002f461b00 [ ffffff002f461b00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019a088, ffffffffc019a090) md_daemon+0xd4(0, ffffffffc019a060) start_daemon+0x16(ffffffffc019a060) thread_start+8() ffffff002ede4c40 fffffffffbc2ea80 0 0 60 ffffffffc019a3e8 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002ede4c40: ffffff002ede4b00 [ ffffff002ede4b00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019a3e8, ffffffffc019a3f0) md_daemon+0xd4(0, ffffffffc019a3c0) start_daemon+0x16(ffffffffc019a3c0) thread_start+8() ffffff002e325c40 fffffffffbc2ea80 0 0 60 ffffffffc019a428 PC: _resume_from_idle+0xf4 THREAD: start_daemon() stack pointer for thread ffffff002e325c40: ffffff002e325b00 [ ffffff002e325b00 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc019a428, ffffffffc019a430) md_daemon+0xd4(0, ffffffffc019a400) start_daemon+0x16(ffffffffc019a400) thread_start+8() ffffff002eccfc40 fffffffffbc2ea80 0 0 60 ffffff072a5b9b70 PC: _resume_from_idle+0xf4 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff002eccfc40: ffffff002eccfa80 [ ffffff002eccfa80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9b70, ffffff072a5b9b60) taskq_thread_wait+0xbe(ffffff072a5b9b40, ffffff072a5b9b60, ffffff072a5b9b70 , ffffff002eccfbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9b40) thread_start+8() 
ffffff002ecd5c40 fffffffffbc2ea80 0 0 60 ffffff0724078190 PC: _resume_from_idle+0xf4 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff002ecd5c40: ffffff002ecd5a80 [ ffffff002ecd5a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0724078190, ffffff0724078180) taskq_thread_wait+0xbe(ffffff0724078160, ffffff0724078180, ffffff0724078190 , ffffff002ecd5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078160) thread_start+8() ffffff002ecc9c40 fffffffffbc2ea80 0 0 60 ffffff07240783c0 PC: _resume_from_idle+0xf4 TASKQ: mpt_sas_nexus_enum_tq stack pointer for thread ffffff002ecc9c40: ffffff002ecc9a80 [ ffffff002ecc9a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240783c0, ffffff07240783b0) taskq_thread_wait+0xbe(ffffff0724078390, ffffff07240783b0, ffffff07240783c0 , ffffff002ecc9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078390) thread_start+8() ffffff002fc46c40 fffffffffbc2ea80 0 0 60 ffffff072a5b9c88 PC: _resume_from_idle+0xf4 TASKQ: acpinex_nexus_enum_tq stack pointer for thread ffffff002fc46c40: ffffff002fc46a80 [ ffffff002fc46a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9c88, ffffff072a5b9c78) taskq_thread_wait+0xbe(ffffff072a5b9c58, ffffff072a5b9c78, ffffff072a5b9c88 , ffffff002fc46bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9c58) thread_start+8() ffffff002efb6c40 fffffffffbc2ea80 0 0 60 ffffff072a5b9a58 PC: _resume_from_idle+0xf4 TASKQ: pseudo_nexus_enum_tq stack pointer for thread ffffff002efb6c40: ffffff002efb6a80 [ ffffff002efb6a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9a58, ffffff072a5b9a48) taskq_thread_wait+0xbe(ffffff072a5b9a28, ffffff072a5b9a48, ffffff072a5b9a58 , ffffff002efb6bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9a28) thread_start+8() ffffff002f6e1c40 fffffffffbc2ea80 0 0 60 ffffff072a5b9080 PC: _resume_from_idle+0xf4 TASKQ: bridge_bridge stack pointer for thread ffffff002f6e1c40: ffffff002f6e1a80 [ ffffff002f6e1a80 
_resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9080, ffffff072a5b9070) taskq_thread_wait+0xbe(ffffff072a5b9050, ffffff072a5b9070, ffffff072a5b9080 , ffffff002f6e1bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9050) thread_start+8() ffffff002faebc40 fffffffffbc2ea80 0 0 60 ffffff072a5b93c8 PC: _resume_from_idle+0xf4 TASKQ: ah_taskq stack pointer for thread ffffff002faebc40: ffffff002faeba80 [ ffffff002faeba80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b93c8, ffffff072a5b93b8) taskq_thread_wait+0xbe(ffffff072a5b9398, ffffff072a5b93b8, ffffff072a5b93c8 , ffffff002faebbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9398) thread_start+8() ffffff002fb28c40 fffffffffbc2ea80 0 0 60 ffffff072a5b94e0 PC: _resume_from_idle+0xf4 TASKQ: esp_taskq stack pointer for thread ffffff002fb28c40: ffffff002fb28a80 [ ffffff002fb28a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b94e0, ffffff072a5b94d0) taskq_thread_wait+0xbe(ffffff072a5b94b0, ffffff072a5b94d0, ffffff072a5b94e0 , ffffff002fb28bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b94b0) thread_start+8() ffffff002fb65c40 fffffffffbc2ea80 0 0 60 ffffff0723543b60 PC: _resume_from_idle+0xf4 TASKQ: iptun_taskq stack pointer for thread ffffff002fb65c40: ffffff002fb65a80 [ ffffff002fb65a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543b60, ffffff0723543b50) taskq_thread_wait+0xbe(ffffff0723543b30, ffffff0723543b50, ffffff0723543b60 , ffffff002fb65bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0723543b30) thread_start+8() ffffff002ff52c40 fffffffffbc2ea80 0 0 60 ffffff0723543a48 PC: _resume_from_idle+0xf4 TASKQ: simnet_simnet stack pointer for thread ffffff002ff52c40: ffffff002ff52a80 [ ffffff002ff52a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723543a48, ffffff0723543a38) taskq_thread_wait+0xbe(ffffff0723543a18, ffffff0723543a38, ffffff0723543a48 , ffffff002ff52bc0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff0723543a18) thread_start+8() ffffff00300d9c40 fffffffffbc2ea80 0 0 60 ffffffffc00d1e16 PC: _resume_from_idle+0xf4 THREAD: ufs_thread_idle() stack pointer for thread ffffff00300d9c40: ffffff00300d9b70 [ ffffff00300d9b70 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc00d1e16, ffffffffc00d1e20) ufs_thread_idle+0x147() thread_start+8() ffffff00300dfc40 fffffffffbc2ea80 0 0 60 ffffffffc00d23f6 PC: _resume_from_idle+0xf4 THREAD: ufs_thread_hlock() stack pointer for thread ffffff00300dfc40: ffffff00300dfaf0 [ ffffff00300dfaf0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc00d23f6, ffffffffc00d2400) ufs_thread_run+0x80(ffffffffc00d23e0, ffffff00300dfbd0) ufs_thread_hlock+0x73(0) thread_start+8() ffffff003031ac40 fffffffffbc2ea80 0 0 60 ffffffffc014e180 PC: _resume_from_idle+0xf4 THREAD: smb_thread_entry_point() stack pointer for thread ffffff003031ac40: ffffff003031aad0 [ ffffff003031aad0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc014e180, ffffffffc014e178) smb_thread_continue_timedwait_locked+0x5d(ffffffffc014e138, 0) smb_thread_continue+0x2d(ffffffffc014e138) smb_kshare_unexport_thread+0x28(ffffffffc014e138, 0) smb_thread_entry_point+0x91(ffffffffc014e138) thread_start+8() ffffff0030362c40 fffffffffbc2ea80 0 0 60 ffffffffc0154b08 PC: _resume_from_idle+0xf4 THREAD: smb_thread_entry_point() stack pointer for thread ffffff0030362c40: ffffff0030362ac0 [ ffffff0030362ac0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffffffc0154b08, ffffffffc0154b00) smb_thread_continue_timedwait_locked+0x5d(ffffffffc0154ac0, 0) smb_thread_continue+0x2d(ffffffffc0154ac0) smb_oplock_break_thread+0x20(ffffffffc0154ac0, 0) smb_thread_entry_point+0x91(ffffffffc0154ac0) thread_start+8() ffffff07242733e0 ffffff072d409000 ffffff06fe210600 1 59 0 PC: _resume_from_idle+0xf4 CMD: devfsadmd stack pointer for thread ffffff07242733e0: ffffff002e4ffd20 [ ffffff002e4ffd20 _resume_from_idle+0xf4() ] 
swtch_to+0xb6(ffffff002e119c40) shuttle_resume+0x2af(ffffff002e119c40, fffffffffbd11010) door_return+0x3e0(fe99ed8c, 4, 0, 0, fe99ee00, f5f00) doorfs32+0x180(fe99ed8c, 4, 0, fe99ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff072427a760 ffffff072d409000 ffffff06fe223f00 1 59 ffffff072427a94e PC: _resume_from_idle+0xf4 CMD: devfsadmd stack pointer for thread ffffff072427a760: ffffff002e742c50 [ ffffff002e742c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff072427a94e, ffffff072427a950, 0) cv_wait_sig_swap+0x17(ffffff072427a94e, ffffff072427a950) cv_waituntil_sig+0xbd(ffffff072427a94e, ffffff072427a950, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276dc080 ffffff072d409000 ffffff06fe20e100 1 59 ffffff07276dc26e PC: _resume_from_idle+0xf4 CMD: devfsadmd stack pointer for thread ffffff07276dc080: ffffff002edf0dd0 [ ffffff002edf0dd0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07276dc26e, ffffff07276dc270, 0) cv_wait_sig_swap+0x17(ffffff07276dc26e, ffffff07276dc270) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff072494c7e0 ffffff072d461010 ffffff06fe223800 1 59 ffffff072494c9ce PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff072494c7e0: ffffff002f4b6c50 [ ffffff002f4b6c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff072494c9ce, ffffff072494c9d0, 0) cv_wait_sig_swap+0x17(ffffff072494c9ce, ffffff072494c9d0) cv_waituntil_sig+0xbd(ffffff072494c9ce, ffffff072494c9d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff0727657780 ffffff072d461010 ffffff06fe218580 1 59 ffffff072765796e PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff0727657780: ffffff002f375c50 [ ffffff002f375c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff072765796e, ffffff0727657970, 0) 
cv_wait_sig_swap+0x17(ffffff072765796e, ffffff0727657970) cv_waituntil_sig+0xbd(ffffff072765796e, ffffff0727657970, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff0727651060 ffffff072d461010 ffffff06fe218c80 1 59 ffffff072765124e PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff0727651060: ffffff002f381c50 [ ffffff002f381c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff072765124e, ffffff0727651250, 0) cv_wait_sig_swap+0x17(ffffff072765124e, ffffff0727651250) cv_waituntil_sig+0xbd(ffffff072765124e, ffffff0727651250, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276dcb60 ffffff072d461010 ffffff06fe215ac0 1 59 ffffff07276dcd4e PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff07276dcb60: ffffff002f37bc50 [ ffffff002f37bc50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07276dcd4e, ffffff07276dcd50, 0) cv_wait_sig_swap+0x17(ffffff07276dcd4e, ffffff07276dcd50) cv_waituntil_sig+0xbd(ffffff07276dcd4e, ffffff07276dcd50, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276d6440 ffffff072d461010 ffffff06fe219380 1 59 ffffff07276d662e PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff07276d6440: ffffff002f43dc50 [ ffffff002f43dc50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07276d662e, ffffff07276d6630, 0) cv_wait_sig_swap+0x17(ffffff07276d662e, ffffff07276d6630) cv_waituntil_sig+0xbd(ffffff07276d662e, ffffff07276d6630, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276d60a0 ffffff072d461010 ffffff06fe20ff00 1 59 ffffff07276d628e PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff07276d60a0: ffffff002f615c50 [ ffffff002f615c50 
_resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07276d628e, ffffff07276d6290, 0) cv_wait_sig_swap+0x17(ffffff07276d628e, ffffff07276d6290) cv_waituntil_sig+0xbd(ffffff07276d628e, ffffff07276d6290, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276d0800 ffffff072d461010 ffffff06fe2130c0 1 59 ffffff07276d09ee PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff07276d0800: ffffff002e69cc50 [ ffffff002e69cc50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07276d09ee, ffffff07276d09f0, 0) cv_wait_sig_swap+0x17(ffffff07276d09ee, ffffff07276d09f0) cv_waituntil_sig+0xbd(ffffff07276d09ee, ffffff07276d09f0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276d0ba0 ffffff072d461010 ffffff06fe20f800 1 59 ffffff07276d0d8e PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff07276d0ba0: ffffff002e76cc50 [ ffffff002e76cc50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07276d0d8e, ffffff07276d0d90, 0) cv_wait_sig_swap+0x17(ffffff07276d0d8e, ffffff07276d0d90) cv_waituntil_sig+0xbd(ffffff07276d0d8e, ffffff07276d0d90, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276d0460 ffffff072d461010 ffffff06fe210d00 1 59 ffffff07276d064e PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff07276d0460: ffffff002f467c50 [ ffffff002f467c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07276d064e, ffffff07276d0650, 0) cv_wait_sig_swap+0x17(ffffff07276d064e, ffffff07276d0650) cv_waituntil_sig+0xbd(ffffff07276d064e, ffffff07276d0650, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff002fc35c40 fffffffffbc2ea80 0 0 60 fffffffffbca9460 PC: _resume_from_idle+0xf4 THREAD: log_event_deliver() stack 
pointer for thread ffffff002fc35c40: ffffff002fc35b50 [ ffffff002fc35b50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(fffffffffbca9460, fffffffffbca9450) log_event_deliver+0x1b3() thread_start+8() ffffff07247ee3e0 ffffff072d461010 ffffff06fe2780c0 1 59 ffffff07247ee5ce PC: _resume_from_idle+0xf4 CMD: /usr/lib/sysevent/syseventd stack pointer for thread ffffff07247ee3e0: ffffff002f42bdd0 [ ffffff002f42bdd0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07247ee5ce, ffffff07247ee5d0, 0) cv_wait_sig_swap+0x17(ffffff07247ee5ce, ffffff07247ee5d0) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff0730d7dbe0 ffffff073145c018 ffffff072a792800 1 59 ffffff0730d7ddce PC: _resume_from_idle+0xf4 CMD: /usr/sbin/rpcbind stack pointer for thread ffffff0730d7dbe0: ffffff002e4edc60 [ ffffff002e4edc60 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff0730d7ddce, ffffff0730d7ddd0, 2540bd42a, 1, 4) cv_waituntil_sig+0xfa(ffffff0730d7ddce, ffffff0730d7ddd0, ffffff002e4ede10, 3) lwp_park+0x15e(fe93ff18, 0) syslwp_park+0x63(0, fe93ff18, 0) _sys_sysenter_post_swapgs+0x149() ffffff0730c5f4a0 ffffff073145c018 ffffff072b253100 1 59 ffffff0730cafea2 PC: _resume_from_idle+0xf4 CMD: /usr/sbin/rpcbind stack pointer for thread ffffff0730c5f4a0: ffffff002e73bc50 [ ffffff002e73bc50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff0730cafea2, ffffff0730cafe68, 0) cv_wait_sig_swap+0x17(ffffff0730cafea2, ffffff0730cafe68) cv_timedwait_sig_hrtime+0x35(ffffff0730cafea2, ffffff0730cafe68, ffffffffffffffff) poll_common+0x504(8129bc8, b, 0, 0) pollsys+0xe7(8129bc8, b, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff0730cb27e0 ffffff073145c018 ffffff06fe2153c0 1 59 ffffff0730cb29ce PC: _resume_from_idle+0xf4 CMD: /usr/sbin/rpcbind stack pointer for thread ffffff0730cb27e0: ffffff002e660c40 [ ffffff002e660c40 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff0730cb29ce, ffffff06f6a8ec40, 0) 
cv_wait_sig_swap+0x17(ffffff0730cb29ce, ffffff06f6a8ec40) cv_waituntil_sig+0xbd(ffffff0730cb29ce, ffffff06f6a8ec40, 0, 0) sigtimedwait+0x19c(806d794, fed0eef0, 0) _sys_sysenter_post_swapgs+0x149() ffffff002ef10c40 fffffffffbc2ea80 0 0 60 ffffff0724078a50 PC: _resume_from_idle+0xf4 TASKQ: zil_clean stack pointer for thread ffffff002ef10c40: ffffff002ef10a80 [ ffffff002ef10a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0724078a50, ffffff0724078a40) taskq_thread_wait+0xbe(ffffff0724078a20, ffffff0724078a40, ffffff0724078a50 , ffffff002ef10bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078a20) thread_start+8() ffffff002f1e3c40 fffffffffbc2ea80 0 0 60 ffffff0724078820 PC: _resume_from_idle+0xf4 TASKQ: zil_clean stack pointer for thread ffffff002f1e3c40: ffffff002f1e3a80 [ ffffff002f1e3a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0724078820, ffffff0724078810) taskq_thread_wait+0xbe(ffffff07240787f0, ffffff0724078810, ffffff0724078820 , ffffff002f1e3bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07240787f0) thread_start+8() ffffff002f1e9c40 fffffffffbc2ea80 0 0 60 ffffff0724078708 PC: _resume_from_idle+0xf4 TASKQ: zil_clean stack pointer for thread ffffff002f1e9c40: ffffff002f1e9a80 [ ffffff002f1e9a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0724078708, ffffff07240786f8) taskq_thread_wait+0xbe(ffffff07240786d8, ffffff07240786f8, ffffff0724078708 , ffffff002f1e9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07240786d8) thread_start+8() ffffff07241ec460 ffffff0730dad078 ffffff072b24e140 1 59 0 PC: _resume_from_idle+0xf4 CMD: /lib/svc/method/iscsid stack pointer for thread ffffff07241ec460: ffffff002fb87d50 [ ffffff002fb87d50 _resume_from_idle+0xf4() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd11010) door_return+0x214(0, 0, 0, 0, fecaee00, f5f00) doorfs32+0x180(0, 0, 0, fecaee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff002f28ac40 fffffffffbc2ea80 0 0 60 ffffff072a5b4c90 PC: _resume_from_idle+0xf4 
TASKQ: idm_global_taskq stack pointer for thread ffffff002f28ac40: ffffff002f28aa80 [ ffffff002f28aa80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b4c90, ffffff072a5b4c80) taskq_thread_wait+0xbe(ffffff072a5b4c60, ffffff072a5b4c80, ffffff072a5b4c90 , ffffff002f28abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b4c60) thread_start+8() ffffff002f290c40 fffffffffbc2ea80 0 0 60 ffffffffc01a7ea4 PC: _resume_from_idle+0xf4 THREAD: idm_wd_thread() stack pointer for thread ffffff002f290c40: ffffff002f290ae0 [ ffffff002f290ae0 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_hires+0xec(ffffffffc01a7ea4, ffffffffc01a7e80, 12a05f200, 989680, 0) cv_reltimedwait+0x51(ffffffffc01a7ea4, ffffffffc01a7e80, 1f4, 4) idm_wd_thread+0x203(0) thread_start+8() ffffff002fbf9c40 fffffffffbc2ea80 0 0 60 ffffff072a5b4b78 PC: _resume_from_idle+0xf4 TASKQ: iscsi_nexus_enum_tq stack pointer for thread ffffff002fbf9c40: ffffff002fbf9a80 [ ffffff002fbf9a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b4b78, ffffff072a5b4b68) taskq_thread_wait+0xbe(ffffff072a5b4b48, ffffff072a5b4b68, ffffff072a5b4b78 , ffffff002fbf9bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b4b48) thread_start+8() ffffff002fb7bc40 fffffffffbc2ea80 0 0 60 ffffff0724078b68 PC: _resume_from_idle+0xf4 TASKQ: isns_reg_query_taskq stack pointer for thread ffffff002fb7bc40: ffffff002fb7ba80 [ ffffff002fb7ba80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0724078b68, ffffff0724078b58) taskq_thread_wait+0xbe(ffffff0724078b38, ffffff0724078b58, ffffff0724078b68 , ffffff002fb7bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078b38) thread_start+8() ffffff002e4e7c40 fffffffffbc2ea80 0 0 60 ffffff072a5b4948 PC: _resume_from_idle+0xf4 TASKQ: isns_scn_taskq stack pointer for thread ffffff002e4e7c40: ffffff002e4e7a80 [ ffffff002e4e7a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b4948, ffffff072a5b4938) taskq_thread_wait+0xbe(ffffff072a5b4918, 
ffffff072a5b4938, ffffff072a5b4948 , ffffff002e4e7bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b4918) thread_start+8() ffffff002fc0bc40 fffffffffbc2ea80 0 0 60 ffffff072a5b4830 PC: _resume_from_idle+0xf4 TASKQ: iscsi_Static stack pointer for thread ffffff002fc0bc40: ffffff002fc0ba80 [ ffffff002fc0ba80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b4830, ffffff072a5b4820) taskq_thread_wait+0xbe(ffffff072a5b4800, ffffff072a5b4820, ffffff072a5b4830 , ffffff002fc0bbc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b4800) thread_start+8() ffffff002fba5c40 fffffffffbc2ea80 0 0 60 ffffff072a5b9710 PC: _resume_from_idle+0xf4 TASKQ: iscsi_SendTarget stack pointer for thread ffffff002fba5c40: ffffff002fba5a80 [ ffffff002fba5a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9710, ffffff072a5b9700) taskq_thread_wait+0xbe(ffffff072a5b96e0, ffffff072a5b9700, ffffff072a5b9710 , ffffff002fba5bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b96e0) thread_start+8() ffffff002f52ac40 fffffffffbc2ea80 0 0 60 ffffff072ab142c8 PC: _resume_from_idle+0xf4 TASKQ: iscsi_SLP stack pointer for thread ffffff002f52ac40: ffffff002f52aa80 [ ffffff002f52aa80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072ab142c8, ffffff072ab142b8) taskq_thread_wait+0xbe(ffffff072ab14298, ffffff072ab142b8, ffffff072ab142c8 , ffffff002f52abc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072ab14298) thread_start+8() ffffff002f266c40 fffffffffbc2ea80 0 0 60 ffffff072a5b4da8 PC: _resume_from_idle+0xf4 TASKQ: iscsi_iSNS stack pointer for thread ffffff002f266c40: ffffff002f266a80 [ ffffff002f266a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b4da8, ffffff072a5b4d98) taskq_thread_wait+0xbe(ffffff072a5b4d78, ffffff072a5b4d98, ffffff072a5b4da8 , ffffff002f266bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b4d78) thread_start+8() ffffff07247754e0 ffffff0730dad078 ffffff072a8407c0 1 59 ffffff07247756ce PC: _resume_from_idle+0xf4 CMD: 
/lib/svc/method/iscsid
  stack pointer for thread ffffff07247754e0: ffffff002fb81c40
  [ ffffff002fb81c40 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff07247756ce, ffffff06f6a8e440, 0)
    cv_wait_sig_swap+0x17(ffffff07247756ce, ffffff06f6a8e440)
    cv_waituntil_sig+0xbd(ffffff07247756ce, ffffff06f6a8e440, 0, 0)
    sigtimedwait+0x19c(8047de0, 8047cd0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730ca2ba0 ffffff07241f4058 ffffff072a834e40 1 59 ffffff0730d80eaa
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/cron
  stack pointer for thread ffffff0730ca2ba0: ffffff002f5c6c60
  [ ffffff002f5c6c60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730d80eaa, ffffff0730d80e70, 86ef0b31cc52, 1, 3)
    cv_timedwait_sig_hrtime+0x2a(ffffff0730d80eaa, ffffff0730d80e70, 86ef0b31cc52)
    poll_common+0x504(8047b80, 1, ffffff002f5c6e40, ffffff002f5c6e50)
    pollsys+0xe7(8047b80, 1, 8047c34, 806e058)
    _sys_sysenter_post_swapgs+0x149()

ffffff07242bbbe0 ffffff072380c030 ffffff06fe27b1c0 1 59 ffffff072b23af42
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/utmpd
  stack pointer for thread ffffff07242bbbe0: ffffff002ee50c60
  [ ffffff002ee50c60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff072b23af42, ffffff072b23af08, 565fb26bbd38, 1, 3)
    cv_timedwait_sig_hrtime+0x2a(ffffff072b23af42, ffffff072b23af08, 565fb26bbd38)
    poll_common+0x504(806c5d0, 5, ffffff002ee50e40, 0)
    pollsys+0xe7(806c5d0, 5, 8047b48, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff07249424a0 ffffff0724655080 ffffff072a8438c0 1 59 ffffff072ab271da
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/power/powerd
  stack pointer for thread ffffff07249424a0: ffffff002f3d8c50
  [ ffffff002f3d8c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072ab271da, ffffff072ab271a0, 0)
    cv_wait_sig_swap+0x17(ffffff072ab271da, ffffff072ab271a0)
    cv_timedwait_sig_hrtime+0x35(ffffff072ab271da, ffffff072ab271a0, ffffffffffffffff)
    poll_common+0x504(fedbefa8, 1, 0, 0)
    pollsys+0xe7(fedbefa8, 1, 0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff0724940c00 ffffff0724655080 ffffff072a84b140 1 59 ffffff0724940dee
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/power/powerd
  stack pointer for thread ffffff0724940c00: ffffff002f415d90
  [ ffffff002f415d90 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0724940dee, ffffff06f6a8e540, 0)
    cv_wait_sig_swap+0x17(ffffff0724940dee, ffffff06f6a8e540)
    sigsuspend+0x101(fecaef90)
    _sys_sysenter_post_swapgs+0x149()

ffffff0724942be0 ffffff0724655080 ffffff06fe21c540 1 59 ffffff0724745962
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/power/powerd
  stack pointer for thread ffffff0724942be0: ffffff002e672c50
  [ ffffff002e672c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0724745962, ffffff0724745928, 0)
    cv_wait_sig_swap+0x17(ffffff0724745962, ffffff0724745928)
    cv_timedwait_sig_hrtime+0x35(ffffff0724745962, ffffff0724745928, ffffffffffffffff)
    poll_common+0x504(febaffa8, 1, 0, 0)
    pollsys+0xe7(febaffa8, 1, 0, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff0724943480 ffffff0724655080 ffffff072a8400c0 1 59 ffffff072494366e
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/power/powerd
  stack pointer for thread ffffff0724943480: ffffff002f6f5d90
  [ ffffff002f6f5d90 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072494366e, ffffff06f6a8e540, 0)
    cv_wait_sig_swap+0x17(ffffff072494366e, ffffff06f6a8e540)
    sigsuspend+0x101(8047ed0)
    _sys_sysenter_post_swapgs+0x149()

ffffff072fbec140 ffffff072d40f0b0 ffffff072a848040 1 59 ffffff072fbec32e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff072fbec140: ffffff002ec9fcc0
  [ ffffff002ec9fcc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff072fbec32e, ffffff072fbec330, 1bf08eae30, 1, 4)
    cv_waituntil_sig+0xfa(ffffff072fbec32e, ffffff072fbec330, ffffff002ec9fe70, 3)
    nanosleep+0x19f(fe95ef88, fe95ef80)
    _sys_sysenter_post_swapgs+0x149()

ffffff073051c8c0 ffffff072d40f0b0 ffffff072a842ac0 1 59 ffffff072d40f528
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff073051c8c0: ffffff002f049d80
  [ ffffff002f049d80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig+0x185(ffffff072d40f528, fffffffffbd11010)
    door_unref+0x94()
    doorfs32+0xf5(0, 0, 0, 0, 0, 8)
    _sys_sysenter_post_swapgs+0x149()

ffffff07315aa420 ffffff072d40f0b0 ffffff072b24f000 1 59 ffffff07315aa60e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff07315aa420: ffffff002f35fcc0
  [ ffffff002f35fcc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff07315aa60e, ffffff07315aa610, 22ecb25be66, 1, 4)
    cv_waituntil_sig+0xfa(ffffff07315aa60e, ffffff07315aa610, ffffff002f35fe70, 3)
    nanosleep+0x19f(fd64cf78, fd64cf70)
    _sys_sysenter_post_swapgs+0x149()

ffffff07315aa080 ffffff072d40f0b0 ffffff06fe2145c0 1 59 ffffff07315aa26e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff07315aa080: ffffff002fc2fcc0
  [ ffffff002fc2fcc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff07315aa26e, ffffff07315aa270, 34630b89e66, 1, 4)
    cv_waituntil_sig+0xfa(ffffff07315aa26e, ffffff07315aa270, ffffff002fc2fe70, 3)
    nanosleep+0x19f(fd54df78, fd54df70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0731642140 ffffff072d40f0b0 ffffff06fe283840 1 59 ffffff073164232e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0731642140: ffffff002eefec60
  [ ffffff002eefec60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff073164232e, ffffff0731642330, 2540bd427, 1, 4)
    cv_waituntil_sig+0xfa(ffffff073164232e, ffffff0731642330, ffffff002eefee10, 3)
    lwp_park+0x15e(fd44ef18, 0)
    syslwp_park+0x63(0, fd44ef18, 0)
    _sys_sysenter_post_swapgs+0x149()

ffffff07247a0180 ffffff072d40f0b0 ffffff06fe22dcc0 1 59 ffffff07247a036e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff07247a0180: ffffff002f437cc0
  [ ffffff002f437cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff07247a036e, ffffff07247a0370, 14f46b0295, 1, 4)
    cv_waituntil_sig+0xfa(ffffff07247a036e, ffffff07247a0370, ffffff002f437e70, 3)
    nanosleep+0x19f(fd34ff38, fd34ff30)
    _sys_sysenter_post_swapgs+0x149()

ffffff07242fe140 ffffff072d40f0b0 ffffff06fe279cc0 1 59 ffffff07242fe32e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff07242fe140: ffffff002ee20cc0
  [ ffffff002ee20cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff07242fe32e, ffffff07242fe330, 3466c53688b, 1, 4)
    cv_waituntil_sig+0xfa(ffffff07242fe32e, ffffff07242fe330, ffffff002ee20e70, 3)
    nanosleep+0x19f(fd250f78, fd250f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff072428f0c0 ffffff072d40f0b0 ffffff072a830080 1 59 ffffff072428f2ae
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff072428f0c0: ffffff002eeb0cc0
  [ ffffff002eeb0cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff072428f2ae, ffffff072428f2b0, 22ecb25be66, 1, 4)
    cv_waituntil_sig+0xfa(ffffff072428f2ae, ffffff072428f2b0, ffffff002eeb0e70, 3)
    nanosleep+0x19f(fd151f78, fd151f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c64480 ffffff072d40f0b0 ffffff06fe230780 1 59 ffffff0730c6466e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c64480: ffffff002ef28cc0
  [ ffffff002ef28cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c6466e, ffffff0730c64670, 3466c53688e, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c6466e, ffffff0730c64670, ffffff002ef28e70, 3)
    nanosleep+0x19f(fd052f78, fd052f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c853e0 ffffff072d40f0b0 ffffff072a83eb00 1 59 0
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c853e0: ffffff002f24fd20
  [ ffffff002f24fd20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff072493a160)
    shuttle_resume+0x2af(ffffff072493a160, fffffffffbd11010)
    door_return+0x3e0(fdcc1c80, 179, 0, 0, fdd45e00, f5f00)
    doorfs32+0x180(fdcc1c80, 179, 0, fdd45e00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c1eae0 ffffff072d40f0b0 ffffff072a83b800 1 59 0
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c1eae0: ffffff002ecfad20
  [ ffffff002ecfad20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff0724286820)
    shuttle_resume+0x2af(ffffff0724286820, fffffffffbd11010)
    door_return+0x3e0(fe54dca8, 13d, 0, 0, fe551e00, f5f00)
    doorfs32+0x180(fe54dca8, 13d, 0, fe551e00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c1e740 ffffff072d40f0b0 ffffff072a839c00 1 59 ffffff0730c1e92e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c1e740: ffffff002ed00cc0
  [ ffffff002ed00cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c1e92e, ffffff0730c1e930, 5d21db9e69, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c1e92e, ffffff0730c1e930, ffffff002ed00e70, 3)
    nanosleep+0x19f(fe43ef78, fe43ef70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c1e3a0 ffffff072d40f0b0 ffffff072a837840 1 59 ffffff0730c1e58e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c1e3a0: ffffff002ed06cc0
  [ ffffff002ed06cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c1e58e, ffffff0730c1e590, 8bee64388b, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c1e58e, ffffff0730c1e590, ffffff002ed06e70, 3)
    nanosleep+0x19f(fe33ff78, fe33ff70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c1e000 ffffff072d40f0b0 ffffff072a839500 1 59 ffffff0730c1e1ee
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c1e000: ffffff002ed0ccc0
  [ ffffff002ed0ccc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c1e1ee, ffffff0730c1e1f0, 22ecb25be6a, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c1e1ee, ffffff0730c1e1f0, ffffff002ed0ce70, 3)
    nanosleep+0x19f(fe240f78, fe240f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c1cb00 ffffff072d40f0b0 ffffff072a836340 1 59 ffffff0730c1ccee
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c1cb00: ffffff002ed12cc0
  [ ffffff002ed12cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c1ccee, ffffff0730c1ccf0, 3466c53688b, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c1ccee, ffffff0730c1ccf0, ffffff002ed12e70, 3)
    nanosleep+0x19f(fe141f78, fe141f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c1c760 ffffff072d40f0b0 ffffff072a838000 1 59 ffffff0730c1c94e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c1c760: ffffff002ed18cc0
  [ ffffff002ed18cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c1c94e, ffffff0730c1c950, 22ecb25be69, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c1c94e, ffffff0730c1c950, ffffff002ed18e70, 3)
    nanosleep+0x19f(fe042f78, fe042f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c1c3c0 ffffff072d40f0b0 ffffff072a838e00 1 59 ffffff0730c1c5ae
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c1c3c0: ffffff002ed1ecc0
  [ ffffff002ed1ecc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c1c5ae, ffffff0730c1c5b0, 3466c53687a, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c1c5ae, ffffff0730c1c5b0, ffffff002ed1ee70, 3)
    nanosleep+0x19f(fdf43f78, fdf43f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730d8e820 ffffff072d40f0b0 ffffff06fe22f8c0 1 59 ffffff0730d8ea0e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730d8e820: ffffff002eea4cc0
  [ ffffff002eea4cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730d8ea0e, ffffff0730d8ea10, 22ecb25be45, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730d8ea0e, ffffff0730d8ea10, ffffff002eea4e70, 3)
    nanosleep+0x19f(fda48f78, fda48f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0724229160 ffffff072d40f0b0 ffffff072a830780 1 59 ffffff072422934e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0724229160: ffffff002ed3ccc0
  [ ffffff002ed3ccc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff072422934e, ffffff0724229350, 3466c53686d, 1, 4)
    cv_waituntil_sig+0xfa(ffffff072422934e, ffffff0724229350, ffffff002ed3ce70, 3)
    nanosleep+0x19f(fd949f78, fd949f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c85780 ffffff072d40f0b0 ffffff06fe278ec0 1 59 0
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c85780: ffffff002fb9fd20
  [ ffffff002fb9fd20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07241ec800)
    shuttle_resume+0x2af(ffffff07241ec800, fffffffffbd11010)
    door_return+0x3e0(fde40ca8, 13d, 0, 0, fde44e00, f5f00)
    doorfs32+0x180(fde40ca8, 13d, 0, fde44e00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c85040 ffffff072d40f0b0 ffffff072a849c40 1 59 ffffff0730c8522e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c85040: ffffff002ecabcc0
  [ ffffff002ecabcc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c8522e, ffffff0730c85230, 22ecb25be69, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c8522e, ffffff0730c85230, ffffff002ecabe70, 3)
    nanosleep+0x19f(fdc46f78, fdc46f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff0730c7eb40 ffffff072d40f0b0 ffffff072a83b100 1 59 ffffff0730c7ed2e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff0730c7eb40: ffffff002ed42cc0
  [ ffffff002ed42cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0730c7ed2e, ffffff0730c7ed30, 34630b89e83, 1, 4)
    cv_waituntil_sig+0xfa(ffffff0730c7ed2e, ffffff0730c7ed30, ffffff002ed42e70, 3)
    nanosleep+0x19f(fdb47f78, fdb47f70)
    _sys_sysenter_post_swapgs+0x149()

ffffff073155f780 ffffff072d40f0b0 ffffff072b259ac0 1 59 ffffff073155f96e
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff073155f780: ffffff002f228cc0
  [ ffffff002f228cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff073155f96e, ffffff073155f970, 22ecb25be49, 1, 4)
    cv_waituntil_sig+0xfa(ffffff073155f96e, ffffff073155f970, ffffff002f228e70, 3)
    nanosleep+0x19f(fd84af78, fd84af70)
    _sys_sysenter_post_swapgs+0x149()

ffffff073155f3e0 ffffff072d40f0b0 ffffff072b2585c0 1 59 ffffff073155f5ce
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff073155f3e0: ffffff002f4dbcc0
  [ ffffff002f4dbcc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff073155f5ce, ffffff073155f5d0, 3466c53688e, 1, 4)
    cv_waituntil_sig+0xfa(ffffff073155f5ce, ffffff073155f5d0, ffffff002f4dbe70, 3)
    nanosleep+0x19f(fd74bf78, fd74bf70)
    _sys_sysenter_post_swapgs+0x149()

ffffff07246d2bc0 ffffff072d40f0b0 ffffff06fe21cc40 1 59 ffffff07246d2dae
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff07246d2bc0: ffffff002e696cc0
  [ ffffff002e696cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff07246d2dae, ffffff07246d2db0, df847567d, 1, 4)
    cv_waituntil_sig+0xfa(ffffff07246d2dae, ffffff07246d2db0, ffffff002e696e70, 3)
    nanosleep+0x19f(fcf53f58, fcf53f50)
    _sys_sysenter_post_swapgs+0x149()

ffffff073051c520 ffffff072d40f0b0 ffffff072a838700 1 59 0
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff073051c520: ffffff002ecb7d20
  [ ffffff002ecb7d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff0724943bc0)
    shuttle_resume+0x2af(ffffff0724943bc0, fffffffffbd11010)
    door_return+0x3e0(fe6cbca0, b6, 0, 0, fe74fe00, f5f00)
    doorfs32+0x180(fe6cbca0, b6, 0, fe74fe00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()

ffffff073051c180 ffffff072d40f0b0 ffffff072a83aa00 1 60 ffffff07247358e0
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff073051c180: ffffff002ecbd950
  [ ffffff002ecbd950 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig+0x185(ffffff07247358e0, ffffff07247357a8)
    so_dequeue_msg+0x2f7(ffffff0724735788, ffffff002ecbdb68, ffffff002ecbddf0, ffffff002ecbdb70, 20)
    so_recvmsg+0x249(ffffff0724735788, ffffff002ecbdc30, ffffff002ecbddf0, ffffff072cd06238)
    socket_recvmsg+0x33(ffffff0724735788, ffffff002ecbdc30, ffffff002ecbddf0, ffffff072cd06238)
    socket_vop_read+0x5f(ffffff0724749580, ffffff002ecbddf0, 0, ffffff072cd06238, 0)
    fop_read+0x5b(ffffff0724749580, ffffff002ecbddf0, 0, ffffff072cd06238, 0)
    read+0x2a7(4, fe650664, 94c)
    read32+0x1e(4, fe650664, 94c)
    _sys_sysenter_post_swapgs+0x149()

ffffff072fbec4e0 ffffff072d40f0b0 ffffff072a83cf00 1 59 ffffff072fbec6ce
  PC: _resume_from_idle+0xf4   CMD: /usr/sbin/nscd
  stack pointer for thread ffffff072fbec4e0: ffffff002ec99dd0
  [ ffffff002ec99dd0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072fbec6ce, ffffff072fbec6d0, 0)
    cv_wait_sig_swap+0x17(ffffff072fbec6ce, ffffff072fbec6d0)
    pause+0x45()
    _sys_sysenter_post_swapgs+0x149()

ffffff073151ec40 ffffff0731532070 ffffff072a831580 1 59 ffffff073151ee2e
  PC: _resume_from_idle+0xf4   CMD: /usr/lib/inet/ntpd -p /var/run/ntp.pid -g
  stack pointer for thread ffffff073151ec40: ffffff002fbbbd90
  [ ffffff002fbbbd90 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff073151ee2e, ffffff06f6a8eb80, 0)
    cv_wait_sig_swap+0x17(ffffff073151ee2e, ffffff06f6a8eb80)
    sigsuspend+0x101(8047910)
    _sys_sysenter_post_swapgs+0x149()

ffffff0724775880 ffffff072ccfe0a0 ffffff072a837140 1 59 ffffff07314ca882
  PC: _resume_from_idle+0xf4   CMD: -bash
  stack pointer for thread ffffff0724775880: ffffff002eeec9f0
  [ ffffff002eeec9f0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig+0x185(ffffff07314ca882, ffffff07312f80c0)
    str_cv_wait+0x27(ffffff07314ca882, ffffff07312f80c0, ffffffffffffffff, 0)
    strwaitq+0x2c3(ffffff07312f8040, 2, 1, 2803, ffffffffffffffff, ffffff002eeecbb8)
    strread+0x144(ffffff0731c23640, ffffff002eeecdf0, ffffff07314ebdd0)
    spec_read+0x66(ffffff0731c23640, ffffff002eeecdf0, 0, ffffff07314ebdd0, 0)
    fop_read+0x5b(ffffff0731c23640, ffffff002eeecdf0, 0, ffffff07314ebdd0, 0)
    read+0x2a7(0, 80472bb, 1)
    read32+0x1e(0, 80472bb, 1)
    _sys_sysenter_post_swapgs+0x149()

ffffff002fe33c40 ffffff0730d90060 ffffff06fe27dc80 2 0 ffffff07441aeb20
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe33c40: ffffff002fe33990
  [ ffffff002fe33990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aeb20, ffffff07441aeb10)
    taskq_thread_wait+0xbe(ffffff07441aeaf0, ffffff07441aeb10, ffffff07441aeb20, ffffff002fe33ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aeaf0)
    thread_start+8()

ffffff002fe39c40 ffffff0730d90060 ffffff072a790c00 2 99 ffffff0724078d98
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe39c40: ffffff002fe39990
  [ ffffff002fe39990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078d98, ffffff0724078d88)
    taskq_thread_wait+0xbe(ffffff0724078d68, ffffff0724078d88, ffffff0724078d98, ffffff002fe39ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078d68)
    thread_start+8()

ffffff002fe9fc40 ffffff0730d90060 ffffff072b297e40 2 99 ffffff072ab143e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe9fc40: ffffff002fe9f990
  [ ffffff002fe9f990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab143e0, ffffff072ab143d0)
    taskq_thread_wait+0xbe(ffffff072ab143b0, ffffff072ab143d0, ffffff072ab143e0, ffffff002fe9fad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab143b0)
    thread_start+8()

ffffff002fe87c40 ffffff0730d90060 ffffff072b245ac0 2 99 ffffff072ab143e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe87c40: ffffff002fe87990
  [ ffffff002fe87990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab143e0, ffffff072ab143d0)
    taskq_thread_wait+0xbe(ffffff072ab143b0, ffffff072ab143d0, ffffff072ab143e0, ffffff002fe87ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab143b0)
    thread_start+8()

ffffff002fe7bc40 ffffff0730d90060 ffffff072b24da40 2 99 ffffff072ab143e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe7bc40: ffffff002fe7b990
  [ ffffff002fe7b990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab143e0, ffffff072ab143d0)
    taskq_thread_wait+0xbe(ffffff072ab143b0, ffffff072ab143d0, ffffff072ab143e0, ffffff002fe7bad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab143b0)
    thread_start+8()

ffffff002fe63c40 ffffff0730d90060 ffffff072b24a880 2 99 ffffff072ab143e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe63c40: ffffff002fe63990
  [ ffffff002fe63990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab143e0, ffffff072ab143d0)
    taskq_thread_wait+0xbe(ffffff072ab143b0, ffffff072ab143d0, ffffff072ab143e0, ffffff002fe63ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab143b0)
    thread_start+8()

ffffff002fe57c40 ffffff0730d90060 ffffff06fe20be00 2 99 ffffff072ab143e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe57c40: ffffff002fe57990
  [ ffffff002fe57990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab143e0, ffffff072ab143d0)
    taskq_thread_wait+0xbe(ffffff072ab143b0, ffffff072ab143d0, ffffff072ab143e0, ffffff002fe57ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab143b0)
    thread_start+8()

ffffff002fe4bc40 ffffff0730d90060 ffffff06fe233880 2 99 ffffff072ab143e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe4bc40: ffffff002fe4b990
  [ ffffff002fe4b990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab143e0, ffffff072ab143d0)
    taskq_thread_wait+0xbe(ffffff072ab143b0, ffffff072ab143d0, ffffff072ab143e0, ffffff002fe4bad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab143b0)
    thread_start+8()

ffffff002fe45c40 ffffff0730d90060 ffffff06fe244e80 2 99 ffffff072ab143e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe45c40: ffffff002fe45990
  [ ffffff002fe45990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab143e0, ffffff072ab143d0)
    taskq_thread_wait+0xbe(ffffff072ab143b0, ffffff072ab143d0, ffffff072ab143e0, ffffff002fe45ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab143b0)
    thread_start+8()

ffffff002fe3fc40 ffffff0730d90060 ffffff06fe27c080 2 99 ffffff072ab143e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe3fc40: ffffff002fe3f990
  [ ffffff002fe3f990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab143e0, ffffff072ab143d0)
    taskq_thread_wait+0xbe(ffffff072ab143b0, ffffff072ab143d0, ffffff072ab143e0, ffffff002fe3fad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab143b0)
    thread_start+8()

ffffff002ff35c40 ffffff0730d90060 ffffff072b244cc0 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002ff35c40: ffffff002ff35990
  [ ffffff002ff35990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002ff35ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002ff11c40 ffffff0730d90060 ffffff072b2430c0 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002ff11c40: ffffff002ff11990
  [ ffffff002ff11990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002ff11ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fef9c40 ffffff0730d90060 ffffff072b242900 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fef9c40: ffffff002fef9990
  [ ffffff002fef9990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fef9ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fee1c40 ffffff0730d90060 ffffff072b27a180 2 99 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fee1c40: ffffff002fee1990
  [ ffffff002fee1990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fee1ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fec9c40 ffffff0730d90060 ffffff072b2570c0 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fec9c40: ffffff002fec9990
  [ ffffff002fec9990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fec9ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002feb7c40 ffffff0730d90060 ffffff072b25a1c0 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002feb7c40: ffffff002feb7990
  [ ffffff002feb7990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002feb7ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fe99c40 ffffff0730d90060 ffffff072b258cc0 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe99c40: ffffff002fe99990
  [ ffffff002fe99990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fe99ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fe81c40 ffffff0730d90060 ffffff072a78f700 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe81c40: ffffff002fe81990
  [ ffffff002fe81990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fe81ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fe75c40 ffffff0730d90060 ffffff072b247780 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe75c40: ffffff002fe75990
  [ ffffff002fe75990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fe75ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fe69c40 ffffff0730d90060 ffffff072a78f000 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe69c40: ffffff002fe69990
  [ ffffff002fe69990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fe69ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fe5dc40 ffffff0730d90060 ffffff06fe27ea80 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe5dc40: ffffff002fe5d990
  [ ffffff002fe5d990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fe5dad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff002fe51c40 ffffff0730d90060 ffffff072a834740 2 0 ffffff072a5b9940
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe51c40: ffffff002fe51990
  [ ffffff002fe51990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9940, ffffff072a5b9930)
    taskq_thread_wait+0xbe(ffffff072a5b9910, ffffff072a5b9930, ffffff072a5b9940, ffffff002fe51ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9910)
    thread_start+8()

ffffff003038cc40 ffffff0730d90060 ffffff072b23be00 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff003038cc40: ffffff003038c990
  [ ffffff003038c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff003038cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff003036ec40 ffffff0730d90060 ffffff072b277e80 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff003036ec40: ffffff003036e990
  [ ffffff003036e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff003036ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002ff3bc40 ffffff0730d90060 ffffff072b277780 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002ff3bc40: ffffff002ff3b990
  [ ffffff002ff3b990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002ff3bad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002ff23c40 ffffff0730d90060 ffffff072b278c80 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002ff23c40: ffffff002ff23990
  [ ffffff002ff23990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002ff23ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002ff05c40 ffffff0730d90060 ffffff072b247080 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002ff05c40: ffffff002ff05990
  [ ffffff002ff05990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002ff05ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002feedc40 ffffff0730d90060 ffffff072b29a140 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002feedc40: ffffff002feed990
  [ ffffff002feed990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002feedad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002fed5c40 ffffff0730d90060 ffffff072b2461c0 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fed5c40: ffffff002fed5990
  [ ffffff002fed5990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002fed5ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002febdc40 ffffff0730d90060 ffffff072b2445c0 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002febdc40: ffffff002febd990
  [ ffffff002febd990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002febdad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002feb1c40 ffffff0730d90060 ffffff072a845c80 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002feb1c40: ffffff002feb1990
  [ ffffff002feb1990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002feb1ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002fe93c40 ffffff0730d90060 ffffff072a835c40 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe93c40: ffffff002fe93990
  [ ffffff002fe93990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002fe93ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002fe8dc40 ffffff0730d90060 ffffff072b23e800 2 99 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe8dc40: ffffff002fe8d990
  [ ffffff002fe8d990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002fe8dad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff002fe6fc40 ffffff0730d90060 ffffff072a831c80 2 0 ffffff07441ea7e0
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fe6fc40: ffffff002fe6f990
  [ ffffff002fe6f990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea7e0, ffffff07441ea7d0)
    taskq_thread_wait+0xbe(ffffff07441ea7b0, ffffff07441ea7d0, ffffff07441ea7e0, ffffff002fe6fad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea7b0)
    thread_start+8()

ffffff00303c2c40 ffffff0730d90060 ffffff072b241b00 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff00303c2c40: ffffff00303c2990
  [ ffffff00303c2990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff00303c2ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff00303b6c40 ffffff0730d90060 ffffff072b23b700 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff00303b6c40: ffffff00303b6990
  [ ffffff00303b6990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff00303b6ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff0030398c40 ffffff0730d90060 ffffff072b297740 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff0030398c40: ffffff0030398990
  [ ffffff0030398990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff0030398ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff0030380c40 ffffff0730d90060 ffffff072b2437c0 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff0030380c40: ffffff0030380990
  [ ffffff0030380990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff0030380ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff002ff47c40 ffffff0730d90060 ffffff072b23b000 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002ff47c40: ffffff002ff47990
  [ ffffff002ff47990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff002ff47ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff002ff29c40 ffffff0730d90060 ffffff072b23e100 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002ff29c40: ffffff002ff29990
  [ ffffff002ff29990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff002ff29ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff002ff0bc40 ffffff0730d90060 ffffff072b278580 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002ff0bc40: ffffff002ff0b990
  [ ffffff002ff0b990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff002ff0bad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff002fef3c40 ffffff0730d90060 ffffff072b249a80 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fef3c40: ffffff002fef3990
  [ ffffff002fef3990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff002fef3ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff002fedbc40 ffffff0730d90060 ffffff072b298540 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fedbc40: ffffff002fedb990
  [ ffffff002fedb990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff002fedbad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff002fec3c40 ffffff0730d90060 ffffff072b243ec0 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fec3c40: ffffff002fec3990
  [ ffffff002fec3990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff002fec3ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff002feabc40 ffffff0730d90060 ffffff072b299a40 2 0 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002feabc40: ffffff002feab990
  [ ffffff002feab990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff002feabad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff002fea5c40 ffffff0730d90060 ffffff072b247e80 2 99 ffffff07441aea08
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff002fea5c40: ffffff002fea5990
  [ ffffff002fea5990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aea08, ffffff07441ae9f8)
    taskq_thread_wait+0xbe(ffffff07441ae9d8, ffffff07441ae9f8, ffffff07441aea08, ffffff002fea5ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae9d8)
    thread_start+8()

ffffff00303fec40 ffffff0730d90060 ffffff06fe244780 2 0 ffffff07441aec38
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff00303fec40: ffffff00303fe990
  [ ffffff00303fe990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aec38, ffffff07441aec28)
    taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38, ffffff00303fead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aec08)
    thread_start+8()

ffffff00303e6c40 ffffff0730d90060 ffffff06fe222800 2 0 ffffff07441aec38
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff00303e6c40: ffffff00303e6990
  [ ffffff00303e6990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aec38, ffffff07441aec28)
    taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38, ffffff00303e6ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aec08)
    thread_start+8()

ffffff00303cec40 ffffff0730d90060 ffffff072b23d300 2 0 ffffff07441aec38
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff00303cec40: ffffff00303ce990
  [ ffffff00303ce990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aec38, ffffff07441aec28)
    taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38, ffffff00303cead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aec08)
    thread_start+8()

ffffff00303b0c40 ffffff0730d90060 ffffff072b277080 2 0 ffffff07441aec38
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff00303b0c40: ffffff00303b0990
  [ ffffff00303b0990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aec38, ffffff07441aec28)
    taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38, ffffff00303b0ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aec08)
    thread_start+8()

ffffff003039ec40 ffffff0730d90060 ffffff072b279380 2 99 ffffff07441aec38
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff003039ec40: ffffff003039e990
  [ ffffff003039e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aec38, ffffff07441aec28)
    taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38, ffffff003039ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aec08)
    thread_start+8()

ffffff003037ac40 ffffff0730d90060 ffffff072b279a80 2 0 ffffff07441aec38
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff003037ac40: ffffff003037a990
  [ ffffff003037a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aec38, ffffff07441aec28)
    taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38, ffffff003037aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aec08)
    thread_start+8()

ffffff0030368c40 ffffff0730d90060 ffffff072b23c500 2 99 ffffff07441aec38
  PC: _resume_from_idle+0xf4   CMD: zpool-zaphod
  stack pointer for thread ffffff0030368c40: ffffff0030368990 [
ffffff0030368990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441aec38, ffffff07441aec28) taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38 , ffffff0030368ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441aec08) thread_start+8() ffffff002ff2fc40 ffffff0730d90060 ffffff072b298c40 2 0 ffffff07441aec38 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff002ff2fc40: ffffff002ff2f990 [ ffffff002ff2f990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441aec38, ffffff07441aec28) taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38 , ffffff002ff2fad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441aec08) thread_start+8() ffffff002ff1dc40 ffffff0730d90060 ffffff072b2453c0 2 99 ffffff07441aec38 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff002ff1dc40: ffffff002ff1d990 [ ffffff002ff1d990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441aec38, ffffff07441aec28) taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38 , ffffff002ff1dad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441aec08) thread_start+8() ffffff002feffc40 ffffff0730d90060 ffffff072b248c80 2 0 ffffff07441aec38 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff002feffc40: ffffff002feff990 [ ffffff002feff990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441aec38, ffffff07441aec28) taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38 , ffffff002feffad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441aec08) thread_start+8() ffffff002fee7c40 ffffff0730d90060 ffffff072b24a180 2 0 ffffff07441aec38 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff002fee7c40: ffffff002fee7990 [ ffffff002fee7990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441aec38, ffffff07441aec28) taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38 
, ffffff002fee7ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441aec08) thread_start+8() ffffff002fecfc40 ffffff0730d90060 ffffff072b248580 2 0 ffffff07441aec38 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff002fecfc40: ffffff002fecf990 [ ffffff002fecf990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441aec38, ffffff07441aec28) taskq_thread_wait+0xbe(ffffff07441aec08, ffffff07441aec28, ffffff07441aec38 , ffffff002fecfad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441aec08) thread_start+8() ffffff0030446c40 ffffff0730d90060 ffffff06fe20cc00 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030446c40: ffffff0030446990 [ ffffff0030446990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff0030446ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff003042ec40 ffffff0730d90060 ffffff06fe246380 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003042ec40: ffffff003042e990 [ ffffff003042e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff003042ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff0030416c40 ffffff0730d90060 ffffff06fe207740 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030416c40: ffffff0030416990 [ ffffff0030416990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff0030416ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff0030404c40 ffffff0730d90060 ffffff06fe23f900 2 0 ffffff072a5b9eb8 PC: 
_resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030404c40: ffffff0030404990 [ ffffff0030404990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff0030404ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff00303ecc40 ffffff0730d90060 ffffff06fe21f700 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303ecc40: ffffff00303ec990 [ ffffff00303ec990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff00303ecad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff00303d4c40 ffffff0730d90060 ffffff06fe238700 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303d4c40: ffffff00303d4990 [ ffffff00303d4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff00303d4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff00303bcc40 ffffff0730d90060 ffffff06fe277900 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303bcc40: ffffff00303bc990 [ ffffff00303bc990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff00303bcad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff00303a4c40 ffffff0730d90060 ffffff072b242200 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303a4c40: ffffff00303a4990 [ ffffff00303a4990 _resume_from_idle+0xf4() ] swtch+0x141() 
cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff00303a4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff0030392c40 ffffff0730d90060 ffffff072b23da00 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030392c40: ffffff0030392990 [ ffffff0030392990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff0030392ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff0030374c40 ffffff0730d90060 ffffff072b29a840 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030374c40: ffffff0030374990 [ ffffff0030374990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff0030374ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff002ff41c40 ffffff0730d90060 ffffff072b297040 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff002ff41c40: ffffff002ff41990 [ ffffff002ff41990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff002ff41ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff002ff17c40 ffffff0730d90060 ffffff072b27a880 2 0 ffffff072a5b9eb8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff002ff17c40: ffffff002ff17990 [ ffffff002ff17990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9eb8, ffffff072a5b9ea8) taskq_thread_wait+0xbe(ffffff072a5b9e88, ffffff072a5b9ea8, ffffff072a5b9eb8 , ffffff002ff17ad0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff072a5b9e88) thread_start+8() ffffff0030494c40 ffffff0730d90060 ffffff06fe240ec0 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030494c40: ffffff0030494990 [ ffffff0030494990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff0030494ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff003047cc40 ffffff0730d90060 ffffff06fe20a140 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003047cc40: ffffff003047c990 [ ffffff003047c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff003047cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff0030464c40 ffffff0730d90060 ffffff06fe2431c0 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030464c40: ffffff0030464990 [ ffffff0030464990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff0030464ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff003044cc40 ffffff0730d90060 ffffff06fe21e840 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003044cc40: ffffff003044c990 [ ffffff003044c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff003044cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff003043ac40 ffffff0730d90060 ffffff06fe20b700 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack 
pointer for thread ffffff003043ac40: ffffff003043a990 [ ffffff003043a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff003043aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff0030422c40 ffffff0730d90060 ffffff06fe241cc0 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030422c40: ffffff0030422990 [ ffffff0030422990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff0030422ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff003040ac40 ffffff0730d90060 ffffff06fe20a840 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003040ac40: ffffff003040a990 [ ffffff003040a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff003040aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff00303f2c40 ffffff0730d90060 ffffff06fe207e40 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303f2c40: ffffff00303f2990 [ ffffff00303f2990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff00303f2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff00303dac40 ffffff0730d90060 ffffff06fe21b740 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303dac40: ffffff00303da990 [ ffffff00303da990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) 
taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff00303daad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff00303c8c40 ffffff0730d90060 ffffff06fe21b040 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303c8c40: ffffff00303c8990 [ ffffff00303c8990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff00303c8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff00303aac40 ffffff0730d90060 ffffff072b23cc00 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303aac40: ffffff00303aa990 [ ffffff00303aa990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff00303aaad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff0030386c40 ffffff0730d90060 ffffff072b299340 2 0 ffffff07441ae8f0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030386c40: ffffff0030386990 [ ffffff0030386990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae8f0, ffffff07441ae8e0) taskq_thread_wait+0xbe(ffffff07441ae8c0, ffffff07441ae8e0, ffffff07441ae8f0 , ffffff0030386ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae8c0) thread_start+8() ffffff00304cac40 ffffff0730d90060 ffffff06fe273100 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304cac40: ffffff00304ca990 [ ffffff00304ca990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff00304caad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() 
ffffff00304c4c40 ffffff0730d90060 ffffff06fe207040 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304c4c40: ffffff00304c4990 [ ffffff00304c4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff00304c4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff00304b2c40 ffffff0730d90060 ffffff06fe23dd00 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304b2c40: ffffff00304b2990 [ ffffff00304b2990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff00304b2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff00304a0c40 ffffff0730d90060 ffffff06fe20b000 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304a0c40: ffffff00304a0990 [ ffffff00304a0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff00304a0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff0030488c40 ffffff0730d90060 ffffff06fe20d300 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030488c40: ffffff0030488990 [ ffffff0030488990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff0030488ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff0030470c40 ffffff0730d90060 ffffff06fe238e00 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030470c40: ffffff0030470990 [ 
ffffff0030470990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff0030470ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff0030458c40 ffffff0730d90060 ffffff06fe23c800 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030458c40: ffffff0030458990 [ ffffff0030458990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff0030458ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff0030440c40 ffffff0730d90060 ffffff06fe244080 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030440c40: ffffff0030440990 [ ffffff0030440990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff0030440ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff0030428c40 ffffff0730d90060 ffffff06fe20c500 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030428c40: ffffff0030428990 [ ffffff0030428990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff0030428ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff0030410c40 ffffff0730d90060 ffffff06fe2423c0 2 99 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030410c40: ffffff0030410990 [ ffffff0030410990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 
, ffffff0030410ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff00303f8c40 ffffff0730d90060 ffffff06fe20da00 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303f8c40: ffffff00303f8990 [ ffffff00303f8990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff00303f8ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff00303e0c40 ffffff0730d90060 ffffff06fe213ec0 2 0 ffffff07441ae148 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00303e0c40: ffffff00303e0990 [ ffffff00303e0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae148, ffffff07441ae138) taskq_thread_wait+0xbe(ffffff07441ae118, ffffff07441ae138, ffffff07441ae148 , ffffff00303e0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae118) thread_start+8() ffffff0030500c40 ffffff0730d90060 ffffff072a7cbe40 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030500c40: ffffff0030500990 [ ffffff0030500990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff0030500ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff00304f4c40 ffffff0730d90060 ffffff06fe23e400 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304f4c40: ffffff00304f4990 [ ffffff00304f4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff00304f4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff00304e2c40 ffffff0730d90060 ffffff06fe26f840 2 0 ffffff07240782a8 PC: 
_resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304e2c40: ffffff00304e2990 [ ffffff00304e2990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff00304e2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff00304d0c40 ffffff0730d90060 ffffff06fe23c100 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304d0c40: ffffff00304d0990 [ ffffff00304d0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff00304d0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff00304bec40 ffffff0730d90060 ffffff06fe208540 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304bec40: ffffff00304be990 [ ffffff00304be990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff00304bead0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff00304acc40 ffffff0730d90060 ffffff06fe272a00 2 99 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00304acc40: ffffff00304ac990 [ ffffff00304ac990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff00304acad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff003049ac40 ffffff0730d90060 ffffff06fe239c00 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003049ac40: ffffff003049a990 [ ffffff003049a990 _resume_from_idle+0xf4() ] swtch+0x141() 
cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff003049aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff0030482c40 ffffff0730d90060 ffffff06fe236a40 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030482c40: ffffff0030482990 [ ffffff0030482990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff0030482ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff003046ac40 ffffff0730d90060 ffffff06fe242ac0 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003046ac40: ffffff003046a990 [ ffffff003046a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff003046aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff0030452c40 ffffff0730d90060 ffffff06fe2438c0 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030452c40: ffffff0030452990 [ ffffff0030452990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff0030452ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff0030434c40 ffffff0730d90060 ffffff06fe209a40 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030434c40: ffffff0030434990 [ ffffff0030434990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff0030434ad0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff003041cc40 ffffff0730d90060 ffffff06fe235540 2 0 ffffff07240782a8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003041cc40: ffffff003041c990 [ ffffff003041c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240782a8, ffffff0724078298) taskq_thread_wait+0xbe(ffffff0724078278, ffffff0724078298, ffffff07240782a8 , ffffff003041cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078278) thread_start+8() ffffff003048ec40 ffffff0730d90060 ffffff06fe2415c0 2 0 ffffff07441ae030 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003048ec40: ffffff003048e990 [ ffffff003048e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae030, ffffff07441ae020) taskq_thread_wait+0xbe(ffffff07441ae000, ffffff07441ae020, ffffff07441ae030 , ffffff003048ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae000) thread_start+8() ffffff0030476c40 ffffff0730d90060 ffffff06fe23b100 2 0 ffffff07441ae030 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030476c40: ffffff0030476990 [ ffffff0030476990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae030, ffffff07441ae020) taskq_thread_wait+0xbe(ffffff07441ae000, ffffff07441ae020, ffffff07441ae030 , ffffff0030476ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae000) thread_start+8() ffffff003045ec40 ffffff0730d90060 ffffff06fe23eb00 2 0 ffffff07441ae030 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003045ec40: ffffff003045e990 [ ffffff003045e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07441ae030, ffffff07441ae020) taskq_thread_wait+0xbe(ffffff07441ae000, ffffff07441ae020, ffffff07441ae030 , ffffff003045ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff07441ae000) thread_start+8() ffffff00304fac40 ffffff0730d90060 ffffff06fe239500 2 99 ffffff07441ae7d8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack 
pointer for thread ffffff00304fac40: ffffff00304fa990
  [ ffffff00304fa990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ae7d8, ffffff07441ae7c8)
    taskq_thread_wait+0xbe(ffffff07441ae7a8, ffffff07441ae7c8, ffffff07441ae7d8, ffffff00304faad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae7a8)
    thread_start+8()
ffffff00304e8c40 ffffff0730d90060 ffffff06fe270700 2 99 ffffff07441ae7d8
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00304e8c40: ffffff00304e8990
  [ ffffff00304e8990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ae7d8, ffffff07441ae7c8)
    taskq_thread_wait+0xbe(ffffff07441ae7a8, ffffff07441ae7c8, ffffff07441ae7d8, ffffff00304e8ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae7a8)
    thread_start+8()
ffffff00304d6c40 ffffff0730d90060 ffffff06fe273800 2 99 ffffff07441ae7d8
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00304d6c40: ffffff00304d6990
  [ ffffff00304d6990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ae7d8, ffffff07441ae7c8)
    taskq_thread_wait+0xbe(ffffff07441ae7a8, ffffff07441ae7c8, ffffff07441ae7d8, ffffff00304d6ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae7a8)
    thread_start+8()
ffffff00304b8c40 ffffff0730d90060 ffffff06fe26f140 2 99 ffffff07441ae7d8
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00304b8c40: ffffff00304b8990
  [ ffffff00304b8990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ae7d8, ffffff07441ae7c8)
    taskq_thread_wait+0xbe(ffffff07441ae7a8, ffffff07441ae7c8, ffffff07441ae7d8, ffffff00304b8ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae7a8)
    thread_start+8()
ffffff00304a6c40 ffffff0730d90060 ffffff06fe23d600 2 99 ffffff07441ae7d8
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00304a6c40: ffffff00304a6990
  [ ffffff00304a6990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ae7d8, ffffff07441ae7c8)
    taskq_thread_wait+0xbe(ffffff07441ae7a8, ffffff07441ae7c8, ffffff07441ae7d8, ffffff00304a6ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae7a8)
    thread_start+8()
ffffff0030554c40 ffffff0730d90060 ffffff06fe270e00 2 99 ffffff072a5b92b0
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030554c40: ffffff0030554990
  [ ffffff0030554990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b92b0, ffffff072a5b92a0)
    taskq_thread_wait+0xbe(ffffff072a5b9280, ffffff072a5b92a0, ffffff072a5b92b0, ffffff0030554ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9280)
    thread_start+8()
ffffff0030548c40 ffffff0730d90060 ffffff06fe271500 2 99 ffffff072a5b92b0
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030548c40: ffffff0030548990
  [ ffffff0030548990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b92b0, ffffff072a5b92a0)
    taskq_thread_wait+0xbe(ffffff072a5b9280, ffffff072a5b92a0, ffffff072a5b92b0, ffffff0030548ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9280)
    thread_start+8()
ffffff0030536c40 ffffff0730d90060 ffffff072a7ac500 2 99 ffffff072a5b92b0
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030536c40: ffffff0030536990
  [ ffffff0030536990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b92b0, ffffff072a5b92a0)
    taskq_thread_wait+0xbe(ffffff072a5b9280, ffffff072a5b92a0, ffffff072a5b92b0, ffffff0030536ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9280)
    thread_start+8()
ffffff0030524c40 ffffff0730d90060 ffffff072a7ce140 2 99 ffffff072a5b92b0
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030524c40: ffffff0030524990
  [ ffffff0030524990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b92b0, ffffff072a5b92a0)
    taskq_thread_wait+0xbe(ffffff072a5b9280, ffffff072a5b92a0, ffffff072a5b92b0, ffffff0030524ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9280)
    thread_start+8()
ffffff0030512c40 ffffff0730d90060 ffffff072a7ad300 2 99 ffffff072a5b92b0
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030512c40: ffffff0030512990
  [ ffffff0030512990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b92b0, ffffff072a5b92a0)
    taskq_thread_wait+0xbe(ffffff072a5b9280, ffffff072a5b92a0, ffffff072a5b92b0, ffffff0030512ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9280)
    thread_start+8()
ffffff0030506c40 ffffff0730d90060 ffffff06fe237840 2 99 ffffff072a5b92b0
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030506c40: ffffff0030506990
  [ ffffff0030506990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b92b0, ffffff072a5b92a0)
    taskq_thread_wait+0xbe(ffffff072a5b9280, ffffff072a5b92a0, ffffff072a5b92b0, ffffff0030506ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9280)
    thread_start+8()
ffffff00304eec40 ffffff0730d90060 ffffff06fe247880 2 99 ffffff072a5b92b0
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00304eec40: ffffff00304ee990
  [ ffffff00304ee990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b92b0, ffffff072a5b92a0)
    taskq_thread_wait+0xbe(ffffff072a5b9280, ffffff072a5b92a0, ffffff072a5b92b0, ffffff00304eead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9280)
    thread_start+8()
ffffff00304dcc40 ffffff0730d90060 ffffff06fe26ea40 2 99 ffffff072a5b92b0
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00304dcc40: ffffff00304dc990
  [ ffffff00304dc990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b92b0, ffffff072a5b92a0)
    taskq_thread_wait+0xbe(ffffff072a5b9280, ffffff072a5b92a0, ffffff072a5b92b0, ffffff00304dcad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9280)
    thread_start+8()
ffffff0030542c40 ffffff0730d90060 ffffff072a7cb740 2 0 ffffff072a5b9828
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030542c40: ffffff0030542990
  [ ffffff0030542990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9828, ffffff072a5b9818)
    taskq_thread_wait+0xbe(ffffff072a5b97f8, ffffff072a5b9818, ffffff072a5b9828, ffffff0030542ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b97f8)
    thread_start+8()
ffffff0030530c40 ffffff0730d90060 ffffff072a7ca880 2 99 ffffff072a5b9828
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030530c40: ffffff0030530990
  [ ffffff0030530990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9828, ffffff072a5b9818)
    taskq_thread_wait+0xbe(ffffff072a5b97f8, ffffff072a5b9818, ffffff072a5b9828, ffffff0030530ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b97f8)
    thread_start+8()
ffffff003051ec40 ffffff0730d90060 ffffff06fe271c00 2 99 ffffff072a5b9828
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003051ec40: ffffff003051e990
  [ ffffff003051e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9828, ffffff072a5b9818)
    taskq_thread_wait+0xbe(ffffff072a5b97f8, ffffff072a5b9818, ffffff072a5b9828, ffffff003051ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b97f8)
    thread_start+8()
ffffff0030518c40 ffffff0730d90060 ffffff06fe23cf00 2 99 ffffff072a5b9828
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030518c40: ffffff0030518990
  [ ffffff0030518990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9828, ffffff072a5b9818)
    taskq_thread_wait+0xbe(ffffff072a5b97f8, ffffff072a5b9818, ffffff072a5b9828, ffffff0030518ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b97f8)
    thread_start+8()
ffffff003050cc40 ffffff0730d90060 ffffff072a7cd340 2 99 ffffff072a5b9828
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003050cc40: ffffff003050c990
  [ ffffff003050c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9828, ffffff072a5b9818)
    taskq_thread_wait+0xbe(ffffff072a5b97f8, ffffff072a5b9818, ffffff072a5b9828, ffffff003050cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b97f8)
    thread_start+8()
ffffff0030602c40 ffffff0730d90060 ffffff072a7bbe00 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030602c40: ffffff0030602990
  [ ffffff0030602990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff0030602ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff00305eac40 ffffff0730d90060 ffffff072a7c68c0 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305eac40: ffffff00305ea990
  [ ffffff00305ea990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff00305eaad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff00305d2c40 ffffff0730d90060 ffffff072a7c8580 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305d2c40: ffffff00305d2990
  [ ffffff00305d2990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff00305d2ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff00305bac40 ffffff0730d90060 ffffff072a7be800 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305bac40: ffffff00305ba990
  [ ffffff00305ba990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff00305baad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff00305a8c40 ffffff0730d90060 ffffff072a7c9380 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305a8c40: ffffff00305a8990
  [ ffffff00305a8990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff00305a8ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff0030596c40 ffffff0730d90060 ffffff072a7ca180 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030596c40: ffffff0030596990
  [ ffffff0030596990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff0030596ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff0030584c40 ffffff0730d90060 ffffff072a7ae800 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030584c40: ffffff0030584990
  [ ffffff0030584990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff0030584ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff003056cc40 ffffff0730d90060 ffffff06fe23b800 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003056cc40: ffffff003056c990
  [ ffffff003056c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff003056cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff0030560c40 ffffff0730d90060 ffffff072a7cc540 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030560c40: ffffff0030560990
  [ ffffff0030560990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff0030560ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff003054ec40 ffffff0730d90060 ffffff072a7ce840 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003054ec40: ffffff003054e990
  [ ffffff003054e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff003054ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff003053cc40 ffffff0730d90060 ffffff072a7ae100 2 0 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003053cc40: ffffff003053c990
  [ ffffff003053c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff003053cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff003052ac40 ffffff0730d90060 ffffff06fe270000 2 99 ffffff0724078938
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003052ac40: ffffff003052a990
  [ ffffff003052a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0724078938, ffffff0724078928)
    taskq_thread_wait+0xbe(ffffff0724078908, ffffff0724078928, ffffff0724078938, ffffff003052aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff0724078908)
    thread_start+8()
ffffff0030626c40 ffffff0730d90060 ffffff072a7bcc00 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030626c40: ffffff0030626990
  [ ffffff0030626990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff0030626ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff003060ec40 ffffff0730d90060 ffffff072a7bb000 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003060ec40: ffffff003060e990
  [ ffffff003060e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff003060ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff00305fcc40 ffffff0730d90060 ffffff072a7c37c0 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305fcc40: ffffff00305fc990
  [ ffffff00305fc990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff00305fcad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff00305e4c40 ffffff0730d90060 ffffff072a7c7e80 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305e4c40: ffffff00305e4990
  [ ffffff00305e4990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff00305e4ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff00305ccc40 ffffff0730d90060 ffffff072a7bd300 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305ccc40: ffffff00305cc990
  [ ffffff00305cc990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff00305ccad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff00305b4c40 ffffff0730d90060 ffffff072a7c8c80 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305b4c40: ffffff00305b4990
  [ ffffff00305b4990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff00305b4ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff00305a2c40 ffffff0730d90060 ffffff072a7acc00 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305a2c40: ffffff00305a2990
  [ ffffff00305a2990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff00305a2ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff0030590c40 ffffff0730d90060 ffffff072a7ccc40 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030590c40: ffffff0030590990
  [ ffffff0030590990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff0030590ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff003057ec40 ffffff0730d90060 ffffff072a7ab000 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003057ec40: ffffff003057e990
  [ ffffff003057e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff003057ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff0030572c40 ffffff0730d90060 ffffff072a7ab700 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030572c40: ffffff0030572990
  [ ffffff0030572990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff0030572ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff0030566c40 ffffff0730d90060 ffffff06fe272300 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030566c40: ffffff0030566990
  [ ffffff0030566990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff0030566ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff003055ac40 ffffff0730d90060 ffffff06f134e140 2 99 ffffff072ab14610
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003055ac40: ffffff003055a990
  [ ffffff003055a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14610, ffffff072ab14600)
    taskq_thread_wait+0xbe(ffffff072ab145e0, ffffff072ab14600, ffffff072ab14610, ffffff003055aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab145e0)
    thread_start+8()
ffffff0030662c40 ffffff0730d90060 ffffff072a7b8540 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030662c40: ffffff0030662990
  [ ffffff0030662990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff0030662ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff003064ac40 ffffff0730d90060 ffffff072a7c45c0 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003064ac40: ffffff003064a990
  [ ffffff003064a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff003064aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff0030638c40 ffffff0730d90060 ffffff072a7c5ac0 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030638c40: ffffff0030638990
  [ ffffff0030638990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff0030638ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff003061ac40 ffffff0730d90060 ffffff072a7b8c40 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003061ac40: ffffff003061a990
  [ ffffff003061a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff003061aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff0030608c40 ffffff0730d90060 ffffff072a7c2200 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030608c40: ffffff0030608990
  [ ffffff0030608990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff0030608ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff00305f6c40 ffffff0730d90060 ffffff072a7c2900 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305f6c40: ffffff00305f6990
  [ ffffff00305f6990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff00305f6ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff00305dec40 ffffff0730d90060 ffffff072a7bda00 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305dec40: ffffff00305de990
  [ ffffff00305de990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff00305dead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff00305c6c40 ffffff0730d90060 ffffff072a7c7080 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305c6c40: ffffff00305c6990
  [ ffffff00305c6990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff00305c6ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff00305aec40 ffffff0730d90060 ffffff072a7abe00 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305aec40: ffffff00305ae990
  [ ffffff00305ae990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff00305aead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff003059cc40 ffffff0730d90060 ffffff072a7cda40 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003059cc40: ffffff003059c990
  [ ffffff003059c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff003059cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff003058ac40 ffffff0730d90060 ffffff072a7cb040 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003058ac40: ffffff003058a990
  [ ffffff003058a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff003058aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff0030578c40 ffffff0730d90060 ffffff072a7ada00 2 99 ffffff072a5b4a60
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030578c40: ffffff0030578990
  [ ffffff0030578990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4a60, ffffff072a5b4a50)
    taskq_thread_wait+0xbe(ffffff072a5b4a30, ffffff072a5b4a50, ffffff072a5b4a60, ffffff0030578ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b4a30)
    thread_start+8()
ffffff00306c2c40 ffffff0730d90060 ffffff072a7a7100 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306c2c40: ffffff00306c2990
  [ ffffff00306c2990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff00306c2ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff00306aac40 ffffff0730d90060 ffffff072a7b7040 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306aac40: ffffff00306aa990
  [ ffffff00306aa990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff00306aaad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff0030692c40 ffffff0730d90060 ffffff072a7b9340 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030692c40: ffffff0030692990
  [ ffffff0030692990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff0030692ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff003067ac40 ffffff0730d90060 ffffff072a7b6180 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003067ac40: ffffff003067a990
  [ ffffff003067a990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff003067aad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff0030668c40 ffffff0730d90060 ffffff072a7bf100 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030668c40: ffffff0030668990
  [ ffffff0030668990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff0030668ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff0030650c40 ffffff0730d90060 ffffff072a7b4580 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030650c40: ffffff0030650990
  [ ffffff0030650990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff0030650ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff003063ec40 ffffff0730d90060 ffffff072a7b28c0 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003063ec40: ffffff003063e990
  [ ffffff003063e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff003063ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff003062cc40 ffffff0730d90060 ffffff072a7bff00 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003062cc40: ffffff003062c990
  [ ffffff003062c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff003062cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff0030614c40 ffffff0730d90060 ffffff072a7bb700 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030614c40: ffffff0030614990
  [ ffffff0030614990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff0030614ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff00305f0c40 ffffff0730d90060 ffffff072a7be100 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305f0c40: ffffff00305f0990
  [ ffffff00305f0990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff00305f0ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff00305d8c40 ffffff0730d90060 ffffff072a7c7780 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305d8c40: ffffff00305d8990
  [ ffffff00305d8990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff00305d8ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff00305c0c40 ffffff0730d90060 ffffff072a7c9a80 2 99 ffffff072a5b4718
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00305c0c40: ffffff00305c0990
  [ ffffff00305c0990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b4718, ffffff072a5b4708)
    taskq_thread_wait+0xbe(ffffff072a5b46e8, ffffff072a5b4708, ffffff072a5b4718, ffffff00305c0ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b46e8)
    thread_start+8()
ffffff0030722c40 ffffff0730d90060 ffffff072a7b21c0 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030722c40: ffffff0030722990
  [ ffffff0030722990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff0030722ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff0030704c40 ffffff0730d90060 ffffff072a7b7740 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030704c40: ffffff0030704990
  [ ffffff0030704990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff0030704ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff00306e6c40 ffffff0730d90060 ffffff072a7afec0 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306e6c40: ffffff00306e6990
  [ ffffff00306e6990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff00306e6ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff00306cec40 ffffff0730d90060 ffffff072a7b6880 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306cec40: ffffff00306ce990
  [ ffffff00306ce990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff00306cead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff00306bcc40 ffffff0730d90060 ffffff072a7c53c0 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306bcc40: ffffff00306bc990
  [ ffffff00306bc990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff00306bcad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff00306a4c40 ffffff0730d90060 ffffff072a7c30c0 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306a4c40: ffffff00306a4990
  [ ffffff00306a4990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff00306a4ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff003068cc40 ffffff0730d90060 ffffff072a7b4c80 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003068cc40: ffffff003068c990
  [ ffffff003068c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff003068cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff0030674c40 ffffff0730d90060 ffffff072a7c4cc0 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030674c40: ffffff0030674990
  [ ffffff0030674990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff0030674ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff003065cc40 ffffff0730d90060 ffffff072a7bc500 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003065cc40: ffffff003065c990
  [ ffffff003065c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff003065cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff0030644c40 ffffff0730d90060 ffffff072a7c1400 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030644c40: ffffff0030644990
  [ ffffff0030644990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff0030644ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff0030632c40 ffffff0730d90060 ffffff072a7ba140 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030632c40: ffffff0030632990
  [ ffffff0030632990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff0030632ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff0030620c40 ffffff0730d90060 ffffff072a7c0d00 2 99 ffffff07441aee68
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030620c40: ffffff0030620990
  [ ffffff0030620990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441aee68, ffffff07441aee58)
    taskq_thread_wait+0xbe(ffffff07441aee38, ffffff07441aee58, ffffff07441aee68, ffffff0030620ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441aee38)
    thread_start+8()
ffffff0030758c40 ffffff0730d90060 ffffff072a7af7c0 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030758c40: ffffff0030758990
  [ ffffff0030758990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff0030758ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff0030740c40 ffffff0730d90060 ffffff072a7a0c40 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030740c40: ffffff0030740990
  [ ffffff0030740990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff0030740ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff003072ec40 ffffff0730d90060 ffffff072a7a8600 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003072ec40: ffffff003072e990
  [ ffffff003072e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff003072ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff0030710c40 ffffff0730d90060 ffffff072a7a3700 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030710c40: ffffff0030710990
  [ ffffff0030710990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff0030710ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff00306f8c40 ffffff0730d90060 ffffff072a7b3780 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306f8c40: ffffff00306f8990
  [ ffffff00306f8990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff00306f8ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff00306e0c40 ffffff0730d90060 ffffff072a7a2840 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306e0c40: ffffff00306e0990
  [ ffffff00306e0990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff00306e0ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff00306c8c40 ffffff0730d90060 ffffff072a7b9a40 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306c8c40: ffffff00306c8990
  [ ffffff00306c8990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff00306c8ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff00306b0c40 ffffff0730d90060 ffffff072a7b3080 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff00306b0c40: ffffff00306b0990
  [ ffffff00306b0990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff00306b0ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff0030698c40 ffffff0730d90060 ffffff072a7a8d00 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030698c40: ffffff0030698990
  [ ffffff0030698990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff0030698ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff0030680c40 ffffff0730d90060 ffffff072a7c1b00 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030680c40: ffffff0030680990
  [ ffffff0030680990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff0030680ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff003066ec40 ffffff0730d90060 ffffff072a7c0600 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003066ec40: ffffff003066e990
  [ ffffff003066e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff003066ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff0030656c40 ffffff0730d90060 ffffff072a7bf800 2 99 ffffff07441eae70
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030656c40: ffffff0030656990
  [ ffffff0030656990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eae70, ffffff07441eae60)
    taskq_thread_wait+0xbe(ffffff07441eae40, ffffff07441eae60, ffffff07441eae70, ffffff0030656ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eae40)
    thread_start+8()
ffffff0030788c40 ffffff0730d90060 ffffff072a7a9400 2 99 ffffff072a5b95f8
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030788c40: ffffff0030788990
  [ ffffff0030788990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8)
    taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8, ffffff0030788ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b95c8)
    thread_start+8()
ffffff0030770c40 ffffff0730d90060 ffffff072a7a0540 2 99 ffffff072a5b95f8
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030770c40: ffffff0030770990
  [ ffffff0030770990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8)
    taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8, ffffff0030770ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b95c8)
    thread_start+8()
ffffff003075ec40 ffffff0730d90060 ffffff072a7b13c0 2 99
ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003075ec40: ffffff003075e990 [ ffffff003075e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff003075ead0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff003074cc40 ffffff0730d90060 ffffff072a7aa900 2 99 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003074cc40: ffffff003074c990 [ ffffff003074c990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff003074cad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff0030734c40 ffffff0730d90060 ffffff072a7a7800 2 99 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030734c40: ffffff0030734990 [ ffffff0030734990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff0030734ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff0030716c40 ffffff0730d90060 ffffff072a7b7e40 2 99 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030716c40: ffffff0030716990 [ ffffff0030716990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff0030716ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff00306fec40 ffffff0730d90060 ffffff072a7a7f00 2 99 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00306fec40: ffffff00306fe990 [ ffffff00306fe990 _resume_from_idle+0xf4() ] 
swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff00306fead0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff00306ecc40 ffffff0730d90060 ffffff072a7b5380 2 99 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00306ecc40: ffffff00306ec990 [ ffffff00306ec990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff00306ecad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff00306d4c40 ffffff0730d90060 ffffff072a7a9b00 2 99 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00306d4c40: ffffff00306d4990 [ ffffff00306d4990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff00306d4ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff00306b6c40 ffffff0730d90060 ffffff072a7c3ec0 2 0 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00306b6c40: ffffff00306b6990 [ ffffff00306b6990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff00306b6ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff003069ec40 ffffff0730d90060 ffffff072a7c61c0 2 99 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003069ec40: ffffff003069e990 [ ffffff003069e990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff003069ead0, ffffffffffffffff) 
taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff0030686c40 ffffff0730d90060 ffffff072a7ba840 2 99 ffffff072a5b95f8 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030686c40: ffffff0030686990 [ ffffff0030686990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b95f8, ffffff072a5b95e8) taskq_thread_wait+0xbe(ffffff072a5b95c8, ffffff072a5b95e8, ffffff072a5b95f8 , ffffff0030686ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b95c8) thread_start+8() ffffff00307a0c40 ffffff0730d90060 ffffff072a7a6100 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00307a0c40: ffffff00307a0990 [ ffffff00307a0990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff00307a0ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff003079ac40 ffffff0730d90060 ffffff072a79b080 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003079ac40: ffffff003079a990 [ ffffff003079a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff003079aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff0030794c40 ffffff0730d90060 ffffff072a7a3e00 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030794c40: ffffff0030794990 [ ffffff0030794990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff0030794ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff0030782c40 ffffff0730d90060 ffffff072a79be80 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack 
pointer for thread ffffff0030782c40: ffffff0030782990 [ ffffff0030782990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff0030782ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff0030776c40 ffffff0730d90060 ffffff072a7b0cc0 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030776c40: ffffff0030776990 [ ffffff0030776990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff0030776ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff003076ac40 ffffff0730d90060 ffffff072a7a4c00 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003076ac40: ffffff003076a990 [ ffffff003076a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff003076aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff0030752c40 ffffff0730d90060 ffffff072a7b05c0 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030752c40: ffffff0030752990 [ ffffff0030752990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff0030752ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff003073ac40 ffffff0730d90060 ffffff072a79f740 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003073ac40: ffffff003073a990 [ ffffff003073a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) 
taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff003073aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff0030728c40 ffffff0730d90060 ffffff072a7b3e80 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff0030728c40: ffffff0030728990 [ ffffff0030728990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff0030728ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff003070ac40 ffffff0730d90060 ffffff072a7aa200 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff003070ac40: ffffff003070a990 [ ffffff003070a990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff003070aad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff00306f2c40 ffffff0730d90060 ffffff072a7b5a80 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00306f2c40: ffffff00306f2990 [ ffffff00306f2990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff00306f2ad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() ffffff00306dac40 ffffff0730d90060 ffffff072a7b1ac0 2 99 ffffff072a5b9da0 PC: _resume_from_idle+0xf4 CMD: zpool-zaphod stack pointer for thread ffffff00306dac40: ffffff00306da990 [ ffffff00306da990 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff072a5b9da0, ffffff072a5b9d90) taskq_thread_wait+0xbe(ffffff072a5b9d70, ffffff072a5b9d90, ffffff072a5b9da0 , ffffff00306daad0, ffffffffffffffff) taskq_thread+0x37c(ffffff072a5b9d70) thread_start+8() 
ffffff003071cc40 ffffff0730d90060 ffffff072a7a5300   2  99 ffffff07441eac40
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003071cc40: ffffff003071c990
  [ ffffff003071c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eac40, ffffff07441eac30)
    taskq_thread_wait+0xbe(ffffff07441eac10, ffffff07441eac30, ffffff07441eac40, ffffff003071cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eac10)
    thread_start+8()
ffffff0030746c40 ffffff0730d90060 ffffff072a7a6800   2  99 ffffff07441ae490
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030746c40: ffffff0030746990
  [ ffffff0030746990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ae490, ffffff07441ae480)
    taskq_thread_wait+0xbe(ffffff07441ae460, ffffff07441ae480, ffffff07441ae490, ffffff0030746ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae460)
    thread_start+8()
ffffff0030764c40 ffffff0730d90060 ffffff072a7a3000   2  99 ffffff07441ae378
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff0030764c40: ffffff0030764990
  [ ffffff0030764990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ae378, ffffff07441ae368)
    taskq_thread_wait+0xbe(ffffff07441ae348, ffffff07441ae368, ffffff07441ae378, ffffff0030764ad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae348)
    thread_start+8()
ffffff003077cc40 ffffff0730d90060 ffffff072a7a2140   2  99 ffffff07441eab28
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003077cc40: ffffff003077c990
  [ ffffff003077c990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eab28, ffffff07441eab18)
    taskq_thread_wait+0xbe(ffffff07441eaaf8, ffffff07441eab18, ffffff07441eab28, ffffff003077cad0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441eaaf8)
    thread_start+8()
ffffff003078ec40 ffffff0730d90060 ffffff072a79a8c0   2  99 ffffff07441ead58
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff003078ec40: ffffff003078e990
  [ ffffff003078e990 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ead58, ffffff07441ead48)
    taskq_thread_wait+0xbe(ffffff07441ead28, ffffff07441ead48, ffffff07441ead58, ffffff003078ead0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ead28)
    thread_start+8()
ffffff002fe2dc40 ffffff0730d90060 ffffff06fe21d340   2  99 ffffff0731a8d8f8
  PC: _resume_from_idle+0xf4    CMD: zpool-zaphod
  stack pointer for thread ffffff002fe2dc40: ffffff002fe2da50
  [ ffffff002fe2da50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff0731a8d8f8, ffffff0731a8d8f0)
    spa_thread+0x1db(ffffff0731a8d000)
    thread_start+8()
ffffff00307acc40 fffffffffbc2ea80 0   0  60 ffffff07441eaa10
  PC: _resume_from_idle+0xf4    TASKQ: metaslab_group_taskq
  stack pointer for thread ffffff00307acc40: ffffff00307aca80
  [ ffffff00307aca80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eaa10, ffffff07441eaa00)
    taskq_thread_wait+0xbe(ffffff07441ea9e0, ffffff07441eaa00, ffffff07441eaa10, ffffff00307acbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea9e0)
    thread_start+8()
ffffff00307a6c40 fffffffffbc2ea80 0   0  60 ffffff07441eaa10
  PC: _resume_from_idle+0xf4    TASKQ: metaslab_group_taskq
  stack pointer for thread ffffff00307a6c40: ffffff00307a6a80
  [ ffffff00307a6a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441eaa10, ffffff07441eaa00)
    taskq_thread_wait+0xbe(ffffff07441ea9e0, ffffff07441eaa00, ffffff07441eaa10, ffffff00307a6bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea9e0)
    thread_start+8()
ffffff00307b8c40 fffffffffbc2ea80 0   0  60 ffffff072ab14098
  PC: _resume_from_idle+0xf4    TASKQ: metaslab_group_taskq
  stack pointer for thread ffffff00307b8c40: ffffff00307b8a80
  [ ffffff00307b8a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14098, ffffff072ab14088)
    taskq_thread_wait+0xbe(ffffff072ab14068, ffffff072ab14088, ffffff072ab14098, ffffff00307b8bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14068)
    thread_start+8()
ffffff00307b2c40 fffffffffbc2ea80 0   0  60 ffffff072ab14098
  PC: _resume_from_idle+0xf4    TASKQ: metaslab_group_taskq
  stack pointer for thread ffffff00307b2c40: ffffff00307b2a80
  [ ffffff00307b2a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072ab14098, ffffff072ab14088)
    taskq_thread_wait+0xbe(ffffff072ab14068, ffffff072ab14088, ffffff072ab14098, ffffff00307b2bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072ab14068)
    thread_start+8()
ffffff00307c4c40 fffffffffbc2ea80 0   0  60 ffffff072a5b9198
  PC: _resume_from_idle+0xf4    TASKQ: metaslab_group_taskq
  stack pointer for thread ffffff00307c4c40: ffffff00307c4a80
  [ ffffff00307c4a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9198, ffffff072a5b9188)
    taskq_thread_wait+0xbe(ffffff072a5b9168, ffffff072a5b9188, ffffff072a5b9198, ffffff00307c4bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9168)
    thread_start+8()
ffffff00307bec40 fffffffffbc2ea80 0   0  60 ffffff072a5b9198
  PC: _resume_from_idle+0xf4    TASKQ: metaslab_group_taskq
  stack pointer for thread ffffff00307bec40: ffffff00307bea80
  [ ffffff00307bea80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072a5b9198, ffffff072a5b9188)
    taskq_thread_wait+0xbe(ffffff072a5b9168, ffffff072a5b9188, ffffff072a5b9198, ffffff00307bebc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff072a5b9168)
    thread_start+8()
ffffff0030830c40 fffffffffbc2ea80 0   0  60 ffffff07441ae6c0
  PC: _resume_from_idle+0xf4    TASKQ: zfs_vn_rele_taskq
  stack pointer for thread ffffff0030830c40: ffffff0030830a80
  [ ffffff0030830a80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ae6c0, ffffff07441ae6b0)
    taskq_thread_wait+0xbe(ffffff07441ae690, ffffff07441ae6b0, ffffff07441ae6c0, ffffff0030830bc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ae690)
    thread_start+8()
ffffff00307d6c40 fffffffffbc2ea80 0   0  60 ffffff072b0f4204
  PC: _resume_from_idle+0xf4    THREAD: txg_quiesce_thread()
  stack pointer for thread ffffff00307d6c40: ffffff00307d6ad0
  [ ffffff00307d6ad0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff072b0f4204, ffffff072b0f41c0)
    txg_thread_wait+0xaf(ffffff072b0f41b8, ffffff00307d6bc0, ffffff072b0f4204, 0)
    txg_quiesce_thread+0x106(ffffff072b0f4040)
    thread_start+8()
ffffff002f62ec40 fffffffffbc2ea80 0   0  60 ffffff072b0f4200
  PC: _resume_from_idle+0xf4    THREAD: txg_sync_thread()
  stack pointer for thread ffffff002f62ec40: ffffff002f62ea10
  [ ffffff002f62ea10 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_hires+0xec(ffffff072b0f4200, ffffff072b0f41c0, 12a05f200, 989680, 0)
    cv_timedwait+0x5c(ffffff072b0f4200, ffffff072b0f41c0, 90a83d)
    txg_thread_wait+0x5f(ffffff072b0f41b8, ffffff002f62ebc0, ffffff072b0f4200, 1f4)
    txg_sync_thread+0x111(ffffff072b0f4040)
    thread_start+8()
ffffff003080cc40 fffffffffbc2ea80 0   0  60 ffffff07441ea498
  PC: _resume_from_idle+0xf4    TASKQ: zil_clean
  stack pointer for thread ffffff003080cc40: ffffff003080ca80
  [ ffffff003080ca80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait+0x70(ffffff07441ea498, ffffff07441ea488)
    taskq_thread_wait+0xbe(ffffff07441ea468, ffffff07441ea488, ffffff07441ea498, ffffff003080cbc0, ffffffffffffffff)
    taskq_thread+0x37c(ffffff07441ea468)
    thread_start+8()
ffffff072473f120 ffffff0cb7de1048 ffffff06fe285500   1  59 ffffff072473f30e
  PC: _resume_from_idle+0xf4    CMD: tail -f /var/adm/messages
  stack pointer for thread ffffff072473f120: ffffff002f2a2cc0
  [ ffffff002f2a2cc0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff072473f30e, ffffff072473f310, 3b9ac7da, 1, 4)
    cv_waituntil_sig+0xfa(ffffff072473f30e, ffffff072473f310, ffffff002f2a2e70, 3)
    nanosleep+0x19f(8047aa8, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c0d420 ffffff072becb098 ffffff06fe212200   1  59 ffffff072becb158
  PC: _resume_from_idle+0xf4    CMD: -bash
  stack pointer for thread ffffff0730c0d420: ffffff002ed78c30
  [ ffffff002ed78c30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072becb158, fffffffffbcf29b0, 0)
    cv_wait_sig_swap+0x17(ffffff072becb158, fffffffffbcf29b0)
    waitid+0x24d(7, 0, ffffff002ed78e40, f)
    waitsys32+0x36(7, 0, 8047ba0, f)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c5f840 ffffff07314e6048 ffffff072a832a80   1  60 0
  PC: panicsys+0x102    CMD: rm ./1441644902_8442_0/fgroups/fHarter/fFrancis/fGIS/fUnsorted/fMarius/fAg-Grou
  stack pointer for thread ffffff0730c5f840: ffffff002ed548e0
    param_preset()
    die+0xdf(e, ffffff002ed54b00, e8, 0)
    trap+0xdb3(ffffff002ed54b00, e8, 0)
    0xfffffffffb8001d6()
    zfs_remove+0x395(ffffff0c6acd4440, ffffff0822395e44, ffffff073154d760, 0, 0)
    fop_remove+0x5b(ffffff0c6acd4440, ffffff0822395e44, ffffff073154d760, 0, 0)
    vn_removeat+0x382(0, 808d050, 0, 0)
    unlinkat+0x59(ffd19553, 808d050, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff07246da460 ffffff0740f53098 ffffff06fe27b8c0   1  59 ffffff0740f53158
  PC: _resume_from_idle+0xf4    CMD: -bash
  stack pointer for thread ffffff07246da460: ffffff002f5e3c30
  [ ffffff002f5e3c30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0740f53158, fffffffffbcf29b0, 0)
    cv_wait_sig_swap+0x17(ffffff0740f53158, fffffffffbcf29b0)
    waitid+0x24d(7, 0, ffffff002f5e3e40, f)
    waitsys32+0x36(7, 0, 8047ba0, f)
    _sys_sysenter_post_swapgs+0x149()
ffffff0724225520 ffffff0724236078 ffffff072a830e80   1  59 ffffff072ab3730a
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/ssh/sshd
  stack pointer for thread ffffff0724225520: ffffff002ed2ac50
  [ ffffff002ed2ac50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072ab3730a, ffffff072ab372d0, 0)
    cv_wait_sig_swap+0x17(ffffff072ab3730a, ffffff072ab372d0)
    cv_timedwait_sig_hrtime+0x35(ffffff072ab3730a, ffffff072ab372d0, ffffffffffffffff)
    poll_common+0x504(8047300, 7, 0, 0)
    pollsys+0xe7(8047300, 7, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0727651400 ffffff0731536068 ffffff06fe274800   1  59 ffffff072ab3758a
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/ssh/sshd
  stack pointer for thread ffffff0727651400: ffffff002fc11c50
  [ ffffff002fc11c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072ab3758a, ffffff072ab37550, 0)
    cv_wait_sig_swap+0x17(ffffff072ab3758a, ffffff072ab37550)
    cv_timedwait_sig_hrtime+0x35(ffffff072ab3758a, ffffff072ab37550, ffffffffffffffff)
    poll_common+0x504(8047380, 2, 0, 0)
    pollsys+0xe7(8047380, 2, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0724937c60 ffffff072d448020 ffffff072a7af0c0   1  59 ffffff0727660872
  PC: _resume_from_idle+0xf4    CMD: /usr/bin/bash
  stack pointer for thread ffffff0724937c60: ffffff002ee0e9f0
  [ ffffff002ee0e9f0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig+0x185(ffffff0727660872, ffffff07325386d8)
    str_cv_wait+0x27(ffffff0727660872, ffffff07325386d8, ffffffffffffffff, 0)
    strwaitq+0x2c3(ffffff0732538658, 2, 1, 2803, ffffffffffffffff, ffffff002ee0ebb8)
    strread+0x144(ffffff072b12fc40, ffffff002ee0edf0, ffffff06f69d5a18)
    spec_read+0x66(ffffff072b12fc40, ffffff002ee0edf0, 0, ffffff06f69d5a18, 0)
    fop_read+0x5b(ffffff072b12fc40, ffffff002ee0edf0, 0, ffffff06f69d5a18, 0)
    read+0x2a7(0, 804722b, 1)
    read32+0x1e(0, 804722b, 1)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c0f060 ffffff072ad43000 ffffff072a79b780   1  59 ffffff0c7e3eaf02
  PC: _resume_from_idle+0xf4    CMD: sudo -s
  stack pointer for thread ffffff0730c0f060: ffffff002eed4c50
  [ ffffff002eed4c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0c7e3eaf02, ffffff0c7e3eaec8, 0)
    cv_wait_sig_swap+0x17(ffffff0c7e3eaf02, ffffff0c7e3eaec8)
    cv_timedwait_sig_hrtime+0x35(ffffff0c7e3eaf02, ffffff0c7e3eaec8, ffffffffffffffff)
    poll_common+0x504(8047a10, 2, 0, 0)
    pollsys+0xe7(8047a10, 2, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0731524c20 ffffff0730da1098 ffffff072a797ec0   1  59 ffffff0730da1158
  PC: _resume_from_idle+0xf4    CMD: -bash
  stack pointer for thread ffffff0731524c20: ffffff002f5c0c30
  [ ffffff002f5c0c30 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0730da1158, fffffffffbcf29b0, 0)
    cv_wait_sig_swap+0x17(ffffff0730da1158, fffffffffbcf29b0)
    waitid+0x24d(7, 0, ffffff002f5c0e40, f)
    waitsys32+0x36(7, 0, 8047bb0, f)
    _sys_sysenter_post_swapgs+0x149()
ffffff0731642c20 ffffff0732436078 ffffff072a799ac0   1  59 ffffff0c8a0719fa
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/ssh/sshd
  stack pointer for thread ffffff0731642c20: ffffff002ed7ec50
  [ ffffff002ed7ec50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0c8a0719fa, ffffff0c8a0719c0, 0)
    cv_wait_sig_swap+0x17(ffffff0c8a0719fa, ffffff0c8a0719c0)
    cv_timedwait_sig_hrtime+0x35(ffffff0c8a0719fa, ffffff0c8a0719c0, ffffffffffffffff)
    poll_common+0x504(8047360, 5, 0, 0)
    pollsys+0xe7(8047360, 5, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c54160 ffffff0730c9c058 ffffff072a792100   1  59 ffffff0c89f784f2
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/ssh/sshd
  stack pointer for thread ffffff0730c54160: ffffff002ed36c50
  [ ffffff002ed36c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0c89f784f2, ffffff0c89f784b8, 0)
    cv_wait_sig_swap+0x17(ffffff0c89f784f2, ffffff0c89f784b8)
    cv_timedwait_sig_hrtime+0x35(ffffff0c89f784f2, ffffff0c89f784b8, ffffffffffffffff)
    poll_common+0x504(8047380, 2, 0, 0)
    pollsys+0xe7(8047380, 2, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff07315aab60 ffffff07314ee040 ffffff06fe284700   1  59 ffffff0730d801da
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/ssh/sshd
  stack pointer for thread ffffff07315aab60: ffffff002ef52c50
  [ ffffff002ef52c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0730d801da, ffffff0730d801a0, 0)
    cv_wait_sig_swap+0x17(ffffff0730d801da, ffffff0730d801a0)
    cv_timedwait_sig_hrtime+0x35(ffffff0730d801da, ffffff0730d801a0, ffffffffffffffff)
    poll_common+0x504(8047470, 1, 0, 0)
    pollsys+0xe7(8047470, 1, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff073156bb00 ffffff07314bb038 ffffff072b24c540   1  59 ffffff0724745af2
  PC: _resume_from_idle+0xf4    CMD: /usr/sbin/in.routed
  stack pointer for thread ffffff073156bb00: ffffff002f5e9c60
  [ ffffff002f5e9c60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff0724745af2, ffffff0724745ab8, 565796f571cb, 1, 3)
    cv_timedwait_sig_hrtime+0x2a(ffffff0724745af2, ffffff0724745ab8, 565796f571cb)
    poll_common+0x504(8047a40, 4, ffffff002f5e9e40, 0)
    pollsys+0xe7(8047a40, 4, 8047b18, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c7e060 ffffff0730c6e038 ffffff072a844e80   1  59 ffffff0723a186da
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/dbus-daemon --system
  stack pointer for thread ffffff0730c7e060: ffffff002f3abc50
  [ ffffff002f3abc50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0723a186da, ffffff0723a186a0, 0)
    cv_wait_sig_swap+0x17(ffffff0723a186da, ffffff0723a186a0)
    cv_timedwait_sig_hrtime+0x35(ffffff0723a186da, ffffff0723a186a0, ffffffffffffffff)
    poll_common+0x504(80d83f8, 3, 0, 0)
    pollsys+0xe7(80d83f8, 3, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff07242258c0 ffffff0730df40a8 ffffff072a849540   1  59 ffffff072b1d73aa
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/hal/hald-addon-network-discovery
  stack pointer for thread ffffff07242258c0: ffffff002f399c50
  [ ffffff002f399c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072b1d73aa, ffffff072b1d7370, 0)
    cv_wait_sig_swap+0x17(ffffff072b1d73aa, ffffff072b1d7370)
    cv_timedwait_sig_hrtime+0x35(ffffff072b1d73aa, ffffff072b1d7370, ffffffffffffffff)
    poll_common+0x504(8069678, 2, 0, 0)
    pollsys+0xe7(8069678, 2, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff07244173c0 ffffff0730d9d0a0 ffffff072b24f700   1  59 ffffff072b1d767a
  PC: _resume_from_idle+0xf4    CMD: hald-runner
  stack pointer for thread ffffff07244173c0: ffffff002f39fc50
  [ ffffff002f39fc50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072b1d767a, ffffff072b1d7640, 0)
    cv_wait_sig_swap+0x17(ffffff072b1d767a, ffffff072b1d7640)
    cv_timedwait_sig_hrtime+0x35(ffffff072b1d767a, ffffff072b1d7640, ffffffffffffffff)
    poll_common+0x504(8064790, 1, 0, 0)
    pollsys+0xe7(8064790, 1, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c097e0 ffffff0730dae070 ffffff072a834040   1  59 ffffff072b1db0a2
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/hal/hald-addon-cpufreq
  stack pointer for thread ffffff0730c097e0: ffffff002ed66c50
  [ ffffff002ed66c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072b1db0a2, ffffff072b1db068, 0)
    cv_wait_sig_swap+0x17(ffffff072b1db0a2, ffffff072b1db068)
    cv_timedwait_sig_hrtime+0x35(ffffff072b1db0a2, ffffff072b1db068, ffffffffffffffff)
    poll_common+0x504(80696f8, 2, 0, 0)
    pollsys+0xe7(80696f8, 2, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c5d120 ffffff072d406008 ffffff072a833180   1  59 ffffff06fe1f42c2
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/hal/hald-addon-acpi
  stack pointer for thread ffffff0730c5d120: ffffff002ed60c60
  [ ffffff002ed60c60 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_timedwait_sig_hires+0x39d(ffffff06fe1f42c2, ffffff06fe1f4288, 5657923d833f, 1, 3)
    cv_timedwait_sig_hrtime+0x2a(ffffff06fe1f42c2, ffffff06fe1f4288, 5657923d833f)
    poll_common+0x504(8066fa8, 1, ffffff002ed60e40, 0)
    pollsys+0xe7(8066fa8, 1, 80477c8, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0724225180 ffffff0730d9d0a0 ffffff072b24cc40   1  59 ffffff072a4ffa52
  PC: _resume_from_idle+0xf4    CMD: hald-runner
  stack pointer for thread ffffff0724225180: ffffff002ed30c50
  [ ffffff002ed30c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072a4ffa52, ffffff072a4ffa18, 0)
    cv_wait_sig_swap+0x17(ffffff072a4ffa52, ffffff072a4ffa18)
    cv_timedwait_sig_hrtime+0x35(ffffff072a4ffa52, ffffff072a4ffa18, ffffffffffffffff)
    poll_common+0x504(8067558, 2, 0, 0)
    pollsys+0xe7(8067558, 2, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0724943820 ffffff0730da6088 ffffff072b251300   1  59 ffffff072b1d776a
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/hal/hald --daemon=yes
  stack pointer for thread ffffff0724943820: ffffff002f332c50
  [ ffffff002f332c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072b1d776a, ffffff072b1d7730, 0)
    cv_wait_sig_swap+0x17(ffffff072b1d776a, ffffff072b1d7730)
    cv_timedwait_sig_hrtime+0x35(ffffff072b1d776a, ffffff072b1d7730, ffffffffffffffff)
    poll_common+0x504(808a6f0, 1, 0, 0)
    pollsys+0xe7(808a6f0, 1, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c5d860 ffffff0730da6088 ffffff072a833880   1  59 0
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/hal/hald --daemon=yes
  stack pointer for thread ffffff0730c5d860: ffffff002f3a5d50
  [ ffffff002f3a5d50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    shuttle_swtch+0x203(fffffffffbd11010)
    door_return+0x214(0, 0, 0, 0, fe29ee00, f5f00)
    doorfs32+0x180(0, 0, 0, fe29ee00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()
ffffff072493ec20 ffffff0730da6088 ffffff072a848e40   1  59 0
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/hal/hald --daemon=yes
  stack pointer for thread ffffff072493ec20: ffffff002f6efd20
  [ ffffff002f6efd20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff07276d60a0)
    shuttle_resume+0x2af(ffffff07276d60a0, fffffffffbd11010)
    door_return+0x3e0(fe5fecfc, 4, 0, 0, fe5fee00, f5f00)
    doorfs32+0x180(fe5fecfc, 4, 0, fe5fee00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c85b20 ffffff0730da6088 ffffff06fe286a00   1  59 ffffff0730c85d0e
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/hal/hald --daemon=yes
  stack pointer for thread ffffff0730c85b20: ffffff002fb99c50
  [ ffffff002fb99c50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0730c85d0e, ffffff0730c85d10, 0)
    cv_wait_sig_swap+0x17(ffffff0730c85d0e, ffffff0730c85d10)
    cv_waituntil_sig+0xbd(ffffff0730c85d0e, ffffff0730c85d10, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff0724225c60 ffffff0730da6088 ffffff072b24fe00   1  59 ffffff0730caf9f2
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/hal/hald --daemon=yes
  stack pointer for thread ffffff0724225c60: ffffff002f61bc50
  [ ffffff002f61bc50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0730caf9f2, ffffff0730caf9b8, 0)
    cv_wait_sig_swap+0x17(ffffff0730caf9f2, ffffff0730caf9b8)
    cv_timedwait_sig_hrtime+0x35(ffffff0730caf9f2, ffffff0730caf9b8, ffffffffffffffff)
    poll_common+0x504(842f608, b, 0, 0)
    pollsys+0xe7(842f608, b, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff072fb2f120 ffffff072bb4d090 ffffff072a83a300   3  60 0
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/zones/zonestatd
  stack pointer for thread ffffff072fb2f120: ffffff002f0a2d50
  [ ffffff002f0a2d50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    shuttle_swtch+0x203(fffffffffbd11010)
    door_return+0x214(0, 0, 0, 0, fea3ee00, f5f00)
    doorfs32+0x180(0, 0, 0, fea3ee00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()
ffffff072fb2fc00 ffffff072bb4d090 ffffff072a841cc0   3  60 0
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/zones/zonestatd
  stack pointer for thread ffffff072fb2fc00: ffffff002f022d20
  [ ffffff002f022d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff072fb2f860)
    shuttle_resume+0x2af(ffffff072fb2f860, fffffffffbd11010)
    door_return+0x3e0(0, 0, 0, 0, fedaee00, f5f00)
    doorfs32+0x180(0, 0, 0, fedaee00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()
ffffff072fb2f860 ffffff072bb4d090 ffffff072a84b840   3  60 ffffff072bb4d508
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/zones/zonestatd
  stack pointer for thread ffffff072fb2f860: ffffff002f249d80
  [ ffffff002f249d80 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig+0x185(ffffff072bb4d508, fffffffffbd11010)
    door_unref+0x94()
    doorfs32+0xf5(0, 0, 0, 0, 0, 8)
    _sys_sysenter_post_swapgs+0x149()
ffffff072fbecc20 ffffff072bb4d090 ffffff072a844780   3  60 ffffff072fbece0e
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/zones/zonestatd
  stack pointer for thread ffffff072fbecc20: ffffff002f04fc50
  [ ffffff002f04fc50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff072fbece0e, ffffff072fbece10, 0)
    cv_wait_sig_swap+0x17(ffffff072fbece0e, ffffff072fbece10)
    cv_waituntil_sig+0xbd(ffffff072fbece0e, ffffff072fbece10, 0, 0)
    lwp_park+0x15e(0, 0)
    syslwp_park+0x63(0, 0, 0)
    _sys_sysenter_post_swapgs+0x149()
ffffff07301a8be0 ffffff072bb4d090 ffffff072a846380   3  60 ffffff07301a8dce
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/zones/zonestatd
  stack pointer for thread ffffff07301a8be0: ffffff002f6e9dd0
  [ ffffff002f6e9dd0 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff07301a8dce, ffffff07301a8dd0, 0)
    cv_wait_sig_swap+0x17(ffffff07301a8dce, ffffff07301a8dd0)
    pause+0x45()
    _sys_sysenter_post_swapgs+0x149()
ffffff0731558400 ffffff07240be050 ffffff072a7993c0   1  59 0
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/pfexecd
  stack pointer for thread ffffff0731558400: ffffff002ef34d50
  [ ffffff002ef34d50 _resume_from_idle+0xf4() ]
    swtch+0x141()
    shuttle_swtch+0x203(fffffffffbd11010)
    door_return+0x214(0, 0, 0, 0, feaaee00, f5f00)
    doorfs32+0x180(0, 0, 0, feaaee00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()
ffffff07315ca460 ffffff07240be050 ffffff06fe22d5c0   1  59 0
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/pfexecd
  stack pointer for thread ffffff07315ca460: ffffff002e4f3d20
  [ ffffff002e4f3d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff072473f120)
    shuttle_resume+0x2af(ffffff072473f120, fffffffffbd11010)
    door_return+0x3e0(fec9f960, c, 0, 0, fec9fe00, f5f00)
    doorfs32+0x180(fec9f960, c, 0, fec9fe00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c09b80 ffffff07240be050 ffffff072a836a40   1  59 0
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/pfexecd
  stack pointer for thread ffffff0730c09b80: ffffff002ecb1d20
  [ ffffff002ecb1d20 _resume_from_idle+0xf4() ]
    swtch_to+0xb6(ffffff0730c0f060)
    shuttle_resume+0x2af(ffffff0730c0f060, fffffffffbd11010)
    door_return+0x3e0(fed9e960, c, 0, 0, fed9ee00, f5f00)
    doorfs32+0x180(fed9e960, c, 0, fed9ee00, f5f00, a)
    _sys_sysenter_post_swapgs+0x149()
ffffff0730c0d080 ffffff07240be050 ffffff072a835540   1  59 ffffff0730c0d26e
  PC: _resume_from_idle+0xf4    CMD: /usr/lib/pfexecd
  stack pointer for thread ffffff0730c0d080: ffffff002ed24d90
  [ ffffff002ed24d90 _resume_from_idle+0xf4() ]
    swtch+0x141()
    cv_wait_sig_swap_core+0x1b9(ffffff0730c0d26e, ffffff06f6a8e580, 0)
    cv_wait_sig_swap+0x17(ffffff0730c0d26e, ffffff06f6a8e580)
    sigsuspend+0x101(8047dc0)
_sys_sysenter_post_swapgs+0x149() ffffff07242bb100 ffffff0730dac080 ffffff072b255b00 1 59 ffffff07242bb2ee PC: _resume_from_idle+0xf4 CMD: /usr/lib/saf/ttymon stack pointer for thread ffffff07242bb100: ffffff002ed84dd0 [ ffffff002ed84dd0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07242bb2ee, ffffff07242bb2f0, 0) cv_wait_sig_swap+0x17(ffffff07242bb2ee, ffffff07242bb2f0) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff0730ca2800 ffffff0730ca1050 ffffff072a794600 1 59 ffffff0723ac3d3c PC: _resume_from_idle+0xf4 CMD: /usr/lib/saf/sac -t 300 stack pointer for thread ffffff0730ca2800: ffffff002f1efb30 [ ffffff002f1efb30 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff0723ac3d3c, ffffff0723ac3ce0, 0) cv_wait_sig_swap+0x17(ffffff0723ac3d3c, ffffff0723ac3ce0) fifo_read+0xc9(ffffff072fddcd80, ffffff002f1efdf0, 0, ffffff072fee63b0, 0) fop_read+0x5b(ffffff072fddcd80, ffffff002f1efdf0, 0, ffffff072fee63b0, 0) read+0x2a7(4, 8047da8, 18) read32+0x1e(4, 8047da8, 18) _sys_sysenter_post_swapgs+0x149() ffffff002f234c40 fffffffffbc2ea80 0 0 60 ffffff0723ab43f8 PC: _resume_from_idle+0xf4 THREAD: evch_delivery_thr() stack pointer for thread ffffff002f234c40: ffffff002f234a90 [ ffffff002f234a90 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723ab43f8, ffffff0723ab43f0) evch_delivery_hold+0x70(ffffff0723ab43d0, ffffff002f234bc0) evch_delivery_thr+0x29e(ffffff0723ab43d0) thread_start+8() ffffff0730c5c140 ffffff0730c61048 ffffff06fe228800 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/lib/picl/picld stack pointer for thread ffffff0730c5c140: ffffff002f2aed20 [ ffffff002f2aed20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07276d0ba0) shuttle_resume+0x2af(ffffff07276d0ba0, fffffffffbd11010) door_return+0x3e0(fea1ed30, 0, 0, 0, fea1ee00, f5f00) doorfs32+0x180(fea1ed30, 0, 0, fea1ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff0730c7e7a0 ffffff0730c61048 ffffff072a83c100 1 59 ffffff0730c7e98e PC: 
_resume_from_idle+0xf4 CMD: /usr/lib/picl/picld stack pointer for thread ffffff0730c7e7a0: ffffff002f206c50 [ ffffff002f206c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff0730c7e98e, ffffff0730c7e990, 0) cv_wait_sig_swap+0x17(ffffff0730c7e98e, ffffff0730c7e990) cv_waituntil_sig+0xbd(ffffff0730c7e98e, ffffff0730c7e990, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff0730c7e400 ffffff0730c61048 ffffff06fe232380 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/lib/picl/picld stack pointer for thread ffffff0730c7e400: ffffff002e690d50 [ ffffff002e690d50 _resume_from_idle+0xf4() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd11010) door_return+0x214(0, 0, 0, 0, fe80fe00, f5f00) doorfs32+0x180(0, 0, 0, fe80fe00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff0730c5fbe0 ffffff0730c61048 ffffff072b24be40 1 59 ffffff0730c5fdce PC: _resume_from_idle+0xf4 CMD: /usr/lib/picl/picld stack pointer for thread ffffff0730c5fbe0: ffffff002ed5add0 [ ffffff002ed5add0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff0730c5fdce, ffffff0730c5fdd0, 0) cv_wait_sig_swap+0x17(ffffff0730c5fdce, ffffff0730c5fdd0) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff002f284c40 fffffffffbc2ea80 0 0 99 ffffff07240785f0 PC: _resume_from_idle+0xf4 TASKQ: dtrace_taskq stack pointer for thread ffffff002f284c40: ffffff002f284a80 [ ffffff002f284a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240785f0, ffffff07240785e0) taskq_thread_wait+0xbe(ffffff07240785c0, ffffff07240785e0, ffffff07240785f0 , ffffff002f284bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07240785c0) thread_start+8() ffffff0730c153e0 ffffff072d523030 ffffff072a7a4500 1 59 ffffff0cc3e3f86a PC: _resume_from_idle+0xf4 CMD: /opt/csw/bin/ruby2.0 /opt/csw/bin/puppet agent stack pointer for thread ffffff0730c153e0: ffffff002ee14c50 [ ffffff002ee14c50 _resume_from_idle+0xf4() ] swtch+0x141() 
cv_wait_sig_swap_core+0x1b9(ffffff0cc3e3f86a, ffffff0cc3e3f830, 0) cv_wait_sig_swap+0x17(ffffff0cc3e3f86a, ffffff0cc3e3f830) cv_timedwait_sig_hrtime+0x35(ffffff0cc3e3f86a, ffffff0cc3e3f830, ffffffffffffffff) poll_common+0x504(fed73fa0, 2, 0, 0) pollsys+0xe7(fed73fa0, 2, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff0724746be0 ffffff072d523030 ffffff072a848740 1 59 ffffff0724746dce PC: _resume_from_idle+0xf4 CMD: /opt/csw/bin/ruby2.0 /opt/csw/bin/puppet agent stack pointer for thread ffffff0724746be0: ffffff002f1f5c60 [ ffffff002f1f5c60 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff0724746dce, ffffff0724746dd0, 37e0e194a, 1, 4) cv_waituntil_sig+0xfa(ffffff0724746dce, ffffff0724746dd0, ffffff002f1f5e10, 3) lwp_park+0x15e(8047678, 0) syslwp_park+0x63(0, 8047678, 0) _sys_sysenter_post_swapgs+0x149() ffffff002f116c40 fffffffffbc2ea80 0 0 60 ffffff07240784d8 PC: _resume_from_idle+0xf4 TASKQ: ipnet stack pointer for thread ffffff002f116c40: ffffff002f116a80 [ ffffff002f116a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff07240784d8, ffffff07240784c8) taskq_thread_wait+0xbe(ffffff07240784a8, ffffff07240784c8, ffffff07240784d8 , ffffff002f116bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff07240784a8) thread_start+8() ffffff002f128c40 fffffffffbc2ea80 0 0 60 ffffff0724078078 PC: _resume_from_idle+0xf4 TASKQ: ipnet_nic_event_queue stack pointer for thread ffffff002f128c40: ffffff002f128a80 [ ffffff002f128a80 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0724078078, ffffff0724078068) taskq_thread_wait+0xbe(ffffff0724078048, ffffff0724078068, ffffff0724078078 , ffffff002f128bc0, ffffffffffffffff) taskq_thread+0x37c(ffffff0724078048) thread_start+8() ffffff07249378c0 ffffff0730d8c068 ffffff072a83dd00 1 59 ffffff072b1db5f2 PC: _resume_from_idle+0xf4 CMD: /opt/csw/sbin/snmpd stack pointer for thread ffffff07249378c0: ffffff002fbc9c60 [ ffffff002fbc9c60 _resume_from_idle+0xf4() ] swtch+0x141() 
cv_timedwait_sig_hires+0x39d(ffffff072b1db5f2, ffffff072b1db5b8, 5657dff41eab, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff072b1db5f2, ffffff072b1db5b8, 5657dff41eab) poll_common+0x504(80479f0, 6, ffffff002fbc9e40, 0) pollsys+0xe7(80479f0, 6, 8047b08, 0) _sys_sysenter_post_swapgs+0x149() ffffff07247df7a0 ffffff072d4110a8 ffffff072b24b740 1 59 ffffff072b1db1e2 PC: _resume_from_idle+0xf4 CMD: /opt/csw/sbin/snmptrapd stack pointer for thread ffffff07247df7a0: ffffff002ed6cc60 [ ffffff002ed6cc60 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff072b1db1e2, ffffff072b1db1a8, 565841c0c10d, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff072b1db1e2, ffffff072b1db1a8, 565841c0c10d) poll_common+0x504(80479f0, 4, ffffff002ed6ce40, 0) pollsys+0xe7(80479f0, 4, 8047ae8, 0) _sys_sysenter_post_swapgs+0x149() ffffff072464c420 ffffff0731496028 ffffff06fe287100 1 59 ffffff0731516498 PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff072464c420: ffffff002e66cba0 [ ffffff002e66cba0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff0731516498, ffffff07315163f8, 0) cv_wait_sig_swap+0x17(ffffff0731516498, ffffff07315163f8) sowaitconnind+0x73(ffffff07315163d8, 3, ffffff002e66cd68) sotpi_accept+0xaa(ffffff07315163d8, 3, ffffff072cd06460, ffffff002e66ce40) socket_accept+0x1f(ffffff07315163d8, 3, ffffff072cd06460, ffffff002e66ce40) accept+0x101(3, 8047bc0, 8047ba8, 1, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276c80e0 ffffff0731493030 ffffff06fe229600 1 59 ffffff0731890700 PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff07276c80e0: ffffff002ef1cb80 [ ffffff002ef1cb80 _resume_from_idle+0xf4() ] swtch+0x141() turnstile_block+0x555(ffffff0724759cb0, 0, ffffff0731890700, fffffffffbc9a620, fffffffffbcfb780, ffffff002ef1ce10) lwp_upimutex_lock+0x1db(fec70000, 51, 0, 
ffffff002ef1ce10) lwp_mutex_timedlock+0x1dd(fec70000, 0, fed62a40) _sys_sysenter_post_swapgs+0x149() ffffff07241eb0e0 ffffff0730de50b0 ffffff072b254600 1 59 ffffff0c4db3adac PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff07241eb0e0: ffffff002f222b30 [ ffffff002f222b30 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff0c4db3adac, ffffff0723f79560, 2540be03c, 1, 4) cv_waituntil_sig+0xfa(ffffff0c4db3adac, ffffff0723f79560, ffffff002f222d20, 3) port_getn+0x39f(ffffff0723f79500, 8176278, 2, ffffff002f222e1c, ffffff002f222dd0) portfs+0x1c0(6, f, 8176278, 2, 1, 8047b38) portfs32+0x40(6, f, 8176278, 2, 1, 8047b38) sys_syscall32+0xff() ffffff0724417020 ffffff072d4df028 ffffff06fe284000 1 59 ffffff0731890700 PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff0724417020: ffffff002fc05b80 [ ffffff002fc05b80 _resume_from_idle+0xf4() ] swtch+0x141() turnstile_block+0x555(ffffff0724759cb0, 0, ffffff0731890700, fffffffffbc9a620, fffffffffbcfb780, ffffff002fc05e10) lwp_upimutex_lock+0x1db(fec70000, 51, 0, ffffff002fc05e10) lwp_mutex_timedlock+0x1dd(fec70000, 0, fed62a40) _sys_sysenter_post_swapgs+0x149() ffffff0730d7d840 ffffff0730c66040 ffffff072b255400 1 59 ffffff0731890700 PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff0730d7d840: ffffff002fbc3b80 [ ffffff002fbc3b80 _resume_from_idle+0xf4() ] swtch+0x141() turnstile_block+0x555(ffffff0724759cb0, 0, ffffff0731890700, fffffffffbc9a620, fffffffffbcfb780, ffffff002fbc3e10) lwp_upimutex_lock+0x1db(fec70000, 51, 0, ffffff002fbc3e10) lwp_mutex_timedlock+0x1dd(fec70000, 0, fed62a40) _sys_sysenter_post_swapgs+0x149() ffffff07247464a0 ffffff0724243068 ffffff072b253800 1 59 ffffff0731890700 PC: _resume_from_idle+0xf4 CMD: 
/opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff07247464a0: ffffff002eedab80 [ ffffff002eedab80 _resume_from_idle+0xf4() ] swtch+0x141() turnstile_block+0x555(ffffff0724759cb0, 0, ffffff0731890700, fffffffffbc9a620, fffffffffbcfb780, ffffff002eedae10) lwp_upimutex_lock+0x1db(fec70000, 51, 0, ffffff002eedae10) lwp_mutex_timedlock+0x1dd(fec70000, 0, fed62a40) _sys_sysenter_post_swapgs+0x149() ffffff072fb1c160 ffffff0743ceb0a0 ffffff072b252800 1 59 ffffff0731890700 PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff072fb1c160: ffffff002f524b80 [ ffffff002f524b80 _resume_from_idle+0xf4() ] swtch+0x141() turnstile_block+0x555(ffffff0724759cb0, 0, ffffff0731890700, fffffffffbc9a620, fffffffffbcfb780, ffffff002f524e10) lwp_upimutex_lock+0x1db(fec70000, 51, 0, ffffff002f524e10) lwp_mutex_timedlock+0x1dd(fec70000, 0, fed62a40) _sys_sysenter_post_swapgs+0x149() ffffff0730c8d8c0 ffffff072f940008 ffffff06fe21a180 1 59 ffffff0731890700 PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff0730c8d8c0: ffffff002e68ab80 [ ffffff002e68ab80 _resume_from_idle+0xf4() ] swtch+0x141() turnstile_block+0x555(ffffff0724759cb0, 0, ffffff0731890700, fffffffffbc9a620, fffffffffbcfb780, ffffff002e68ae10) lwp_upimutex_lock+0x1db(fec70000, 51, 0, ffffff002e68ae10) lwp_mutex_timedlock+0x1dd(fec70000, 0, fed62a40) _sys_sysenter_post_swapgs+0x149() ffffff0730c5f100 ffffff082353f0b0 ffffff06fe222100 1 59 ffffff0731890700 PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff0730c5f100: ffffff002ed90b80 [ ffffff002ed90b80 _resume_from_idle+0xf4() ] swtch+0x141() turnstile_block+0x555(ffffff0724759cb0, 0, ffffff0731890700, fffffffffbc9a620, fffffffffbcfb780, ffffff002ed90e10) 
lwp_upimutex_lock+0x1db(fec70000, 51, 0, ffffff002ed90e10) lwp_mutex_timedlock+0x1dd(fec70000, 0, fed62a40) _sys_sysenter_post_swapgs+0x149() ffffff0730c8b740 ffffff0732435080 ffffff072a790500 1 59 ffffff0731890700 PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff0730c8b740: ffffff002f531b80 [ ffffff002f531b80 _resume_from_idle+0xf4() ] swtch+0x141() turnstile_block+0x555(ffffff0724759cb0, 0, ffffff0731890700, fffffffffbc9a620, fffffffffbcfb780, ffffff002f531e10) lwp_upimutex_lock+0x1db(fec70000, 51, 0, ffffff002f531e10) lwp_mutex_timedlock+0x1dd(fec70000, 0, fed62a40) _sys_sysenter_post_swapgs+0x149() ffffff07315e3c00 ffffff0730dde000 ffffff072a832380 1 59 ffffff07315e3dee PC: _resume_from_idle+0xf4 CMD: /opt/csw/apache2/sbin/httpd -f /opt/csw/apache2/etc/httpd.conf -k start -DSSL stack pointer for thread ffffff07315e3c00: ffffff002ed48c60 [ ffffff002ed48c60 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff07315e3dee, ffffff07315e3df0, 56578ae41b65, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff07315e3dee, ffffff07315e3df0, 56578ae41b65) poll_common+0x439(8047bc0, 0, ffffff002ed48e40, 0) pollsys+0xe7(8047bc0, 0, 8047c58, 0) _sys_sysenter_post_swapgs+0x149() ffffff07246da800 ffffff0730f30008 ffffff072b251a00 1 59 ffffff072ab2e812 PC: _resume_from_idle+0xf4 CMD: /usr/lib/inet/in.ndpd stack pointer for thread ffffff07246da800: ffffff002e730c50 [ ffffff002e730c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff072ab2e812, ffffff072ab2e7d8, 0) cv_wait_sig_swap+0x17(ffffff072ab2e812, ffffff072ab2e7d8) cv_timedwait_sig_hrtime+0x35(ffffff072ab2e812, ffffff072ab2e7d8, ffffffffffffffff) poll_common+0x504(80a41b0, 20, 0, 0) pollsys+0xe7(80a41b0, 20, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff0730c8b3a0 ffffff07314de058 ffffff072a844080 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/lib/autofs/automountd stack pointer for thread 
ffffff0730c8b3a0: ffffff002f255d50 [ ffffff002f255d50 _resume_from_idle+0xf4() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd11010) door_return+0x214(0, 0, 0, 0, fedaee00, f5f00) doorfs32+0x180(0, 0, 0, fedaee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff0730c8a760 ffffff07314db060 ffffff06fe2787c0 1 59 ffffff0730c8a94e PC: _resume_from_idle+0xf4 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff0730c8a760: ffffff002e6d8c60 [ ffffff002e6d8c60 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff0730c8a94e, ffffff0730c8a950, 22ecb25890 , 1, 4) cv_waituntil_sig+0xfa(ffffff0730c8a94e, ffffff0730c8a950, ffffff002e6d8e10, 3) lwp_park+0x15e(fec9ef48, 0) syslwp_park+0x63(0, fec9ef48, 0) _sys_sysenter_post_swapgs+0x149() ffffff07242f88a0 ffffff07314db060 ffffff072a796200 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff07242f88a0: ffffff002f646d50 [ ffffff002f646d50 _resume_from_idle+0xf4() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd11010) door_return+0x214(0, 0, 0, 0, fe93fe00, f5f00) doorfs32+0x180(0, 0, 0, fe93fe00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff072fb1c8a0 ffffff07314db060 ffffff072a8415c0 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff072fb1c8a0: ffffff002f3d2d20 [ ffffff002f3d2d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff002f608c40) shuttle_resume+0x2af(ffffff002f608c40, fffffffffbd11010) door_return+0x3e0(0, 0, 0, 0, fea3ee00, f5f00) doorfs32+0x180(0, 0, 0, fea3ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff002f22ec40 fffffffffbc2ea80 0 0 60 fffffffffbcefcc8 PC: _resume_from_idle+0xf4 THREAD: auto_do_unmount() stack pointer for thread ffffff002f22ec40: ffffff002f22ea30 [ ffffff002f22ea30 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_hires+0xec(fffffffffbcefcc8, fffffffffbcef710, 1bf08eb000, 989680, 0) cv_timedwait+0x5c(fffffffffbcefcc8, fffffffffbcef710, 90d446) 
zone_status_timedwait+0x6b(fffffffffbcefb80, 90d446, 5) auto_do_unmount+0xc7(ffffff07314fcaf8) thread_start+8() ffffff0730c8bae0 ffffff07314db060 ffffff072b250500 1 59 ffffff0730c8bcce PC: _resume_from_idle+0xf4 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff0730c8bae0: ffffff002fc1ddd0 [ ffffff002fc1ddd0 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff0730c8bcce, ffffff0730c8bcd0, 0) cv_wait_sig_swap+0x17(ffffff0730c8bcce, ffffff0730c8bcd0) pause+0x45() _sys_sysenter_post_swapgs+0x149() ffffff07242b1120 ffffff07314de058 ffffff072b257ec0 1 59 ffffff07314de118 PC: _resume_from_idle+0xf4 CMD: /usr/lib/autofs/automountd stack pointer for thread ffffff07242b1120: ffffff002f519c30 [ ffffff002f519c30 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07314de118, fffffffffbcf29b0, 0) cv_wait_sig_swap+0x17(ffffff07314de118, fffffffffbcf29b0) waitid+0x24d(0, 287, ffffff002f519e40, 3) waitsys32+0x36(0, 287, 8047cf0, 3) _sys_sysenter_post_swapgs+0x149() ffffff0730c8a3c0 ffffff0730da4090 ffffff072b256200 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/lib/inet/inetd start stack pointer for thread ffffff0730c8a3c0: ffffff002fb93d50 [ ffffff002fb93d50 _resume_from_idle+0xf4() ] swtch+0x141() shuttle_swtch+0x203(fffffffffbd11010) door_return+0x214(0, 0, 0, 0, fe9bee00, f5f00) doorfs32+0x180(0, 0, 0, fe9bee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff0730c8b000 ffffff0730da4090 ffffff072b256900 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/lib/inet/inetd start stack pointer for thread ffffff0730c8b000: ffffff002fc23d20 [ ffffff002fc23d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff002fb8dc40) shuttle_resume+0x2af(ffffff002fb8dc40, fffffffffbd11010) door_return+0x3e0(fed3ed00, 4, 0, 0, fed3ee00, f5f00) doorfs32+0x180(fed3ed00, 4, 0, fed3ee00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff002fb8dc40 fffffffffbc2ea80 0 0 60 ffffff0723ab4858 PC: _resume_from_idle+0xf4 THREAD: evch_delivery_thr() stack pointer 
for thread ffffff002fb8dc40: ffffff002fb8da90 [ ffffff002fb8da90 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait+0x70(ffffff0723ab4858, ffffff0723ab4850) evch_delivery_hold+0x70(ffffff0723ab4830, ffffff002fb8dbc0) evch_delivery_thr+0x29e(ffffff0723ab4830) thread_start+8() ffffff072fb1cc40 ffffff0730da4090 ffffff06fe21fe00 1 59 ffffff072a4e9d0a PC: _resume_from_idle+0xf4 CMD: /usr/lib/inet/inetd start stack pointer for thread ffffff072fb1cc40: ffffff002f1fbc50 [ ffffff002f1fbc50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff072a4e9d0a, ffffff072a4e9cd0, 0) cv_wait_sig_swap+0x17(ffffff072a4e9d0a, ffffff072a4e9cd0) cv_timedwait_sig_hrtime+0x35(ffffff072a4e9d0a, ffffff072a4e9cd0, ffffffffffffffff) poll_common+0x504(810af30, 10, 0, 0) pollsys+0xe7(810af30, 10, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff073162c760 ffffff07314e2050 ffffff06fe27d580 1 59 ffffff073162c94e PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff073162c760: ffffff002ef46c50 [ ffffff002ef46c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff073162c94e, ffffff073162c950, 0) cv_wait_sig_swap+0x17(ffffff073162c94e, ffffff073162c950) cv_waituntil_sig+0xbd(ffffff073162c94e, ffffff073162c950, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff073162c3c0 ffffff07314e2050 ffffff06fe22ab00 1 59 ffffff073162c5ae PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff073162c3c0: ffffff002f29cc50 [ ffffff002f29cc50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff073162c5ae, ffffff073162c5b0, 0) cv_wait_sig_swap+0x17(ffffff073162c5ae, ffffff073162c5b0) cv_waituntil_sig+0xbd(ffffff073162c5ae, ffffff073162c5b0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff073162c020 ffffff07314e2050 ffffff06fe27f180 1 59 ffffff073162c20e PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer 
for thread ffffff073162c020: ffffff002ef3ac50 [ ffffff002ef3ac50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff073162c20e, ffffff073162c210, 0) cv_wait_sig_swap+0x17(ffffff073162c20e, ffffff073162c210) cv_waituntil_sig+0xbd(ffffff073162c20e, ffffff073162c210, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff073162ab20 ffffff07314e2050 ffffff06fe230080 1 59 ffffff073162ad0e PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff073162ab20: ffffff002ef58c50 [ ffffff002ef58c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff073162ad0e, ffffff073162ad10, 0) cv_wait_sig_swap+0x17(ffffff073162ad0e, ffffff073162ad10) cv_waituntil_sig+0xbd(ffffff073162ad0e, ffffff073162ad10, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff073162a780 ffffff07314e2050 ffffff072a84aa40 1 59 ffffff073162a96e PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff073162a780: ffffff002ec8ec50 [ ffffff002ec8ec50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff073162a96e, ffffff073162a970, 0) cv_wait_sig_swap+0x17(ffffff073162a96e, ffffff073162a970) cv_waituntil_sig+0xbd(ffffff073162a96e, ffffff073162a970, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff073162a3e0 ffffff07314e2050 ffffff06fe285c00 1 59 ffffff073162a5ce PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff073162a3e0: ffffff002ef2ec50 [ ffffff002ef2ec50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff073162a5ce, ffffff073162a5d0, 0) cv_wait_sig_swap+0x17(ffffff073162a5ce, ffffff073162a5d0) cv_waituntil_sig+0xbd(ffffff073162a5ce, ffffff073162a5d0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff073162a040 ffffff07314e2050 ffffff06fe234040 1 59 ffffff073162a22e PC: 
_resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff073162a040: ffffff002eef8c50 [ ffffff002eef8c50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff073162a22e, ffffff073162a230, 0) cv_wait_sig_swap+0x17(ffffff073162a22e, ffffff073162a230) cv_waituntil_sig+0xbd(ffffff073162a22e, ffffff073162a230, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff0731709b40 ffffff07314e2050 ffffff072b25a8c0 1 59 ffffff07315adb82 PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff0731709b40: ffffff002ed8ac50 [ ffffff002ed8ac50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07315adb82, ffffff07315adb48, 0) cv_wait_sig_swap+0x17(ffffff07315adb82, ffffff07315adb48) cv_timedwait_sig_hrtime+0x35(ffffff07315adb82, ffffff07315adb48, ffffffffffffffff) poll_common+0x504(8075648, 1, 0, 0) pollsys+0xe7(8075648, 1, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07241f67e0 ffffff07314e2050 ffffff072b253f00 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff07241f67e0: ffffff002e6d2d20 [ ffffff002e6d2d20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff07315587a0) shuttle_resume+0x2af(ffffff07315587a0, fffffffffbd11010) door_return+0x3e0(0, 0, 0, 0, fe444e00, f5f00) doorfs32+0x180(0, 0, 0, fe444e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff07317097a0 ffffff07314e2050 ffffff06fe245580 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread ffffff07317097a0: ffffff002edcfd20 [ ffffff002edcfd20 _resume_from_idle+0xf4() ] swtch_to+0xb6(ffffff0730c0f060) shuttle_resume+0x2af(ffffff0730c0f060, fffffffffbd11010) door_return+0x3e0(0, 0, 0, 0, fe543e00, f5f00) doorfs32+0x180(0, 0, 0, fe543e00, f5f00, a) _sys_sysenter_post_swapgs+0x149() ffffff07316424e0 ffffff07314e2050 ffffff072a793800 1 59 ffffff07316426ce PC: _resume_from_idle+0xf4 CMD: /usr/sbin/syslogd stack pointer for thread 
ffffff07316424e0: ffffff002eecec40 [ ffffff002eecec40 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07316426ce, ffffff06f6a8ed00, 0) cv_wait_sig_swap+0x17(ffffff07316426ce, ffffff06f6a8ed00) cv_waituntil_sig+0xbd(ffffff07316426ce, ffffff06f6a8ed00, 0, 0) sigtimedwait+0x19c(8047dec, 8047bf0, 0) _sys_sysenter_post_swapgs+0x149() ffffff0731637520 ffffff07314a8020 ffffff06fe21e140 1 59 ffffff07315f99fa PC: _resume_from_idle+0xf4 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff0731637520: ffffff002f296c60 [ ffffff002f296c60 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff07315f99fa, ffffff07315f99c0, 56599d9a38c3, 1, 3) cv_timedwait_sig_hrtime+0x2a(ffffff07315f99fa, ffffff07315f99c0, 56599d9a38c3) poll_common+0x504(fdcdeb50, 1, ffffff002f296e40, 0) pollsys+0xe7(fdcdeb50, 1, fdcdeaf8, 0) _sys_sysenter_post_swapgs+0x149() ffffff0731637180 ffffff07314a8020 ffffff072a795400 1 59 ffffff073163736e PC: _resume_from_idle+0xf4 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff0731637180: ffffff002eef2cc0 [ ffffff002eef2cc0 _resume_from_idle+0xf4() ] swtch+0x141() cv_timedwait_sig_hires+0x39d(ffffff073163736e, ffffff0731637370, df8458a8d, 1, 4) cv_waituntil_sig+0xfa(ffffff073163736e, ffffff0731637370, ffffff002eef2e70, 3) nanosleep+0x19f(fdbdff28, 0) _sys_sysenter_post_swapgs+0x149() ffffff07276c8bc0 ffffff07314a8020 ffffff06fe280040 1 59 ffffff07276c8dae PC: _resume_from_idle+0xf4 CMD: /usr/lib/fm/fmd/fmd stack pointer for thread ffffff07276c8bc0: ffffff002edfcc50 [ ffffff002edfcc50 _resume_from_idle+0xf4() ] swtch+0x141() cv_wait_sig_swap_core+0x1b9(ffffff07276c8dae, ffffff07276c8db0, 0) cv_wait_sig_swap+0x17(ffffff07276c8dae, ffffff07276c8db0) cv_waituntil_sig+0xbd(ffffff07276c8dae, ffffff07276c8db0, 0, 0) lwp_park+0x15e(0, 0) syslwp_park+0x63(0, 0, 0) _sys_sysenter_post_swapgs+0x149() ffffff07247ee040 ffffff07314a8020 ffffff06fe27e380 1 59 0 PC: _resume_from_idle+0xf4 CMD: /usr/lib/fm/fmd/fmd stack pointer 
for thread ffffff07247ee040: ffffff002ee02d20
[ ffffff002ee02d20 _resume_from_idle+0xf4() ]
  swtch_to+0xb6(ffffff002f23ac40)
  shuttle_resume+0x2af(ffffff002f23ac40, fffffffffbd11010)
  door_return+0x3e0(fd8afcf0, 4, 0, 0, fd8afe00, f5f00)
  doorfs32+0x180(fd8afcf0, 4, 0, fd8afe00, f5f00, a)
  _sys_sysenter_post_swapgs+0x149()

ffffff002f23ac40 fffffffffbc2ea80 0 0 60 ffffff0724766e30
PC: _resume_from_idle+0xf4  THREAD: evch_delivery_thr()
stack pointer for thread ffffff002f23ac40: ffffff002f23aa90
[ ffffff002f23aa90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff0724766e30, ffffff0724766e28)
  evch_delivery_hold+0x70(ffffff0724766e08, ffffff002f23abc0)
  evch_delivery_thr+0x29e(ffffff0724766e08)
  thread_start+8()

ffffff073151b000 ffffff07314a8020 ffffff06fe21f000 1 59 0
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff073151b000: ffffff002f513d20
[ ffffff002f513d20 _resume_from_idle+0xf4() ]
  swtch_to+0xb6(ffffff07276d60a0)
  shuttle_resume+0x2af(ffffff07276d60a0, fffffffffbd11010)
  door_return+0x3e0(fd7b0d2c, 4, 0, 0, fd7b0e00, f5f00)
  doorfs32+0x180(fd7b0d2c, 4, 0, fd7b0e00, f5f00, a)
  _sys_sysenter_post_swapgs+0x149()

ffffff0724945ba0 ffffff07314a8020 ffffff06fe233180 1 59 ffffff0724945d8e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff0724945ba0: ffffff002e65ac50
[ ffffff002e65ac50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff0724945d8e, ffffff0724945d90, 0)
  cv_wait_sig_swap+0x17(ffffff0724945d8e, ffffff0724945d90)
  cv_waituntil_sig+0xbd(ffffff0724945d8e, ffffff0724945d90, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff07315bd100 ffffff07314a8020 ffffff06fe281c40 1 59 ffffff07315bd2ee
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07315bd100: ffffff002ef16c50
[ ffffff002ef16c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07315bd2ee, ffffff07315bd2f0, 0)
  cv_wait_sig_swap+0x17(ffffff07315bd2ee, ffffff07315bd2f0)
  cv_waituntil_sig+0xbd(ffffff07315bd2ee, ffffff07315bd2f0, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff072494c0a0 ffffff07314a8020 ffffff072b249380 1 59 0
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff072494c0a0: ffffff002f504d50
[ ffffff002f504d50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  shuttle_swtch+0x203(fffffffffbd11010)
  door_return+0x214(0, 0, 0, 0, fd58ee00, f5f00)
  doorfs32+0x180(0, 0, 0, fd58ee00, f5f00, a)
  _sys_sysenter_post_swapgs+0x149()

ffffff07249430e0 ffffff07314a8020 ffffff072a84a340 1 59 ffffff07315ade52
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07249430e0: ffffff002f4d5c50
[ ffffff002f4d5c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07315ade52, ffffff07315ade18, 0)
  cv_wait_sig_swap+0x17(ffffff07315ade52, ffffff07315ade18)
  cv_timedwait_sig_hrtime+0x35(ffffff07315ade52, ffffff07315ade18, ffffffffffffffff)
  poll_common+0x504(8309948, 4, 0, 0)
  pollsys+0xe7(8309948, 4, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff0724945800 ffffff07314a8020 ffffff072b252100 1 59 ffffff07249459ee
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff0724945800: ffffff002f20cc50
[ ffffff002f20cc50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07249459ee, ffffff07249459f0, 0)
  cv_wait_sig_swap+0x17(ffffff07249459ee, ffffff07249459f0)
  cv_waituntil_sig+0xbd(ffffff07249459ee, ffffff07249459f0, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff07242813a0 ffffff07314a8020 ffffff072a846a80 1 59 ffffff072428158e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07242813a0: ffffff002e4f9c50
[ ffffff002e4f9c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff072428158e, ffffff0724281590, 0)
  cv_wait_sig_swap+0x17(ffffff072428158e, ffffff0724281590)
  cv_waituntil_sig+0xbd(ffffff072428158e, ffffff0724281590, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff072427a3c0 ffffff07314a8020 ffffff06fe23f200 1 59 ffffff072427a5ae
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff072427a3c0: ffffff002e678c50
[ ffffff002e678c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff072427a5ae, ffffff072427a5b0, 0)
  cv_wait_sig_swap+0x17(ffffff072427a5ae, ffffff072427a5b0)
  cv_waituntil_sig+0xbd(ffffff072427a5ae, ffffff072427a5b0, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff07247f3740 ffffff07314a8020 ffffff06fe283140 1 59 ffffff07247f392e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07247f3740: ffffff002eeb6c50
[ ffffff002eeb6c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07247f392e, ffffff07247f3930, 0)
  cv_wait_sig_swap+0x17(ffffff07247f392e, ffffff07247f3930)
  cv_waituntil_sig+0xbd(ffffff07247f392e, ffffff07247f3930, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff072464c080 ffffff07314a8020 ffffff06fe217780 1 59 ffffff072464c26e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff072464c080: ffffff002eee0c60
[ ffffff002eee0c60 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_timedwait_sig_hires+0x39d(ffffff072464c26e, ffffff072464c270, 2540bd564, 1, 4)
  cv_waituntil_sig+0xfa(ffffff072464c26e, ffffff072464c270, ffffff002eee0e10, 3)
  lwp_park+0x15e(fcd7ef18, 0)
  syslwp_park+0x63(0, fcd7ef18, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff0730cb6420 ffffff07314a8020 ffffff06fe228f00 1 59 ffffff0730cb660e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff0730cb6420: ffffff002eee6c50
[ ffffff002eee6c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff0730cb660e, ffffff0730cb6610, 0)
  cv_wait_sig_swap+0x17(ffffff0730cb660e, ffffff0730cb6610)
  cv_waituntil_sig+0xbd(ffffff0730cb660e, ffffff0730cb6610, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff0724951420 ffffff07314a8020 ffffff06fe276400 1 59 ffffff072495160e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff0724951420: ffffff002f5bac50
[ ffffff002f5bac50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff072495160e, ffffff0724951610, 0)
  cv_wait_sig_swap+0x17(ffffff072495160e, ffffff0724951610)
  cv_waituntil_sig+0xbd(ffffff072495160e, ffffff0724951610, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff07241f6b80 ffffff07314a8020 ffffff06fe284e00 1 59 ffffff07241f6d6e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07241f6b80: ffffff002eeaac50
[ ffffff002eeaac50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07241f6d6e, ffffff07241f6d70, 0)
  cv_wait_sig_swap+0x17(ffffff07241f6d6e, ffffff07241f6d70)
  cv_waituntil_sig+0xbd(ffffff07241f6d6e, ffffff07241f6d70, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff0724775140 ffffff07314a8020 ffffff06fe22b200 1 59 ffffff072477532e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff0724775140: ffffff002ef22c50
[ ffffff002ef22c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff072477532e, ffffff0724775330, 0)
  cv_wait_sig_swap+0x17(ffffff072477532e, ffffff0724775330)
  cv_waituntil_sig+0xbd(ffffff072477532e, ffffff0724775330, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff07249404c0 ffffff07314a8020 ffffff06fe22e3c0 1 59 ffffff07249406ae
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07249404c0: ffffff002f537c50
[ ffffff002f537c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07249406ae, ffffff07249406b0, 0)
  cv_wait_sig_swap+0x17(ffffff07249406ae, ffffff07249406b0)
  cv_waituntil_sig+0xbd(ffffff07249406ae, ffffff07249406b0, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff0730cb2b80 ffffff07314a8020 ffffff06fe230e80 1 59 ffffff0730cb2d6e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff0730cb2b80: ffffff002ef4cc50
[ ffffff002ef4cc50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff0730cb2d6e, ffffff0730cb2d70, 0)
  cv_wait_sig_swap+0x17(ffffff0730cb2d6e, ffffff0730cb2d70)
  cv_waituntil_sig+0xbd(ffffff0730cb2d6e, ffffff0730cb2d70, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff07242a37c0 ffffff07314a8020 ffffff06fe2161c0 1 59 ffffff07242a39ae
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07242a37c0: ffffff002f240c50
[ ffffff002f240c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07242a39ae, ffffff07242a39b0, 0)
  cv_wait_sig_swap+0x17(ffffff07242a39ae, ffffff07242a39b0)
  cv_waituntil_sig+0xbd(ffffff07242a39ae, ffffff07242a39b0, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff07247dfb40 ffffff07314a8020 ffffff072b24e840 1 59 ffffff07247dfd2e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07247dfb40: ffffff002f354c50
[ ffffff002f354c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07247dfd2e, ffffff07247dfd30, 0)
  cv_wait_sig_swap+0x17(ffffff07247dfd2e, ffffff07247dfd30)
  cv_waituntil_sig+0xbd(ffffff07247dfd2e, ffffff07247dfd30, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff002f26cc40 fffffffffbc2ea80 0 0 60 ffffff0723a242e8
PC: _resume_from_idle+0xf4  THREAD: evch_delivery_thr()
stack pointer for thread ffffff002f26cc40: ffffff002f26ca90
[ ffffff002f26ca90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff0723a242e8, ffffff0723a242e0)
  evch_delivery_hold+0x70(ffffff0723a242c0, ffffff002f26cbc0)
  evch_delivery_thr+0x29e(ffffff0723a242c0)
  thread_start+8()

ffffff07316333a0 ffffff07314a8020 ffffff072b2577c0 1 59 0
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07316333a0: ffffff002f272d20
[ ffffff002f272d20 _resume_from_idle+0xf4() ]
  swtch_to+0xb6(ffffff002f278c40)
  shuttle_resume+0x2af(ffffff002f278c40, fffffffffbd11010)
  door_return+0x3e0(fc08e9b8, 4, 0, 0, fc08ee00, f5f00)
  doorfs32+0x180(fc08e9b8, 4, 0, fc08ee00, f5f00, a)
  _sys_sysenter_post_swapgs+0x149()

ffffff002f278c40 fffffffffbc2ea80 0 0 60 ffffff072b2336f0
PC: _resume_from_idle+0xf4  THREAD: evch_delivery_thr()
stack pointer for thread ffffff002f278c40: ffffff002f278a90
[ ffffff002f278a90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff072b2336f0, ffffff072b2336e8)
  evch_delivery_hold+0x70(ffffff072b2336c8, ffffff002f278bc0)
  evch_delivery_thr+0x29e(ffffff072b2336c8)
  thread_start+8()

ffffff002f27ec40 fffffffffbc2ea80 0 0 60 ffffff0724766d50
PC: _resume_from_idle+0xf4  THREAD: evch_delivery_thr()
stack pointer for thread ffffff002f27ec40: ffffff002f27ea90
[ ffffff002f27ea90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff0724766d50, ffffff0724766d48)
  evch_delivery_hold+0x70(ffffff0724766d28, ffffff002f27ebc0)
  evch_delivery_thr+0x29e(ffffff0724766d28)
  thread_start+8()

ffffff0731633000 ffffff07314a8020 ffffff072b2468c0 1 59 0
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff0731633000: ffffff002f5f6d50
[ ffffff002f5f6d50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  shuttle_swtch+0x203(fffffffffbd11010)
  door_return+0x214(0, 0, 0, 0, fbf8fe00, f5f00)
  doorfs32+0x180(0, 0, 0, fbf8fe00, f5f00, a)
  _sys_sysenter_post_swapgs+0x149()

ffffff002f5fcc40 fffffffffbc2ea80 0 0 60 ffffff072476a250
PC:
_resume_from_idle+0xf4  THREAD: evch_delivery_thr()
stack pointer for thread ffffff002f5fcc40: ffffff002f5fca90
[ ffffff002f5fca90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff072476a250, ffffff072476a248)
  evch_delivery_hold+0x70(ffffff072476a228, ffffff002f5fcbc0)
  evch_delivery_thr+0x29e(ffffff072476a228)
  thread_start+8()

ffffff073163cc40 ffffff07314a8020 ffffff072b24d340 1 59 ffffff073163ce2e
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff073163cc40: ffffff002f5d8c50
[ ffffff002f5d8c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff073163ce2e, ffffff073163ce30, 0)
  cv_wait_sig_swap+0x17(ffffff073163ce2e, ffffff073163ce30)
  cv_waituntil_sig+0xbd(ffffff073163ce2e, ffffff073163ce30, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff073162cb00 ffffff07314a8020 ffffff072b2593c0 1 59 ffffff073162ccee
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff073162cb00: ffffff002f602c50
[ ffffff002f602c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff073162ccee, ffffff073162ccf0, 0)
  cv_wait_sig_swap+0x17(ffffff073162ccee, ffffff073162ccf0)
  cv_waituntil_sig+0xbd(ffffff073162ccee, ffffff073162ccf0, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff07315aa7c0 ffffff07314a8020 ffffff072b24b040 1 59 ffffff07315aa9ae
PC: _resume_from_idle+0xf4  CMD: /usr/lib/fm/fmd/fmd
stack pointer for thread ffffff07315aa7c0: ffffff002f217d90
[ ffffff002f217d90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07315aa9ae, ffffff06f6a8ed40, 0)
  cv_wait_sig_swap+0x17(ffffff07315aa9ae, ffffff06f6a8ed40)
  sigsuspend+0x101(8047dc8)
  _sys_sysenter_post_swapgs+0x149()

ffffff07247f13c0 ffffff072d453018 ffffff06fe27ce80 1 59 ffffff07247f15ae
PC: _resume_from_idle+0xf4  CMD: /usr/perl5/bin/perl /usr/lib/intrd
stack pointer for thread ffffff07247f13c0: ffffff002eca5cc0
[ ffffff002eca5cc0 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_timedwait_sig_hires+0x39d(ffffff07247f15ae, ffffff07247f15b0, a7a35802e, 1, 4)
  cv_waituntil_sig+0xfa(ffffff07247f15ae, ffffff07247f15b0, ffffff002eca5e70, 3)
  nanosleep+0x19f(8047b18, 8047b10)
  _sys_sysenter_post_swapgs+0x149()

ffffff0724273b20 ffffff0730f89010 ffffff072b254d00 1 59 ffffff072ab2e042
PC: _resume_from_idle+0xf4  CMD: /usr/lib/saf/ttymon -g -d /dev/console -l console -T sun-color -m ldterm,ttcompat
stack pointer for thread ffffff0724273b20: ffffff002f50ac50
[ ffffff002f50ac50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff072ab2e042, ffffff072ab2e008, 0)
  cv_wait_sig_swap+0x17(ffffff072ab2e042, ffffff072ab2e008)
  cv_timedwait_sig_hrtime+0x35(ffffff072ab2e042, ffffff072ab2e008, ffffffffffffffff)
  poll_common+0x504(8047ca8, 1, 0, 0)
  pollsys+0xe7(8047ca8, 1, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff072441e000 ffffff0723f89048 ffffff072a79fe40 1 59 ffffff07441e6b92
PC: _resume_from_idle+0xf4  CMD: /opt/csw/bin/nrpe -c /etc/opt/csw/nrpe.cfg -d
stack pointer for thread ffffff072441e000: ffffff002ed72c50
[ ffffff002ed72c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07441e6b92, ffffff07441e6b58, 0)
  cv_wait_sig_swap+0x17(ffffff07441e6b92, ffffff07441e6b58)
  cv_timedwait_sig_hrtime+0x35(ffffff07441e6b92, ffffff07441e6b58, ffffffffffffffff)
  poll_common+0x504(8046a80, 2, 0, 0)
  pollsys+0xe7(8046a80, 2, 0, 0)
  _sys_sysenter_post_swapgs+0x149()

ffffff0724235100 ffffff0723aab038 ffffff06fe221300 1 59 ffffff07242352ee
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff0724235100: ffffff002f4e6c50
[ ffffff002f4e6c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff07242352ee, ffffff07242352f0, 0)
  cv_wait_sig_swap+0x17(ffffff07242352ee, ffffff07242352f0)
  cv_waituntil_sig+0xbd(ffffff07242352ee, ffffff07242352f0, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  sys_syscall32+0xff()

ffffff072422ec00 ffffff0723aab038 ffffff06f13445c0 1 59 ffffff06fe1f643c
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff072422ec00: ffffff002e631ae0
[ ffffff002e631ae0 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig+0x185(ffffff06fe1f643c, ffffff06fe1e80b0)
  cte_get_event+0xb3(ffffff06fe1f6408, 0, 80ba4f0, 0, 0, 1)
  ctfs_endpoint_ioctl+0xf9(ffffff06fe1f6400, 63746502, 80ba4f0, ffffff06f69d5cf8, fffffffffbcefb80, 0)
  ctfs_bu_ioctl+0x4b(ffffff07276d3880, 63746502, 80ba4f0, 102001, ffffff06f69d5cf8, ffffff002e631e68, 0)
  fop_ioctl+0x55(ffffff07276d3880, 63746502, 80ba4f0, 102001, ffffff06f69d5cf8, ffffff002e631e68, 0)
  ioctl+0x9b(a7, 63746502, 80ba4f0)
  sys_syscall32+0xff()

ffffff072422e860 ffffff0723aab038 ffffff06f1344cc0 1 59 ffffff0c5479a064
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff072422e860: ffffff002f4ecb20
[ ffffff002f4ecb20 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff0c5479a064, ffffff0723d815a0, 0)
  cv_wait_sig_swap+0x17(ffffff0c5479a064, ffffff0723d815a0)
  cv_waituntil_sig+0xbd(ffffff0c5479a064, ffffff0723d815a0, 0, 0)
  port_getn+0x39f(ffffff0723d81540, fe6c1fa0, 1, ffffff002f4ece1c, ffffff002f4ecdd0)
  portfs+0x25d(5, 5, fe6c1fa0, 0, 0, 0)
  portfs32+0x78(5, 5, fe6c1fa0, 0, 0, 0)
  sys_syscall32+0xff()

ffffff072422b880 ffffff0723aab038 ffffff06fe224600 1 59 ffffff072422ba6e
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff072422b880: ffffff002e55cc60
[ ffffff002e55cc60 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_timedwait_sig_hires+0x39d(ffffff072422ba6e, ffffff072422ba70, 2540bd751, 1, 4)
  cv_waituntil_sig+0xfa(ffffff072422ba6e, ffffff072422ba70, ffffff002e55ce10, 3)
  lwp_park+0x15e(fe4c3f18, 0)
  syslwp_park+0x63(0, fe4c3f18, 0)
  sys_syscall32+0xff()

ffffff002f66bc40 fffffffffbc2ea80 0 0 60 ffffff07234ae7a0
PC: _resume_from_idle+0xf4  THREAD: evch_delivery_thr()
stack pointer for thread ffffff002f66bc40: ffffff002f66ba90
[ ffffff002f66ba90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff07234ae7a0, ffffff07234ae798)
  evch_delivery_hold+0x70(ffffff07234ae778, ffffff002f66bbc0)
  evch_delivery_thr+0x29e(ffffff07234ae778)
  thread_start+8()

ffffff002f671c40 fffffffffbc2ea80 0 0 60 ffffff0723808a50
PC: _resume_from_idle+0xf4  THREAD: evch_delivery_thr()
stack pointer for thread ffffff002f671c40: ffffff002f671a90
[ ffffff002f671a90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff0723808a50, ffffff0723808a48)
  evch_delivery_hold+0x70(ffffff0723808a28, ffffff002f671bc0)
  evch_delivery_thr+0x29e(ffffff0723808a28)
  thread_start+8()

ffffff0730c8a020 ffffff0723aab038 ffffff06fe229d00 1 59 0
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff0730c8a020: ffffff002eebcd50
[ ffffff002eebcd50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  shuttle_swtch+0x203(fffffffffbd11010)
  door_return+0x214(0, 0, 0, 0, fdccbe00, f5f00)
  doorfs32+0x180(0, 0, 0, fdccbe00, f5f00, a)
  sys_syscall32+0xff()

ffffff07315244e0 ffffff0723aab038 ffffff072b250c00 1 59 0
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff07315244e0: ffffff002f393d50
[ ffffff002f393d50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  shuttle_swtch+0x203(fffffffffbd11010)
  door_return+0x214(f8ce5cc0, 4, 0, 0, f8ce5e00, f5f00)
  doorfs32+0x180(f8ce5cc0, 4, 0, f8ce5e00, f5f00, a)
  sys_syscall32+0xff()

ffffff0730d7d100 ffffff0723aab038 ffffff06fe226200 1 59 0
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff0730d7d100: ffffff002f425d50
[ ffffff002f425d50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  shuttle_swtch+0x203(fffffffffbd11010)
  door_return+0x214(fd3d4cc0, 4, 0, 0, fd3d4e00, f5f00)
  doorfs32+0x180(fd3d4cc0, 4, 0, fd3d4e00, f5f00, a)
  sys_syscall32+0xff()

ffffff07240ae7c0 ffffff0723aab038 ffffff06fe209340 1 59 0
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff07240ae7c0: ffffff002f625d50
[ ffffff002f625d50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  shuttle_swtch+0x203(fffffffffbd11010)
  door_return+0x214(faef9cc8, 4, 0, 0, faef9e00, f5f00)
  doorfs32+0x180(faef9cc8, 4, 0, faef9e00, f5f00, a)
  sys_syscall32+0xff()

ffffff002e56cc40 fffffffffbc2ea80 0 0 60 ffffff07234be878
PC: _resume_from_idle+0xf4  THREAD: evch_delivery_thr()
stack pointer for thread ffffff002e56cc40: ffffff002e56ca90
[ ffffff002e56ca90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff07234be878, ffffff07234be870)
  evch_delivery_hold+0x70(ffffff07234be850, ffffff002e56cbc0)
  evch_delivery_thr+0x29e(ffffff07234be850)
  thread_start+8()

ffffff072422e4c0 ffffff0723aab038 ffffff06fe220c00 1 59 ffffff072422e6ae
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff072422e4c0: ffffff002e58fc50
[ ffffff002e58fc50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff072422e6ae, ffffff072422e6b0, 0)
  cv_wait_sig_swap+0x17(ffffff072422e6ae, ffffff072422e6b0)
  cv_waituntil_sig+0xbd(ffffff072422e6ae, ffffff072422e6b0, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  sys_syscall32+0xff()

ffffff072428f800 ffffff0723aab038 ffffff06fe211b00 1 59 0
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff072428f800: ffffff002f5ccd00
[ ffffff002f5ccd00 _resume_from_idle+0xf4() ]
  swtch_to+0xb6(ffffff07241ec800)
  shuttle_resume+0x2af(ffffff07241ec800, fffffffffbd11010)
  door_call+0x336(2a, fc1e6c28)
  doorfs32+0xa7(2a, fc1e6c28, 0, 0, 0, 3)
  sys_syscall32+0xff()

ffffff072428f460 ffffff0723aab038 ffffff06fe211400 1 59 ffffff072428f64e
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff072428f460: ffffff002f5d2c50
[ ffffff002f5d2c50 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff072428f64e, ffffff072428f650, 0)
  cv_wait_sig_swap+0x17(ffffff072428f64e, ffffff072428f650)
  cv_waituntil_sig+0xbd(ffffff072428f64e, ffffff072428f650, 0, 0)
  lwp_park+0x15e(0, 0)
  syslwp_park+0x63(0, 0, 0)
  sys_syscall32+0xff()

ffffff0723ba9b20 ffffff0723aab038 ffffff06fe225b00 1 59 ffffff0723ba9d0e
PC: _resume_from_idle+0xf4  CMD: /lib/svc/bin/svc.startd -s
stack pointer for thread ffffff0723ba9b20: ffffff002f41fd90
[ ffffff002f41fd90 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait_sig_swap_core+0x1b9(ffffff0723ba9d0e, ffffff06f6a8e1c0, 0)
  cv_wait_sig_swap+0x17(ffffff0723ba9d0e, ffffff06f6a8e1c0)
  sigsuspend+0x101(8047e50)
  sys_syscall32+0xff()

ffffff072353eb00 ffffff0723542018 ffffff06f1347e80 1 59 ffffff0723a74232
PC: _resume_from_idle+0xf4  CMD: /sbin/init -s
stack pointer for thread ffffff072353eb00: ffffff002e6f0c60
[ ffffff002e6f0c60 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_timedwait_sig_hires+0x39d(ffffff0723a74232, ffffff0723a741f8, 568ed47c4759, 1, 3)
  cv_timedwait_sig_hrtime+0x2a(ffffff0723a74232, ffffff0723a741f8, 568ed47c4759)
  poll_common+0x504(806b7a4, 1, ffffff002e6f0e40, 0)
  pollsys+0xe7(806b7a4, 1, 80475d8, 0)
  sys_syscall32+0xff()

ffffff002e6fcc40 ffffff072353d020 ffffff06f13453c0 0 97 ffffff072353d0e0
PC: _resume_from_idle+0xf4  CMD: pageout
stack pointer for thread ffffff002e6fcc40: ffffff002e6fca10
[ ffffff002e6fca10 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(ffffff072353d0e0, fffffffffbcdf3a0)
  pageout_scanner+0x121()
  thread_start+8()

ffffff002e6f6c40 ffffff072353d020 ffffff06f1347080 0 98 fffffffffbcdf3c8
PC: _resume_from_idle+0xf4  CMD: pageout
stack pointer for thread ffffff002e6f6c40: ffffff002e6f6a70
[ ffffff002e6f6a70 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(fffffffffbcdf3c8, fffffffffbcdf3c0)
  pageout+0x1e6()
  thread_start+8()

ffffff002e702c40 ffffff072353a028 ffffff06f1343ec0 0 60 fffffffffbcf0d24
PC: _resume_from_idle+0xf4  CMD: fsflush
stack pointer for thread ffffff002e702c40: ffffff002e702a20
[ ffffff002e702a20 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(fffffffffbcf0d24, fffffffffbca7400)
  fsflush+0x21d()
  thread_start+8()

ffffff002e708c40 fffffffffbc2ea80 0 0 60
fffffffffbcfb5e8
PC: _resume_from_idle+0xf4  THREAD: mod_uninstall_daemon()
stack pointer for thread ffffff002e708c40: ffffff002e708b70
[ ffffff002e708b70 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_wait+0x70(fffffffffbcfb5e8, fffffffffbcf0760)
  mod_uninstall_daemon+0x123()
  thread_start+8()

ffffff002e70ec40 fffffffffbc2ea80 0 0 60 fffffffffbcdf478
PC: _resume_from_idle+0xf4  THREAD: seg_pasync_thread()
stack pointer for thread ffffff002e70ec40: ffffff002e70eaf0
[ ffffff002e70eaf0 _resume_from_idle+0xf4() ]
  swtch+0x141()
  cv_timedwait_hires+0xec(fffffffffbcdf478, fffffffffbcdf470, 3b9aca00, 989680, 0)
  cv_reltimedwait+0x51(fffffffffbcdf478, fffffffffbcdf470, 64, 4)
  seg_pasync_thread+0xd1()
  thread_start+8()

MESSAGE
acpinex: sb@0, acpinex1
acpinex1 is /fw/sb@0
mpt_sas3 at mpt_sas2: scsi-iport v0
mpt_sas3 is /pci@0,0/pci8086,340e@7/pci1000,3050@0/iport@v0
mpt_sas4 at mpt_sas1: scsi-iport v0
mpt_sas4 is /pci@0,0/pci8086,340c@5/pci1000,3050@0/iport@v0
/pci@0,0/pci8086,340e@7/pci1000,3050@0/iport@v0 (mpt_sas3) online
mpt_sas5 at mpt_sas0: scsi-iport v0
/pci@0,0/pci8086,340c@5/pci1000,3050@0/iport@v0 (mpt_sas4) online
mpt_sas5 is /pci@0,0/pci8086,3408@1/pci1000,3050@0/iport@v0
/pci@0,0/pci8086,3408@1/pci1000,3050@0/iport@v0 (mpt_sas5) online
pseudo-device: pseudo1
pseudo1 is /pseudo/zconsnex@1
ISA-device: pit_beep0
pit_beep0 is /pci@0,0/isa@1f/pit_beep
USB 1.10 interface (usbif413c,2002.config1.1) operating at full speed (USB 1.x) on USB 1.10 external hub: input@1, hid2 at bus address 3
Dell USB Keyboard Hub
hid2 is /pci@0,0/pci15d9,f580@1a/hub@1/device@1/input@1
pseudo-device: dcpc0
dcpc0 is /pseudo/dcpc@0
sd2 at scsi_vhci0: unit-address g50014ee0ae19566d: f_sym
sd2 is /scsi_vhci/disk@g50014ee0ae19566d
pseudo-device: fasttrap0
fasttrap0 is /pseudo/fasttrap@0
sd5 at scsi_vhci0: unit-address g50014ee0036e7947: f_sym
sd5 is /scsi_vhci/disk@g50014ee0036e7947
sd18 at scsi_vhci0: unit-address g50014ee0036e8186: f_sym
sd18 is /scsi_vhci/disk@g50014ee0036e8186
pseudo-device: fbt0
fbt0 is /pseudo/fbt@0
sd21 at scsi_vhci0: unit-address g50014ee0036e7e5e: f_sym
sd21 is /scsi_vhci/disk@g50014ee0036e7e5e
pseudo-device: fcp0
fcp0 is /pseudo/fcp@0
pseudo-device: fcsm0
fcsm0 is /pseudo/fcsm@0
pseudo-device: fct0
fct0 is /pseudo/fct@0
sd16 at scsi_vhci0: unit-address g50014ee0036e7db0: f_sym
sd16 is /scsi_vhci/disk@g50014ee0036e7db0
sd12 at scsi_vhci0: unit-address g50014ee0ae195375: f_sym
sd12 is /scsi_vhci/disk@g50014ee0ae195375
sd17 at scsi_vhci0: unit-address g50014ee0036e7e8e: f_sym
sd17 is /scsi_vhci/disk@g50014ee0036e7e8e
sd15 at scsi_vhci0: unit-address g50014ee0ae19576b: f_sym
sd15 is /scsi_vhci/disk@g50014ee0ae19576b
pseudo-device: kvm0
kvm0 is /pseudo/kvm@0
sd10 at scsi_vhci0: unit-address g50014ee0ae1956d7: f_sym
sd10 is /scsi_vhci/disk@g50014ee0ae1956d7
pseudo-device: llc10
llc10 is /pseudo/llc1@0
pseudo-device: lockstat0
lockstat0 is /pseudo/lockstat@0
pseudo-device: lofi0
lofi0 is /pseudo/lofi@0
sd13 at scsi_vhci0: unit-address g50014ee2b3427882: f_sym
sd13 is /scsi_vhci/disk@g50014ee2b3427882
sd14 at scsi_vhci0: unit-address g50014ee0036e7901: f_sym
sd14 is /scsi_vhci/disk@g50014ee0036e7901
pseudo-device: profile0
profile0 is /pseudo/profile@0
pseudo-device: ramdisk1024
ramdisk1024 is /pseudo/ramdisk@1024
sd20 at scsi_vhci0: unit-address g50014ee058c3c152: f_sym
sd20 is /scsi_vhci/disk@g50014ee058c3c152
pseudo-device: sdt0
sdt0 is /pseudo/sdt@0
sd8 at scsi_vhci0: unit-address g50014ee058c3c2cc: f_sym
sd8 is /scsi_vhci/disk@g50014ee058c3c2cc
pseudo-device: stmf0
stmf0 is /pseudo/stmf@0
pseudo-device: systrace0
systrace0 is /pseudo/systrace@0
pseudo-device: bpf0
bpf0 is /pseudo/bpf@0
sd19 at scsi_vhci0: unit-address g50014ee0ae195924: f_sym
sd19 is /scsi_vhci/disk@g50014ee0ae195924
sd3 at scsi_vhci0: unit-address g50014ee0036e7bad: f_sym
sd3 is /scsi_vhci/disk@g50014ee0036e7bad
pseudo-device: fssnap0
fssnap0 is /pseudo/fssnap@0
sd4 at scsi_vhci0: unit-address g50014ee0ae19583f: f_sym
sd4 is /scsi_vhci/disk@g50014ee0ae19583f
IP Filter: v4.1.9, running.
sd9 at scsi_vhci0: unit-address g50014ee058c3c58c: f_sym
sd9 is /scsi_vhci/disk@g50014ee058c3c58c
sd11 at scsi_vhci0: unit-address g50014ee058c3c2c5: f_sym
sd11 is /scsi_vhci/disk@g50014ee058c3c2c5
sd6 at scsi_vhci0: unit-address g50014ee0036e7e6d: f_sym
sd6 is /scsi_vhci/disk@g50014ee0036e7e6d
pseudo-device: nsmb0
nsmb0 is /pseudo/nsmb@0
pseudo-device: pm0
pm0 is /pseudo/pm@0

panic[cpu0]/thread=ffffff0730c5f840:
BAD TRAP: type=e (#pf Page fault) rp=ffffff002ed54b00 addr=e8 occurred in module "zfs" due to a NULL pointer dereference

rm: #pf Page fault
Bad kernel fault at addr=0xe8
pid=29523, pc=0xfffffffff7a4b805, sp=0xffffff002ed54bf0, eflags=0x10246
cr0: 8005003b  cr4: 26f8  cr2: e8  cr3: 5fffc4000  cr8: c

rdi: ffffff0c8e5a5d80  rsi: ffffff088f8e9900  rdx: 0
rcx: 1  r8: 4df5181bb11fe1  r9: ffffff002ed549e8
rax: 0  rbx: 0  rbp: ffffff002ed54d20
r10: fffffffffb85430c  r11: fffffffffb800983  r12: ffffff0724340800
r13: ffffff0c8e5a5d80  r14: ffffff0c510a4980  r15: ffffff0c523303e0
fsb: 0  gsb: fffffffffbc30c40  ds: 4b
es: 4b  fs: 0  gs: 1c3
trp: e  err: 0  rip: fffffffff7a4b805
cs: 30  rfl: 10246  rsp: ffffff002ed54bf0
ss: 38

ffffff002ed549e0 unix:die+df ()
ffffff002ed54af0 unix:trap+db3 ()
ffffff002ed54b00 unix:cmntrap+e6 ()
ffffff002ed54d20 zfs:zfs_remove+395 ()
ffffff002ed54da0 genunix:fop_remove+5b ()
ffffff002ed54e70 genunix:vn_removeat+382 ()
ffffff002ed54ec0 genunix:unlinkat+59 ()
ffffff002ed54f10 unix:brand_sys_sysenter+1c9 ()

syncing file systems...
done
dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
NOTICE: ahci0: ahci_tran_reset_dport port 4 reset port
NOTICE: ahci0: ahci_tran_reset_dport port 5 reset port

stack pointer for thread ffffff0730c5f840: ffffff002ed548e0
  ffffff002ed54950 param_preset()
  ffffff002ed549e0 die+0xdf(e, ffffff002ed54b00, e8, 0)
  ffffff002ed54af0 trap+0xdb3(ffffff002ed54b00, e8, 0)
  ffffff002ed54b00 0xfffffffffb8001d6()
  ffffff002ed54d20 zfs_remove+0x395(ffffff0c6acd4440, ffffff0822395e44, ffffff073154d760, 0, 0)
  ffffff002ed54da0 fop_remove+0x5b(ffffff0c6acd4440, ffffff0822395e44, ffffff073154d760, 0, 0)
  ffffff002ed54e70 vn_removeat+0x382(0, 808d050, 0, 0)
  ffffff002ed54ec0 unlinkat+0x59(ffd19553, 808d050, 0)
  ffffff002ed54f10 _sys_sysenter_post_swapgs+0x149()

THREAD           STATE  SOBJ     COUNT
ffffff002e011c40 SLEEP  CV       679
  swtch+0x141 cv_wait+0x70 taskq_thread_wait+0xbe taskq_thread+0x37c thread_start+8
ffffff07242b1860 SLEEP  SHUTTLE  45
  swtch_to+0xb6 shuttle_resume+0x2af door_return+0x3e0 doorfs32+0x180 sys_syscall32+0xff
ffffff072493e880 SLEEP  CV       42
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd lwp_park+0x15e syslwp_park+0x63 _sys_sysenter_post_swapgs+0x149
ffffff002e09bc40 FREE            25
ffffff002f76dc40 SLEEP  CV       24
  swtch+0x141 cv_wait+0x70 mptsas_doneq_thread+0x103 thread_start+8
ffffff072fbec140 SLEEP  CV       23
  swtch+0x141 cv_timedwait_sig_hires+0x39d cv_waituntil_sig+0xfa nanosleep+0x19f _sys_sysenter_post_swapgs+0x149
ffffff0730c5f4a0 SLEEP  CV       23
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_timedwait_sig_hrtime+0x35 poll_common+0x504 pollsys+0xe7 _sys_sysenter_post_swapgs+0x149
ffffff07276d00c0 SLEEP  SHUTTLE  19
  swtch_to+0xb6 shuttle_resume+0x2af door_return+0x3e0 doorfs32+0x180 _sys_sysenter_post_swapgs+0x149
ffffff07247a08c0 SLEEP  SHUTTLE  13
  swtch+0x141 shuttle_swtch+0x203 door_return+0x214 doorfs32+0x180 _sys_sysenter_post_swapgs+0x149
ffffff002e47bc40 SLEEP  CV       12
  swtch+0x141 cv_wait+0x70 evch_delivery_hold+0x70 evch_delivery_thr+0x29e thread_start+8
ffffff002f0c4c40 SLEEP  CV       12
  swtch+0x141 cv_wait+0x70 mac_soft_ring_worker+0xb1 thread_start+8
ffffff002e113c40 SLEEP  CV       12
  swtch+0x141 cv_wait+0x70 md_daemon+0xd4 start_daemon+0x16 thread_start+8
ffffff002f695c40 SLEEP  CV       8
  swtch+0x141 cv_wait+0x70 squeue_polling_thread+0xa9 thread_start+8
ffffff002f68fc40 SLEEP  CV       8
  swtch+0x141 cv_wait+0x70 squeue_worker+0x104 thread_start+8
ffffff0723ba93e0 SLEEP  SHUTTLE  8
  swtch+0x141 shuttle_swtch+0x203 door_return+0x214 doorfs32+0x180 sys_syscall32+0xff
ffffff07276c80e0 SLEEP  USER_PI  8
  swtch+0x141 turnstile_block+0x555 lwp_upimutex_lock+0x1db lwp_mutex_timedlock+0x1dd _sys_sysenter_post_swapgs+0x149
ffffff0730ca2ba0 SLEEP  CV       7
  swtch+0x141 cv_timedwait_sig_hires+0x39d cv_timedwait_sig_hrtime+0x2a poll_common+0x504 pollsys+0xe7 _sys_sysenter_post_swapgs+0x149
ffffff07276dc080 SLEEP  CV       7
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 pause+0x45 _sys_sysenter_post_swapgs+0x149
ffffff07242a3080 SLEEP  CV       6
  swtch+0x141 cv_timedwait_sig_hires+0x39d cv_waituntil_sig+0xfa lwp_park+0x15e syslwp_park+0x63 _sys_sysenter_post_swapgs+0x149
ffffff002f0acc40 SLEEP  CV       6
  swtch+0x141 cv_wait+0x70 mac_srs_worker+0x141 thread_start+8
ffffff002e0c5c40 FREE            5
  apic_setspl+0x5a dosoftint_epilog+0xc5 dispatch_softint+0x49
ffffff07241ec800 SLEEP  CV       5
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd lwp_park+0x15e syslwp_park+0x63 sys_syscall32+0xff
ffffff0724940c00 SLEEP  CV       5
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 sigsuspend+0x101 _sys_sysenter_post_swapgs+0x149
ffffff002f65cc40 SLEEP  CV       4
  swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 kcfpool_svc+0x84 thread_start+8
ffffff002f473c40 SLEEP  CV       4
  swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 taskq_thread_wait+0x64 taskq_d_thread+0x145 thread_start+8
ffffff072493ac40 SLEEP  CV       4
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd sigtimedwait+0x19c _sys_sysenter_post_swapgs+0x149
ffffff0730c0d420 SLEEP  CV       4
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 waitid+0x24d waitsys32+0x36 _sys_sysenter_post_swapgs+0x149
ffffff002e487c40 ONPROC          4
  swtch+0x141 cpu_pause+0x80 thread_start+8
ffffff07246da0c0 SLEEP  CV       3
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 pause+0x45 sys_syscall32+0xff
ffffff002e538c40 FREE            2
  apic_send_ipi+0x73 send_dirint+0x18 cbe_xcall+0xac cyclic_reprogram_here+0x46 cyclic_reprogram+0x68 apic_setspl+0x5a dosoftint_epilog+0xc5 dispatch_softint+0x49
ffffff002e283c40 SLEEP  CV       2
  swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c txg_thread_wait+0x5f txg_sync_thread+0x111 thread_start+8
ffffff07246daba0 SLEEP  CV       2
  swtch+0x141 cv_timedwait_sig_hires+0x39d cv_waituntil_sig+0xfa lwp_park+0x15e syslwp_park+0x63 sys_syscall32+0xff
ffffff002ef8ac40 SLEEP  CV       2
  swtch+0x141 cv_wait+0x70 i_mac_notify_thread+0xee thread_start+8
ffffff002f0b8c40 SLEEP  CV       2
  swtch+0x141 cv_wait+0x70 mac_rx_srs_poll_ring+0xad thread_start+8
ffffff002e24ac40 SLEEP  CV       2
  swtch+0x141 cv_wait+0x70 spa_thread+0x1db thread_start+8
ffffff002e760c40 SLEEP  CV       2
  swtch+0x141 cv_wait+0x70 txg_thread_wait+0xaf txg_quiesce_thread+0x106 thread_start+8
ffffff0723ba9780 SLEEP  CV       2
  swtch+0x141 cv_wait_sig+0x185 cte_get_event+0xb3 ctfs_endpoint_ioctl+0xf9 ctfs_bu_ioctl+0x4b fop_ioctl+0x55 ioctl+0x9b sys_syscall32+0xff
ffffff073051c8c0 SLEEP  CV       2
  swtch+0x141 cv_wait_sig+0x185 door_unref+0x94 doorfs32+0xf5 _sys_sysenter_post_swapgs+0x149
ffffff0724775880 SLEEP  CV       2
  swtch+0x141 cv_wait_sig+0x185 str_cv_wait+0x27 strwaitq+0x2c3 strread+0x144 spec_read+0x66 fop_read+0x5b read+0x2a7 read32+0x1e _sys_sysenter_post_swapgs+0x149
ffffff002e4c3c40 FREE            1
  0 apic_intr_exit+0x45 intr_thread_epilog+0xce dispatch_hardint+0x48
ffffff002e0bfc40 FREE            1
  apic_intr_exit+0x45 intr_thread_epilog+0xce dispatch_hardint+0x48
ffffff002e53ec40 FREE            1
  apic_setspl+0x5a do_splx+0x65 disp_lock_exit+0x47 cv_signal+0x8a taskq_dispatch_ent+0xd7 spa_taskq_dispatch_ent+0x80 zio_taskq_dispatch+0x77 zio_interrupt+0x18 vdev_disk_io_intr+0x4f biodone+0x35 sd_buf_iodone+0x65 sd_mapblockaddr_iodone+0x45 sd_return_command+0x136 sdintr+0x3bf scsi_hba_pkt_comp+0x63 vhci_intr+0x21f mptsas_pkt_comp+0x2b uhci_get_sw_frame_number+0x2f apic_intr_exit+0x69 intr_thread_epilog+0xce dispatch_hardint+0x48
ffffff002e5d0c40 FREE            1
  ehci_create_done_itd_list+0x28 ehci_traverse_active_isoc_list+0x43 apic_intr_exit+0x69 intr_thread_epilog+0xce dispatch_hardint+0x48
ffffff002f290c40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 idm_wd_thread+0x203 thread_start+8
ffffff002e137c40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 kcfpoold+0xf6 thread_start+8
ffffff002e131c40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 page_capture_thread+0xb1 thread_start+8
ffffff002e1c1c40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 sata_event_daemon+0xff thread_start+8
ffffff002e70ec40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 seg_pasync_thread+0xd1 thread_start+8
ffffff002f4bfc40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 stmf_svc_timeout+0x112 stmf_svc+0x1c0 taskq_thread+0x2d0 thread_start+8
ffffff002e149c40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c arc_reclaim_thread+0x13e thread_start+8
ffffff002e14fc40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c arc_user_evicts_thread+0xd9 thread_start+8
ffffff002e29bc40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c dce_reclaim_worker+0xab thread_start+8
ffffff002e155c40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c l2arc_feed_thread+0xad thread_start+8
ffffff002f22ec40 SLEEP  CV       1
  swtch+0x141 cv_timedwait_hires+0xec cv_timedwait+0x5c zone_status_timedwait+0x6b auto_do_unmount+0xc7 thread_start+8
ffffff07315e3c00 SLEEP  CV       1
  swtch+0x141 cv_timedwait_sig_hires+0x39d cv_timedwait_sig_hrtime+0x2a poll_common+0x439 pollsys+0xe7 _sys_sysenter_post_swapgs+0x149
ffffff072353eb00 SLEEP  CV       1
  swtch+0x141 cv_timedwait_sig_hires+0x39d cv_timedwait_sig_hrtime+0x2a poll_common+0x504 pollsys+0xe7 sys_syscall32+0xff
ffffff07241eb0e0 SLEEP  CV       1
  swtch+0x141 cv_timedwait_sig_hires+0x39d cv_waituntil_sig+0xfa port_getn+0x39f portfs+0x1c0 portfs32+0x40 sys_syscall32+0xff
ffffff002f1ddc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 crypto_bufcall_service+0x8d thread_start+8
ffffff002ec70c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 dld_taskq_dispatch+0x115 thread_start+8
ffffff002e702c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 fsflush+0x21d thread_start+8
ffffff002f5b4c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 ibcm_process_tlist+0x1e1 thread_start+8
ffffff002e2a1c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 ill_taskq_dispatch+0x155 thread_start+8
ffffff002e289c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 ipsec_loader+0x149 thread_start+8
ffffff002fc35c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 log_event_deliver+0x1b3 thread_start+8
ffffff002e708c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 mod_uninstall_daemon+0x123 thread_start+8
ffffff002e6f6c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 pageout+0x1e6 thread_start+8
ffffff002e6fcc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 pageout_scanner+0x121 thread_start+8
ffffff002e15bc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 pm_dep_thread+0xbd thread_start+8
ffffff002e023c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 scsi_hba_barrier_daemon+0xd6 thread_start+8
ffffff002e029c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 scsi_lunchg1_daemon+0x1de thread_start+8
ffffff002e02fc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 scsi_lunchg2_daemon+0x121 thread_start+8
ffffff003031ac40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 smb_thread_continue_timedwait_locked+0x5d smb_thread_continue+0x2d smb_kshare_unexport_thread+0x28 smb_thread_entry_point+0x91 thread_start+8
ffffff0030362c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 smb_thread_continue_timedwait_locked+0x5d smb_thread_continue+0x2d smb_oplock_break_thread+0x20 smb_thread_entry_point+0x91 thread_start+8
ffffff002e46fc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 softmac_taskq_dispatch+0x11d thread_start+8
ffffff002e11fc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 streams_bufcall_service+0x8d thread_start+8
ffffff002e125c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 streams_qbkgrnd_service+0x151 thread_start+8
ffffff002e12bc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 streams_sqbkgrnd_service+0xe5 thread_start+8
ffffff002e475c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 task_commit+0xd9 thread_start+8
ffffff002e00bc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 thread_reaper+0xb9 thread_start+8
ffffff00300d9c40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 ufs_thread_idle+0x147 thread_start+8
ffffff00300dfc40 SLEEP  CV       1
  swtch+0x141 cv_wait+0x70 ufs_thread_run+0x80 ufs_thread_hlock+0x73 thread_start+8
ffffff0723f9cb40 SLEEP  CV       1
  swtch+0x141 cv_wait_sig+0x185 door_unref+0x94 doorfs32+0xf5 sys_syscall32+0xff
ffffff073051c180 SLEEP  CV       1
  swtch+0x141 cv_wait_sig+0x185 so_dequeue_msg+0x2f7 so_recvmsg+0x249 socket_recvmsg+0x33 socket_vop_read+0x5f fop_read+0x5b read+0x2a7 read32+0x1e _sys_sysenter_post_swapgs+0x149
ffffff072422e860 SLEEP  CV       1
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd port_getn+0x39f portfs+0x25d portfs32+0x78 sys_syscall32+0xff
ffffff0723ba9040 SLEEP  CV       1
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 cv_waituntil_sig+0xbd sigtimedwait+0x19c sys_syscall32+0xff
ffffff0730ca2800 SLEEP  CV       1
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 fifo_read+0xc9 fop_read+0x5b read+0x2a7 read32+0x1e _sys_sysenter_post_swapgs+0x149
ffffff0723ba9b20 SLEEP  CV       1
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 sigsuspend+0x101 sys_syscall32+0xff
ffffff072464c420 SLEEP  CV       1
  swtch+0x141 cv_wait_sig_swap_core+0x1b9 cv_wait_sig_swap+0x17 sowaitconnind+0x73 sotpi_accept+0xaa socket_accept+0x1f accept+0x101 _sys_sysenter_post_swapgs+0x149
ffffff072428f800 SLEEP  SHUTTLE  1
swtch_to+0xb6 shuttle_resume+0x2af door_call+0x336 doorfs32+0xa7 sys_syscall32+0xff ffffff002f348c40 RUN 1 swtch+0x141 cv_timedwait_hires+0xec cv_reltimedwait+0x51 taskq_thread_wait+0x64 taskq_d_thread+0x145 thread_start+8 ffffff002e493c40 ONPROC 1 0xffffff072349e500 do_splx+0x65 xc_common+0x221 apic_setspl+0x5a 0x10 0xf acpi_cpu_cstate+0x11b cpu_acpi_idle+0x8d cpu_idle_adaptive+0x13 idle+0xa7 thread_start+8 ffffff002e50ec40 ONPROC 1 apic_intr_exit+0x45 apic_intr_exit+0x45 hilevel_intr_epilog+0xc8 do_interrupt+0xff _sys_rtt_ints_disabled+8 splr+0x6a apic_setspl+0x5a apic_setspl+0x5a 0x10 0xf acpi_cpu_cstate+0x11b cpu_acpi_idle+0x8d cpu_idle_adaptive+0x13 idle+0xa7 thread_start+8 ffffff002e005c40 ONPROC 1 swtch+0x141 idle+0xbc thread_start+8 ffffff002e5a6c40 ONPROC 1 xc_serv+0x247 xc_common+0x221 apic_setspl+0x5a dosoftint_prolog+0x9d 0xf acpi_cpu_cstate+0x11b cpu_acpi_idle+0x8d cpu_idle_adaptive+0x13 idle+0xa7 thread_start+8 fffffffffbc2f9e0 STOPPED 1 swtch+0x141 sched+0x835 main+0x46c ffffff0730c5f840 PANIC 1 param_preset die+0xdf trap+0xdb3 0xfffffffffb8001d6 zfs_remove+0x395 fop_remove+0x5b vn_removeat+0x382 unlinkat+0x59 _sys_sysenter_post_swapgs+0x149 From omen.wild at gmail.com Mon Sep 14 22:37:04 2015 From: omen.wild at gmail.com (Omen Wild) Date: Mon, 14 Sep 2015 15:37:04 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150914220930.GA30739@mandarb.com> References: <20150914220930.GA30739@mandarb.com> Message-ID: <20150914223704.GB30739@mandarb.com> Apologies, this email escaped without a subject line. I'm hoping this one, coupled with threading, will help ameliorate the problem. -- "What is this talk of 'release'? Klingons do not make software 'releases'. Our software 'escapes,' leaving a bloody trail of designers and quality assurance people in its wake." 
From danmcd at omniti.com Mon Sep 14 22:41:04 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 14 Sep 2015 18:41:04 -0400 Subject: [OmniOS-discuss] (no subject) In-Reply-To: <20150914220930.GA30739@mandarb.com> References: <20150914220930.GA30739@mandarb.com> Message-ID: One thing you can try is to overwrite the file and then remove it. Someone else reported a similar bug, and it turned out to be corrupt metadata or extended attributes. Do you have a URL for the panic? Also, please try today's update. Dan Sent from my iPhone (typos, autocorrect, and all) > On Sep 14, 2015, at 6:09 PM, Omen Wild wrote: > > [ I originally posted this to the Illumos ZFS list but got no responses. ] > > We have an up to date OmniOS system that panics every time we try to > unlink a specific file. We have a kernel pages-only crashdump and can > reproduce easily. I can make the panic files available to an interested > party. > > A zpool scrub turned up no errors or repairs. > > Mostly we are wondering how to clear the corruption off disk and worried > what else might be corrupt since the scrub turns up no issues. > > Details below. > > When we first encountered the issue we were running with a version from > mid-July: zfs at 0.5.11,5.11-0.151014:20150417T182430Z . > > After the first couple panics we upgraded to the newest (as of a couple > days ago, zfs at 0.5.11,5.11-0.151014:20150818T161042Z) which still panics. > > # uname -a > SunOS zaphod 5.11 omnios-d08e0e5 i86pc i386 i86pc > > The error looks like this: > BAD TRAP: type=e (#pf Page fault) rp=ffffff002ed54b00 addr=e8 occurred in module "zfs" due to a NULL pointer dereference > > The panic stack looks like this in every case: > param_preset > die+0xdf > trap+0xdb3 > 0xfffffffffb8001d6 > zfs_remove+0x395 > fop_remove+0x5b > vn_removeat+0x382 > unlinkat+0x59 > _sys_sysenter_post_swapgs+0x149 > > It is triggered by trying to rm a specific file.
ls'ing the file gives > the error "Operation not applicable", ls'ing the directory shows ? in > place of the data: > > ?????????? ? ? ? ? ? filename.html > > I have attached the output of: > echo '::panicinfo\n::cpuinfo -v\n::threadlist -v 10\n::msgbuf\n*panic_thread::findstack -v\n::stacks' | mdb 7 > > I am a Solaris/OI/OmniOS debugging neophyte, but will happily run any > commands recommended. > > Thanks > Omen > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From omen.wild at gmail.com Mon Sep 14 23:11:14 2015 From: omen.wild at gmail.com (Omen Wild) Date: Mon, 14 Sep 2015 16:11:14 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: References: <20150914220930.GA30739@mandarb.com> Message-ID: <20150914231114.GC30739@mandarb.com> Quoting Dan McDonald on Mon, Sep 14 18:41: > > One thing you can try is to overwrite the file and then remove it. > Someone else reported a similar vug, and it turned out to be corrupt > metadata or extended attributes. We will try it, probably tomorrow. This is a backup server and we have a long running job in progress. > Do you have a URL for the panic? I will send it off-list. > Also, please try today's update. I have run the update, but we will have to wait until at least tomorrow to reboot. Thanks! From henson at acm.org Tue Sep 15 01:24:49 2015 From: henson at acm.org (Paul B. Henson) Date: Mon, 14 Sep 2015 18:24:49 -0700 Subject: [OmniOS-discuss] r151014 users - beware of illumos 6214 - steps to check and repair... In-Reply-To: <55F6F3DF.3060404@hfg-gmuend.de> References: <1317F4F3-2210-426C-8686-500CE0FBDFAC@omniti.com> <55F6F3DF.3060404@hfg-gmuend.de> Message-ID: <004a01d0ef55$508cae80$f1a60b80$@acm.org> > From: Guenther Alka > Sent: Monday, September 14, 2015 9:21 AM > > 1. what is the recommended way to detect possible problems > a. run scrub? 
seems useless I don't think it is necessarily useless, it might detect a problem. However, from what I understand there might be a problem it doesn't detect. So it can be considered verification there is a problem, but not verification that there isn't. > b. run zdb pool and check for what I ran a basic zdb and also a 'zdb -bbccsv', the former seems to be core dumping on parsing the history, but the latter ran successfully with no issues. If I understood George correctly, 'zdb -bbccsv' should be fairly reliable on finding metadata corruption as it traverses all of the blocks. > 2. when using an L2Arc and there is no obvious error detected by scrub > or zdb > a. trash the pool and restore from backup via rsync with possible > file corruptions but ZFS structure is 100% ok then > b. keep the pool and hope that there is no metadata corruption? > c. some action to verify that at least the pool is ok: .... Hmm, at this point given a successful scrub and successful zdb runs I'm going to keep my fingers crossed that I have no corruption. I was only running the buggy code for about a month, without a particularly high load, so hopefully I got lucky. > 3. when using an L2Arc and there is an error detected by scrub or zdb [...] > b. keep the pool and hope that there is no metadata corruption If the scrub or zdb detect errors, it is possible your box might panic at some point, or be unable to import the pool after a reboot. So in that case, I don't think just keeping it is advisable :). I'm not sure if there is any way to fix it or if the best case is to try to restore it or temporarily transfer the data elsewhere, re-create it, and put it back. From henson at acm.org Tue Sep 15 01:38:38 2015 From: henson at acm.org (Paul B.
Henson) Date: Mon, 14 Sep 2015 18:38:38 -0700 Subject: [OmniOS-discuss] OmniOS Bloody update In-Reply-To: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> References: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> Message-ID: <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> > From: Dan McDonald > Sent: Monday, September 14, 2015 2:58 PM > > - OpenSSH is now at version 7.1p1. Has the packaging been fixed in bloody so you can actually install this now :)? If so, any thoughts on potentially back porting that to the current LTS :)? > - An additional pair of ZFS fixes from Delphix not yet upstreamed in illumos-gate. That would be DLPX-36997 and DLPX-35372? Do you happen to know if Delphix has their issue tracker accessible to the Internet if somebody wanted to take a look in more detail at these? Google didn't provide anything of any obvious use. Thanks! From henson at acm.org Tue Sep 15 01:44:22 2015 From: henson at acm.org (Paul B. Henson) Date: Mon, 14 Sep 2015 18:44:22 -0700 Subject: [OmniOS-discuss] OmniOS r151014 update - needs reboot! In-Reply-To: <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> References: <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> Message-ID: <005101d0ef58$0be0b260$23a21720$@acm.org> > From: Dan McDonald > Sent: Monday, September 14, 2015 2:58 PM > > Most importantly, this update fixes illumos 6214 for OmniOS. You should be > able to restore your L2ARC devices using the method I mentioned in my last e- > mail: Call me a scaredy-cat, but I think I might wait a bit for that to burn in before I reenable my cache :). > Because of the changes to zfs, this update requires that you reboot your system. And then despite successful scrub and zdb runs I'm still nervous about the pool not being successfully imported after a reboot 8-/ , so I might put this off until I've got a chunk of free time in case of unexpected recovery issues. Thanks much for the quick turnaround though!
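[Editor's note] As a concrete reference for the checks being traded back and forth in this thread, here is a rough sketch of the scrub-plus-zdb verification pass. The pool name "tank" is a placeholder, and the script is guarded so it only invokes the ZFS tools where they exist; treat it as an illustration, not an official procedure.

```shell
#!/bin/sh
# Sketch of the pool-verification pass discussed in this thread.
# "tank" is a hypothetical pool name -- substitute your own.
POOL="${POOL:-tank}"

if command -v zpool >/dev/null 2>&1 && command -v zdb >/dev/null 2>&1; then
    # Start a scrub and wait for it to complete, then show any errors found.
    zpool scrub "$POOL"
    while zpool status "$POOL" | grep -q 'scrub in progress'; do
        sleep 60
    done
    zpool status -v "$POOL"

    # Traverse every block and verify every checksum (this can take hours).
    zdb -bbccsv "$POOL"
else
    echo "zpool/zdb not found; run this on the OmniOS host itself"
fi
```

Note the caveat raised elsewhere in the thread: a block whose corrupted contents were checksummed at write time will still verify cleanly, so a clean pass is necessary but not sufficient evidence.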
From henson at acm.org Tue Sep 15 01:46:47 2015 From: henson at acm.org (Paul B. Henson) Date: Mon, 14 Sep 2015 18:46:47 -0700 Subject: [OmniOS-discuss] (no subject) In-Reply-To: <20150914220930.GA30739@mandarb.com> References: <20150914220930.GA30739@mandarb.com> Message-ID: <005201d0ef58$622cfb60$2686f220$@acm.org> > From: Omen Wild > Sent: Monday, September 14, 2015 3:10 PM > > Mostly we are wondering how to clear the corruption off disk and worried > what else might be corrupt since the scrub turns up no issues. While looking into possible corruption from the recent L2 cache bug it seems that running 'zdb -bbccsv' is a good test for finding corruption as it looks at all of the blocks and verifies all of the checksums. From henson at acm.org Tue Sep 15 01:50:08 2015 From: henson at acm.org (Paul B. Henson) Date: Mon, 14 Sep 2015 18:50:08 -0700 Subject: [OmniOS-discuss] zdb -h bug? Message-ID: <005801d0ef58$da1898f0$8e49cad0$@acm.org> While trying to look for corruption from the recent L2 cache bug, I noticed that zdb core dumps trying to list the history on both my data pool (which had L2 cache) and my rpool (which did not). I'm wondering if there is some bug with zdb that is causing this as opposed to corruption of the pool. I'd be curious as to what 'zdb -h' does on the various pools out there, particularly ones created prior to 014 then subsequently upgraded to 014 but without large_blocks being enabled (as those are the characteristics of my pools :) ). If I get a little time I'm going to try to build a 012 box and simulate how my pools got to where they are and see if I can reproduce it. 
From danmcd at omniti.com Tue Sep 15 03:46:11 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 14 Sep 2015 23:46:11 -0400 Subject: [OmniOS-discuss] OmniOS Bloody update In-Reply-To: <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> References: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> Message-ID: <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> There have been some fixes there, but I'm not sure if it's all there. I do know one has to use --reject options to make a switch. Lauri "lotheac" Tirkkonen can provide more details. Also note - there is an effort to replace sunssh with OpenSSH altogether. Dan Sent from my iPhone (typos, autocorrect, and all) On Sep 14, 2015, at 9:38 PM, Paul B. Henson wrote: >> From: Dan McDonald >> Sent: Monday, September 14, 2015 2:58 PM >> >> - OpenSSH is now at version 7.1p1. > > Has the packaging been fixed in bloody so you can actually install this now > :)? If so, any thoughts on potentially back porting that to the current LTS > :)? > >> - An additional pair of ZFS fixes from Delphix not yet upstreamed in > illumos-gate. > > That would be DLPX-36997 and DLPX-35372? Do you happen to know if Delphix > has their issue tracker accessible to the Internet if somebody wanted to > take a look in more detail at these? Google didn't provide anything of any > obvious use. > > Thanks! > > From stephan.budach at JVM.DE Tue Sep 15 04:39:57 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Tue, 15 Sep 2015 06:39:57 +0200 Subject: [OmniOS-discuss] OmniOS r151014 update - needs reboot! In-Reply-To: <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> References: <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> Message-ID: <55F7A11D.3060901@jvm.de> Hi Dan, I will apply the upgrade to a couple of my OmniOS boxes today and give it a go.
Thanks, Stephan From stephan.budach at JVM.DE Tue Sep 15 04:59:31 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Tue, 15 Sep 2015 06:59:31 +0200 Subject: [OmniOS-discuss] (no subject) In-Reply-To: <005201d0ef58$622cfb60$2686f220$@acm.org> References: <20150914220930.GA30739@mandarb.com> <005201d0ef58$622cfb60$2686f220$@acm.org> Message-ID: <55F7A5B3.4090509@jvm.de> Am 15.09.15 um 03:46 schrieb Paul B. Henson: >> From: Omen Wild >> Sent: Monday, September 14, 2015 3:10 PM >> >> Mostly we are wondering how to clear the corruption off disk and worried >> what else might be corrupt since the scrub turns up no issues. > While looking into possible corruption from the recent L2 cache bug it seems > that running 'zdb -bbccsv' is a good test for finding corruption as it looks > at all of the blocks and verifies all of the checksums. > > _______________________________________________ As George Wilson wrote on the ZFS mailing list: " Unfortunately, if the corruption impacts a data block then we won't be able to detect it.". So, I am afraid apart from metadata and indirect blocks corruption, there's no way to even detect a corruption inside a data block, as the checksum fits. I think, the best one can do is to run a scrub and act on the results of that. If scrub reports no errors, one can live with that or one would need to think of options to reference the data with known, good data from that pool, e.g. from a backup prior to 6214 having been introduced, but depending on the sheer amount of data or the type of it, that might not be even possible. Cheers, Stephan From omnios at citrus-it.net Tue Sep 15 08:40:34 2015 From: omnios at citrus-it.net (Andy Fiddaman) Date: Tue, 15 Sep 2015 08:40:34 +0000 (UTC) Subject: [OmniOS-discuss] (no subject) In-Reply-To: <005201d0ef58$622cfb60$2686f220$@acm.org> References: <20150914220930.GA30739@mandarb.com> <005201d0ef58$622cfb60$2686f220$@acm.org> Message-ID: On Mon, 14 Sep 2015, Paul B.
Henson wrote: ; > From: Omen Wild ; > Sent: Monday, September 14, 2015 3:10 PM ; > ; > Mostly we are wondering how to clear the corruption off disk and worried ; > what else might be corrupt since the scrub turns up no issues. ; ; While looking into possible corruption from the recent L2 cache bug it seems ; that running 'zdb -bbccsv' is a good test for finding corruption as it looks ; at all of the blocks and verifies all of the checksums. zpool scrub is fine but I get lots of messages like this when I run zdb -bbccsv zdb_blkptr_cb: Got error 50 reading <3077, 212, 0, 52> DVA[0]=<0:14d528f8200:1ce00> [L0 ZFS plain file] fletcher4 lz4 LE contiguous unique single size=20000L/13200P birth=3708038L/3708038P fill=1 cksum=1717c7d38f62:374184e099ada9b:a86cf60db2f68605:2be4a1817f9f4b1d -- skipping Is this an indicator of corruption in the pool? It's going to be a right royal pain to rebuild them if I need to! Thanks, Andy -- Citrus IT Limited | +44 (0)870 199 8000 | enquiries at citrus-it.co.uk Rock House Farm | Green Moor | Wortley | Sheffield | S35 7DQ Registered in England and Wales | Company number 4899123 From johan.kragsterman at capvert.se Tue Sep 15 08:17:03 2015 From: johan.kragsterman at capvert.se (Johan Kragsterman) Date: Tue, 15 Sep 2015 10:17:03 +0200 Subject: [OmniOS-discuss] Ang: Re: OmniOS r151014 update - needs reboot! In-Reply-To: <005101d0ef58$0be0b260$23a21720$@acm.org> References: <005101d0ef58$0be0b260$23a21720$@acm.org>, <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> Message-ID: Hi! -----"OmniOS-discuss" skrev: ----- Till: "'Dan McDonald'" , "'omnios-discuss'" Från: "Paul B. Henson" Sänt av: "OmniOS-discuss" Datum: 2015-09-15 04:01 Ärende: Re: [OmniOS-discuss] OmniOS r151014 update - needs reboot! > From: Dan McDonald > Sent: Monday, September 14, 2015 2:58 PM > > Most importantly, this update fixes illumos 6214 for OmniOS.
You should be > able to restore your L2ARC devices using the method I mentioned in my last e- > mail: Call me a scaredy-cat, but I think I might wait a bit for that to burn in before I reenable my cache :). > Because of the changes to zfs, this update requires that you reboot your system. And then despite successful scrub and zdb runs I'm still nervous about the pool not being successfully imported after a reboot 8-/ , so I might put this off until I've got a chunk of free time in case of unexpected recovery issues. Thanks much for the quick turnaround though! Like you, Paul, I feel a little bit uneasy to reboot... Would be nice to hear some success stories from people who had an L2ARC and did this upgrade....tell us, please.... Regards Johan _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss From stephan.budach at JVM.DE Tue Sep 15 10:00:46 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Tue, 15 Sep 2015 12:00:46 +0200 Subject: [OmniOS-discuss] Ang: Re: OmniOS r151014 update - needs reboot! In-Reply-To: References: <005101d0ef58$0be0b260$23a21720$@acm.org>, <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> Message-ID: <55F7EC4E.7050203@jvm.de> Am 15.09.15 um 10:17 schrieb Johan Kragsterman: > Hi! > > > -----"OmniOS-discuss" skrev: ----- > Till: "'Dan McDonald'" , "'omnios-discuss'" > Från: "Paul B. Henson" > Sänt av: "OmniOS-discuss" > Datum: 2015-09-15 04:01 > Ärende: Re: [OmniOS-discuss] OmniOS r151014 update - needs reboot! > >> From: Dan McDonald >> Sent: Monday, September 14, 2015 2:58 PM >> >> Most importantly, this update fixes illumos 6214 for OmniOS. You should > be >> able to restore your L2ARC devices using the method I mentioned in my last > e- >> mail: > Call me a scaredy-cat, but I think I might wait a bit for that to burn in > before I reenable my cache :).
> >> Because of the changes to zfs, this update requires that you reboot your > system. > > And then despite successful scrub and zdb runs I'm still nervous about the > pool not being successfully imported after a reboot 8-/ , so I might > put this off until I've got a chunk of free time in case of unexpected > recovery issues. > > Thanks much for the quick turnaround though! > > > > > > Like you, Paul, I feel a little bit uneasy to reboot... > > Would be nice to hear some success stories from people who had an L2ARC and did this upgrade....tell us, please.... > > Regards Johan > > I updated one of my OmniOS boxes and performed a reboot -p, which went smoothly. Afterwards, I re-added my L2ARC devices again to the two zpools running on that host. All without any issue. I will follow-up with at least two other nodes today, but that will be later on in the afternoon. Cheers, Stephan From johan.kragsterman at capvert.se Tue Sep 15 10:18:16 2015 From: johan.kragsterman at capvert.se (Johan Kragsterman) Date: Tue, 15 Sep 2015 12:18:16 +0200 Subject: [OmniOS-discuss] Ang: Re: Ang: Re: OmniOS r151014 update - needs reboot! In-Reply-To: <55F7EC4E.7050203@jvm.de> References: <55F7EC4E.7050203@jvm.de>, <005101d0ef58$0be0b260$23a21720$@acm.org>, <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> Message-ID: Hi! -----"OmniOS-discuss" skrev: ----- Till: Från: Stephan Budach Sänt av: "OmniOS-discuss" Datum: 2015-09-15 12:02 Ärende: Re: [OmniOS-discuss] Ang: Re: OmniOS r151014 update - needs reboot! Am 15.09.15 um 10:17 schrieb Johan Kragsterman: > Hi! > > > -----"OmniOS-discuss" skrev: ----- > Till: "'Dan McDonald'" , "'omnios-discuss'" > Från: "Paul B. Henson" > Sänt av: "OmniOS-discuss" > Datum: 2015-09-15 04:01 > Ärende: Re: [OmniOS-discuss] OmniOS r151014 update - needs reboot! > >> From: Dan McDonald >> Sent: Monday, September 14, 2015 2:58 PM >> >> Most importantly, this update fixes illumos 6214 for OmniOS.
You should > be >> able to restore your L2ARC devices using the method I mentioned in my last > e- >> mail: > Call me a scaredy-cat, but I think I might wait a bit for that to burn in > before I reenable my cache :). > >> Because of the changes to zfs, this update requires that you reboot your > system. > > And then despite successful scrub and zdb runs I'm still nervous about the > pool not being successfully imported after a reboot 8-/ , so I might > put this off until I've got a chunk of free time in case of unexpected > recovery issues. > > Thanks much for the quick turnaround though! > > > > > > Like you, Paul, I feel a little bit uneasy to reboot... > > Would be nice to hear some success stories from people who had an L2ARC and did this upgrade....tell us, please.... > > Regards Johan > > I updated one of my OmniOS boxes and performed a reboot -p, which went smoothly. Afterwards, I re-added my L2ARC devices again to the two zpools running on that host. All without any issue. I will follow-up with at least two other nodes today, but that will be later on in the afternoon. Cheers, Stephan Thanks for the report, Stephan! Regards Johan _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss From chip at innovates.com Tue Sep 15 12:32:12 2015 From: chip at innovates.com (Schweiss, Chip) Date: Tue, 15 Sep 2015 07:32:12 -0500 Subject: [OmniOS-discuss] OmniOS Bloody update In-Reply-To: <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> References: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> Message-ID: On Mon, Sep 14, 2015 at 10:46 PM, Dan McDonald wrote: > Lauri "lotheac" Tirkkonen can provide more details. Also note - there is > an effort to replace sunssh with OpenSSH altogether. > I'll second that request to port OpenSSH into r151014 when it's ready.
The SunSSH keeps giving me fits. I understand Joyent has made some great headway in the effort to get OpenSSH into Illumos. Looking forward to the day I don't have to script around SunSSH problems. -Chip Sep 14, 2015, at 9:38 PM, Paul B. Henson wrote: >> From: Dan McDonald >> Sent: Monday, September 14, 2015 2:58 PM >> >> - OpenSSH is now at version 7.1p1. > > Has the packaging been fixed in bloody so you can actually install this now > :)? If so, any thoughts on potentially back porting that to the current LTS > :)? > >> - An additional pair of ZFS fixes from Delphix not yet upstreamed in > illumos-gate. > > That would be DLPX-36997 and DLPX-35372? Do you happen to know if Delphix > has their issue tracker accessible to the Internet if somebody wanted to > take a look in more detail at these? Google didn't provide anything of any > obvious use. > > Thanks! > > _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From janus at volny.cz Tue Sep 15 13:38:33 2015 From: janus at volny.cz (Jan Vlach) Date: Tue, 15 Sep 2015 15:38:33 +0200 Subject: [OmniOS-discuss] MD5 and SHA1 checksums for r151014 ? 
Message-ID: <20150915133833.GA1136@volny.cz> Hello omnios-discuss team, I'm trying to get OmniOS LTS release r151014: ### FROM WEBSITE: LTS release (r151014, omnios-cffff65): http://omnios.omniti.com/media/OmniOS_Text_r151014.usb-dd md5 (OmniOS_Text_r151014.usb-dd) = e6631e05f111d84e28a41274f7513c5f sha1 (OmniOS_Text_r151014.usb-dd) = 6912b8daece6afbb825e1263ff6dbc1b9a708e48 --- but I get different sums than published on OpenBSD5.7-stable and ### CONSOLE OpenBSD5.7 Requesting http://omnios.omniti.com/media/OmniOS_Text_r151014.usb-dd 100% |***************************| 467 MB 16:47 490278400 bytes received in 1007.91 seconds (475.03 KB/s) $ openssl md5 OmniOS_Text_r151014.usb-dd MD5(OmniOS_Text_r151014.usb-dd)= 6c07554d06e988b9e6ccc3a7f57a3331 $ md5 OmniOS_Text_r151014.usb-dd MD5 (OmniOS_Text_r151014.usb-dd) = 6c07554d06e988b9e6ccc3a7f57a3331 $ sha1 OmniOS_Text_r151014.usb-dd SHA1 (OmniOS_Text_r151014.usb-dd) = cbd37c5f62bfe05d6ae5fb853d75d5c8cf66048f $ openssl sha1 OmniOS_Text_r151014.usb-dd SHA1(OmniOS_Text_r151014.usb-dd)= cbd37c5f62bfe05d6ae5fb853d75d5c8cf66048f ### CONSOLE SMARTOS (GZ), different internet openssl md5 OmniOS_Text_r151014.usb-dd MD5(OmniOS_Text_r151014.usb-dd)= 6c07554d06e988b9e6ccc3a7f57a3331 openssl sha1 OmniOS_Text_r151014.usb-dd SHA1(OmniOS_Text_r151014.usb-dd)= cbd37c5f62bfe05d6ae5fb853d75d5c8cf66048f What am I doing wrong ? Thank you, Jan -- Be the change you want to see in the world. From danmcd at omniti.com Tue Sep 15 13:55:17 2015 From: danmcd at omniti.com (Dan McDonald) Date: Tue, 15 Sep 2015 09:55:17 -0400 Subject: [OmniOS-discuss] MD5 and SHA1 checksums for r151014 ? In-Reply-To: <20150915133833.GA1136@volny.cz> References: <20150915133833.GA1136@volny.cz> Message-ID: I may have forgotten to update them. I'm away from my desk currently, and without my work laptop, otherwise I could fix that now. I'll have them updated sometime in the next 4 hours or less. I'll ping this specific thread when I do.
(It's also possible one of my colleagues may be able to get to it before me, but I won't volunteer someone for that because we all have things to do.) Sorry, Dan Sent from my iPhone (typos, autocorrect, and all) > On Sep 15, 2015, at 9:38 AM, Jan Vlach wrote: > > Hello omnios-discuss team, > > I'm trying get OmniOS LTS release r151014: > > ### FROM WEBSITE: > LTS release (r151014, omnios-cffff65): > http://omnios.omniti.com/media/OmniOS_Text_r151014.usb-dd > > md5 (OmniOS_Text_r151014.usb-dd) = e6631e05f111d84e28a41274f7513c5f > > sha1 (OmniOS_Text_r151014.usb-dd) = > 6912b8daece6afbb825e1263ff6dbc1b9a708e48 > --- > > but I get different sums than published on OpenBSD5.7-stable and > > ### CONSOLE OpenBSD5.7 > Requesting http://omnios.omniti.com/media/OmniOS_Text_r151014.usb-dd > 100% |***************************| 467 MB 16:47 > 490278400 bytes received in 1007.91 seconds (475.03 KB/s) > > $ openssl md5 OmniOS_Text_r151014.usb-dd > MD5(OmniOS_Text_r151014.usb-dd)= 6c07554d06e988b9e6ccc3a7f57a3331 > > $ md5 OmniOS_Text_r151014.usb-dd > MD5 (OmniOS_Text_r151014.usb-dd) = 6c07554d06e988b9e6ccc3a7f57a3331 > > $ sha1 OmniOS_Text_r151014.usb-dd > SHA1 (OmniOS_Text_r151014.usb-dd) = > cbd37c5f62bfe05d6ae5fb853d75d5c8cf66048f > > $ openssl sha1 OmniOS_Text_r151014.usb-dd > SHA1(OmniOS_Text_r151014.usb-dd)= > cbd37c5f62bfe05d6ae5fb853d75d5c8cf66048f > > ### CONSOLE SMARTOS (GZ), different internet > openssl md5 OmniOS_Text_r151014.usb-dd > MD5(OmniOS_Text_r151014.usb-dd)= 6c07554d06e988b9e6ccc3a7f57a3331 > > openssl sha1 OmniOS_Text_r151014.usb-dd > SHA1(OmniOS_Text_r151014.usb-dd)= > cbd37c5f62bfe05d6ae5fb853d75d5c8cf66048f > > What am I doing wrong ? > > Thank you, > Jan > > > > > -- > Be the change you want to see in the world. 
> _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From janus at volny.cz Tue Sep 15 13:58:36 2015 From: janus at volny.cz (Jan Vlach) Date: Tue, 15 Sep 2015 15:58:36 +0200 Subject: [OmniOS-discuss] MD5 and SHA1 checksums for r151014 ? In-Reply-To: References: <20150915133833.GA1136@volny.cz> Message-ID: <20150915135836.GA1858@volny.cz> No worries Dan, I'm not in a hurry. This is the better option - the other one would be an attacker changing the binaries ... Have a nice day, Jan On Tue, Sep 15, 2015 at 09:55:17AM -0400, Dan McDonald wrote: > I may have forgotten to update them. I'm away from my desk currently, and without my work laptop, otherwise I could fix that now. > > I'll have them updated sometime in the next 4 hours or less. I'll ping this specific thread when I do. (It's also possible one of my colleagues may be able to get to it before me, but I won't volunteer someone for that because we all have things to do.) > > Sorry, > Dan -- Be the change you want to see in the world. From danmcd at omniti.com Tue Sep 15 14:09:28 2015 From: danmcd at omniti.com (Dan McDonald) Date: Tue, 15 Sep 2015 10:09:28 -0400 Subject: [OmniOS-discuss] MD5 and SHA1 checksums for r151014 ? In-Reply-To: <20150915135836.GA1858@volny.cz> References: <20150915133833.GA1136@volny.cz> <20150915135836.GA1858@volny.cz> Message-ID: <0D04AA07-2CEB-4B29-B958-A180F7FF41C3@omniti.com> > On Sep 15, 2015, at 9:58 AM, Jan Vlach wrote: > > I'm not in a hurry. This is the better option - the other one would be > an attacker changing the binaries ... I spun things, then was advised by Delphix to include their two fixes. So I had to respin everything, but didn't update the checksums. 
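[Editor's note] For anyone wanting to repeat the check on their own download, here is a generic sketch of comparing an image against a published digest with openssl, in the same style as the console transcripts above. The file name and expected MD5 are the values quoted earlier in this thread; substitute your own.

```shell
#!/bin/sh
# Sketch: verify a downloaded install image against a published MD5 sum.
# FILE and EXPECTED_MD5 are the values from this thread -- placeholders.
FILE="${FILE:-OmniOS_Text_r151014.usb-dd}"
EXPECTED_MD5="6c07554d06e988b9e6ccc3a7f57a3331"

if [ -f "$FILE" ] && command -v openssl >/dev/null 2>&1; then
    # "openssl md5 file" prints "MD5(file)= <hash>"; take the last field.
    GOT=$(openssl md5 "$FILE" | awk '{print $NF}')
    if [ "$GOT" = "$EXPECTED_MD5" ]; then
        echo "md5 OK"
    else
        echo "md5 MISMATCH: got $GOT"
    fi
else
    echo "image not present here; nothing to verify"
fi
```

The same pattern works for SHA1 by swapping in `openssl sha1` and the published SHA1 value.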
Dan Sent from my iPhone (typos, autocorrect, and all) From stephan.budach at JVM.DE Tue Sep 15 15:21:04 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Tue, 15 Sep 2015 17:21:04 +0200 Subject: [OmniOS-discuss] Ang: Re: Ang: Re: OmniOS r151014 update - needs reboot! In-Reply-To: References: <55F7EC4E.7050203@jvm.de>, <005101d0ef58$0be0b260$23a21720$@acm.org>, <9A8BE8EE-BCA7-4C64-9878-A382A3D9A830@omniti.com> Message-ID: <55F83760.7000308@jvm.de> On 15.09.15 at 12:18, Johan Kragsterman wrote: > Hi! > > > -----"OmniOS-discuss" wrote: ----- > To: > From: Stephan Budach > Sent by: "OmniOS-discuss" > Date: 2015-09-15 12:02 > Subject: Re: [OmniOS-discuss] Ang: Re: OmniOS r151014 update - needs reboot! > > On 15.09.15 at 10:17, Johan Kragsterman wrote: >> Hi! >> >> >> -----"OmniOS-discuss" wrote: ----- >> To: "'Dan McDonald'" , "'omnios-discuss'" >> From: "Paul B. Henson" >> Sent by: "OmniOS-discuss" >> Date: 2015-09-15 04:01 >> Subject: Re: [OmniOS-discuss] OmniOS r151014 update - needs reboot! >> >>> From: Dan McDonald >>> Sent: Monday, September 14, 2015 2:58 PM >>> >>> Most importantly, this update fixes illumos 6214 for OmniOS. You should be >>> able to restore your L2ARC devices using the method I mentioned in my last >>> e-mail: >> Call me a scaredy-cat, but I think I might wait a bit for that to burn in >> before I reenable my cache :). >> >>> Because of the changes to zfs, this update requires that you reboot your >>> system. >> >> And then despite successful scrub and zdb runs I'm still nervous about the >> pool not being successfully imported after a reboot 8-/ , so I might >> put this off until I've got a chunk of free time in case of unexpected >> recovery issues. >> >> Thanks much for the quick turnaround though! >> >> >> >> >> >> Like you, Paul, I feel a little bit uneasy to reboot... >> >> Would be nice to hear some success stories from people who had an L2ARC and did this upgrade....tell us, please....
>> >> Regards Johan >> >> > I updated one of my OmniOS boxes and performed a reboot -p, which went > smoothly. Afterwards, I re-added my L2ARC devices again to the two > zpools running on that host. All without any issue. > > I will follow-up with at least two other nodes today, but that will be > later on in the afternoon. > > Cheers, > Stephan > > > > > > Thanks for the report, Stephan! > > Regards Johan > 2nd node updated -> reboot -p -> no issues! Cheers, Stephan From danmcd at omniti.com Tue Sep 15 15:28:14 2015 From: danmcd at omniti.com (Dan McDonald) Date: Tue, 15 Sep 2015 11:28:14 -0400 Subject: [OmniOS-discuss] MD5 and SHA1 checksums for r151014 ? In-Reply-To: <0D04AA07-2CEB-4B29-B958-A180F7FF41C3@omniti.com> References: <20150915133833.GA1136@volny.cz> <20150915135836.GA1858@volny.cz> <0D04AA07-2CEB-4B29-B958-A180F7FF41C3@omniti.com> Message-ID: <0311AD71-ABF6-4B3B-B1CD-21576EC6D9F3@omniti.com> > On Sep 15, 2015, at 10:09 AM, Dan McDonald wrote: > > I spun things, then was advised by Delphix to include their two fixes. So I had to respin everything, but didn't update the checksums. Turns out I didn't update JUST the usb-dd ones. Had you tried the iso, you'd have seen correct sums. I've fixed the usb-dd sums, and they now match what you mailed, so no attacker AFAICT. Happy updating! Dan From omen.wild at gmail.com Tue Sep 15 17:05:01 2015 From: omen.wild at gmail.com (Omen Wild) Date: Tue, 15 Sep 2015 10:05:01 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: References: <20150914220930.GA30739@mandarb.com> Message-ID: <20150915170501.GB8322@mandarb.com> Quoting Dan McDonald on Mon, Sep 14 18:41: > > One thing you can try is to overwrite the file and then remove it. > Someone else reported a similar vug, and it turned out to be corrupt > metadata or extended attributes. No luck. 
I get an "Operation not applicable" error trying to overwrite it: root at zaphod:/zaphod/backuppc/trash# mv a fcouncil.html mv: failed to access 'corrupt.html': Operation not applicable "echo '' > corrupt.html" gives the same error. > Do you have a URL for the panic? I have a fresh panic from an updated (omnios-cffff65) version and will send you the URL shortly. > Also, please try today's update. No improvement. Same panic: ffffff0786f9f520 PANIC 1 param_preset die+0xdf trap+0xdb3 0xfffffffffb8001d6 zfs_remove+0x395 fop_remove+0x5b vn_removeat+0x382 unlinkat+0x59 _sys_sysenter_post_swapgs+0x149 Thanks for the help. > Sent from my iPhone (typos, autocorrect, and all) > > > On Sep 14, 2015, at 6:09 PM, Omen Wild wrote: > > > > [ I originally posted this to the Illumos ZFS list but got no responses. ] > > > > We have an up to date OmniOS system that panics every time we try to > > unlink a specific file. We have a kernel pages-only crashdump and can > > reproduce easily. I can make the panic files available to an interested > > party. > > > > A zpool scrub turned up no errors or repairs. > > > > Mostly we are wondering how to clear the corruption off disk and worried > > what else might be corrupt since the scrub turns up no issues. > > > > Details below. > > > > When we first encountered the issue we were running with a version from > > mid-July: zfs at 0.5.11,5.11-0.151014:20150417T182430Z . > > > > After the first couple panics we upgraded to the newest (as of a couple > > days ago, zfs at 0.5.11,5.11-0.151014:20150818T161042Z) which still panics. 
> > > > # uname -a > > SunOS zaphod 5.11 omnios-d08e0e5 i86pc i386 i86pc > > > > The error looks like this: > > BAD TRAP: type=e (#pf Page fault) rp=ffffff002ed54b00 addr=e8 occurred in module "zfs" due to a NULL pointer dereference > > > > The panic stack looks like this in every case: > > param_preset > > die+0xdf > > trap+0xdb3 > > 0xfffffffffb8001d6 > > zfs_remove+0x395 > > fop_remove+0x5b > > vn_removeat+0x382 > > unlinkat+0x59 > > _sys_sysenter_post_swapgs+0x149 > > > > It is triggered by trying to rm a specific file. ls'ing the file gives > > the error "Operation not applicable", ls'ing the directory shows ? in > > place of the data: > > > > ?????????? ? ? ? ? ? filename.html > > > > I have attached the output of: > > echo '::panicinfo\n::cpuinfo -v\n::threadlist -v 10\n::msgbuf\n*panic_thread::findstack -v\n::stacks' | mdb 7 > > > > I am a Solaris/OI/OmniOS debugging neophyte, but will happily run any > > commands recommended. > > > > Thanks > > Omen > > > > _______________________________________________ > > OmniOS-discuss mailing list > > OmniOS-discuss at lists.omniti.com > > http://lists.omniti.com/mailman/listinfo/omnios-discuss > From omen.wild at gmail.com Tue Sep 15 17:13:11 2015 From: omen.wild at gmail.com (Omen Wild) Date: Tue, 15 Sep 2015 10:13:11 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <55F7A5B3.4090509@jvm.de> References: <20150914220930.GA30739@mandarb.com> <005201d0ef58$622cfb60$2686f220$@acm.org> <55F7A5B3.4090509@jvm.de> Message-ID: <20150915171311.GC8322@mandarb.com> Quoting Stephan Budach on Tue, Sep 15 06:59: > > Am 15.09.15 um 03:46 schrieb Paul B. Henson: > >>From: Omen Wild > >>Sent: Monday, September 14, 2015 3:10 PM > >> > >>Mostly we are wondering how to clear the corruption off disk and worried > >>what else might be corrupt since the scrub turns up no issues. 
> >While looking into possible corruption from the recent L2 cache bug it seems > >that running 'zdb -bbccsv' is a good test for finding corruption as it looks > >at all of the blocks and verifies all of the checksums. I have kicked that off. I expect it will take a while as the pool has 15.6TB of data. > As George Wilson wrote on the ZFS mailing list: " Unfortunately, if the > corruption impacts a data block then we won't be able to detect it.". So, I > am afraid apart from metadata and indirect blocks corruption, there's no way > to even detect a corruption inside a data block, as the checksum fits. We have no l2arc on this machine, so I don't believe 6214 will impact us. When the corruption first manifested itself we were running an update from July (2015-07-20T14:15:58) with pkg://omnios/system/file-system/zfs at 0.5.11,5.11-0.151014:20150402T175233Z > I think, the best one can do is to run a scrub and act on the results of > that. If scrub reports no errors, one can live with that or one would need > to think of options to reference the data with known, good data from that > pool, e.g. from a backup prior to 6214 having been introduced, but depending > on the sheer amount of data or the type of it, that might not be even > possible. The 'zpool scrub' was clean, no errors. Unlinking the file still causes the panic. I will report the results of the zdb when it finishes. Thanks for the ideas. From henson at acm.org Tue Sep 15 20:24:22 2015 From: henson at acm.org (Paul B.
Henson) Date: Tue, 15 Sep 2015 13:24:22 -0700 Subject: [OmniOS-discuss] (no subject) In-Reply-To: <55F7A5B3.4090509@jvm.de> References: <20150914220930.GA30739@mandarb.com> <005201d0ef58$622cfb60$2686f220$@acm.org> <55F7A5B3.4090509@jvm.de> Message-ID: <00cb01d0eff4$82632a20$87297e60$@acm.org> > From: Stephan Budach > Sent: Monday, September 14, 2015 10:00 PM > > As George Wilson wrote on the ZFS mailing list: " Unfortunately, if the > corruption impacts a data block then we won't be able to detect it.". > So, I am afarid apart from metadata and indirect blocks corruption, > there's no way to even detect a corruption inside a data block, as the > checksum fits. Yes, that's true, assuming you have no external source of verification. However, Arne said he didn't think this bug would result in data corruption, only metadata corruption. I was mostly worried about pool corruption that would cause panics or failure to import, which data level corruption would not cause. Most of the data on the pool I was worried about is media, a bad data block here or there wouldn't be too tragic. > from that pool, e.g. from a backup prior to 6214 having been introduced, > but depending on the sheer amount of data or the type of it, that might > not be even possible. Yup. This was a sucky bug :(. From henson at acm.org Tue Sep 15 20:26:08 2015 From: henson at acm.org (Paul B. 
Henson) Date: Tue, 15 Sep 2015 13:26:08 -0700 Subject: [OmniOS-discuss] (no subject) In-Reply-To: References: <20150914220930.GA30739@mandarb.com> <005201d0ef58$622cfb60$2686f220$@acm.org> Message-ID: <00cc01d0eff4$c3e667a0$4bb336e0$@acm.org> > From: Andy Fiddaman > Sent: Tuesday, September 15, 2015 1:41 AM > > zdb_blkptr_cb: Got error 50 reading <3077, 212, 0, 52> > DVA[0]=<0:14d528f8200:1ce00> [L0 ZFS plain file] fletcher4 lz4 LE > contiguous unique single size=20000L/13200P birth=3708038L/3708038P fill=1 > cksum=1717c7d38f62:374184e099ada9b:a86cf60db2f68605:2be4a1817f9f4b1d > -- > skipping > > Is this an indicator of corruption in the pool? > It's going to be a right royal pain to rebuild them if I need to! That certainly doesn't look good :(. I'd recommend posting this output on the zfs mailing list and asking for feedback. From henson at acm.org Tue Sep 15 20:28:28 2015 From: henson at acm.org (Paul B. Henson) Date: Tue, 15 Sep 2015 13:28:28 -0700 Subject: [OmniOS-discuss] OmniOS Bloody update In-Reply-To: References: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> Message-ID: <00cd01d0eff5$14cd4fd0$3e67ef70$@acm.org> > From: Schweiss, Chip > Sent: Tuesday, September 15, 2015 5:32 AM > > I understand Joyent has made some great headway in the effort to get OpenSSH > into Illumos. Looking forward to the day I don't have to script around SunSSH > problems. Technically I think they're working on getting sunssh out of illumos-core, but not necessarily replacing it with openssh. It sounded like their intention is to have distributions package openssh like they do other third-party packages and simply not have a ssh implementation in the actual illumo source code base. But their patches to openssh to make it work better in smartos would probably be useful in an omnios package as well. From henson at acm.org Tue Sep 15 20:30:38 2015 From: henson at acm.org (Paul B. 
Henson) Date: Tue, 15 Sep 2015 13:30:38 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150915171311.GC8322@mandarb.com> References: <20150914220930.GA30739@mandarb.com> <005201d0ef58$622cfb60$2686f220$@acm.org> <55F7A5B3.4090509@jvm.de> <20150915171311.GC8322@mandarb.com> Message-ID: <00d501d0eff5$6286ece0$2794c6a0$@acm.org> > From: Omen Wild > Sent: Tuesday, September 15, 2015 10:13 AM > > I have kicked that off. I expect it will take a while as the pool has > 15.6TB of data. My pool had about 10 TB, it ran for roughly 8-10 hours. > We have no l2arc on this machine, so I don't believe 6214 will impact > us. When the corruption first manifest itself we were running an Hmm, I wonder how your pool was corrupted then? There haven't been very many "trash the pool" bugs in recent history that I can recall. > The 'zpool scrub' was clean, no errors. unlinking the file still causes > the panic. I will report the results of the zdb when it finishes. How much other stuff is in the filesystem? You could try creating a new filesystem, copying all the stuff over, and then deleting the entire filesystem that contains the bad file to see if it goes away? From omen.wild at gmail.com Tue Sep 15 22:54:03 2015 From: omen.wild at gmail.com (Omen Wild) Date: Tue, 15 Sep 2015 15:54:03 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <55F89C35.5020700@ianshome.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> Message-ID: <20150915225403.GF30699@mandarb.com> Quoting Ian Collins on Wed, Sep 16 10:31: > > >No luck. I get an "Operation not applicable" error trying to overwrite it: > >root at zaphod:/zaphod/backuppc/trash# mv a fcouncil.html > >mv: failed to access 'corrupt.html': Operation not applicable > > > >"echo '' > corrupt.html" gives the same error. > > > > I have seen something similar, do you see an error when you try and cat or > stat the file? 
Both stat and cat produce the same error: Operation not applicable > In our case I was able to delete the parent directory. I have the problem file tucked away so it is not causing any harm at the moment. Dan McDonald (from OmniOS) has been working with me. Once he is done having me create crash dumps :) I will try removing the parent directory. From ian at ianshome.com Tue Sep 15 23:05:52 2015 From: ian at ianshome.com (Ian Collins) Date: Wed, 16 Sep 2015 11:05:52 +1200 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150915225403.GF30699@mandarb.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> <20150915225403.GF30699@mandarb.com> Message-ID: <55F8A450.4030504@ianshome.com> Omen Wild wrote: > Quoting Ian Collins on Wed, Sep 16 10:31: >>> No luck. I get an "Operation not applicable" error trying to overwrite it: >>> root at zaphod:/zaphod/backuppc/trash# mv a fcouncil.html >>> mv: failed to access 'corrupt.html': Operation not applicable >>> >>> "echo '' > corrupt.html" gives the same error. >>> >> I have seen something similar, do you see an error when you try and cat or >> stat the file? > Both stat and cat produce the same error: Operation not applicable This does sound like the problems I was seeing. See my threads here: http://news.gmane.org/gmane.os.illumos.zfs/cutoff=4897 -- Ian. From ian at ianshome.com Tue Sep 15 22:31:17 2015 From: ian at ianshome.com (Ian Collins) Date: Wed, 16 Sep 2015 10:31:17 +1200 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150915170501.GB8322@mandarb.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> Message-ID: <55F89C35.5020700@ianshome.com> Omen Wild wrote: > Quoting Dan McDonald on Mon, Sep 14 18:41: >> One thing you can try is to overwrite the file and then remove it. 
>> Someone else reported a similar bug, and it turned out to be corrupt >> metadata or extended attributes. > No luck. I get an "Operation not applicable" error trying to overwrite it: > root at zaphod:/zaphod/backuppc/trash# mv a fcouncil.html > mv: failed to access 'corrupt.html': Operation not applicable > > "echo '' > corrupt.html" gives the same error. > I have seen something similar, do you see an error when you try and cat or stat the file? In our case I was able to delete the parent directory. -- Ian. From gary at genashor.com Tue Sep 15 23:18:22 2015 From: gary at genashor.com (Gary Gendel) Date: Tue, 15 Sep 2015 19:18:22 -0400 Subject: [OmniOS-discuss] zdb -h bug? In-Reply-To: <005801d0ef58$da1898f0$8e49cad0$@acm.org> References: <005801d0ef58$da1898f0$8e49cad0$@acm.org> Message-ID: <55F8A73E.6050003@genashor.com> Paul, I have a fresh install of OmniOS (installed 2 weeks ago and just updated). zdb -h core dumps on both of these, both before and after the update. Since I have nothing fancy (no cache or log disks), I suspect (and hope) the problem is in zdb.

$ zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
archive  4.53T  1.49T  3.04T         -   19%  32%  1.00x  ONLINE  -
rpool     278G  82.1G   196G         -   10%  29%  1.00x  ONLINE  -

----------------------

$ zpool status
  pool: archive
 state: ONLINE
  scan: scrub repaired 0 in 2h53m with 0 errors on Thu Sep 10 19:21:59 2015
config:

        NAME        STATE     READ WRITE CKSUM
        archive     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Tue Sep 1 16:19:55 2015
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t1d0s0  ONLINE       0     0     0

Gary On 9/14/2015 9:50 PM, Paul B.
Henson wrote: > While trying to look for corruption from the recent L2 cache bug, I noticed > that zdb core dumps trying to list the history on both my data pool (which > had L2 cache) and my rpool (which did not). I'm wondering if there is some > bug with zdb that is causing this as opposed to corruption of the pool. > > I'd be curious as to what 'zdb -h' does on the various pools out there, > particularly ones created prior to 014 then subsequently upgraded to 014 but > without large_blocks being enabled (as those are the characteristics of my > pools :) ). > > If I get a little time I'm going to try to build a 012 box and simulate how > my pools got to where they are and see if I can reproduce it. > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From danmcd at omniti.com Tue Sep 15 23:31:30 2015 From: danmcd at omniti.com (Dan McDonald) Date: Tue, 15 Sep 2015 19:31:30 -0400 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150915225403.GF30699@mandarb.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> <20150915225403.GF30699@mandarb.com> Message-ID: <23B418F8-165B-48FE-950C-5D2719F61701@omniti.com> > On Sep 15, 2015, at 6:54 PM, Omen Wild wrote: > > > I have the problem file tucked away so it is not causing any harm at the > moment. Dan McDonald (from OmniOS) has been working with me. Once he is > done having me create crash dumps :) I will try removing the parent directory. I've got all the cores I need. Try it! Dan From info at houseofancients.nl Wed Sep 16 05:05:08 2015 From: info at houseofancients.nl (Floris van Essen ..:: House of Ancients Amstafs ::..) Date: Wed, 16 Sep 2015 05:05:08 +0000 Subject: [OmniOS-discuss] OmniOS r151014 update - needs reboot! 
Message-ID: <356582D1FC91784992ABB4265A16ED48A6165E33@vEX01.mindstorm-internet.local> Hi All, When i install the update, it no longer sees my HP Network card. Even when installing the HP driver (NTXNxge.pkg), it will no longer see the PCI device. When I perform a roll back, it comes back online straight away : Sep 15 22:22:41 PSD01 genunix: [ID 805372 kern.info] pcplusmp: pciex8086,1096 (e1000g) instance 0 irq 0x33 vector 0x60 ioapic 0xff intin 0xff is bound to cpu 3 Sep 15 22:22:41 PSD01 genunix: [ID 469746 kern.info] NOTICE: e1000g0 registered Sep 15 22:22:41 PSD01 genunix: [ID 805372 kern.info] pcplusmp: pciex8086,1096 (e1000g) instance 1 irq 0x34 vector 0x61 ioapic 0xff intin 0xff is bound to cpu 0 Sep 15 22:22:41 PSD01 genunix: [ID 469746 kern.info] NOTICE: e1000g1 registered Sep 15 22:22:42 PSD01 genunix: [ID 805372 kern.info] pcplusmp: pci4040,100 (ntxn) instance 0 irq 0x35 vector 0x62 ioapic 0xff intin 0xff is bound to cpu 1 Sep 15 22:22:44 PSD01 genunix: [ID 435574 kern.info] NOTICE: e1000g0 link up, 1000 Mbps, full duplex Sep 15 22:22:44 PSD01 genunix: [ID 435574 kern.info] NOTICE: e1000g1 link up, 1000 Mbps, full duplex Sep 15 22:22:52 PSD01 genunix: [ID 469746 kern.info] NOTICE: ntxn0 registered Sep 15 22:22:52 PSD01 genunix: [ID 805372 kern.info] pcplusmp: pci4040,100 (ntxn) instance 1 irq 0x36 vector 0x63 ioapic 0xff intin 0xff is bound to cpu 2 Sep 15 22:22:53 PSD01 genunix: [ID 469746 kern.info] NOTICE: ntxn1 registered Sep 15 22:22:54 PSD01 genunix: [ID 792948 kern.notice] NOTICE: ntxn0: NIC Link is up Sep 15 22:22:54 PSD01 genunix: [ID 435574 kern.info] NOTICE: ntxn0 link up, 1000 Mbps, full duplex Sep 15 22:22:54 PSD01 genunix: [ID 805372 kern.info] pcplusmp: pci4040,100 (ntxn) instance 2 irq 0x37 vector 0x64 ioapic 0xff intin 0xff is bound to cpu 3 Sep 15 22:22:54 PSD01 genunix: [ID 792948 kern.notice] NOTICE: ntxn1: NIC Link is up Sep 15 22:22:54 PSD01 genunix: [ID 435574 kern.info] NOTICE: ntxn1 link up, 1000 Mbps, full duplex Sep 15 22:22:55 
PSD01 genunix: [ID 469746 kern.info] NOTICE: ntxn2 registered Sep 15 22:22:55 PSD01 genunix: [ID 805372 kern.info] pcplusmp: pci4040,100 (ntxn) instance 3 irq 0x38 vector 0x65 ioapic 0xff intin 0xff is bound to cpu 0 Sep 15 22:22:56 PSD01 genunix: [ID 469746 kern.info] NOTICE: ntxn3 registered Sep 15 22:22:56 PSD01 genunix: [ID 792948 kern.notice] NOTICE: ntxn2: NIC Link is up Sep 15 22:22:56 PSD01 genunix: [ID 435574 kern.info] NOTICE: ntxn2 link up, 1000 Mbps, full duplex Sep 15 22:22:56 PSD01 genunix: [ID 792948 kern.notice] NOTICE: ntxn3: NIC Link is up Sep 15 22:22:56 PSD01 genunix: [ID 435574 kern.info] NOTICE: ntxn3 link up, 1000 Mbps, full duplex The integrated e1000g driver/nics do not show this behavior This concerns this card : http://h17007.www1.hp.com/us/en/enterprise/servers/supportmatrix/solaris.aspx NC375i embedded NIC is supported with Solaris10 08/11 using the following driver / FW combination: NTXNxge-2.14-solaris10-i386 / FW 4.0.585. Any ideas ? ...:: House of Ancients ::... American Staffordshire Terriers +31-628-161-350 +31-614-198-389 Het Perk 48 4903 RB Oosterhout Netherlands www.houseofancients.nl From martin.truhlar at archcon.cz Wed Sep 16 08:04:36 2015 From: martin.truhlar at archcon.cz (=?utf-8?B?TWFydGluIFRydWhsw6HFmQ==?=) Date: Wed, 16 Sep 2015 10:04:36 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> Message-ID: Yes, I'm aware, that problem can be hidden in many places. MTU is 1500. All nics and their setup are included at this email. Martin -----Original Message----- From: Dan McDonald [mailto:danmcd at omniti.com] Sent: Wednesday, September 09, 2015 6:32 PM To: Martin Truhl?? Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: Re: [OmniOS-discuss] iSCSI poor write performance > On Sep 9, 2015, at 12:24 PM, Martin Truhl?? 
wrote: > > Hello everybody, > > I have a problem here I can't get past. My Windows server runs as a virtual machine under KVM. I'm using a 10GB network card. On this hw configuration I expect much better performance than I'm getting. Two less important disks use KVM cache, which improves performance a bit. But I don't want to use KVM's cache for system and database disks and there I'm getting 6MB/s for writing. Also 4K writing is low even with KVM cache. So you have windows on KVM, and KVM is using iSCSI to speak to OmniOS? That's a lot of indirection... Question: What's the MTU on the 10Gig Link? Dan -------------- next part -------------- A non-text attachment was scrubbed... Name: nics.JPG Type: image/jpeg Size: 35329 bytes Desc: nics.JPG URL: From lotheac at iki.fi Wed Sep 16 10:49:24 2015 From: lotheac at iki.fi (Lauri Tirkkonen) Date: Wed, 16 Sep 2015 13:49:24 +0300 Subject: [OmniOS-discuss] OmniOS Bloody update In-Reply-To: <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> References: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> Message-ID: <20150916104924.GC20232@gutsman.lotheac.fi> On Mon, Sep 14 2015 23:46:11 -0400, Dan McDonald wrote: > There have been some fixes there, but I'm not sure if it's all there. > I do know one has to use --reject options to make a switch. > > Lauri "lotheac" Tirkkonen can provide more details. Also note - there > is an effort to replace sunssh with OpenSSH altogether. OpenSSH is installable in bloody with: pkg install --reject pkg:/network/ssh --reject pkg:/network/ssh/ssh-key --reject pkg:/service/network/ssh pkg:/network/openssh pkg:/network/openssh-server It's a bit unwieldy, but does work -- the rejects are necessary to tell the pkg solver that it's okay to uninstall those packages (and satisfy the dependencies with openssh ones instead).
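The one-liner above is easier to check against your own system when split across lines; this is the same command verbatim, only reflowed with shell continuations (no packages or flags changed):

```shell
# Same packages and flags as the one-liner above, reflowed for readability.
pkg install \
    --reject pkg:/network/ssh \
    --reject pkg:/network/ssh/ssh-key \
    --reject pkg:/service/network/ssh \
    pkg:/network/openssh \
    pkg:/network/openssh-server
```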
There's at least one more problem with this change, though, and that is that openssh seems to be the default in new installs. I'm discussing that with Dan, but I suspect it's going to be a blocker for backporting. -- Lauri Tirkkonen | lotheac @ IRCnet From danmcd at omniti.com Wed Sep 16 11:48:53 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 16 Sep 2015 07:48:53 -0400 Subject: [OmniOS-discuss] OmniOS r151014 update - needs reboot! In-Reply-To: <356582D1FC91784992ABB4265A16ED48A6165E33@vEX01.mindstorm-internet.local> References: <356582D1FC91784992ABB4265A16ED48A6165E33@vEX01.mindstorm-internet.local> Message-ID: > On Sep 16, 2015, at 1:05 AM, Floris van Essen ..:: House of Ancients Amstafs ::.. wrote: > > > When i install the update, it no longer sees my HP Network card. > Even when installing the HP driver (NTXNxge.pkg), it will no longer see the PCI device. > > When I perform a roll back, it comes back online straight away : > You show the successful-case logs. What would be more interesting are the unsuccessful ones. Also check /etc/driver_aliases between the old and new BEs. BTW, is this driver for Oracle Solaris? You *do* realize there are inherent hazards with using an Oracle Solaris (esp. network) driver on any illumos distro, right? Dan From danmcd at omniti.com Wed Sep 16 11:50:34 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 16 Sep 2015 07:50:34 -0400 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> Message-ID: <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> > On Sep 16, 2015, at 4:04 AM, Martin Truhl?? wrote: > > Yes, I'm aware, that problem can be hidden in many places. > MTU is 1500. All nics and their setup are included at this email. Start by making your 10GigE network use 9000 MTU. You'll need to configure this on both ends (is this directly-attached 10GigE? Or over a switch?). 
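The MTU change Dan suggests is done per-link with dladm on the OmniOS side. A hedged sketch follows; "ixgbe0" is a placeholder for whatever the 10GigE link is actually called on the box, and the iSCSI initiator host plus any switch ports in the path must be set to a matching jumbo-frame MTU or large packets will be dropped.

```shell
# Hedged sketch: raise the MTU on the OmniOS end of a 10GigE path.
# "ixgbe0" is an example link name; find the real one with show-link.
dladm show-link                       # list data links and their state
dladm set-linkprop -p mtu=9000 ixgbe0 # may require the link to be unplumbed
dladm show-linkprop -p mtu ixgbe0     # confirm the VALUE column reads 9000
```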
Dan From danmcd at omniti.com Wed Sep 16 11:57:26 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 16 Sep 2015 07:57:26 -0400 Subject: [OmniOS-discuss] openssh on omnios In-Reply-To: <20150916104106.GB20232@gutsman.lotheac.fi> References: <231DE587-4363-4033-B574-2DCACC392731@omniti.com> <20150902123930.GD2595@gutsman.lotheac.fi> <93608AB9-3C7D-4362-9FBB-5B2FBDAB2A86@omniti.com> <20150902125045.GF2595@gutsman.lotheac.fi> <20150902132456.GG2595@gutsman.lotheac.fi> <4CBB7843-7128-4DE8-A733-2BFD4F804E13@omniti.com> <20150904120240.GD11452@gutsman.lotheac.fi> <20150905083630.GA3038@gutsman.lotheac.fi> <20150916104106.GB20232@gutsman.lotheac.fi> Message-ID: <24A040B0-C0E5-4797-9174-78573C5165BC@omniti.com> Thanks for all of that useful data. After Surge next week, I need to begin the forking and coalescing of r151016. This openssh & sunssh issue is going to be one of the nastier problems. I want "pkg update" to work smoothly. I want fresh installs to have the right thing happen (for SOME value of "right thing"), and that means both ISO and Kayak. I've not done any recent fresh-off-the-media tests, which I'll start doing post-Surge. Appreciate opinions here. And as I mention, Joyent is working on better openssh that includes most/all of the SunSSH improvements. From what I can tell, it's going to mean that the omnios-build/build/openssh/patches directory will be getting a lot more entries. 
Dan From omen.wild at gmail.com Wed Sep 16 18:31:15 2015 From: omen.wild at gmail.com (Omen Wild) Date: Wed, 16 Sep 2015 11:31:15 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <23B418F8-165B-48FE-950C-5D2719F61701@omniti.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> <20150915225403.GF30699@mandarb.com> <23B418F8-165B-48FE-950C-5D2719F61701@omniti.com> Message-ID: <20150916183115.GA23826@mandarb.com> Quoting Dan McDonald on Tue, Sep 15 19:31: > > > On Sep 15, 2015, at 6:54 PM, Omen Wild wrote: > > > > I have the problem file tucked away so it is not causing any harm at the > > moment. Dan McDonald (from OmniOS) has been working with me. Once he is > > done having me create crash dumps :) I will try removing the parent directory. > > I've got all the cores I need. Try it! No change, still crashes when I try to 'rm -rf' the parent directory. From danmcd at omniti.com Wed Sep 16 18:32:01 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 16 Sep 2015 14:32:01 -0400 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150916183115.GA23826@mandarb.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> <20150915225403.GF30699@mandarb.com> <23B418F8-165B-48FE-950C-5D2719F61701@omniti.com> <20150916183115.GA23826@mandarb.com> Message-ID: <444B095E-D5AE-439E-8BAC-F58A9B391BC2@omniti.com> > On Sep 16, 2015, at 2:31 PM, Omen Wild wrote: > > No change, still crashes when I try to 'rm -rf' the parent directory. I think he wanted you to just rmdir . Since the directory entry isn't all there? 
Dan From omen.wild at gmail.com Wed Sep 16 18:54:27 2015 From: omen.wild at gmail.com (Omen Wild) Date: Wed, 16 Sep 2015 11:54:27 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <444B095E-D5AE-439E-8BAC-F58A9B391BC2@omniti.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> <20150915225403.GF30699@mandarb.com> <23B418F8-165B-48FE-950C-5D2719F61701@omniti.com> <20150916183115.GA23826@mandarb.com> <444B095E-D5AE-439E-8BAC-F58A9B391BC2@omniti.com> Message-ID: <20150916185427.GB23826@mandarb.com> Quoting Dan McDonald on Wed, Sep 16 14:32: > > > > On Sep 16, 2015, at 2:31 PM, Omen Wild wrote: > > > > No change, still crashes when I try to 'rm -rf' the parent directory. > > I think he wanted you to just rmdir . Since the directory entry isn't all there? Ah, I was thinking the `rm -rf' would just walk the tree, unlinking the files, so that makes more sense. The directory entry is there, ls can see that it exists, but cannot get any data about it: ----- Begin quote ----- root at zaphod:.../groundwater# ls -Fla ls: cannot access fcouncil.html: Operation not applicable total 1 drwxr-x--- 2 backuppc other 3 Sep 16 11:51 ./ drwxr-x--- 3 backuppc other 3 Sep 7 13:57 ../ ?????????? ? ? ? ? ? fcouncil.html root at zaphod:.../groundwater# cd .. 
root at zaphod:.../conference# rmdir groundwater/ rmdir: failed to remove 'groundwater/': File exists ----- End quote ----- From danmcd at omniti.com Wed Sep 16 19:04:07 2015 From: danmcd at omniti.com (Dan McDonald) Date: Wed, 16 Sep 2015 15:04:07 -0400 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150916185427.GB23826@mandarb.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> <20150915225403.GF30699@mandarb.com> <23B418F8-165B-48FE-950C-5D2719F61701@omniti.com> <20150916183115.GA23826@mandarb.com> <444B095E-D5AE-439E-8BAC-F58A9B391BC2@omniti.com> <20150916185427.GB23826@mandarb.com> Message-ID: > On Sep 16, 2015, at 2:54 PM, Omen Wild wrote: MUCH EARLIER you said: > A zpool scrub turned up no errors or repairs. Dumb question --> when did you last attempt a scrub on this pool? Think it might be worth another shot now? Dan From janus at volny.cz Wed Sep 16 19:07:39 2015 From: janus at volny.cz (Jan Vlach) Date: Wed, 16 Sep 2015 21:07:39 +0200 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150916185427.GB23826@mandarb.com> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> <20150915225403.GF30699@mandarb.com> <23B418F8-165B-48FE-950C-5D2719F61701@omniti.com> <20150916183115.GA23826@mandarb.com> <444B095E-D5AE-439E-8BAC-F58A9B391BC2@omniti.com> <20150916185427.GB23826@mandarb.com> Message-ID: <20150916190739.GA3354@volny.cz> Hi Omen, how about moving all other data than fcouncil.html to different zfs dataset and then doing zfs destroy on the original dataset? No guarantee that this would work though .. Jan > root at zaphod:.../groundwater# ls -Fla > ls: cannot access fcouncil.html: Operation not applicable > total 1 > drwxr-x--- 2 backuppc other 3 Sep 16 11:51 ./ > drwxr-x--- 3 backuppc other 3 Sep 7 13:57 ../ > ?????????? ? ? ? ? ? 
fcouncil.html > > root at zaphod:.../groundwater# cd .. > root at zaphod:.../conference# rmdir groundwater/ > rmdir: failed to remove 'groundwater/': File exists From omen.wild at gmail.com Wed Sep 16 21:23:20 2015 From: omen.wild at gmail.com (Omen Wild) Date: Wed, 16 Sep 2015 14:23:20 -0700 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150916190739.GA3354@volny.cz> Message-ID: <20150916212320.GC23826@mandarb.com> Quoting Dan McDonald on Wed, Sep 16 15:04: > > > On Sep 16, 2015, at 2:54 PM, Omen Wild wrote: > > MUCH EARLIER you said: > > > A zpool scrub turned up no errors or repairs. > > Dumb question --> when did you last attempt a scrub on this pool? Think > it might be worth another shot now? I did the scrub after the first couple panics. So after the corruption had already happened. Quoting Jan Vlach on Wed, Sep 16 21:07: > > how about moving all other data than fcouncil.html to different zfs > dataset and then doing zfs destroy on the original dataset? > > No guarantee that this would work though .. It's a backup server with 16TB sitting on it with only a top level filesystem. I can move the parent directories of the corrupt file around, at least within the same filesystem. I believe that moving it to a new filesystem will require ZFS to read the file (and xattrs) in order to write them in the new filesystem. I would be shocked if this does not crash the box, but I will give it a try when I get a chance. From ian at ianshome.com Thu Sep 17 03:48:59 2015 From: ian at ianshome.com (Ian Collins) Date: Thu, 17 Sep 2015 15:48:59 +1200 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150916212320.GC23826@mandarb.com> References: <20150916212320.GC23826@mandarb.com> Message-ID: <55FA382B.5070503@ianshome.com> Omen Wild wrote: > It's a backup server with 16TB sitting on it with only a top level > filesystem. I can move the parent directories of the corrupt file around, > at least within the same filesystem. 
I believe that moving it to a new > filesystem will require ZFS to read the file (and xattrs) in order to > write them in the new filesystem. I would be shocked if this does not > crash the box, but I will give it a try when I get a chance. > If your problem is like mine, it will crash. I was able to do the same (move the parent to somewhere else in the same filesystem) which is how I hid the data from the users! -- Ian. From ian at ianshome.com Thu Sep 17 03:50:47 2015 From: ian at ianshome.com (Ian Collins) Date: Thu, 17 Sep 2015 15:50:47 +1200 Subject: [OmniOS-discuss] ZFS panic when unlinking a file In-Reply-To: <20150916190739.GA3354@volny.cz> References: <20150914220930.GA30739@mandarb.com> <20150915170501.GB8322@mandarb.com> <55F89C35.5020700@ianshome.com> <20150915225403.GF30699@mandarb.com> <23B418F8-165B-48FE-950C-5D2719F61701@omniti.com> <20150916183115.GA23826@mandarb.com> <444B095E-D5AE-439E-8BAC-F58A9B391BC2@omniti.com> <20150916185427.GB23826@mandarb.com> <20150916190739.GA3354@volny.cz> Message-ID: <55FA3897.9050309@ianshome.com> Jan Vlach wrote: > Hi Omen, > > how about moving all other data than fcouncil.html to different zfs > dataset and then doing zfs destroy on the original dataset? > > No guarantee that this would work though .. That is how I recovered my data. This left me with a small enough data set to send to a VM for debugging. -- Ian. From canerturk at hotmail.com Thu Sep 17 04:58:26 2015 From: canerturk at hotmail.com (can erturk) Date: Thu, 17 Sep 2015 04:58:26 +0000 Subject: [OmniOS-discuss] omnios + nappit+ cifs data recovery Message-ID: hi. I'm doing file sharing via CIFS for Windows clients. One of the shared directories, with all its documents, has been deleted. The deleted directory was set to chmod 777. I didn't take a backup of this folder. I have 2 questions. 1) Is data recovery possible on OmniOS? How can I do that? Is there a tool, an app, or a Recycle Bin?
2) Can I find any log records showing which IP address accessed the share and deleted the files? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alka at hfg-gmuend.de Thu Sep 17 10:20:19 2015 From: alka at hfg-gmuend.de (Guenther Alka) Date: Thu, 17 Sep 2015 12:20:19 +0200 Subject: [OmniOS-discuss] omnios + nappit+ cifs data recovery In-Reply-To: References: Message-ID: <55FA93E3.8040205@hfg-gmuend.de> hello Can 1. The correct and only way is to create snapshots = previous versions on your filesystems. You can access them from Windows with a right click on a folder via Properties > Previous Versions. You can autocreate snapshots in napp-it with menu Jobs > snaps, e.g. create a snapjob every hour, keep 24 (can go back every hour for the current day) add a snapjob 11pm, every day, hold 7 (can go back daily) add a snapjob sunday 11pm, keep 4 (can go back weekly) add a snapjob 11pm, every 1st sunday, keep 12 (can go back monthly in the last year) btw you should not use classic Unix permissions like 777 with CIFS on a Solaris system, as CIFS uses Windows-like ACLs with their inheritance possibilities. A chmod 777 deletes all inheritance settings, which you do not want. Use ACL settings like everyone@=full instead. 2. I do not think that there are any CIFS IP access logs by default Gea On 17.09.2015 at 06:58, can erturk wrote: > hi. > > > > I'm doing file sharing via CIFS for Windows clients. > > One of the shared directories, with all its documents, has been > deleted. > > > > > The deleted directory was set to chmod 777. > > > > > I didn't take a backup of this folder. > > > > I have 2 questions. > > > > 1) Is data recovery possible on OmniOS? How can I do that? Is there a tool, > an app, or a Recycle Bin? > > > > 2) Can I find any log records showing which IP address accessed the share > and deleted the files?
> > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimklimov at cos.ru Thu Sep 17 11:54:56 2015 From: jimklimov at cos.ru (Jim Klimov) Date: Thu, 17 Sep 2015 13:54:56 +0200 Subject: [OmniOS-discuss] OmniOS Bloody update In-Reply-To: <20150916104924.GC20232@gutsman.lotheac.fi> References: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> <20150916104924.GC20232@gutsman.lotheac.fi> Message-ID: On 16 September 2015 at 12:49:24 CEST, Lauri Tirkkonen wrote: >On Mon, Sep 14 2015 23:46:11 -0400, Dan McDonald wrote: >> There have been some fixes there, but I'm not sure if it's all there. >> I do know one has to use --reject options to make a switch. >> >> Lauri "lotheac" Tirkkonen can provide more details. Also note - >there >> is an effort to replace sunssh with OpenSSh altogether. > >OpenSSH is installable in bloody with: > >pkg install --reject pkg:/network/ssh --reject pkg:/network/ssh/ssh-key >--reject pkg:/service/network/ssh pkg:/network/openssh >pkg:/network/openssh-server > >It's a bit unwieldy, but does work -- the rejects are necessary to tell >the pkg solver that it's okay to uninstall those packages (and satisfy >the dependencies with openssh ones instead). > >There's at least one more problem with this change, though, and that is >that openssh seems to be the default in new installs. I'm discussing >that with Dan, but I suspect it's going to be a blocker for >backporting. Is there some definitive list of functional differences between OpenSSH vanilla and SunSSH as of today (given the latter started as a fork of the former, IIRC)? Am I wrong to think the benefits of SunSSH revolved around integration with Solaris security features like RBAC and PAM?
Was there more to it? Why is it hard to upstream and just get the common (or specially ifdef'ed) OPENSSH to become SUNSSH + more new features/bugfixes, and not maintain and reconcile two forks? Jim -- Typos courtesy of K-9 Mail on my Samsung Android From lotheac at iki.fi Thu Sep 17 12:15:05 2015 From: lotheac at iki.fi (Lauri Tirkkonen) Date: Thu, 17 Sep 2015 15:15:05 +0300 Subject: [OmniOS-discuss] OmniOS Bloody update In-Reply-To: References: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> <20150916104924.GC20232@gutsman.lotheac.fi> Message-ID: <20150917121505.GK20232@gutsman.lotheac.fi> On Thu, Sep 17 2015 13:54:56 +0200, Jim Klimov wrote: > Is there some definitive list of functional differences between OpenSSH > vanilla and SunSSH as of today (given the latter started as a fork of > the former, IIRC)? Am I wrong to think the benefits of SunSSH revolved > around integration with Solaris security features like RBAC and PAM? > Was there more to it? Why is it hard to upstream and just get the > common (or specially ifdef'ed) OPENSSH to become SUNSSH + more new > features/bugfixes, and not maintain and reconcile two forks? My personal view is that SunSSH is largely unmaintained, and it's downright incompatible with recent OpenSSH versions by default. I'm not very familiar with the history there, but AFAIK one big reason for the fork was that SunSSH had a different privilege separation model [0]. As I understand it, Joyent are working on patching the parts of SunSSH on top of OpenSSH and shipping that. I'm not familiar with the differences apart from the privsep and haven't had time to review, but I guess their patchset [1] would be a good starting point for a list like the one you ask for.
The packaging change to allow vanilla OpenSSH installation on OmniOS is a separate effort; Dan hinted that OmniOS might include some of Joyent's patches in OpenSSH in the future, but I can't speak for him or OmniTI :) [0]: http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/ssh/README.altprivsep [1]: https://github.com/joyent/illumos-extra/tree/master/openssh/Patches -- Lauri Tirkkonen | lotheac @ IRCnet From alex at cooperi.net Thu Sep 17 18:53:23 2015 From: alex at cooperi.net (Alex Wilson) Date: Thu, 17 Sep 2015 11:53:23 -0700 Subject: [OmniOS-discuss] OmniOS Bloody update In-Reply-To: References: <00579F92-94AE-4DF1-A57C-0AFFCDC2A051@omniti.com> <004f01d0ef57$3ecd4ea0$bc67ebe0$@acm.org> <70C25CC8-CB6A-4ED1-8E62-FA25489CEF67@omniti.com> <20150916104924.GC20232@gutsman.lotheac.fi> Message-ID: <55CC54DF-BE51-4F7F-B878-1F43894417F0@cooperi.net> > Jim Klimov wrote: > > Is there some definitive list of functional differences between OpenSSH > vanilla and SunSSH as of today (given the latter started as a fork of the > former, IIRC)? Am I wrong to think the benefits of SunSSH revolved around > integration with Solaris security features like RBAC and PAM? Was there > more to it? Why is it hard to upstream and just get the common (or > specially ifdef'ed) OPENSSH to become SUNSSH + more new features/bugfixes, > and not maintain and reconcile two forks? I can give you a list of the features that Sun added in SunSSH which were never upstreamed into OpenSSH, and then also the features that OpenSSH have removed upstream which were still in SunSSH. Between the two of these you should be able to get a list of all the currently known expected differences between the two. Big ticket items added in SunSSH: * Support for Solaris PAM -- our PAM actually has some not-so-subtle differences to Linux PAM, and there are bugs in Linux PAM that openssh-portable has workarounds for which actively cause problems if used on our PAM.
This is the root cause of all of the role/RBAC issues when using OpenSSH on Illumos. * Separate PAM facilities for each auth method (eg pubkey, keyboard-interactive etc) * Support for GSS KEx and the Solaris Kerberos -- once again our krb5 is slightly different to everyone else and the patches to make this work are important to some users. * Support for BSM audit -- makes system-wide login event reporting consistent * i18n/gettext support -- SunSSH is fully translatable with gettext (though in Illumos no translations are available in the gate) * Language and locale negotiation -- this uses the protocol provisions for language negotiation to set up the LANG and LC_* env vars on the remote machine to as closely match the client's settings as possible based on the locales and languages available on the server. This is not the same as just using "SendEnv" and "AcceptEnv" (but those do likely cover 90% of real use of this feature) * altprivsep (Sun's alternative privilege separation model) -- I'm not going to try to explain this in detail, but the README.altprivsep file in the gate has a lot of text about it. Note that the text doesn't really line up 100% with what the code actually does, and I personally think the altprivsep model is badly broken (hence I did not attempt to port it forwards in the patches Joyent are working on) * Dropping Illumos fine-grained privileges in ssh-agent and the daemon -- this sounds small but I consider it a big-ticket item. We've had support for fine-grained privileges in the vein of OpenBSD's tame(2) for a long time, and they are a powerful way to help reduce the impact of security bugs in critical daemons like SSH. 
Smaller things in SunSSH: * -Y and -X -- SunSSH's -X option acts like -Y does for the rest of the world * Support for some additional key formats (RFC4716, PKCS8) * Extra hooks / plugins: PreUserauthHook, PubKeyPlugin * Support for dtrace in places, including the sftp-server which has special probes for performance measurement * Bug fixes for users with non-default login privilege sets (eg not checking the inability to setuid back to root after changing) and users on auto-mounted NFS home dirs * man pages reorganised into Illumos numbering * CTF information in the SSH binaries * Workarounds for earlier SunSSH bugs so that S10-era SunSSH clients can connect to a SunSSH daemon (even though they would, in some circumstances, fail to connect to a stock OpenSSH server). I hope this one has been unused for a long time. Things removed or changed significantly in OpenSSH (as of 7.1p1): * DSA keys are disabled by default for pubkey auth * tcpwrappers have been removed * Support for dh-group1 keyex has been disabled by default (so a brand new OpenSSH daemon and an old SunSSH client have no keyex algos in common and cannot connect) * Key fingerprint formats: default key fingerprint format is now ALG:base64 rather than the old MD5 hex format. A number of these changes have had attempts to upstream them over the years, but most of them have been rejected. Especially with respect to the PAM issues, it's not clear that there is an easy way to have a single C file that is compatible with both our PAM and the Linux PAM without it having so many #ifdefs as to be unreadable. Similar story with KRB5. 
Some patches are simply not suitable for upstreaming -- dtrace changes, man page renumbering, CTF information, restoring tcpwrappers support, gettext/i18n code -- none of these are things that the upstream OpenSSH developers have shown any interest in or desire to take on (and have actively dismissed attempts to do so in the past). In other cases, patches have been rejected upstream because of past lack of time/effort on Oracle or Sun's part to take feedback from OpenSSH developers into account, or various semi-political concerns. I'd like to revisit some of these in the near future. Hope that helps explain things. The patching effort I've been going through at Joyent is aimed at creating an up to date OpenSSH that is as compatible as reasonably possible with SunSSH, to make the transition smooth for the majority of our users. Then I will proceed to try to slowly upstream or deprecate and drop as many of the changes as possible to get us closer to plain OpenSSH. It's likely that some of the patches will be carried by us indefinitely, however, which is not unexpected. As an aside, I don't think there is a major Linux distro or BSD (aside from OpenBSD, of course) out there that runs an unpatched stock OpenSSH by default with the upstream default config. The fact is that everybody has local patches they carry around on the SSH code, or at the very least a non-stock default config file for the daemon they provide. Debian, for example, carry around about 40 patches to OpenSSH locally, with a total line count comparable to the current revision of the Joyent patch stack. From janus at volny.cz Fri Sep 18 21:27:23 2015 From: janus at volny.cz (Jan Vlach) Date: Fri, 18 Sep 2015 23:27:23 +0200 Subject: [OmniOS-discuss] Anyone running EMC networker on OmniOS ? Message-ID: <20150918212722.GA15921@volny.cz> Hello omnios discuss, is anyone successfully running the EMC Networker client on OmniOS? What version?
Thank you, Jan From mail at steffenwagner.com Fri Sep 18 21:44:55 2015 From: mail at steffenwagner.com (Steffen Wagner) Date: Fri, 18 Sep 2015 23:44:55 +0200 Subject: [OmniOS-discuss] Anyone running EMC networker on OmniOS ? In-Reply-To: <20150918212722.GA15921@volny.cz> References: <20150918212722.GA15921@volny.cz> Message-ID: <005301d0f25b$42996800$c7cc3800$@steffenwagner.com> Hi, try solaris clients... in most cases this works well. Regards, Steffen -----Original Message----- From: OmniOS-discuss [mailto:omnios-discuss-bounces at lists.omniti.com] On Behalf Of Jan Vlach Sent: Freitag, 18. September 2015 23:27 To: omnios-discuss at lists.omniti.com Subject: [OmniOS-discuss] Anyone running EMC networker on OmniOS ? Hello omnios discuss, is anyone running successfully EMC networker client on OmniOS? What version? Thank you, Jan _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss From henson at acm.org Sat Sep 19 01:10:54 2015 From: henson at acm.org (Paul B. Henson) Date: Fri, 18 Sep 2015 18:10:54 -0700 Subject: [OmniOS-discuss] zdb -h bug? In-Reply-To: <55F8A73E.6050003@genashor.com> References: <005801d0ef58$da1898f0$8e49cad0$@acm.org> <55F8A73E.6050003@genashor.com> Message-ID: <034601d0f278$08ce27b0$1a6a7710$@acm.org> > From: Gary Gendel > Sent: Tuesday, September 15, 2015 4:18 PM > > zdb -h core dumps on both of these, both before and after the update. > Since I have nothing fancy (no cache or log disks), I suspect (and hope) > the problem is in zdb. Thanks for the verification. I gotta tell you, when zdb core dumped while I was trying to determine if my pool had been corrupted by the L2ARC bug, it was not a good feeling 8-/. But I'm pretty sure at this point it is an unrelated bug with zdb and not a pool corruption issue. I still haven't had time to set up a test environment to reproduce it, maybe next week. 
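[Editorial aside on the zdb crash above: while `zdb -h` is core-dumping, the supported `zpool history` command reads the same on-disk pool history log and is worth trying instead. A minimal sketch, assuming a pool named "tank" -- the pool name is a placeholder:]

```shell
# Read the pool history log via the supported interface instead of zdb -h.
# "tank" is a placeholder pool name; adjust to your pool.
if command -v zpool >/dev/null 2>&1; then
    zpool history tank        # command-level history (create, scrub, snapshot, ...)
    zpool history -il tank    # -i: internally logged events, -l: user/host detail
else
    echo "zpool not available on this host"
fi
```

This does not help debug the zdb crash itself, but it can answer the underlying "what happened to this pool" question without the debug-level machinery that is dumping core.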
From lists at marzocchi.net Sun Sep 20 14:15:42 2015 From: lists at marzocchi.net (Olaf Marzocchi) Date: Sun, 20 Sep 2015 16:15:42 +0200 Subject: [OmniOS-discuss] Issue with dovecot under OmniOS and permissions or ACLs Message-ID: <55FEBF8E.7000806@marzocchi.net> Hello, I am running dovecot 2.2.18 (compiled from source) on OmniOS r151014. The Maildir folder is located in my home folder and I assigned it recursively the following permissions: drwxrwx---+348 olaf olaf 359 Sep 20 14:31 Maildir owner@:rwxpdDaARWcCos:fd-----:allow group@:rwxpdDaARWcCos:fd-----:allow group:mail:rwxpdDaARWcCos:fd-----:allow everyone@:------a-R-c--s:fd-----:allow I verified that newly created files inside Maildir correctly retain these ACLs. I still get this kind of errors: [ID 583609 mail.error] imap(olaf): Error: rename(/tank/home/olaf/Maildir/.Amici, conoscenti/dovecot.index.cache) failed: Permission denied (euid=501(olaf) egid=501(olaf) UNIX perms appear ok (ACL/MAC wrong?)) [ID 583609 mail.error] imap(olaf): Error: rename(/tank/home/olaf/Maildir/.Amici, conoscenti/dovecot.index.tmp, /tank/home/olaf/Maildir/.Amici, conoscenti/dovecot.index) failed: Permission denied I checked and the files mentioned have the same permissions as the folder Maildir. Since (from what I understand) dovecot works on the mail with my username, there's no reason for these errors. Other errors after I tried to rename a folder: Debug: Namespace : Using permissions from /tank/home/olaf/Maildir: mode=0770 gid=default Error: unlink(/tank/home/olaf/Maildir/subscriptions.lock) failed: Permission denied Error: file_dotlock_replace() failed with subscription file /tank/home/olaf/Maildir/subscriptions: Permission denied Error: rename(/tank/home/olaf/Maildir/subscriptions.lock, /tank/home/olaf/Maildir/subscriptions) failed: Permission denied At this point I don't know if it is an issue with my system, or some sort of incompatibility between dovecot and illumos or ZFS. 
I am not able to read and understand the source file, but this is where the "ACL/MAC wrong" error is coded: http://hg.dovecot.org/dovecot-2.0/file/tip/src/lib/eacces-error.c Does anyone have a clue about a possible way to solve the issue? It appears I can still put mail on the IMAP folders, but I fear this issue will cause problems later on. Thanks Olaf Marzocchi From steve at linuxsuite.org Mon Sep 21 20:11:51 2015 From: steve at linuxsuite.org (steve at linuxsuite.org) Date: Mon, 21 Sep 2015 16:11:51 -0400 Subject: [OmniOS-discuss] Configuring Jumbo frames Message-ID: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> Howdy! I have several bnx interfaces, bnx0 bnx1 bnx2 bnx3. I want to directly connect bnx2 and bnx3 to another machine (ie. no switch ), and configure jumbo frames on these interfaces only. Is this possible and what do I need to do? In bnx.conf I found this line ############################################################################ # mtu : Configures the hardware MTU size. The valid range for this # parameter is 60 to 9000. The default value is 1500. # #mtu=1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500; Should I put something like mtu=1500,9000; in bnx.conf and can I control the mtu on the particular interface with ifconfig and hostname.bnx files?? thanx - steve From doug at will.to Mon Sep 21 20:25:58 2015 From: doug at will.to (Doug Hughes) Date: Mon, 21 Sep 2015 16:25:58 -0400 Subject: [OmniOS-discuss] Configuring Jumbo frames In-Reply-To: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> References: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> Message-ID: All 1gig interfaces are auto-mdix these days, which means they will auto-negotiate and no special crossover cable is needed for host-to-host.
You can use ipadm to set the mtu like so: ipadm set-ifprop -p mtu=9000 -m ipv4 bnx0 If an mtu update in the driver.conf file is needed, you'll need to do that first (I cannot confirm nor deny whether that is needed). On Mon, Sep 21, 2015 at 4:11 PM, wrote: > > Howdy! > > I have several bnx interfaces, bnx0 bnx1 bnx2 bnx3. I want to > directly connect bnx2 and bnx3 to another machine (ie. no > switch ), and configure jumbo frames on > these interfaces only. Is this possible and what do I need to do? > > In bnx.conf I found this line > > > ############################################################################ > # mtu : Configures the hardware MTU size. The valid range for this > # parameter is 60 to 9000. The default value is 1500. > # > > #mtu=1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500; > > Should I put something like > > mtu=1500,9000; > > in bnx.conf and can I control the mtu on the particular interface > with ifconfig and hostname.bnx files?? > > thanx - steve > > > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danmcd at omniti.com Mon Sep 21 20:26:47 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 21 Sep 2015 16:26:47 -0400 Subject: [OmniOS-discuss] Configuring Jumbo frames In-Reply-To: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> References: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> Message-ID: <9A7A099D-9E53-4C76-9A0F-2387F98CA534@omniti.com> > On Sep 21, 2015, at 4:11 PM, steve at linuxsuite.org wrote: > > in bnx.conf and can I control the mtu on the particular interface > with ifconfig and hostname.bnx files?? bnx is an old, closed-source driver. MAYBE it's up to date enough to interface with GLDv3 properly.
Try this: dladm show-linkprop -p mtu bnx2 If you get an answer, you can use: dladm set-linkprop -p mtu=9000 bnx2 to update on a per-link basis. Dan From vab at bb-c.de Mon Sep 21 20:28:23 2015 From: vab at bb-c.de (Volker A. Brandt) Date: Mon, 21 Sep 2015 22:28:23 +0200 Subject: [OmniOS-discuss] Configuring Jumbo frames In-Reply-To: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> References: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> Message-ID: <22016.26727.898720.290813@glaurung.bb-c.de> Hi Steve! > I have several bnx interfaces, bnx0 bnx1 bnx2 bnx3. I > want to directly connect bnx2 and bnx3 to another machine (ie. no > switch ), and configure jumbo frames on these interfaces only. Is > this possible Yes. > and what do I need to do? > > In bnx.conf I found this line > > ############################################################################ > # mtu : Configures the hardware MTU size. The valid range for this > # parameter is 60 to 9000. The default value is 1500. > # > #mtu=1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500; > > Should I put something like > > mtu=1500,9000; No. The line you quoted defines the settings for 16 instances of the bnx driver. To set jumbo frames for instances #2 and #3 only, you would need to remove the comment sign and change it to mtu=1500,1500,9000,9000,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500; > in bnx.conf and can I control the mtu on the particular > interface with ifconfig and hostname.bnx files?? Forget those. This is OmniOS and the 21st century. :-) Use dladm, as in: dladm set-linkprop -p mtu=9000 <link> where <link> is the name of your datalink, e.g. "bnx2" (if you have not renamed it to be something else). Hope this helps -- Volker -- ------------------------------------------------------------------------ Volker A.
Brandt Consulting and Support for Oracle Solaris Brandt & Brandt Computer GmbH WWW: http://www.bb-c.de/ Am Wiesenpfad 6, 53340 Meckenheim, GERMANY Email: vab at bb-c.de Handelsregister: Amtsgericht Bonn, HRB 10513 Schuhgröße: 46 Geschäftsführer: Rainer J.H. Brandt und Volker A. Brandt "When logic and proportion have fallen sloppy dead" From danmcd at omniti.com Mon Sep 21 20:31:00 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 21 Sep 2015 16:31:00 -0400 Subject: [OmniOS-discuss] Configuring Jumbo frames In-Reply-To: References: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> Message-ID: > On Sep 21, 2015, at 4:25 PM, Doug Hughes wrote: > > > You can use ipadm to set the mtu like so: > > ipadm set-ifprop -p mtu=9000 -m ipv4 bnx0 But you also have to set the dladm property, Doug. And since bnx is closed-source, I can't just confirm it works. Dan From danmcd at omniti.com Mon Sep 21 20:36:14 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 21 Sep 2015 16:36:14 -0400 Subject: [OmniOS-discuss] Configuring Jumbo frames In-Reply-To: <22016.26727.898720.290813@glaurung.bb-c.de> References: <230d3016035088ebd88d5aaf3104c677.squirrel@emailmg.netfirms.com> <22016.26727.898720.290813@glaurung.bb-c.de> Message-ID: > On Sep 21, 2015, at 4:28 PM, Volker A. Brandt wrote: > No. The line you quoted defines the settings for 16 instances > of the bnx driver. To set jumbo frames for instances #2 and #3 only, > you would need to remove the comment sign and change it to > > mtu=1500,1500,9000,9000,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500,1500; > >> in bnx.conf and can I control the mtu on the particular >> interface with ifconfig and hostname.bnx files?? > > Forget those. This is OmniOS and the 21st century. :-) > > Use dladm, as in: > > dladm set-linkprop -p mtu=9000 <link> > > where <link> is the name of your datalink, e.g. "bnx2" (if you have > not renamed it to be something else). It's not clear the stock bnx driver is in the 21st Century, however.
Glad you have the if-dladm-fails workaround, however. Thanks Volker! Dan From KBruene at simmonsperrine.com Mon Sep 21 21:05:00 2015 From: KBruene at simmonsperrine.com (Kyle Bruene) Date: Mon, 21 Sep 2015 21:05:00 +0000 Subject: [OmniOS-discuss] L2ARC bug Message-ID: <202C92988C5CF249BD3F9F21B2B199CB8D50284A@SPMAIL1.spae.local> I've read some of the posts about the recent bug #6214 and am wondering if I might be affected. I am running omnios-170cea2 and do use l2arc. It is very difficult to get to a point where I can reboot these machines. If I am affected, is it as simple as removing the l2arc drives from the pool? Thanks guys. -------------- next part -------------- An HTML attachment was scrubbed... URL: From danmcd at omniti.com Mon Sep 21 21:14:47 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 21 Sep 2015 17:14:47 -0400 Subject: [OmniOS-discuss] L2ARC bug In-Reply-To: <202C92988C5CF249BD3F9F21B2B199CB8D50284A@SPMAIL1.spae.local> References: <202C92988C5CF249BD3F9F21B2B199CB8D50284A@SPMAIL1.spae.local> Message-ID: <8FE0D91E-741B-43A7-B09F-13AF4DFDF9D8@omniti.com> > On Sep 21, 2015, at 5:05 PM, Kyle Bruene wrote: > > I've read some of the posts about the recent bug #6214 and am wondering if I might be affected. I am running omnios-170cea2 Update NOW. Get the new BE ready NOW. > and do use l2arc. It is very difficult to get to a point where I can reboot these machines. If I am affected, is it as simple as removing the l2arc drives from the pool? And yes, removing the l2arc now will stop potential future corruption. Seriously, update NOW, and reboot ASAP. Dan From bhildebrandt at exegy.com Mon Sep 21 21:54:01 2015 From: bhildebrandt at exegy.com (Hildebrandt, Bill) Date: Mon, 21 Sep 2015 21:54:01 +0000 Subject: [OmniOS-discuss] flow control on Intel X520 10G Message-ID: Hello, First of all, I'm an OmniOS newbie, so take it easy on me. Does anyone have a current status of bug #4063 (last updated 9/2013 @70%)?
I recently tried to turn on bi-directional flow control and it did bad things to my switch. Is the Intel X520 just too old for anyone to care about it? I have an alternative of using a Chelsio T440-CR, but I wasn't sure about using the Chelsio driver for OpenIndiana. Thanks, Bill ________________________________ This e-mail and any documents accompanying it may contain legally privileged and/or confidential information belonging to Exegy, Inc. Such information may be protected from disclosure by law. The information is intended for use by only the addressee. If you are not the intended recipient, you are hereby notified that any disclosure or use of the information is strictly prohibited. If you have received this e-mail in error, please immediately contact the sender by e-mail or phone regarding instructions for return or destruction and do not use or disclose the content to others. -------------- next part -------------- An HTML attachment was scrubbed... URL: From danmcd at omniti.com Tue Sep 22 01:14:25 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 21 Sep 2015 21:14:25 -0400 Subject: [OmniOS-discuss] flow control on Intel X520 10G In-Reply-To: References: Message-ID: <41F10764-9009-4AD7-B05F-66BA2849A7F0@omniti.com> > On Sep 21, 2015, at 5:54 PM, Hildebrandt, Bill wrote: > > Hello, > > First of all, I?m an OmniOS newbie, so take it easy on me. Does anyone have a current status of bug #4063 (last updated 9/2013 @70%)? I recently tried to turn on bi-directional flow control and it did bad things to my switch. Is the Intel X520 just too old for anyone to care about it? I have an alternative of using a Chelsio T440-CR, but I wasn?t sure about using the Chelsio driver for OpenIndiana. > Wait... you're throwing around OmniOS and OI?! Which are you using? This is a question best posed to the illumos developers list. In particular, there's analysis from Nexenta in there indicating that there may be a fix available but which requires more testing. 
I'd highly recommend you ask about illumos 4063 specifically on the illumos developer's list. Dan From bhildebrandt at exegy.com Tue Sep 22 01:50:26 2015 From: bhildebrandt at exegy.com (Hildebrandt, Bill) Date: Tue, 22 Sep 2015 01:50:26 +0000 Subject: [OmniOS-discuss] flow control on Intel X520 10G In-Reply-To: <41F10764-9009-4AD7-B05F-66BA2849A7F0@omniti.com> References: , <41F10764-9009-4AD7-B05F-66BA2849A7F0@omniti.com> Message-ID: <93667C37-010F-4F97-93C2-3E4D900C75FC@exegy.com> I'm using OmniOS; however, the driver from Chelsio only comes in two flavors, Solaris and OpenIndiana. On Sep 21, 2015, at 8:14 PM, Dan McDonald wrote: > On Sep 21, 2015, at 5:54 PM, Hildebrandt, Bill wrote: > > Hello, > > First of all, I'm an OmniOS newbie, so take it easy on me. Does anyone have a current status of bug #4063 (last updated 9/2013 @70%)? I recently tried to turn on bi-directional flow control and it did bad things to my switch. Is the Intel X520 just too old for anyone to care about it? I have an alternative of using a Chelsio T440-CR, but I wasn't sure about using the Chelsio driver for OpenIndiana. Wait... you're throwing around OmniOS and OI?! Which are you using? This is a question best posed to the illumos developers list. In particular, there's analysis from Nexenta in there indicating that there may be a fix available but which requires more testing. I'd highly recommend you ask about illumos 4063 specifically on the illumos developer's list. Dan From danmcd at omniti.com Tue Sep 22 01:53:15 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 21 Sep 2015 21:53:15 -0400 Subject: [OmniOS-discuss] flow control on Intel X520 10G In-Reply-To: <93667C37-010F-4F97-93C2-3E4D900C75FC@exegy.com> References: <41F10764-9009-4AD7-B05F-66BA2849A7F0@omniti.com> <93667C37-010F-4F97-93C2-3E4D900C75FC@exegy.com> Message-ID: > On Sep 21, 2015, at 9:50 PM, Hildebrandt, Bill wrote: > > I'm using OmniOS; however, the driver from Chelsio only comes in two flavors, Solaris and OpenIndiana. Chelsio should call it "illumos" instead of OpenIndiana, then. Pardon my misunderstanding - Chelsio needs correcting. :) Dan From martin.truhlar at archcon.cz Wed Sep 23 08:51:00 2015 From: martin.truhlar at archcon.cz (=?utf-8?B?TWFydGluIFRydWhsw6HFmQ==?=) Date: Wed, 23 Sep 2015 10:51:00 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> Message-ID: Tests revealed that the problem is somewhere in the disk array itself. Write performance of a disk connected directly (via iSCSI) to KVM is poor as well; even write performance measured on OmniOS itself is very poor. So the loop is tightening, but there still remain a lot of possible hacks. I strove to use professional hw (disks included), so I would try to seek the error in the software setup first. Do you have any ideas where to search first (and second, third...)? FYI, mirror-5 was added recently to the running pool.
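One place to start the search is a crude local write test that takes iSCSI and the network out of the picture entirely. A sketch; the target path below is a placeholder for a file on a dpool dataset:

```shell
# Crude local sequential-write test: write 256 MB of zeros and let dd
# report how long it took.  Point f at a file on the suspect pool
# (e.g. /dpool/scratch/write-test.bin) to exercise the local write path
# with no iSCSI or network in the way.
f=/tmp/write-test.bin
dd if=/dev/zero of="$f" bs=1024k count=256
rm -f "$f"
```

Async writes like these can be absorbed by the ARC, so if the ZIL/slog path is suspect, also compare against a dataset with sync=always set.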
  pool: dpool
 state: ONLINE
  scan: scrub repaired 0 in 5h33m with 0 errors on Sun Sep 20 00:33:15 2015
config:

        NAME                       STATE     READ WRITE CKSUM   CAP     Product           /napp-it IOstat mess
        dpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c1t50014EE00400FA16d0  ONLINE       0     0     0   1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE       0     0     0   1 TB    WDC WD1003FBYX-0  S:0 H:0 T:0
          mirror-1                 ONLINE       0     0     0
            c1t50014EE05950B131d0  ONLINE       0     0     0   1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0   1 TB    WDC WD1003FBYZ-0  S:0 H:0 T:0
          mirror-2                 ONLINE       0     0     0
            c1t50014EE05958C51Bd0  ONLINE       0     0     0   1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE       0     0     0   1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-3                 ONLINE       0     0     0
            c1t50014EE0AEAE7540d0  ONLINE       0     0     0   1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE       0     0     0   1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-5                 ONLINE       0     0     0
            c1t50014EE0AEABB8E7d0  ONLINE       0     0     0   1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEB44327d0  ONLINE       0     0     0   1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
        logs
          mirror-4                 ONLINE       0     0     0
            c1t55CD2E404B88ABE1d0  ONLINE       0     0     0   120 GB  INTEL SSDSC2BW12  S:0 H:0 T:0
            c1t55CD2E404B88E4CFd0  ONLINE       0     0     0   120 GB  INTEL SSDSC2BW12  S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE       0     0     0   180 GB  INTEL SSDSC2BW18  S:0 H:0 T:0
        spares
          c2t2d0                   AVAIL                        1 TB    WDC WD10EFRX-68F  S:0 H:0 T:0

errors: No known data errors

Martin

-----Original Message----- From: Dan McDonald [mailto:danmcd at omniti.com] Sent: Wednesday, September 16, 2015 1:51 PM To: Martin Truhlář Cc: omnios-discuss at lists.omniti.com; Dan McDonald Subject: Re: [OmniOS-discuss] iSCSI poor write performance > On Sep 16, 2015, at 4:04 AM, Martin Truhlář wrote: > > Yes, I'm aware, that problem can be hidden in many places. > MTU is 1500. All nics and their setup are included at this email. Start by making your 10GigE network use 9000 MTU. You'll need to configure this on both ends (is this directly-attached 10GigE? Or over a switch?).
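The 9000 MTU Dan suggests is likewise a dladm link property; a sketch, with ixgbe0 standing in for the real link name:

```shell
# Raise the datalink MTU to 9000.  The IP interface may need to be
# unplumbed first, and both ends plus any switch ports in between
# must be configured to match.
dladm set-linkprop -p mtu=9000 ixgbe0
dladm show-linkprop -p mtu ixgbe0
```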
Dan From hannohirschberger at googlemail.com Wed Sep 23 12:43:16 2015 From: hannohirschberger at googlemail.com (Hanno Hirschberger) Date: Wed, 23 Sep 2015 14:43:16 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> Message-ID: <56029E64.1060908@googlemail.com> Hi Martin, On 23.09.2015 10:51, Martin Truhlář wrote: > Tests revealed, that problem is somewhere in disk array itself. are you familiar with the ashift problem on 4k drives? My best guess would be that the 1 TB WD drives are emulating a block size of 512 bytes while using 4k sectors internally. OmniOS then uses an ashift value of 9 and aligns the data on 512-byte sectors. This slows the whole pool down - I had the same problem before. The ashift value has to be 12 on 4k drives! Try the command 'zdb' to gather the values for your drives. Look for 'ashift: 9' or 'ashift: 12'. Regards, Hanno From mail at steffenwagner.com Wed Sep 23 13:51:30 2015 From: mail at steffenwagner.com (Steffen Wagner) Date: Wed, 23 Sep 2015 15:51:30 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: <56029E64.1060908@googlemail.com> References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> <56029E64.1060908@googlemail.com> Message-ID: <000801d0f606$f3b97a40$db2c6ec0$@steffenwagner.com> Hi Hanno, how do you calculate the best ashift value? Thanks, Steffen -----Original Message----- From: OmniOS-discuss [mailto:omnios-discuss-bounces at lists.omniti.com] On Behalf Of Hanno Hirschberger Sent: Mittwoch, 23. September 2015 14:43 To: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] iSCSI poor write performance Hi Martin, On 23.09.2015 10:51, Martin Truhlář wrote: > Tests revealed, that problem is somewhere in disk array itself. are you familiar with the ashift problem on 4k drives?
My best guess would be that the 1 TB WD drives are emulating a block size of 512 bytes while using 4k sectors internally. OmniOS then uses an ashift value of 9 and aligns the data on 512-byte sectors. This slows the whole pool down - I had the same problem before. The ashift value has to be 12 on 4k drives! Try the command 'zdb' to gather the values for your drives. Look for 'ashift: 9' or 'ashift: 12'. Regards, Hanno _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss From martin.truhlar at archcon.cz Wed Sep 23 14:06:53 2015 From: martin.truhlar at archcon.cz (=?utf-8?B?TWFydGluIFRydWhsw6HFmQ==?=) Date: Wed, 23 Sep 2015 16:06:53 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: <56029E64.1060908@googlemail.com> References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> <56029E64.1060908@googlemail.com> Message-ID: Hi Hanno, Thank you for your advice; unfortunately, ashift is already set to 12 on dpool, with no impact on performance.
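Hanno's zdb check can be narrowed down; run with no arguments, zdb prints the cached configuration of all imported pools, so a grep is enough:

```shell
# Print only the ashift lines from the cached pool configuration.
# 12 means 4K-sector alignment; 9 means 512-byte alignment.
zdb | grep ashift
```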
Martin dpool: version: 5000 name: 'dpool' state: 0 txg: 423442 pool_guid: 8301756920046328435 hostid: 390978448 hostname: 'archnas' vdev_children: 6 vdev_tree: type: 'root' id: 0 guid: 8301756920046328435 create_txg: 4 children[0]: type: 'mirror' id: 0 guid: 1196673952777635344 metaslab_array: 34 metaslab_shift: 33 ashift: 12 asize: 1000191557632 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 8124964091934866578 path: '/dev/dsk/c1t50014EE00400FA16d0s0' devid: 'id1,sd at n50014ee00400fa16/a' phys_path: '/scsi_vhci/disk at g50014ee00400fa16:a' whole_disk: 1 DTL: 490 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 9348868466535755709 path: '/dev/dsk/c1t50014EE2B40F14DBd0s0' devid: 'id1,sd at n50014ee2b40f14db/a' phys_path: '/scsi_vhci/disk at g50014ee2b40f14db:a' whole_disk: 1 DTL: 489 create_txg: 4 children[1]: type: 'mirror' id: 1 guid: 9943497592636049032 metaslab_array: 38 metaslab_shift: 33 ashift: 12 asize: 1000191557632 is_log: 0 create_txg: 34 children[0]: type: 'disk' id: 0 guid: 2705367364579591435 path: '/dev/dsk/c1t50014EE05950B131d0s0' devid: 'id1,sd at n50014ee05950b131/a' phys_path: '/scsi_vhci/disk at g50014ee05950b131:a' whole_disk: 1 DTL: 488 create_txg: 34 children[1]: type: 'disk' id: 1 guid: 5412107877453931054 path: '/dev/dsk/c1t50014EE2B5E5A6B8d0s0' devid: 'id1,sd at n50014ee2b5e5a6b8/a' phys_path: '/scsi_vhci/disk at g50014ee2b5e5a6b8:a' whole_disk: 1 DTL: 487 create_txg: 34 children[2]: type: 'mirror' id: 2 guid: 4337686502023930092 whole_disk: 0 metaslab_array: 40 metaslab_shift: 33 ashift: 12 asize: 1000191557632 is_log: 0 create_txg: 65 children[0]: type: 'disk' id: 0 guid: 12065653943105190290 path: '/dev/dsk/c1t50014EE05958C51Bd0s0' devid: 'id1,sd at n50014ee05958c51b/a' phys_path: '/scsi_vhci/disk at g50014ee05958c51b:a' whole_disk: 1 DTL: 486 create_txg: 65 children[1]: type: 'disk' id: 1 guid: 7956964322079560255 path: '/dev/dsk/c1t50014EE0595617ACd0s0' devid: 'id1,sd at n50014ee0595617ac/a' phys_path: 
'/scsi_vhci/disk at g50014ee0595617ac:a' whole_disk: 1 DTL: 482 create_txg: 65 children[3]: type: 'mirror' id: 3 guid: 13515811785015942389 metaslab_array: 43 metaslab_shift: 33 ashift: 12 asize: 1000191557632 is_log: 0 create_txg: 119 children[0]: type: 'disk' id: 0 guid: 2010958773514461606 path: '/dev/dsk/c1t50014EE0AEAE7540d0s0' devid: 'id1,sd at n50014ee0aeae7540/a' phys_path: '/scsi_vhci/disk at g50014ee0aeae7540:a' whole_disk: 1 DTL: 484 create_txg: 119 children[1]: type: 'disk' id: 1 guid: 6920452460884353416 path: '/dev/dsk/c1t50014EE0AEAE9B65d0s0' devid: 'id1,sd at n50014ee0aeae9b65/a' phys_path: '/scsi_vhci/disk at g50014ee0aeae9b65:a' whole_disk: 1 DTL: 491 create_txg: 119 children[4]: type: 'mirror' id: 4 guid: 13450996153705674574 metaslab_array: 45 metaslab_shift: 30 ashift: 9 asize: 120020795392 is_log: 1 create_txg: 172 children[0]: type: 'disk' id: 0 guid: 642840549260709901 path: '/dev/dsk/c1t55CD2E404B88ABE1d0s0' devid: 'id1,sd at n55cd2e404b88abe1/a' phys_path: '/scsi_vhci/disk at g55cd2e404b88abe1:a' whole_disk: 1 DTL: 494 create_txg: 172 children[1]: type: 'disk' id: 1 guid: 17473204952243782915 path: '/dev/dsk/c1t55CD2E404B88E4CFd0s0' devid: 'id1,sd at n55cd2e404b88e4cf/a' phys_path: '/scsi_vhci/disk at g55cd2e404b88e4cf:a' whole_disk: 1 DTL: 493 create_txg: 172 children[5]: type: 'mirror' id: 5 guid: 6461803899340698053 metaslab_array: 520 metaslab_shift: 33 ashift: 12 asize: 1000191557632 is_log: 0 create_txg: 422833 children[0]: type: 'disk' id: 0 guid: 15790186799979059305 path: '/dev/dsk/c1t50014EE0AEABB8E7d0s0' devid: 'id1,sd at n50014ee0aeabb8e7/a' phys_path: '/scsi_vhci/disk at g50014ee0aeabb8e7:a' whole_disk: 1 create_txg: 422833 children[1]: type: 'disk' id: 1 guid: 3033691275784652782 path: '/dev/dsk/c1t50014EE0AEB44327d0s0' devid: 'id1,sd at n50014ee0aeb44327/a' phys_path: '/scsi_vhci/disk at g50014ee0aeb44327:a' whole_disk: 1 create_txg: 422833 features_for_read: com.delphix:hole_birth com.delphix:embedded_data -----Original 
Message----- From: Hanno Hirschberger [mailto:hannohirschberger at googlemail.com] Sent: Wednesday, September 23, 2015 2:43 PM To: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] iSCSI poor write performance Hi Martin, On 23.09.2015 10:51, Martin Truhl?? wrote: > Tests revealed, that problem is somewhere in disk array itself. are you familiar with the ashift problem on 4k drives? My best guess would be that the 1 TB WD drives are emulating a block size of 512 bytes while using 4k sectors internally. OmniOS is using a ashift value of 9 then to align the data efficiently (on 512 byte sectors!). This slows the whole pool down - I had the same problem before. The ashift value has to be 12 on 4k drives! Try the command 'zdb' to gather the values for your drives. Look for 'ashift: 9' oder 'ashift: 12'. Regards, Hanno _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss From mir at miras.org Wed Sep 23 14:40:01 2015 From: mir at miras.org (Michael Rasmussen) Date: Wed, 23 Sep 2015 16:40:01 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> <56029E64.1060908@googlemail.com> Message-ID: Have you tried running iperf? On September 23, 2015 4:06:53 PM CEST, "Martin Truhl??" wrote: >Hi Hanno > >Thank you for your advice, unfortunatelly on dpool is ashift already >set to 12 without any impact on performance. 
> >Martin > >dpool: > version: 5000 > name: 'dpool' > state: 0 > txg: 423442 > pool_guid: 8301756920046328435 > hostid: 390978448 > hostname: 'archnas' > vdev_children: 6 > vdev_tree: > type: 'root' > id: 0 > guid: 8301756920046328435 > create_txg: 4 > children[0]: > type: 'mirror' > id: 0 > guid: 1196673952777635344 > metaslab_array: 34 > metaslab_shift: 33 > ashift: 12 > asize: 1000191557632 > is_log: 0 > create_txg: 4 > children[0]: > type: 'disk' > id: 0 > guid: 8124964091934866578 > path: '/dev/dsk/c1t50014EE00400FA16d0s0' > devid: 'id1,sd at n50014ee00400fa16/a' > phys_path: '/scsi_vhci/disk at g50014ee00400fa16:a' > whole_disk: 1 > DTL: 490 > create_txg: 4 > children[1]: > type: 'disk' > id: 1 > guid: 9348868466535755709 > path: '/dev/dsk/c1t50014EE2B40F14DBd0s0' > devid: 'id1,sd at n50014ee2b40f14db/a' > phys_path: '/scsi_vhci/disk at g50014ee2b40f14db:a' > whole_disk: 1 > DTL: 489 > create_txg: 4 > children[1]: > type: 'mirror' > id: 1 > guid: 9943497592636049032 > metaslab_array: 38 > metaslab_shift: 33 > ashift: 12 > asize: 1000191557632 > is_log: 0 > create_txg: 34 > children[0]: > type: 'disk' > id: 0 > guid: 2705367364579591435 > path: '/dev/dsk/c1t50014EE05950B131d0s0' > devid: 'id1,sd at n50014ee05950b131/a' > phys_path: '/scsi_vhci/disk at g50014ee05950b131:a' > whole_disk: 1 > DTL: 488 > create_txg: 34 > children[1]: > type: 'disk' > id: 1 > guid: 5412107877453931054 > path: '/dev/dsk/c1t50014EE2B5E5A6B8d0s0' > devid: 'id1,sd at n50014ee2b5e5a6b8/a' > phys_path: '/scsi_vhci/disk at g50014ee2b5e5a6b8:a' > whole_disk: 1 > DTL: 487 > create_txg: 34 > children[2]: > type: 'mirror' > id: 2 > guid: 4337686502023930092 > whole_disk: 0 > metaslab_array: 40 > metaslab_shift: 33 > ashift: 12 > asize: 1000191557632 > is_log: 0 > create_txg: 65 > children[0]: > type: 'disk' > id: 0 > guid: 12065653943105190290 > path: '/dev/dsk/c1t50014EE05958C51Bd0s0' > devid: 'id1,sd at n50014ee05958c51b/a' > phys_path: '/scsi_vhci/disk at g50014ee05958c51b:a' > whole_disk: 
1 > DTL: 486 > create_txg: 65 > children[1]: > type: 'disk' > id: 1 > guid: 7956964322079560255 > path: '/dev/dsk/c1t50014EE0595617ACd0s0' > devid: 'id1,sd at n50014ee0595617ac/a' > phys_path: '/scsi_vhci/disk at g50014ee0595617ac:a' > whole_disk: 1 > DTL: 482 > create_txg: 65 > children[3]: > type: 'mirror' > id: 3 > guid: 13515811785015942389 > metaslab_array: 43 > metaslab_shift: 33 > ashift: 12 > asize: 1000191557632 > is_log: 0 > create_txg: 119 > children[0]: > type: 'disk' > id: 0 > guid: 2010958773514461606 > path: '/dev/dsk/c1t50014EE0AEAE7540d0s0' > devid: 'id1,sd at n50014ee0aeae7540/a' > phys_path: '/scsi_vhci/disk at g50014ee0aeae7540:a' > whole_disk: 1 > DTL: 484 > create_txg: 119 > children[1]: > type: 'disk' > id: 1 > guid: 6920452460884353416 > path: '/dev/dsk/c1t50014EE0AEAE9B65d0s0' > devid: 'id1,sd at n50014ee0aeae9b65/a' > phys_path: '/scsi_vhci/disk at g50014ee0aeae9b65:a' > whole_disk: 1 > DTL: 491 > create_txg: 119 > children[4]: > type: 'mirror' > id: 4 > guid: 13450996153705674574 > metaslab_array: 45 > metaslab_shift: 30 > ashift: 9 > asize: 120020795392 > is_log: 1 > create_txg: 172 > children[0]: > type: 'disk' > id: 0 > guid: 642840549260709901 > path: '/dev/dsk/c1t55CD2E404B88ABE1d0s0' > devid: 'id1,sd at n55cd2e404b88abe1/a' > phys_path: '/scsi_vhci/disk at g55cd2e404b88abe1:a' > whole_disk: 1 > DTL: 494 > create_txg: 172 > children[1]: > type: 'disk' > id: 1 > guid: 17473204952243782915 > path: '/dev/dsk/c1t55CD2E404B88E4CFd0s0' > devid: 'id1,sd at n55cd2e404b88e4cf/a' > phys_path: '/scsi_vhci/disk at g55cd2e404b88e4cf:a' > whole_disk: 1 > DTL: 493 > create_txg: 172 > children[5]: > type: 'mirror' > id: 5 > guid: 6461803899340698053 > metaslab_array: 520 > metaslab_shift: 33 > ashift: 12 > asize: 1000191557632 > is_log: 0 > create_txg: 422833 > children[0]: > type: 'disk' > id: 0 > guid: 15790186799979059305 > path: '/dev/dsk/c1t50014EE0AEABB8E7d0s0' > devid: 'id1,sd at n50014ee0aeabb8e7/a' > phys_path: '/scsi_vhci/disk at 
g50014ee0aeabb8e7:a' > whole_disk: 1 > create_txg: 422833 > children[1]: > type: 'disk' > id: 1 > guid: 3033691275784652782 > path: '/dev/dsk/c1t50014EE0AEB44327d0s0' > devid: 'id1,sd at n50014ee0aeb44327/a' > phys_path: '/scsi_vhci/disk at g50014ee0aeb44327:a' > whole_disk: 1 > create_txg: 422833 > features_for_read: > com.delphix:hole_birth > com.delphix:embedded_data > > >-----Original Message----- >From: Hanno Hirschberger [mailto:hannohirschberger at googlemail.com] >Sent: Wednesday, September 23, 2015 2:43 PM >To: omnios-discuss at lists.omniti.com >Subject: Re: [OmniOS-discuss] iSCSI poor write performance > >Hi Martin, > >On 23.09.2015 10:51, Martin Truhl?? wrote: >> Tests revealed, that problem is somewhere in disk array itself. > >are you familiar with the ashift problem on 4k drives? My best guess >would be that the 1 TB WD drives are emulating a block size of 512 >bytes while using 4k sectors internally. OmniOS is using a ashift value >of 9 then to align the data efficiently (on 512 byte sectors!). This >slows the whole pool down - I had the same problem before. The ashift >value has to be 12 on 4k drives! > >Try the command 'zdb' to gather the values for your drives. Look for >'ashift: 9' oder 'ashift: 12'. > >Regards, > >Hanno >_______________________________________________ >OmniOS-discuss mailing list >OmniOS-discuss at lists.omniti.com >http://lists.omniti.com/mailman/listinfo/omnios-discuss >_______________________________________________ >OmniOS-discuss mailing list >OmniOS-discuss at lists.omniti.com >http://lists.omniti.com/mailman/listinfo/omnios-discuss -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. ---- This mail was virus scanned and spam checked before delivery. This mail is also DKIM signed. See header dkim-signature. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alka at hfg-gmuend.de Wed Sep 23 14:55:21 2015 From: alka at hfg-gmuend.de (Guenther Alka) Date: Wed, 23 Sep 2015 16:55:21 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> Message-ID: <5602BD59.1080509@hfg-gmuend.de> Poor write performance is often related to sync write. Enable the write-back cache for your logical units (and disable the ZFS sync property on the filesystem for file-based LUs) and redo some performance tests. Gea Am 23.09.2015 um 10:51 schrieb Martin Truhlář: > Tests revealed, that problem is somewhere in disk array itself. Write performance of disk connected directly (via iSCSI) to KVM is poor as well, even write performance measured on Omnios is very poor. So loop is tightened, but there still remains lot of possible hacks. > I strived to use professional hw (disks included), so I would try to seek the error in a software setup first. Do you have any ideas where to search first (and second, third...)? > > FYI mirror 5 was added lately to the running pool.
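Gea's suggestion translates to stmfadm and zfs one-liners; the LU GUID and dataset name below are placeholders, not values from this thread:

```shell
# Enable the write-back cache on a logical unit ('wcd' means
# "write cache disabled", so false enables the cache).
# List the real LU GUIDs with 'stmfadm list-lu -v'.
stmfadm modify-lu -p wcd=false 600144F000000000000000000000CAFE
# For a file-based LU, optionally disable synchronous semantics on the
# backing filesystem; weigh the data-loss risk on power failure first.
zfs set sync=disabled dpool/iscsi
```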
> > pool: dpool > state: ONLINE > scan: scrub repaired 0 in 5h33m with 0 errors on Sun Sep 20 00:33:15 2015 > config: > > NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess > dpool ONLINE 0 0 0 > mirror-0 ONLINE 0 0 0 > c1t50014EE00400FA16d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE2B40F14DBd0 ONLINE 0 0 0 1 TB WDC WD1003FBYX-0 S:0 H:0 T:0 > mirror-1 ONLINE 0 0 0 > c1t50014EE05950B131d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE2B5E5A6B8d0 ONLINE 0 0 0 1 TB WDC WD1003FBYZ-0 S:0 H:0 T:0 > mirror-2 ONLINE 0 0 0 > c1t50014EE05958C51Bd0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE0595617ACd0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > mirror-3 ONLINE 0 0 0 > c1t50014EE0AEAE7540d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE0AEAE9B65d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > mirror-5 ONLINE 0 0 0 > c1t50014EE0AEABB8E7d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE0AEB44327d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > logs > mirror-4 ONLINE 0 0 0 > c1t55CD2E404B88ABE1d0 ONLINE 0 0 0 120 GB INTEL SSDSC2BW12 S:0 H:0 T:0 > c1t55CD2E404B88E4CFd0 ONLINE 0 0 0 120 GB INTEL SSDSC2BW12 S:0 H:0 T:0 > cache > c1t55CD2E4000339A59d0 ONLINE 0 0 0 180 GB INTEL SSDSC2BW18 S:0 H:0 T:0 > spares > c2t2d0 AVAIL 1 TB WDC WD10EFRX-68F S:0 H:0 T:0 > > errors: No known data errors > > Martin > > > -----Original Message----- > From: Dan McDonald [mailto:danmcd at omniti.com] > Sent: Wednesday, September 16, 2015 1:51 PM > To: Martin Truhl?? > Cc: omnios-discuss at lists.omniti.com; Dan McDonald > Subject: Re: [OmniOS-discuss] iSCSI poor write performance > > >> On Sep 16, 2015, at 4:04 AM, Martin Truhl?? wrote: >> >> Yes, I'm aware, that problem can be hidden in many places. >> MTU is 1500. All nics and their setup are included at this email. > Start by making your 10GigE network use 9000 MTU. You'll need to configure this on both ends (is this directly-attached 10GigE? 
Or over a switch?). > > Dan > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From stephan.budach at JVM.DE Wed Sep 23 15:23:24 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Wed, 23 Sep 2015 17:23:24 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> Message-ID: <5602C3EC.3070308@jvm.de> Am 23.09.15 um 10:51 schrieb Martin Truhlář: > Tests revealed, that problem is somewhere in disk array itself. Write performance of disk connected directly (via iSCSI) to KVM is poor as well, even write performance measured on Omnios is very poor. So loop is tightened, but there still remains lot of possible hacks. > I strived to use professional hw (disks included), so I would try to seek the error in a software setup first. Do you have any ideas where to search first (and second, third...)? > > FYI mirror 5 was added lately to the running pool.
> > pool: dpool > state: ONLINE > scan: scrub repaired 0 in 5h33m with 0 errors on Sun Sep 20 00:33:15 2015 > config: > > NAME STATE READ WRITE CKSUM CAP Product /napp-it IOstat mess > dpool ONLINE 0 0 0 > mirror-0 ONLINE 0 0 0 > c1t50014EE00400FA16d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE2B40F14DBd0 ONLINE 0 0 0 1 TB WDC WD1003FBYX-0 S:0 H:0 T:0 > mirror-1 ONLINE 0 0 0 > c1t50014EE05950B131d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE2B5E5A6B8d0 ONLINE 0 0 0 1 TB WDC WD1003FBYZ-0 S:0 H:0 T:0 > mirror-2 ONLINE 0 0 0 > c1t50014EE05958C51Bd0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE0595617ACd0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > mirror-3 ONLINE 0 0 0 > c1t50014EE0AEAE7540d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE0AEAE9B65d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > mirror-5 ONLINE 0 0 0 > c1t50014EE0AEABB8E7d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > c1t50014EE0AEB44327d0 ONLINE 0 0 0 1 TB WDC WD1002F9YZ-0 S:0 H:0 T:0 > logs > mirror-4 ONLINE 0 0 0 > c1t55CD2E404B88ABE1d0 ONLINE 0 0 0 120 GB INTEL SSDSC2BW12 S:0 H:0 T:0 > c1t55CD2E404B88E4CFd0 ONLINE 0 0 0 120 GB INTEL SSDSC2BW12 S:0 H:0 T:0 > cache > c1t55CD2E4000339A59d0 ONLINE 0 0 0 180 GB INTEL SSDSC2BW18 S:0 H:0 T:0 > spares > c2t2d0 AVAIL 1 TB WDC WD10EFRX-68F S:0 H:0 T:0 > > errors: No known data errors > > Martin > > > -----Original Message----- > From: Dan McDonald [mailto:danmcd at omniti.com] > Sent: Wednesday, September 16, 2015 1:51 PM > To: Martin Truhl?? > Cc: omnios-discuss at lists.omniti.com; Dan McDonald > Subject: Re: [OmniOS-discuss] iSCSI poor write performance > > >> On Sep 16, 2015, at 4:04 AM, Martin Truhl?? wrote: >> >> Yes, I'm aware, that problem can be hidden in many places. >> MTU is 1500. All nics and their setup are included at this email. > Start by making your 10GigE network use 9000 MTU. You'll need to configure this on both ends (is this directly-attached 10GigE? 
Or over a switch?). > > Dan > To understand what might be going on with your zpool, I'd monitor the disks using iostat -xme 5 and keep an eye on the errors and svc_t. Just today I had an issue where one of my OmniOS boxes showed incredible svc_t for all of its zpools, although the drives themselves showed only moderate ones. The impact was a very high load on the initiators, which were connected to the targets exported from those zpools. As I couldn't figure out what was going on, I decided to reboot that box, and afterwards things returned to normal again. Luckily, this was only one side of an ASM mirror, so bouncing the box didn't matter. Also, when you say that mirror-5 has been added recently, how is the data spread across the vdevs? If the other vdevs were already quite full, then that could also lead to significant performance issues. In any case, you will need to get the performance of your zpools straight first, before even beginning to think about how to tweak the performance over the network. Cheers, stephan From mir at miras.org Wed Sep 23 16:59:27 2015 From: mir at miras.org (Michael Rasmussen) Date: Wed, 23 Sep 2015 18:59:27 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: <5602C3EC.3070308@jvm.de> References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> <5602C3EC.3070308@jvm.de> Message-ID: <20150923185927.4fc34561@sleipner.datanom.net> On Wed, 23 Sep 2015 17:23:24 +0200 Stephan Budach wrote: > > In any case, you will need to get the performance of your zpools straight first, before even beginning to think about how to tweak the performance over the network. > Since his pool is comprised of vdev mirror pairs, where one disk is local and the other disk is attached via iSCSI, solving network performance is also part of solving the pool performance.
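Michael's earlier iperf suggestion is the quickest way to separate raw network throughput from pool throughput; a sketch, with a placeholder address for the OmniOS box:

```shell
# On the OmniOS storage box, start the server side:
iperf -s
# On the initiator, run a 30-second TCP test toward the storage box
# (the address is a placeholder):
iperf -c 192.0.2.10 -t 30
```

If iperf reports something close to line rate, the remaining suspects are the pool and the iSCSI stack rather than the wire.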
-- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: This fortune is encrypted -- get your decoder rings ready! -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 181 bytes Desc: OpenPGP digital signature URL: From stephan.budach at JVM.DE Wed Sep 23 17:56:26 2015 From: stephan.budach at JVM.DE (Stephan Budach) Date: Wed, 23 Sep 2015 19:56:26 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: <20150923185927.4fc34561@sleipner.datanom.net> References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> <5602C3EC.3070308@jvm.de> <20150923185927.4fc34561@sleipner.datanom.net> Message-ID: <5602E7CA.8080504@jvm.de> Am 23.09.15 um 18:59 schrieb Michael Rasmussen: > On Wed, 23 Sep 2015 17:23:24 +0200 > Stephan Budach wrote: > >> In any case, you will need to get the performance of your zpools straight first, before even beginning to think about how to tweak the performance over the network. >> > Since his pool is comprised of vdev mirror pairs where one disk is local > and the other disk is attached via iSCSI, solving network performance is > also part of solving the pool performance. > Huh? Where did that escape me? I don't think the pool layout showed any remote disks; they all seemed to be from the same controller, didn't they? And even if that were the case, one would always start at the zpool and work one's way up from there, no?
Cheers, Stephan From mir at miras.org Wed Sep 23 18:22:04 2015 From: mir at miras.org (Michael Rasmussen) Date: Wed, 23 Sep 2015 20:22:04 +0200 Subject: [OmniOS-discuss] iSCSI poor write performance In-Reply-To: <5602E7CA.8080504@jvm.de> References: <15C9B79E-7BC4-4C01-9660-FFD64353304D@omniti.com> <8D1002D9-69E2-4857-945A-746B821B27A1@omniti.com> <5602C3EC.3070308@jvm.de> <20150923185927.4fc34561@sleipner.datanom.net> <5602E7CA.8080504@jvm.de> Message-ID: <20150923202204.4f22b646@sleipner.datanom.net> On Wed, 23 Sep 2015 19:56:26 +0200 Stephan Budach wrote: > Huh? Where did that escape me? I don't think, that the pool layout showed any remote disks, they all Sorry, I was reading too hastily. I mistook phys_path: '/scsi_vhci/disk at g50014ee00400fa16:a' for iSCSI. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C mir miras org http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: Whatever doesn't succeed in two months and a half in California will never succeed. -- Rev. Henry Durant, founder of the University of California -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 181 bytes Desc: OpenPGP digital signature URL: From mayuresh at kathe.in Fri Sep 25 19:15:11 2015 From: mayuresh at kathe.in (Mayuresh Kathe) Date: Sat, 26 Sep 2015 00:45:11 +0530 Subject: [OmniOS-discuss] omnios : r151014 : usb install media : installation failure ... Message-ID: hello, i got the latest omnios-r151014.usb-dd from the omniti website. checksum is fine, so the download is probably not corrupt. wrote the image to a brand-new usb pendrive using: "dd if=omnios-r151014.usb-dd of=/dev/sdb bs=1024 conv=sync" under ubuntu 15.04 (amd64) desktop.
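Before blaming the installer, it is worth proving the image really landed on the stick intact. A read-back sketch, assuming GNU coreutils; the names in the example comment come from the dd command above:

```shell
# Read-back check after writing an image with dd: compare the SHA-256
# of the image file against the first <image-size> bytes of the target
# device.  Assumes GNU coreutils (head -c, sha256sum).
verify_image() {
    img=$1; dev=$2
    size=$(wc -c < "$img")
    got=$(head -c "$size" "$dev" | sha256sum | awk '{print $1}')
    want=$(sha256sum "$img" | awk '{print $1}')
    if [ "$got" = "$want" ]; then echo match; else echo MISMATCH; fi
}
# Example (names from the dd command above):
#   verify_image omnios-r151014.usb-dd /dev/sdb
```

A mismatch here points at the stick or the write, not at the OmniOS installer.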
the write went through successfully, and at the end of it, executed "sync" to be doubly sure that the operation succeeded. when i tried to boot the system using that pendrive, the bootup went okay (hardware clock failure and lack of kvm support messages), but, at the point where the install scripts start, the whole thing just failed on me and put me into some kind of maintenance mode where it asked me to enter a username (which i gave as "root"), and a password where I just pressed the "enter" key. it went a bit further and asked me to read up /lib/svc/share/README and that turned out to be not of much help to me due to inexperience with illumos based systems. is there any way to work around this hurdle? what may i have done or be doing wrong? i have only one machine, and moving back and forth between the two systems is quite painful. :( best regards, ~mayuresh From danmcd at omniti.com Fri Sep 25 20:08:54 2015 From: danmcd at omniti.com (Dan McDonald) Date: Fri, 25 Sep 2015 16:08:54 -0400 Subject: [OmniOS-discuss] omnios : r151014 : usb install media : installation failure ... In-Reply-To: References: Message-ID: <65994635-8969-4E97-8246-39C04F455062@omniti.com> This is going to sound odd, but can you configure a floppy drive on your VM? I know this is an issue with VMWARE, and it may be similar in your situation. Also, the INSTALLER doesn't like blkdev devices, so you may need to install on a virtual IDE drive and then clone it to a blkdev vioblk device. Dan Sent from my iPhone (typos, autocorrect, and all) > On Sep 25, 2015, at 3:15 PM, Mayuresh Kathe wrote: > > hello, > > i got the latest omnios-r151014.usb-dd from the omniti website. > > checksum is fine, so download probably not corrupt. > > wrote the image to a brand-new usb pendrive using; "dd if=omnios-r151014.usb-dd of=/dev/sdb bs=1024 conv=sync" under ubuntu 15.04 (amd64) desktop. > the write went through successfully, and at the end of it, executed "sync" to be doubly sure that the operation succeeded. 
> > when i tried the boot the system using that pendrive, the bootup went okay (hardware clock failure and lack of kvm support messages), but, at the point where the install scripts start, the whole thing just failed on me and put me into some kind of maintenance mode where it asked me to enter a username (which i gave as "root"), and a password where I just pressed the "enter" key. > it went a bit further and asked me to read up /lib/svc/share/README and that turned out to be not of much help to me due to inexperience with illumos based systems. > > is there any way to work around this hurdle? > what may i have done or be doing wrong? > > i have only one machine, and moving back and forth between the two systems is quite painful. :( > > best regards, > > ~mayuresh > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From mayuresh at kathe.in Sat Sep 26 03:11:44 2015 From: mayuresh at kathe.in (Mayuresh Kathe) Date: Sat, 26 Sep 2015 08:41:44 +0530 Subject: [OmniOS-discuss] omnios : r151014 : usb install media : installation failure ... In-Reply-To: <65994635-8969-4E97-8246-39C04F455062@omniti.com> References: <65994635-8969-4E97-8246-39C04F455062@omniti.com> Message-ID: <20150926031144.GA1971@18-5201ix> actually, i am trying to install to bare metal, it's an hp-aio, one on which the 151012 'lts' edition installs and works just fine. i tried the vm (virtualbox) approach, but my system is so horribly underpowered that it was painful to use it in that manner, hence trying to install omnios native. thanks, ~mayuresh On Fri, Sep 25, 2015 at 04:08:54PM -0400, Dan McDonald wrote: > This is going to sound odd, but can you configure a floppy drive on your VM? I know this is an issue with VMWARE, and it may be similar in your situation. 
> > Also, the INSTALLER doesn't like blkdev devices, so you may need to install on a virtual IDE drive and then clone it to a blkdev vioblk device. > > Dan > > Sent from my iPhone (typos, autocorrect, and all) > > > On Sep 25, 2015, at 3:15 PM, Mayuresh Kathe wrote: > > > > hello, > > > > i got the latest omnios-r151014.usb-dd from the omniti website. > > > > checksum is fine, so download probably not corrupt. > > > > wrote the image to a brand-new usb pendrive using; "dd if=omnios-r151014.usb-dd of=/dev/sdb bs=1024 conv=sync" under ubuntu 15.04 (amd64) desktop. > > the write went through successfully, and at the end of it, executed "sync" to be doubly sure that the operation succeeded. > > > > when i tried the boot the system using that pendrive, the bootup went okay (hardware clock failure and lack of kvm support messages), but, at the point where the install scripts start, the whole thing just failed on me and put me into some kind of maintenance mode where it asked me to enter a username (which i gave as "root"), and a password where I just pressed the "enter" key. > > it went a bit further and asked me to read up /lib/svc/share/README and that turned out to be not of much help to me due to inexperience with illumos based systems. > > > > is there any way to work around this hurdle? > > what may i have done or be doing wrong? > > > > i have only one machine, and moving back and forth between the two systems is quite painful. 
:( > > > > best regards, > > > > ~mayuresh > > > > _______________________________________________ > > OmniOS-discuss mailing list > > OmniOS-discuss at lists.omniti.com > > http://lists.omniti.com/mailman/listinfo/omnios-discuss From richard at netbsd.org Sat Sep 26 15:33:44 2015 From: richard at netbsd.org (Richard PALO) Date: Sat, 26 Sep 2015 17:33:44 +0200 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> Message-ID: <5606BAD8.8090101@netbsd.org> Le 08/09/15 06:32, Richard PALO a ?crit : > Thought I would try snoop with port 22. > > From omnios, in one window I issued: >> pfexec snoop -rv -d e1000g0 port 22 |& tee snoop.out > > From another I connected to the OI machine and did nothing further (as it hangs in that direction too): >> ssh xx.xx.xxx.xx > > In the attached snoop.output, I edited snoop.out to put in a comment after the initial connection > (search for "pause after connection") > before the traffic seemingly when things go sour... I notice a Window changed to 1024?? > > At the moment I'm running with the gate @ 2ed96329a073f74bd33f766ab982be14f3205bc9 is it possible that the following has something to do with it (it is in about the right timeframe)? > commit 1f183ba0b0be3e10202501aa3740753df6512804 > Author: Lauri Tirkkonen > AuthorDate: Wed Apr 15 16:30:46 2015 +0300 > Commit: Robert Mustacchi > CommitDate: Thu Jul 30 08:33:51 2015 -0700 > > 5850 tcp timestamping behavior changed mid-connection > If so, would it be safe to revert for a test build to try? 
-- Richard PALO From richard at netbsd.org Sun Sep 27 06:24:48 2015 From: richard at netbsd.org (Richard PALO) Date: Sun, 27 Sep 2015 08:24:48 +0200 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <5606BAD8.8090101@netbsd.org> References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> Message-ID: <56078BB0.7010503@netbsd.org> Le 26/09/15 17:33, Richard PALO a ?crit : > Le 08/09/15 06:32, Richard PALO a ?crit : >> Thought I would try snoop with port 22. >> >> From omnios, in one window I issued: >>> pfexec snoop -rv -d e1000g0 port 22 |& tee snoop.out >> >> From another I connected to the OI machine and did nothing further (as it hangs in that direction too): >>> ssh xx.xx.xxx.xx >> >> In the attached snoop.output, I edited snoop.out to put in a comment after the initial connection >> (search for "pause after connection") >> before the traffic seemingly when things go sour... I notice a Window changed to 1024?? >> >> At the moment I'm running with the gate @ 2ed96329a073f74bd33f766ab982be14f3205bc9 > > > is it possible that the following has something to do with it (it is in about the right timeframe)? >> commit 1f183ba0b0be3e10202501aa3740753df6512804 >> Author: Lauri Tirkkonen >> AuthorDate: Wed Apr 15 16:30:46 2015 +0300 >> Commit: Robert Mustacchi >> CommitDate: Thu Jul 30 08:33:51 2015 -0700 >> >> 5850 tcp timestamping behavior changed mid-connection >> > > If so, would it be safe to revert for a test build to try? > Stroke of luck, tried a recent build with this reverted and have been able to work over an hour without problems on a couple of sessions in parallel doing things that used to hang after a few moments. I'll file an issue, this should probably be reverted until things are worked out. 
-- Richard PALO From lotheac at iki.fi Mon Sep 28 12:08:53 2015 From: lotheac at iki.fi (Lauri Tirkkonen) Date: Mon, 28 Sep 2015 15:08:53 +0300 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <55F14D2C.3010403@netbsd.org> References: <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5EE7303C-7920-4087-9B0F-5FB15E9315C7@omniti.com> <55F14D2C.3010403@netbsd.org> Message-ID: <20150928120853.GA17072@gutsman.lotheac.fi> On Thu, Sep 10 2015 11:28:12 +0200, Richard PALO wrote: > Le 08/09/15 14:12, Dan McDonald a ?crit : > > > >>On Sep 8, 2015, at 12:32 AM, Richard PALO wrote: > >> > >>before the traffic seemingly when things go sour... I notice a Window changed to 1024?? > > > >Which side is advertising the window change again? And which side is > >running -gate from 2ed96329a073f74bd33f766ab982be14f3205bc9 ? > > > >This thread has been paged out, so to speak, for long enough. Can > >you give me the context of which machine is running what to explain > >the context of the snoop file? > > > >Thanks, > >Dan > > > Just for completeness, same histoire from the OI side, snoop and ssh > > here, 192.168.1.2 is smicro (oi_151a9) > >e1000g0 192.168.1.1 255.255.255.255 00:12:ef:21:9c:f8 > >e1000g0 192.168.1.2 255.255.255.255 SPLA 00:30:48:f4:33:f0 > and 192.168.1.1 is an Orange Business Services SDSL router. Are these captures both from the same connection? If so, there is obviously a middle box modifying the traffic. On *both* ends, it looks like the other end is sending an empty ACK requesting the window change to 1024 (packet 41 in snoop.output, with dst 192.168.0.6, and packet 41 in snoop-OI.output, with dst 192.168.1.2). Both of these TCP segments are missing the required timestamp options. 
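The "missing timestamp options" observation above can be verified by hand against a capture: a TCP timestamp option is kind 8, length 10 (TSval and TSecr, 4 bytes each). As an illustrative sketch only (Python, not tooling from this thread), a minimal walker over a TCP header's options bytes:

```python
def has_tcp_timestamp(options: bytes) -> bool:
    """Walk a TCP options field and report whether a timestamp
    option (kind 8, length 10) is present."""
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:            # End of Option List
            break
        if kind == 1:            # No-Operation, single byte
            i += 1
            continue
        if i + 1 >= len(options):
            break                # truncated option
        length = options[i + 1]
        if length < 2:
            break                # malformed length
        if kind == 8:            # Timestamps
            return True
        i += length
    return False

# Example option fields (hypothetical, for illustration):
# two NOPs followed by a timestamp option...
with_ts = bytes([1, 1, 8, 10]) + (12345).to_bytes(4, "big") + (0).to_bytes(4, "big")
# ...versus an MSS option (kind 2) plus padding, with no timestamps
without_ts = bytes([2, 4, 0x05, 0xB4, 1, 1])
```

Applied to the segments in the snoop output, the injected window-change ACKs would come back False while every legitimate segment on the negotiated connection comes back True.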
With the fix for 5850, illumos should never send a segment without timestamps on a connection which has negotiated timestamps (this one has, since they are present on previous segments). In addition, as part of 5850, we follow the RFC recommendation to drop any arriving segments *without* timestamps on a timestamp-negotiated connection [0]. This is likely the reason why your use case worked before; the older behavior was to stop generating timestamps altogether on a connection where any received segment omits them, but that's the wrong thing to do. There is a new dtrace probe 'tcp:::droppedtimestamp' which should fire whenever a segment is dropped by this behavior. You could use that to verify my speculation, e.g. # dtrace -n 'tcp:::droppedtimestamp { trace(probefunc); }' should generate output when the connection hangs (and more information about the connection is available in (tcp_t*)arg0). Based on the data you have made available I believe this is an issue with a middlebox injecting erroneous traffic into the TCP stream for both peers. This injected segment is ignored by the box with the fix for 5850 applied, but it causes the older illumos box to stop generating timestamps, after which all segments it sends are rejected by the newer box. Oh, and in the future, please post snoop capture files (from snoop -o); it's much easier to find the desired information in those :) [0]: http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/inet/tcp/tcp_input.c#2878 -- Lauri Tirkkonen | lotheac @ IRCnet From danmcd at omniti.com Mon Sep 28 12:13:04 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 28 Sep 2015 08:13:04 -0400 Subject: [OmniOS-discuss] omnios : r151014 : usb install media : installation failure ... 
In-Reply-To: <20150926031144.GA1971@18-5201ix> References: <65994635-8969-4E97-8246-39C04F455062@omniti.com> <20150926031144.GA1971@18-5201ix> Message-ID: <8F84D59D-11CF-4F06-A4E6-0A7419ECDD7B@omniti.com> > On Sep 25, 2015, at 11:11 PM, Mayuresh Kathe wrote: > > actually, i am trying to install to bare metal, it's an hp-aio, one on > which the 151012 'lts' edition installs and works just fine. 012 isn't LTS. It's a stable release that's going to be EOSLed in a matter of weeks. So '012 installs, but '014 doesn't? That it prompted you for single-user mode suggests something really odd. "svcs -xv" would be useful in that case (though it will be verbose). I'm not sure why the '014 installer didn't work, but if '012 DID install, you can follow the upgrade directions here: http://omnios.omniti.com/wiki.php/Upgrade_to_r151014 after a successful '012 installation. Dan From danmcd at omniti.com Mon Sep 28 12:15:33 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 28 Sep 2015 08:15:33 -0400 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <5606BAD8.8090101@netbsd.org> References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> Message-ID: <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> If 5850 is indeed the problem, you need to report this to the illumos developers list, including a deterministic way of reproducing it. Funny though, the fix was brought forth because of specific middlebox behavior. It is POSSIBLE your middlebox is behaving differently than the bug-filer's middlebox. Please keep that in mind. Dan > On Sep 26, 2015, at 11:33 AM, Richard PALO wrote: > > Le 08/09/15 06:32, Richard PALO a ?crit : >> Thought I would try snoop with port 22. 
>> >> From omnios, in one window I issued: >>> pfexec snoop -rv -d e1000g0 port 22 |& tee snoop.out >> >> From another I connected to the OI machine and did nothing further (as it hangs in that direction too): >>> ssh xx.xx.xxx.xx >> >> In the attached snoop.output, I edited snoop.out to put in a comment after the initial connection >> (search for "pause after connection") >> before the traffic seemingly when things go sour... I notice a Window changed to 1024?? >> >> At the moment I'm running with the gate @ 2ed96329a073f74bd33f766ab982be14f3205bc9 > > > is it possible that the following has something to do with it (it is in about the right timeframe)? >> commit 1f183ba0b0be3e10202501aa3740753df6512804 >> Author: Lauri Tirkkonen >> AuthorDate: Wed Apr 15 16:30:46 2015 +0300 >> Commit: Robert Mustacchi >> CommitDate: Thu Jul 30 08:33:51 2015 -0700 >> >> 5850 tcp timestamping behavior changed mid-connection >> > > If so, would it be safe to revert for a test build to try? > -- > Richard PALO > From danmcd at omniti.com Mon Sep 28 12:21:46 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 28 Sep 2015 08:21:46 -0400 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> Message-ID: <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> > On Sep 28, 2015, at 8:15 AM, Dan McDonald wrote: > > If 5850 is indeed the problem, you need to report this to the illumos developers list, including a deterministic way of reproducing it. I see you filed bug 6264, which is a good first step. Please make sure you summarize the how-to-reproduce in it. 
I also wonder if you patch your oi_151a9 box with 5850, AND keep 5850 on your OmniOS machine, whether or not this problem ALSO goes away. After all, this fix specifically targets machines that drop timestamps... Dan From mayuresh at kathe.in Mon Sep 28 12:33:09 2015 From: mayuresh at kathe.in (Mayuresh Kathe) Date: Mon, 28 Sep 2015 18:03:09 +0530 Subject: [OmniOS-discuss] omnios : r151014 : usb install media : installation failure ... In-Reply-To: <8F84D59D-11CF-4F06-A4E6-0A7419ECDD7B@omniti.com> References: <65994635-8969-4E97-8246-39C04F455062@omniti.com> <20150926031144.GA1971@18-5201ix> <8F84D59D-11CF-4F06-A4E6-0A7419ECDD7B@omniti.com> Message-ID: <12156fa590b989b7eac5e4656098d497@kathe.in> On 2015-09-28 05:43 PM, Dan McDonald wrote: >> On Sep 25, 2015, at 11:11 PM, Mayuresh Kathe >> wrote: >> >> actually, i am trying to install to bare metal, it's an hp-aio, one on >> which the 151012 'lts' edition installs and works just fine. > > 012 isn't LTS. It's a stable release that's going to be EOSLed in a > matter of weeks. oh, okay. > So '012 installs, but '014 doesn't? That it prompted you for > single-user mode suggests something really odd. "svcs -xv" would be > useful in that case (though it will be verbose). thanks for this tip. btw, i think there's something terribly wrong with my hardware, the hardware clock is malfunctioning and even ubuntu is issuing panics at times. i guess it's time to go for a new machine. > I'm not sure why the '014 installer didn't work, but if '012 DID > install, you can follow the upgrade directions here: > > http://omnios.omniti.com/wiki.php/Upgrade_to_r151014 > > after a successful '012 installation. ok, sure, will do. 
thanks, ~mayuresh From richard at netbsd.org Mon Sep 28 12:51:03 2015 From: richard at netbsd.org (Richard PALO) Date: Mon, 28 Sep 2015 14:51:03 +0200 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> References: <55D81839.50301@NetBSD.org> <62284A5B-83D7-4A0C-9F3E-CF7BBDA16BD5@omniti.com> <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> Message-ID: <560937B7.8060702@netbsd.org> Le 28/09/15 14:21, Dan McDonald a ?crit : > >> On Sep 28, 2015, at 8:15 AM, Dan McDonald wrote: >> >> If 5850 is indeed the problem, you need to report this to the illumos developers list, including a deterministic way of reproducing it. > > I see you filed bug 6264, which is a good first step. Please make sure you summarize the how-to-reproduce in it. > > I also wonder if you patch your oi_151a9 box with 5850, AND keep 5850 on your OmniOS machine, whether or not this problem ALSO goes away. After all, this fix specifically targets machines that drop timestamps... > > Dan > > > Unfortunately this being an OI machine in production, I'd need the patched kit available in http://pkg.openindiana.org/dev/ which is currently at illumos 52e13e00ba with the last update being 2014-12-10 16:08:49 I'm not sure anybody deals with non-hipster OI anymore, unfortunately. -- Richard PALO From danmcd at omniti.com Mon Sep 28 12:55:57 2015 From: danmcd at omniti.com (Dan McDonald) Date: Mon, 28 Sep 2015 08:55:57 -0400 Subject: [OmniOS-discuss] omnios : r151014 : usb install media : installation failure ... 
In-Reply-To: <12156fa590b989b7eac5e4656098d497@kathe.in> References: <65994635-8969-4E97-8246-39C04F455062@omniti.com> <20150926031144.GA1971@18-5201ix> <8F84D59D-11CF-4F06-A4E6-0A7419ECDD7B@omniti.com> <12156fa590b989b7eac5e4656098d497@kathe.in> Message-ID: <9D0195F9-A449-4B9D-AB5F-FBFF3FFEE994@omniti.com> > On Sep 28, 2015, at 8:33 AM, Mayuresh Kathe wrote: > > btw, i think there's something terribly wrong with my hardware, the hardware clock is malfunctioning and even ubuntu is issuing panics at times. I'm hoping this is why '014 flaked out on you. There really shouldn't be any difference in the '014 and '012 installation experiences. Thanks, Dan From lotheac at iki.fi Mon Sep 28 13:46:39 2015 From: lotheac at iki.fi (Lauri Tirkkonen) Date: Mon, 28 Sep 2015 16:46:39 +0300 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> References: <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> Message-ID: <20150928134639.GC17072@gutsman.lotheac.fi> On Mon, Sep 28 2015 08:21:46 -0400, Dan McDonald wrote: > > > On Sep 28, 2015, at 8:15 AM, Dan McDonald wrote: > > > > If 5850 is indeed the problem, you need to report this to the > > illumos developers list, including a deterministic way of > > reproducing it. > > I see you filed bug 6264, which is a good first step. Please make > sure you summarize the how-to-reproduce in it. > > I also wonder if you patch your oi_151a9 box with 5850, AND keep 5850 > on your OmniOS machine, whether or not this problem ALSO goes away. > After all, this fix specifically targets machines that drop > timestamps... 
If my analysis is correct (see the mail I sent to this thread previously), then applying 5850 to the oi_151a9 box will cause the issue to disappear -- both peers will then ignore the injected window change segment because it has no timestamps. Of course, it's possible that the middlebox won't like being ignored and might cause other failures (it could still inject RSTs, for example, since those are not required to have timestamps). -- Lauri Tirkkonen | lotheac @ IRCnet From richard at netbsd.org Mon Sep 28 14:20:03 2015 From: richard at netbsd.org (Richard PALO) Date: Mon, 28 Sep 2015 16:20:03 +0200 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <20150928134639.GC17072@gutsman.lotheac.fi> References: <55DB2BAC.20603@netbsd.org> <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> <20150928134639.GC17072@gutsman.lotheac.fi> Message-ID: Le 28/09/15 15:46, Lauri Tirkkonen a ?crit : > On Mon, Sep 28 2015 08:21:46 -0400, Dan McDonald wrote: >> >>> On Sep 28, 2015, at 8:15 AM, Dan McDonald wrote: >>> >>> If 5850 is indeed the problem, you need to report this to the >>> illumos developers list, including a deterministic way of >>> reproducing it. >> >> I see you filed bug 6264, which is a good first step. Please make >> sure you summarize the how-to-reproduce in it. >> >> I also wonder if you patch your oi_151a9 box with 5850, AND keep 5850 >> on your OmniOS machine, whether or not this problem ALSO goes away. >> After all, this fix specifically targets machines that drop >> timestamps... > > If my analysis is correct (see the mail I sent to this thread > previously), then applying 5850 to the oi_151a9 box will cause the issue > to disappear -- both peers will then ignore the injected window change > segment because it has no timestamps. 
Of course, it's possible that the > middlebox won't like being ignored and might cause other failures (it > could still inject RSTs, for example, since those are not required to > have timestamps). > If I experienced the issue, chances are great anybody else with oi_151a9 has it as well in France, as the OI machine is connected to an Orange (previously known as France Télécom) Business Services SDSL router and the OmniOS box to a Freebox (Free Télécom). Any hint on how to determine which box is doing it (or both)? If not, if I can ssh into someplace that is able to check... perhaps even an ftp session? cheers -- Richard PALO From lotheac at iki.fi Mon Sep 28 14:42:00 2015 From: lotheac at iki.fi (Lauri Tirkkonen) Date: Mon, 28 Sep 2015 17:42:00 +0300 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: References: <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> <20150928134639.GC17072@gutsman.lotheac.fi> Message-ID: <20150928144200.GD17072@gutsman.lotheac.fi> On Mon, Sep 28 2015 16:20:03 +0200, Richard PALO wrote: > Le 28/09/15 15:46, Lauri Tirkkonen a écrit : > > On Mon, Sep 28 2015 08:21:46 -0400, Dan McDonald wrote: > >> > >>> On Sep 28, 2015, at 8:15 AM, Dan McDonald wrote: > >>> > >>> If 5850 is indeed the problem, you need to report this to the > >>> illumos developers list, including a deterministic way of > >>> reproducing it. > >> > >> I see you filed bug 6264, which is a good first step. Please make > >> sure you summarize the how-to-reproduce in it. > >> > >> I also wonder if you patch your oi_151a9 box with 5850, AND keep 5850 > >> on your OmniOS machine, whether or not this problem ALSO goes away. > >> After all, this fix specifically targets machines that drop > >> timestamps... > > > > If my analysis is correct (see the mail I sent to this thread > > previously), then applying 5850 to the oi_151a9 box will cause the issue > > to disappear -- both peers will then ignore the injected window change > > segment because it has no timestamps. 
> > > > If my analysis is correct (see the mail I sent to this thread > > previously), then applying 5850 to the oi_151a9 box will cause the issue > > to disappear -- both peers will then ignore the injected window change > > segment because it has no timestamps. Of course, it's possible that the > > middlebox won't like being ignored and might cause other failures (it > > could still inject RSTs, for example, since those are not required to > > have timestamps). > > > > If I experienced the issue, chances a great anybody else with oi_151a9 have it > as well in France as the OI machine is connected to an Orange (previously known > as France T?l?com) Business Services SDSL router and the Omnios box to a Freebox (Free T?l?com). > > Any hint on how to determine which box is doing it (or both)? > If not, if I can ssh into someplace that is able to check... > perhaps even an ftp session? Well, seeing how we only know that neither peer is actually sending the non-timestamped segment, it could be any box along the path - I'd start with examining your routers. It's hard to say what exactly will trigger a repro without knowing what the middlebox is trying to accomplish by injecting this segment, but it might be beneficial to try to get a repro with a simple echo server or something like that, and then try to isolate the issue by trying different connection paths. You could also talk to your providers. It's unfortunate that this manifests in a regression like this, but it's a product of the previous incorrect behavior, an obnoxious middlebox doing unsanitary things, and us (illumos-gate) trying to do the right thing by following the RFC. 
-- Lauri Tirkkonen | lotheac @ IRCnet From lotheac at iki.fi Mon Sep 28 15:40:27 2015 From: lotheac at iki.fi (Lauri Tirkkonen) Date: Mon, 28 Sep 2015 18:40:27 +0300 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: References: <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> <20150928134639.GC17072@gutsman.lotheac.fi> Message-ID: <20150928154027.GD5062@gutsman.lotheac.fi> On Mon, Sep 28 2015 16:20:03 +0200, Richard PALO wrote: > Le 28/09/15 15:46, Lauri Tirkkonen a ?crit : > > On Mon, Sep 28 2015 08:21:46 -0400, Dan McDonald wrote: > >> > >>> On Sep 28, 2015, at 8:15 AM, Dan McDonald wrote: > >>> > >>> If 5850 is indeed the problem, you need to report this to the > >>> illumos developers list, including a deterministic way of > >>> reproducing it. > >> > >> I see you filed bug 6264, which is a good first step. Please make > >> sure you summarize the how-to-reproduce in it. > >> > >> I also wonder if you patch your oi_151a9 box with 5850, AND keep 5850 > >> on your OmniOS machine, whether or not this problem ALSO goes away. > >> After all, this fix specifically targets machines that drop > >> timestamps... > > > > If my analysis is correct (see the mail I sent to this thread > > previously), then applying 5850 to the oi_151a9 box will cause the issue > > to disappear -- both peers will then ignore the injected window change > > segment because it has no timestamps. Of course, it's possible that the > > middlebox won't like being ignored and might cause other failures (it > > could still inject RSTs, for example, since those are not required to > > have timestamps). 
> > > > If I experienced the issue, chances a great anybody else with oi_151a9 have it > as well in France as the OI machine is connected to an Orange (previously known > as France T?l?com) Business Services SDSL router and the Omnios box to a Freebox (Free T?l?com). It just occurred to me that if timestamp options don't get negotiated at all on the connection, both peers should be fine with this injection and continue to function. So as a workaround you could try disabling timestamps on the oi_151a9 box. I see the following ndd options: % ndd -get tcp ?|grep tstamp tcp_tstamp_always (read and write) tcp_tstamp_if_wscale (read and write) You could try setting those to 0 and see if that works around the hang (untested, so beware). This obviously turns off TCP timestamps, but how useful are they on the pre-5850 box anyway if your middlebox has been defeating their use all this time? :) -- Lauri Tirkkonen | lotheac @ IRCnet From richard at netbsd.org Tue Sep 29 10:19:09 2015 From: richard at netbsd.org (Richard PALO) Date: Tue, 29 Sep 2015 12:19:09 +0200 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: <20150928154027.GD5062@gutsman.lotheac.fi> References: <55DB4084.6090005@netbsd.org> <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> <20150928134639.GC17072@gutsman.lotheac.fi> <20150928154027.GD5062@gutsman.lotheac.fi> Message-ID: Le 28/09/15 17:40, Lauri Tirkkonen a ?crit : > It just occurred to me that if timestamp options don't get negotiated at > all on the connection, both peers should be fine with this injection and > continue to function. So as a workaround you could try disabling > timestamps on the oi_151a9 box. 
I see the following ndd options: > > % ndd -get tcp ?|grep tstamp > tcp_tstamp_always (read and write) > tcp_tstamp_if_wscale (read and write) > > You could try setting those to 0 and see if that works around the hang > (untested, so beware). This obviously turns off TCP timestamps, but how > useful are they on the pre-5850 box anyway if your middlebox has been > defeating their use all this time? :) > On OI (actually on both): > richard at smicro:~$ ndd -get tcp tcp_tstamp_always > 0 > richard at smicro:~$ ndd -get tcp tcp_tstamp_if_wscale > 1 so if I understand correctly, setting tcp_tstamp_if_wscale on OI will turn off timestamps avoiding the issue with 5850 on Omnios. I'll give it a try. Since I'm not having any issues with netbsd (6.1), which seemingly is still at rfc1323 > richard at omnis:/home/richard$ ssh netbsd.org /sbin/sysctl net.inet.tcp.rfc1323 > net.inet.tcp.rfc1323 = 1 I'd like to do some additional tests involving a non-illumos host as well just to make sure. terveisin, risto3 From lotheac at iki.fi Tue Sep 29 10:35:07 2015 From: lotheac at iki.fi (Lauri Tirkkonen) Date: Tue, 29 Sep 2015 13:35:07 +0300 Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9 In-Reply-To: References: <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> <20150928134639.GC17072@gutsman.lotheac.fi> <20150928154027.GD5062@gutsman.lotheac.fi> Message-ID: <20150929103507.GE17072@gutsman.lotheac.fi> On Tue, Sep 29 2015 12:19:09 +0200, Richard PALO wrote: > Since I'm not having any issues with netbsd (6.1), which seemingly is still > at rfc1323 > >richard at omnis:/home/richard$ ssh netbsd.org /sbin/sysctl net.inet.tcp.rfc1323 > >net.inet.tcp.rfc1323 = 1 > > I'd like to do some additional tests involving a non-illumos host as well > just to make sure. To be clear, it's not implementing RFC 1323 (and not even *not* implementing 7323) that causes the issue. 
1323 actually didn't specify what to do with non-timestamped segments on a timestamp-negotiated connection, and illumos pre-5850 did something very surprising which I doubt anybody else did (stop generating timestamps on all future segments), so I don't think you will be able to reproduce the hang with other operating systems. But you'll likely be able to see the unexpected non-timestamped segments in connections between other OSes as well (though I still can't be sure, because I don't know what middlebox is injecting them or why :)

--
Lauri Tirkkonen | lotheac @ IRCnet

From bhildebrandt at exegy.com Tue Sep 29 18:22:50 2015
From: bhildebrandt at exegy.com (Hildebrandt, Bill)
Date: Tue, 29 Sep 2015 18:22:50 +0000
Subject: [OmniOS-discuss] possible bug
Message-ID:

Over the past few weeks, I have had 3 separate occurrences where my OmniOS/Napp-it NAS stops responding to NFS and CIFS. The first time was during the week of the ZFS corruption bug announcement. The system and its replicated storage were both scrubbed and zdb analyzed, and nothing looked wrong. I rebuilt the NAS from scratch with updated patches and imported the pool. Same thing happened three days later, and now today, eight days later. Each time, a reboot is performed to bring it back. All services appear to be running. The odd thing is that an "ls -l" hangs on every mountpoint. Has anyone heard of this issue? Since I am not OmniOS savvy, is there anything I can capture while in that state that could help debug it?

Thanks,
Bill

________________________________
This e-mail and any documents accompanying it may contain legally privileged and/or confidential information belonging to Exegy, Inc. Such information may be protected from disclosure by law. The information is intended for use by only the addressee. If you are not the intended recipient, you are hereby notified that any disclosure or use of the information is strictly prohibited.
If you have received this e-mail in error, please immediately contact the sender by e-mail or phone regarding instructions for return or destruction and do not use or disclose the content to others.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From danmcd at omniti.com Tue Sep 29 18:46:42 2015
From: danmcd at omniti.com (Dan McDonald)
Date: Tue, 29 Sep 2015 14:46:42 -0400
Subject: [OmniOS-discuss] possible bug
In-Reply-To:
References:
Message-ID:

Which OmniOS are you running? Cat /etc/release and look at uname -v. Let's first make sure you're up to date.

Also, when it hangs, take a kernel dump - "reboot -d" - and share the system dump.

Thanks,
Dan

Sent from my iPhone (typos, autocorrect, and all)

> On Sep 29, 2015, at 2:22 PM, Hildebrandt, Bill wrote:
>
> Over the past few weeks, I have had 3 separate occurrences where my OmniOS/Napp-it NAS stops responding to NFS and CIFS. The first time was during the week of the ZFS corruption bug announcement. The system and its replicated storage were both scrubbed and zdb analyzed, and nothing looked wrong. I rebuilt the NAS from scratch with updated patches and imported the pool. Same thing happened three days later, and now today, eight days later. Each time, a reboot is performed to bring it back. All services appear to be running. The odd thing is that an "ls -l" hangs on every mountpoint. Has anyone heard of this issue? Since I am not OmniOS savvy, is there anything I can capture while in that state that could help debug it?
>
> Thanks,
> Bill
>
> This e-mail and any documents accompanying it may contain legally privileged and/or confidential information belonging to Exegy, Inc. Such information may be protected from disclosure by law. The information is intended for use by only the addressee. If you are not the intended recipient, you are hereby notified that any disclosure or use of the information is strictly prohibited.
If you have received this e-mail in error, please immediately contact the sender by e-mail or phone regarding instructions for return or destruction and do not use or disclose the content to others.
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bhildebrandt at exegy.com Tue Sep 29 18:49:09 2015
From: bhildebrandt at exegy.com (Hildebrandt, Bill)
Date: Tue, 29 Sep 2015 18:49:09 +0000
Subject: [OmniOS-discuss] possible bug
In-Reply-To:
References:
Message-ID:

root at moslexnas02b:/root# cat /etc/release
OmniOS v11 r151014
Copyright 2015 OmniTI Computer Consulting, Inc. All rights reserved.
Use is subject to license terms.
root at moslexnas02b:/root# uname -v
omnios-cffff65

Thanks . . . I'll try to take a kernel dump next time.

From: Dan McDonald [mailto:danmcd at omniti.com]
Sent: Tuesday, September 29, 2015 1:47 PM
To: Hildebrandt, Bill; Dan McDonald
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] possible bug

Which OmniOS are you running? Cat /etc/release and look at uname -v. Let's first make sure you're up to date.

Also, when it hangs, take a kernel dump - "reboot -d" - and share the system dump.

Thanks,
Dan

Sent from my iPhone (typos, autocorrect, and all)

On Sep 29, 2015, at 2:22 PM, Hildebrandt, Bill wrote:

Over the past few weeks, I have had 3 separate occurrences where my OmniOS/Napp-it NAS stops responding to NFS and CIFS. The first time was during the week of the ZFS corruption bug announcement. The system and its replicated storage were both scrubbed and zdb analyzed, and nothing looked wrong. I rebuilt the NAS from scratch with updated patches and imported the pool. Same thing happened three days later, and now today, eight days later. Each time, a reboot is performed to bring it back. All services appear to be running.
The odd thing is that an "ls -l" hangs on every mountpoint. Has anyone heard of this issue? Since I am not OmniOS savvy, is there anything I can capture while in that state that could help debug it?

Thanks,
Bill

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chip at innovates.com Tue Sep 29 18:58:40 2015
From: chip at innovates.com (Schweiss, Chip)
Date: Tue, 29 Sep 2015 13:58:40 -0500
Subject: [OmniOS-discuss] possible bug
In-Reply-To:
References:
Message-ID:

I've seen issues like this when you run out of NFS locks.
NFSv3 in Illumos is really slow at releasing locks. On all my NFS servers I do:

sharectl set -p lockd_listen_backlog=256 nfs
sharectl set -p lockd_servers=2048 nfs

Everywhere I can, I use NFSv4 instead of v3. It handles locks much better.

-Chip

On Tue, Sep 29, 2015 at 1:22 PM, Hildebrandt, Bill wrote:
> Over the past few weeks, I have had 3 separate occurrences where my
> OmniOS/Napp-it NAS stops responding to NFS and CIFS. The first time was
> during the week of the ZFS corruption bug announcement. The system and
> its replicated storage were both scrubbed and zdb analyzed, and nothing
> looked wrong. I rebuilt the NAS from scratch with updated patches and
> imported the pool. Same thing happened three days later, and now today,
> eight days later. Each time, a reboot is performed to bring it back. All
> services appear to be running. The odd thing is that an "ls -l" hangs on
> every mountpoint. Has anyone heard of this issue? Since I am not OmniOS
> savvy, is there anything I can capture while in that state that could help
> debug it?
>
> Thanks,
> Bill

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From danmcd at omniti.com Wed Sep 30 02:08:10 2015
From: danmcd at omniti.com (Dan McDonald)
Date: Tue, 29 Sep 2015 22:08:10 -0400
Subject: [OmniOS-discuss] New updates for r151014 and r151006
Message-ID: <119837A1-D878-4D70-B637-0AB46266634A@omniti.com>

NOTE for r151012 users --> If you are an r151012 user, please update to r151014 NOW!

I've pushed fresh illumos-omnios and kayak bits to r151014, and a subset of the illumos-omnios bits to r151006. Please update your installations now. This update, on either version, will require a reboot because it changes the kernel.

The release notes for r151014 have more:

http://omnios.omniti.com/wiki.php/ReleaseNotes/r151014

Thanks,
Dan

From richard at netbsd.org Wed Sep 30 07:56:47 2015
From: richard at netbsd.org (Richard PALO)
Date: Wed, 30 Sep 2015 09:56:47 +0200
Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9
In-Reply-To: <20150929103507.GE17072@gutsman.lotheac.fi>
References: <55E2C3E9.9000702@netbsd.org> <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> <20150928134639.GC17072@gutsman.lotheac.fi> <20150928154027.GD5062@gutsman.lotheac.fi> <20150929103507.GE17072@gutsman.lotheac.fi>
Message-ID: <560B95BF.4080404@netbsd.org>

Le 29/09/15 12:35, Lauri Tirkkonen a écrit :
> On Tue, Sep 29 2015 12:19:09 +0200, Richard PALO wrote:
>> Since I'm not having any issues with netbsd (6.1), which seemingly is still
>> at rfc1323
>>> richard at omnis:/home/richard$ ssh netbsd.org /sbin/sysctl net.inet.tcp.rfc1323
>>> net.inet.tcp.rfc1323 = 1
>>
>> I'd like to do some additional tests involving a non-illumos host as well
>> just to make sure.
>
> To be clear, it's not implementing RFC 1323 (and not even *not*
> implementing 7323) that causes the issue.
> 1323 actually didn't specify what to do with non-timestamped segments
> on a timestamp-negotiated connection, and illumos pre-5850 did something
> very surprising which I doubt anybody else did (stop generating timestamps
> on all future segments), so I don't think you will be able to reproduce
> the hang with other operating systems, but you'll likely be able to see
> the unexpected non-timestamped segments in connections between other OSes
> as well (but I still can't be sure because I don't know what middlebox is
> injecting them or why :)

In that case, wouldn't setting tcp_tstamp_always to '1' on OI be better (or would OI not honour that setting correctly)?

From lotheac at iki.fi Wed Sep 30 08:02:48 2015
From: lotheac at iki.fi (Lauri Tirkkonen)
Date: Wed, 30 Sep 2015 11:02:48 +0300
Subject: [OmniOS-discuss] strangeness ssh into omnios from oi_151a9
In-Reply-To: <560B95BF.4080404@netbsd.org>
References: <5606BAD8.8090101@netbsd.org> <33923013-0E59-4223-8EF4-A77A168E1C70@omniti.com> <8963D7A6-2339-4E6F-9559-9DBAAAAD23BF@omniti.com> <20150928134639.GC17072@gutsman.lotheac.fi> <20150928154027.GD5062@gutsman.lotheac.fi> <20150929103507.GE17072@gutsman.lotheac.fi> <560B95BF.4080404@netbsd.org>
Message-ID: <20150930080248.GA4668@gutsman.lotheac.fi>

On Wed, Sep 30 2015 09:56:47 +0200, Richard PALO wrote:
> > To be clear, it's not implementing RFC 1323 (and not even *not*
> > implementing 7323) that causes the issue.
> > 1323 actually didn't specify what to do with non-timestamped segments
> > on a timestamp-negotiated connection, and illumos pre-5850 did something
> > very surprising which I doubt anybody else did (stop generating timestamps
> > on all future segments), so I don't think you will be able to reproduce
> > the hang with other operating systems, but you'll likely be able to see
> > the unexpected non-timestamped segments in connections between other OSes
> > as well (but I still can't be sure because I don't know what middlebox is
> > injecting them or why :)
>
> In that case, wouldn't setting tcp_tstamp_always to '1' on OI be better
> (or would OI not honour that setting correctly)?

It wouldn't work. From what I can tell, those ndd settings only affect the SYN segments (i.e. timestamp negotiation); pre-5850 illumos will always stop timestamping mid-connection if it receives a non-timestamped segment.

--
Lauri Tirkkonen | lotheac @ IRCnet
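[Editor's illustration] The failure mode discussed in this thread can be sketched as a small toy model. This is plain Python, not illumos code, and every class and field name here is invented for illustration. One side mimics pre-5850 illumos, which permanently stopped generating timestamps after receiving a single non-timestamped segment; the other mimics an RFC 7323-style peer that drops non-timestamped segments once timestamps have been negotiated on the connection:

```python
# Toy model of the mid-connection timestamp interaction -- NOT illumos code.
# Assumed behaviors, taken from the thread: Pre5850Peer permanently disables
# timestamp generation after seeing one bare segment; StrictPeer drops any
# segment lacking a timestamp option on a timestamp-negotiated connection.

class Pre5850Peer:
    """Stops timestamping all future segments after one bare segment."""

    def __init__(self):
        self.timestamps_enabled = True

    def receive(self, segment):
        # The surprising pre-5850 reaction: one injected segment without a
        # timestamp option disables timestamps for the rest of the connection.
        if "ts" not in segment:
            self.timestamps_enabled = False

    def send(self, data):
        segment = {"data": data}
        if self.timestamps_enabled:
            segment["ts"] = 12345  # actual value irrelevant to the model
        return segment


class StrictPeer:
    """Drops (returns None for) segments lacking a timestamp option."""

    def receive(self, segment):
        return segment if "ts" in segment else None


old, strict = Pre5850Peer(), StrictPeer()

# Normal traffic: timestamped segments are accepted.
assert strict.receive(old.send("hello")) is not None

# A middlebox injects a segment with no timestamp option toward the old peer.
old.receive({"data": "injected"})

# From now on the old peer omits timestamps, the strict peer drops
# everything it sends, and the connection hangs.
assert strict.receive(old.send("world")) is None
```

This also shows why turning timestamps off entirely (the ndd workaround above) sidesteps the problem: if timestamps are never negotiated, the strict peer has no reason to drop bare segments in the first place.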