<div dir="ltr">OmniOS ships with pipeviewer (pv); piping a zfs send through pv with a buffer of several megabytes has close to the same effect as using mbuffer (note that pv's buffer-size flag is -B; -s only sets the expected total size for the progress display).</div>
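<div dir="ltr"><br>A minimal sketch of the pipeline (the dataset, snapshot and host names here are made up; adjust to your setup):<br><br>zfs send tank/data@today | pv -B 128m | ssh backuphost zfs receive -F backup/data<br><br>pv keeps the sending side streaming while ssh or the receiving pool briefly stalls, which is the main thing mbuffer buys you in this pipeline.</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Aug 11, 2014 at 2:06 AM, Hafiz Rafibeyli <span dir="ltr"><<a href="mailto:rafibeyli@gmail.com" target="_blank">rafibeyli@gmail.com</a>></span> wrote:<br>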
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Tobias, thank you for the great work; it fills the missing backup piece for ZFS on OmniOS.<br>
<br>
I think ssh will be slow for bigger datasets; as you mention, znapzend 0.11 supports the use of mbuffer.<br>
<br>
I could not find an mbuffer package for OmniOS; could you explain how to set up and use mbuffer on OmniOS, please?<br>
<br>
regards<br>
<br>
<br>
<br>
----- Original Message -----<br>
From: <a href="mailto:omnios-discuss-request@lists.omniti.com">omnios-discuss-request@lists.omniti.com</a><br>
To: <a href="mailto:omnios-discuss@lists.omniti.com">omnios-discuss@lists.omniti.com</a><br>
Sent: Tuesday, 29 July, 2014 10:29:42 PM<br>
Subject: OmniOS-discuss Digest, Vol 28, Issue 8<br>
<br>
<br>
<br>
Today's Topics:<br>
<br>
1. announcement znapzend a new zfs backup tool (Tobias Oetiker)<br>
2. Re: announcement znapzend a new zfs backup tool<br>
(Theo Schlossnagle)<br>
3. Re: announcement znapzend a new zfs backup tool (Saso Kiselkov)<br>
4. Re: Slow scrub performance (wuffers)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Tue, 29 Jul 2014 17:50:02 +0200 (CEST)<br>
From: Tobias Oetiker <<a href="mailto:tobi@oetiker.ch">tobi@oetiker.ch</a>><br>
To: <a href="mailto:omnios-discuss@lists.omniti.com">omnios-discuss@lists.omniti.com</a><br>
Subject: [OmniOS-discuss] announcement znapzend a new zfs backup tool<br>
Message-ID: <<a href="mailto:alpine.DEB.2.02.1407291748500.6752@froburg.oetiker.ch">alpine.DEB.2.02.1407291748500.6752@froburg.oetiker.ch</a>><br>
Content-Type: TEXT/PLAIN; charset=US-ASCII<br>
<br>
Just out:<br>
<br>
ZnapZend, a multilevel backup tool for ZFS<br>
<br>
It is on GitHub. Check out<br>
<br>
<a href="http://www.znapzend.org" target="_blank">http://www.znapzend.org</a><br>
<br>
cheers<br>
tobi<br>
<br>
--<br>
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland<br>
<a href="http://www.oetiker.ch" target="_blank">www.oetiker.ch</a> <a href="mailto:tobi@oetiker.ch">tobi@oetiker.ch</a> <a href="tel:%2B41%2062%20775%209902" value="+41627759902">+41 62 775 9902</a><br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Tue, 29 Jul 2014 11:54:07 -0400<br>
From: Theo Schlossnagle <<a href="mailto:jesus@omniti.com">jesus@omniti.com</a>><br>
To: "<a href="mailto:OmniOS-discuss@lists.omniti.com">OmniOS-discuss@lists.omniti.com</a>"<br>
<<a href="mailto:omnios-discuss@lists.omniti.com">omnios-discuss@lists.omniti.com</a>><br>
Subject: Re: [OmniOS-discuss] announcement znapzend a new zfs backup<br>
tool<br>
Message-ID:<br>
<<a href="mailto:CACLsAptC_wDb%2BStkw2-jZkgp7oQZ4OwEUWG_Nnrm_xkaoOkGRg@mail.gmail.com">CACLsAptC_wDb+Stkw2-jZkgp7oQZ4OwEUWG_Nnrm_xkaoOkGRg@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Awesome!<br>
<br>
<br>
On Tue, Jul 29, 2014 at 11:50 AM, Tobias Oetiker <<a href="mailto:tobi@oetiker.ch">tobi@oetiker.ch</a>> wrote:<br>
<br>
> Just out:<br>
><br>
> ZnapZend, a multilevel backup tool for ZFS<br>
><br>
> It is on GitHub. Check out<br>
><br>
> <a href="http://www.znapzend.org" target="_blank">http://www.znapzend.org</a><br>
><br>
> cheers<br>
> tobi<br>
><br>
> --<br>
> Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland<br>
> <a href="http://www.oetiker.ch" target="_blank">www.oetiker.ch</a> <a href="mailto:tobi@oetiker.ch">tobi@oetiker.ch</a> <a href="tel:%2B41%2062%20775%209902" value="+41627759902">+41 62 775 9902</a><br>
><br>
<br>
<br>
<br>
--<br>
<br>
Theo Schlossnagle<br>
<br>
<a href="http://omniti.com/is/theo-schlossnagle" target="_blank">http://omniti.com/is/theo-schlossnagle</a><br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Tue, 29 Jul 2014 17:59:18 +0200<br>
From: Saso Kiselkov <<a href="mailto:skiselkov.ml@gmail.com">skiselkov.ml@gmail.com</a>><br>
To: <a href="mailto:omnios-discuss@lists.omniti.com">omnios-discuss@lists.omniti.com</a><br>
Subject: Re: [OmniOS-discuss] announcement znapzend a new zfs backup<br>
tool<br>
Message-ID: <<a href="mailto:53D7C4D6.5060308@gmail.com">53D7C4D6.5060308@gmail.com</a>><br>
Content-Type: text/plain; charset=ISO-8859-1<br>
<br>
On 7/29/14, 5:50 PM, Tobias Oetiker wrote:<br>
> Just out:<br>
><br>
> ZnapZend, a multilevel backup tool for ZFS<br>
><br>
> It is on GitHub. Check out<br>
><br>
> <a href="http://www.znapzend.org" target="_blank">http://www.znapzend.org</a><br>
<br>
Neat, especially that the backup config is stored in the dataset's own<br>
properties. Very cool.<br>
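For example, once a plan is set up you can read it straight off the dataset (a sketch; assuming the org.znapzend property prefix the docs describe, and a made-up dataset name):<br>
<br>
zfs get all tank/data | grep org.znapzend<br>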
<br>
--<br>
Saso<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 4<br>
Date: Tue, 29 Jul 2014 15:29:38 -0400<br>
From: wuffers <<a href="mailto:moo@wuffers.net">moo@wuffers.net</a>><br>
To: Richard Elling <<a href="mailto:richard.elling@richardelling.com">richard.elling@richardelling.com</a>><br>
Cc: omnios-discuss <<a href="mailto:omnios-discuss@lists.omniti.com">omnios-discuss@lists.omniti.com</a>><br>
Subject: Re: [OmniOS-discuss] Slow scrub performance<br>
Message-ID:<br>
<<a href="mailto:CA%2BtR_KwX_1HN4tVa%2B-ZOFJk2mN7RE-nFh31sMcTNo7TJJjfyLg@mail.gmail.com">CA+tR_KwX_1HN4tVa+-ZOFJk2mN7RE-nFh31sMcTNo7TJJjfyLg@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
I'll try to answer both responses in one message.<br>
<br>
Short answer, yes. Keep in mind that<br>
><br>
> 1. a scrub runs in the background (so as not to impact production I/O,<br>
> this was not always the case and caused serious issues in the past with a<br>
> pool being unresponsive due to a scrub)<br>
><br>
> 2. a scrub essentially walks the zpool examining every transaction in<br>
> order (as does a resilver)<br>
><br>
> So the time to complete a scrub depends on how many write transactions<br>
> since the pool was created (which is generally related to the amount of<br>
> data but not always). You are limited by the random I/O capability of the<br>
> disks involved. With VMs I assume this is a file server, so the I/O size<br>
> will also affect performance.<br>
<br>
<br>
I haven't noticed any slowdowns in our virtual environments, so I guess it's<br>
a good thing the scrub runs at such low priority that it doesn't impact workloads.<br>
<br>
Run the numbers… you are scanning 24.2TB at about 5.5MB/sec ≈ 4,613,734<br>
> seconds or 54 days. And that assumes the same rate for all of the scan. The<br>
> rate will change as other I/O competes for resources.<br>
><br>
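Spelling that arithmetic out in binary units: 24.2 TB ≈ 24.2 × 1024 × 1024 MB ≈ 25,375,539 MB, and 25,375,539 MB / 5.5 MB/s ≈ 4,613,734 s ≈ 53.4 days.<br>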
<br>
The number was fluctuating when I started the scrub, and I had seen it go<br>
as high as 35MB/s at one point. I am certain that our Hyper-V workload has<br>
increased since the last scrub, so this does make sense.<br>
<br>
<br>
> Looks like you have a fair bit of activity going on (almost 1MB/sec of<br>
> writes per spindle).<br>
><br>
<br>
As Richard correctly states below, this is the aggregate since boot (uptime<br>
~56 days). I have another output from iostat as per his instructions below.<br>
<br>
<br>
> Since this is storage for VMs, I assume this is the storage server for<br>
> separate compute servers? Have you tuned the block size for the file share<br>
> you are using? That can make a huge difference in performance.<br>
><br>
<br>
Both the Hyper-V and VMware LUNs are created with 64K block sizes. From<br>
what I've read of other performance and tuning articles, that is the<br>
optimal block size (I did some limited testing when first configuring the<br>
SAN, but results were somewhat inconclusive). Hyper-V hosts our testing<br>
environment (we integrate with TFS, a MS product, so we have no choice<br>
here) and probably make up the bulk of the workload (~300+ test VMs with<br>
various OSes). VMware hosts our production servers (Exchange, file servers,<br>
SQL, AD, etc - ~50+ VMs).<br>
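For reference, the block size of a ZFS-backed LUN is fixed when the zvol behind it is created; a sketch with made-up names and sizes:<br>
<br>
zfs create -V 2T -o volblocksize=64k tank/hyperv-lun1<br>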
<br>
I also noted that you only have a single LOG device. Best Practice is to<br>
> mirror log devices so you do not lose any data in flight if hit by a power<br>
> outage (of course, if this server has more UPS runtime than all the clients<br>
> that may not matter).<br>
><br>
<br>
Actually, I do have a mirrored ZIL device; it's just disabled at this time<br>
(my ZIL devices are ZeusRAMs). At some point, I was troubleshooting some<br>
kernel panics (turned out to be a faulty SSD on the rpool), and hadn't<br>
re-enabled it yet. Thanks for the reminder (and yes, we do have a UPS as<br>
well).<br>
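For the record, re-attaching is a single zpool attach against the existing log device (assuming the two ZeusRAM device names from the iostat output below):<br>
<br>
zpool attach tank c2t5000A72A3007811Dd0 c12t5000A72B300780FFd0<br>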
<br>
And oops... re-attaching the ZIL as a mirror has now triggered a resilver,<br>
suspending or canceling the scrub. I will monitor this and restart the scrub<br>
if it doesn't resume by itself.<br>
<br>
pool: tank<br>
state: ONLINE<br>
status: One or more devices is currently being resilvered. The pool will<br>
continue to function, possibly in a degraded state.<br>
action: Wait for the resilver to complete.<br>
scan: resilver in progress since Tue Jul 29 14:48:48 2014<br>
3.89T scanned out of 24.5T at 3.06G/s, 1h55m to go<br>
0 resilvered, 15.84% done<br>
<br>
At least it's going very fast. EDIT: Now about 67% done as I finish writing<br>
this, speed dropping to ~1.3G/s.<br>
<br>
maybe, maybe not<br>
>><br>
>> this is slower than most, surely slower than desired<br>
>><br>
><br>
Unfortunately, reattaching the mirror to my log device triggered a resilver.<br>
I'm not sure if this is the desired behavior, but yes, 5.5MB/s seems quite slow.<br>
Hopefully the scrub will resume where it left off after the resilver.<br>
<br>
<br>
> The estimate is often very wrong, especially for busy systems.<br>
>> If this is an older ZFS implementation, this pool is likely getting<br>
>> pounded by the<br>
>> ZFS write throttle. There are some tunings that can be applied, but the<br>
>> old write<br>
>> throttle is not a stable control system, so it will always be a little<br>
>> bit unpredictable.<br>
>><br>
><br>
The system is on r151008 (my BE states that I upgraded back in February,<br>
putting me at r151008j or so), with all the pools upgraded for the new<br>
enhancements and the new L2ARC compression feature activated.<br>
Reading the release notes, the ZFS write throttle enhancements have been in<br>
since r151008e, so I should be good there.<br>
<br>
<br>
> # iostat -xnze<br>
>><br>
>><br>
>> Unfortunately, this is the performance since boot and is not suitable for<br>
>> performance<br>
>> analysis unless the system has been rebooted in the past 10 minutes or<br>
>> so. You'll need<br>
>> to post the second batch from "iostat -zxCn 60 2"<br>
>><br>
><br>
Ah yes, that was my mistake. Output from the second sample (taken before<br>
re-attaching the log mirror):<br>
<br>
# iostat -zxCn 60 2<br>
<br>
extended device statistics<br>
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device<br>
255.7 1077.7 6294.0 41335.1 0.0 1.9 0.0 1.4 0 153 c1<br>
5.3 23.9 118.5 811.9 0.0 0.0 0.0 1.1 0 3 c1t5000C50055F8723Bd0<br>
5.9 14.5 110.0 834.3 0.0 0.0 0.0 1.3 0 2 c1t5000C50055E66B63d0<br>
5.6 16.6 123.8 822.7 0.0 0.0 0.0 1.3 0 2 c1t5000C50055F87E73d0<br>
4.7 27.8 118.6 796.6 0.0 0.0 0.0 1.3 0 3 c1t5000C50055F8BFA3d0<br>
5.6 14.5 139.7 833.8 0.0 0.0 0.0 1.6 0 3 c1t5000C50055F9E123d0<br>
4.4 27.1 112.3 825.2 0.0 0.0 0.0 0.8 0 2 c1t5000C50055F9F0B3d0<br>
5.0 20.2 121.7 803.4 0.0 0.0 0.0 1.2 0 3 c1t5000C50055F9D3B3d0<br>
5.4 26.4 137.0 857.3 0.0 0.0 0.0 1.4 0 4 c1t5000C50055E4FDE7d0<br>
4.7 12.3 123.7 832.7 0.0 0.0 0.0 2.0 0 3 c1t5000C50055F9A607d0<br>
5.0 23.9 125.9 830.9 0.0 0.0 0.0 1.3 0 3 c1t5000C50055F8CDA7d0<br>
4.5 31.4 112.2 814.6 0.0 0.0 0.0 1.1 0 3 c1t5000C50055E65877d0<br>
5.2 24.4 130.6 872.5 0.0 0.0 0.0 1.2 0 3 c1t5000C50055F9E7D7d0<br>
4.1 21.8 103.7 797.2 0.0 0.0 0.0 1.1 0 3 c1t5000C50055FA0AF7d0<br>
5.5 24.8 129.8 802.8 0.0 0.0 0.0 1.5 0 4 c1t5000C50055F9FE87d0<br>
5.7 17.7 137.2 797.6 0.0 0.0 0.0 1.4 0 3 c1t5000C50055F9F91Bd0<br>
6.0 30.6 139.1 852.0 0.0 0.1 0.0 1.5 0 4 c1t5000C50055F9FEABd0<br>
6.1 34.1 137.8 929.2 0.0 0.1 0.0 1.9 0 6 c1t5000C50055F9F63Bd0<br>
4.1 15.9 101.8 791.4 0.0 0.0 0.0 1.6 0 3 c1t5000C50055F9F3EBd0<br>
6.4 23.2 155.2 878.6 0.0 0.0 0.0 1.1 0 3 c1t5000C50055F9F80Bd0<br>
4.5 23.5 106.2 825.4 0.0 0.0 0.0 1.1 0 3 c1t5000C50055F9FB8Bd0<br>
4.0 23.2 101.1 788.9 0.0 0.0 0.0 1.3 0 3 c1t5000C50055F9F92Bd0<br>
4.4 11.3 125.7 782.3 0.0 0.0 0.0 1.9 0 3 c1t5000C50055F8905Fd0<br>
4.6 20.4 129.2 823.0 0.0 0.0 0.0 1.5 0 3 c1t5000C50055F8D48Fd0<br>
5.1 19.7 142.9 887.2 0.0 0.0 0.0 1.7 0 3 c1t5000C50055F9F89Fd0<br>
5.6 11.4 129.1 776.0 0.0 0.0 0.0 1.9 0 3 c1t5000C50055F9EF2Fd0<br>
5.6 23.7 137.4 811.9 0.0 0.0 0.0 1.2 0 3 c1t5000C50055F8C3ABd0<br>
6.8 13.9 132.4 834.3 0.0 0.0 0.0 1.8 0 3 c1t5000C50055E66053d0<br>
5.2 26.7 126.9 857.3 0.0 0.0 0.0 1.2 0 3 c1t5000C50055E66503d0<br>
4.2 27.1 104.6 825.2 0.0 0.0 0.0 1.0 0 3 c1t5000C50055F9D3E3d0<br>
5.2 30.7 140.9 852.0 0.0 0.1 0.0 1.5 0 4 c1t5000C50055F84FB7d0<br>
5.4 16.1 124.3 791.4 0.0 0.0 0.0 1.7 0 3 c1t5000C50055F8E017d0<br>
3.8 31.4 89.7 814.6 0.0 0.0 0.0 1.1 0 4 c1t5000C50055E579F7d0<br>
4.6 27.5 116.0 796.6 0.0 0.1 0.0 1.6 0 4 c1t5000C50055E65807d0<br>
4.0 21.5 99.7 797.2 0.0 0.0 0.0 1.1 0 3 c1t5000C50055F84A97d0<br>
4.7 20.2 116.3 803.4 0.0 0.0 0.0 1.4 0 3 c1t5000C50055F87D97d0<br>
5.0 11.5 121.5 776.0 0.0 0.0 0.0 2.0 0 3 c1t5000C50055F9F637d0<br>
4.9 11.3 112.4 782.3 0.0 0.0 0.0 2.3 0 3 c1t5000C50055E65ABBd0<br>
5.3 11.8 142.5 832.7 0.0 0.0 0.0 2.4 0 3 c1t5000C50055F8BF9Bd0<br>
5.0 20.3 121.4 823.0 0.0 0.0 0.0 1.7 0 3 c1t5000C50055F8A22Bd0<br>
6.6 24.3 170.3 872.5 0.0 0.0 0.0 1.3 0 3 c1t5000C50055F9379Bd0<br>
5.8 16.3 121.7 822.7 0.0 0.0 0.0 1.3 0 2 c1t5000C50055E57A5Fd0<br>
5.3 17.7 146.5 797.6 0.0 0.0 0.0 1.4 0 3 c1t5000C50055F8CCAFd0<br>
5.7 34.1 141.5 929.2 0.0 0.1 0.0 1.7 0 5 c1t5000C50055F8B80Fd0<br>
5.5 23.8 125.7 830.9 0.0 0.0 0.0 1.2 0 3 c1t5000C50055F9FA1Fd0<br>
5.0 23.2 127.9 878.6 0.0 0.0 0.0 1.1 0 3 c1t5000C50055E65F0Fd0<br>
5.2 14.0 163.7 833.8 0.0 0.0 0.0 2.0 0 3 c1t5000C50055F8BE3Fd0<br>
4.6 18.9 122.8 887.2 0.0 0.0 0.0 1.6 0 3 c1t5000C50055F8B21Fd0<br>
5.5 23.6 137.4 825.4 0.0 0.0 0.0 1.5 0 3 c1t5000C50055F8A46Fd0<br>
4.9 24.6 116.7 802.8 0.0 0.0 0.0 1.4 0 4 c1t5000C50055F856CFd0<br>
4.9 23.4 120.8 788.9 0.0 0.0 0.0 1.4 0 3 c1t5000C50055E6606Fd0<br>
234.9 170.1 4079.9 11127.8 0.0 0.2 0.0 0.5 0 9 c2<br>
119.0 28.9 2083.8 670.8 0.0 0.0 0.0 0.3 0 3 c2t500117310015D579d0<br>
115.9 27.4 1996.1 634.2 0.0 0.0 0.0 0.3 0 3 c2t50011731001631FDd0<br>
0.0 113.8 0.0 9822.8 0.0 0.1 0.0 1.0 0 2 c2t5000A72A3007811Dd0<br>
0.1 18.5 0.0 64.8 0.0 0.0 0.0 0.0 0 0 c4<br>
0.1 9.2 0.0 32.4 0.0 0.0 0.0 0.0 0 0 c4t0d0<br>
0.0 9.2 0.0 32.4 0.0 0.0 0.0 0.0 0 0 c4t1d0<br>
229.8 58.1 3987.4 1308.0 0.0 0.1 0.0 0.3 0 6 c12<br>
114.2 27.7 1994.8 626.0 0.0 0.0 0.0 0.3 0 3 c12t500117310015D59Ed0<br>
115.5 30.4 1992.6 682.0 0.0 0.0 0.0 0.3 0 3 c12t500117310015D54Ed0<br>
0.1 17.1 0.0 64.8 0.0 0.0 0.6 0.1 0 0 rpool<br>
720.3 1298.4 14361.2 53770.8 18.7 2.3 9.3 1.1 6 68 tank<br>
<br>
Is 153% busy correct on c1? It seems to me that the disks are quite "busy"<br>
but are handling the workload just fine (wait at 6% and asvc_t at 1.1ms).<br>
<br>
Interestingly, this is the same output now that the resilver is running:<br>
<br>
extended device statistics<br>
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device<br>
2876.9 1041.1 25400.7 38189.1 0.0 37.9 0.0 9.7 0 2011 c1<br>
60.8 26.1 540.1 845.2 0.0 0.7 0.0 8.3 0 39 c1t5000C50055F8723Bd0<br>
58.4 14.2 511.6 740.7 0.0 0.7 0.0 10.1 0 39 c1t5000C50055E66B63d0<br>
60.2 16.3 529.3 756.1 0.0 0.8 0.0 10.1 0 41 c1t5000C50055F87E73d0<br>
57.5 24.9 527.6 841.7 0.0 0.7 0.0 9.0 0 40 c1t5000C50055F8BFA3d0<br>
57.9 14.5 543.5 765.1 0.0 0.7 0.0 9.8 0 38 c1t5000C50055F9E123d0<br>
57.9 23.9 516.6 806.9 0.0 0.8 0.0 9.3 0 40 c1t5000C50055F9F0B3d0<br>
59.8 24.6 554.1 857.5 0.0 0.8 0.0 9.6 0 42 c1t5000C50055F9D3B3d0<br>
56.5 21.0 480.4 715.7 0.0 0.7 0.0 8.9 0 37 c1t5000C50055E4FDE7d0<br>
54.8 9.7 473.5 737.9 0.0 0.7 0.0 11.2 0 39 c1t5000C50055F9A607d0<br>
55.8 20.2 457.3 708.7 0.0 0.7 0.0 9.9 0 40 c1t5000C50055F8CDA7d0<br>
57.8 28.6 487.0 796.1 0.0 0.9 0.0 9.9 0 45 c1t5000C50055E65877d0<br>
60.8 27.1 572.6 823.7 0.0 0.8 0.0 8.8 0 41 c1t5000C50055F9E7D7d0<br>
55.8 21.1 478.2 766.6 0.0 0.7 0.0 9.7 0 40 c1t5000C50055FA0AF7d0<br>
57.0 22.8 528.3 724.5 0.0 0.8 0.0 9.6 0 41 c1t5000C50055F9FE87d0<br>
56.2 10.8 465.2 715.6 0.0 0.7 0.0 10.4 0 38 c1t5000C50055F9F91Bd0<br>
59.2 29.4 524.6 740.9 0.0 0.8 0.0 8.9 0 41 c1t5000C50055F9FEABd0<br>
57.3 30.7 496.7 788.3 0.0 0.8 0.0 9.1 0 42 c1t5000C50055F9F63Bd0<br>
55.5 16.3 461.9 652.9 0.0 0.7 0.0 10.1 0 39 c1t5000C50055F9F3EBd0<br>
57.2 22.1 495.1 701.1 0.0 0.8 0.0 9.8 0 41 c1t5000C50055F9F80Bd0<br>
59.5 30.2 543.1 741.8 0.0 0.9 0.0 9.6 0 45 c1t5000C50055F9FB8Bd0<br>
56.5 25.1 515.4 786.9 0.0 0.7 0.0 8.6 0 38 c1t5000C50055F9F92Bd0<br>
61.8 12.5 540.6 790.9 0.0 0.8 0.0 10.3 0 41 c1t5000C50055F8905Fd0<br>
57.0 19.8 521.0 774.3 0.0 0.7 0.0 9.6 0 39 c1t5000C50055F8D48Fd0<br>
56.3 16.3 517.7 724.7 0.0 0.7 0.0 9.9 0 38 c1t5000C50055F9F89Fd0<br>
57.0 13.4 504.5 790.5 0.0 0.8 0.0 10.7 0 40 c1t5000C50055F9EF2Fd0<br>
55.0 26.1 477.6 845.2 0.0 0.7 0.0 8.3 0 36 c1t5000C50055F8C3ABd0<br>
57.8 14.1 518.7 740.7 0.0 0.8 0.0 10.8 0 41 c1t5000C50055E66053d0<br>
55.9 20.8 490.2 715.7 0.0 0.7 0.0 9.0 0 37 c1t5000C50055E66503d0<br>
57.0 24.1 509.7 806.9 0.0 0.8 0.0 10.0 0 41 c1t5000C50055F9D3E3d0<br>
59.1 29.2 504.1 740.9 0.0 0.8 0.0 9.3 0 44 c1t5000C50055F84FB7d0<br>
54.4 16.3 449.5 652.9 0.0 0.7 0.0 10.4 0 39 c1t5000C50055F8E017d0<br>
57.8 28.4 503.3 796.1 0.0 0.9 0.0 10.1 0 45 c1t5000C50055E579F7d0<br>
58.2 24.9 502.0 841.7 0.0 0.8 0.0 9.2 0 40 c1t5000C50055E65807d0<br>
58.2 20.7 513.4 766.6 0.0 0.8 0.0 9.8 0 41 c1t5000C50055F84A97d0<br>
56.5 24.9 508.0 857.5 0.0 0.8 0.0 9.2 0 40 c1t5000C50055F87D97d0<br>
53.4 13.5 449.9 790.5 0.0 0.7 0.0 10.7 0 38 c1t5000C50055F9F637d0<br>
57.0 11.8 503.0 790.9 0.0 0.7 0.0 10.6 0 39 c1t5000C50055E65ABBd0<br>
55.4 9.6 461.1 737.9 0.0 0.8 0.0 11.6 0 40 c1t5000C50055F8BF9Bd0<br>
55.7 19.7 484.6 774.3 0.0 0.7 0.0 9.9 0 40 c1t5000C50055F8A22Bd0<br>
57.6 27.1 518.2 823.7 0.0 0.8 0.0 8.9 0 40 c1t5000C50055F9379Bd0<br>
59.6 17.0 528.0 756.1 0.0 0.8 0.0 10.1 0 41 c1t5000C50055E57A5Fd0<br>
61.2 10.8 530.0 715.6 0.0 0.8 0.0 10.7 0 40 c1t5000C50055F8CCAFd0<br>
58.0 30.8 493.3 788.3 0.0 0.8 0.0 9.4 0 43 c1t5000C50055F8B80Fd0<br>
56.5 19.9 490.7 708.7 0.0 0.8 0.0 10.0 0 40 c1t5000C50055F9FA1Fd0<br>
56.1 22.4 484.2 701.1 0.0 0.7 0.0 9.5 0 39 c1t5000C50055E65F0Fd0<br>
59.2 14.6 560.9 765.1 0.0 0.7 0.0 9.8 0 39 c1t5000C50055F8BE3Fd0<br>
57.9 16.2 546.0 724.7 0.0 0.7 0.0 10.1 0 40 c1t5000C50055F8B21Fd0<br>
59.5 30.0 553.2 741.8 0.0 0.9 0.0 9.8 0 45 c1t5000C50055F8A46Fd0<br>
57.4 22.5 504.0 724.5 0.0 0.8 0.0 9.6 0 41 c1t5000C50055F856CFd0<br>
58.4 24.6 531.4 786.9 0.0 0.7 0.0 8.4 0 38 c1t5000C50055E6606Fd0<br>
511.0 161.4 7572.1 11260.1 0.0 0.3 0.0 0.4 0 14 c2<br>
252.3 20.1 3776.3 458.9 0.0 0.1 0.0 0.2 0 6 c2t500117310015D579d0<br>
258.8 18.0 3795.7 350.0 0.0 0.1 0.0 0.2 0 6 c2t50011731001631FDd0<br>
0.0 123.4 0.0 10451.1 0.0 0.1 0.0 1.0 0 3 c2t5000A72A3007811Dd0<br>
0.2 16.1 1.9 56.7 0.0 0.0 0.0 0.0 0 0 c4<br>
0.2 8.1 1.6 28.3 0.0 0.0 0.0 0.0 0 0 c4t0d0<br>
0.0 8.1 0.3 28.3 0.0 0.0 0.0 0.0 0 0 c4t1d0<br>
495.6 163.6 7168.9 11290.3 0.0 0.2 0.0 0.4 0 14 c12<br>
0.0 123.4 0.0 10451.1 0.0 0.1 0.0 1.0 0 3 c12t5000A72B300780FFd0<br>
248.2 18.1 3645.8 323.0 0.0 0.1 0.0 0.2 0 5 c12t500117310015D59Ed0<br>
247.4 22.1 3523.1 516.2 0.0 0.1 0.0 0.2 0 6 c12t500117310015D54Ed0<br>
0.2 14.8 1.9 56.7 0.0 0.0 0.6 0.1 0 0 rpool<br>
3883.5 1357.7 40141.6 60739.5 22.8 38.6 4.4 7.4 54 100 tank<br>
<br>
It is very busy, with a lot of wait % and higher asvc_t (2011% busy on c1?!).<br>
I'm assuming resilvers are a lot more aggressive than scrubs.<br>
<br>
There are many variables here, the biggest of which is the current<br>
>> non-scrub load.<br>
>><br>
><br>
I might have lost 2 weeks of scrub time, depending on whether the scrub<br>
will resume where it left off. I'll update when I can.<br>
<br>
------------------------------<br>
<br>
End of OmniOS-discuss Digest, Vol 28, Issue 8<br>
*********************************************<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>
<p>Theo Schlossnagle</p>
<p><a href="http://omniti.com/is/theo-schlossnagle" target="_blank">http://omniti.com/is/theo-schlossnagle</a></p>
</div>