<div dir="ltr"><br>So it looks like a redistribution issue. Initially there were two vdevs with 24 disks (disks 0-23) for close to a year. After that we added 24 more disks and created additional vdevs. The initial vdevs have filled up, so write speed has declined. Now, how can I find which files are present on a given vdev or disk? That way I can remove them and copy them back to redistribute the data. Is there any other way to solve this?<br>
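As far as I know there is no supported ZFS command that lists the files living on a particular vdev (`zdb -ddddd <dataset> <object#>` can dump a file's block pointers, whose DVAs begin with the vdev index, but that is a debugging tool, not a stable interface). The usual workaround is exactly what you describe: rewrite the data so the allocator places new blocks on the emptier vdevs. A minimal sketch, under the assumption that the paths are illustrative and the dataset is quiescent while it runs:

```shell
#!/bin/sh
# Hedged sketch: rewrite every file under a directory so ZFS allocates
# fresh blocks for it, which the allocator places preferentially on the
# emptier vdevs. This does NOT handle hard links, ACLs, sparse files,
# or files open for writing -- test on scratch data first.
rebalance_dir() {
  dir=$1
  find "$dir" -type f -print | while IFS= read -r f; do
    case $f in *.rebal.*) continue ;; esac   # skip our own temp copies
    tmp="$f.rebal.$$"
    # copy preserving mode/timestamps, then rename back over the original
    cp -p "$f" "$tmp" && mv "$tmp" "$f"
  done
}
```

Run it per directory during a quiet period, e.g. `rebalance_dir /test/projects`. A `zfs send | zfs receive` of each dataset to a new name achieves the same end, with snapshots protecting the data while it copies.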
<br>Total capacity of pool - 98 TB<br>Used - 44 TB<br>Free - 54 TB<br><br>root@host:# zpool iostat -v<br><pre>
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
test         54.0T  62.7T     52  1.12K  2.16M  5.78M
  raidz1     11.2T  2.41T     13     30   176K   146K
    c2t0d0       -      -      5     18  42.1K  39.0K
    c2t1d0       -      -      5     18  42.2K  39.0K
    c2t2d0       -      -      5     18  42.5K  39.0K
    c2t3d0       -      -      5     18  42.9K  39.0K
    c2t4d0       -      -      5     18  42.6K  39.0K
  raidz1     13.3T   308G     13    100   213K   521K
    c2t5d0       -      -      5     94  50.8K   135K
    c2t6d0       -      -      5     94  51.0K   135K
    c2t7d0       -      -      5     94  50.8K   135K
    c2t8d0       -      -      5     94  51.1K   135K
    c2t9d0       -      -      5     94  51.1K   135K
  raidz1     13.4T  19.1T      9    455   743K  2.31M
    c2t12d0      -      -      3    137  69.6K   235K
    c2t13d0      -      -      3    129  69.4K   227K
    c2t14d0      -      -      3    139  69.6K   235K
    c2t15d0      -      -      3    131  69.6K   227K
    c2t16d0      -      -      3    141  69.6K   235K
    c2t17d0      -      -      3    132  69.5K   227K
    c2t18d0      -      -      3    142  69.6K   235K
    c2t19d0      -      -      3    133  69.6K   227K
    c2t20d0      -      -      3    143  69.6K   235K
    c2t21d0      -      -      3    133  69.5K   227K
    c2t22d0      -      -      3    143  69.6K   235K
    c2t23d0      -      -      3    133  69.5K   227K
  raidz1     2.44T  16.6T      5    103   327K   485K
    c2t24d0      -      -      2     48  50.8K  87.4K
    c2t25d0      -      -      2     49  50.7K  87.4K
    c2t26d0      -      -      2     49  50.8K  87.3K
    c2t27d0      -      -      2     49  50.8K  87.3K
    c2t28d0      -      -      2     49  50.8K  87.3K
    c2t29d0      -      -      2     49  50.8K  87.3K
    c2t30d0      -      -      2     49  50.8K  87.3K
  raidz1     8.18T  10.8T      5    295   374K  1.54M
    c2t31d0      -      -      2    131  58.2K   279K
    c2t32d0      -      -      2    131  58.1K   279K
    c2t33d0      -      -      2    131  58.2K   279K
    c2t34d0      -      -      2    132  58.2K   279K
    c2t35d0      -      -      2    132  58.1K   279K
    c2t36d0      -      -      2    133  58.3K   279K
    c2t37d0      -      -      2    133  58.2K   279K
  raidz1     5.42T  13.6T      5    163   383K   823K
    c2t38d0      -      -      2     61  59.4K   146K
    c2t39d0      -      -      2     61  59.3K   146K
    c2t40d0      -      -      2     61  59.4K   146K
    c2t41d0      -      -      2     61  59.4K   146K
    c2t42d0      -      -      2     61  59.3K   146K
    c2t43d0      -      -      2     62  59.2K   146K
    c2t44d0      -      -      2     62  59.3K   146K
</pre></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Feb 12, 2013 at 4:20 AM, Denis Cheong <span dir="ltr"><<a href="mailto:denis@denisandyuki.net" target="_blank">denis@denisandyuki.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>You haven't enabled dedup on any ZFS volumes, have you? That will drop performance by 30x to 300x, especially on an array that size, unless you have an insane amount of memory.<br>
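Whether dedup is (or ever was) enabled is quick to confirm. A small sketch, assuming the pool name `test` from the iostat output; `zfs get -H` emits tab-separated name/property/value/source lines, and the helper just filters them:

```shell
#!/bin/sh
# Print the datasets whose dedup property is anything other than "off".
# Feed it the output of:  zfs get -H -r dedup test
datasets_with_dedup() {
  awk -F '\t' '$3 != "off" { print $1 }'
}

# On a live system (pool name "test" is an assumption):
#   zfs get -H -r dedup test | datasets_with_dedup
#   zpool status -D test    # shows DDT statistics if dedup was ever used
```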
<br></div><br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra">
<br><br><div class="gmail_quote">On Tue, Feb 12, 2013 at 3:52 AM, Ram Chander <span dir="ltr"><<a href="mailto:ramquick@gmail.com" target="_blank">ramquick@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">cp of a 1 GB file to the pool takes 20 minutes, whereas on a normal disk it takes 35 seconds.<br></div><div><div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Feb 11, 2013 at 10:06 PM, Eric Sproul <span dir="ltr"><<a href="mailto:esproul@omniti.com" target="_blank">esproul@omniti.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">You still haven't said how you know that performance has dropped 30x.<br>
Where are the numbers?<br>
<div><div><br>
On Mon, Feb 11, 2013 at 11:33 AM, Ram Chander <<a href="mailto:ramquick@gmail.com" target="_blank">ramquick@gmail.com</a>> wrote:<br>
> I am not sure what happened on Jan 5. I haven't run a scrub or replaced<br>
> devices for more than 3 months. The underlying hardware is a Dell MD1200,<br>
> which has 48 disks.<br>
> How do I recover from this? I tried rebooting, but the issue comes back.<br>
><br>
><br>
> On Mon, Feb 11, 2013 at 8:14 PM, Eric Sproul <<a href="mailto:esproul@omniti.com" target="_blank">esproul@omniti.com</a>> wrote:<br>
>><br>
>> On Mon, Feb 11, 2013 at 7:48 AM, Ram Chander <<a href="mailto:ramquick@gmail.com" target="_blank">ramquick@gmail.com</a>> wrote:<br>
>> > Hi,<br>
>> ><br>
>> > My OI box is experiencing slow ZFS writes (around 30 times slower).<br>
>> > iostat reports the error below, though the pool is healthy. This has<br>
>> > been happening for the past 4 days, though no change was made to the<br>
>> > system. Are the hard disks faulty? Please help.<br>
>><br>
>> How have you measured this 30x drop in performance? You haven't<br>
>> provided any data.<br>
>><br>
>><br>
>> > c4t0d0 Soft Errors: 0 Hard Errors: 5 Transport Errors: 0<br>
>> > Vendor: iDRAC Product: Virtual CD Revision: 0323 Serial No:<br>
>> > Size: 0.00GB <0 bytes><br>
>> > Media Error: 0 Device Not Ready: 5 No Device: 0 Recoverable: 0<br>
>> > Illegal Request: 1 Predictive Failure Analysis: 0<br>
>><br>
>> I wouldn't worry about errors here. This is a virtual device provided<br>
>> by your server's lights-out management system.<br>
>><br>
>><br>
>> > root@host:~# fmadm faulty<br>
>> > --------------- ------------------------------------ --------------<br>
>> > ---------<br>
>> > TIME EVENT-ID MSG-ID<br>
>> > SEVERITY<br>
>> > --------------- ------------------------------------ --------------<br>
>> > ---------<br>
>> > Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a ZFS-8000-HC<br>
>> > Major<br>
>> ><br>
>> > Host : host<br>
>> > Platform : PowerEdge-R810<br>
>> > Product_sn :<br>
>> ><br>
>> > Fault class : fault.fs.zfs.io_failure_wait<br>
>> > Affects : zfs://pool=test<br>
>> > faulted but still in service<br>
>> > Problem in : zfs://pool=test<br>
>> > faulted but still in service<br>
>> ><br>
>> > Description : The ZFS pool has experienced currently unrecoverable I/O<br>
>> > failures. Refer to<br>
>> > <a href="http://illumos.org/msg/ZFS-8000-HC" target="_blank">http://illumos.org/msg/ZFS-8000-HC</a><br>
>> > for<br>
>> > more information.<br>
>> ><br>
>> > Response : No automated response will be taken.<br>
>> ><br>
>> > Impact : Read and write I/Os cannot be serviced.<br>
>> ><br>
>> > Action : Make sure the affected devices are connected, then run<br>
>> > 'zpool clear'.<br>
>><br>
>> What has happened since January 5? The pool appears fine now. Did<br>
>> you run a scrub? Replace devices? Reboot? It looks like ZFS<br>
>> encountered an underlying problem with the hardware.<br>
>><br>
>> Eric<br>
>> _______________________________________________<br>
>> OmniOS-discuss mailing list<br>
>> <a href="mailto:OmniOS-discuss@lists.omniti.com" target="_blank">OmniOS-discuss@lists.omniti.com</a><br>
>> <a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss" target="_blank">http://lists.omniti.com/mailman/listinfo/omnios-discuss</a><br>
><br>
><br>
</div></div></blockquote></div><br></div>
</div></div><br>
<br></blockquote></div><br></div>
</div></div><br>
<br></blockquote></div><br></div>