<div dir="ltr"><div><div><div>Thanks, Richard, for your help.<br><br></div>My problem is that my iSCSI network traffic is 2 MB/s, so every 5 seconds I only need to write about 10 MB of network traffic to disk, but on pool filervm2 I am writing much more than that, approximately 60 MB every 5 seconds. Each SSD in filervm2 is writing 15 MB every 5 seconds. When I check with smartmontools, each SSD is writing approximately 250 GB of data per day.<br><br></div>How can I reduce the amount of data written to each SSD? I have tried reducing the block size of the zvol, but it changed nothing.<br><br></div>Anthony<br><div><br><br><div><br><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-09-28 1:29 GMT+02:00 Richard Elling <span dir="ltr"><<a href="mailto:richard.elling@richardelling.com" target="_blank">richard.elling@richardelling.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Comment below...<br>
<div><div class="h5"><br>
> On Sep 27, 2017, at 12:57 AM, anthony omnios <<a href="mailto:icoomnios@gmail.com">icoomnios@gmail.com</a>> wrote:<br>
><br>
> Hi,<br>
><br>
> I have a problem: I use many iSCSI zvols (one per VM). Network traffic is 2 MB/s between the KVM host and the filer, but I write much more than that to disk. I use a pool with a separate mirrored ZIL (Intel S3710) and 8 Samsung 850 EVO 1 TB SSDs.<br>
><br>
> zpool status<br>
> pool: filervm2<br>
> state: ONLINE<br>
> scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 15:45:48 2017<br>
> config:<br>
><br>
> NAME STATE READ WRITE CKSUM<br>
> filervm2 ONLINE 0 0 0<br>
> mirror-0 ONLINE 0 0 0<br>
> c7t5002538D41657AAFd0 ONLINE 0 0 0<br>
> c7t5002538D41F85C0Dd0 ONLINE 0 0 0<br>
> mirror-2 ONLINE 0 0 0<br>
> c7t5002538D41CC7105d0 ONLINE 0 0 0<br>
> c7t5002538D41CC7127d0 ONLINE 0 0 0<br>
> mirror-3 ONLINE 0 0 0<br>
> c7t5002538D41CD7F7Ed0 ONLINE 0 0 0<br>
> c7t5002538D41CD83FDd0 ONLINE 0 0 0<br>
> mirror-4 ONLINE 0 0 0<br>
> c7t5002538D41CD7F7Ad0 ONLINE 0 0 0<br>
> c7t5002538D41CD7F7Dd0 ONLINE 0 0 0<br>
> logs<br>
> mirror-1 ONLINE 0 0 0<br>
> c4t2d0 ONLINE 0 0 0<br>
> c4t4d0 ONLINE 0 0 0<br>
><br>
> I used the correct ashift of 13 for the Samsung 850 EVO.<br>
> zdb|grep ashift :<br>
><br>
> ashift: 13<br>
> ashift: 13<br>
> ashift: 13<br>
> ashift: 13<br>
> ashift: 13<br>
><br>
> But I write a lot to the SSDs every 5 seconds (much more than the network traffic of 2 MB/s).<br>
><br>
> iostat -xn -d 1 :<br>
><br>
> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device<br>
> 11.0 3067.5 288.3 153457.4 6.8 0.5 2.2 0.2 5 14 filervm2<br>
<br>
</div></div>filervm2 is seeing 3067 writes per second. This is the interface to the upper layers.<br>
These writes are small.<br>
<span class=""><br>
> 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 rpool<br>
> 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0<br>
> 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0<br>
> 0.0 552.6 0.0 17284.0 0.0 0.1 0.0 0.2 0 8 c4t2d0<br>
> 0.0 552.6 0.0 17284.0 0.0 0.1 0.0 0.2 0 8 c4t4d0<br>
<br>
</span>The log devices are seeing 552 writes per second and since sync=standard that<br>
means that the upper layers are requesting syncs.<br>
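For scale, the two log devices alone are absorbing roughly 17 MB/s each, which already dwarfs the 2 MB/s of incoming iSCSI traffic. A back-of-the-envelope check (a sketch only, using the kw/s figures from the iostat output above):

```shell
# kw/s reported for each log device (c4t2d0, c4t4d0) in the iostat output
log_kw_per_s=17284
# convert to MB/s (1 MB = 1024 kB)
log_mb_per_s=$((log_kw_per_s / 1024))
# incoming iSCSI traffic is about 2 MB/s, so the sync-write stream to
# each log device alone is roughly 8x the network payload
echo "$log_mb_per_s"    # ~16 MB/s per log device
```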
<span class=""><br>
> 1.0 233.3 48.1 10051.6 0.0 0.0 0.0 0.1 0 3 c7t5002538D41657AAFd0<br>
> 5.0 250.3 144.2 13207.3 0.0 0.0 0.0 0.1 0 3 c7t5002538D41CC7127d0<br>
> 2.0 254.3 24.0 13207.3 0.0 0.0 0.0 0.1 0 4 c7t5002538D41CC7105d0<br>
> 3.0 235.3 72.1 10051.6 0.0 0.0 0.0 0.1 0 3 c7t5002538D41F85C0Dd0<br>
> 0.0 228.3 0.0 16178.7 0.0 0.0 0.0 0.2 0 4 c7t5002538D41CD83FDd0<br>
> 0.0 225.3 0.0 16210.7 0.0 0.0 0.0 0.2 0 4 c7t5002538D41CD7F7Ed0<br>
> 0.0 282.3 0.0 19991.1 0.0 0.0 0.0 0.2 0 5 c7t5002538D41CD7F7Dd0<br>
> 0.0 280.3 0.0 19871.0 0.0 0.0 0.0 0.2 0 5 c7t5002538D41CD7F7Ad0<br>
<br>
</span>The pool disks see 1989 writes per second total or 994 writes per second logically.<br>
<br>
It seems to me that reducing 3067 requested writes to 994 logical writes is the opposite<br>
of amplification. What do you expect?<br>
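Richard's totals can be reproduced from the per-disk w/s column in the iostat output above (a quick sketch: sum the eight pool disks, then halve because each logical write lands on both sides of a two-way mirror):

```shell
# w/s values for the eight pool data disks from the iostat output above
total=$(echo "233.3 250.3 254.3 235.3 228.3 225.3 282.3 280.3" |
        awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; print int(s) }')
# two-way mirrors: physical writes are double the logical writes
logical=$((total / 2))
echo "$total $logical"    # 1989 physical, 994 logical writes per second
```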
<span class="HOEnZb"><font color="#888888"> -- richard<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
><br>
> I use a zvol with a 64k block size; I tried 8k and the problem is the same.<br>
><br>
> zfs get all filervm2/hdd-110022a :<br>
><br>
> NAME PROPERTY VALUE SOURCE<br>
> filervm2/hdd-110022a type volume -<br>
> filervm2/hdd-110022a creation Tue May 16 10:24 2017 -<br>
> filervm2/hdd-110022a used 5.26G -<br>
> filervm2/hdd-110022a available 2.90T -<br>
> filervm2/hdd-110022a referenced 5.24G -<br>
> filervm2/hdd-110022a compressratio 3.99x -<br>
> filervm2/hdd-110022a reservation none default<br>
> filervm2/hdd-110022a volsize 25G local<br>
> filervm2/hdd-110022a volblocksize 64K -<br>
> filervm2/hdd-110022a checksum on default<br>
> filervm2/hdd-110022a compression lz4 local<br>
> filervm2/hdd-110022a readonly off default<br>
> filervm2/hdd-110022a copies 1 default<br>
> filervm2/hdd-110022a refreservation none default<br>
> filervm2/hdd-110022a primarycache all default<br>
> filervm2/hdd-110022a secondarycache all default<br>
> filervm2/hdd-110022a usedbysnapshots 15.4M -<br>
> filervm2/hdd-110022a usedbydataset 5.24G -<br>
> filervm2/hdd-110022a usedbychildren 0 -<br>
> filervm2/hdd-110022a usedbyrefreservation 0 -<br>
> filervm2/hdd-110022a logbias latency default<br>
> filervm2/hdd-110022a dedup off default<br>
> filervm2/hdd-110022a mlslabel none default<br>
> filervm2/hdd-110022a sync standard local<br>
> filervm2/hdd-110022a refcompressratio 3.99x -<br>
> filervm2/hdd-110022a written 216K -<br>
> filervm2/hdd-110022a logicalused 20.9G -<br>
> filervm2/hdd-110022a logicalreferenced 20.9G -<br>
> filervm2/hdd-110022a snapshot_limit none default<br>
> filervm2/hdd-110022a snapshot_count none default<br>
> filervm2/hdd-110022a redundant_metadata all default<br>
><br>
> Sorry for my bad English.<br>
><br>
> What could be the problem? Thanks.<br>
><br>
> Best regards,<br>
><br>
> Anthony<br>
><br>
><br>
</div></div><div class="HOEnZb"><div class="h5">> ______________________________<wbr>_________________<br>
> OmniOS-discuss mailing list<br>
> <a href="mailto:OmniOS-discuss@lists.omniti.com">OmniOS-discuss@lists.omniti.<wbr>com</a><br>
> <a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss" rel="noreferrer" target="_blank">http://lists.omniti.com/<wbr>mailman/listinfo/omnios-<wbr>discuss</a><br>
<br>
</div></div></blockquote></div><br></div>