<div dir="ltr">Is there a way to adjust the default TCP window size for CIFS or NFS?</div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jan 28, 2016 at 1:39 PM, Mini Trader <span dir="ltr"><<a href="mailto:miniflowtrader@gmail.com" target="_blank">miniflowtrader@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I also tried the following, which seems to have improved iperf speeds, but I am still getting the same CIFS speeds.<div><br></div><div><div>root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p recv_buf=1048576 tcp</div><div>root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p send_buf=1048576 tcp</div><div>root@storage1:/var/web-gui/data/tools/iperf# ipadm set-prop -p max_buf=4194304 tcp</div></div><div><br></div><div><br></div><div><span class=""><div>------------------------------------------------------------</div><div>Server listening on TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>------------------------------------------------------------</div><div>Client connecting to storage1.midway, TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div></span><div>[ 4] local 10.255.0.141 port 33452 connected with 10.255.0.15 port 5001</div><div>[ ID] Interval Transfer Bandwidth</div><div>[ 4] 0.0- 1.0 sec 106 MBytes 892 Mbits/sec</div><span class=""><div>[ 4] 1.0- 2.0 sec 111 MBytes 928 Mbits/sec</div></span><div>[ 4] 2.0- 3.0 sec 108 MBytes 904 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 109 MBytes 916 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 110 MBytes 923 Mbits/sec</div><div>[ 4] 5.0- 6.0 sec 110 MBytes 919 Mbits/sec</div><div>[ 4] 6.0- 7.0 sec 110 MBytes 919 Mbits/sec</div><div>[ 4] 7.0- 8.0 sec 105 MBytes 884 Mbits/sec</div><div>[ 4] 8.0- 9.0 sec 109 MBytes 915 Mbits/sec</div><div>[ 
4] 9.0-10.0 sec 111 MBytes 928 Mbits/sec</div><div>[ 4] 0.0-10.0 sec 1.06 GBytes 912 Mbits/sec</div><div>[ 4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 50899</div><div>[ 4] 0.0- 1.0 sec 97.5 MBytes 818 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 110 MBytes 923 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 49.3 MBytes 414 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 98.0 MBytes 822 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 96.7 MBytes 811 Mbits/sec</div><div>[ 4] 5.0- 6.0 sec 99.7 MBytes 836 Mbits/sec</div><div>[ 4] 6.0- 7.0 sec 103 MBytes 861 Mbits/sec</div><div>[ 4] 7.0- 8.0 sec 101 MBytes 851 Mbits/sec</div><div>[ 4] 8.0- 9.0 sec 104 MBytes 876 Mbits/sec</div><div>[ 4] 9.0-10.0 sec 104 MBytes 876 Mbits/sec</div><div>[ 4] 0.0-10.0 sec 966 MBytes 808 Mbits/sec</div></div><div><br></div><div><div>root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p recv_buf tcp</div><div>root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p send_buf tcp</div><div>root@storage1:/var/web-gui/data/tools/iperf# ipadm reset-prop -p max_buf tcp</div></div><div><br></div><div><span class=""><div>------------------------------------------------------------</div><div>Server listening on TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>------------------------------------------------------------</div><div>Client connecting to storage1.midway, TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div></span><div>[ 4] local 10.255.0.141 port 33512 connected with 10.255.0.15 port 5001</div><div>[ ID] Interval Transfer Bandwidth</div><div>[ 4] 0.0- 1.0 sec 35.2 MBytes 296 Mbits/sec</div><span class=""><div>[ 4] 1.0- 2.0 sec 35.0 MBytes 294 Mbits/sec</div></span><div>[ 4] 2.0- 3.0 sec 34.2 MBytes 287 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 33.4 MBytes 280 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 34.1 MBytes 286 Mbits/sec</div><div>[ 4] 
5.0- 6.0 sec 35.2 MBytes 296 Mbits/sec</div><div>[ 4] 6.0- 7.0 sec 35.4 MBytes 297 Mbits/sec</div><div>[ 4] 7.0- 8.0 sec 34.4 MBytes 288 Mbits/sec</div><div>[ 4] 8.0- 9.0 sec 35.0 MBytes 294 Mbits/sec</div><div>[ 4] 9.0-10.0 sec 33.4 MBytes 280 Mbits/sec</div><div>[ 4] 0.0-10.0 sec 346 MBytes 289 Mbits/sec</div><div>[ 4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 41435</div><div>[ 4] 0.0- 1.0 sec 57.6 MBytes 483 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 87.2 MBytes 732 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 99.3 MBytes 833 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 99.5 MBytes 835 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 100 MBytes 842 Mbits/sec</div><div>[ 4] 5.0- 6.0 sec 103 MBytes 866 Mbits/sec</div><div>[ 4] 6.0- 7.0 sec 100 MBytes 840 Mbits/sec</div><div>[ 4] 7.0- 8.0 sec 98.7 MBytes 828 Mbits/sec</div><div>[ 4] 8.0- 9.0 sec 101 MBytes 847 Mbits/sec</div><div>[ 4] 9.0-10.0 sec 105 MBytes 882 Mbits/sec</div><div>[ 4] 0.0-10.0 sec 954 MBytes 799 Mbits/sec</div></div><div><br></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jan 28, 2016 at 11:34 AM, Mini Trader <span dir="ltr"><<a href="mailto:miniflowtrader@gmail.com" target="_blank">miniflowtrader@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Thank you for all the responses! I've run some more detailed tests using iperf 2. The results are in line with the transfer rates, so they describe the behavior that I am seeing.</div><div><br></div><div>Note that I used a laptop on the same connection as the desktop, so that there would be a basis for comparing it to the desktop.</div><div><br></div><div>For some reason the laptop has a limit of around 500-600 Mbit/sec for its downloads; regardless, the tests still seem to show the behavior</div><div>that I am seeing. Note that Linux does not seem to have the same issues that OmniOS does. 
Additionally, OmniOS does not have the issue</div><div>when using a direct ethernet connection. One thing I can say about Linux is that its downloads on the adapters are slower than its uploads, which</div><div>is the complete opposite of OmniOS. This Linux behavior is not seen when using ethernet.</div><div><br></div><div>Both Linux and OmniOS are running on ESXi 6U1. OmniOS is using the vmxnet driver.</div><div><br></div><div>The adapters being used are Adaptec ECB6200. These are bonded Moca 2.0 adapters and are running the latest firmware.</div><div><br></div><div>Source Machine: Desktop</div><div>Connection: Adapter</div><div>Windows <-> OmniOS </div><div><br></div><div>Server listening on TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>------------------------------------------------------------</div><div>Client connecting to storage1, TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>[ 4] local 10.255.0.141 port 31595 connected with 10.255.0.15 port 5001</div><div>[ ID] Interval Transfer Bandwidth</div><div>[ 4] 0.0- 1.0 sec 34.9 MBytes 293 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 35.0 MBytes 294 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 35.2 MBytes 296 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 34.4 MBytes 288 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 34.5 MBytes 289 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 174 MBytes 292 Mbits/sec</div><div>[ 4] local 10.255.0.141 port 5001 connected with 10.255.0.15 port 33341</div><div>[ 4] 0.0- 1.0 sec 46.2 MBytes 388 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 101 MBytes 849 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 104 MBytes 872 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 101 MBytes 851 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 102 MBytes 855 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 457 MBytes 763 Mbits/sec</div><div><br></div><div>Source Machine: Desktop</div><div>Connection: 
Adapter</div><div>Windows <-> Linux</div><div><br></div><div>Server listening on TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>------------------------------------------------------------</div><div>Client connecting to media.midway, TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>[ 4] local 10.255.0.141 port 31602 connected with 10.255.0.73 port 5001</div><div>[ ID] Interval Transfer Bandwidth</div><div>[ 4] 0.0- 1.0 sec 108 MBytes 902 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 111 MBytes 929 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 111 MBytes 928 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 106 MBytes 892 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 109 MBytes 918 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 545 MBytes 914 Mbits/sec</div><div>[ 4] local 10.255.0.141 port 5001 connected with 10.255.0.73 port 55045</div><div>[ 4] 0.0- 1.0 sec 67.0 MBytes 562 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 75.6 MBytes 634 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 75.1 MBytes 630 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 74.5 MBytes 625 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 75.7 MBytes 635 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 368 MBytes 616 Mbits/sec</div><div><br></div><div><br></div><div>Machine: Laptop</div><div>Connection: Adapter</div><div>Windows <-> OmniOS (notice the same ~35 MB/sec cap)</div><div><br></div><div>------------------------------------------------------------</div><div>Server listening on TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>------------------------------------------------------------</div><div>Client connecting to storage1.midway, TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>[ 4] local 10.255.0.54 port 57487 connected with 10.255.0.15 port 
5001</div><div>[ ID] Interval Transfer Bandwidth</div><div>[ 4] 0.0- 1.0 sec 35.5 MBytes 298 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 35.0 MBytes 294 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 35.0 MBytes 294 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 34.2 MBytes 287 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 33.9 MBytes 284 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 174 MBytes 291 Mbits/sec</div><div>[ 4] local 10.255.0.54 port 5001 connected with 10.255.0.15 port 40779</div><div>[ 4] 0.0- 1.0 sec 28.8 MBytes 242 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 55.8 MBytes 468 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 43.7 MBytes 366 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 50.7 MBytes 425 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 52.7 MBytes 442 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 233 MBytes 389 Mbits/sec</div><div><br></div><div>Machine: Laptop</div><div>Connection: Adapter</div><div>Windows <-> Linux (no issue on upload, same as desktop)</div><div><br></div><div>Server listening on TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>------------------------------------------------------------</div><div>Client connecting to media.midway, TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>[ 4] local 10.255.0.54 port 57387 connected with 10.255.0.73 port 5001</div><div>[ ID] Interval Transfer Bandwidth</div><div>[ 4] 0.0- 1.0 sec 110 MBytes 919 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 110 MBytes 920 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 110 MBytes 921 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 110 MBytes 923 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 110 MBytes 919 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 548 MBytes 919 Mbits/sec</div><div>[ 4] local 10.255.0.54 port 5001 connected with 10.255.0.73 port 52723</div><div>[ 4] 0.0- 1.0 sec 49.8 MBytes 418 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 55.1 MBytes 462 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 55.1 MBytes 462 
Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 53.6 MBytes 449 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 56.9 MBytes 477 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 271 MBytes 454 Mbits/sec</div><div><br></div><div>Machine: Laptop</div><div>Connection: Ethernet</div><div>Windows <-> OmniOS (No issues on upload)</div><div>------------------------------------------------------------</div><div>Server listening on TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>------------------------------------------------------------</div><div>Client connecting to storage1.midway, TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>[ 4] local 10.255.0.54 port 57858 connected with 10.255.0.15 port 5001</div><div>[ ID] Interval Transfer Bandwidth</div><div>[ 4] 0.0- 1.0 sec 113 MBytes 950 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 111 MBytes 928 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 109 MBytes 912 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 111 MBytes 931 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 106 MBytes 889 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 550 MBytes 921 Mbits/sec</div><div>[ 4] local 10.255.0.54 port 5001 connected with 10.255.0.15 port 42565</div><div>[ 4] 0.0- 1.0 sec 38.4 MBytes 322 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 68.9 MBytes 578 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 67.7 MBytes 568 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 66.7 MBytes 559 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 63.2 MBytes 530 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 306 MBytes 513 Mbits/sec</div><div><br></div><div>Machine: Laptop</div><div>Connection: Ethernet</div><div>Windows <-> Linux (Exact same speeds this time as OmniOS)</div><div>------------------------------------------------------------</div><div>Server listening on TCP port 5001</div><div>TCP window size: 977 
KByte</div><div>------------------------------------------------------------</div><div>------------------------------------------------------------</div><div>Client connecting to media.midway, TCP port 5001</div><div>TCP window size: 977 KByte</div><div>------------------------------------------------------------</div><div>[ 4] local 10.255.0.54 port 57966 connected with 10.255.0.73 port 5001</div><div>[ ID] Interval Transfer Bandwidth</div><div>[ 4] 0.0- 1.0 sec 110 MBytes 920 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 111 MBytes 932 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 111 MBytes 931 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 108 MBytes 902 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 106 MBytes 887 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 545 MBytes 913 Mbits/sec</div><div>[ 4] local 10.255.0.54 port 5001 connected with 10.255.0.73 port 52726</div><div>[ 4] 0.0- 1.0 sec 63.4 MBytes 532 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 62.9 MBytes 528 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 66.7 MBytes 560 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 65.3 MBytes 548 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 66.8 MBytes 560 Mbits/sec</div><div>[ 4] 0.0- 5.0 sec 326 MBytes 545 Mbits/sec</div><div><br></div></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 27, 2016 at 10:35 PM, Bob Friesenhahn <span dir="ltr"><<a href="mailto:bfriesen@simple.dallas.tx.us" target="_blank">bfriesen@simple.dallas.tx.us</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>On Wed, 27 Jan 2016, Mini Trader wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Slow CIFS Writes when using Moca 2.0 Adapter.<br>
<br>
I am experiencing this only under OmniOS. I do not see this in Windows or Linux.<br>
<br>
I have a ZFS CIFS share setup which can easily do writes that would saturate a 1GBe connection.<br>
<br>
My problem appears to be related somehow to the interaction between OmniOS and ECB6200 Moca 2.0 adapters.<br>
<br>
1. If I write to my OmniOS CIFS share using ethernet my speeds up/down are around 110 MB/sec - good<br>
<br>
2. If I write to my share using the same source but over the adapter my speeds are around 35 MB/sec - problem<br>
</blockquote>
<br></span>
MoCA has a 3.0+ millisecond latency (I typically see 3.5ms when using ping). This latency is fairly large compared with typical hard drive latencies and vastly higher than Ethernet. There is nothing which can be done about this latency.<br>
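The practical impact of that latency can be sketched with the bandwidth-delay product. This is a back-of-the-envelope illustration, not a measurement from this thread; the 128 KB default receive buffer is my assumption (the actual default on your build can be checked with `ipadm show-prop -p recv_buf tcp`):

```python
# Window-limited TCP throughput is roughly window / RTT.
# Assumed figures: a 128 KB default recv_buf (verify on your system) and
# the ~3.5 ms MoCA round trip mentioned above.

def window_limited_throughput(window_bytes: int, rtt_seconds: float) -> float:
    """Maximum bits/sec a single TCP connection can move with a fixed window."""
    return window_bytes * 8 / rtt_seconds

moca_rtt = 0.0035                                       # ~3.5 ms round trip over MoCA
default_mbit = window_limited_throughput(128 * 1024, moca_rtt) / 1e6
tuned_mbit = window_limited_throughput(1048576, moca_rtt) / 1e6

print(f"128 KB window: {default_mbit:.0f} Mbit/s")      # ~300 Mbit/s
print(f"1 MB window:   {tuned_mbit:.0f} Mbit/s")        # window no longer the bottleneck
```

That ~300 Mbit/s figure lines up closely with the ~290 Mbit/s (35 MByte/s) cap seen in the iperf runs earlier in the thread, which would explain why raising recv_buf/send_buf helped iperf.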
<br>
Unbonded MoCA 2.0 throughput for streaming data is typically 500Mbit/second, and bonded (two channels) MoCA 2.0 doubles that (the claimed specs are of course higher than this and higher speeds can be measured under ideal conditions). This means that typical MoCA 2.0 (not bonded) achieves a bit less than half of what gigabit Ethernet achieves when streaming data over TCP.<span><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
3. If I read from the share using the same device over the adapter my speeds are around 110 MB/sec - good<br>
</blockquote>
<br></span>
Reading is normally more of a streaming operation, so TCP will stream rather well.<span><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
4. If I setup a share on a Windows machine and write to it from the same source using the adapter the speeds are<br>
around 110 MB/sec. The Windows machine is actually a VM whose disks are backed by a ZFS NFS share on the same<br>
machine<br>
</blockquote>
<br></span>
This seems rather good. Quite a lot depends on what the server side does. If it commits each write to disk before accepting more, then the write speed would suffer.<span><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
So basically the issue only takes place when writing to the OmniOS CIFS share using the adapter, if the adapter is<br>
not used then the write speed is perfect.<br>
</blockquote>
<br></span>
If the MoCA adapter supports bonded mode, be aware that bonded mode usually needs to be explicitly enabled. Is it possible that the Windows driver is enabling bonded mode but the OmniOS driver does not?<br>
<br>
Try running a TCP streaming benchmark (program to program) to see what the peak network throughput is in each case.<span><br>
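A program-to-program streaming test does not have to be iperf; a minimal sketch in Python (sizes and names here are illustrative, not from this thread) that measures one-way TCP throughput:

```python
# Minimal TCP streaming benchmark: a drain thread reads bytes as fast as it
# can while the sender pushes a fixed volume, then throughput is reported
# in Mbit/s. This runs both ends locally over loopback; for a real
# two-machine test you would run the drain loop on the server and the send
# loop on the client.
import socket
import threading
import time

def _drain(server_sock: socket.socket) -> None:
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(65536):          # read until the sender closes
            pass

def stream_test(host: str = "127.0.0.1", megabytes: int = 64) -> float:
    srv = socket.socket()
    srv.bind((host, 0))                  # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=_drain, args=(srv,))
    t.start()

    chunk = b"x" * 65536
    start = time.monotonic()
    with socket.create_connection((host, port)) as c:
        for _ in range(megabytes * 16):  # 16 x 64 KB = 1 MB
            c.sendall(chunk)
    t.join()
    srv.close()
    return megabytes * 8 / (time.monotonic() - start)   # Mbit/s

print(f"{stream_test():.0f} Mbit/s on loopback")
```

If the program-to-program number over the adapter matches the ~35 MByte/s CIFS number, the network path, not the filesystem, is the limiter.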
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Any ideas why/how a Moca 2.0 adapter which is just designed to convert an ethernet signal to a coax and back to<br>
ethernet would cause issues with writes on OmniOS when the exact same share has no issues when using an actual<br>
ethernet connection? More importantly, why is this happening with OmniOS CIFS and not anything else?<br>
</blockquote>
<br></span>
Latency, synchronous writes, and possibly bonding not enabled. Also, OmniOS r151016 or later is needed to get the latest CIFS implementation (based on Nexenta changes), which has been reported on this list to be quite a lot faster than the older one.<span><font color="#888888"><br>
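A rough model (figures entirely illustrative, not taken from this thread) of why a request/response protocol like SMB suffers more from latency than a streaming TCP benchmark does:

```python
# If a CIFS/SMB client keeps `in_flight` write requests of `size_bytes`
# outstanding and each needs one ~3.5 ms round trip to be acknowledged,
# throughput is capped near in_flight * size_bytes / RTT, regardless of
# link speed. All figures below are assumptions for illustration only.

def write_ceiling_mbit(size_bytes: int, in_flight: int, rtt_seconds: float) -> float:
    """Upper bound on write throughput (Mbit/s) for a request/response protocol."""
    return size_bytes * in_flight * 8 / rtt_seconds / 1e6

rtt = 0.0035   # ~3.5 ms MoCA round trip
print(f"{write_ceiling_mbit(65536, 1, rtt):.0f} Mbit/s with one 64 KB write in flight")
print(f"{write_ceiling_mbit(65536, 8, rtt):.0f} Mbit/s with 8 writes in flight")
```

So even with a well-tuned TCP window, a client that serializes its writes, or a server that commits each write before acknowledging it, will fall far short of the link rate at MoCA latencies.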
<br>
Bob<br>
-- <br>
Bob Friesenhahn<br>
<a href="mailto:bfriesen@simple.dallas.tx.us" target="_blank">bfriesen@simple.dallas.tx.us</a>, <a href="http://www.simplesystems.org/users/bfriesen/" rel="noreferrer" target="_blank">http://www.simplesystems.org/users/bfriesen/</a><br>
GraphicsMagick Maintainer, <a href="http://www.GraphicsMagick.org/" rel="noreferrer" target="_blank">http://www.GraphicsMagick.org/</a></font></span></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>