<div class="gmail_extra"><br><div class="gmail_quote">On Mon, Nov 19, 2012 at 1:20 PM, Paul B. Henson <span dir="ltr"><<a href="mailto:henson@acm.org" target="_blank">henson@acm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> I have 10 of them in a raidz2 (i.e. 8 data drives). Arguably with<br><div class="im">
<br>
</div>That's big for a raidz2.<br></blockquote><div><br>Not really. I've seen bigger, and if you are running with double parity, the only other logical config is 4 data drives + 2 parity, which means a 33% overhead, and that's ridiculous.<br>
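To make the overhead comparison concrete, here is a quick sketch of the arithmetic (the vdev widths are the ones under discussion; <code>parity_overhead</code> is just an illustrative helper):

```python
def parity_overhead(data_drives, parity_drives=2):
    """Fraction of a raidz vdev's raw capacity consumed by parity."""
    total = data_drives + parity_drives
    return parity_drives / total

# 4 data + 2 parity: a third of the raw capacity goes to parity
print(f"{parity_overhead(4):.0%}")   # 33%
# 8 data + 2 parity (the 10-drive raidz2 above): only a fifth
print(f"{parity_overhead(8):.0%}")   # 20%
```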
</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="im">
> Sub-100Mbit speeds on a VM that was connected to native dual gigabit ports<br>
> in the host.<br>
<br>
</div>Hmm. I'm almost tempted to go into work on my vacation week to cut my<br>
test box over to a gig port to see what happens :).<br></blockquote><div><br>I've booted up the KVMs now, fixed a few network configuration issues, done some more testing, and found this (iperf server is running on the VM):<br>
<span style="font-family:courier new,monospace">------------------------------------------------------------<br>Server listening on UDP port 5001<br>Receiving 1470 byte datagrams<br>UDP buffer size: 224 KByte (default)<br>
------------------------------------------------------------<br>[ 3] local X port 5001 connected with <b>Y</b> port 46307<br>[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams<br>[ 3] 0.0-10.0 sec 687 MBytes 576 Mbits/sec 0.046 ms 202862/692593 (29%)<br>
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order<br>[ 4] local X port 5001 connected with <b>Z</b> port 41118<br>[ 4] 0.0-10.2 sec 517 MBytes 423 Mbits/sec 15.583 ms 539423/908178 (59%)</span><br><br>Note:<br>
<b>Y</b> is on my local network, connected over gigabit LAN, and getting 29% packet loss<br><b>Z</b> is the <b>host</b> itself and is getting 59% packet loss<br><br>This is clearly not quite right ... host-to-VM transfers should not see any packet loss at all.<br>
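The loss percentages in the report come straight from the Lost/Total datagram columns; a quick re-derivation (the helper name is mine, the counts are from the iperf output above):

```python
def loss_pct(lost, total):
    """Packet loss as iperf reports it: lost datagrams over total sent."""
    return 100.0 * lost / total

print(f"LAN peer Y: {loss_pct(202862, 692593):.0f}%")  # 29%
print(f"KVM host Z: {loss_pct(539423, 908178):.0f}%")  # 59%
```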
<br>Going from the VM is even worse:<span style="font-family:courier new,monospace"><br>------------------------------------------------------------<br>Client connecting to Y, UDP port 5001<br>Sending 1470 byte datagrams<br>
UDP buffer size: 224 KByte (default)<br>------------------------------------------------------------<br>[ 3] local X port 34263 connected with <b>Y</b> port 5001<br>[ ID] Interval Transfer Bandwidth<br>[ 3] 0.0-10.0 sec 167 MBytes 140 Mbits/sec<br>
[ 3] Sent 119441 datagrams<br>[ 3] Server Report:<br>[ 3] 0.0-10.0 sec 167 MBytes 140 Mbits/sec 0.123 ms 0/119440 (0%)<br>[ 3] 0.0-10.0 sec 1 datagrams received out-of-order</span><br><br>These are the 100Mbit-class speeds on gigabit ethernet that I was mentioning before.<br>
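The numbers in that report are internally consistent, by the way: iperf counts MBytes in binary units (2^20) and Mbits/sec in decimal (10^6). A quick check from the datagram count alone:

```python
DATAGRAM_BYTES = 1470  # payload size from the iperf header above
datagrams, secs = 119441, 10.0

total_bytes = datagrams * DATAGRAM_BYTES
print(f"{total_bytes / 2**20:.0f} MBytes")              # 167 MBytes
print(f"{total_bytes * 8 / secs / 1e6:.0f} Mbits/sec")  # 140 Mbits/sec
```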
<br>In my further experimentation, I've found that this appears to be caused by the virtio network device - when using e1000 emulation, it all works much better.<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
So what switch was on the other side of the aggregation? Port<br>
aggregation can be pretty flaky sometimes :(. </blockquote><div><br>Dell PowerConnect 5448 (the one with iSCSI acceleration), so nothing cheap or dodgy.<br> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="im">
> NFS still goes over the ethernet connectivity, and despite being host-VM<br>
> native and should not go through the hardware, it was still far less than<br>
> gigabit throughput, when I would expect it to exceed gigabit ...<br>
</div><br>Hmm, my quick guest to host test showed over 700Mb/s. With a virtual<br>
e1000, it might be limited to gigabit no matter what, since it's<br>
pretending to be a gig card. The vmxnet3 driver in esx claims to be a<br>
10G card. I'm not sure what virtio reports to the OS, I haven't tried it<br>
yet.<br>
</blockquote></div><br>My experiments over the last few days show it to be tied to virtio (there is no issue with e1000 emulation), so this is consistent.<br><br>The problems with virtio could be either in the Solaris KVM implementation, or in the virtio drivers in my guest (Linux version 3.2.12-gentoo (root@slate) (gcc version 4.5.3 (Gentoo 4.5.3-r2 p1.5, pie-0.4.7) ) #1 SMP).<br>
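For anyone wanting to try the same swap, the emulated NIC is just the model passed to qemu; a sketch (the backend options and interface names here are generic placeholders, and the actual invocation depends on how your KVM setup wraps qemu):

```shell
# virtio NIC (the configuration showing heavy UDP loss in the tests above)
qemu-system-x86_64 ... -net nic,model=virtio -net tap,ifname=tap0 ...

# e1000 emulation (the workaround that behaved sanely)
qemu-system-x86_64 ... -net nic,model=e1000 -net tap,ifname=tap0 ...
```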
<br>Now I'm off to do a rebuild of the guest kernel, to see if that fixes the virtio issues ... if so, then it's just an issue with the Linux kernel version I was using, and therefore easily fixed. If not, then there is a pretty significant issue with the illumos KVM virtio ethernet, with the workaround being to use e1000 instead. That's still not ideal though ... <a href="http://vmstudy.blogspot.com.au/2010/04/network-speed-test-iperf-in-kvm-virtio.html">http://vmstudy.blogspot.com.au/2010/04/network-speed-test-iperf-in-kvm-virtio.html</a><br>
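For the guest-kernel rebuild, the relevant options are the standard upstream virtio symbols (listed here as a reminder, not taken from the thread; check them under the Virtualization and network-driver menus in <code>make menuconfig</code>):

```
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_NET=y
```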
<br><br><br></div>