<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 07.01.15 at 18:00, Richard
Elling wrote:<br>
</div>
<blockquote
cite="mid:ACE15BA6-97B4-4C94-B758-654AF147FE27@richardelling.com"
type="cite">
<br class="">
<div>
<blockquote type="cite" class="">
<div class="">On Jan 7, 2015, at 2:28 AM, Stephan Budach <<a
moz-do-not-send="true" href="mailto:stephan.budach@JVM.DE"
class="">stephan.budach@JVM.DE</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div text="#000000" bgcolor="#FFFFFF" class=""> <font
class="" face="Helvetica, Arial, sans-serif">Hello
everyone,<br class="">
<br class="">
I am sharing my ZFS datasets via NFS with a couple of OVM
nodes. I noticed really bad NFS read performance when
rsize goes beyond 128k, whereas performance is just fine
at 32k. The issue is that the ovs-agent, which performs
the actual mount, doesn't accept or pass any NFS mount
options to the NFS server. </font></div>
</div>
</blockquote>
<div><br class="">
</div>
<div>The other issue is that on illumos/Solaris x86, tuning the
server-side size settings does not work</div>
<div>because the compiler optimizes away the tunables. There is a
trivial fix, but it</div>
<div>requires a rebuild.</div>
<br class="">
<blockquote type="cite" class="">
<div class="">
<div text="#000000" bgcolor="#FFFFFF" class=""><font
class="" face="Helvetica, Arial, sans-serif">To give
some numbers: an rsize of 1 MB results in a read
throughput of approx. 2 MB/s, whereas an rsize of 32k
gives me 110 MB/s. Mounting an NFS export from an OEL 6u4
box has no issues with this, as the read speeds from
that export are 108+ MB/s regardless of the rsize of the
NFS mount.<br class="">
</font></div>
</div>
</blockquote>
<div><br class="">
</div>
<div>Brendan wrote about a similar issue as a case study in the
DTrace book; see the Chapter 5</div>
<div>case study on ZFS 8 KB mirror reads.</div>
<br class="">
<blockquote type="cite" class="">
<div class="">
<div text="#000000" bgcolor="#FFFFFF" class=""><font
class="" face="Helvetica, Arial, sans-serif"> <br
class="">
The OmniOS box is currently connected to a 10GbE port on
our core 6509, but the NFS client is connected through a
1GbE port only. The MTU is 1500 and cannot currently be
raised.<br class="">
Does anyone have a tip as to why an rsize of 64k+
results in such a performance drop?<br class="">
</font></div>
</div>
</blockquote>
<div><br class="">
</div>
<div>It is entirely due to optimizations for small I/O going way
back to the 1980s.</div>
<div> -- richard</div>
</div>
</blockquote>
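As a back-of-the-envelope check (my own sketch, not from Richard's mail): if the reads are fully serial, the numbers quoted above imply that each 1 MB READ takes roughly half a second to be serviced, versus a fraction of a millisecond per 32k READ:<br>
<br>

```python
def per_request_time(rsize_bytes, throughput_bytes_per_s):
    """Implied service time per NFS READ, assuming one request in flight."""
    return rsize_bytes / throughput_bytes_per_s

# Numbers quoted above: ~2 MB/s at a 1 MiB rsize, ~110 MB/s at 32 KiB.
t_large = per_request_time(1 << 20, 2e6)     # 1 MiB rsize
t_small = per_request_time(32 << 10, 110e6)  # 32 KiB rsize
print(f"1 MiB rsize:  {t_large * 1000:.0f} ms per READ")   # ~524 ms
print(f"32 KiB rsize: {t_small * 1000:.2f} ms per READ")   # ~0.30 ms
```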
But doesn't that mean that Oracle Solaris will have the same issue,
or has Oracle addressed that in recent Solaris versions? Not that I
intend to switch over, but that would be something I'd like to give
my SR engineer to chew on…<br>
<br>
In any case, the first bummer is that Oracle chose not to make its
ovs-agent capable of accepting and passing NFS mount options…<br>
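Until the agent learns to pass options, a manual mount with the sizes pinned would look roughly like this on the Linux side (hostname, export path and mountpoint are placeholders; on OVM you would also have to keep the agent from remounting it behind your back):<br>
<br>

```shell
# Workaround sketch: mount the export by hand with rsize/wsize capped
# at 32 KiB. Server, export and mountpoint below are placeholders.
mount -t nfs -o vers=3,rsize=32768,wsize=32768 \
    omnios:/tank/ovm /OVS/Repositories/example
```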
<br>
Cheers,<br>
budy<br>
</body>
</html>