[OmniOS-discuss] Slow CIFS Writes when using Moca 2.0 Adapter

Guenther Alka alka at hfg-gmuend.de
Fri Jan 29 08:39:11 UTC 2016


With the default MTU of 1500 you can max out 1G networks, but on 10G you are
limited to about 300-400 MB/s.
With MTU 9000, which is supported on all of my switches and computers,
SMB2 throughput gets close to the 10G line rate.
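
For example, checking and raising the MTU on illumos looks roughly like
this (a minimal sketch; ixgbe0 is an assumed link name, and the IP
interface usually has to be removed before the MTU can be changed):

  # show the current and permitted MTU values for the link
  dladm show-linkprop -p mtu ixgbe0
  # remove the IP interface, raise the MTU, then recreate the interface
  ipadm delete-if ixgbe0
  dladm set-linkprop -p mtu=9000 ixgbe0
  ipadm create-if ixgbe0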

Higher MTU values may become interesting when 40G+ is more widely
available, although that would be a problem for my switches.

More important is the question of whether OmniOS could be optimized by
default to be better prepared for 10G, e.g. via larger IP buffers or some
NFS settings, alongside hotplug support for AHCI and a lower disk timeout.
This would cost the OS minimally more RAM and improve 1G performance only
minimally, but it would unlock the potential of 10G.
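
As a rough sketch of what such defaults could look like (the exact values
here are my assumption and would need tuning per site; note that only new
TCP connections inherit these properties):

  # raise the maximum and default TCP buffer sizes
  ipadm set-prop -p max_buf=4194304 tcp
  ipadm set-prop -p send_buf=1048576 tcp
  ipadm set-prop -p recv_buf=1048576 tcp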

Currently, if you want to use OmniOS, you must do the following after
setup from ISO/USB:
- set up networking manually at the CLI (annoying; nearly every other
installer does this for you)
- check the NIC driver config to verify that MTU 9000 is allowed there
(see the dladm example above)
- enable hotplug behaviour with AHCI
- reduce the disk timeouts, e.g. to the 7 s of TLER (the default is way
too high), see
http://everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/
- modify IP buffers and NFS settings for proper NFS/SMB performance
(see the sketch after this list)
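
As a sketch of what the last three steps can look like today (the values
are examples under my assumptions, not tested recommendations):

In /etc/system (active after the next reboot):

  * enable hotplug behaviour for SATA/AHCI
  set sata:sata_auto_online=1
  * lower the global sd disk I/O timeout from the 60 s default to 7 s
  set sd:sd_io_time=7

On a live system the disk timeout can also be patched with mdb, as the
link above describes:

  echo "sd_io_time/W 0t7" | mdb -kw

And the NFS server thread count can be raised with sharectl (again, the
value is only an example):

  sharectl set -p servers=1024 nfs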

While there is no "global best setting", the current OmniOS defaults are
worse than suboptimal.
If someone compares a default OmniOS against a BSD or Linux system, the
OmniOS results fall far below its potential.

Even this MoCA problem would have been a non-issue with higher IP buffers
by default.


Gea

On 28.01.2016 at 22:40, Dale Ghent wrote:
> For what it's worth, the max MTU for X540 (and X520, and X550) is 15.5k. You can nearly double the frame size that you used in your tests, switch and the MacOS ixgbe driver allowing, of course.
>
>
>> On Jan 28, 2016, at 4:20 PM, Günther Alka <alka at hfg-gmuend.de> wrote:
>>
>> I have done some tests on different tuning options (network, disk, service, and client related) -
>> mainly with 10G Ethernet in mind, but they may give some ideas about the options (on the new 151017 bloody)
>>
>> http://napp-it.org/doc/downloads/performance_smb2.pdf
>>
>>
>> On 28.01.2016 at 21:15, Mini Trader wrote:
>>> I most definitely will.  Any other tunables worth looking at, or can most of these issues be fixed by the send/receive buffer size?
>>>
>>> This was a nice crash course on how TCP Window sizes can affect your data throughput!
>>>
>>> On Thu, Jan 28, 2016 at 2:49 PM, Dan McDonald <danmcd at omniti.com> wrote:
>>>
>>>> On Jan 28, 2016, at 2:44 PM, Mini Trader <miniflowtrader at gmail.com> wrote:
>>>>
>>>> Problem has been resolved :)
>>>>
>>> Makes sense.  Those settings are only inherited by new TCP connections.  Sorry I missed a good chunk of this thread, but you pretty much figured it all out.
>>>
>>> And you should check out this bloody cycle... SMB2 is on it, and it may help you further.  Or you can wait until r151018, but early testing is why we have bloody.  :)
>>>
>>> Dan
>>>
>>>
>>>
>>>

-- 
H          f   G
Hochschule für Gestaltung
university of design

Schwäbisch Gmünd
Rektor-Klaus Str. 100
73525 Schwäbisch Gmünd

Guenther Alka, Dipl.-Ing. (FH)
Leiter des Rechenzentrums
head of computer center

Tel 07171 602 627
Fax 07171 69259
guenther.alka at hfg-gmuend.de
http://rz.hfg-gmuend.de


