[OmniOS-discuss] Network throughput 1GB/sec

Guenther Alka alka at hfg-gmuend.de
Sat Sep 17 08:50:54 UTC 2016


Hi,
The intention of this test and tuning cycle was to check 4k video
editing capability from OSX and Windows to Solaris or OmniOS storage
over 10G/40G, backed by SSD or NVMe.

With tunings and large 4k video files I was able to get about 900 MB/s
on writes, with peaks up to 1000 MB/s, over SMB 2.1, tested with the
video editing tool AJA on Windows and a speed test on OSX. These are
more or less sequential tests with a lot of large files. Values on
Solaris were slightly better than on OmniOS, so I assume OS or driver
defaults are more optimized there, at least regarding 10G/40G. Reads
were always slower and more sensitive to settings and cabling. NFS
values were not nearly as good and quite disappointing (at least on OSX).
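
The tunings in question are the ones listed in my quoted mail below. On
illumos they map to commands roughly like the following - a sketch,
assuming an ixgbe0 interface and the napp-it example values, so adjust
the interface name and the numbers for your NIC and workload:

   # jumbo frames on the data link (unplumb the interface first on most drivers)
   dladm set-linkprop -p mtu=9000 ixgbe0

   # larger TCP buffers / windows
   ipadm set-prop -p max_buf=4097152 tcp
   ipadm set-prop -p send_buf=2048576 tcp
   ipadm set-prop -p recv_buf=2048576 tcp

   # more NFS server and lockd threads
   sharectl set -p servers=64 nfs
   sharectl set -p lockd_servers=1024 nfs

The NFS transfer size is tuned separately (see the performance PDF
linked below).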

With smaller NTSC/PAL video settings (the test then uses many small
files), performance went down to 500-600 MB/s on writes and a little
lower on reads.

I am currently doing some tests with i40e on an Intel XL710, where the
difference is substantial: 2200 MB/s write on Solaris versus 1500 MB/s
on OmniOS with the same settings, while reads are currently a disaster,
at least on Windows, with up to 300 MB/s on Solaris and 150 MB/s on OmniOS.

In the best cases these values were nearly as good as the iperf values,
so near wire speed.
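
For a baseline comparison, a typical iperf run looks like this
(assuming the classic iperf 2.x tool; "server" is a placeholder
hostname, and parallel streams with a large window roughly approximate
a tuned SMB/NFS load):

   # on the storage server
   iperf -s

   # on the client: 4 parallel streams, 2 MB window, 30 seconds
   iperf -c server -P 4 -w 2M -t 30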

I would not assume that you can reach such high values with rsync. Zfs
send should be faster as it creates a file stream, but I would not
expect even zfs send over mbuffer or netcat to come close to the above.
A pure copy from/to OSX or Windows, or a cp over netcat, should be as
fast.

Gea


On 17.09.2016 at 02:57, Ergi Thanasko wrote:
> Hi Gea,
> Great info. Are you seeing 1000 MB/s doing iperf, or actual transfer rates with rsync, cp, bbcp…?
>
>
>> On Sep 16, 2016, at 12:57 PM, Guenther Alka <alka at hfg-gmuend.de> wrote:
>>
>> I have made some investigations into 10G and found that 300-400 MB/s is expected with default settings. Improvements up to 1000 MB/s are possible via mtu 9000 and by increasing the ip buffers, e.g.
>> max_buf=4097152 tcp
>> send_buf=2048576 tcp
>> recv_buf=2048576 tcp
>>
>> plus NFS lockd servers (e.g. 1024), NFS number of threads (e.g. 64) and NFS transfer size (e.g. 1048576).
>>
>> http://napp-it.org/doc/downloads/performance_smb2.pdf
>>
>>
>> Gea
>>
>> On 16.09.2016 at 19:43, Ergi Thanasko wrote:
>>> Hi all,
>>> We have a few servers connected via 10G NICs with LACP; some of them have 4 NICs and some have 6 NICs in link aggregation mode. We have been moving a lot of data around and are trying to get the maximum performance. I have seen the zpools deliver 2-3 GB/s accumulated throughput, and iperf does about 600-800 MB/sec between two of those servers.
>>> Given the hardware that we have and the zpool performance, we expected to see some serious data transfer rates; however, we only see around 200-300 MB/sec on average using rsync or copy-paste over NFS, with standard MTU 1500 and NFS block size. I want to ask the community what to do to get higher throughput at the application level. I hear ZFS send/receive or ZFS shadow migration works faster, but that involves snapshots. Our data (terabytes) is constantly evolving, and we would prefer something in the nature of rsync, but that actually utilizes the network hardware.
>>>
>>> Does anyone have a hardware setup that can see 1 GB/sec throughput and would not mind sharing?
>>> Any software that uses multithreaded sessions to move data around in a ZFS-friendly way? We would not mind going with a commercial solution like Commvault or Veeam if they work.
>>>
>>> Thank you for your time
>>>
