<div dir="ltr">Networking has always used *bps - that's been the standard for many years. Megabits, Gigabits ...<div><br></div><div>Disk tools have always measured in bytes since that is how the capacity is defined.</div><div><br></div><div>How did you create your etherstub? I know you can set a maxbw (maximum bandwidth), but I don't know what the default behavior is.</div><div><br></div><div>Ian<br><div><br></div><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 14, 2017 at 9:13 AM, Dirk Willems <span dir="ltr"><<a href="mailto:dirk.willems@exitas.be" target="_blank">dirk.willems@exitas.be</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p>Thank you all, the water is already clearing up :)</p>
<p><br>
</p>
<p>So InfiniBand is 40 Gbps and not 40 GB/s - very confusing, GB/s versus
Gbps. Why don't they pick one standard and express everything in GB/s or MB/s?</p>
<p>A lot of people mix them up, me too ... <br>
</p>
<p>If it is 40 Gbps, that is a factor of 8, so theoretically we have at
most 5 GB/s throughput.<br>
</p>
<p>Quite a difference between 40 and 5 :) </p>
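<p>A quick sanity check of the bits-versus-bytes arithmetic, using plain awk with the figures from this thread (divide bits by 8 to get bytes):</p>

```shell
# 40 Gbps QDR InfiniBand expressed in GB/s
awk 'BEGIN { printf "%.1f GB/s\n", 40 / 8 }'          # prints 5.0 GB/s
# A 185 MB/s scp expressed in Gbps
awk 'BEGIN { printf "%.2f Gbps\n", 185 * 8 / 1000 }'  # prints 1.48 Gbps
```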
<p>So Ian, with 36 Gbps you have the full picture - very cool, that looks
more like it :)</p>
<p>Did I play with the frame size? Not really sure what you mean by
that, sorry, but I think it's at the default of 9000:</p>
<p>Backend_Switch0 etherstub 9000 up <br>
</p>
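<p>For reference, an etherstub and its VNICs are typically created with dladm along these lines (a sketch only; <code>stub0</code> and <code>vnic0</code> are placeholder names, and I have not verified what the default maxbw behaviour is):</p>

```shell
# Create the etherstub and attach a VNIC to it
dladm create-etherstub stub0
dladm create-vnic -l stub0 vnic0

# Optionally cap the VNIC's bandwidth; leaving maxbw unset should mean
# no artificial limit on the internal switch
dladm set-linkprop -p maxbw=2g vnic0

# Jumbo frames: set the MTU explicitly (9000, as shown above)
dladm set-linkprop -p mtu=9000 stub0

# Inspect the resulting link properties
dladm show-linkprop -p mtu,maxbw vnic0
```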
<p><br>
</p>
<p>I do understand that if we use UDP streams from process to process
it will be much quicker over the etherstub; I need to do more tests.</p>
<p>For a customer we used mbuffer with zfs send over the LAN, which is
also very quick; I sometimes use it at home as well - a very good program.</p>
<p>But I still do not understand how I can copy from one NGZ at
100 MB/s yet receive 250 MB/s on the other NGZ - very strange?</p>
<p><br>
</p>
<p>Does the dlstat command differ between OmniOSce and Solaris?<br>
</p>
<p>RBYTES => receiving</p>
<p>OBYTES => sending<br>
</p><span class="">
<p>root@test2:~# dlstat -i 2<br>
><br>
> LINK IPKTS RBYTES OPKTS OBYTES<br>
> net1 25.76K 185.14M 10.08K 2.62M<br>
> net1 27.04K 187.16M 11.23K 3.22M</p>
<p><br>
</p>
<p><br>
</p>
</span><p>BYTES => receiving and sending ?</p>
<p>But then again, when the copy is not running I see 0, so that doesn't
explain why I see 216 MB - where do the other 116 MB come from, is
it compression?<br>
</p><span class="">
<p>root@NGINX:/root# dlstat show-link NGINX1 -i 2<br>
><br>
> LINK TYPE ID INDEX PKTS BYTES<br>
> NGINX1 rx bcast -- 0 0<br>
> NGINX1 rx sw -- 0 0<br>
> NGINX1 tx bcast -- 0 0<br>
> NGINX1 tx sw -- 9.26K 692.00K<br>
> NGINX1 rx local -- 26.00K 216.32M</p>
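<p>One possible reading (an assumption on my part, not something confirmed in this thread): if, with <code>-i 2</code>, the BYTES column is a per-interval delta rather than a per-second rate, each figure covers 2 seconds and has to be divided by the interval:</p>

```shell
# Treating the rx local BYTES figures above as 2-second deltas
awk 'BEGIN { printf "%.0f MB/s\n", 216.32 / 2 }'  # prints 108 MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 253.73 / 2 }'  # prints 127 MB/s
```

<p>Read that way, the 216-254 MB intervals correspond to roughly 108-127 MB/s, close to the ~116 MB/s that scp reported, rather than a genuine 250 MB/s; whether Solaris and OmniOSce normalise this output differently would also matter here.</p>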
<p><br>
</p>
</span><p>Thank you all for your feedback, much appreciated!<br>
</p>
<p><br>
</p>
<p>Kind Regards,</p>
<p><br>
</p>
<p>Dirk<br>
</p><div><div class="h5">
<p><br>
</p>
<br>
<div class="m_-348233399284751032moz-cite-prefix">On 14-09-17 17:07, Ian Kaufman wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Some other things you need to take into account:
<div><br>
</div>
<div>QDR Infiniband is 40Gbps, not 40GB/s. That is a factor of 8
difference. That is also a theoretical maximum throughput,
there is some overhead. In reality, you will never see
40Gbps. </div>
<div><br>
</div>
<div>My system tested out at 6Gbps - 8Gbps using NFS over IPoIB,
with DDR (20Gbps) nodes and a QDR (40Gbps) storage server.
IPoIB drops the theoretical max rates to 18Gbps and 36Gbps
respectively. </div>
<div><br>
</div>
<div>If you are getting 185MB/s, you are seeing 1.48Gbps. </div>
<div><br>
</div>
<div>Keep your B's and b's straight. Did you play with your
frame size at all?</div>
<div><br>
</div>
<div>Ian</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 14, 2017 at 7:10 AM, Jim
Klimov <span dir="ltr"><<a href="mailto:jimklimov@cos.ru" target="_blank">jimklimov@cos.ru</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="m_-348233399284751032HOEnZb">
<div class="m_-348233399284751032h5">On September 14, 2017 2:26:13 PM
GMT+02:00, Dirk Willems <<a href="mailto:dirk.willems@exitas.be" target="_blank">dirk.willems@exitas.be</a>>
wrote:<br>
>Hello,<br>
><br>
><br>
>I'm trying to understand something let me explain.<br>
><br>
><br>
>Oracle always told me that if you create an etherstub switch it has<br>
>InfiniBand speed, 40 GB/s.<br>
><br>
>But I have a customer running on Solaris (yeah, I know, but let me<br>
>explain) who is copying from one NGZ to another NGZ on the same GZ over<br>
>the LAN (I know, I told him to use an etherstub).<br>
><br>
>The copy, which is performed for an Oracle database with a SQL command:<br>
>the DBA, who has 5 streams, says it is waiting on the disk; the disks<br>
>are 50 - 60% busy and the speed is 30 MB/s.<br>
><br>
><br>
>So I did some tests just to see and understand whether it's the database<br>
>or the system, but doing my tests I got very confused.<br>
><br>
><br>
>On another Solaris box at my work, copying over an etherstub switch =><br>
>copy speed is 185 MB/s - I expected much more from InfiniBand speed?<br>
><br>
><br>
>root@test1:/export/home/Admin<wbr># scp test10G<br>
><a class="m_-348233399284751032moz-txt-link-abbreviated" href="mailto:Admin@192.168.1.2:/export/" target="_blank">Admin@192.168.1.2:/export/</a>hom<wbr>e/Admin/<br>
>Password:<br>
>test10G 100%<br>
>|****************************<wbr>******************************<wbr>******|<br>
>10240<br>
>MB 00:59<br>
><br>
><br>
>root@test2:~# dlstat -i 2<br>
><br>
> LINK IPKTS RBYTES OPKTS OBYTES<br>
> net1 25.76K 185.14M 10.08K 2.62M<br>
> net1 27.04K 187.16M 11.23K 3.22M<br>
> net1 26.97K 186.37M 11.24K 3.23M<br>
> net1 26.63K 187.67M 10.82K 2.99M<br>
> net1 27.94K 186.65M 12.17K 3.75M<br>
> net1 27.45K 187.46M 11.70K 3.47M<br>
> net1 26.01K 181.95M 10.63K 2.99M<br>
> net1 27.95K 188.19M 12.14K 3.69M<br>
> net1 27.91K 188.36M 12.08K 3.64M<br>
><br>
>The disks are all separate luns with all separated
pools => disk are 20<br>
><br>
>- 30% busy<br>
><br>
><br>
>On my OmniOSce at my lab over etherstub<br>
><br>
><br>
>root@GNUHealth:~# scp test10G
<a class="m_-348233399284751032moz-txt-link-abbreviated" href="mailto:witte@192.168.20.3:/export/" target="_blank">witte@192.168.20.3:/export/</a>hom<wbr>e/witte/<br>
>Password:<br>
>test10G 76% 7853MB 116.4MB/s<br>
><br>
><br>
>=> copy is 116.4 MB/s => I expected much more from InfiniBand; it's<br>
>just the same as the LAN?<br>
><br>
><br>
>It's not that my disks can't keep up - at 17% busy they are sleeping ...<br>
><br>
> extended device statistics<br>
> r/s w/s Mr/s Mw/s wait actv wsvc_t
asvc_t %w %b device<br>
> 0,0 248,4 0,0 2,1 0,0 1,3 0,0
5,3 0 102 c1<br>
> 0,0 37,5 0,0 0,7 0,0 0,2 0,0
4,7 0 17 c1t0d0 =><br>
>rpool<br>
> 0,0 38,5 0,0 0,7 0,0 0,2 0,0
4,9 0 17 c1t1d0 =><br>
>rpool<br>
> 0,0 40,5 0,0 0,1 0,0 0,2 0,0
5,6 0 17 c1t2d0 =><br>
>data pool<br>
> 0,0 43,5 0,0 0,2 0,0 0,2 0,0
5,4 0 17 c1t3d0 =><br>
>data pool<br>
> 0,0 44,5 0,0 0,2 0,0 0,2 0,0
5,5 0 18 c1t4d0 =><br>
>data pool<br>
> 0,0 44,0 0,0 0,2 0,0 0,2 0,0
5,4 0 17 c1t5d0 =><br>
>data pool<br>
> 0,0 76,0 0,0 1,5 7,4 0,4 97,2
4,9 14 18 rpool<br>
> 0,0 172,4 0,0 0,6 2,0 0,9 11,4
5,5 12 20 DATA<br>
><br>
><br>
><br>
>root@NGINX:/root# dlstat show-link NGINX1 -i 2<br>
><br>
> LINK TYPE ID INDEX PKTS BYTES<br>
> NGINX1 rx bcast -- 0
0<br>
> NGINX1 rx sw -- 0
0<br>
> NGINX1 tx bcast -- 0
0<br>
> NGINX1 tx sw -- 9.26K
692.00K<br>
> NGINX1 rx local -- 26.00K
216.32M<br>
> NGINX1 rx bcast -- 0
0<br>
> NGINX1 rx sw -- 0
0<br>
> NGINX1 tx bcast -- 0
0<br>
> NGINX1 tx sw -- 7.01K
531.38K<br>
> NGINX1 rx local -- 30.65K
253.73M<br>
> NGINX1 rx bcast -- 0
0<br>
> NGINX1 rx sw -- 0
0<br>
> NGINX1 tx bcast -- 0
0<br>
> NGINX1 tx sw -- 8.95K
669.32K<br>
> NGINX1 rx local -- 29.10K
241.15M<br>
><br>
><br>
>On the other NGZ I receive 250MB/s ????<br>
><br>
><br>
>- So my question is: how come the speed equals the LAN's 100 MB/s on<br>
>OmniOSce, yet I receive 250 MB/s?<br>
><br>
>- Why is the etherstub so slow if InfiniBand speed is 40 GB/s?<br>
><br>
><br>
>I'm very confused right now ...<br>
><br>
><br>
>And I want to know for sure how to understand and see this in the right<br>
>way, because this customer will be the first of mine to switch completely<br>
>over to OmniOSce in production, and because this customer is one of the<br>
>biggest companies in Belgium I really don't want to mess up!<br>
><br>
><br>
>So any help and clarification will be highly appreciated!<br>
><br>
><br>
>Thank you very much.<br>
><br>
><br>
>Kind Regards,<br>
><br>
><br>
>Dirk<br>
<br>
</div>
</div>
I am not sure where the infiniband claim comes from, but
copying data disk to disk, you involve the slow layers like
disk, skewed by faster layers like cache of already-read
data and delayed writes :)<br>
<br>
If you have a wide pipe that you may fill, it doesn't mean
you do have the means to fill it with a few disks.<br>
<br>
To estimate the speeds, try pure UDP streams from process to
process (no disk), large-packet floodping, etc.<br>
<br>
I believe etherstub is not constrained artificially, and
defaults to jumbo frames. Going to LAN and back can in fact
use external hardware (IIRC there may be a system option to
disable that, not sure) and so is constrained by that.<br>
<br>
Jim<br>
--<br>
Typos courtesy of K-9 Mail on my Android<br>
<div class="m_-348233399284751032HOEnZb">
<div class="m_-348233399284751032h5">______________________________<wbr>_________________<br>
OmniOS-discuss mailing list<br>
<a href="mailto:OmniOS-discuss@lists.omniti.com" target="_blank">OmniOS-discuss@lists.omniti.co<wbr>m</a><br>
<a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss" rel="noreferrer" target="_blank">http://lists.omniti.com/mailma<wbr>n/listinfo/omnios-discuss</a><br>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<div><br>
</div>
-- <br>
<div class="m_-348233399284751032gmail_signature" data-smartmail="gmail_signature">Ian
Kaufman<br>
Research Systems Administrator<br>
UC San Diego, Jacobs School of Engineering ikaufman AT ucsd
DOT edu <br>
</div>
</div>
<br>
<fieldset class="m_-348233399284751032mimeAttachmentHeader"></fieldset>
<br>
</blockquote>
<br>
</div></div><div class="m_-348233399284751032moz-signature">-- <br><span class="">
<table style="width:400px;border-collapse:collapse" cellspacing="0" cellpadding="0" width="400">
<tbody>
<tr>
<td width="150"> <img src="http://signatures.sidekick.be/exitas/images/placeholder-exitas.jpg" style="border:none;border-collapse:collapse;width:130px;height:130px" border="0" width="130" height="130"></td>
<td style="width:230px;font-family:Arial;font-size:12px;line-height:15px;border-collapse:collapse;font-weight:300;color:#215973;vertical-align:top" width="250"> <span style="font-weight:700;font-size:15px">Dirk Willems</span><br>
<span style="font-weight:500;font-size:12px;color:e20521;line-height:21px"> System Engineer </span><br>
<br>
<br>
<a href="tel:+32%203%20443%2012%2038" value="+3234431238" target="_blank">+32 (0)3 443 12 38</a><br>
<a href="mailto:Dirk.Willems@exitas.be" style="color:#215973" target="_blank">Dirk.Willems@exitas.be</a><br>
<br>
Quality. Passion. Personality </td>
</tr>
<tr>
<td> <br>
</td>
</tr>
<tr>
<td colspan="2" style="font-family:Arial;font-size:12px;line-height:15px;border-collapse:collapse;font-weight:300;color:#215973;padding-top:5px"><a href="http://www.exitas.be/" style="color:#215973" target="_blank">www.exitas.be</a>
| Veldkant 31 | 2550 Kontich</td>
</tr>
<tr>
<td> <br>
</td>
</tr>
<tr>
<td style="font-family:Arial;font-size:12px;line-height:15px;border-collapse:collapse;font-weight:500;color:#215973;padding-top:5px">
Illumos OmniOS Installation and Configuration
Implementation Specialist.
<br>
Oracle Solaris 11 Installation and Configuration Certified
Implementation Specialist.
</td>
<td width="400"><img src="cid:part8.0C6AA6B9.798DC1B2@exitas.be" style="border:none;border-collapse:collapse;width:236px;height:126px" border="0" width="236" height="126"></td>
</tr>
</tbody>
</table>
</span></div>
</div>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Ian Kaufman<br>Research Systems Administrator<br>UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu <br></div>
</div>