[OmniOS-discuss] OmniOS/OpenSM

Narayan Desai narayan.desai at gmail.com
Thu Jun 20 07:33:33 EDT 2013


We're primarily using iSER. We had similar problems using iSCSI over TCP
on IPoIB. We're using the system as OpenStack block storage, so
iSCSI+iSER was the simplest protocol to use.

Depending on the PCIe chipset, we see up to about 3200 MB/s with a
ConnectX-2 on gen2 PCIe (on Linux). ConnectX-3s are gen3, so those
wouldn't be bus-limited with a single-port card on a system with gen3
PCIe.
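
(The bus math behind those numbers, for reference: PCIe 2.0 runs 5 GT/s per
lane with 8b/10b encoding, so an x8 slot carries at most 32 Gb/s = 4000 MB/s
before protocol overhead, making ~3200 MB/s roughly 80% of the raw rate. A
quick sketch of the arithmetic:

```shell
# PCIe 2.0 x8: 5 GT/s/lane * 8 lanes * 8/10 encoding = usable line rate
awk 'BEGIN { gbps = 5 * 8 * 8 / 10; printf "%d Gb/s = %d MB/s\n", gbps, gbps * 1000 / 8 }'
# -> 32 Gb/s = 4000 MB/s

# PCIe 3.0 x8: 8 GT/s/lane * 8 lanes * 128/130 encoding
awk 'BEGIN { gbps = 8 * 8 * 128 / 130; printf "%.0f Gb/s = %.0f MB/s\n", gbps, gbps * 1000 / 8 }'
# -> 63 Gb/s = 7877 MB/s
```

So a single-port QDR (32 Gb/s data rate) card is right at the gen2 x8 ceiling,
while gen3 leaves plenty of headroom.)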
 -nld


On Wed, Jun 19, 2013 at 11:15 PM, David Bomba <turbo124 at gmail.com> wrote:

> We use both IPoIB and iSER. iSER can flood the HCA.
>
> With IPoIB, because TCP is done in software, we only see around 450 MB/s,
> even using connected mode with the 65k MTU and some ndd tuning for larger
> frame sizes.
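
(For anyone trying to reproduce that tuning, it looks roughly like the
following on illumos. This is a sketch, not a recipe: the link name ibd0 is a
placeholder, and the linkmode property name varies by driver version, so check
dladm show-linkprop first.

```shell
# Put the IPoIB link in connected mode so the 65520-byte MTU is usable
# (ibd0 and the linkmode property are assumptions -- verify on your system).
dladm set-linkprop -p linkmode=cm ibd0
ifconfig ibd0 mtu 65520

# Raise the TCP buffer ceilings so large windows can actually be used.
ndd -set /dev/tcp tcp_max_buf 4194304
ndd -set /dev/tcp tcp_xmit_hiwat 1048576
ndd -set /dev/tcp tcp_recv_hiwat 1048576
```

Without the buffer tuning, single-stream TCP over IPoIB stalls on window size
well before it hits the link rate.)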
>
> With 40Gb/s equipment the bottleneck will be the PCIe 2.0 bus, so I doubt
> we will be able to get over 3 GB/s.
>
> What protocols are you using for your testing? SRP, iSER, or NFSoRDMA?
>
>
> On 20 June 2013 14:09, Narayan Desai <narayan.desai at gmail.com> wrote:
>
>> I'm curious to hear what performance you end up seeing. We've got QDR
>> ConnectX-2 cards, and have only seen ~2300 MB/s out of them, compared with
>> ~3100 on Linux with the same hardware. (That's even just the purely local
>> bandwidth tests to the cards.)
>>
>> Out of curiosity, are you using IPoIB or iSER? We've seen much better
>> performance with the latter, topping out around 1800 MB/s, compared with
>> ~300 on IPoIB without much tuning. (Single clients, in both cases.) We can't
>> hit the RDMA benchmark bandwidths with a single client, even though we have
>> sufficient spindles on the back end.
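
(By "RDMA benchmark bandwidths" I mean the numbers the OFED perftest tools
report. A hedged sketch of how we run them, assuming the perftest package is
installed on both ends and the address below is the server's IPoIB address:

```shell
# On the server (storage box): listen for one benchmark run.
ib_write_bw

# On the client: -a sweeps message sizes from 2 bytes up to 8 MB,
# which shows where bandwidth saturates. 192.168.0.10 is a made-up address.
ib_write_bw -a 192.168.0.10
```

ib_read_bw and ib_send_bw work the same way; comparing those raw numbers
against what the iSER target delivers shows how much the storage stack
is leaving on the table.)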
>>  -nld
>>
>>
>> On Wed, Jun 19, 2013 at 10:01 PM, David Bomba <turbo124 at gmail.com> wrote:
>>
>>> We run 10Gb/s InfiniBand on OmniOS using the COMSTAR iSCSI target.
>>>
>>> We can flood the network easily between two OmniOS machines.
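
(Roughly, the COMSTAR setup is: carve out a zvol, register it as a logical
unit, and expose it through an iSCSI target. A sketch, with the pool and
volume names made up; the LU GUID comes from sbdadm's output:

```shell
# Enable the STMF framework and the iSCSI target service.
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# Create a backing zvol (hypothetical name) and register it as a LU.
zfs create -V 100G tank/iscsi/lun0
sbdadm create-lu /dev/zvol/rdsk/tank/iscsi/lun0

# Create a target and make the LU visible to initiators.
itadm create-target
stmfadm add-view <GUID-from-sbdadm>
```

That's the whole data path the initiators see; everything else is ZFS
underneath.)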
>>>
>>> We are about to upgrade to 40Gb/s HCAs, and I would expect similar
>>> performance.
>>>
>>> Dave
>>>
>>> On 20/06/2013, at 12:25 PM, Narayan Desai wrote:
>>>
>>> It looks like someone has managed to get things working on OI:
>>> http://syoyo.wordpress.com/category/infiniband/
>>>
>>> Just FYI, there don't seem to be many people running IB with OmniOS or
>>> OI in general. We are, but we aren't getting the performance we get with
>>> the same hardware on Linux.
>>>  -nld
>>>
>>>
>>> On Wed, Jun 19, 2013 at 8:29 PM, Moises Medina <raenac at gmail.com> wrote:
>>>
>>>> Hi Michael, thanks for the response.  I tried the usual opensm command,
>>>> which returned nothing, and I searched for it in pkg; maybe it's named
>>>> something else?
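
(In case it helps, the IPS searches I tried were along these lines; the
package names in the grep are guesses at what it might be called:

```shell
# Search installed packages first, then the configured repositories.
pkg search -l opensm
pkg search -r opensm

# Fallback: list everything available and grep for likely names.
pkg list -a | grep -i -e opensm -e infiniband
```

All of those came back empty for me.)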
>>>>
>>>> Moises
>>>>
>>>>
>>>> On Wed, Jun 19, 2013 at 4:07 PM, Michael Palmer <palmertime at gmail.com>wrote:
>>>>
>>>>> I believe that OpenSM is included in OmniOS, but I'm not sure whether
>>>>> VMware ESXi 5.x and above supports it.  If you are using ESXi 4.0, then
>>>>> some vendors might have InfiniBand drivers for ESXi.
>>>>>
>>>>> Thanks,
>>>>> Michael
>>>>>
>>>>>
>>>>> On Wed, Jun 19, 2013 at 10:30 AM, Moises Medina <raenac at gmail.com>wrote:
>>>>>
>>>>>> Hi all, new to the list.  The question I had is: has anyone
>>>>>> successfully run OpenSM on OmniOS?  I'm trying to use this as a ZFS
>>>>>> server for ESXi, but I don't have a switch at all.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Raenac
>>>>>>
>>>>>> _______________________________________________
>>>>>> OmniOS-discuss mailing list
>>>>>> OmniOS-discuss at lists.omniti.com
>>>>>> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>