[OmniOS-discuss] Mellanox Infiniband 2x 10Gbps MHEL-CF128-TC

Garrett D'Amore garrett.damore at dey-sys.com
Sun Aug 18 16:30:21 UTC 2013


On Aug 17, 2013, at 6:40 PM, Thibault VINCENT <thibault.vincent at smartjog.com> wrote:

>> Also, if I just have a few workstations I need to connect to an NFS
>> server over 10 GbE, I was thinking of just getting a couple of
>> adapters and crossover cables, instead of going with a switch, just
>> for cost reasons. Does this make sense, and is there any downside?
>> Switches right now seem to be pretty expensive, and for just a couple
>> of workstations/servers, it seems like crossover cables might do just
>> as well. Thoughts?
> 
> A quick comparison of Intel cards and cheap switches shows a 10GbE RJ45
> port costs double on the adapter. Choose wisely; I'd go with the switch
> for four workstations. And having lots of NICs in the same server may
> not scale well: too many interrupts, under-utilization of ring buffers
> and queues, lots of kernel tasks.

Actually, because modern 10G hardware all has multiple rings, a single 10G NIC winds up behaving much like multiple NICs… i.e. more interrupts, more kernel tasks, etc.  That's generally a *good* thing, because most CPUs can't keep up with a 10G load (unless you're using jumbo frames and/or TCP offload).  So I wouldn't assume that CPU usage will be any different for 10 x 1G cards vs. 1 x 10G.  They will probably be fairly close.

Factors that *should* drive your decisions:

1. Bandwidth needs.  If you don't need close to 10G, but just a little more than 1G (say 2G), then multiple cards may be cheaper.
2. Reliability.  A single 10G card and link are a single point of failure.  But multiple 1G cards introduce multiple points of failure -- so you're more likely to encounter *a* failure, but also more likely that any single failure won't take down basic connectivity.
3. Complexity.  A single 10G card is *lots* simpler than arranging for 802.3ad link aggregation.
4. Port costs.  Again, 2 x 1G ports are probably inexpensive.  But at 10 x 1G, you may find otherwise.
5. Power consumption.  More cards == more power consumption.  (10G consumes much more than 1G, but far less than 10x the power. :-)
6. Slot availability in your servers, and port availability on your switches.

All things being equal, I'd probably opt for a 10 GbE link unless 1GbE satisfies my needs.  I'd only use link aggregation for reliability, and I'd elect to do that with *either* 1G or 10G if I needed it.
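If you do go the aggregation route on OmniOS, the setup is a handful of dladm/ipadm commands.  A hedged sketch (link names, the address, and the aggregation name are all illustrative -- check `dladm show-link` for your actual devices, and note the switch ports must be configured for LACP as well):

```shell
# Create an 802.3ad aggregation in LACP-active mode from two 1G ports.
# e1000g0/e1000g1 are example link names; yours will differ.
dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0

# Plumb the aggregation and assign an address (example address).
ipadm create-if aggr0
ipadm create-addr -T static -a 192.168.10.5/24 aggr0/v4

# Verify LACP state and port membership.
dladm show-aggr -L
```

The same commands work for aggregating 10G ports; only the underlying link names change.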

*Do* consider the upstream bottlenecks too!  If you're connected to the internet with a 10Mbps connection, then having 10GbE isn't going to give you faster internet.  In your datacenter, if your traffic is a bunch of client machines hitting a single server, it may make sense to have a 10GbE to the server, and 1GbE to the clients.  (Assuming you have a switch with a 10GbE uplink port. :-)

	- Garrett

> 
> -- 
> Thibault VINCENT - Infrastructure Engineer 
> SmartJog | T: +33 1 5868 6238
> 27 Blvd Hippolyte Marquès, 94200 Ivry-sur-Seine, France
> www.smartjog.com | a TDF Group company
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
