[OmniOS-discuss] Shopping for an all-in-one server

Chris Ferebee cf at ferebee.net
Mon Jun 2 12:00:01 UTC 2014


Jim,

If you haven’t already, you certainly want to study Joyent’s parts lists closely:

	<https://github.com/joyent/manufacturing>

Generally speaking, Supermicro is preferred, LSI HBAs in IT mode are almost mandatory, and a STEC ZeusRAM slog device might be best for your use case if you don’t mind the cost premium over the Intel DC S3700. For raidz you need at least raidz2, if not higher, so the cost advantage over a pool of mirrors may not be worth it. As I understand it, writes to each raidz vdev are limited to roughly the IOPS of a single disk. I’m still learning the ropes myself, so take all this with a grain of salt.
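
For what it’s worth, the two layouts I’m comparing would be created roughly like this (a sketch only - the cXtYdZ device names and the pool name "tank" are placeholders, and the slog is attached as a dedicated log vdev in both cases):

    # pool of mirrors: two mirror vdevs plus a dedicated slog
    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 log c0t4d0

    # raidz2: one six-disk vdev plus a dedicated slog
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 log c0t6d0

The mirrored layout gives you one vdev (and its IOPS) per mirror pair, while the raidz2 layout is a single vdev.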

But about ESXi: I have an experimental machine running ESXi with two guests: SmartOS (with a lot of storage for various backups) and OS X (which I use as my primary workstation). Things I’ve learned:

- There does not appear to be a supported VMXNET3 driver for Illumos, so virtualized networking for ESXi guests has to use the ESXi virtual e1000 device. I have had problems with that: network throughput became irregular and finally stalled completely, to the point where I had to reboot the SmartOS guest. The OS X guest works fine with the virtual e1000 device using the Apple driver.

- Because of this, I installed an additional Intel ethernet card and gave it to SmartOS via PCI passthru. Performance is now as expected. PCI passthru also works well with the LSI HBAs for SmartOS, and with an AMD GPU for OS X.
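
For instance, a quick way to confirm the passthrough worked, using nothing beyond stock illumos tooling:

    # from inside the SmartOS guest: the passed-through card should appear
    # as a regular physical link with its native driver (igb / ixgbe / e1000g,
    # depending on the exact model)
    dladm show-phys
    dladm show-link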

- The e1000 virtual device is limited to 1 Gbit/s, unlike VMXNET3. I would like to have SmartOS provide iSCSI volumes to ESXi to use as backing storage for VMs (again, this is an experimental setup…), and the best idea I have come up with is to install two Intel X520 10GbE NICs, give one to ESXi and pass one through to SmartOS. We’ll see how that goes.
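
The SmartOS/illumos side of that would be plain COMSTAR - roughly the following, where the zvol name and size are made up and the GUID comes from the output of create-lu (a sketch from memory, so double-check against the stmfadm and itadm man pages):

    # create a zvol to export, then enable the STMF and iSCSI target services
    zfs create -V 200G zones/esxi-backing
    svcadm enable -r svc:/system/stmf:default
    svcadm enable -r svc:/network/iscsi/target:default

    # register the zvol as a logical unit, make it visible, create a target
    stmfadm create-lu /dev/zvol/rdsk/zones/esxi-backing
    stmfadm add-view 600144f0...        # GUID printed by create-lu
    itadm create-target

ESXi then just needs its software iSCSI adapter pointed at the SmartOS address for discovery.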

Note that once you activate PCI passthru for a guest, you lose many of the advanced features of ESXi for that VM, such as live migration (vMotion), suspend/resume and snapshots.

Long story short, ESXi is impressive technology, and an excellent solution for virtualizing Windows (and OS X), as well as Linux. Support for Solaris, let alone Illumos, is sketchy.

For production use, I would definitely stick with SmartOS and KVM if you want to put everything on one box; that’s the sort of thing it’s designed for.
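
To give an idea of how little that involves on the SmartOS side: a KVM guest is defined by a small JSON manifest fed to vmadm, something like the following (the alias, sizes and the image UUID are placeholders, and I’m writing the field names from memory, so check vmadm(1M) before trusting them):

    # contents of build01.json (all values are placeholders)
    {
      "brand": "kvm",
      "alias": "build01",
      "ram": 8192,
      "vcpus": 4,
      "disks": [
        { "image_uuid": "<uuid-of-an-imported-image>", "image_size": 10240,
          "boot": true, "model": "virtio" }
      ],
      "nics": [
        { "nic_tag": "admin", "ip": "dhcp", "model": "virtio" }
      ]
    }

    # import the image first, then create and boot the guest
    imgadm import <uuid-of-an-imported-image>
    vmadm create < build01.json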

Best,
Chris


On 02.06.2014 at 09:38, Jim Klimov <jimklimov at cos.ru> wrote:

> Hello friends, and sorry for cross-posting to different audiences like this,
> 
> I am helping to spec out a new server for a software development department, and I am inclined to use an illumos-based system, primarily for the benefits of ZFS, file serving and zones. However, much of the target work is with Linux environments, and my tests with the latest revival of SUNWlx this year have not shown it to be a good fit (recent Debians either fail to boot or splash many errors, even if I massage the FS contents appropriately - the ultimate problem being some absent syscalls etc.). Due to this, the build and/or per-developer environments would likely live in VMs based on illumos KVM, VirtualBox, or bare-metal VMware hosting the illumos system as well as the other VMs.
> 
> Thus the box we'd build should be good at storage (including responsive read-write NFS) and VM hosting. I am not sure whether OI, OmniOS, or ESX(i?) with HBA passthrough onto an illumos-based storage/infrastructure-services VM would be the better fit. Also, I have been away from shopping for new server gear for a while, and from tracking its compatibility with illumos in particular, so I'd kindly ask for suggestions for a server like that ;)
> 
> The company's preference is to deal with HP, so while it is not an impenetrable barrier, buying whatever is available under that brand is much simpler for the department. Cost seems a much lesser constraint ;)
> 
> The box should be a reliable rackable server with remote management, substantial ECC RAM for efficient ZFS and VM needs (128-256 GB likely, possibly more), CPUs with all those VT-* bits needed for illumos-kvm and a massive number of cores (some large-scale OS rebuilds from source are likely to be a frequent task), and enough disk bays for the rpool (HDD or SSD), SSD-based ZIL and L2ARC devices (that's already half a dozen bays), possibly an SSD-based scratch area (raid0 or raid1, this depends), as well as several TB of HDD storage. Later expansions should be possible with JBODs.
> 
> I am less certain about HBAs (IT mode, without the HW-RAID crap) and the practically recommended redundancy (raidzN? raid10? how many disks' worth of redundancy are recommended at modern drive sizes - 3?). Also I am not sure about modern considerations around multiple PCI buses - especially with regard to separating SSDs onto their own HBA (or several) to avoid bottlenecks in performance and/or failures.
> 
> Finally, are departmental all-in-one combines following the Thumper ideology - data quickly accessible to applications living on the same host, without the uncertainties and delays of remote networking - still at all 'fashionable'? ;)
> Buying a single box initially may be easier to justify than multiple boxes with separate roles, but there are other considerations too. In particular, their corporate network is crappy and slow, so splitting into storage + server nodes would need either direct cabling for data or new switching gear (and I don't know yet whether that would be a problem); localhost data transfers are likely to be a lot faster. I am also not convinced of the higher reliability of split-head solutions, though for high loads I am eager to believe that separating the tasks can lead to higher performance. I am uncertain whether this setup and its tasks would qualify for that; but it might be expanded later on, including role separation, if a practical need is found after all.
> 
> PS: how do you go about backing up such a thing? Would some N54Ls suffice to receive zfs sends of select datasets? :)
> 
> So... any hints and suggestions are most welcome! ;)
> Thanks in advance,
> //Jim Klimov 
> --
> Typos courtesy of K-9 Mail on my Samsung Android
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss


