[OmniOS-discuss] Best infrastructure for VSphere/Hyper-V

Nate Smith nsmith at careyweb.com
Fri Apr 3 15:11:12 UTC 2015


Update: I turned off SR-IOV and also the I/OAT DMA Engine in the BIOS to see if that solved the problem. It definitely seemed to help. My fibre targets didn't drop out immediately on a backup, but when the system came back under a decent load it dropped the LUNs again.
 
Thinking about messing with the queue depth like you mentioned, Thomas.
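For reference, queue depth is an initiator-side knob. A rough sketch of where it lives, assuming the qlnativefc module name for your ESXi build (older builds use qla2xxx) and the stock QLogic ql2300 miniport on a Windows/Hyper-V node; the value 32 is only an example, not a recommendation:

  # ESXi 5.x, QLogic native driver: cap the per-LUN queue depth, then reboot
  esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=32"
  esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth

  # Windows/Hyper-V, QLogic ql2300 miniport: queue depth is the qd= token
  # in the DriverParameter registry string (reboot afterwards)
  reg add "HKLM\SYSTEM\CurrentControlSet\Services\ql2300\Parameters\Device" /v DriverParameter /t REG_SZ /d "qd=32" /f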
 
From: T. Wiese, OVIS IT Consulting GmbH [mailto:thomas.wiese at ovis-consulting.de] 
Sent: Thursday, April 02, 2015 3:44 PM
To: Nate Smith
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] Best infrastructure for VSphere/Hyper-V
 
You wrote "I get weird "Drop outs" in certain IO situations." 
In which situations? On VMware or HyperV? 
Can you see any timeouts in an VM log?
How many Disks in your pool?
Wich Raidz? Raidcontroller?
Do you use zoning in your Fabric?
 
 


Kind regards

Thomas Wiese
Managing Partner
OVIS IT Consulting GmbH
Arnulfstraße 95, 12105 Berlin
Tel.: 030 - 2201206 01
Fax: 030 - 2201206 30
https://www.ovis-consulting.de
Managing Director: Thomas Wiese
Commercial register: Berlin-Charlottenburg, HRB 155424B
VAT ID: DE293333139
On 02.04.2015 at 21:24, Nate Smith <nsmith at careyweb.com> wrote:
 
I'm running maybe 7 active VMs on two servers across two different LUNs/pools (basically I have a "fast" storage pool and a "slower" storage pool).
 
 
From: T. Wiese, OVIS IT Consulting GmbH [mailto:thomas.wiese at ovis-consulting.de] 
Sent: Thursday, April 02, 2015 3:14 PM
To: Nate Smith
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] Best infrastructure for VSphere/Hyper-V
 
I remember I also had a problem on an IBM server...
See http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1030265
That helped me...
How many VMs do you have on each LUN?
Big LUNs sometimes cause trouble on VMware.
 



On 02.04.2015 at 20:58, Nate Smith <nsmith at careyweb.com> wrote:
 
You mentioned queue depth. I will have to check into recommended settings for Hyper-V...

My storage pool is about 32TB raw (10TB used), and I have 6 LUNs. I also have a 6TB pool with 2 LUNs feeding my 3-node Hyper-V cluster.
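For anyone comparing layouts, the pool and COMSTAR LUN inventory on the OmniOS side can be dumped with the stock tools (GUIDs and sizes will obviously differ per system):

  zpool list                   # pools, raw size, allocation
  stmfadm list-lu -v           # logical units: GUID, size, backing store
  stmfadm list-view -l <GUID>  # which host groups see a given LU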
 
Situations that trigger dropout behavior include snapshot initiation on the hypervisor and checkpoint deletion (with the accompanying VHDX merge). I'm using Dell R720s with QLE2562 cards.
 
I've also had this problem on an IBM SR2625ULXLR server, which has an Intel 5520 chipset. We discussed this a while back: both of these systems have risers, and we surmised it could be a riser issue, but I don't know if I buy it.
 
The FC switch setup is a dual (full-mesh) HP 8/20q, which is a QLogic OEM.
 
How do I turn off SR-IOV and PCI Express speed auto-detection? Is this a BIOS or an OmniOS setting? I posted errors a couple of times (you responded these could be BIOS).
 
I get weird "Drop outs" in certain IO situations. I get no notice of the drop out, just a notice when it comes back up.
 
Apr  2 09:30:32 storm3 fct: [ID 132490 kern.notice] NOTICE: qlt2,0 LINK UP, portid 20100, topology Fabric Pt-to-Pt,speed 8G
Apr  2 09:30:32 storm3 fct: [ID 132490 kern.notice] NOTICE: qlt0,0 LINK UP, portid 20000, topology Fabric Pt-to-Pt,speed 8G
Apr  2 09:31:25 storm3 last message repeated 1 time
Apr  2 09:31:54 storm3 fct: [ID 132490 kern.notice] NOTICE: qlt2,0 LINK UP, portid 20100, topology Fabric Pt-to-Pt,speed 8G
Apr  2 09:31:56 storm3 fct: [ID 132490 kern.notice] NOTICE: qlt1,0 LINK UP, portid 10000, topology Fabric Pt-to-Pt,speed 8G
Apr  2 09:31:56 storm3 fct: [ID 132490 kern.notice] NOTICE: qlt3,0 LINK UP, portid 10100, topology Fabric Pt-to-Pt,speed 8G
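Since the fct driver only logs the LINK UP side, the flap itself has to be caught by polling. A quick sketch with stock illumos tools (nothing here is specific to my setup):

  fcinfo hba-port | egrep 'Port WWN|State'       # per-port link state, should be "online"
  stmfadm list-target -v | egrep 'Target|Status' # COMSTAR's view of the same ports
  tail -f /var/adm/messages | grep fct           # watch for the next transition live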
 
 
From: T. Wiese, OVIS IT Consulting GmbH [mailto:thomas.wiese at ovis-consulting.de] 
Sent: Thursday, April 02, 2015 2:25 PM
To: Nate Smith
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] Best infrastructure for VSphere/Hyper-V
 
Which server system do you use for storage? HP, Dell, Supermicro?
Turn off SR-IOV, which causes many problems, and try turning off auto-detection for the PCI Express speed. And don't use any PCI Express slots for graphics cards!
 
Thomas
 
On 02.04.2015 at 20:16, Nate Smith <nsmith at careyweb.com> wrote:
 
Hmmm. I was hoping for an 8Gb variant (I'm on the QLE2562); I might have to go down to this to see how well it works.
 
Thanks,
 
-Nate
 
From: T. Wiese, OVIS IT Consulting GmbH [mailto:thomas.wiese at ovis-consulting.de] 
Sent: Thursday, April 02, 2015 2:15 PM
To: Nate Smith
Subject: Re: [OmniOS-discuss] Best infrastructure for VSphere/Hyper-V
 
Of course...
We use the Emulex LPe11002 dual-port 4Gb PCIe Fibre Channel card, with the latest firmware. Works great!!!
All VM servers use round-robin as the path selection policy.
For FC switches we use 2 x Brocade 5000.
Low cost, good speed, and stable ^^
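For the archives: on ESXi 5.x round-robin can be set per device, or via a claim rule so newly presented LUNs pick it up automatically. A sketch, assuming the default active/active SATP; check your array's vendor/model strings with esxcli storage core device list before adding the rule:

  # one existing device (the naa. device ID is a placeholder)
  esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR
  # claim rule so future LUNs from this array default to round robin
  esxcli storage nmp satp rule add --satp VMW_SATP_DEFAULT_AA --vendor "SUN" --model "COMSTAR" --psp VMW_PSP_RR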
 
bye
Thomas
 




On 02.04.2015 at 19:35, Nate Smith <nsmith at careyweb.com> wrote:
 
Can I ask what model cards, Thomas?
 
From: T. Wiese, OVIS IT Consulting GmbH [mailto:thomas.wiese at ovis-consulting.de] 
Sent: Thursday, April 02, 2015 1:34 PM
To: Dan McDonald
Cc: Nate Smith; omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] Best infrastructure for VSphere/Hyper-V
 
Hi,
we have had very good experience with OmniOS and Fibre Channel.
We tried InfiniBand with SRP, but no luck, because the Mellanox driver was only community supported.
But after we changed everything back to Fibre Channel it works great.
We only use Emulex cards, because they are listed in the VMware HCL for ESXi 5.5.

We have 10 VM hosts and 3 storage servers, each with one dual-port FC card.
They host 124 VMs: 10 terminal servers, 5 SQL Servers, and 8 Exchange servers.
No problems since everything was changed back to FC!

So I can say: use HCL-listed cards for VMware!

Bye
Thomas
 
 
 





On 02.04.2015 at 18:58, Dan McDonald <danmcd at omniti.com> wrote:
 





On Apr 2, 2015, at 12:43 PM, Nate Smith <nsmith at careyweb.com> wrote:

Is it limited to NFS on VSphere, or is there some way I can get this working with Hyper-V (which would be vastly preferable due to licensing advantages for me)?

Hyper-V uses SMB 3.0, right?  As with iSCSI, illumos community member (and storage vendor) Nexenta has done nontrivial amounts of work here.  It's "simply" a matter of upstreaming it.

Dan

_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss at lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss