[OmniOS-discuss] ZFS Volumes and vSphere Disks - Storage vMotion Speed

Richard Elling richard.elling at richardelling.com
Mon Jan 19 17:25:03 UTC 2015


Thanks Rune, more below...

> On Jan 19, 2015, at 5:23 AM, Rune Tipsmark <rt at steait.net> wrote:
> 
> From: Richard Elling <richard.elling at richardelling.com>
> Sent: Monday, January 19, 2015 1:57 PM
> To: Rune Tipsmark
> Cc: omnios-discuss at lists.omniti.com
> Subject: Re: [OmniOS-discuss] ZFS Volumes and vSphere Disks - Storage vMotion Speed
>  
> 
>> On Jan 19, 2015, at 3:55 AM, Rune Tipsmark <rt at steait.net <mailto:rt at steait.net>> wrote:
>> 
>> hi all,
>>  
>> just in case there are other people out there using their ZFS box against vSphere 5.1 or later... I found my storage vMotions were slow... really slow... there is not much info available, and after a while of trial and error I found a nice combo that works very well in terms of performance, latency and throughput as well as storage vMotion.
>>  
>> - Use ZFS volumes instead of thin provisioned LUs - volumes support two of the VAAI features
>> 
> 
> AFAIK, ZFS is not available in VMware. Do you mean run iSCSI to connect the ESX box to
> the server running ZFS? If so...
> >> I run 8G Fibre Channel

ok, still it is COMSTAR, so the backend is the same
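
For anyone following along, the backend being discussed looks roughly like this: a zvol exported as a COMSTAR logical unit over the FC target. A minimal sketch, assuming an existing pool and an FC target port that is already online (pool name, volume name and size are made up for illustration):

    # create a 200G ZFS volume (zvol) to back a VMware datastore
    zfs create -V 200G tank/vsphere-ds01

    # register the zvol as a SCSI logical unit with COMSTAR
    stmfadm create-lu /dev/zvol/rdsk/tank/vsphere-ds01

    # make the LU visible (use host/target groups to restrict access in production)
    stmfadm add-view 600144f0...        # GUID as printed by create-lu

    # confirm the LU and the target ports are online
    stmfadm list-lu -v
    stmfadm list-target -v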

>> - Use thick provisioned disks (lazy zeroed in my case); this reduced storage vMotion time by 90% or so - machine 1 dropped from 8½ minutes to 23 seconds and machine 2 dropped from ~7 minutes to 54 seconds... a rather nice improvement simply by changing from thin to thick provisioning.
>> 
> 
> This makes no difference in ZFS. The "thick provisioned" volume is simply a volume with a reservation.
> All allocations are copy-on-write. So the only difference between a "thick" and "thin" volume occurs when
> you run out of space in the pool.
> >> I am talking about thick provisioning in VMware; that's where it makes a huge difference

yes, you should always let VMware think it is thick provisioned, even if it isn't. VMware is too
ignorant of copy-on-write file systems to be able to make good decisions.
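
On the ZFS side the only difference is the reservation; a minimal sketch to illustrate (dataset names and sizes are made up):

    # "thick" zvol: volsize is backed by a refreservation by default
    zfs create -V 100G tank/vm-thick

    # "thin" (sparse) zvol: -s skips the reservation, space is allocated on write
    zfs create -s -V 100G tank/vm-thin

    # compare: only the reservation differs; both allocate blocks copy-on-write
    zfs get volsize,refreservation,used tank/vm-thick tank/vm-thin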

>> - I dropped my QLogic HBA max queue depth from the default 64 to 16 on all ESXi hosts and now I see an average latency of less than 1ms per datastore (on 8G Fibre Channel). Of course there are spikes when doing storage vMotion at these speeds, but it's well worth it.
>> 
> 
> I usually see storage vMotion running at wire speed for well-configured systems. When you get 
> into the 2 GByte/sec range this can get tricky, because maintaining that flow through the RAM
> and disks requires nontrivial amounts of hardware.
> >> I don't even get close to wire speed, unfortunately; my SLOGs can only do around 500-600 MByte/sec with sync=always.

Indeed, the systems we make fast have enough hardware to be fast.
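
For reference, the queue-depth change Rune describes is made per ESXi host with esxcli; a rough sketch for the classic QLogic driver (the module name and parameter differ between the older qla2xxx and the newer qlnativefc drivers, so check what your host actually loads first):

    # see which QLogic module the host is using
    esxcli system module list | grep -i ql

    # cap the HBA queue depth at 16 for the qla2xxx driver (takes effect after a reboot)
    esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=16

    # verify the parameter after the reboot
    esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth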

> More likely, you're seeing the effects of caching, which is very useful for storage vmotion and
> allows you to hit line rate.
> 
> >> Not sure this is the case with using sync=always?

Caching will make a big difference. You should also see effective use of the ZFS prefetcher.
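
If you want to see whether the cache and prefetcher are actually doing the work during a storage vMotion, the ARC kstats are the place to look; a minimal sketch on the OmniOS side (exact counter names can vary between releases):

    # overall ARC hits vs misses
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses

    # prefetch effectiveness for streaming reads such as storage vMotion
    kstat -p zfs:0:arcstats:prefetch_data_hits zfs:0:arcstats:prefetch_data_misses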

Thanks for sharing your experience.
 -- richard

>>  
>> I am getting to the point where I am almost happy with my ZFS backend for vSphere.
>> 
> 
> excellent!
>  -- richard
> 
> 
