[OmniOS-discuss] ZFS Volumes and vSphere Disks - Storage vMotion Speed

Rune Tipsmark rt at steait.net
Mon Jan 19 11:55:09 UTC 2015


hi all,



just in case there are other people out there using their ZFS box against vSphere 5.1 or later... I found my storage vMotions were slow... really slow... and with not much info available, after a while of trial and error I found a combo that works very well in terms of performance, latency, throughput and storage vMotion speed.



- Use ZFS volumes instead of thin provisioned LUs - volumes support two of the VAAI features
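For anyone wanting to try this, here is a rough sketch of creating a ZFS volume and exporting it over COMSTAR on OmniOS (the pool/volume names and size are made up - adjust to your setup; the view here exposes the LU to all initiators, which you may want to restrict):

```shell
# Create a 1 TB ZFS volume (zvol) instead of a file-backed thin LU
zfs create -V 1T tank/vsphere-vol1

# Register the zvol as a SCSI logical unit with COMSTAR
stmfadm create-lu /dev/zvol/rdsk/tank/vsphere-vol1

# Make the LU visible to initiators (use the GUID printed by create-lu)
stmfadm add-view <LU-GUID>

# From the ESXi side you can then check which VAAI primitives
# the device advertises:
#   esxcli storage core device vaai status get -d <naa.id>
```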

- Use thick provisioned disks; lazy zeroed disks in my case reduced storage vMotion time by roughly 90% - machine 1 dropped from 8½ minutes to 23 seconds and machine 2 dropped from ~7 minutes to 54 seconds... a rather nice improvement simply by changing from thin to thick provisioning.
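On the ESXi side, vmkfstools can create or convert disks to the lazy-zeroed thick format (paths and sizes below are examples only):

```shell
# Create a new 40 GB lazy-zeroed thick disk on a VMFS datastore
vmkfstools -c 40G -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk

# Or clone an existing thin disk to a lazy-zeroed thick copy
# (point the VM at the new disk afterwards)
vmkfstools -i /vmfs/volumes/datastore1/vm1/thin.vmdk \
    -d zeroedthick /vmfs/volumes/datastore1/vm1/thick.vmdk
```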

- I dropped my QLogic HBA max queue depth from the default of 64 to 16 on all ESXi hosts and now I see an average latency of less than 1ms per datastore (on 8G fibre channel). Of course there are spikes when doing storage vMotion at these speeds, but it's well worth it.
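The queue depth change can be made with esxcli; the module name below (qla2xxx) is what the classic QLogic FC driver used on ESXi 5.x and may differ depending on your driver version - check with `esxcli system module list` first. A reboot is needed for the parameter to take effect:

```shell
# Set the QLogic HBA max queue depth to 16
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=16

# Verify the setting (effective after reboot)
esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth
```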



I am getting to the point where I am almost happy with my ZFS backend for vSphere.



br,

Rune

