[OmniOS-discuss] Restructuring ZPool required?

priyadarshan priyadarshan at scs.re
Tue Jun 19 06:52:55 UTC 2018


Thank you Gea,

Very useful and informative details.

Priyadarshan


> On 18 Jun 2018, at 11:46, Guenther Alka <alka at hfg-gmuend.de> wrote:
> 
> A Slog (you wrote ZIL, but you mean Slog: the ZIL is in-pool logging, while a Slog is logging on a dedicated device) is not a write cache. It is a logging mechanism used when sync write is enabled, and it is only read after a crash on the next bootup. CIFS does not use sync writes by default, and for NFS (which wants sync by default) you can and should disable sync when you use NFS as a pure backup target. Your benchmarks clearly show that sync is not enabled; otherwise write performance with such a large vdev would be more like 30-50 MB/s instead of your 400-700 MB/s.
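> Checking and relaxing the sync setting on a backup dataset is a one-liner each; the pool and dataset names below are only placeholders:
> 
>    # show the current sync policy (standard / always / disabled)
>    zfs get sync tank/backup
> 
>    # treat all writes as async on a pure backup target
>    # (the last few seconds of acknowledged writes can be lost on a crash)
>    zfs set sync=disabled tank/backup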
> 
> If you want to enable sync, you should look at Intel Optane as a Slog, as it is far better than any other flash-based Slog.
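> Attaching one later is a single command; pool and device names here are just examples:
> 
>    # add a dedicated log (Slog) device to the pool 'tank'
>    zpool add tank log c2t0d0
> 
>    # or, to protect against Slog failure, add a mirrored pair instead
>    zpool add tank log mirror c2t0d0 c2t1d0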
> 
> ZFS uses RAM as both read and write cache. The default write cache is 10% of RAM, up to 4GB, so the first option to improve write (and read) performance is to add more RAM. A fast L2Arc (e.g. Intel Optane, sized at 5x up to at most 10x RAM) can help if you cannot increase RAM, or if you want the read-ahead functionality that you can enable on an L2Arc. Even write performance improves with a larger read cache, since writes also need to read metadata.
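> Roughly like this (the device name is a placeholder; the tunable lets prefetched/sequential reads be cached on the L2Arc and takes effect after a reboot):
> 
>    # add a cache (L2Arc) device to the pool
>    zpool add tank cache c3t0d0
> 
>    # also cache prefetched reads on the L2Arc (illumos /etc/system tunable)
>    echo "set zfs:l2arc_noprefetch = 0" >> /etc/system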
> 
> Beside that, I would not create a RAID-Zn vdev from 24 disks. I would prefer 3 vdevs of 8 disks each, or at least two vdevs of 12 disks, as pool IOPS scale with the number of vdevs.
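> For example, a 24-disk pool laid out as three 8-disk raidz2 vdevs would be created roughly like this (pool and disk names are placeholders):
> 
>    zpool create tank \
>      raidz2 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0  c1t5d0  c1t6d0  c1t7d0  \
>      raidz2 c1t8d0  c1t9d0  c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
>      raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0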
> 
> 
> Gea
> @napp-it.org
> 
> Am 18.06.2018 um 08:50 schrieb priyadarshan:
>>> On 18 Jun 2018, at 08:27, Oliver Weinmann <oliver.weinmann at icloud.com> wrote:
>>> 
>>> Hi,
>>> 
>>> we have an HGST 4U60 SATA JBOD with 24 x 10TB disks. I just noticed that back when we created the pool we only cared about disk space, so we built a raidz2 pool with all 24 disks in one vdev. My impression is that this is great for disk space but really bad for IO, since it only provides the IOPS of a single disk. We only use it for backups and cold CIFS data, but especially a single running Veeam backup copy job really seems to max out the IO. In our case the Veeam backup copy job both reads and writes data on this storage.
>>> 
>>> Now I wonder whether it makes sense to restructure the pool. I have to admit that I don't have any other system with a lot of disk space, so I can't simply mirror the snapshots to another system and recreate the pool from scratch.
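>>> If another box with enough space did become available, the move would be roughly a snapshot plus send/receive; the pool and host names here are only placeholders:
>>> 
>>>    # replicate everything once to another host, then rebuild this pool
>>>    zfs snapshot -r tank@migrate
>>>    zfs send -R tank@migrate | ssh backuphost zfs receive -F backuppool/tank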
>>> 
>>> Would adding two ZIL SSDs improve performance?
>>> 
>>> Any help is much appreciated.
>>> 
>>> Best Regards,
>>> Oliver
>>> 
>> Hi,
>> 
>> I would be interested to know as well.
>> 
>> Sometimes we have the same issue: the need for large space vs. the need to optimise for speed (read, write, or both). We are also using 10TB disks at the moment, although we never do RAID-Z2 with more than 10 disks.
>> 
>> This page has some testing that was useful to us: 
>> https://calomel.org/zfs_raid_speed_capacity.html
>> 
>> 
>> Section «Spinning platter hard drive raids» covers your use case (although with 4TB disks, not 10TB):
>> 
>> 24x 4TB, 12 striped mirrors,   45.2 TB,  w=696MB/s , rw=144MB/s , r=898MB/s 
>> 24x 4TB, raidz (raid5),        86.4 TB,  w=567MB/s , rw=198MB/s , r=1304MB/s 
>> 24x 4TB, raidz2 (raid6),       82.0 TB,  w=434MB/s , rw=189MB/s , r=1063MB/s 
>> 24x 4TB, raidz3 (raid7),       78.1 TB,  w=405MB/s , rw=180MB/s , r=1117MB/s 
>> 24x 4TB, striped raid0,        90.4 TB,  w=692MB/s , rw=260MB/s , r=1377MB/s 
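>> For a rough local comparison on an existing pool, a simple streaming write gives the same kind of number; path and size are only an example, and with compression enabled a /dev/zero source will overstate the result:
>> 
>>    # sequential write test, 10 GiB of data into the pool
>>    dd if=/dev/zero of=/tank/ddtest bs=1024k count=10240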
>> 
>> Different adapters/disks will change the results, but I do not think the ratios will change much.
>> 
>> It would be interesting to see how a ZIL device would affect those numbers.
>> 
>> 
>> Priyadarshan
> 
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss


