[OmniOS-discuss] Restructuring ZPool required?

Olaf Marzocchi lists at marzocchi.net
Mon Jun 18 07:33:14 UTC 2018


On that page you should also check the raw output of dd, which shows the IOPS in the last column.
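
If you want numbers from your own pool rather than theirs, a quick sequential test is easy to put together (the pool name "tank" and the sizes below are placeholders; make sure compression is off on the dataset you write to, otherwise /dev/zero compresses away to nothing):

  # write ~16 GiB of sequential data
  dd if=/dev/zero of=/tank/ddtest bs=1024k count=16384

  # in a second terminal: per-vdev bandwidth and operations, 5 s intervals
  zpool iostat -v tank 5

zpool iostat also shows how the operations are spread across the vdevs; with a single 24-disk raidz2 vdev there is only one vdev for them to land on.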

Olaf




On 18 June 2018 08:50:16 CEST, priyadarshan <priyadarshan at scs.re> wrote:
>
>> On 18 Jun 2018, at 08:27, Oliver Weinmann
>> <oliver.weinmann at icloud.com> wrote:
>> 
>> Hi,
>> 
>> we have an HGST 4U60 SATA JBOD with 24 x 10TB disks. I just noticed
>> that back then, when we created the pool, we only cared about disk
>> space, so we created a raidz2 pool with all 24 disks in one vdev. I
>> have the impression that this is great for disk space but really bad
>> for IO, since a single vdev only provides the IOPS of a single disk.
>> We only use it for backups and cold CIFS data, but I have the
>> impression that running even a single VEEAM backup copy job really
>> maxes out the IO. In our case the VEEAM backup copy job both reads
>> and writes the data on this storage. Now I wonder if it makes sense
>> to restructure the pool. I have to admit that I don't have any other
>> system with a lot of disk space, so I can't simply mirror the
>> snapshots to another system and recreate the pool from scratch.
>> 
>> Would adding two ZIL SSDs improve performance?
>> 
>> Any help is much appreciated.
>> 
>> Best Regards,
>> Oliver
>
>Hi,
>
>I would be interested to know as well.
>
>Sometimes we have the same issue: the need for large space versus the
>need to optimise for speed (read, write, or both). We are also using
>10TB disks at the moment, although we never do RAID-Z2 with more than
>10 disks.
>
>This page has some testing that was useful to us:
>https://calomel.org/zfs_raid_speed_capacity.html
>
>Section «Spinning platter hard drive raids» has your use case (although
>4TB, not 10TB):
>
>24x 4TB, 12 striped mirrors,   45.2 TB,  w=696MB/s, rw=144MB/s, r=898MB/s
>24x 4TB, raidz  (raid5),       86.4 TB,  w=567MB/s, rw=198MB/s, r=1304MB/s
>24x 4TB, raidz2 (raid6),       82.0 TB,  w=434MB/s, rw=189MB/s, r=1063MB/s
>24x 4TB, raidz3 (raid7),       78.1 TB,  w=405MB/s, rw=180MB/s, r=1117MB/s
>24x 4TB, striped raid0,        90.4 TB,  w=692MB/s, rw=260MB/s, r=1377MB/s
>
>Different adapters/disks will change the absolute numbers, but I do
>not think the ratios will change much.
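>
>As a rough sketch (the device names below are placeholders, not your
>actual layout), the difference between one big vdev and several smaller
>ones is only in how the disks are grouped at pool creation time. For
>example, 24 disks as four 6-disk raidz2 vdevs:
>
>  zpool create tank \
>    raidz2 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0  c1t5d0 \
>    raidz2 c1t6d0  c1t7d0  c1t8d0  c1t9d0  c1t10d0 c1t11d0 \
>    raidz2 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 \
>    raidz2 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0
>
>That keeps two-disk redundancy per vdev, costs 8 disks of parity
>instead of 2, and should give roughly four times the random IOPS of the
>single 24-disk raidz2, since random IOPS scale with the number of
>vdevs. The striped-mirror line above is the same idea taken further:
>twelve "mirror diskA diskB" groups instead of four raidz2 groups.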
>
>It would be interesting to see how a separate ZIL device (SLOG) would
>affect those numbers.
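>
>A separate log device only accelerates synchronous writes, so whether
>it helps a backup workload at all depends on whether that workload
>issues sync writes. Trying it is at least non-destructive; as a sketch
>(the SSD device names are placeholders):
>
>  # attach two SSDs as a mirrored log device
>  zpool add tank log mirror c2t0d0 c2t1d0
>
>  # if it makes no difference, remove it again, using the log vdev
>  # name reported by 'zpool status' (a mirror-N entry, e.g. mirror-1)
>  zpool remove tank mirror-1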
>
>
>Priyadarshan
>_______________________________________________
>OmniOS-discuss mailing list
>OmniOS-discuss at lists.omniti.com
>http://lists.omniti.com/mailman/listinfo/omnios-discuss

