[OmniOS-discuss] zpool fragmentation question

Richard Elling richard.elling at richardelling.com
Mon Feb 15 22:11:40 UTC 2016


> On Feb 15, 2016, at 4:40 AM, Dominik Hassler <hasslerd at gmx.li> wrote:
> 
> Hi there,
> 
> On my server at home (OmniOS r16, patched to the latest version) I added a brand-new zpool (a simple 2-HDD mirror).
> 
> zpool list shows 14% fragmentation on my main pool. I took a recursive snapshot of a dataset on the main pool and transferred the dataset via a replication stream to the new pool (zfs send -R mainpool/dataset@backup | zfs recv -F newpool/dataset).
> 
> Now zpool list shows 27% fragmentation on *newpool* (no other data has ever been written to that pool).
> 
> How can this be? Was my assumption wrong that send/recv acts like defrag on the receiving end?
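
For reference, the steps described above boil down to the following sketch (pool and
dataset names as in the post; zfs snapshot -r is assumed for the recursive snapshot):

    # take a recursive snapshot of the dataset
    zfs snapshot -r mainpool/dataset@backup

    # replicate it to the new pool, as in the original command
    zfs send -R mainpool/dataset@backup | zfs recv -F newpool/dataset

    # compare the aggregate fragmentation metric on both pools
    zpool list -o name,fragmentation mainpool newpool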

The pool’s fragmentation is a roll-up of the per-metaslab fragmentation. A metaslab’s fragmentation metric is a weighted
estimate of the number of small unallocated spaces in the metaslab. As such, a 100% free metaslab has no
fragmentation. Conversely, a metaslab whose free space is mostly in small (e.g. 512-byte) segments has a high fragmentation metric.
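
For a quick look at the rolled-up number itself (the same value zpool list reports
in its FRAG column):

    # read the pool-level fragmentation property for one or more pools
    zpool get fragmentation mainpool newpool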

To get a better idea of the layout, free space, and computed fragmentation metric, use “zdb -mm poolname”.
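
For example (the dump can be long, and whether a per-metaslab fragmentation
percentage is printed depends on the build and on the spacemap histogram feature
being enabled):

    # dump metaslab layout, free space, and histograms for the new pool
    zdb -mm newpool | less

    # rough filter for the per-metaslab fragmentation lines, assuming this
    # build prints them
    zdb -mm newpool | grep -i fragmentation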

It is not clear how useful the metric is in practice, particularly when comparing pools of different sizes and
metaslab counts. IMHO, the zdb -mm output is much more useful than the aggregate metric.
 — richard

--

Richard.Elling at RichardElling.com
+1-760-896-4422


