[OmniOS-discuss] Oddity in how much reserved space there is in ZFS pools?

Chris Siebenmann cks at cs.toronto.edu
Thu Oct 23 16:59:38 UTC 2014


 If you have a ZFS pool with mirror vdevs and you look at 'zfs list'
versus 'zpool list', you can see that the available space is somewhat
different between the two. This is a known issue; it comes about because
the ZFS code reserves some amount of space that can't be consumed in
normal use, and 'zfs list' accounts for this reservation while 'zpool
list' does not; see e.g.
	http://cuddletech.com/blog/pivot/entry.php?id=1013

 In the current OmniOS and Illumos source this is discussed (and handled)
in the code and comments around spa_slop_shift, spa_get_slop_space(),
and dsl_pool_adjustedsize(). There, both the code and the comments say
that the reserved space should be about 3.2% of the pool's space, or
more specifically 1/32nd of it. However, this is not the behavior I
observe in test pools; instead, when I run a test pool completely out of
space at the user level (e.g. with 'dd if=/dev/zero of=spaceeater'), so
that 'zfs list' reports 0 bytes free, 'zpool list' reports only about
1.56% free.
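
 For reference, here is a minimal standalone sketch of the reservation
arithmetic as I read those comments (this is my own illustration, not
the actual illumos code; the real spa_get_slop_space() may do more, such
as enforcing a minimum):

	/* Hypothetical sketch: reserved ("slop") space as 1/32nd of the pool. */
	#include <stdio.h>
	#include <inttypes.h>

	#define	SLOP_SHIFT	5	/* cf. spa_slop_shift in the source */

	static uint64_t
	slop_space(uint64_t pool_size)
	{
		/* 1/(2^5) = 1/32nd of the pool, i.e. about 3.2% */
		return (pool_size >> SLOP_SHIFT);
	}

	int
	main(void)
	{
		uint64_t pool_gb = 1392;	/* my test pool's 'zpool list' size */
		printf("expected slop: %" PRIu64 " GB\n", slop_space(pool_gb));
		return (0);
	}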

 I'm concerned that something funny is going on here and that the space
reservation is not working properly. Or are things working as expected
and am I just missing something?

 Thanks in advance.

(This is on OmniOS r151010 on an ashift=12 pool with two vdevs, each of
them a 2-way mirror. Specific numbers are that my test pool reports a
total size (in 'zpool list') of 1392 GB and when 'zfs list' reports 0
bytes free 'zpool list' reports 21.7 GB free.  Unless I'm doing the math
wrong, 1/32nd would be 43 GB or so.)
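
 For what it's worth, the 21.7 GB figure is much closer to 1/64th of the
pool than to the expected 1/32nd; a trivial check of the arithmetic
(again just my own illustration, nothing from the ZFS code):

	/* Which power-of-two fraction of 1392 GB is about 21.7 GB? */
	#include <stdio.h>

	int
	main(void)
	{
		double pool_gb = 1392.0;
		printf("1/32nd: %.2f GB\n", pool_gb / 32.0);	/* 43.50 GB */
		printf("1/64th: %.2f GB\n", pool_gb / 64.0);	/* 21.75 GB */
		return (0);
	}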

	- cks

