[OmniOS-discuss] How do non-rpool ZFS filesystems get mounted?

Jim Klimov jimklimov at cos.ru
Wed Mar 5 08:56:50 UTC 2014


On 2014-03-05 00:29, Mark Harrison wrote:
> You mention 'directories' being empty. Does /fs3-test-02 contain empty
> directories before being mounted? If so, this will be why zfs thinks
> it isn't empty and then fails to mount it. However, the child
> filesystems might still mount because their directories are empty,
> giving the appearance of everything being mounted OK.

Just in case, such cases may be verified with df, which reports the
actual mounted filesystem providing the tested directory or file:

# df -k /lib/libzfs.so /lib/libc.so /var/log/syslog
Filesystem            kbytes    used   avail capacity  Mounted on
rpool/ROOT/sol10u10  30707712 1826637 7105279    21%    /
rpool/ROOT/sol10u10/usr
                      30707712  508738 7105279     7%    /usr
rpool/SHARED/var/log 4194304    1491 3638955     1%    /var/log


This way you can test, for example, whether a directory is "standalone"
or the actively used mountpoint of a ZFS POSIX dataset.
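
A crude scripted version of that check (the path below is just an
example - substitute your own): if df reports the same filesystem for
a directory and for its parent, nothing is mounted at that directory.

a="$(df -k /fs3-test-02 | tail -1)"
b="$(df -k /fs3-test-02/.. | tail -1)"
if [ "$a" = "$b" ]; then
    echo "standalone directory (same filesystem as parent)"
else
    echo "active mountpoint"
fi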

I think a "zpool list" can help your debugging: it shows whether the
pools in question are in fact imported before "zfs mount -a" runs,
or whether some unexpected magic happens and the "zfs" command does
indeed trigger the imports.
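
For instance, running these two commands right before the mount
attempt would show the state of affairs (plain zpool/zfs usage,
nothing system-specific):

# zpool list -H -o name,health
# zfs mount

The first prints the pools currently imported; the second, with no
arguments, lists the ZFS filesystems currently mounted.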

On 2014-03-05 00:03, Chris Siebenmann wrote:
> As far as I can tell from running truss on the 'zfs mount -a' in
> /lib/svc/method/fs-local, this *does not* mount filesystems from pools
> other than rpool. However the mounts are absent immediately before it
> runs and present immediately afterwards. So: does anyone understand
> how this works? I assume 'zfs mount -a' is doing some ZFS action that
> activates non-rpool pools and causes them to magically mount their
> filesystems?

Regarding the "zfs mount -a" - I am not sure why it errors out
in your case. I can only think of some extended attributes being
in use, or overlay mounts, or the like - though such things are
likely to come up in "strange" runtime cases and mostly block
un-mounts, not orderly startup scenarios...

Namely, one thing that may be a problem is if the directory in
question is the current working directory of some process, or if
a file there has been created, used and deleted while it remains
open by some process - which is quite possible for the likes of
/var/tmp paths. But even so, this is likely to block unmounts,
not over-mounts, as long as the directory is (or seems) empty.
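
If you suspect such a holder, fuser can point at the guilty
processes - a sketch, with a hypothetical path:

# fuser -u /fs3-test-02

Each reported PID is followed by a code letter, e.g. "c" when the
directory is that process's current directory; -u adds login names.
For a filesystem that is already mounted, "fuser -cu" would report
on everything open anywhere within it.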

Also, at least as a workaround, you can switch the mountpoint
to "legacy" and refer to the dataset from /etc/vfstab, including
the "-O" option for an overlay mount. Unfortunately there is no
equivalent dataset attribute at the moment, so this is not a very
convenient solution for whole trees of datasets - but it may be
quite acceptable for leaf datasets where you don't need to
automate any sub-mounts.
Vote for https://www.illumos.org/issues/997 ;)
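
A sketch of that workaround (the dataset name and mountpoint are
hypothetical; check vfstab(4) on your release for the exact fields):

# zfs set mountpoint=legacy datapool/test

...and then a line like this in /etc/vfstab (fields: device, fsck
device, mountpoint, fstype, fsck pass, mount-at-boot, options):

datapool/test  -  /fs3-test-02  zfs  -  yes  -O

The last field carries the overlay option mentioned above.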

And finally, I also don't know where the pools get imported,
but "zfs mount -a" *should* only mount datasets with canmount=on
and zoned=off (when in the global zone) and a valid mountpoint
path, picked from whatever pools are imported at the moment. The
mounts from different pools may be done in parallel, so if you
need some specific order of mounts (e.g. rpool/export/home before
datapool/export/home/user... okay, there is in fact no ordering
problem with these - but just to give *some* viable example) you
may have to specify the entries in /etc/vfstab.
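
To see at a glance which datasets qualify, something like this
helps (standard zfs list usage):

# zfs list -o name,canmount,zoned,mountpoint,mounted -t filesystem

Anything with canmount=off, a zoned dataset seen from the global
zone, or a mountpoint of "legacy" or "none" is skipped by
"zfs mount -a".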

I can guess (but would need to grok the code) that something
like "zpool import -N -a" is done in some part of the root
environment preparation to import all pools referenced in
/etc/zfs/zpool.cache, perhaps some time after the rpool is
imported and the chosen root dataset is mounted explicitly
to anchor the running kernel.
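
For reference, the manual equivalent of that guess would be along
these lines (flags per zpool(1M)):

# zpool import -a -N -c /etc/zfs/zpool.cache

That is: import every pool listed in the cache file (-a, -c) but
do not mount any of their datasets (-N).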

As another workaround, you can export the pool which contains
your "problematic" datasets, so it is un-cached from zpool.cache
and is neither imported nor mounted automatically during system
bootup - that way the system becomes able to boot successfully
at least to the point of being accessible over ssh, for example.
Then you import and mount that other pool as an SMF service, on
which your other services can depend before they proceed; see
here for ideas and code snippets:

http://wiki.openindiana.org/oi/Advanced+-+ZFS+Pools+as+SMF+services+and+iSCSI+loopback+mounts
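
The heart of such an SMF method script can be quite small - a
sketch, with a hypothetical pool name "datapool" (the wiki page
above has complete manifests and fuller error handling):

#!/sbin/sh
# Import and mount the data pool outside of the usual boot path.
. /lib/svc/share/smf_include.sh

case "$1" in
start)
        # -N imports without mounting; the explicit mount step below
        # makes any mount failure visible as a failure of this service
        zpool import -N datapool || exit $SMF_EXIT_ERR_FATAL
        zfs mount -a || exit $SMF_EXIT_ERR_FATAL
        ;;
stop)
        zpool export datapool || exit $SMF_EXIT_ERR_FATAL
        ;;
*)
        echo "Usage: $0 {start|stop}" >&2
        exit $SMF_EXIT_ERR_CONFIG
        ;;
esac
exit $SMF_EXIT_OK

Services that need the data can then declare a dependency on this
one and will only start once the pool is imported and mounted.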

HTH,
//Jim Klimov


