[OmniOS-discuss] mountpoint on the parent pool

Dirk Willems dirk.willems at exitas.be
Sun Feb 12 21:53:08 UTC 2017


Hi Jim,


Thanks,

If I try to install the zone without first creating the filesystem
DATA/Zones2/Test, the installation is refused, as in the next example:


root@OmniOS:/root# zfs list
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
DATA                                         435G   774G    23K none
DATA/Backup                                  432G   774G   432G /Backup
DATA/Zones2                                 2,13G   774G    23K /Zones2
DATA/Zones2/NGINX                            959M   774G    24K /Zones2/NGINX
DATA/Zones2/NGINX/ROOT                       959M   774G    23K legacy
DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K /export
DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K /export/home
DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K /export/home/witte
DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M legacy
DATA/Zones2/Percona                         1,19G   774G    24K /Zones2/Percona
DATA/Zones2/Percona/ROOT                    1,19G   774G    23K legacy
DATA/Zones2/Percona/ROOT/export               71K   774G    23K /export
DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K /export/home
DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K /export/home/witte
DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G legacy
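
The failing attempt looks like this (the same error as in my test quoted
further below):

root@OmniOS:/root# zoneadm -z Test install
Sanity Check: Looking for 'entire' incorporation.
ERROR: the zonepath must be a ZFS dataset.
The parent directory of the zonepath must be a ZFS dataset so that the
zonepath ZFS dataset can be created properly.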


But when I create the filesystem DATA/Zones2/Test first, I am allowed to
install the zone.
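
The dataset is created exactly as in my test quoted below:

root@OmniOS:/root# zfs create -o mountpoint=/Zones2/Test DATA/Zones2/Test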


root@OmniOS:/root# zfs list
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
DATA                                         435G   774G    23K  none
DATA/Backup                                  432G   774G   432G /Backup
DATA/Zones2                                 2,13G   774G    23K /Zones2
DATA/Zones2/NGINX                            959M   774G    24K /Zones2/NGINX
DATA/Zones2/NGINX/ROOT                       959M   774G    23K legacy
DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K /export
DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K /export/home
DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K /export/home/witte
DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M legacy
DATA/Zones2/Percona                         1,19G   774G    24K /Zones2/Percona
DATA/Zones2/Percona/ROOT                    1,19G   774G    23K legacy
DATA/Zones2/Percona/ROOT/export               71K   774G    23K /export
DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K /export/home
DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K /export/home/witte
DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G legacy
DATA/Zones2/Test                              23K   774G    23K  /Zones2/Test

After installing the Test zone I have this:

root@OmniOS:/root# zfs list
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
DATA                                         435G   774G    23K  none
DATA/Backup                                  433G   774G   433G /Backup
DATA/Zones2                                 2,66G   774G    24K /Zones2
DATA/Zones2/NGINX                            959M   774G    24K /Zones2/NGINX
DATA/Zones2/NGINX/ROOT                       959M   774G    23K legacy
DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K /export
DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K /export/home
DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K /export/home/witte
DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M legacy
DATA/Zones2/Percona                         1,19G   774G    24K /Zones2/Percona
DATA/Zones2/Percona/ROOT                    1,19G   774G    23K legacy
DATA/Zones2/Percona/ROOT/export               71K   774G    23K /export
DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K /export/home
DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K /export/home/witte
DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G legacy
DATA/Zones2/Test                             540M   774G    23K  /Zones2/Test
DATA/Zones2/Test/ROOT                        540M   774G    23K legacy
DATA/Zones2/Test/ROOT/zbe                    540M   774G   540M legacy


And if I uninstall the zone, it also cleans up the filesystem
DATA/Zones2/Test as it should:


root@OmniOS:/root# zoneadm -z Test uninstall
Are you sure you want to uninstall zone Test (y/[n])? y
root@OmniOS:/root# zfs list
NAME                                         USED  AVAIL  REFER MOUNTPOINT
DATA                                         435G   774G    23K none
DATA/Backup                                  433G   774G   433G /Backup
DATA/Zones2                                 2,13G   774G    24K /Zones2
DATA/Zones2/NGINX                            959M   774G    24K  /Zones2/NGINX
DATA/Zones2/NGINX/ROOT                       959M   774G    23K  legacy
DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K  /export
DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K  /export/home
DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K  /export/home/witte
DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M  legacy
DATA/Zones2/Percona                         1,19G   774G    24K  /Zones2/Percona
DATA/Zones2/Percona/ROOT                    1,19G   774G    23K  legacy
DATA/Zones2/Percona/ROOT/export               71K   774G    23K  /export
DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K  /export/home
DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K  /export/home/witte
DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G  legacy


So in my opinion zoneadm does not create the filesystem automatically,
but it does clean up the filesystem afterwards as it should.


That makes me wonder whether some logic is missing in zoneadm for
creating the filesystem automatically; after all, cleaning up after
uninstalling the zone works as designed ...
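
In shell terms, the missing step seems to amount to something like the
following rough sketch (dataset and path names are the ones from my
example; zoneadm would have to derive them from the zonepath itself):

# resolve the dataset backing the parent directory of the zonepath
parent_ds=$(zfs list -H -o name /Zones2)       # -> DATA/Zones2
# create the zonepath dataset up front, before the sanity check runs
zfs create -o mountpoint=/Zones2/Test "${parent_ds}/Test"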


Kind Regards,

Dirk


On 12-02-17 20:46, Jim Klimov wrote:
> On 12 February 2017 16:08:11 CET, Dirk Willems <dirk.willems at exitas.be> wrote:
>> Hi Jim,
>>
>>
>> Thank you for your feedback and your thoughts; I ran some tests to
>> clarify my experience ...
>>
>>
>> *_On Solaris 11_*
>>
>> root@soledg14:~# zpool create -O mountpoint=none -O compression=lz4 \
>>     Testpool c0t60060160F4D0320042F89EFA1FF1E611d0
>>
>> root@soledg14:~# zpool list
>> Testpool            49.8G   122K  49.7G   0%  1.00x  ONLINE  -
>> rpool                278G  83.7G   194G  30%  1.00x  ONLINE  -
>>
>> root@soledg14:~# zfs list
>> Testpool                                          143K  49.0G    31K  none
>> Testpool/TestZones                                 31K  49.0G    31K  /TestZones
>>
>> root@soledg14:~# zoneadm list -civ
>>   - TestZone         configured  /TestZones/TestZone solaris    excl
>>
>> root@soledg14:~# zoneadm -z TestZone install
>> The following ZFS file system(s) have been created:
>>      Testpool/TestZones/TestZone
>> Progress being logged to
>> /var/log/zones/zoneadm.20170212T140109Z.TestZone.install
>>         Image: Preparing at /TestZones/TestZone/root.
>>
>>   Install Log: /system/volatile/install.16565/install_log
>>   AI Manifest: /tmp/manifest.xml.vvcbTb
>>    SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
>>      Zonename: TestZone
>> Installation: Starting ...
>>
>>          Creating IPS image
>> Startup linked: 1/1 done
>>          Installing packages from:
>>              solaris
>>               origin: http://oracle-oem-oc-mgmt-ops-proxy-edg:8002/IPS/
>>              cacao
>>               origin: http://oracle-oem-oc-mgmt-ops-proxy-edg:8002/IPS/
>>              mp-re
>>               origin: http://oracle-oem-oc-mgmt-ops-proxy-edg:8002/IPS/
>>              opscenter
>>               origin: http://oracle-oem-oc-mgmt-ops-proxy-edg:8002/IPS/
>> DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
>> Completed                            285/285   55878/55878  479.8/479.8  5.9M/s
>>
>> PHASE                                          ITEMS
>> Installing new actions                   73669/73669
>> Updating package state database                 Done
>> Updating package cache                           0/0
>> Updating image state                            Done
>> Creating fast lookup database                   Done
>> Updating package cache                           4/4
>> Installation: Succeeded
>>
>>        Note: Man pages can be obtained by installing pkg:/system/manual
>>
>>   done.
>>
>>          Done: Installation completed in 221.859 seconds.
>>
>>
>>   Next Steps: Boot the zone, then log into the zone console (zlogin -C)
>>
>>                to complete the configuration process.
>>
>> Log saved in non-global zone as
>> /TestZones/TestZone/root/var/log/zones/zoneadm.20170212T140109Z.TestZone.install
>>
>>
>> root@soledg14:~# zfs list
>> Testpool                                             932M  48.1G    31K  none
>> Testpool/TestZones                                   931M  48.1G    32K  /TestZones
>> Testpool/TestZones/TestZone                          931M  48.1G    32K  /TestZones/TestZone
>> Testpool/TestZones/TestZone/rpool                    931M  48.1G    31K  /rpool
>> Testpool/TestZones/TestZone/rpool/ROOT               931M  48.1G    31K  legacy
>> Testpool/TestZones/TestZone/rpool/ROOT/solaris       931M  48.1G   847M  /TestZones/TestZone/root
>> Testpool/TestZones/TestZone/rpool/ROOT/solaris/var  84.3M  48.1G  83.4M  /TestZones/TestZone/root/var
>> Testpool/TestZones/TestZone/rpool/VARSHARE            31K  48.1G    31K  /var/share
>> Testpool/TestZones/TestZone/rpool/export              62K  48.1G    31K  /export
>> Testpool/TestZones/TestZone/rpool/export/home         31K  48.1G    31K  /export/home
>>
>>
>> So Solaris 11 allows you to install a zone even if the parent
>> mountpoint = none.
>>
>>
>> *_I also did some tests on OmniOS_*
>>
>> root@OmniOS:/root# zfs list
>> NAME                                         USED  AVAIL  REFER  MOUNTPOINT
>> DATA                                         435G   774G    23K  /LXZones
>> DATA/Backup                                  432G   774G   432G  /Backup
>> DATA/LXMatterMost                            408M   774G   347M  /LXZones/LXMatterMost
>> DATA/Zones2                                 2,13G   774G    23K  /Zones2
>> DATA/Zones2/NGINX                            959M   774G    24K  /Zones2/NGINX
>> DATA/Zones2/NGINX/ROOT                       959M   774G    23K  legacy
>> DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K  /export
>> DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K  /export/home
>> DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K  /export/home/witte
>> DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M  legacy
>> DATA/Zones2/Percona                         1,19G   774G    24K  /Zones2/Percona
>> DATA/Zones2/Percona/ROOT                    1,19G   774G    23K  legacy
>> DATA/Zones2/Percona/ROOT/export               71K   774G    23K  /export
>> DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K  /export/home
>> DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K  /export/home/witte
>> DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G  legacy
>>
>>
>> root@OmniOS:/root# zfs set mountpoint=none DATA
>> root@OmniOS:/root# zfs list
>> NAME                                         USED  AVAIL  REFER  MOUNTPOINT
>> DATA                                         435G   774G    23K  none
>> DATA/Backup                                  432G   774G   432G  /Backup
>> DATA/LXMatterMost                            408M   774G   347M  none
>> DATA/Zones2                                 2,13G   774G    23K  /Zones2
>> DATA/Zones2/NGINX                            959M   774G    24K  /Zones2/NGINX
>> DATA/Zones2/NGINX/ROOT                       959M   774G    23K  legacy
>> DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K  /export
>> DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K  /export/home
>> DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K  /export/home/witte
>> DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M  legacy
>> DATA/Zones2/Percona                         1,19G   774G    24K  /Zones2/Percona
>> DATA/Zones2/Percona/ROOT                    1,19G   774G    23K  legacy
>> DATA/Zones2/Percona/ROOT/export               71K   774G    23K  /export
>> DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K  /export/home
>> DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K  /export/home/witte
>> DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G  legacy
>>
>> root@OmniOS:/root# zonecfg -z Test
>> Test: No such zone configured
>> Use 'create' to begin configuring a new zone.
>> zonecfg:Test> create
>> zonecfg:Test> set zonepath=/Zones2/Test
>> zonecfg:Test> info
>> zonecfg:Test> exit
>> root@OmniOS:/root# zonecfg -z Test info
>> zonename: Test
>> zonepath: /Zones2/Test
>> brand: ipkg
>> autoboot: false
>> bootargs:
>> pool:
>> limitpriv:
>> scheduling-class:
>> ip-type: shared
>> hostid:
>> fs-allowed:
>> root@OmniOS:/root#
>>
>> I am not creating the filesystem DATA/Zones2/Test here!
>>
>>
>> root@OmniOS:/root# zfs list
>> NAME                                         USED  AVAIL  REFER  MOUNTPOINT
>> DATA                                         435G   774G    23K  none
>> DATA/Backup                                  432G   774G   432G  /Backup
>> DATA/LXMatterMost                            408M   774G   347M  none
>> DATA/Zones2                                 2,13G   774G    23K  /Zones2
>> DATA/Zones2/NGINX                            959M   774G    24K  /Zones2/NGINX
>> DATA/Zones2/NGINX/ROOT                       959M   774G    23K  legacy
>> DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K  /export
>> DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K  /export/home
>> DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K  /export/home/witte
>> DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M  legacy
>> DATA/Zones2/Percona                         1,19G   774G    24K  /Zones2/Percona
>> DATA/Zones2/Percona/ROOT                    1,19G   774G    23K  legacy
>> DATA/Zones2/Percona/ROOT/export               71K   774G    23K  /export
>> DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K  /export/home
>> DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K  /export/home/witte
>> DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G  legacy
>>
>> root@OmniOS:/root# zoneadm list -civ
>>    ID NAME             STATUS     PATH                   BRAND    IP
>>     0 global           running    /                      ipkg     shared
>>     - DNS              installed  /Zones/DNS             lipkg    excl
>>     - LXDebian8        installed  /Zones/LXDebian8       lx       excl
>>     - LXMatterMost     installed  /LXZones/LXMatterMost  lx       excl
>>     - Percona          installed  /Zones2/Percona        lipkg    excl
>>     - NGINX            installed  /Zones2/NGINX          lipkg    excl
>>     - Test             configured /Zones2/Test           ipkg     shared
>>
>> root@OmniOS:/root# zoneadm -z Test install
>> WARNING: zone LXMatterMost is installed, but its zonepath
>> /LXZones/LXMatterMost does not exist.
>> Sanity Check: Looking for 'entire' incorporation.
>> ERROR: the zonepath must be a ZFS dataset.
>> The parent directory of the zonepath must be a ZFS dataset so that the
>> zonepath ZFS dataset can be created properly.
>> root@OmniOS:/root#
>>
>> At this point I had to uninstall the LXMatterMost zone, because I set
>> mountpoint=none on the DATA pool and the LXMatterMost zone depends on
>> the mountpoint=/LXZones of the DATA pool.
>>
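>> (The old mountpoint could presumably have been restored afterwards
>> with zfs set mountpoint=/LXZones DATA, but for this test DATA stays
>> at mountpoint=none.)
>>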
>> Then I also created a filesystem DATA/Zones2/Test
>>
>>
>> root@OmniOS:/root# zfs create -o mountpoint=/Zones2/Test DATA/Zones2/Test
>> root@OmniOS:/root# zfs list
>> NAME                                         USED  AVAIL  REFER  MOUNTPOINT
>> DATA                                         435G   774G    23K  none
>> DATA/Backup                                  432G   774G   432G  /Backup
>> DATA/Zones2                                 2,13G   774G    23K  /Zones2
>> DATA/Zones2/NGINX                            959M   774G    24K  /Zones2/NGINX
>> DATA/Zones2/NGINX/ROOT                       959M   774G    23K  legacy
>> DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K  /export
>> DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K  /export/home
>> DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K  /export/home/witte
>> DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M  legacy
>> DATA/Zones2/Percona                         1,19G   774G    24K  /Zones2/Percona
>> DATA/Zones2/Percona/ROOT                    1,19G   774G    23K  legacy
>> DATA/Zones2/Percona/ROOT/export               71K   774G    23K  /export
>> DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K  /export/home
>> DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K  /export/home/witte
>> DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G  legacy
>> DATA/Zones2/Test                              23K   774G    23K  /Zones2/Test
>>
>>
>> And now I am allowed to install the zone :)
>>
>>
>> root@OmniOS:/root# zoneadm -z Test install
>> /Zones2/Test must not be group readable.
>> /Zones2/Test must not be group executable.
>> /Zones2/Test must not be world readable.
>> /Zones2/Test must not be world executable.
>> /Zones2/Test: changing permissions to 0700.
>> Sanity Check: Looking for 'entire' incorporation.
>>         Image: Preparing at /Zones2/Test/root.
>>
>>     Publisher: Using omnios (http://pkg.omniti.com/omnios/r151020/).
>>         Cache: Using /var/pkg/publisher.
>>    Installing: Packages (output follows)
>> Packages to install: 388
>> Mediators to change:   1
>>   Services to change:   5
>>
>> DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
>> Completed                            388/388   37406/37406  332.0/332.0    0B/s
>>
>> PHASE                                          ITEMS
>> Installing new actions                   61025/61025
>> Updating package state database                 Done
>> Updating package cache                           0/0
>> Updating image state                            Done
>> Creating fast lookup database                   Done
>>
>>        Note: Man pages can be obtained by installing pkg:/system/manual
>>   Postinstall: Copying SMF seed repository ... done.
>>          Done: Installation completed in 93,895 seconds.
>>
>>   Next Steps: Boot the zone, then log into the zone console (zlogin -C)
>>                to complete the configuration process
>>
>> root@OmniOS:/root# zoneadm -z Test boot; zlogin -C Test
>> [Connected to zone 'Test' console]
>> Loading smf(5) service descriptions: 113/113
>> Hostname: Test
>>
>> Test console login:
>>
>> root@OmniOS:/root# zfs list
>> NAME                                         USED  AVAIL  REFER  MOUNTPOINT
>> DATA                                         435G   774G    23K  none
>> DATA/Backup                                  433G   774G   433G  /Backup
>> DATA/Zones2                                 2,66G   774G    24K  /Zones2
>> DATA/Zones2/NGINX                            959M   774G    24K  /Zones2/NGINX
>> DATA/Zones2/NGINX/ROOT                       959M   774G    23K  legacy
>> DATA/Zones2/NGINX/ROOT/export                 71K   774G    23K  /export
>> DATA/Zones2/NGINX/ROOT/export/home            48K   774G    23K  /export/home
>> DATA/Zones2/NGINX/ROOT/export/home/witte      25K   774G    25K  /export/home/witte
>> DATA/Zones2/NGINX/ROOT/zbe                   959M   774G   959M  legacy
>> DATA/Zones2/Percona                         1,19G   774G    24K  /Zones2/Percona
>> DATA/Zones2/Percona/ROOT                    1,19G   774G    23K  legacy
>> DATA/Zones2/Percona/ROOT/export               71K   774G    23K  /export
>> DATA/Zones2/Percona/ROOT/export/home          48K   774G    23K  /export/home
>> DATA/Zones2/Percona/ROOT/export/home/witte    25K   774G    25K  /export/home/witte
>> DATA/Zones2/Percona/ROOT/zbe                1,19G   774G  1,19G  legacy
>> DATA/Zones2/Test                             540M   774G    23K  /Zones2/Test
>> DATA/Zones2/Test/ROOT                        540M   774G    23K  legacy
>> DATA/Zones2/Test/ROOT/zbe                    540M   774G   540M  legacy
>>
>> So that's why I ask the question :)
>>
>> In my opinion some logic is missing in zoneadm; we shouldn't need to
>> set a parent mountpoint, as I proved with the example above.
>>
>> Maybe someone can have a look at the code; it would be great to fix
>> this. I would do it myself if I had the knowledge, but unfortunately
>> I'm not as smart and intelligent as you guys, sorry :(
>>
>> I wish ;)
>>
>>
>> Kind Regards,
>>
>>
>> Dirk
>>
>>
>> On 10-02-17 22:19, Jim Klimov wrote:
>>> On 9 February 2017 18:09:04 CET, Dirk Willems <dirk.willems at exitas.be> wrote:
>>>> Thanks Dan,
>>>>
>>>>
>>>> I was just wondering about it, because in Solaris it's allowed and
>>>> we do it all the time; it has become automatic behavior for us. But
>>>> now that we know, we can arrange it like you do.
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On 09-02-17 17:41, Dan McDonald wrote:
>>>>>> On Feb 9, 2017, at 11:28 AM, Dirk Willems <dirk.willems at exitas.be>
>>>>>> wrote:
>>>>>> Hello OmniOS,
>>>>>>
>>>>>> Why isn't it allowed to install an LX-Zone or Zone if the DATA
>>>>>> pool doesn't have a mountpoint on the parent?
>>>>>> The example below doesn't allow you to install a Zone or LX-Zone:
>>>>>>
>>>>>> root@OmniOS:/root# zfs list
>>>>>> NAME                           USED  AVAIL  REFER  MOUNTPOINT
>>>>>> DATA                           432G   777G    23K  none
>>>>>> DATA/Backup                    432G   777G   432G  /Backup
>>>>>> DATA/LXZones                    23K   777G    23K  /LXZones
>>>>>>
>>>>>>
>>>>>> The example below does allow you to install a Zone or LX-Zone:
>>>>>>
>>>>>> root@OmniOS:/root# zfs list
>>>>>> NAME                           USED  AVAIL  REFER  MOUNTPOINT
>>>>>> DATA                           433G   776G    23K  /LXZones
>>>>>> DATA/Backup                    432G   776G   432G  /Backup
>>>>>> DATA/LXMatterMost              229M   776G   228M  /LXZones/LXMatterMost
>>>>>> It's kind of annoying, because I like to make separate filesystems
>>>>>> to have a nice overview :)
>>>>> All zones (not just LX) need to be a subdirectory of a higher-level
>>>>> ZFS filesystem.  This is so zoneadm(1M) can create the zone root.
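>>>>>
>>>>> A minimal sketch of that layout (the zone name "myzone" and pool
>>>>> "data" are placeholders; the point is that the parent dataset is a
>>>>> mounted ZFS filesystem):
>>>>>
>>>>> zfs create -o mountpoint=/zones data/zones
>>>>> zonecfg -z myzone 'create; set zonepath=/zones/myzone'
>>>>> zoneadm -z myzone install   # zoneadm creates data/zones/myzone itself
>>>>>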
>>>>> bloody(~)[0]% zfs list | grep zones | grep -v zbe
>>>>> data/zones                              3.90G   509G    23K  /zones
>>>>> data/zones/lipkg0                       3.14G   509G    24K  /zones/lipkg0
>>>>> data/zones/lipkg0/ROOT                  3.14G   509G    23K  legacy
>>>>> data/zones/lx0                           465M   509G   465M  /zones/lx0
>>>>> data/zones/lx1                           315M   509G   239M  /zones/lx1
>>>>> bloody(~)[0]%
>>>>>
>>>>> My bloody box has zones named per their brand, so you can see what
>>>>> I mean.
>>>>> Dan
>>>>>
>>> Just in case, are you not mistaking zone roots here for delegated
>>> datasets (e.g. for data or local progs, but with the zfs structure
>>> managed from inside the zone)? This is indeed usable even from
>>> different pools in Solaris 10 up to OpenIndiana at least.
>>> --
>>> Typos courtesy of K-9 Mail on my Samsung Android
> Well, it seems there is some mix of apples and oranges here - but you
> don't set a mountpoint on the pool (or rather its root dataset), while
> you do have a mountpoint on a sub-dataset dedicated to zones (Zones2
> and TestZones) so the system can find it. It would be more interesting
> if these did not exist with such names and/or mountpoints before the
> zoneadm call. Otherwise, if there is indeed an error or point of
> improvement, it is likely in the brand scripts and should be easily
> fixable.
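>
> (One way to see which brand script emits that message - a hedged,
> untested sketch - is to search the brand directory for it:
>
> grep -lr 'must be a ZFS dataset' /usr/lib/brand
> )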
>
> Jim
> --
> Typos courtesy of K-9 Mail on my Samsung Android

-- 

Dirk Willems
System Engineer

+32 (0)3 443 12 38
dirk.willems at exitas.be
Veldkant 31 - 2550 Kontich
www.exitas.be <http://www.exitas.be>
Disclaimer: http://www.exitas.be/legal/disclaimer/
