<div dir="ltr">Sorry, adding the list. <br><div>zpool status shows the following; it does not show the size of the pool anywhere in its output:<br><br>root@omni:~# zpool status acipool<br> pool: acipool<br> state: ONLINE<br> scan: scrub repaired 0 in 0h32m with 0 errors on Wed Sep 25 13:41:38 2013<br>
config:<div class="im"><br><br> NAME STATE READ WRITE CKSUM<br> acipool ONLINE 0 0 0<br> raidz2-0 ONLINE 0 0 0<br> c2t0d0 ONLINE 0 0 0<br>
c2t1d0 ONLINE 0 0 0<br>
c2t2d0 ONLINE 0 0 0<br> c2t3d0 ONLINE 0 0 0<br><br></div>errors: No known data errors<br><br></div>It is quite confusing that every command shows very different output; maybe it is my lack of understanding of the technology. <br>
<br><br>root@omni:~# df -h<div class="im"><br>Filesystem Size Used Avail Use% Mounted on<br></div>rpool/ROOT/napp-it-0.9b3 <div> 27G 20G 7.0G 75% /<br>swap 5.1G 336K 5.1G 1% /etc/svc/volatile<br>
/usr/lib/libc/libc_hwcap1.so.1 27G 20G 7.0G 75% /lib/libc.so.1<br>swap 5.1G 1.5M 5.1G 1% /tmp<br>swap 5.1G 52K 5.1G 1% /var/run<br>acipool 284G 31G 253G 11% /acipool<br>
acipool/aci-nfs 253G 48K 253G 1% /acipool/aci-nfs<br>acipool/cmdnfs 422G 170G 253G 41% /acipool/cmdnfs<br>acipool/iscsi 253G 59K 253G 1% /acipool/iscsi<br>rpool/export 7.0G 32K 7.0G 1% /export<br>
rpool/export/home 7.0G 31K 7.0G 1% /export/home<br>rpool 7.0G 39K 7.0G 1% /rpool<br><br>root@omni:~# zfs list acipool<br>NAME USED AVAIL REFER MOUNTPOINT<br>acipool 202G 253G 30.6G /acipool<br>
<br>root@omni:~# zpool list<br>NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT<br>acipool 928G 406G 522G - 43% 1.00x ONLINE -<br>rpool 37G 27.4G 9.60G - 74% 1.00x ONLINE -<br>
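A note on reading the df -h output above: for ZFS datasets, the df Size column appears to be each dataset's own Used plus the pool's shared Avail, which is why the per-dataset sizes overlap and do not sum to the pool size. A minimal sketch of that arithmetic, checked against the numbers above (the Used-plus-shared-Avail interpretation is an assumption, not something stated in this thread):

```python
# Sketch: for ZFS datasets, df "Size" seems to be the dataset's own Used
# plus the pool-wide shared Avail (assumption, checked against df -h above).
avail = 253  # shared pool Avail from the df -h output, in G

# (used, df_reported_size) pairs taken from the df -h output above
datasets = {
    "acipool":        (31, 284),
    "acipool/cmdnfs": (170, 422),
}

for name, (used, size) in datasets.items():
    # Used + shared Avail lands within rounding of the df Size column
    print(f"{name}: {used} + {avail} = {used + avail} (df shows {size})")
```

With this reading, acipool's 284G is 31G used plus the 253G everyone shares, and cmdnfs's 422G is 170G plus the same 253G (within rounding), so none of the df sizes are independent pool capacities.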
<br><div><div class="gmail_extra"><br></div><div class="gmail_extra">df -h shows a size of 284G.<br></div><div class="gmail_extra">zfs list shows 202G used and 253G available, which means 202+253 = 455G.<br></div><div class="gmail_extra">
zpool list shows a size of 928G, which also does not add up. Even if it is reporting the raw size of the disks rather than the usable size, it still does not match:<br>root@omni:~# parted -l | grep Disk<br>Disk /dev/dsk/c1d0p0: 40.0GB<br>
Disk /dev/dsk/c2t0d0p0: 320GB<br>Disk /dev/dsk/c2t1d0p0: 500GB<br>Disk /dev/dsk/c2t2d0p0: 250GB<br>Disk /dev/dsk/c2t3d0p0: 250GB<br><br></div><div class="gmail_extra">If I calculate the raw physical size, 320+500+250+250 = 1320G, yet zpool list shows 928G.<br>
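One way to reconcile the 928G figure, as a sketch: it assumes raidz2 sizes every vdev member down to the smallest disk (250GB here), and that zpool list reports binary GiB while parted reports decimal GB. Both are assumptions about tool behavior, not facts stated in this thread:

```python
# Sketch: raidz2 vdev capacity with mixed-size disks.
# Assumption: ZFS sizes every member down to the smallest disk (250 GB),
# and zpool list reports GiB while parted reports decimal GB.
GB = 1000**3
GiB = 1024**3

disks_gb = [320, 500, 250, 250]  # from the parted -l output above
smallest = min(disks_gb)         # 250 GB governs every member

# raidz2 raw size = number of disks x smallest member (parity included)
raw_gib = len(disks_gb) * smallest * GB / GiB
print(f"raw pool size ~= {raw_gib:.0f} GiB")   # close to the 928G zpool list shows

# usable = (disks - 2 parity) x smallest member, before metadata overhead
usable_gib = (len(disks_gb) - 2) * smallest * GB / GiB
print(f"usable ~= {usable_gib:.0f} GiB")       # same ballpark as the 455G from zfs list
```

Under those assumptions the raw size comes out around 931 GiB (a few GiB above the reported 928G, plausibly label/partition overhead), and the usable figure around 466 GiB, close to the 455G that zfs list's USED+AVAIL implies; the extra capacity on the 320GB and 500GB disks would simply go unused.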
<br></div><div class="gmail_extra">Any help will be highly appreciated.<br></div><div class="gmail_extra">Thanks,</div></div></div><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Sep 27, 2013 at 12:03 AM, Richard Elling <span dir="ltr"><<a href="mailto:richard.elling@richardelling.com" target="_blank">richard.elling@richardelling.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><br><div><div class="im"><div>On Sep 26, 2013, at 2:41 AM, Muhammad Yousuf Khan <<a href="mailto:sirtcp@gmail.com" target="_blank">sirtcp@gmail.com</a>> wrote:</div>
<br><blockquote type="cite"><div dir="ltr"><div><div><div>Is this a bug or the result of a wrong configuration? To me it seems like a bug. <br>I noticed that one of my filesystems on top of a pool (acipool) has grown instead of the base pool.<br>
</div></div></div></div></blockquote><div><br></div></div><div>The zfs command does not show the size of the pool. What does "zpool status" show for the</div><div>size of the pool?</div><div> -- richard</div><br>
<blockquote type="cite"><div><div class="h5"><div dir="ltr"><div><div><div><br>Filesystem Size Used Avail Use% Mounted on<br>
rpool/ROOT/napp-it-0.9b3 27G 19G 8.3G 70% /<br>swap 13G 340K 13G 1% /etc/svc/volatile<br>/usr/lib/libc/libc_hwcap1.so.1 27G 19G 8.3G 70% /lib/libc.so.1<br>swap 13G 0 13G 0% /tmp<br>
swap 13G 52K 13G 1% /var/run<br>acipool 284G 25G 259G 9% /acipool<br>acipool/aci-nfs 259G 48K 259G 1% /acipool/aci-nfs<br>acipool/cmdnfs 428G 170G 259G 40% /acipool/cmdnfs<br>
acipool/iscsi 259G 58K 259G 1% /acipool/iscsi<br><br><br></div>As you can see, acipool/cmdnfs has grown to 428GB even though it resides on acipool; in reality it cannot be larger than acipool itself.<br></div><br></div>
<div>
Here are some other important findings.<br></div><div><br>NAME PROPERTY VALUE SOURCE<br>acipool/cmdnfs type filesystem -<br>acipool/cmdnfs creation Tue Sep 10 12:33 2013 -<br>
acipool/cmdnfs used 169G -<br>acipool/cmdnfs available 258G -<br>acipool/cmdnfs referenced 169G -<br>acipool/cmdnfs compressratio 1.00x -<br>
acipool/cmdnfs mounted yes -<br>acipool/cmdnfs quota none default<br>acipool/cmdnfs reservation none default<br>acipool/cmdnfs recordsize 128K default<br>
acipool/cmdnfs mountpoint /acipool/cmdnfs default<br>acipool/cmdnfs sharenfs rw local<br>Every sub-filesystem has grown to the actual disk size.<br><br></div><div>As you can see, the FS size reported by the zfs command is still the same 258GB, which the df -h command previously showed as 429GB.<br>
</div><div><br></div><div>Any ideas? <br><br>Is this a bug or my own misconfiguration? Can anybody shed some light on this matter?<br><br>Thanks,<br><br>Myk<br></div><div><br></div><div><br></div></div></div></div>
_______________________________________________<br>OmniOS-discuss mailing list<br><a href="mailto:OmniOS-discuss@lists.omniti.com" target="_blank">OmniOS-discuss@lists.omniti.com</a><br><a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss" target="_blank">http://lists.omniti.com/mailman/listinfo/omnios-discuss</a><br>
</blockquote></div><br><div>
<div>--</div><div><br></div><div><a href="mailto:Richard.Elling@RichardElling.com" target="_blank">Richard.Elling@RichardElling.com</a><br><a href="tel:%2B1-760-896-4422" value="+17608964422" target="_blank">+1-760-896-4422</a><br><br></div>
</div>
<br></div></blockquote></div><br></div>