It will take a bit more to understand the issue, but that assertion indicates that zfs was unable to read the configs (labels) off the vdevs.  And it's a crappy error message, *I think* due to a bug in the error handling.  However, there is an open illumos ticket about GPT support.  It is unclear whether GPT is unsupported entirely or only in boot scenarios.  If there is a complete lack of support, that would explain the issue.<div>
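One way to test that diagnosis directly is to ask zdb for the labels on each disk, bypassing zpool import entirely. A minimal sketch, guarded so it does nothing where zdb is absent; the /dev/rdsk paths are an assumption and will differ per controller:

```shell
# Hypothetical label check: dump the ZFS label from each whole-disk slice.
# A readable label here means the on-disk config is intact and the failure
# is in device enumeration, not in the pool itself.
if command -v zdb >/dev/null 2>&1; then
    for dev in /dev/rdsk/c*t*d*s0; do
        echo "== $dev =="
        zdb -l "$dev"
    done
else
    echo "zdb not available on this system"
fi
```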
<br></div><div><a href="https://www.illumos.org/issues/208">https://www.illumos.org/issues/208</a><br><br><div class="gmail_quote">On Fri, Dec 28, 2012 at 2:49 AM, Alistair Harding <span dir="ltr"><<a href="mailto:alistair.harding@gmail.com" target="_blank">alistair.harding@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Apologies in advance for this; I haven't filed a bug report in years, and I'm entirely useless with Solaris.<div>
<br></div><div>System is a Supermicro SC-847 Chassis, 36 2TB drives, LSI 9265 for the front 24 drives (all exported as JBOD) and a LSI SAS2308 based card as an HBA.</div>
<div><br></div><div>zpool was originally created under ZFSOnLinux 0.6.0-rc13, which is filesystem version 5, pool version 28.</div><div><br></div><div>After installing OmniOS Bloody 20121107,  I attempted to do a zpool import Storage</div>

<div><br></div><div>Error:  Assertion failed: rn->rn_nozpool == B_FALSE, file ../common/libzfs_import.c, line 1086, function zpool_open_func from omnios</div><div><br></div><div>zpool crashed and dumped a 9 MB core.</div>
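For reference, a stack trace pulled from that core would show exactly where the assertion fired; on illumos the debugger for this is mdb. A sketch, under the assumption that the dump file is named core in the current directory:

```shell
# Print the thread stack from the crashed zpool core with the illumos
# debugger.  The path "core" is an assumption; substitute the actual
# dump location, and guard so this is harmless where mdb is absent.
if command -v mdb >/dev/null 2>&1; then
    echo '::stack' | mdb core
else
    echo "mdb not available on this system"
fi
```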

<div><br></div><div>I then booted FreeBSD 9.1-RELEASE (supposedly) and was able to import the pool.</div><div><br></div><div><div># zpool status</div><div>  pool: Storage</div><div> state: ONLINE</div>
<div>  scan: scrub in progress since Thu Dec 27 18:08:44 2012</div><div>        16.4T scanned out of 38.2T at 856M/s, 7h27m to go</div><div>        0 repaired, 42.76% done</div><div>config:</div><div><br></div><div>        NAME                                            STATE     READ WRITE CKSUM</div>

<div>        Storage                                         ONLINE       0     0     0</div><div>          raidz3-0                                      ONLINE       0     0     0</div><div>            mfid0p1                                     ONLINE       0     0     0</div>

<div>            mfid1p1                                     ONLINE       0     0     0</div><div>            mfid2p1                                     ONLINE       0     0     0</div><div>            mfid3p1                                     ONLINE       0     0     0</div>

<div>            mfid4p1                                     ONLINE       0     0     0</div><div>            mfid5p1                                     ONLINE       0     0     0</div><div>            mfid6p1                                     ONLINE       0     0     0</div>

<div>            mfid7p1                                     ONLINE       0     0     0</div><div>          raidz3-1                                      ONLINE       0     0     0</div><div>            mfid8p1                                     ONLINE       0     0     0</div>

<div>            mfid9p1                                     ONLINE       0     0     0</div><div>            mfid10p1                                    ONLINE       0     0     0</div><div>            mfid11p1                                    ONLINE       0     0     0</div>

<div>            mfid12p1                                    ONLINE       0     0     0</div><div>            mfid13p1                                    ONLINE       0     0     0</div><div>            mfid14p1                                    ONLINE       0     0     0</div>

<div>            mfid15p1                                    ONLINE       0     0     0</div><div>          raidz3-2                                      ONLINE       0     0     0</div><div>            mfid16p1                                    ONLINE       0     0     0</div>

<div>            mfid17p1                                    ONLINE       0     0     0</div><div>            mfid18p1                                    ONLINE       0     0     0</div><div>            mfid19p1                                    ONLINE       0     0     0</div>

<div>            mfid20p1                                    ONLINE       0     0     0</div><div>            mfid21p1                                    ONLINE       0     0     0</div><div>            mfid22p1                                    ONLINE       0     0     0</div>

<div>            mfid23p1                                    ONLINE       0     0     0</div><div>          raidz3-3                                      ONLINE       0     0     0</div><div>            da5p1                                       ONLINE       0     0     0</div>

<div>            da9p1                                       ONLINE       0     0     0</div><div>            gptid/cf9318bd-c045-3d4b-b685-31498a450171  ONLINE       0     0     0</div><div>            da4p1                                       ONLINE       0     0     0</div>

<div>            da10p1                                      ONLINE       0     0     0</div><div>            da2p1                                       ONLINE       0     0     0</div><div>            da6p1                                       ONLINE       0     0     0</div>

<div>            da8p1                                       ONLINE       0     0     0</div><div>        spares</div><div>          da12p1                                        AVAIL</div><div>          da11p1                                        AVAIL</div>

<div>          da3p1                                         AVAIL</div><div>          da1p1                                         AVAIL</div><div><br></div><div>errors: No known data errors</div><div><br></div><div>
The only two issues I can think of are that OmniOS may not like the one drive being GPT, and that the array was in the middle of a scrub before I exported it from Linux. However, FreeBSD doesn't seem to have an issue with either of these things.</div>

<div><br></div><div>The only thing I did after installing OmniOS was log in, configure my network, and then attempt to zpool import Storage. /dev/dsk looked like <a href="http://pastebin.com/huVBqrvH" target="_blank">http://pastebin.com/huVBqrvH</a>. I tried a fix from one mailing-list post (mv /dev/dsk /dev/dsk-old, the same for rdsk, then restarting a service), but the issue persisted.</div>
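For what it's worth, the supported way to rebuild stale /dev/dsk and /dev/rdsk links on illumos is devfsadm rather than moving the directories aside; a hedged sketch (run as root):

```shell
# Rebuild /dev/dsk and /dev/rdsk symlinks on illumos/OmniOS.
# -C cleans up dangling links, -v lists every change made.
# Guarded so the sketch is harmless on systems without devfsadm.
if command -v devfsadm >/dev/null 2>&1; then
    devfsadm -Cv
else
    echo "devfsadm not available on this system"
fi
```

After the links are rebuilt, ls /dev/dsk should show a c*t*d* entry for every disk before retrying the import.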

</div></div>
<br>_______________________________________________<br>
OmniOS-discuss mailing list<br>
<a href="mailto:OmniOS-discuss@lists.omniti.com">OmniOS-discuss@lists.omniti.com</a><br>
<a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss" target="_blank">http://lists.omniti.com/mailman/listinfo/omnios-discuss</a><br>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br>
<p>Theo Schlossnagle</p>
<p><a href="http://omniti.com/is/theo-schlossnagle" target="_blank">http://omniti.com/is/theo-schlossnagle</a></p>
</div>