[OmniOS-discuss] issue importing zpool on S11.1 from omniOS LUNs

Dale Ghent daleg at omniti.com
Wed Jan 25 17:37:31 UTC 2017


Oh, OK, I misunderstood you as trying to import illumos vdevs directly onto an Oracle Solaris server.

This line:

>>> status: The pool was last accessed by another system.

indicates that the zpool was not cleanly exported (or not exported at all) from the system it was last imported on, so that system's hostid is still recorded in the zpool's vdev labels rather than the hostid of the system you are now trying to import it on.

Have you tried 'zpool import -f vsmPool10'?
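
Before forcing it, you can confirm the mismatch by comparing the local hostid with the one recorded in a vdev label. A minimal sketch (the LUN path and slice suffix below are placeholders for one of your LUNs, adjust as needed):

hostid
zdb -l /dev/rdsk/c0t600144F07A3506580000569398F60001d0s0 | grep -i hostid

If the label reports a different hostid than the local one, 'zpool import -f' is the expected way to take ownership of the pool.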

/dale

> On Jan 25, 2017, at 12:14 PM, Stephan Budach <stephan.budach at jvm.de> wrote:
> 
> Hi Dale,
> 
> I know that, and it's not that I am trying to import an S11.1 zpool on omniOS or vice versa. It's that the targets are omniOS and the initiator is S11.1; I am still trying to import the zpool on S11.1. My question was more directed at COMSTAR, where both systems should still have a fair amount of overlap, no?
> 
> I am in contact with Oracle, and they mentioned some issues with zvols behind iSCSI targets that may be present on both systems, so I thought I'd give it a shot, that's all.
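> 
> On the COMSTAR side, a quick sanity check on each omniOS target would look roughly like this (just a sketch of commands to run on the target boxes, nothing specific to my setup):
> 
> # state of the logical units backing the zvol-based LUNs
> stmfadm list-lu -v
> # state of the iSCSI targets and their sessions
> itadm list-target -v
> # health of the backing pool on the target
> zpool status -x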
> 
> Cheers
> Stephan
> 
> On 25.01.17 at 18:07, Dale Ghent wrote:
>> ZFS as implemented in Oracle Solaris is *not* OpenZFS, which is what illumos (and all illumos distros), FreeBSD, and the ZFS on Linux/macOS projects use. Up to a certain feature level the two are compatible, but beyond that they diverge. If a pool has features that the importing host's zfs driver does not understand, you run the risk of the import being refused, as indicated here.
>> 
>> Since Oracle does not include OpenZFS features in its ZFS implementation and does not provide any information to OpenZFS about the features it invents, this will remain the state of things unless Oracle changes its open-source or information-sharing policies. Unfortunate, but that's just the way things are.
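>> 
>> A quick way to see where each side stands (sketch; run on the respective hosts):
>> 
>> # on the illumos/OpenZFS side: lists supported legacy versions plus feature flags
>> zpool upgrade -v
>> # on the Oracle Solaris side: the same command lists only the pool versions it understands
>> zpool upgrade -v
>> 
>> Pool version 28 is the last common ground; a pool upgraded beyond that on either side can only be imported by its own family.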
>> 
>> /dale
>> 
>> 
>>> On Jan 25, 2017, at 8:54 AM, Stephan Budach <stephan.budach at JVM.DE> wrote:
>>> 
>>> Hi guys,
>>> 
>>> I have been trying to import a zpool based on a 3-way mirror of LUNs provided by three omniOS boxes via iSCSI. This zpool had been working flawlessly until some random reboot of the S11.1 host; since then, S11.1 has been unable to import it.
>>> 
>>> This zpool consists of three 108TB LUNs, each backed by a zvol on a raidz2 pool… yeah, I know, we shouldn't have done that in the first place, but performance was not the primary goal here, as this is a backup/archive pool.
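>>> 
>>> For context, the setup on each target is essentially this (reconstructed sketch; the pool/zvol names, LU GUID, and LUN device names are placeholders):
>>> 
>>> # on each omniOS box: a zvol on the raidz2 pool, exported via COMSTAR
>>> zfs create -V 108t tank/vsmPool10-lun
>>> stmfadm create-lu /dev/zvol/rdsk/tank/vsmPool10-lun
>>> stmfadm add-view <LU GUID>
>>> itadm create-target
>>> # on the S11.1 initiator, the three resulting LUNs form the 3-way mirror
>>> zpool create vsmPool10 mirror <LUN1> <LUN2> <LUN3>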
>>> 
>>> When issuing a zpool import, it says this:
>>> 
>>> root at solaris11atest2:~# zpool import
>>>   pool: vsmPool10
>>>     id: 12653649504720395171
>>>  state: DEGRADED
>>> status: The pool was last accessed by another system.
>>> action: The pool can be imported despite missing or damaged devices.  The
>>>         fault tolerance of the pool may be compromised if imported.
>>>    see: http://support.oracle.com/msg/ZFS-8000-EY
>>> 
>>> config:
>>> 
>>>         vsmPool10                                  DEGRADED
>>>           mirror-0                                 DEGRADED
>>>             c0t600144F07A3506580000569398F60001d0  DEGRADED  corrupted data
>>>             c0t600144F07A35066C00005693A0D90001d0  DEGRADED  corrupted data
>>>             c0t600144F07A35001A00005693A2810001d0  DEGRADED  corrupted data
>>> 
>>> device details:
>>> 
>>>         c0t600144F07A3506580000569398F60001d0    DEGRADED         scrub/resilver needed
>>>         status: ZFS detected errors on this device.
>>>                 The device is missing some data that is recoverable.
>>> 
>>>         c0t600144F07A35066C00005693A0D90001d0    DEGRADED         scrub/resilver needed
>>>         status: ZFS detected errors on this device.
>>>                 The device is missing some data that is recoverable.
>>> 
>>>         c0t600144F07A35001A00005693A2810001d0    DEGRADED         scrub/resilver needed
>>>         status: ZFS detected errors on this device.
>>>                 The device is missing some data that is recoverable.
>>> 
>>> However, when actually running 'zpool import -f vsmPool10', the system starts to perform a lot of writes on the LUNs and iostat reports an alarming increase in h/w errors:
>>> 
>>> root at solaris11atest2:~# iostat -xeM 5
>>>                          extended device statistics         ---- errors ---
>>> device    r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
>>> sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>>> sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>>> sd2       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0  71   0  71
>>> sd3       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>>> sd4       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>>> sd5       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>>>                          extended device statistics         ---- errors ---
>>> device    r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
>>> sd0      14.2  147.3    0.7    0.4  0.2  0.1    2.0   6   9   0   0   0   0
>>> sd1      14.2    8.4    0.4    0.0  0.0  0.0    0.3   0   0   0   0   0   0
>>> sd2       0.0    4.2    0.0    0.0  0.0  0.0    0.0   0   0   0  92   0  92
>>> sd3     157.3   46.2    2.1    0.2  0.0  0.7    3.7   0  14   0  30   0  30
>>> sd4     123.9   29.4    1.6    0.1  0.0  1.7   10.9   0  36   0  40   0  40
>>> sd5     142.5   43.0    2.0    0.1  0.0  1.9   10.2   0  45   0  88   0  88
>>>                          extended device statistics         ---- errors ---
>>> device    r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
>>> sd0       0.0  234.5    0.0    0.6  0.2  0.1    1.4   6  10   0   0   0   0
>>> sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>>> sd2       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0  92   0  92
>>> sd3       3.6   64.0    0.0    0.5  0.0  4.3   63.2   0  63   0 235   0 235
>>> sd4       3.0   67.0    0.0    0.6  0.0  4.2   60.5   0  68   0 298   0 298
>>> sd5       4.2   59.6    0.0    0.4  0.0  5.2   81.0   0  72   0 406   0 406
>>>                          extended device statistics         ---- errors ---
>>> device    r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
>>> sd0       0.0  234.8    0.0    0.7  0.4  0.1    2.2  11  10   0   0   0   0
>>> sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>>> sd2       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0  92   0  92
>>> sd3       5.4   54.4    0.0    0.3  0.0  2.9   48.5   0  67   0 384   0 384
>>> sd4       6.0   53.4    0.0    0.3  0.0  4.6   77.7   0  87   0 519   0 519
>>> sd5       6.0   60.8    0.0    0.3  0.0  4.8   72.5   0  87   0 727   0 727
>>> 
>>> 
>>> I have tried pulling data from the LUNs using dd to /dev/null and didn't get any h/w errors; those only started when actually trying to import the zpool. As the h/w error counters keep rising, I am wondering what could be causing this and whether anything can be done about it.
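>>> 
>>> For reference, the read test was along these lines (sketch; block size and count are arbitrary, and the slice suffix may differ on your setup):
>>> 
>>> dd if=/dev/rdsk/c0t600144F07A3506580000569398F60001d0s0 of=/dev/null bs=1024k count=10000
>>> 
>>> That completes cleanly on all three LUNs; the h/w counters in iostat -e only start climbing once the import begins writing.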
>>> 
>>> Cheers,
>>> Stephan
>>> _______________________________________________
>>> OmniOS-discuss mailing list
>>> 
>>> OmniOS-discuss at lists.omniti.com
>>> http://lists.omniti.com/mailman/listinfo/omnios-discuss
> 
> 
> --
> Krebs's 3 Basic Rules for Online Safety
> 1st - "If you didn't go looking for it, don't install it!"
> 2nd - "If you installed it, update it."
> 3rd - "If you no longer need it, remove it."
> 
> http://krebsonsecurity.com/2011/05/krebss-3-basic-rules-for-online-safety
> 
> 
> 
> Stephan Budach
> Head of IT
> Jung von Matt AG
> Glashüttenstraße 79
> 20357 Hamburg
> 
> 
> Tel: +49 40-4321-1353
> Fax: +49 40-4321-1114
> E-Mail:
> stephan.budach at jvm.de
> 
> Internet:
> http://www.jvm.com
> 
> CiscoJabber Video:
> https://exp-e2.jvm.de/call/stephan.budach
> 
> 
> Management Board: Dr. Peter Figge, Jean-Remy von Matt, Larissa Pohl, Thomas Strerath, Götz Ulmer
> Chairman of the Supervisory Board: Hans Hermann Münchmeyer
> AG HH HRB 72893
> 
> 
