[OmniOS-discuss] failsafe boot?

Richard Elling richard.elling at richardelling.com
Fri May 9 02:00:01 UTC 2014


On May 8, 2014, at 9:20 AM, Robin P. Blanchard <robin at coraid.com> wrote:

> # zfs list -r -t snapshot
> NAME                                      USED  AVAIL  REFER  MOUNTPOINT
> rpool/ROOT/omnios at 2013-12-08-00:42:38        0      -  3.66G  -
> rpool/ROOT/omnios-6 at install               292M      -  1.38G  -
> rpool/ROOT/omnios-6 at 2013-12-07-14:47:31   134M      -  3.33G  -
> rpool/ROOT/omnios-6 at 2013-12-08-00:42:37   291M      -  3.66G  -
> rpool/ROOT/omnios-6 at 2013-12-08-01:05:46   578M      -  3.80G  -
> rpool/ROOT/omnios-6 at 2013-12-11-18:06:57   314M      -  3.66G  -
> rpool/ROOT/omnios-6 at 2014-01-21-18:21:58   300M      -  7.00G  -
> rpool/ROOT/omnios-6 at 2014-01-21-18:28:40  22.6M      -  7.00G  -
> rpool/ROOT/omnios-6 at 2014-03-10-20:05:42  31.4M      -  7.09G  -
> rpool/ROOT/omnios-6 at 2014-04-08-03:12:40   301M      -  7.15G  -
> rpool/ROOT/omnios-6 at 2014-04-11-17:09:48   325M      -  7.18G  -
> rpool/ROOT/omnios-6 at 2014-05-08-12:35:52   345M      -  7.67G  -
> 
> 
> is the beadm mount not enough? do i still need to manually mount its snapshot?

You can manually mount the dataset. Something like:
	mount -F zfs rpool/ROOT/omnios-6 /mnt
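Putting this together with the live-media session quoted below, a full rescue pass might look like the following sketch. The pool name (rpool), BE name (omnios-6), and altroot (/rescue) are taken from this thread; adjust them for your system.

```shell
# Import the root pool under an alternate root so it cannot
# collide with the live-media filesystems.
mkdir -p /rescue
zpool import -R /rescue rpool

# BE root datasets use legacy/noauto mounting, so mount the
# one to repair explicitly.
mkdir -p /rescue/be
mount -F zfs rpool/ROOT/omnios-6 /rescue/be

# ... repair files under /rescue/be (e.g. /rescue/be/kernel/drv) ...

# Unmount and export cleanly before rebooting.
umount /rescue/be
zpool export rpool
```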

 -- richard

> 
> 
> 
> On May 8, 2014, at 12:06 PM, Robin P. Blanchard <robin at coraid.com> wrote:
> 
>> Replying to myself here...
>> Presumably my other BEs are failing since my rpool is now upgraded.
>> 
>> So I've decided to try to boot from latest ISO and attempt to mount the BE and fix it.
>> 
>> so what am I missing here:
>> 
>> from live media:
>> 
>> # mkdir -p /rescue
>> 
>> # zpool import -R /rescue 14750227168826216208
>> 
>> # zfs list
>> NAME                           USED  AVAIL  REFER  MOUNTPOINT
>> rpool                         48.2G   865G    40K  /rescue/rpool
>> rpool/ROOT                    14.7G   865G    31K  legacy
>> rpool/ROOT/omnios             7.39M   865G  3.50G  /rescue
>> rpool/ROOT/omnios-1           10.3M   865G  3.53G  /rescue
>> rpool/ROOT/omnios-2            287M   865G  3.83G  /rescue
>> rpool/ROOT/omnios-3            279M   865G  7.15G  /rescue
>> rpool/ROOT/omnios-4            282M   865G  7.32G  /rescue
>> rpool/ROOT/omnios-4-backup-1    40K   865G  7.00G  /rescue
>> rpool/ROOT/omnios-4-backup-2   137K   865G  7.09G  /rescue
>> rpool/ROOT/omnios-5            285M   865G  7.47G  /rescue
>> rpool/ROOT/omnios-5-backup-1    71K   865G  7.18G  /rescue
>> rpool/ROOT/omnios-6           13.6G   865G  7.82G  /rescue
>> rpool/ROOT/omnios-backup-1      84K   865G  3.33G  /rescue
>> rpool/ROOT/omnios-backup-2      96K   865G  3.50G  /rescue
>> rpool/ROOT/omniosvar            31K   865G    31K  legacy
>> rpool/dump                    28.0G   865G  28.0G  -
>> rpool/export                  1.38G   865G    32K  /rescue/export
>> rpool/export/home             1.38G   865G  1.38G  /rescue/export/home
>> rpool/swap                    4.13G   869G  5.16M  -
>> 
>> # beadm list
>> BE                Active Mountpoint Space Policy Created
>> omnios            -      -          7.39M static 2013-11-19 21:11
>> omnios-1          -      -          10.3M static 2013-12-08 00:42
>> omnios-2          -      -          287M  static 2013-12-08 01:05
>> omnios-3          -      -          279M  static 2013-12-11 18:06
>> omnios-4          -      -          282M  static 2014-01-21 18:21
>> omnios-4-backup-1 -      -          40.0K static 2014-01-21 18:28
>> omnios-4-backup-2 -      -          137K  static 2014-03-10 20:05
>> omnios-5          -      -          285M  static 2014-04-08 03:12
>> omnios-5-backup-1 -      -          71.0K static 2014-04-11 17:09
>> omnios-6          R      -          16.5G static 2014-05-08 12:35
>> omnios-backup-1   -      -          84.0K static 2013-12-07 14:47
>> omnios-backup-2   -      -          96.0K static 2013-12-08 00:42
>> omniosvar         -      -          31.0K static 2013-11-19 21:11
>> 
>> # mkdir -p /rescue/be
>> 
>> # beadm mount omnios-6 /rescue/be/
>> Mounted successfully on: '/rescue/be/'
>> 
>> # find /rescue/be/
>> /rescue/be/
>> 
>> 
>> nothing here. and same for the other BEs....
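When beadm mount leaves the mountpoint empty like this under an altroot import, one workaround (the one Richard suggests in his reply) is to bypass beadm and mount the BE dataset by hand:

```shell
# Mount the BE's root dataset directly; BE roots are ordinary
# ZFS datasets, so mount -F zfs works on them.
mount -F zfs rpool/ROOT/omnios-6 /rescue/be

# The BE's contents should now be visible.
ls /rescue/be
```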
>> 
>> 
>> On May 8, 2014, at 11:04 AM, Robin P. Blanchard <robin at coraid.com> wrote:
>> 
>>> Hi guys,
>>> 
>>> I managed to destroy my /kernel/drv/scsi_vhci.conf and/or sd.conf and can no longer boot (into any BE) :/
>>> 
>>> Is there a way (other than live media) to boot into some sort of rescue/failsafe mode?
>>> _______________________________________________
>>> OmniOS-discuss mailing list
>>> OmniOS-discuss at lists.omniti.com
>>> http://lists.omniti.com/mailman/listinfo/omnios-discuss
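For completeness: once the damaged BE is mounted somewhere writable, a pre-damage copy of the clobbered files can often be recovered from a snapshot through the dataset's .zfs/snapshot directory. The mountpoint and snapshot name below are only illustrations taken from the listings in this thread; in practice you would pick a snapshot taken before the files were destroyed.

```shell
# Copy scsi_vhci.conf back from a snapshot of the BE dataset.
# /rescue/be is where the BE was mounted; the snapshot name is
# illustrative (from the zfs list output in this thread).
SNAP="2014-04-11-17:09:48"
cp /rescue/be/.zfs/snapshot/$SNAP/kernel/drv/scsi_vhci.conf \
   /rescue/be/kernel/drv/scsi_vhci.conf
```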
>> 
>> -- 
>> Robin P. Blanchard
>> Technical Solutions Engineer
>> Coraid Global Field Services and Support
>> www.coraid.com
>> +1 650.730.5140
>> 
> 
> 

--

Richard.Elling at RichardElling.com
+1-760-896-4422




