[OmniOS-discuss] failsafe boot?

Robin P. Blanchard robin at coraid.com
Thu May 8 16:06:44 UTC 2014


Replying to myself here...
Presumably my other BEs are failing to boot because my rpool has since been upgraded.

So I've decided to boot from the latest ISO and attempt to mount the BE and fix it.

So, what am I missing here?

From live media:

# mkdir -p /rescue

# zpool import -R /rescue 14750227168826216208

# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         48.2G   865G    40K  /rescue/rpool
rpool/ROOT                    14.7G   865G    31K  legacy
rpool/ROOT/omnios             7.39M   865G  3.50G  /rescue
rpool/ROOT/omnios-1           10.3M   865G  3.53G  /rescue
rpool/ROOT/omnios-2            287M   865G  3.83G  /rescue
rpool/ROOT/omnios-3            279M   865G  7.15G  /rescue
rpool/ROOT/omnios-4            282M   865G  7.32G  /rescue
rpool/ROOT/omnios-4-backup-1    40K   865G  7.00G  /rescue
rpool/ROOT/omnios-4-backup-2   137K   865G  7.09G  /rescue
rpool/ROOT/omnios-5            285M   865G  7.47G  /rescue
rpool/ROOT/omnios-5-backup-1    71K   865G  7.18G  /rescue
rpool/ROOT/omnios-6           13.6G   865G  7.82G  /rescue
rpool/ROOT/omnios-backup-1      84K   865G  3.33G  /rescue
rpool/ROOT/omnios-backup-2      96K   865G  3.50G  /rescue
rpool/ROOT/omniosvar            31K   865G    31K  legacy
rpool/dump                    28.0G   865G  28.0G  -
rpool/export                  1.38G   865G    32K  /rescue/export
rpool/export/home             1.38G   865G  1.38G  /rescue/export/home
rpool/swap                    4.13G   869G  5.16M  -

# beadm list
BE                Active Mountpoint Space Policy Created
omnios            -      -          7.39M static 2013-11-19 21:11
omnios-1          -      -          10.3M static 2013-12-08 00:42
omnios-2          -      -          287M  static 2013-12-08 01:05
omnios-3          -      -          279M  static 2013-12-11 18:06
omnios-4          -      -          282M  static 2014-01-21 18:21
omnios-4-backup-1 -      -          40.0K static 2014-01-21 18:28
omnios-4-backup-2 -      -          137K  static 2014-03-10 20:05
omnios-5          -      -          285M  static 2014-04-08 03:12
omnios-5-backup-1 -      -          71.0K static 2014-04-11 17:09
omnios-6          R      -          16.5G static 2014-05-08 12:35
omnios-backup-1   -      -          84.0K static 2013-12-07 14:47
omnios-backup-2   -      -          96.0K static 2013-12-08 00:42
omniosvar         -      -          31.0K static 2013-11-19 21:11

# mkdir -p /rescue/be

# beadm mount omnios-6 /rescue/be/
Mounted successfully on: '/rescue/be/'

# find /rescue/be/
/rescue/be/


Nothing there, and the same for the other BEs...
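One thing I may try next is bypassing beadm and mounting the BE's root dataset directly; BE roots are canmount=noauto, so they don't come up on import. This is a rough, untested sketch: the dataset name is taken from the zfs list output above, and the copy step assumes the live media's own stock scsi_vhci.conf and sd.conf are close enough to boot from:

```shell
# Mount the BE's root dataset directly, bypassing beadm
# (legacy-style mount works even though the dataset's mountpoint property is set):
mkdir -p /rescue/be
mount -F zfs rpool/ROOT/omnios-6 /rescue/be

# Replace the damaged driver configs with the live media's copies
# (assumption: the stock configs are usable replacements):
cp /kernel/drv/scsi_vhci.conf /rescue/be/kernel/drv/
cp /kernel/drv/sd.conf /rescue/be/kernel/drv/

umount /rescue/be
```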


On May 8, 2014, at 11:04 AM, Robin P. Blanchard <robin at coraid.com> wrote:

> Hi guys,
> 
> I managed to destroy my /kernel/drv/scsi_vhci.conf and/or sd.conf and can no longer boot (into any BE) :/
> 
> Is there a way (other than live media) to boot into some sort of rescue/failsafe mode?
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss

-- 
Robin P. Blanchard
Technical Solutions Engineer
Coraid Global Field Services and Support
www.coraid.com
+1 650.730.5140
