[OmniOS-discuss] Status of TRIM support?

Schweiss, Chip chip at innovates.com
Wed May 28 19:18:24 UTC 2014


On Wed, May 28, 2014 at 1:55 PM, Dan Swartzendruber <dswartz at druber.com> wrote:

> > It looks to me like Sašo's design is active/standby failover.  Zpool
> > import on the standby should obtain a clean transaction group as long
> > as the originally active system is still not using the pool.  The
> > result would be similar to the power fail situation.
>
> As long as the right fencing is done in the case where the active node
> goes south, agreed.  In my case, I have three servers, two running vSphere
> and one running illumos.  All active guests run on server V1, with V2 as an
> HA backup for V1.  Since V2 is doing little else, it also hosts a virtualized
> illumos appliance, which currently has two 1TB disks for an hourly zfs send
> replication job.  I intend to put an HBA in V2 and pass it through to the
> storage appliance and go from there.  The only fly in the ointment is that
> while V1 can be readily fenced using the on-board IPMI, I have no easy way
> to fence the virtualized appliance.  I seem to recall seeing a VMware
> fencing agent, but it may not be reliable enough for me (e.g. what if the
> reason the virtualized appliance is not working properly is that the
> host is wigging out?).  It struck me that since nothing else normally runs
> on V2, I can fence the virtualized appliance by fencing the host it runs
> on using V2's onboard IPMI.  If a hard failover needs to be done, the
> standby appliance will need to import the pool with '-f', which is scary
> if your fencing is not extremely reliable...
>
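(For reference, the fence-then-import sequence described above amounts to something like the following sketch. The BMC address, credentials, and pool name are hypothetical placeholders, not values from this thread:)

```shell
# Hypothetical hard-failover sequence run on the standby appliance.
# BMC address, credentials, and pool name are placeholders.

# 1. Fence the host running the active appliance via its onboard IPMI BMC.
ipmitool -I lanplus -H 10.0.0.42 -U ADMIN -P secret chassis power off

# 2. Confirm the power state before touching shared storage.
ipmitool -I lanplus -H 10.0.0.42 -U ADMIN -P secret chassis power status

# 3. Only once fencing is confirmed, force-import the pool on the standby.
zpool import -f tank
```

The ordering matters: `zpool import -f` bypasses the hostid safety check, so it is only safe after the fence has verifiably cut off the old host.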

Assuming you have real SAS devices in the pool, not SATA with interposers,
you can use SCSI reservations.  This can block the other host from
accessing a pool you are about to take over.

sg3_utils has utilities for managing SCSI reservations.
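A sketch of what that could look like with sg3_utils' `sg_persist`, using SCSI-3 persistent reservations. The device path and reservation keys below are hypothetical; pick keys unique to each host:

```shell
# Hypothetical shared SAS disk; adjust for your pool's devices.
DISK=/dev/sdb

# Register this host's reservation key with the device.
sg_persist --out --register --param-sark=0xdead0001 "$DISK"

# Take a Write Exclusive, Registrants Only reservation (PROUT type 5).
sg_persist --out --reserve --param-rk=0xdead0001 --prout-type=5 "$DISK"

# Inspect registered keys and the current reservation.
sg_persist --in --read-keys "$DISK"
sg_persist --in --read-reservation "$DISK"

# During takeover, preempt the failed host's key (here 0xdead0002),
# which revokes its registration and blocks its writes.
sg_persist --out --preempt --param-rk=0xdead0001 --param-sark=0xdead0002 \
    --prout-type=5 "$DISK"
```

You would repeat the commands for every device in the pool; a preempted host gets reservation-conflict errors on write, which is exactly the fencing guarantee needed before an `-f` import.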

-Chip
