From omnios at citrus-it.net Mon Dec 3 12:26:52 2018
From: omnios at citrus-it.net (Andy Fiddaman)
Date: Mon, 3 Dec 2018 12:26:52 +0000 (UTC)
Subject: [OmniOS-discuss] A reminder that omnios-discuss is moving to topicbox
Message-ID: 

Just a reminder that this mailing list (omnios-discuss at lists.omniti.com)
will be closing at the end of the year.

A replacement list has been set up for us by the illumos community over at
Topicbox - https://illumos.topicbox.com/groups/omnios-discuss

We've seen a great take-up of subscriptions to that list, but if you have
not yet made the switch, please head over there and sign up.

We also have a new newsletter mailing list which is used for announcements
such as information about new releases and updates - you can subscribe to
that at http://eepurl.com/dL1z7k

For the full list of contact options for the OmniOS Community Edition
project, please see https://omniosce.org/about/contact

Thanks,

Andy

On Tue, 13 Nov 2018, Tobias Oetiker wrote:

> Dear OmniOS Friends
>
> It's been 18 months now since we started the OmniOS Community Edition
> and, with the third stable release under our belt, we are finalising the
> transfer of the remaining services from OmniTI.
>
> To that end, the existing OmniOS mailing lists (including this one) will
> be closed down at the end of December.
>
> With thanks to Topicbox and the illumos project, we have a new
> omnios-discuss list which can be found at:
>
> https://illumos.topicbox.com/groups/omnios-discuss
>
> Subscriptions to this list will NOT be transferred automatically, so
> please head over there to subscribe.
>
> We also have a new newsletter mailing list for keeping up to date with
> the latest news; you can subscribe to that at:
>
> http://eepurl.com/dL1z7k
>
> cheers
> tobi

--
Citrus IT Limited | +44 (0)333 0124 007 | enquiries at citrus-it.co.uk
Rock House Farm | Green Moor | Wortley | Sheffield | S35 7DQ
Registered in England and Wales | Company number 4899123

From chip at innovates.com Fri Dec 14 14:39:31 2018
From: chip at innovates.com (Schweiss, Chip)
Date: Fri, 14 Dec 2018 08:39:31 -0600
Subject: [OmniOS-discuss] NVMe JBOF
Message-ID: 

Has the NVMe support in illumos come far enough along to properly support
two servers connected to NVMe JBOF storage such as the Supermicro
SSG-136R-N32JBF?

While I do not run HA because of too many issues, I still build everything
with two server nodes. This makes updates and reboots possible by moving a
pool to the sister host, greatly reducing downtime. This is essential when
the NFS target is hosting 300+ vSphere VMs.

Thanks!
-Chip
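[Editor's note: for readers unfamiliar with the manual failover workflow
Chip describes, here is a minimal sketch. The pool name 'tank' is
hypothetical, and this assumes both nodes can reach the same devices and
that the pool is only ever imported on one node at a time.]

```
# On the node currently serving the pool: release it cleanly.
zpool export tank

# On the sister node: discover and take over the pool. -f is needed only
# if the previous host did not export cleanly (e.g. it crashed); forcing
# an import while the other node still has the pool imported risks pool
# corruption -- the fencing problem that comes up later in this thread.
zpool import tank
```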
From richard.elling at richardelling.com Fri Dec 14 16:20:35 2018
From: richard.elling at richardelling.com (Richard Elling)
Date: Fri, 14 Dec 2018 08:20:35 -0800
Subject: [OmniOS-discuss] NVMe JBOF
In-Reply-To: 
References: 
Message-ID: 

> On Dec 14, 2018, at 6:39 AM, Schweiss, Chip wrote:
>
> Has the NVMe support in illumos come far enough along to properly
> support two servers connected to NVMe JBOF storage such as the
> Supermicro SSG-136R-N32JBF?

I can't speak to the Supermicro, but I can talk in detail about
https://www.vikingenterprisesolutions.com/products-2/nds-2244/

> While I do not run HA because of too many issues, I still build
> everything with two server nodes. This makes updates and reboots
> possible by moving a pool to the sister host, greatly reducing downtime.
> This is essential when the NFS target is hosting 300+ vSphere VMs.

The NDS-2244 is a 24-slot U.2 NVMe chassis with programmable PCIe
switches. To the host, the devices look like locally attached NVMe and no
software changes are required. Multiple hosts can connect, up to the PCIe
port limits. If you use dual-port NVMe drives, then you can share the
drives between any two hosts concurrently. Programming the switches is
done out-of-band through an HTTP-based interface that also monitors the
enclosure.

In other words, if you want an NVMe equivalent to a dual-hosted SAS JBOD,
the NDS-2244 is very capable, but more configurable.
 -- richard

From chip at innovates.com Fri Dec 14 19:54:44 2018
From: chip at innovates.com (Schweiss, Chip)
Date: Fri, 14 Dec 2018 13:54:44 -0600
Subject: [OmniOS-discuss] NVMe JBOF
In-Reply-To: 
References: 
Message-ID: 

On Fri, Dec 14, 2018 at 10:20 AM Richard Elling
<richard.elling at richardelling.com> wrote:

> I can't speak to the Supermicro, but I can talk in detail about
> https://www.vikingenterprisesolutions.com/products-2/nds-2244/
>
> The NDS-2244 is a 24-slot U.2 NVMe chassis with programmable PCIe
> switches. To the host, the devices look like locally attached NVMe and
> no software changes are required. Multiple hosts can connect, up to the
> PCIe port limits. If you use dual-port NVMe drives, then you can share
> the drives between any two hosts concurrently. Programming the switches
> is done out-of-band through an HTTP-based interface that also monitors
> the enclosure.
>
> In other words, if you want an NVMe equivalent to a dual-hosted SAS
> JBOD, the NDS-2244 is very capable, but more configurable.
> -- richard

This is excellent. I like the idea of only one host seeing the SSDs at a
time, with a programmatic way to flip them to the other host. This solves
the fencing problem in ZFS nicely.

Thanks for the product reference. The Viking JBOF looks like what I need.

-Chip

From geoffn at gnaa.net Mon Dec 17 07:11:26 2018
From: geoffn at gnaa.net (Geoff Nordli)
Date: Sun, 16 Dec 2018 23:11:26 -0800
Subject: [OmniOS-discuss] Creating shares using the cifs service in a zone -- share permissions
Message-ID: 

Hi.

I am trying to get the CIFS service in a zone working properly. At this
point everything works fine until I try to set share-level permissions.

I add the share:

sharemgr add-share -r test -s /tank/fs1-data/test mygroup

I can connect to the share, etc., but I can't seem to set the share
permissions. Normally there is a file in the .zfs/shares folder and you go
in and assign the share-level permissions using that file. For some reason
there is no file in that shares folder.

Any thoughts?

thanks,
Geoff
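[Editor's note: on illumos the kernel SMB server keeps one control file
per share under the dataset's .zfs/shares directory, and share-level ACLs
are set on that file. A minimal sketch of the usual approach follows; the
dataset and share name are taken from Geoff's example, the group name
'staff' is hypothetical, and whether this behaves the same inside a zone
is exactly the open question here.]

```
# Publish the share via the sharesmb property; for a ZFS-backed share
# this is what normally creates the control file under .zfs/shares.
zfs set sharesmb=name=test tank/fs1-data/test

# Inspect the share's control file...
ls -l /tank/fs1-data/test/.zfs/shares/

# ...and grant share-level access on it, e.g. full control to a group.
chmod A=group:staff:full_set:allow /tank/fs1-data/test/.zfs/shares/test
```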
From omnios-discuss at base8.org Tue Dec 18 22:18:54 2018
From: omnios-discuss at base8.org (omnios-discuss at base8.org)
Date: Tue, 18 Dec 2018 14:18:54 -0800
Subject: [OmniOS-discuss] p3700 under OmniOS not detected
Message-ID: <6E49DCE8-236F-4814-9085-070737BFDD17@base8.org>

I'm trying to add a P3700 for VM storage under ESXi, but when passing the
PCI device through to OmniOS it shows as an unknown type and isn't
available for formatting, partitioning or inclusion in any arrays.

I verified that I can pass the device through correctly to a Server 2016
VM running on the same hardware, so I wondered whether there is an OmniOS
issue I need to address.

Disk 7 is the P3700:

```
AVAILABLE DISK SELECTIONS:
       0. c2t0d0
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c61t0d0
          /pci@0,0/pci15ad,7a0@16/pci15d9,86d@0/disk@0,0
       2. c61t1d0
          /pci@0,0/pci15ad,7a0@16/pci15d9,86d@0/disk@1,0
       3. c61t2d0
          /pci@0,0/pci15ad,7a0@16/pci15d9,86d@0/disk@2,0
       4. c61t3d0
          /pci@0,0/pci15ad,7a0@16/pci15d9,86d@0/disk@3,0
       5. c61t4d0
          /pci@0,0/pci15ad,7a0@16/pci15d9,86d@0/disk@4,0
       6. c61t5d0
          /pci@0,0/pci15ad,7a0@16/pci15d9,86d@0/disk@5,0
       7. c63t1d0
          /pci@0,0/pci15ad,7a0@16,1/pci8086,3703@0/blkdev@1,0
```

Hardware:
Supermicro X10SDV-1541 motherboard, 128GB RAM, latest BIOS 2.0a
Bifurcated x16 PCIe slot (4x4x4x4) with AOC-SLG3-4E4R U.2 HBA
(https://www.supermicro.com/products/accessories/addon/AOC-SLG3-4E4R.cfm)
Intel P3700 2.5" firmware 8DV101H0
3 * HGST 7200rpm drives
2 * Intel S3700
1 * Intel S3710

Software:
OmniOS: 5.11, omnios-r151028-d3d0427bff, November 2018
Napp-it: 18.12, dev

Any help greatly appreciated, thank you
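[Editor's note: disk 7 does show up in the listing under blkdev, which
suggests the nvme driver has attached and that "unknown type" is more
likely a missing disk label than a detection failure. That reading is an
assumption, but the usual checks on OmniOS would be along these lines:]

```
# Confirm the nvme driver bound to the device and that blkdev exposes it
# as a disk.
prtconf -d | grep -i nvme
diskinfo

# If the disk has no label, format reports an unknown type until one is
# written; format -e allows labelling it with an EFI (GPT) label via the
# 'label' menu command.
format -e c63t1d0
```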