From oliver.weinmann at telespazio-vega.de Mon Jun 11 07:11:46 2018 From: oliver.weinmann at telespazio-vega.de (Oliver Weinmann) Date: Mon, 11 Jun 2018 07:11:46 +0000 Subject: [OmniOS-discuss] zfs send | recv Message-ID: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> Hi, We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta calls this feature autosync. While they say it is only 100% supported between nexenta systems, we managed to get it working with OmniOS too. It's Not rocket science. But there is one big problem. In the autosync job on the Nexenta system one can specify how many snaps to keep local on the nexenta and how many to keep on the target system. Somehow we always have the same amount of snaps on both systems. Autosync always cleans all snaps on the dest that don't exist on the source. I contacted nexenta support and they told me that this is due to different versions of zfs send and zfs recv. There should be a -K flag, that instructs the destination to not destroy snapshots that don't exist on the source. Is such a flag available in OmniOS? I assume the flag is set on the sending side so that the receiving side has to understand it. Best Regards, Oliver [cid:Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png] Oliver Weinmann Head of Corporate ICT Telespazio VEGA Deutschland GmbH Europaplatz 5 - 64293 Darmstadt - Germany Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 oliver.weinmann at telespazio-vega.de www.telespazio-vega.de Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png Type: image/png Size: 7535 bytes Desc: Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png URL: From priyadarshan at scs.re Mon Jun 11 07:46:25 2018 From: priyadarshan at scs.re (priyadarshan) Date: Mon, 11 Jun 2018 09:46:25 +0200 Subject: [OmniOS-discuss] zfs send | recv In-Reply-To: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> References: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> Message-ID: > On 11 Jun 2018, at 09:11, Oliver Weinmann wrote: > > Hi, > > We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta calls this feature autosync. While they say it is only 100% supported between nexenta systems, we managed to get it working with OmniOS too. It?s Not rocket science. But there is one big problem. In the autosync job on the Nexenta system one can specify how many snaps to keep local on the nexenta and how many to keep on the target system. Somehow we always have the same amount of snaps on both systems. Autosync always cleans all snaps on the dest that don?t exist on the source. I contacted nexenta support and they told me that this is due to different versions of zfs send and zfs recv. There should be a ?K flag, that instructs the destination to not destroy snapshots that don't exist on the source. Is such a flag available in OmniOS? I assume the flag is set on the sending side so that the receiving side has to understand it. > > Best Regards, > Oliver > Hello, OmniOS devs please correct me if mistaken, I believe OmniOS faithfully tracks zfs from illumos-gate. 
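(A quick way to double-check on the receiving box itself is to look at what the installed zfs command accepts - a small sketch, nothing OmniOS-specific assumed:)

    # the usage text lists every flag `zfs receive` understands on this build
    zfs receive 2>&1 | head -20
    # and the shipped man page documents the send/receive options
    man -s 1m zfs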
One can follow various upstream merges here: https://github.com/omniosorg/illumos-omnios/pulls?q=is%3Apr+is%3Aclosed Based on that, illumos man pages also apply to OmniOS: https://omnios.omniti.com/wiki.php/ManSections Illumos zfs man page is here: https://illumos.org/man/1m/zfs That page does not seem to offer a -K flag. You may want to consider third party tools. We have a very similar use-case as you detailed, fulfilled by using zfsnap, with reliable and consistent results. git repository: https://github.com/zfsnap/zfsnap site: http://www.zfsnap.org/ man page: http://www.zfsnap.org/zfsnap_manpage.html With zfsnap we have been maintaining (almost) live replicas of mail servers, including snapshotting, either automatically synchronised to master, or kept aside for special needs. One just needs to tweak a shell script (or simply, one or more cron jobs) to what is desired. Priyadarshan From alka at hfg-gmuend.de Mon Jun 11 07:54:33 2018 From: alka at hfg-gmuend.de (Guenther Alka) Date: Mon, 11 Jun 2018 09:54:33 +0200 Subject: [OmniOS-discuss] zfs send | recv In-Reply-To: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> References: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> Message-ID: did you replicate recursively? keeping a different snap history should be possible when you send single filesystems. gea @napp-it.org Am 11.06.2018 um 09:11 schrieb Oliver Weinmann: > > Hi, > > We are replicating snapshots from a Nexenta system to an OmniOS > system. Nexenta calls this feature autosync. While they say it is only > 100% supported between nexenta systems, we managed to get it working > with OmniOS too. It?s Not rocket science. But there is one big > problem. In the autosync job on the Nexenta system one can specify how > many snaps to keep local on the nexenta and how many to keep on the > target system. Somehow we always have the same amount of snaps on both > systems. Autosync always cleans all snaps on the dest that don?t exist > on the source. I contacted nexenta support and they told me that this > is due to different versions of zfs send and zfs recv. There should be > a ?K ?flag, that instructs the destination to not destroy snapshots > that don't exist on the source. Is such a flag available in OmniOS? I > assume the flag is set on the sending side so that the receiving side > has to understand it. > > Best Regards, > > Oliver > > *Oliver Weinmann* > Head of Corporate ICT > > Telespazio VEGA Deutschland GmbH > Europaplatz 5 - 64293 Darmstadt - Germany > Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 > oliver.weinmann at telespazio-vega.de > > www.telespazio-vega.de > > Registered office/Sitz: Darmstadt, Register court/Registergericht: > Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller > > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png Type: image/png Size: 7535 bytes Desc: not available URL: From oliver.weinmann at telespazio-vega.de Mon Jun 11 08:07:56 2018 From: oliver.weinmann at telespazio-vega.de (Oliver Weinmann) Date: Mon, 11 Jun 2018 08:07:56 +0000 Subject: [OmniOS-discuss] zfs send | recv In-Reply-To: References: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> Message-ID: <767138E0D064A148B03FE8EC1E9325A20138E9DD31@gedaevw60.a.space.corp> Hi Priyadarshan, Thanks a lot for the quick and comprehensive answer. I agree that using a third party tool might be helpful. When we started using the two ZFS systems, I really had a hard time testing a few third party tools. One of the biggest problems was that I wanted to be able to use the omnios systems as a DR site. However flipping the mirror always caused the nexenta system to crash. So until today there is no real solution to use the omnios system as a real DR site. This is due to different versions of zfs send and recv on the two systems and not related to using third party tools. I have tested zrep as it contains a DR mode and looked at znapzend but had not time to test it. We were told that the new version of nexenta no longer supports an ordinary way to sync snaps to a non nexenta system as there is no shell access anymore. Nexenta 5.x provides an API for this. I need to find some time to test it. Best Regards, Oliver Oliver Weinmann Head of Corporate ICT Telespazio VEGA Deutschland GmbH Europaplatz 5 - 64293 Darmstadt - Germany Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 oliver.weinmann at telespazio-vega.de www.telespazio-vega.de Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller-----Original Message----- From: priyadarshan Sent: Montag, 11. Juni 2018 09:46 To: Oliver Weinmann Cc: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] zfs send | recv > On 11 Jun 2018, at 09:11, Oliver Weinmann wrote: > > Hi, > > We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta calls this feature autosync. While they say it is only 100% supported between nexenta systems, we managed to get it working with OmniOS too. It?s Not rocket science. But there is one big problem. In the autosync job on the Nexenta system one can specify how many snaps to keep local on the nexenta and how many to keep on the target system. Somehow we always have the same amount of snaps on both systems. Autosync always cleans all snaps on the dest that don?t exist on the source. I contacted nexenta support and they told me that this is due to different versions of zfs send and zfs recv. There should be a ?K flag, that instructs the destination to not destroy snapshots that don't exist on the source. Is such a flag available in OmniOS? I assume the flag is set on the sending side so that the receiving side has to understand it. > > Best Regards, > Oliver > Hello, OmniOS devs please correct me if mistaken, I believe OmniOS faithfully tracks zfs from illumos-gate. One can follow various upstream merges here: https://github.com/omniosorg/illumos-omnios/pulls?q=is%3Apr+is%3Aclosed Based on that, illumos man pages also apply to OmniOS: https://omnios.omniti.com/wiki.php/ManSections Illumos zfs man page is here: https://illumos.org/man/1m/zfs That page does not seem to offer a -K flag. You may want to consider third party tools. 
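(As far as I understand, the destructive pruning usually comes from receiving a recursive replication stream with -F, which by design removes snapshots that only exist on the destination; plain incremental sends received without -F leave the extra history on the target alone. A rough sketch, dataset and host names invented:)

    # send only the increment between the two most recent snapshots
    zfs send -i tank/vmdata@2018-06-10 tank/vmdata@2018-06-11 | \
        ssh omnios-host zfs receive backup/vmdata
    # prune old snapshots on each side on its own schedule, e.g. on the target
    zfs destroy backup/vmdata@2018-03-01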
We have a very similar use-case as you detailed, fulfilled by using zfsnap, with reliable and consistent results. git repository: https://github.com/zfsnap/zfsnap site: http://www.zfsnap.org/ man page: http://www.zfsnap.org/zfsnap_manpage.html With zfsnap we have been maintaining (almost) live replicas of mail servers, including snapshotting, either automatically synchronised to master, or kept aside for special needs. One just needs to tweak a shell script (or simply, one or more cron jobs) to what is desired. Priyadarshan From priyadarshan at scs.re Mon Jun 11 08:19:49 2018 From: priyadarshan at scs.re (priyadarshan) Date: Mon, 11 Jun 2018 10:19:49 +0200 Subject: [OmniOS-discuss] zfs send | recv In-Reply-To: <767138E0D064A148B03FE8EC1E9325A20138E9DD31@gedaevw60.a.space.corp> References: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> <767138E0D064A148B03FE8EC1E9325A20138E9DD31@gedaevw60.a.space.corp> Message-ID: <2777D26F-E0A9-4D47-8928-AFDECAC6CFBC@scs.re> Hi Oliver, Thank you for sharing your use-case on Nexenta. A few years ago we also needed to have a DR site, and I tested several third party tools. The introduction of ZFS to Linux, with ZOL, made things more confused, by adding several more tools (like sanoid: https://github.com/jimsalterjrs/sanoid). Ultimately we settled on a simple tool, which we tested over the years with satisfactory results (zfsnap). Of course, if Nexenta will not allow shell access anymore, that makes it a bit more difficult. I would defer to more expert people, like Guenther, who also replied to your message. Kind regards, Priyadarshan > On 11 Jun 2018, at 10:07, Oliver Weinmann wrote: > > Hi Priyadarshan, > > Thanks a lot for the quick and comprehensive answer. I agree that using a third party tool might be helpful. When we started using the two ZFS systems, I really had a hard time testing a few third party tools. One of the biggest problems was that I wanted to be able to use the omnios systems as a DR site. However flipping the mirror always caused the nexenta system to crash. So until today there is no real solution to use the omnios system as a real DR site. This is due to different versions of zfs send and recv on the two systems and not related to using third party tools. I have tested zrep as it contains a DR mode and looked at znapzend but had not time to test it. We were told that the new version of nexenta no longer supports an ordinary way to sync snaps to a non nexenta system as there is no shell access anymore. Nexenta 5.x provides an API for this. I need to find some time to test it. > > Best Regards, > Oliver > > > > > > Oliver Weinmann > Head of Corporate ICT > Telespazio VEGA Deutschland GmbH > Europaplatz 5 - 64293 Darmstadt - Germany > Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 > oliver.weinmann at telespazio-vega.de > www.telespazio-vega.de > Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller-----Original Message----- > From: priyadarshan > Sent: Montag, 11. Juni 2018 09:46 > To: Oliver Weinmann > Cc: omnios-discuss at lists.omniti.com > Subject: Re: [OmniOS-discuss] zfs send | recv > > > >> On 11 Jun 2018, at 09:11, Oliver Weinmann wrote: >> >> Hi, >> >> We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta calls this feature autosync. While they say it is only 100% supported between nexenta systems, we managed to get it working with OmniOS too. It?s Not rocket science. 
But there is one big problem. In the autosync job on the Nexenta system one can specify how many snaps to keep local on the nexenta and how many to keep on the target system. Somehow we always have the same amount of snaps on both systems. Autosync always cleans all snaps on the dest that don?t exist on the source. I contacted nexenta support and they told me that this is due to different versions of zfs send and zfs recv. There should be a ?K flag, that instructs the destination to not destroy snapshots that don't exist on the source. Is such a flag available in OmniOS? I assume the flag is set on the sending side so that the receiving side has to understand it. >> >> Best Regards, >> Oliver >> > > Hello, > > OmniOS devs please correct me if mistaken, I believe OmniOS faithfully tracks zfs from illumos-gate. > > One can follow various upstream merges here: > https://github.com/omniosorg/illumos-omnios/pulls?q=is%3Apr+is%3Aclosed > > Based on that, illumos man pages also apply to OmniOS: https://omnios.omniti.com/wiki.php/ManSections > > Illumos zfs man page is here: https://illumos.org/man/1m/zfs > > That page does not seem to offer a -K flag. > > You may want to consider third party tools. > > We have a very similar use-case as you detailed, fulfilled by using zfsnap, with reliable and consistent results. > > git repository: https://github.com/zfsnap/zfsnap > site: http://www.zfsnap.org/ > man page: http://www.zfsnap.org/zfsnap_manpage.html > > With zfsnap we have been maintaining (almost) live replicas of mail servers, including snapshotting, either automatically synchronised to master, or kept aside for special needs. > > One just needs to tweak a shell script (or simply, one or more cron jobs) to what is desired. > > > Priyadarshan > > From oliver.weinmann at telespazio-vega.de Mon Jun 11 08:58:57 2018 From: oliver.weinmann at telespazio-vega.de (Oliver Weinmann) Date: Mon, 11 Jun 2018 08:58:57 +0000 Subject: [OmniOS-discuss] zfs send | recv In-Reply-To: References: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> Message-ID: <767138E0D064A148B03FE8EC1E9325A20138E9DDC1@gedaevw60.a.space.corp> Yes it is recursively. We have hundreds of child datasets so single filesystems would be a real headache to maintain. :( [cid:Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png] Oliver Weinmann Head of Corporate ICT Telespazio VEGA Deutschland GmbH Europaplatz 5 - 64293 Darmstadt - Germany Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 oliver.weinmann at telespazio-vega.de www.telespazio-vega.de Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller From: OmniOS-discuss On Behalf Of Guenther Alka Sent: Montag, 11. Juni 2018 09:55 To: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] zfs send | recv did you replicate recursively? keeping a different snap history should be possible when you send single filesystems. gea @napp-it.org Am 11.06.2018 um 09:11 schrieb Oliver Weinmann: Hi, We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta calls this feature autosync. While they say it is only 100% supported between nexenta systems, we managed to get it working with OmniOS too. It's Not rocket science. But there is one big problem. In the autosync job on the Nexenta system one can specify how many snaps to keep local on the nexenta and how many to keep on the target system. 
Somehow we always have the same amount of snaps on both systems. Autosync always cleans all snaps on the dest that don't exist on the source. I contacted nexenta support and they told me that this is due to different versions of zfs send and zfs recv. There should be a -K flag, that instructs the destination to not destroy snapshots that don't exist on the source. Is such a flag available in OmniOS? I assume the flag is set on the sending side so that the receiving side has to understand it. Best Regards, Oliver [cid:image001.png at 01D40173.314884D0] Oliver Weinmann Head of Corporate ICT Telespazio VEGA Deutschland GmbH Europaplatz 5 - 64293 Darmstadt - Germany Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 oliver.weinmann at telespazio-vega.de www.telespazio-vega.de Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 7535 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png Type: image/png Size: 7535 bytes Desc: Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png URL: From alka at hfg-gmuend.de Mon Jun 11 10:15:53 2018 From: alka at hfg-gmuend.de (Guenther Alka) Date: Mon, 11 Jun 2018 12:15:53 +0200 Subject: [OmniOS-discuss] zfs send | recv In-Reply-To: <767138E0D064A148B03FE8EC1E9325A20138E9DDC1@gedaevw60.a.space.corp> References: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> <767138E0D064A148B03FE8EC1E9325A20138E9DDC1@gedaevw60.a.space.corp> Message-ID: <0046a362-787f-02f7-ebf7-f4f2ce220078@hfg-gmuend.de> I suppose you can either keep the last snaps identical on source and target with a simple zfs send recursively or you need a script that cares about and does a send per filesystem to allow a different number of snaps on the target system. This is not related to Nexenta but I saw the same on current OmniOS -> OmniOS as they use the same Open-ZFS base Illumos Gea @naapp-it.org Am 11.06.2018 um 10:58 schrieb Oliver Weinmann: > > Yes it is recursively. We have hundreds of child datasets so single > filesystems would be a real headache to maintain. L > > *Oliver Weinmann* > Head of Corporate ICT > > Telespazio VEGA Deutschland GmbH > Europaplatz 5 - 64293 Darmstadt - Germany > Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 > oliver.weinmann at telespazio-vega.de > > www.telespazio-vega.de > > Registered office/Sitz: Darmstadt, Register court/Registergericht: > Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller > > *From:*OmniOS-discuss *On > Behalf Of *Guenther Alka > *Sent:* Montag, 11. Juni 2018 09:55 > *To:* omnios-discuss at lists.omniti.com > *Subject:* Re: [OmniOS-discuss] zfs send | recv > > did you replicate recursively? > keeping a different snap history should be possible when you send > single filesystems. > > > gea > @napp-it.org > > Am 11.06.2018 um 09:11 schrieb Oliver Weinmann: > > Hi, > > We are replicating snapshots from a Nexenta system to an OmniOS > system. 
Nexenta calls this feature autosync. While they say it is > only 100% supported between nexenta systems, we managed to get it > working with OmniOS too. It?s Not rocket science. But there is one > big problem. In the autosync job on the Nexenta system one can > specify how many snaps to keep local on the nexenta and how many > to keep on the target system. Somehow we always have the same > amount of snaps on both systems. Autosync always cleans all snaps > on the dest that don?t exist on the source. I contacted nexenta > support and they told me that this is due to different versions of > zfs send and zfs recv. There should be a ?K ?flag, that instructs > the destination to not destroy snapshots that don't exist on the > source. Is such a flag available in OmniOS? I assume the flag is > set on the sending side so that the receiving side has to > understand it. > > Best Regards, > > Oliver > > *Oliver Weinmann* > Head of Corporate ICT > > Telespazio VEGA Deutschland GmbH > Europaplatz 5 - 64293 Darmstadt - Germany > Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 > oliver.weinmann at telespazio-vega.de > > www.telespazio-vega.de > > Registered office/Sitz: Darmstadt, Register court/Registergericht: > Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller > > > > > _______________________________________________ > > OmniOS-discuss mailing list > > OmniOS-discuss at lists.omniti.com > > > http://lists.omniti.com/mailman/listinfo/omnios-discuss > > > > -- -- H f G Hochschule f?r Gestaltung university of design Schw?bisch Gm?nd Rektor-Klaus Str. 100 73525 Schw?bisch Gm?nd Guenther Alka, Dipl.-Ing. (FH) Leiter des Rechenzentrums head of computer center Tel 07171 602 627 Fax 07171 69259 guenther.alka at hfg-gmuend.de http://rz.hfg-gmuend.de -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png Type: image/png Size: 7535 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 7535 bytes Desc: not available URL: From oliver.weinmann at telespazio-vega.de Mon Jun 11 10:31:31 2018 From: oliver.weinmann at telespazio-vega.de (Oliver Weinmann) Date: Mon, 11 Jun 2018 10:31:31 +0000 Subject: [OmniOS-discuss] zfs send | recv In-Reply-To: <0046a362-787f-02f7-ebf7-f4f2ce220078@hfg-gmuend.de> References: <767138E0D064A148B03FE8EC1E9325A20138E9CCCF@gedaevw60.a.space.corp> <767138E0D064A148B03FE8EC1E9325A20138E9DDC1@gedaevw60.a.space.corp> <0046a362-787f-02f7-ebf7-f4f2ce220078@hfg-gmuend.de> Message-ID: <767138E0D064A148B03FE8EC1E9325A20138E9DE66@gedaevw60.a.space.corp> I think I really have to start investigating using 3rd party apps again. Nexenta doesn't let me change the zfs send command. I can only adjust settings for the autosync job. [cid:Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png] Oliver Weinmann Head of Corporate ICT Telespazio VEGA Deutschland GmbH Europaplatz 5 - 64293 Darmstadt - Germany Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 oliver.weinmann at telespazio-vega.de www.telespazio-vega.de Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller From: OmniOS-discuss On Behalf Of Guenther Alka Sent: Montag, 11. 
Juni 2018 12:16 To: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] zfs send | recv I suppose you can either keep the last snaps identical on source and target with a simple zfs send recursively or you need a script that cares about and does a send per filesystem to allow a different number of snaps on the target system. This is not related to Nexenta but I saw the same on current OmniOS -> OmniOS as they use the same Open-ZFS base Illumos Gea @naapp-it.org Am 11.06.2018 um 10:58 schrieb Oliver Weinmann: Yes it is recursively. We have hundreds of child datasets so single filesystems would be a real headache to maintain. L [cid:image001.png at 01D40180.1FD80740] Oliver Weinmann Head of Corporate ICT Telespazio VEGA Deutschland GmbH Europaplatz 5 - 64293 Darmstadt - Germany Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 oliver.weinmann at telespazio-vega.de www.telespazio-vega.de Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller From: OmniOS-discuss On Behalf Of Guenther Alka Sent: Montag, 11. Juni 2018 09:55 To: omnios-discuss at lists.omniti.com Subject: Re: [OmniOS-discuss] zfs send | recv did you replicate recursively? keeping a different snap history should be possible when you send single filesystems. gea @napp-it.org Am 11.06.2018 um 09:11 schrieb Oliver Weinmann: Hi, We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta calls this feature autosync. While they say it is only 100% supported between nexenta systems, we managed to get it working with OmniOS too. It's Not rocket science. But there is one big problem. In the autosync job on the Nexenta system one can specify how many snaps to keep local on the nexenta and how many to keep on the target system. Somehow we always have the same amount of snaps on both systems. Autosync always cleans all snaps on the dest that don't exist on the source. I contacted nexenta support and they told me that this is due to different versions of zfs send and zfs recv. There should be a -K flag, that instructs the destination to not destroy snapshots that don't exist on the source. Is such a flag available in OmniOS? I assume the flag is set on the sending side so that the receiving side has to understand it. Best Regards, Oliver [cid:image001.png at 01D40180.1FD80740] Oliver Weinmann Head of Corporate ICT Telespazio VEGA Deutschland GmbH Europaplatz 5 - 64293 Darmstadt - Germany Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799 oliver.weinmann at telespazio-vega.de www.telespazio-vega.de Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, HRB 89231; Managing Director/Gesch?ftsf?hrer: Sigmar Keller _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss -- -- H f G Hochschule f?r Gestaltung university of design Schw?bisch Gm?nd Rektor-Klaus Str. 100 73525 Schw?bisch Gm?nd Guenther Alka, Dipl.-Ing. (FH) Leiter des Rechenzentrums head of computer center Tel 07171 602 627 Fax 07171 69259 guenther.alka at hfg-gmuend.de http://rz.hfg-gmuend.de -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 7535 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png Type: image/png Size: 7535 bytes Desc: Logo_Telespazio_180_px_signature_eng_b58fa623-e26d-4116-9230-766adacfe55e1111111111111.png URL:
From oliver.weinmann at me.com Thu Jun 14 07:04:45 2018 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Thu, 14 Jun 2018 07:04:45 +0000 (GMT) Subject: [OmniOS-discuss] VEEAM backups to CIFS fail only on OmniOS hyperconverged VM Message-ID: Dear All, I'm struggling with this issue since day one and I have not found any solution for it yet. We use VEEAM to back up our VMs and an OmniOS VM as a CIFS target. We have one OmniOS VM for internal and one for DMZ. VEEAM backups to the one for internal work fine. No problems at all. Backups to the DMZ one fail every time. I can access the CIFS share just fine from Windows. When the backup starts two or three VMs are backed up and then it fails. I have requested support from VEEAM and it turns out the same job running against a Windows server CIFS share works just fine. I couldn't believe that OmniOS is the culprit as the CIFS implementation from Illumos is very good. So I set up a new OmniOS bare-metal server and created a zone for DMZ. I set up a CIFS share and ran the same job. Everything works fine. I compared the settings from both the VM and the Zone and they are 100% identical. The only difference is one is a VM and one is a Zone. But since the VEEAM backup to the internal VM has no problems with the backup, I don't think virtualization is a problem here. Is there anywhere I can start investigating further? I would be more than happy to use a zone instead of a full blown VM but since there is no iSCSI and NFS server support in a Zone I have to stick with the VM as we need NFS since the VM is also a datastore for a few VMs. Any help is really appreciated. Best Regards, Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL:
From danmcd at kebe.com Thu Jun 14 12:38:55 2018 From: danmcd at kebe.com (Dan McDonald) Date: Thu, 14 Jun 2018 08:38:55 -0400 Subject: [OmniOS-discuss] VEEAM backups to CIFS fail only on OmniOS hyperconverged VM In-Reply-To: References: Message-ID: <471F46A3-4F09-4F27-B5BF-8381ED0F2A82@kebe.com> > On Jun 14, 2018, at 3:04 AM, Oliver Weinmann wrote: > I would be more than happy to use a zone instead of a full blown VM but since there is no iSCSI and NFS server support in a Zone I have to stick with the VM as we need NFS since the VM is also a datastore for a few VMs. You rambled a bit here, so I'm not sure what exactly you're asking. I do know that: - CIFS-server-in-a-zone should work - NFS-server-in-a-zone and iSCSI-target-in-a-zone are both not available right now. There is purported to be a prototype of NFS-server-in-a-zone kicking around *somewhere* but that may have been tied up. I'd watch distros, especially those working on file service, to see if that shows up at some point, where it can be upstreamed to illumos-gate (and then back down to illumos-omnios). Dan
From gijs at in2ip.nl Thu Jun 14 17:24:42 2018 From: gijs at in2ip.nl (gijs at in2ip.nl) Date: Thu, 14 Jun 2018 19:24:42 +0200 Subject: [OmniOS-discuss] Poor read performance on fresh zpool Message-ID: Hi, on an OmniOS CE r151022 system we have rebuilt our ZFS pool. The pool is constructed out of 18 mirror vdevs, each consisting of 2 1,8TB SAS drives.
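(For reference, a rough sketch of how we check what the pool ended up with, and the kind of sd.conf override used - the pool name and the inquiry string below are placeholders, not our real ones:)

    # show the mirror layout and per-vdev health
    zpool status tank
    # the cached pool config records the ashift chosen for each top-level vdev
    zdb -C tank | grep ashift
    # /kernel/drv/sd.conf entry forcing a 4k physical sector size for matching
    # drives (vendor field padded to 8 chars); reload with `update_drv -f sd`
    sd-config-list = "VENDOR  PRODUCT", "physical-block-size:4096";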
Since about half the disks are 512b and the other half are 512e (physical 4k) we opted to use 4k sectors using sd.conf for all devices; thus all vdevs report as ashift 12. During testing today we experienced extremely poor read performance. Doing sequential reads with dd of the pool results in a read rate of 300MB/s, benchmarking with bonnie++ gives about 600MB/s. Write performance is as expected, around 1,4GB/s. Scrub also performs as expected, zpool status shows a scrub speed of 2GB/s, iostat -x shows a total pool speed of 4GB/s. Any hints as to what might be causing our poor read performance? Sincerely, Gijs Peskens
From steve at linuxsuite.org Fri Jun 15 16:54:05 2018 From: steve at linuxsuite.org (steve at linuxsuite.org) Date: Fri, 15 Jun 2018 12:54:05 -0400 Subject: [OmniOS-discuss] Write performance regression from r14? Message-ID: Howdy! I have about 20 machines built on this image circa 2015? OmniOS_Text_r151014.usb-dd I just noticed that OmniOS is being continued as omniosce.org. I downloaded the latest install image and did some testing, and I got about 1/3 - 1/2 the write performance I was expecting on a simple RAIDZ setup. I installed fresh images from omnios.omniti.com and omniosce.org based on r22 r151022.usb-dd as well as the r14 image above. I get about 1/3 - 1/2 the write performance with r22 compared to r14. It is a simple write test using dd and measuring performance with zpool iostat. Hardware and zpool are identical for each test. I simply swapped out the boot disks and booted a different image. The hardware is simple: a DELL R710 with LSI-SAS9201-16e Thoughts? I would like to help resolve this if it interests anyone. r14 suits my purpose well enough... but this issue must affect others... -steve
From priyadarshan at scs.re Sat Jun 16 07:36:03 2018 From: priyadarshan at scs.re (priyadarshan) Date: Sat, 16 Jun 2018 09:36:03 +0200 Subject: [OmniOS-discuss] Write performance regression from r14? In-Reply-To: References: Message-ID: <7B69F314-9335-4C18-A130-3431566D97EF@scs.re> Hi Steve, We have just completed a period of testing using OmniOS 151026, having as reference our legacy platform, FreeBSD 11.2-RELEASE. The test was done using the exact same hardware, mainly on a simple 8-disk RAID-Z2. I/O throughput was comparable. Did you test using r151026 as well? I saw you opened a ticket, which is probably the preferred way. https://github.com/omniosorg/illumos-omnios/issues/225 Priyadarshan > On 15 Jun 2018, at 18:54, steve at linuxsuite.org wrote: > > Howdy! > > I have about 20 machines built on this image circa 2015? > > OmniOS_Text_r151014.usb-dd > > I just noticed that OmniOS is being continued as omniosce.org. > I downloaded the latest install image and did some testing, > and I got about 1/3 - 1/2 the write performance I was expecting on a > simple RAIDZ setup. > > I installed fresh images from omnios.omniti.com and omniosce.org > based on r22 > > r151022.usb-dd > > as well as the r14 image above. > > I get about 1/3 - 1/2 the write performance with r22 compared to r14. > It is a simple write test using dd and measuring performance with > zpool iostat. > > Hardware and zpool are identical for each test. I simply swapped out the > boot disks > and booted a different image. > > The hardware is simple: a DELL R710 with LSI-SAS9201-16e > > Thoughts? > > I would like to help resolve this if it interests anyone. > > r14 suits my purpose well enough... but this issue must affect others...
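(If it helps in reproducing the numbers, running the same scripted test on each boot environment keeps the comparison apples-to-apples - pool and disk names below are made up:)

    # same throwaway test pool, created fresh on each release being compared
    zpool create -f perftest raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
    # sequential write; zeros are fine as long as compression stays off
    dd if=/dev/zero of=/perftest/testfile bs=1024k count=50000 &
    # watch sustained throughput while dd runs
    zpool iostat perftest 5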
> > -steve > > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From mtalbott at lji.org Sat Jun 16 17:45:15 2018 From: mtalbott at lji.org (Michael Talbott) Date: Sat, 16 Jun 2018 10:45:15 -0700 Subject: [OmniOS-discuss] Big Data Message-ID: We've been using OmniOS happily for years now for our storage server needs. But we're rapidly increasing our data footprint and growing so much (multiple PBs per year) that ideally I'd like to move to a cluster based object store based system ontop of OmniOS. I successfully use BeeGFS inside lxzones in OmniOS which seems to work nicely for our HPC scratch volume, but, it doesn't sound like it scales to hundreds of millions of files very well. I am hoping that someone has some ideas for me. Ideally I'd like something that's cluster capable and has erasure coding like Ceph and have cluster aware snapshots (not local zfs snaps) and an s3 compatibility/access layer. Any thoughts on the topic are greatly appreciated. Thanks, Michael Sent from my iPhone From oliver.weinmann at me.com Mon Jun 18 06:19:38 2018 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Mon, 18 Jun 2018 06:19:38 +0000 (GMT) Subject: [OmniOS-discuss] =?utf-8?q?=C2=A0_VEEAM_backups_to_CIFS_fail_only?= =?utf-8?q?_on_OmniOS_hypercongerged_VM?= Message-ID: Hi, sorry for not being clear enough. Anyway, problem solved. I created a fresh OmniOS VM and configured CIFS with no AD connection, created a local user and now the backups are working fine since 4 days. :) Thanks and Best Regards, Oliver Am 14. Juni 2018 um 14:47 schrieb Dan McDonald : On Jun 14, 2018, at 3:04 AM, Oliver Weinmann wrote: I would be more than happy to use a zone instead of a full blown VM but since there is no ISCSI and NFS server support in a Zone I have to stick with the VM as we need NFS since the VM is also a datastore for a few VMs. You rambled a bit here, so I'm not sure what exactly you're asking. I do know that: - CIFS-server-in-a-zone should work - NFS-server-in-a-zone and iSCSI-target-in-a-zone are both not available right now. There is purported to be a prototype of NFS-server-in-a-zone kicking around *somewhere* but that may have been tied up. I'd watch distros, especially those working on file service, to see if that shows up at some point, where it can be upstreamed to illumos-gate (and then back down to illumos-omnios). Dan _______________________________________________ OmniOS-discuss mailing list OmniOS-discuss at lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at icloud.com Mon Jun 18 06:27:30 2018 From: oliver.weinmann at icloud.com (Oliver Weinmann) Date: Mon, 18 Jun 2018 06:27:30 +0000 (GMT) Subject: [OmniOS-discuss] Restrucuring ZPool required? Message-ID: <2a7c5b99-a775-40d8-95cc-ee744a6cbfd7@me.com> Hi, we have a HGST4u60 SATA JBOD with 24 x 10TB disks. I just saw that back then when we created the pool we only cared about disk space and so we created a raidz2 pool with all 24disks in one vdev. I have the impression that this is cool for disk space but is really bad for IO since this only provides the IO of a single disk. We only use it for backups and cold CIFS data but I have the impression that especially running a single VEEAM backup copy job really maxes out the IO. 
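(When a copy job runs we watch the pool roughly like this - "backup" is just a placeholder for our pool name:)

    # per-vdev view: with a single 24-disk raidz2 vdev every disk carries the same load
    zpool iostat -v backup 5
    # per-disk service times and %busy
    iostat -xn 5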
In our case the VEEAM backup copy job reads and writes the data from the storage. Now I wonder if it makes sense to restructure the Pool. I have to admit that I don't have any other system with a lot of disk space so I can't simply mirror the snapshots to another system and recreate the pool from scratch. Would adding two ZIL SSDs improve performance? Any help is much appreciated. Best Regards, Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From priyadarshan at scs.re Mon Jun 18 06:50:16 2018 From: priyadarshan at scs.re (priyadarshan) Date: Mon, 18 Jun 2018 08:50:16 +0200 Subject: [OmniOS-discuss] Restrucuring ZPool required? In-Reply-To: <2a7c5b99-a775-40d8-95cc-ee744a6cbfd7@me.com> References: <2a7c5b99-a775-40d8-95cc-ee744a6cbfd7@me.com> Message-ID: <9DA759CA-8039-48BF-BF08-9961C43AA906@scs.re> > On 18 Jun 2018, at 08:27, Oliver Weinmann wrote: > > Hi, > > we have a HGST4u60 SATA JBOD with 24 x 10TB disks. I just saw that back then when we created the pool we only cared about disk space and so we created a raidz2 pool with all 24disks in one vdev. I have the impression that this is cool for disk space but is really bad for IO since this only provides the IO of a single disk. We only use it for backups and cold CIFS data but I have the impression that especially running a single VEEAM backup copy job really maxes out the IO. In our case the VEEAM backup copy job reads and writes the data from the storage. Now I wonder if it makes sense to restructure the Pool. I have to admit that I don't have any other system with a lot of disk space so I can't simply mirror the snapshots to another system and recreate the pool from scratch. > > Would adding two ZIL SSDs improve performance? > > Any help is much appreciated. > > Best Regards, > Oliver Hi, I would be interested to know as well. Sometimes we have same issue: need for large space vs need to optmise for speed (read, write, or both). We also are using, at the moment, 10TB disks, although never do RAID-Z2 with more than 10 disks. This page has some testing that was useful to us: https://calomel.org/zfs_raid_speed_capacity.html Section ?Spinning platter hard drive raids? has your use case (although 4TB, not 10TB): 24x 4TB, 12 striped mirrors, 45.2 TB, w=696MB/s , rw=144MB/s , r=898MB/s 24x 4TB, raidz (raid5), 86.4 TB, w=567MB/s , rw=198MB/s , r=1304MB/s 24x 4TB, raidz2 (raid6), 82.0 TB, w=434MB/s , rw=189MB/s , r=1063MB/s 24x 4TB, raidz3 (raid7), 78.1 TB, w=405MB/s , rw=180MB/s , r=1117MB/s 24x 4TB, striped raid0, 90.4 TB, w=692MB/s , rw=260MB/s , r=1377MB/s Different adapters/disks will change the results, but I do not thing ratio will change much. It would be interesting to see how ZIL would affect that. Priyadarshan From lists at marzocchi.net Mon Jun 18 07:33:14 2018 From: lists at marzocchi.net (Olaf Marzocchi) Date: Mon, 18 Jun 2018 09:33:14 +0200 Subject: [OmniOS-discuss] Restrucuring ZPool required? In-Reply-To: <9DA759CA-8039-48BF-BF08-9961C43AA906@scs.re> References: <2a7c5b99-a775-40d8-95cc-ee744a6cbfd7@me.com> <9DA759CA-8039-48BF-BF08-9961C43AA906@scs.re> Message-ID: <0B38251B-4113-44DD-AA1A-C5AC2AADD8ED@marzocchi.net> In that page you should also check the raw output of dd, showing in the last column the IOPs. Olaf Il 18 giugno 2018 08:50:16 CEST, priyadarshan ha scritto: > >> On 18 Jun 2018, at 08:27, Oliver Weinmann > wrote: >> >> Hi, >> >> we have a HGST4u60 SATA JBOD with 24 x 10TB disks. 
I just saw that >back then when we created the pool we only cared about disk space and >so we created a raidz2 pool with all 24disks in one vdev. I have the >impression that this is cool for disk space but is really bad for IO >since this only provides the IO of a single disk. We only use it for >backups and cold CIFS data but I have the impression that especially >running a single VEEAM backup copy job really maxes out the IO. In our >case the VEEAM backup copy job reads and writes the data from the >storage. Now I wonder if it makes sense to restructure the Pool. I have >to admit that I don't have any other system with a lot of disk space so >I can't simply mirror the snapshots to another system and recreate the >pool from scratch. >> >> Would adding two ZIL SSDs improve performance? >> >> Any help is much appreciated. >> >> Best Regards, >> Oliver > >Hi, > >I would be interested to know as well. > >Sometimes we have same issue: need for large space vs need to optmise >for speed (read, write, or both). We also are using, at the moment, >10TB disks, although never do RAID-Z2 with more than 10 disks. > >This page has some testing that was useful to us: >https://calomel.org/zfs_raid_speed_capacity.html > >Section ?Spinning platter hard drive raids? has your use case (although >4TB, not 10TB): > >24x 4TB, 12 striped mirrors, 45.2 TB, w=696MB/s , rw=144MB/s , >r=898MB/s >24x 4TB, raidz (raid5), 86.4 TB, w=567MB/s , rw=198MB/s , >r=1304MB/s >24x 4TB, raidz2 (raid6), 82.0 TB, w=434MB/s , rw=189MB/s , >r=1063MB/s >24x 4TB, raidz3 (raid7), 78.1 TB, w=405MB/s , rw=180MB/s , >r=1117MB/s >24x 4TB, striped raid0, 90.4 TB, w=692MB/s , rw=260MB/s , >r=1377MB/s > >Different adapters/disks will change the results, but I do not thing >ratio will change much. > >It would be interesting to see how ZIL would affect that. > > >Priyadarshan >_______________________________________________ >OmniOS-discuss mailing list >OmniOS-discuss at lists.omniti.com >http://lists.omniti.com/mailman/listinfo/omnios-discuss From alka at hfg-gmuend.de Mon Jun 18 09:46:00 2018 From: alka at hfg-gmuend.de (Guenther Alka) Date: Mon, 18 Jun 2018 11:46:00 +0200 Subject: [OmniOS-discuss] Restrucuring ZPool required? In-Reply-To: <9DA759CA-8039-48BF-BF08-9961C43AA906@scs.re> References: <2a7c5b99-a775-40d8-95cc-ee744a6cbfd7@me.com> <9DA759CA-8039-48BF-BF08-9961C43AA906@scs.re> Message-ID: An Slog (you wrote ZIL but you meant Slog as ZIL is onpool logging while Slog is logging on a dedicated device) is not a write cache. It's a logging feature when sync write is enabled and only read after a crash on next bootup. CIFS does not use sync per default and for NFS (that wants sync per default) you can and should disable when you use NFS as a pure backup target. Your benchmarks clearly show that sync is not enabled otherwise write performance with a large vdev would be more like 30-50 MB/s instead your 400-700 MB/s. If you want to enable sync, you should look at Intel Optane as Slog as this is far better than any other Flash based Slog. ZFS use RAM as read and write cache. The default write cache is 10% of RAM up to 4GB so the first option to improve write (and read) performance is to add more RAM. A fast L2Arc (ex Intel Optane, size 5-max 10x RAM) can help if you cannot increase RAM or if you want a read ahead functionality that you can enable on an L2Arc. Even write performance is improved with a larger read cache as even writes need to read metadata. Beside that, I would not create a raid Zn vdev from 24 disks. 
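A pool built from smaller groups is only a different grouping at creation time, for example (disk names are placeholders):

    zpool create backup \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
      raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0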
I would prefer 3 vdevs from 8 disks or at least two vdevs from 12 disks as pool iops scale with number of vdevs. Gea @napp-it.org Am 18.06.2018 um 08:50 schrieb priyadarshan: >> On 18 Jun 2018, at 08:27, Oliver Weinmann wrote: >> >> Hi, >> >> we have a HGST4u60 SATA JBOD with 24 x 10TB disks. I just saw that back then when we created the pool we only cared about disk space and so we created a raidz2 pool with all 24disks in one vdev. I have the impression that this is cool for disk space but is really bad for IO since this only provides the IO of a single disk. We only use it for backups and cold CIFS data but I have the impression that especially running a single VEEAM backup copy job really maxes out the IO. In our case the VEEAM backup copy job reads and writes the data from the storage. Now I wonder if it makes sense to restructure the Pool. I have to admit that I don't have any other system with a lot of disk space so I can't simply mirror the snapshots to another system and recreate the pool from scratch. >> >> Would adding two ZIL SSDs improve performance? >> >> Any help is much appreciated. >> >> Best Regards, >> Oliver > Hi, > > I would be interested to know as well. > > Sometimes we have same issue: need for large space vs need to optmise for speed (read, write, or both). We also are using, at the moment, 10TB disks, although never do RAID-Z2 with more than 10 disks. > > This page has some testing that was useful to us: https://calomel.org/zfs_raid_speed_capacity.html > > Section ?Spinning platter hard drive raids? has your use case (although 4TB, not 10TB): > > 24x 4TB, 12 striped mirrors, 45.2 TB, w=696MB/s , rw=144MB/s , r=898MB/s > 24x 4TB, raidz (raid5), 86.4 TB, w=567MB/s , rw=198MB/s , r=1304MB/s > 24x 4TB, raidz2 (raid6), 82.0 TB, w=434MB/s , rw=189MB/s , r=1063MB/s > 24x 4TB, raidz3 (raid7), 78.1 TB, w=405MB/s , rw=180MB/s , r=1117MB/s > 24x 4TB, striped raid0, 90.4 TB, w=692MB/s , rw=260MB/s , r=1377MB/s > > Different adapters/disks will change the results, but I do not thing ratio will change much. > > It would be interesting to see how ZIL would affect that. > > > Priyadarshan > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From priyadarshan at scs.re Tue Jun 19 06:52:55 2018 From: priyadarshan at scs.re (priyadarshan) Date: Tue, 19 Jun 2018 08:52:55 +0200 Subject: [OmniOS-discuss] Restrucuring ZPool required? In-Reply-To: References: <2a7c5b99-a775-40d8-95cc-ee744a6cbfd7@me.com> <9DA759CA-8039-48BF-BF08-9961C43AA906@scs.re> Message-ID: Thank you Gea, Very useful and informative details. Priyadarshan > On 18 Jun 2018, at 11:46, Guenther Alka wrote: > > An Slog (you wrote ZIL but you meant Slog as ZIL is onpool logging while Slog is logging on a dedicated device) is not a write cache. It's a logging feature when sync write is enabled and only read after a crash on next bootup. CIFS does not use sync per default and for NFS (that wants sync per default) you can and should disable when you use NFS as a pure backup target. Your benchmarks clearly show that sync is not enabled otherwise write performance with a large vdev would be more like 30-50 MB/s instead your 400-700 MB/s. > > If you want to enable sync, you should look at Intel Optane as Slog as this is far better than any other Flash based Slog. > > ZFS use RAM as read and write cache. 
The default write cache is 10% of RAM up to 4GB so the first option to improve write (and read) performance is to add more RAM. A fast L2Arc (ex Intel Optane, size 5-max 10x RAM) can help if you cannot increase RAM or if you want a read ahead functionality that you can enable on an L2Arc. Even write performance is improved with a larger read cache as even writes need to read metadata. > > Beside that, I would not create a raid Zn vdev from 24 disks. I would prefer 3 vdevs from 8 disks or at least two vdevs from 12 disks as pool iops scale with number of vdevs. > > > Gea > @napp-it.org > > Am 18.06.2018 um 08:50 schrieb priyadarshan: >>> On 18 Jun 2018, at 08:27, Oliver Weinmann >>> wrote: >>> >>> Hi, >>> >>> we have a HGST4u60 SATA JBOD with 24 x 10TB disks. I just saw that back then when we created the pool we only cared about disk space and so we created a raidz2 pool with all 24disks in one vdev. I have the impression that this is cool for disk space but is really bad for IO since this only provides the IO of a single disk. We only use it for backups and cold CIFS data but I have the impression that especially running a single VEEAM backup copy job really maxes out the IO. In our case the VEEAM backup copy job reads and writes the data from the storage. Now I wonder if it makes sense to restructure the Pool. I have to admit that I don't have any other system with a lot of disk space so I can't simply mirror the snapshots to another system and recreate the pool from scratch. >>> >>> Would adding two ZIL SSDs improve performance? >>> >>> Any help is much appreciated. >>> >>> Best Regards, >>> Oliver >>> >> Hi, >> >> I would be interested to know as well. >> >> Sometimes we have same issue: need for large space vs need to optmise for speed (read, write, or both). We also are using, at the moment, 10TB disks, although never do RAID-Z2 with more than 10 disks. >> >> This page has some testing that was useful to us: >> https://calomel.org/zfs_raid_speed_capacity.html >> >> >> Section ?Spinning platter hard drive raids? has your use case (although 4TB, not 10TB): >> >> 24x 4TB, 12 striped mirrors, 45.2 TB, w=696MB/s , rw=144MB/s , r=898MB/s >> 24x 4TB, raidz (raid5), 86.4 TB, w=567MB/s , rw=198MB/s , r=1304MB/s >> 24x 4TB, raidz2 (raid6), 82.0 TB, w=434MB/s , rw=189MB/s , r=1063MB/s >> 24x 4TB, raidz3 (raid7), 78.1 TB, w=405MB/s , rw=180MB/s , r=1117MB/s >> 24x 4TB, striped raid0, 90.4 TB, w=692MB/s , rw=260MB/s , r=1377MB/s >> >> Different adapters/disks will change the results, but I do not thing ratio will change much. >> >> It would be interesting to see how ZIL would affect that. >> >> >> Priyadarshan >> _______________________________________________ >> OmniOS-discuss mailing list >> >> OmniOS-discuss at lists.omniti.com >> http://lists.omniti.com/mailman/listinfo/omnios-discuss > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss From ikaufman at eng.ucsd.edu Tue Jun 19 18:11:12 2018 From: ikaufman at eng.ucsd.edu (Ian Kaufman) Date: Tue, 19 Jun 2018 11:11:12 -0700 Subject: [OmniOS-discuss] Big Data In-Reply-To: References: Message-ID: You might want to read this thread (especially the comments). http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-January/015599.html On Sat, Jun 16, 2018 at 10:46 AM Michael Talbott wrote: > We've been using OmniOS happily for years now for our storage server > needs. 
But we're rapidly increasing our data footprint and growing so much > (multiple PBs per year) that ideally I'd like to move to a cluster based > object store based system ontop of OmniOS. I successfully use BeeGFS inside > lxzones in OmniOS which seems to work nicely for our HPC scratch volume, > but, it doesn't sound like it scales to hundreds of millions of files very > well. > > I am hoping that someone has some ideas for me. Ideally I'd like something > that's cluster capable and has erasure coding like Ceph and have cluster > aware snapshots (not local zfs snaps) and an s3 compatibility/access layer. > > Any thoughts on the topic are greatly appreciated. > > Thanks, > > Michael > Sent from my iPhone > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -- Ian Kaufman Research Systems Administrator UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From chip at innovates.com Tue Jun 19 18:48:07 2018 From: chip at innovates.com (Schweiss, Chip) Date: Tue, 19 Jun 2018 13:48:07 -0500 Subject: [OmniOS-discuss] Big Data In-Reply-To: References: Message-ID: I've used BeeGFS on Ubuntu 16.04 for about a year now. I like your idea of putting it lxzones on OmniOS for scratch space. I have found it to scale with millions of files very well. It's running on a 4 node cluster. Each node is client, metadata and data nodes. These are very big GPU boxes with 9 Tesla GPUs, 40 CPU cores and 256GB ram. The metadata is mirrored on 2 Samsung Pro SSDs on each node. It sustains about 33k metadata ops with never more than one queued. This is my third iteration of setting it up. Metadata performance was our bottleneck each time previously. What I have found is that latency and horizontal scaling is king with BeeGFS metadata. It doesn't take a lot of CPU, but keep it close as possible on the network to the clients and keep latency low with fast network and SSDs. My complaints about BeeGFS is lack of snapshots, so backup is limited to rsync of a live file system. For this reason it's only used for this very high read demand cluster. I still use ZFS on OmniOS for our PBs of data where snapshots and replication are priceless. -Chip On Sat, Jun 16, 2018 at 12:45 PM, Michael Talbott wrote: > We've been using OmniOS happily for years now for our storage server > needs. But we're rapidly increasing our data footprint and growing so much > (multiple PBs per year) that ideally I'd like to move to a cluster based > object store based system ontop of OmniOS. I successfully use BeeGFS inside > lxzones in OmniOS which seems to work nicely for our HPC scratch volume, > but, it doesn't sound like it scales to hundreds of millions of files very > well. > > I am hoping that someone has some ideas for me. Ideally I'd like something > that's cluster capable and has erasure coding like Ceph and have cluster > aware snapshots (not local zfs snaps) and an s3 compatibility/access layer. > > Any thoughts on the topic are greatly appreciated. > > Thanks, > > Michael > Sent from my iPhone > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From icoomnios at gmail.com Thu Jun 21 10:50:24 2018 From: icoomnios at gmail.com (anthony omnios) Date: Thu, 21 Jun 2018 12:50:24 +0200 Subject: [OmniOS-discuss] Zvol write a lot of data Message-ID: Hi, i am testing a new plateform with OmniosCe with 7 VM on zvol with ISCSI comstar. I have set sync=always for all zvol and i have 2 ssd intel 3700 for zil and two mirror ssd for data. Data Disk are samsung 850 evo (ashift=13). My problem is that the pool commit to data disk approximatively 5MB every 5 second but i have only few data is write on zil (sync=always on zvol) and my 7 vm are only up with no disk activity. With 40 VM with no disk activity on it, i flush to data disk approximatively 30 MB every 5 seconds. How can i write a lot of data on data disk without network iscsi trafic and no disk activity on VM and no disk activity on zil ? What type of data is it (metadata ?) ? zpool status pool: filervm2 state: ONLINE scan: resilvered 0 in 0h0m with 0 errors on Mon Jun 4 17:46:40 2018 config: NAME STATE READ WRITE CKSUM filervm2 ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c0t5002538E40102BECd0 ONLINE 0 0 0 c0t5002538E40264251d0 ONLINE 0 0 0 logs mirror-1 ONLINE 0 0 0 c0t55CD2E404B73F8F1d0 ONLINE 0 0 0 c0t55CD2E404C270DD9d0 ONLINE 0 0 0 iostat -xn -d 1 : extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 2,0 39,4 80,6 2014,5 0,2 0,0 5,4 0,4 0 1 filervm2 0,0 0,9 0,3 12,1 0,1 0,0 89,9 3,7 0 0 rpool 0,0 0,5 0,1 6,0 0,0 0,0 0,0 3,4 0 0 c0t55CD2E414D2D4782d0 0,0 0,5 0,1 6,0 0,0 0,0 0,0 3,4 0 0 c0t55CD2E414D2D4713d0 0,0 14,5 0,0 476,5 0,0 0,0 0,0 0,4 0 0 c0t55CD2E404B73F8F1d0 0,0 8,9 0,0 476,5 0,0 0,0 0,0 0,3 0 0 c0t55CD2E404C270DD9d0 1,0 11,0 40,3 530,8 0,0 0,0 0,0 0,2 0 0 c0t5002538E40102BECd0 1,0 11,0 40,3 530,8 0,0 0,0 0,0 0,2 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 20,0 0,0 304,0 0,0 0,0 0,0 0,2 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 20,0 0,0 152,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404B73F8F1d0 0,0 10,0 0,0 152,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404B73F8F1d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 212,1 0,0 9826,4 1,9 0,1 8,8 0,3 3 3 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 11,0 0,0 56,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404B73F8F1d0 0,0 5,0 0,0 56,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404C270DD9d0 0,0 101,0 0,0 4857,2 0,0 0,0 0,0 0,2 0 2 c0t5002538E40102BECd0 0,0 105,0 0,0 4857,2 0,0 0,0 0,0 0,2 0 2 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 38,0 0,0 735,5 0,0 0,0 0,0 0,2 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 
rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 37,0 0,0 367,7 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404B73F8F1d0 0,0 19,0 0,0 367,7 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 8,0 0,0 64,0 0,0 0,0 0,0 0,2 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 8,0 0,0 32,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404B73F8F1d0 0,0 4,0 0,0 32,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 4,0 0,0 32,0 0,0 0,0 0,0 0,2 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 4,0 0,0 16,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404B73F8F1d0 0,0 2,0 0,0 16,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404B73F8F1d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 239,1 0,0 11491,4 2,6 0,1 10,9 0,3 3 3 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 1,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404B73F8F1d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404C270DD9d0 0,0 118,0 0,0 5745,7 0,0 0,0 0,0 0,2 0 2 c0t5002538E40102BECd0 0,0 125,0 0,0 5745,7 0,0 0,0 0,0 0,2 0 2 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404B73F8F1d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404B73F8F1d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t 
%w %b device 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404B73F8F1d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 18,0 0,0 272,1 0,0 0,0 0,0 0,5 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 18,0 0,0 136,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404B73F8F1d0 0,0 9,0 0,0 136,0 0,0 0,0 0,0 0,2 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 168,0 0,0 6336,8 0,9 0,1 5,4 0,3 2 3 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 15,0 0,0 96,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404B73F8F1d0 0,0 7,0 0,0 96,0 0,0 0,0 0,0 0,1 0 0 c0t55CD2E404C270DD9d0 0,0 76,0 0,0 3072,4 0,0 0,0 0,0 0,2 0 2 c0t5002538E40102BECd0 0,0 82,0 0,0 3072,4 0,0 0,0 0,0 0,2 0 2 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404B73F8F1d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4782d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E414D2D4713d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404B73F8F1d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t55CD2E404C270DD9d0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40102BECd0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 c0t5002538E40264251d0 zfs get all filervm2/hdd-110133a NAME PROPERTY VALUE SOURCE filervm2/hdd-110133a type volume - filervm2/hdd-110133a creation mer. 
juin 20 17:38 2018 - filervm2/hdd-110133a used 1,80G - filervm2/hdd-110133a available 860G - filervm2/hdd-110133a referenced 2,28G - filervm2/hdd-110133a compressratio 1.74x - filervm2/hdd-110133a origin filervm2/template-ha-centos7-64bits at template1 - filervm2/hdd-110133a reservation none default filervm2/hdd-110133a volsize 25G local filervm2/hdd-110133a volblocksize 64K - filervm2/hdd-110133a checksum on default filervm2/hdd-110133a compression lz4 local filervm2/hdd-110133a readonly off default filervm2/hdd-110133a copies 1 default filervm2/hdd-110133a refreservation none default filervm2/hdd-110133a primarycache all default filervm2/hdd-110133a secondarycache all default filervm2/hdd-110133a usedbysnapshots 150M - filervm2/hdd-110133a usedbydataset 1,65G - filervm2/hdd-110133a usedbychildren 0 - filervm2/hdd-110133a usedbyrefreservation 0 - filervm2/hdd-110133a logbias latency default filervm2/hdd-110133a dedup off default filervm2/hdd-110133a mlslabel none default filervm2/hdd-110133a sync always local filervm2/hdd-110133a refcompressratio 1.67x - filervm2/hdd-110133a written 6,21M - filervm2/hdd-110133a logicalused 3,09G - filervm2/hdd-110133a logicalreferenced 3,81G - filervm2/hdd-110133a snapshot_limit none default filervm2/hdd-110133a snapshot_count none default filervm2/hdd-110133a redundant_metadata all default Best regards, Anthony -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkateley at kateley.com Thu Jun 21 12:51:40 2018 From: lkateley at kateley.com (Linda Kateley) Date: Thu, 21 Jun 2018 07:51:40 -0500 Subject: [OmniOS-discuss] Zvol write a lot of data In-Reply-To: References: Message-ID: <18057726-8b41-ab54-f90e-dc359eee9f27@kateley.com> You should be able to get zilstat(just google) and get much more details on what is happening in zil. I haven't done this for awhile in omni, but I increase the transaction timeout in freebsd or linux quite often. The 5 seconds is set by a variable. Put this into /etc/system and reboot./ / ///set zfs:zfs_txg_timeout = 1/ // Quite often with zfs you won't see disk activity. That is it's beauty. All of what you need is probably running from ram. Linda On 6/21/18 5:50 AM, anthony omnios wrote: > Hi, > > i am testing a new plateform with OmniosCe with 7 VM on zvol with > ISCSI comstar. > > I have set sync=always for all zvol and? i have 2 ssd intel 3700 for > zil and two mirror ssd for data. > > Data Disk are samsung 850 evo (ashift=13). > > My problem is that the pool commit to data disk approximatively 5MB > every 5 second but i have only few data is write on zil (sync=always > on zvol) and my 7 vm are only up with no disk activity. > > With 40 VM with no disk activity on it, i flush to data disk > approximatively 30 MB every 5 seconds. > > How can i write a lot of data on data disk without network iscsi > trafic and no disk activity on VM and no disk activity on zil ? What > type of data is it (metadata ?) ? > > ?zpool status > ? pool: filervm2 > ?state: ONLINE > ? scan: resilvered 0 in 0h0m with 0 errors on Mon Jun? 4 17:46:40 2018 > config: > > ??????? NAME?????????????????????? STATE???? READ WRITE CKSUM > ??????? filervm2?????????????????? ONLINE?????? 0???? 0???? 0 > ????????? mirror-0???????????????? ONLINE?????? 0???? 0???? 0 > ??????????? c0t5002538E40102BECd0? ONLINE?????? 0???? 0???? 0 > ??????????? c0t5002538E40264251d0? ONLINE?????? 0???? 0???? 0 > ??????? logs > ????????? mirror-1???????????????? ONLINE?????? 0???? 0???? 0 > ??????????? c0t55CD2E404B73F8F1d0? ONLINE?????? 0???? 0???? 
0 > ??????????? c0t55CD2E404C270DD9d0? ONLINE?????? 0???? 0???? 0 > > iostat -xn -d 1 : > > ???? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 2,0?? 39,4?? 80,6 2014,5? 0,2? 0,0??? 5,4??? 0,4?? 0?? 1 filervm2 > ??? 0,0??? 0,9??? 0,3?? 12,1? 0,1? 0,0?? 89,9??? 3,7?? 0?? 0 rpool > ??? 0,0??? 0,5??? 0,1??? 6,0? 0,0? 0,0??? 0,0??? 3,4?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,5??? 0,1??? 6,0? 0,0? 0,0??? 0,0??? 3,4?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0?? 14,5??? 0,0? 476,5? 0,0? 0,0??? 0,0??? 0,4?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 8,9??? 0,0? 476,5? 0,0? 0,0??? 0,0??? 0,3?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 1,0?? 11,0?? 40,3? 530,8? 0,0? 0,0??? 0,0??? 0,2?? 0?? 0 > c0t5002538E40102BECd0 > ??? 1,0?? 11,0?? 40,3? 530,8? 0,0? 0,0??? 0,0??? 0,2?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0?? 20,0??? 0,0? 304,0? 0,0? 0,0??? 0,0??? 0,2?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0?? 20,0??? 0,0? 152,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0?? 10,0??? 0,0? 152,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0? 212,1??? 0,0 9826,4? 1,9? 0,1??? 8,8??? 0,3?? 3?? 3 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0?? 11,0??? 0,0?? 56,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 5,0??? 0,0?? 56,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0? 101,0??? 0,0 4857,2? 0,0? 0,0??? 0,0??? 0,2?? 0?? 2 > c0t5002538E40102BECd0 > ??? 0,0? 105,0??? 0,0 4857,2? 0,0? 0,0??? 0,0??? 0,2?? 0?? 2 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0?? 38,0??? 0,0? 735,5? 0,0? 0,0??? 0,0??? 0,2?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 
0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0?? 37,0??? 0,0? 367,7? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0?? 19,0??? 0,0? 367,7? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 8,0??? 0,0?? 64,0? 0,0? 0,0??? 0,0??? 0,2?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 8,0??? 0,0?? 32,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 4,0??? 0,0?? 32,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 4,0??? 0,0?? 32,0? 0,0? 0,0??? 0,0??? 0,2?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 4,0??? 0,0?? 16,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 2,0??? 0,0?? 16,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0? 239,1??? 0,0 11491,4? 2,6? 0,1?? 10,9??? 0,3?? 3?? 3 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 1,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0? 118,0??? 0,0 5745,7? 0,0? 0,0??? 0,0??? 0,2?? 0?? 2 > c0t5002538E40102BECd0 > ??? 0,0? 125,0??? 0,0 5745,7? 0,0? 0,0??? 0,0??? 
0,2?? 0?? 2 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0?? 18,0??? 0,0? 272,1? 0,0? 0,0??? 0,0??? 0,5?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0?? 18,0??? 0,0? 136,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 9,0??? 0,0? 136,0? 0,0? 0,0??? 0,0??? 0,2?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0? 168,0??? 0,0 6336,8? 0,9? 0,1??? 5,4??? 0,3?? 2?? 3 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 
0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0?? 15,0??? 0,0?? 96,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 7,0??? 0,0?? 96,0? 0,0? 0,0??? 0,0??? 0,1?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0?? 76,0??? 0,0 3072,4? 0,0? 0,0??? 0,0??? 0,2?? 0?? 2 > c0t5002538E40102BECd0 > ??? 0,0?? 82,0??? 0,0 3072,4? 0,0? 0,0??? 0,0??? 0,2?? 0?? 2 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > ??????????????????? extended device statistics > ??? r/s??? w/s?? kr/s?? kw/s wait actv wsvc_t asvc_t? %w? %b device > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 filervm2 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 rpool > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4782d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E414D2D4713d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404B73F8F1d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t55CD2E404C270DD9d0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40102BECd0 > ??? 0,0??? 0,0??? 0,0??? 0,0? 0,0? 0,0??? 0,0??? 0,0?? 0?? 0 > c0t5002538E40264251d0 > > zfs get all filervm2/hdd-110133a > NAME????????????????? PROPERTY VALUE SOURCE > filervm2/hdd-110133a? type volume - > filervm2/hdd-110133a? creation????????????? mer. juin 20 17:38 > 2018??????????????????????????????????????????? - > filervm2/hdd-110133a? used 1,80G - > filervm2/hdd-110133a? available 860G - > filervm2/hdd-110133a? referenced 2,28G - > filervm2/hdd-110133a? compressratio 1.74x - > filervm2/hdd-110133a? origin > filervm2/template-ha-centos7-64bits at template1? - > filervm2/hdd-110133a? reservation none default > filervm2/hdd-110133a? volsize 25G local > filervm2/hdd-110133a? volblocksize 64K - > filervm2/hdd-110133a? checksum on default > filervm2/hdd-110133a? compression lz4 local > filervm2/hdd-110133a? readonly off default > filervm2/hdd-110133a? copies 1 default > filervm2/hdd-110133a? refreservation none default > filervm2/hdd-110133a? primarycache all default > filervm2/hdd-110133a? secondarycache all default > filervm2/hdd-110133a? usedbysnapshots 150M - > filervm2/hdd-110133a? usedbydataset 1,65G - > filervm2/hdd-110133a? usedbychildren 0 - > filervm2/hdd-110133a? usedbyrefreservation 0 - > filervm2/hdd-110133a? logbias latency default > filervm2/hdd-110133a? dedup off default > filervm2/hdd-110133a? mlslabel none default > filervm2/hdd-110133a? sync always local > filervm2/hdd-110133a? refcompressratio 1.67x - > filervm2/hdd-110133a? written 6,21M - > filervm2/hdd-110133a? logicalused 3,09G - > filervm2/hdd-110133a? logicalreferenced 3,81G - > filervm2/hdd-110133a? snapshot_limit none default > filervm2/hdd-110133a? 
snapshot_count none default > filervm2/hdd-110133a? redundant_metadata all default > > > Best regards, > > Anthony > > > > > _______________________________________________ > OmniOS-discuss mailing list > OmniOS-discuss at lists.omniti.com > http://lists.omniti.com/mailman/listinfo/omnios-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From icoomnios at gmail.com Fri Jun 22 08:11:59 2018 From: icoomnios at gmail.com (anthony omnios) Date: Fri, 22 Jun 2018 10:11:59 +0200 Subject: [OmniOS-discuss] Fwd: Zvol write a lot of data In-Reply-To: References: <18057726-8b41-ab54-f90e-dc359eee9f27@kateley.com> Message-ID: Thanks for you reply. I am writing very few data on zil: ./zilstat.ksh -t 5 TIME N-Bytes N-Bytes/s N-Max-Rate B-Bytes B-Bytes/s B-Max-Rate ops <=4kB 4-32kB >=32kB 2018 Jun 21 17:58:53 106560 21312 106560 860160 172032 860160 8 0 0 8 2018 Jun 21 17:58:58 2288 457 2288 262144 52428 262144 2 0 0 2 2018 Jun 21 17:59:03 1949816 389963 1935408 5373952 1074790 4456448 41 0 0 41 2018 Jun 21 17:59:08 0 0 0 0 0 0 0 0 0 0 2018 Jun 21 17:59:13 0 0 0 0 0 0 0 0 0 0 2018 Jun 21 17:59:18 888 177 888 131072 26214 131072 1 0 0 1 2018 Jun 21 17:59:23 414512 82902 414512 1310720 262144 1310720 10 0 0 10 2018 Jun 21 17:59:28 2288 457 2288 262144 52428 262144 2 0 0 2 2018 Jun 21 17:59:33 8568 1713 8568 131072 26214 131072 1 0 0 1 2018 Jun 21 17:59:38 31272 6254 31272 393216 78643 393216 3 0 0 3 2018 Jun 21 17:59:43 888 177 888 131072 26214 131072 1 0 0 1 2018 Jun 21 17:59:48 0 0 0 0 0 0 0 0 0 0 2018 Jun 21 17:59:53 548384 109676 548384 3223552 644710 3223552 44 0 0 44 2018 Jun 21 17:59:58 2288 457 2288 262144 52428 262144 2 0 0 2 2018 Jun 21 18:00:03 405088 81017 305456 2433024 486604 1122304 20 0 0 20 Do you know why i am writing lot of data on disk pool (my 7 VMS are only online without write op?ration), not on zil ssd (but sync=always ! ). With 40 VM with no disk activity on it, i flush to data disk approximatively 30 MB every 5 seconds. What type of data is it (metadata ?) How can i readuce data write on disks ? Best regards 2018-06-21 14:51 GMT+02:00 Linda Kateley : > You should be able to get zilstat(just google) and get much more details > on what is happening in zil. > > I haven't done this for awhile in omni, but I increase the transaction > timeout in freebsd or linux quite often. The 5 seconds is set by a > variable. Put this into /etc/system and reboot. > > *set zfs:zfs_txg_timeout = 1* > > Quite often with zfs you won't see disk activity. That is it's beauty. All > of what you need is probably running from ram. > > Linda > > > On 6/21/18 5:50 AM, anthony omnios wrote: > > Hi, > > i am testing a new plateform with OmniosCe with 7 VM on zvol with ISCSI > comstar. > > I have set sync=always for all zvol and i have 2 ssd intel 3700 for zil > and two mirror ssd for data. > > Data Disk are samsung 850 evo (ashift=13). > > My problem is that the pool commit to data disk approximatively 5MB every > 5 second but i have only few data is write on zil (sync=always on zvol) and > my 7 vm are only up with no disk activity. > > With 40 VM with no disk activity on it, i flush to data disk > approximatively 30 MB every 5 seconds. > > How can i write a lot of data on data disk without network iscsi trafic > and no disk activity on VM and no disk activity on zil ? What type of data > is it (metadata ?) ? 
> > zpool status > pool: filervm2 > state: ONLINE > scan: resilvered 0 in 0h0m with 0 errors on Mon Jun 4 17:46:40 2018 > config: > > NAME STATE READ WRITE CKSUM > filervm2 ONLINE 0 0 0 > mirror-0 ONLINE 0 0 0 > c0t5002538E40102BECd0 ONLINE 0 0 0 > c0t5002538E40264251d0 ONLINE 0 0 0 > logs > mirror-1 ONLINE 0 0 0 > c0t55CD2E404B73F8F1d0 ONLINE 0 0 0 > c0t55CD2E404C270DD9d0 ONLINE 0 0 0 > > iostat -xn -d 1 : > > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 2,0 39,4 80,6 2014,5 0,2 0,0 5,4 0,4 0 1 filervm2 > 0,0 0,9 0,3 12,1 0,1 0,0 89,9 3,7 0 0 rpool > 0,0 0,5 0,1 6,0 0,0 0,0 0,0 3,4 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,5 0,1 6,0 0,0 0,0 0,0 3,4 0 0 > c0t55CD2E414D2D4713d0 > 0,0 14,5 0,0 476,5 0,0 0,0 0,0 0,4 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 8,9 0,0 476,5 0,0 0,0 0,0 0,3 0 0 > c0t55CD2E404C270DD9d0 > 1,0 11,0 40,3 530,8 0,0 0,0 0,0 0,2 0 0 > c0t5002538E40102BECd0 > 1,0 11,0 40,3 530,8 0,0 0,0 0,0 0,2 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 20,0 0,0 304,0 0,0 0,0 0,0 0,2 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 20,0 0,0 152,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 10,0 0,0 152,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 212,1 0,0 9826,4 1,9 0,1 8,8 0,3 3 3 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 11,0 0,0 56,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 5,0 0,0 56,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404C270DD9d0 > 0,0 101,0 0,0 4857,2 0,0 0,0 0,0 0,2 0 2 > c0t5002538E40102BECd0 > 0,0 105,0 0,0 4857,2 0,0 0,0 0,0 0,2 0 2 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 38,0 0,0 735,5 0,0 0,0 0,0 0,2 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 37,0 0,0 367,7 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 19,0 0,0 367,7 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 8,0 0,0 64,0 0,0 0,0 0,0 0,2 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 8,0 0,0 32,0 0,0 0,0 0,0 0,1 0 0 
> c0t55CD2E404B73F8F1d0 > 0,0 4,0 0,0 32,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 4,0 0,0 32,0 0,0 0,0 0,0 0,2 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 4,0 0,0 16,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 2,0 0,0 16,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 239,1 0,0 11491,4 2,6 0,1 10,9 0,3 3 3 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 1,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404C270DD9d0 > 0,0 118,0 0,0 5745,7 0,0 0,0 0,0 0,2 0 2 > c0t5002538E40102BECd0 > 0,0 125,0 0,0 5745,7 0,0 0,0 0,0 0,2 0 2 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics 
> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 18,0 0,0 272,1 0,0 0,0 0,0 0,5 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 18,0 0,0 136,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 9,0 0,0 136,0 0,0 0,0 0,0 0,2 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 168,0 0,0 6336,8 0,9 0,1 5,4 0,3 2 3 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 15,0 0,0 96,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 7,0 0,0 96,0 0,0 0,0 0,0 0,1 0 0 > c0t55CD2E404C270DD9d0 > 0,0 76,0 0,0 3072,4 0,0 0,0 0,0 0,2 0 2 > c0t5002538E40102BECd0 > 0,0 82,0 0,0 3072,4 0,0 0,0 0,0 0,2 0 2 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > extended device statistics > r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 filervm2 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 rpool > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4782d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E414D2D4713d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404B73F8F1d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t55CD2E404C270DD9d0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40102BECd0 > 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0,0 0 0 > c0t5002538E40264251d0 > > zfs get all filervm2/hdd-110133a > NAME PROPERTY VALUE > SOURCE > filervm2/hdd-110133a type volume > - > filervm2/hdd-110133a creation mer. 
juin 20 17:38 > 2018 - > filervm2/hdd-110133a used 1,80G > - > filervm2/hdd-110133a available 860G > - > filervm2/hdd-110133a referenced 2,28G > - > filervm2/hdd-110133a compressratio 1.74x > - > filervm2/hdd-110133a origin filervm2/template-ha-centos7-6 > 4bits at template1 - > filervm2/hdd-110133a reservation none > default > filervm2/hdd-110133a volsize 25G > local > filervm2/hdd-110133a volblocksize 64K > - > filervm2/hdd-110133a checksum on > default > filervm2/hdd-110133a compression lz4 > local > filervm2/hdd-110133a readonly off > default > filervm2/hdd-110133a copies 1 > default > filervm2/hdd-110133a refreservation none > default > filervm2/hdd-110133a primarycache all > default > filervm2/hdd-110133a secondarycache all > default > filervm2/hdd-110133a usedbysnapshots 150M > - > filervm2/hdd-110133a usedbydataset 1,65G > - > filervm2/hdd-110133a usedbychildren 0 > - > filervm2/hdd-110133a usedbyrefreservation 0 > - > filervm2/hdd-110133a logbias latency > default > filervm2/hdd-110133a dedup off > default > filervm2/hdd-110133a mlslabel none > default > filervm2/hdd-110133a sync always > local > filervm2/hdd-110133a refcompressratio 1.67x > - > filervm2/hdd-110133a written 6,21M > - > filervm2/hdd-110133a logicalused 3,09G > - > filervm2/hdd-110133a logicalreferenced 3,81G > - > filervm2/hdd-110133a snapshot_limit none > default > filervm2/hdd-110133a snapshot_count none > default > filervm2/hdd-110133a redundant_metadata all > default > > > Best regards, > > Anthony > > > > > _______________________________________________ > OmniOS-discuss mailing listOmniOS-discuss at lists.omniti.comhttp://lists.omniti.com/mailman/listinfo/omnios-discuss > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
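A note on the zfs_txg_timeout tuning Linda mentions in the thread above: the
variable is the transaction-group sync interval in seconds (5 by default, which
matches the 5-second flush Anthony observes), so a value of 1 makes the periodic
flush more frequent, while a larger value such as 10 spaces the flushes further
apart. A minimal sketch, assuming the stock illumos variable name:

    # persistent setting, read from /etc/system at boot
    set zfs:zfs_txg_timeout = 10

    # inspect the value currently in use on a running kernel
    echo "zfs_txg_timeout/D" | mdb -k

    # change it on the fly (lost at the next reboot)
    echo "zfs_txg_timeout/W 0t10" | mdb -kw

Lengthening the interval only batches more dirty data into each flush; it does
not by itself reduce the total amount written to the pool.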
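On Anthony's open question of what exactly the pool flushes every few seconds,
two stock tools can at least show where the bytes are going. Both commands below
are sketches: filervm2 is the pool name from his mail, and the device names
DTrace prints are the sdN statnames (the same names iostat -x shows when run
without -n):

    # per-vdev read/write traffic, sampled every 5 seconds
    zpool iostat -v filervm2 5

    # bytes issued per device, split by direction, printed every 5 seconds
    dtrace -qn 'io:::start
        { @[args[1]->dev_statname,
            args[0]->b_flags & B_READ ? "R" : "W"] = sum(args[0]->b_bcount); }
        tick-5sec { printa("%10s %2s %@16d\n", @); trunc(@); }'

This shows which devices receive the writes, but not whether the blocks are data
or metadata; separating those would need deeper inspection of the pool (for
example zdb block statistics), which goes beyond a one-liner.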