<div dir="ltr"><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra">So here's what I will attempt to test:</div><div class="gmail_extra">- Create thin vmdk @ 10TB with vSphere fat client: PASS </div><div class="gmail_extra">- Create lazy zeroed vmdk @ 10 TB with vSphere fat client: PASS</div><div class="gmail_extra">- Create eager zeroed vmdk @ 10 TB with vSphere web client: PASS! (took 1 hour)</div><div class="gmail_extra"><div class="gmail_extra">- Create thin vmdk @ 10TB with vSphere web client: PASS</div><div class="gmail_extra">- Create lazy zeroed vmdk @ 10 TB with vSphere web client: PASS</div><div class="gmail_extra"><br></div></div></div></blockquote><div><br></div><div>Additionally, I tried:</div><div>- Create fixed vhdx @ 10TB with SCVMM (Hyper-V): PASS (most likely no primitives in use here - this took slightly over 3 hours)</div><div> </div></div></div><div class="gmail_extra">Everything passed, which I didn't expect, especially the 10TB eager zero. 
Then I tried again in the vSphere web client with a 20TB eager zeroed disk, and got a different kernel panic altogether (unfortunately, no kmem_flags 0xf set).</div><div class="gmail_extra"><br></div><div class="gmail_extra"><div class="gmail_extra">Mar 27 2015 01:09:33.664060000 2e2305c2-54b5-c1f4-aafd-fb1eccc982dd SUNOS-8000-KL</div><div class="gmail_extra"><br></div><div class="gmail_extra">  TIME                 CLASS                                 ENA</div><div class="gmail_extra">  Mar 27 01:09:33.6307 ireport.os.sunos.panic.dump_available 0x0000000000000000</div><div class="gmail_extra">  Mar 27 01:08:30.6688 ireport.os.sunos.panic.dump_pending_on_device 0x0000000000000000</div><div class="gmail_extra"><br></div><div class="gmail_extra">nvlist version: 0</div><div class="gmail_extra">        version = 0x0</div><div class="gmail_extra">        class = list.suspect</div><div class="gmail_extra">        uuid = 2e2305c2-54b5-c1f4-aafd-fb1eccc982dd</div><div class="gmail_extra">        code = SUNOS-8000-KL</div><div class="gmail_extra">        diag-time = 1427432973 633746</div><div class="gmail_extra">        de = fmd:///module/software-diagnosis</div><div class="gmail_extra">        fault-list-sz = 0x1</div><div class="gmail_extra">        fault-list = (array of embedded nvlists)</div><div class="gmail_extra">        (start fault-list[0])</div><div class="gmail_extra">        nvlist version: 0</div><div class="gmail_extra">                version = 0x0</div><div class="gmail_extra">                class = defect.sunos.kernel.panic</div><div class="gmail_extra">                certainty = 0x64</div><div class="gmail_extra">                asru = sw:///:path=/var/crash/unknown/.2e2305c2-54b5-c1f4-aafd-fb1eccc982dd</div><div class="gmail_extra">                resource = sw:///:path=/var/crash/unknown/.2e2305c2-54b5-c1f4-aafd-fb1eccc982dd</div><div class="gmail_extra">                savecore-succcess = 1</div><div class="gmail_extra">                dump-dir = 
/var/crash/unknown</div><div class="gmail_extra">                dump-files = vmdump.2</div><div class="gmail_extra">                os-instance-uuid = 2e2305c2-54b5-c1f4-aafd-fb1eccc982dd</div><div class="gmail_extra">                panicstr = BAD TRAP: type=d (#gp General protection) rp=ffffff01eb72ea70 addr=0</div><div class="gmail_extra">                panicstack = unix:real_mode_stop_cpu_stage2_end+9e23 () | unix:trap+a30 () | unix:cmntrap+e6 () | genunix:anon_decref+35 () | genunix:anon_free+74 () | genunix:segvn_free+242 () | genunix:seg_free+30 () | genunix:segvn_unmap+cde () | genunix:as_free+e7 () | genunix:relvm+220 () | genunix:proc_exit+454 () | genunix:exit+15 () | genunix:rexit+18 () | unix:brand_sys_sysenter+1c9 () |</div><div class="gmail_extra">                crashtime = 1427431421</div><div class="gmail_extra">                panic-time = Fri Mar 27 00:43:41 2015 EDT</div><div class="gmail_extra">        (end fault-list[0])</div><div class="gmail_extra"><br></div><div class="gmail_extra">        fault-status = 0x1</div><div class="gmail_extra">        severity = Major</div><div class="gmail_extra">        __ttl = 0x1</div><div class="gmail_extra">        __tod = 0x5514e60d 0x2794c060</div><div class="gmail_extra"><br></div><div class="gmail_extra">Crash file:</div><div class="gmail_extra"><a href="https://drive.google.com/file/d/0B7mCJnZUzJPKT0lpTW9GZFJCLTg/view?usp=sharing">https://drive.google.com/file/d/0B7mCJnZUzJPKT0lpTW9GZFJCLTg/view?usp=sharing</a><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">It appears I can create thin and lazy zeroed disks at those sizes, so I will settle for those options as a workaround (plus disabling WRITE_SAME on the hosts if I really want an eager zeroed disk) until some of that Nexenta COMSTAR love is upstreamed. For comparison's sake, provisioning a 10TB fixed vhdx took approximately 3 hours in Hyper-V, while the same provisioning in VMware took about 1 hour. 
So we can say WRITE_SAME accelerated the same job roughly 3x.</div><div><br></div></div></div>
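<div dir="ltr"><div class="gmail_extra">For anyone following along, here is a sketch of the two host-side pieces mentioned above. On ESXi, WRITE SAME backs the VAAI "Block Zeroing" primitive, toggled via the advanced option /DataMover/HardwareAcceleratedInit (the stock setting name on vSphere 5.x; verify against your build). On the illumos side, savecore and mdb will expand and inspect the compressed vmdump.2. The /var/crash/unknown path matches the dump-dir in the FMA report; these run against live hosts, so treat them as a sketch, not gospel:</div><div class="gmail_extra"><br></div><div class="gmail_extra">

```shell
## On each ESXi host: check, then disable, the Block Zeroing (WRITE SAME)
## primitive so eager-zeroed provisioning falls back to plain zero writes.
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli storage core device vaai status get   # confirm per-device VAAI status

## On the storage head: expand the compressed crash dump and inspect the panic.
cd /var/crash/unknown
savecore -vf vmdump.2          # produces unix.2 and vmcore.2
mdb unix.2 vmcore.2 <<'EOF'
::status
::msgbuf
::stack
EOF
```

</div><div class="gmail_extra">The setting is per-host and takes effect immediately; set it back with -i 1 once the COMSTAR fix lands.</div></div>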