<div dir="ltr">Hi Tovarishch Jim,<div><br></div><div>I had a similar issue with my box, and it was related to NFS locks. I assume you are using NFS for the Linux backups. The solution was posted by Chip on the mailing list; a copy of his solution is below:</div><div><br></div><div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px">"I've seen issues like this when you run out of NFS locks. NFSv3 in Illumos is really slow at releasing locks. </span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px"><br></span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px">On all my NFS servers I do:</span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px"><br></span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px">sharectl set -p lockd_listen_backlog=256 nfs</span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px">sharectl set -p lockd_servers=2048 nfs</span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px"><br></span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px">Everywhere I can, I use 
NFSv4 instead of v3. It handles locks much better."</span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px"><br></span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)"><span style="font-size:12.8px">All the Best</span></div><div class="" style="color:rgb(0,0,0);font-family:'Times New Roman';font-size:13.3333px;padding-left:0px;background-color:rgb(253,253,253)">Yavor </div></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Oct 22, 2015 at 11:59 AM, Jim Klimov <span dir="ltr"><<a href="mailto:jim@cos.ru" target="_blank">jim@cos.ru</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>Hello all,</p>
<p>I have an HP Z400 workstation with 16 GB of (supposedly) ECC RAM running OmniOS bloody, which acts as a backup server for our production systems (regularly rsync'ing large files off Linux boxes, and rotating ZFS auto-snapshots to keep its space free). Sometimes it also runs replicas of infrastructure services (DHCP, DNS), and it was set up as a VirtualBox + phpVirtualBox host to test that out, but no VMs are running.</p>
<p>So the essential loads are ZFS snapshots and ZFS scrubs :)</p>
<p>And it freezes roughly every week: it stops responding to ping and to login attempts via SSH or the physical console - the latter processes keypresses, but never presents a login prompt. It used to be stable; these regular hangs began around summertime.</p>
<p> </p>
<p>My primary guess would be flaky disks, maybe timing out under load or going to sleep or whatever... But I have yet to prove that, or any other theory. Maybe the CPU is just overheating due to the regular near-100% load combined with disk I/O... At least I want to rule out OS errors, and rule out (or point out) operator/box errors as much as possible - that being something I can change to try and fix ;)</p>
<p>Before I proceed to the TL;DR screenshots, I'll summarize what I see:</p>
<p>* In the "top" output, processes owned by zfssnap lead most of the time... But even the SSH shell is noticeably slow to respond (about 1 second per line when just pressing Enter to clear the screen while preparing nice screenshots).</p>
<p>* SMART was not enabled on the 3 TB mirrored "pool" SATA disks (it is now, and long tests have been initiated), but it was enabled on the "rpool" SAS disk, where it logged some corrected ECC errors - but none uncorrected.</p>
<p>Maybe the cabling should be reseated.</p>
<p>* iostat shows the disks are generally not busy (they don't audibly rattle or visibly blink all the time, either).</p>
<p>* zpool scrubs return clean</p>
<p>* There are partitions of the system rpool disk (10K RPM SAS) used as log and cache devices for the main data pool on the 3 TB SATA disks. The system disk is fast and underutilized, so what the heck ;) And it was not a problem for the first year of this system's honest and stable service. These devices are pretty empty at the moment.</p>
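<p>For reference, the fault manager keeps its own record of disk and transport errors, independent of "zpool status", so it can corroborate (or rule out) the flaky-disk theory even when scrubs come back clean. A sketch of the standard illumos FMA checks - assuming stock OmniOS tooling, and the event-class pattern below is only an example:</p>

```shell
# Ask the fault manager for any diagnosed faults (empty output is good)
fmadm faulty

# Dump the error telemetry log; disk retries/timeouts often show up here
# even when "zpool status" reports no errors
fmdump -e

# Verbose dump filtered to I/O-related error reports (class pattern is
# an example; adjust to whatever classes the plain "fmdump -e" shows)
fmdump -eV -c 'ereport.io.*'
```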
<p> </p>
<p>I have enabled deadman panics according to the Wiki, but none have happened so far:</p>
<p># cat /etc/system | egrep -v '(^\*|^$)'<br>set snooping=1<br>set pcplusmp:apic_panic_on_nmi=1<br>set apix:apic_panic_on_nmi = 1<br></p>
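<p>For what it's worth, these settings can be read back from the live kernel with mdb, to confirm they actually took effect after the reboot (a sketch, using the same variable names as in /etc/system above):</p>

```shell
# Read the tunables back from the running kernel (illumos kernel mdb);
# /D prints the value in decimal
echo "snooping/D" | mdb -k
echo "apic_panic_on_nmi/D" | mdb -k   # resolves once pcplusmp/apix is loaded
```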
<p> </p>
<p>In the "top" output, processes owned by zfssnap lead most of the time:</p>
<p> </p>
<p>last pid: 22599; load avg: 12.9, 12.2, 11.2; up 0+09:52:11 18:34:41<br>140 processes: 125 sleeping, 13 running, 2 on cpu<br>CPU states: 0.0% idle, 22.9% user, 77.1% kernel, 0.0% iowait, 0.0% swap<br>Memory: 16G phys mem, 1765M free mem, 2048M total swap, 2048M free swap<br>Seconds to delay:<br> PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND<br> 21389 zfssnap 1 43 2 863M 860M run 5:04 35.61% zfs<br> 22360 zfssnap 1 52 2 118M 115M run 0:37 16.50% zfs<br> 21778 zfssnap 1 52 2 563M 560M run 3:15 13.17% zfs<br> 21278 zfssnap 1 52 2 947M 944M run 5:32 6.91% zfs<br> 21881 zfssnap 1 43 2 433M 431M run 2:31 5.41% zfs<br> 21852 zfssnap 1 52 2 459M 456M run 2:39 5.16% zfs<br> 21266 zfssnap 1 43 2 906M 903M run 5:18 3.95% zfs<br> 21757 zfssnap 1 43 2 597M 594M run 3:26 2.91% zfs<br> 21274 zfssnap 1 52 2 930M 927M cpu/0 5:27 2.78% zfs<br> 22588 zfssnap 1 43 2 30M 27M run 0:08 2.48% zfs<br> 22580 zfssnap 1 52 2 49M 46M run 0:14 0.71% zfs<br> 22038 root 1 59 0 5312K 3816K cpu/1 0:01 0.10% top<br> 22014 root 1 59 0 8020K 4988K sleep 0:00 0.02% sshd<br></p>
<p> </p>
<p>Average "iostat" numbers are not that busy:</p>
<p> </p>
<p># zpool iostat -Td 5<br>Thu Oct 22 18:24:59 CEST 2015<br> capacity operations bandwidth<br>pool alloc free read write read write<br>---------- ----- ----- ----- ----- ----- -----<br>pool 2.52T 207G 802 116 28.3M 840K<br>rpool 33.0G 118G 0 4 4.52K 58.7K<br>---------- ----- ----- ----- ----- ----- -----</p>
<p>Thu Oct 22 18:25:04 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 10 0 97.9K<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:25:09 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 0 0 0<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:25:14 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 9 0 93.5K<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:25:19 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 0 0 0<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:25:24 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 0 0 0<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:25:29 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 0 0 0<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:25:34 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 0 0 0<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:25:39 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 16 0 374K<br>---------- ----- ----- ----- ----- ----- -----<br>...</p>
<p>Thu Oct 22 18:33:49 CEST 2015<br>pool 2.52T 207G 0 0 0 0<br>rpool 33.0G 118G 0 11 0 94.5K<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:33:54 CEST 2015<br>pool 2.52T 207G 0 13 819 80.0K<br>rpool 33.0G 118G 0 0 0 0<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:33:59 CEST 2015<br>pool 2.52T 207G 0 129 0 1.06M<br>rpool 33.0G 118G 0 0 0 0<br>---------- ----- ----- ----- ----- ----- -----<br>Thu Oct 22 18:34:04 CEST 2015<br>pool 2.52T 207G 0 55 0 503K<br>rpool 33.0G 118G 0 11 0 97.9K<br>---------- ----- ----- ----- ----- ----- -----<br>...</p>
<p>just occasional bursts of work. </p>
<p>I've now enabled SMART on the disks (2x3 TB mirror "pool" and 1x300 GB "rpool"), ran some short tests, and triggered long tests (hopefully they'll complete by tomorrow); current results are:<br><br><br># for D in /dev/rdsk/c0*s0; do echo "===== $D :"; smartctl -d sat,12 -a $D ; done ; for D in /dev/rdsk/c4*s0 ; do echo "===== $D :"; smartctl -d scsi -a $D ; done<br>===== /dev/rdsk/c0t3d0s0 :<br>smartctl 6.0 2012-10-10 r3643 [i386-pc-solaris2.11] (local build)<br>Copyright (C) 2002-12, Bruce Allen, Christian Franke, <a href="http://www.smartmontools.org/" target="_blank">www.smartmontools.org</a></p>
<p>=== START OF INFORMATION SECTION ===<br>Device Model: WDC WD3003FZEX-00Z4SA0<br>Serial Number: WD-WCC5D1KKU0PA<br>LU WWN Device Id: 5 0014ee 2610716b7<br>Firmware Version: 01.01A01<br>User Capacity: 3,000,592,982,016 bytes [3.00 TB]<br>Sector Sizes: 512 bytes logical, 4096 bytes physical<br>Rotation Rate: 7200 rpm<br>Device is: Not in smartctl database [for details use: -P showall]<br>ATA Version is: ACS-2 (minor revision not indicated)<br>SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)<br>Local Time is: Thu Oct 22 18:45:28 2015 CEST<br>SMART support is: Available - device has SMART capability.<br>SMART support is: Enabled</p>
<p>=== START OF READ SMART DATA SECTION ===<br>SMART overall-health self-assessment test result: PASSED</p>
<p>General SMART Values:<br>Offline data collection status: (0x82) Offline data collection activity<br> was completed without error.<br> Auto Offline Data Collection: Enabled.<br>Self-test execution status: ( 249) Self-test routine in progress...<br> 90% of test remaining.<br>Total time to complete Offline<br>data collection: (32880) seconds.<br>Offline data collection<br>capabilities: (0x7b) SMART execute Offline immediate.<br> Auto Offline data collection on/off support.<br> Suspend Offline collection upon new<br> command.<br> Offline surface scan supported.<br> Self-test supported.<br> Conveyance Self-test supported.<br> Selective Self-test supported.<br>SMART capabilities: (0x0003) Saves SMART data before entering<br> power-saving mode.<br> Supports SMART auto save timer.<br>Error logging capability: (0x01) Error logging supported.<br> General Purpose Logging supported.<br>Short self-test routine<br>recommended polling time: ( 2) minutes.<br>Extended self-test routine<br>recommended polling time: ( 357) minutes.<br>Conveyance self-test routine<br>recommended polling time: ( 5) minutes.<br>SCT capabilities: (0x7035) SCT Status supported.<br> SCT Feature Control supported.<br> SCT Data Table supported.</p>
<p>SMART Attributes Data Structure revision number: 16<br>Vendor Specific SMART Attributes with Thresholds:<br>ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE<br> 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0<br> 3 Spin_Up_Time 0x0027 246 154 021 Pre-fail Always - 6691<br> 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 14<br> 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0<br> 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0<br> 9 Power_On_Hours 0x0032 094 094 000 Old_age Always - 4869<br> 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0<br> 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0<br> 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 14<br> 16 Unknown_Attribute 0x0022 130 070 000 Old_age Always - 2289651870502<br>192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 12<br>193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 2<br>194 Temperature_Celsius 0x0022 117 111 000 Old_age Always - 35<br>196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0<br>197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0<br>198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0<br>199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0<br>200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0</p>
<p>SMART Error Log Version: 1<br>No Errors Logged</p>
<p>SMART Self-test log structure revision number 1<br>Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error<br># 1 Short offline Completed without error 00% 4869 -</p>
<p>SMART Selective self-test log data structure revision number 1<br> SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS<br> 1 0 0 Not_testing<br> 2 0 0 Not_testing<br> 3 0 0 Not_testing<br> 4 0 0 Not_testing<br> 5 0 0 Not_testing<br>Selective self-test flags (0x0):<br> After scanning selected spans, do NOT read-scan remainder of disk.<br>If Selective self-test is pending on power-up, resume after 0 minute delay.</p>
<p>===== /dev/rdsk/c0t5d0s0 :<br>smartctl 6.0 2012-10-10 r3643 [i386-pc-solaris2.11] (local build)<br>Copyright (C) 2002-12, Bruce Allen, Christian Franke, <a href="http://www.smartmontools.org/" target="_blank">www.smartmontools.org</a></p>
<p>=== START OF INFORMATION SECTION ===<br>Model Family: Seagate SV35<br>Device Model: ST3000VX000-1ES166<br>Serial Number: Z500S3L8<br>LU WWN Device Id: 5 000c50 079e3757b<br>Firmware Version: CV26<br>User Capacity: 3,000,592,982,016 bytes [3.00 TB]<br>Sector Sizes: 512 bytes logical, 4096 bytes physical<br>Rotation Rate: 7200 rpm<br>Device is: In smartctl database [for details use: -P show]<br>ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b<br>SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)<br>Local Time is: Thu Oct 22 18:45:28 2015 CEST<br>SMART support is: Available - device has SMART capability.<br>SMART support is: Enabled</p>
<p>=== START OF READ SMART DATA SECTION ===<br>SMART overall-health self-assessment test result: PASSED</p>
<p>General SMART Values:<br>Offline data collection status: (0x00) Offline data collection activity<br> was never started.<br> Auto Offline Data Collection: Disabled.<br>Self-test execution status: ( 249) Self-test routine in progress...<br> 90% of test remaining.<br>Total time to complete Offline<br>data collection: ( 80) seconds.<br>Offline data collection<br>capabilities: (0x73) SMART execute Offline immediate.<br> Auto Offline data collection on/off support.<br> Suspend Offline collection upon new<br> command.<br> No Offline surface scan supported.<br> Self-test supported.<br> Conveyance Self-test supported.<br> Selective Self-test supported.<br>SMART capabilities: (0x0003) Saves SMART data before entering<br> power-saving mode.<br> Supports SMART auto save timer.<br>Error logging capability: (0x01) Error logging supported.<br> General Purpose Logging supported.<br>Short self-test routine<br>recommended polling time: ( 1) minutes.<br>Extended self-test routine<br>recommended polling time: ( 325) minutes.<br>Conveyance self-test routine<br>recommended polling time: ( 2) minutes.<br>SCT capabilities: (0x10b9) SCT Status supported.<br> SCT Error Recovery Control supported.<br> SCT Feature Control supported.<br> SCT Data Table supported.</p>
<p>SMART Attributes Data Structure revision number: 10<br>Vendor Specific SMART Attributes with Thresholds:<br>ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE<br> 1 Raw_Read_Error_Rate 0x000f 105 099 006 Pre-fail Always - 8600880<br> 3 Spin_Up_Time 0x0003 096 094 000 Pre-fail Always - 0<br> 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 19<br> 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0<br> 7 Seek_Error_Rate 0x000f 085 060 030 Pre-fail Always - 342685681<br> 9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 4214<br> 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0<br> 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 19<br>184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0<br>187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0<br>188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0<br>189 High_Fly_Writes 0x003a 028 028 000 Old_age Always - 72<br>190 Airflow_Temperature_Cel 0x0022 069 065 045 Old_age Always - 31 (Min/Max 29/32)<br>191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0<br>192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 19<br>193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 28<br>194 Temperature_Celsius 0x0022 031 040 000 Old_age Always - 31 (0 20 0 0 0)<br>197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0<br>198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0<br>199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0</p>
<p>SMART Error Log Version: 1<br>No Errors Logged</p>
<p>SMART Self-test log structure revision number 1<br>Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error<br># 1 Extended offline Self-test routine in progress 90% 4214 -<br># 2 Short offline Completed without error 00% 4214 -</p>
<p>SMART Selective self-test log data structure revision number 1<br> SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS<br> 1 0 0 Not_testing<br> 2 0 0 Not_testing<br> 3 0 0 Not_testing<br> 4 0 0 Not_testing<br> 5 0 0 Not_testing<br>Selective self-test flags (0x0):<br> After scanning selected spans, do NOT read-scan remainder of disk.<br>If Selective self-test is pending on power-up, resume after 0 minute delay.</p>
<p>===== /dev/rdsk/c4t5000CCA02A1292DDd0s0 :<br>smartctl 6.0 2012-10-10 r3643 [i386-pc-solaris2.11] (local build)<br>Copyright (C) 2002-12, Bruce Allen, Christian Franke, <a href="http://www.smartmontools.org/" target="_blank">www.smartmontools.org</a></p>
<p>Vendor: HITACHI<br>Product: HUS156030VLS600<br>Revision: HPH1<br>User Capacity: 300,000,000,000 bytes [300 GB]<br>Logical block size: 512 bytes<br>Logical Unit id: 0x5000cca02a1292dc<br>Serial number: LVVA6NHS<br>Device type: disk<br>Transport protocol: SAS<br>Local Time is: Thu Oct 22 18:45:29 2015 CEST<br>Device supports SMART and is Enabled<br>Temperature Warning Enabled<br>SMART Health Status: OK</p>
<p>Current Drive Temperature: 45 C<br>Drive Trip Temperature: 70 C<br>Manufactured in week 14 of year 2012<br>Specified cycle count over device lifetime: 50000<br>Accumulated start-stop cycles: 80<br>Elements in grown defect list: 0<br>Vendor (Seagate) cache information<br> Blocks sent to initiator = 2340336504406016</p>
<p>Error counter log:<br> Errors Corrected by Total Correction Gigabytes Total<br> ECC rereads/ errors algorithm processed uncorrected<br> fast | delayed rewrites corrected invocations [10^9 bytes] errors<br>read: 0 888890 0 888890 0 29326.957 0<br>write: 0 961315 0 961315 0 6277.560 0</p>
<p>Non-medium error count: 283</p>
<p>SMART Self-test log<br>Num Test Status segment LifeTime LBA_first_err [SK ASC ASQ]<br> Description number (hours)<br># 1 Background long Self test in progress ... - NOW - [- - -]<br># 2 Background long Aborted (device reset ?) - 14354 - [- - -]<br># 3 Background short Completed - 14354 - [- - -]<br># 4 Background long Aborted (device reset ?) - 14354 - [- - -]<br># 5 Background long Aborted (device reset ?) - 14354 - [- - -]</p>
<p>Long (extended) Self Test duration: 2506 seconds [41.8 minutes]<br></p>
<p> </p>
<p>The zpool scrub results and general layout:</p>
<p> </p>
<p># zpool status -v<br> pool: pool<br> state: ONLINE<br> scan: scrub repaired 0 in 164h13m with 0 errors on Thu Oct 22 18:13:33 2015<br>config:</p>
<p> NAME STATE READ WRITE CKSUM<br> pool ONLINE 0 0 0<br> mirror-0 ONLINE 0 0 0<br> c0t3d0 ONLINE 0 0 0<br> c0t5d0 ONLINE 0 0 0<br> logs<br> c4t5000CCA02A1292DDd0p2 ONLINE 0 0 0<br> cache<br> c4t5000CCA02A1292DDd0p3 ONLINE 0 0 0</p>
<p>errors: No known data errors</p>
<p> pool: rpool<br> state: ONLINE<br>status: Some supported features are not enabled on the pool. The pool can<br> still be used, but some features are unavailable.<br>action: Enable all features using 'zpool upgrade'. Once this is done,<br> the pool may no longer be accessible by software that does not support<br> the features. See zpool-features(5) for details.<br> scan: scrub repaired 0 in 3h3m with 0 errors on Thu Oct 8 04:12:35 2015<br>config:</p>
<p> NAME STATE READ WRITE CKSUM<br> rpool ONLINE 0 0 0<br> c4t5000CCA02A1292DDd0s0 ONLINE 0 0 0</p>
<p>errors: No known data errors<br></p>
<p># zpool list -v<br>NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT<br>pool 2.72T 2.52T 207G - 68% 92% 1.36x ONLINE /<br> mirror 2.72T 2.52T 207G - 68% 92%<br> c0t3d0 - - - - - -<br> c0t5d0 - - - - - -<br>log - - - - - -<br> c4t5000CCA02A1292DDd0p2 8G 148K 8.00G - 0% 0%<br>cache - - - - - -<br> c4t5000CCA02A1292DDd0p3 120G 1.80G 118G - 0% 1%<br>rpool 151G 33.0G 118G - 76% 21% 1.00x ONLINE -<br> c4t5000CCA02A1292DDd0s0 151G 33.0G 118G - 76% 21%<br></p>
<p>Note that the long scrub time may include the downtime while the system was frozen, until it was rebooted.</p>
<p> </p>
<p>Thanks in advance for the fresh pairs of eyeballs,<br>Jim Klimov</p>
<br>_______________________________________________<br>
OmniOS-discuss mailing list<br>
<a href="mailto:OmniOS-discuss@lists.omniti.com">OmniOS-discuss@lists.omniti.com</a><br>
<a href="http://lists.omniti.com/mailman/listinfo/omnios-discuss" rel="noreferrer" target="_blank">http://lists.omniti.com/mailman/listinfo/omnios-discuss</a><br>
<br></blockquote></div><br></div>