<div dir="ltr"><div>I only upgraded to r151008 recently and wanted to check whether the new L2ARC compression was working. Getting there took a few steps: an updated arcstat script that added the l2asize column (which returned 0), a few rounds in IRC that led me to the correct kstat (zfs:0:arcstats:l2_asize), and an even newer arcstat that fixed the 0 result.<br>
<br></div><div>Now both the kstat and arcstat report the same numbers:<br><pre>
zfs:0:arcstats:l2_asize    864682956800
zfs:0:arcstats:l2_size    1374605708288
</pre>arcstat:<br><pre>
read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size  l2asize
2.7K  2.6K    53    98      53      44       9      83   229G    1.3T    806G
5.1K  4.8K   282    94     282      17     265       6   229G    1.3T    806G
7.3K  7.3K    10    99      10       4       6      40   229G    1.3T    806G
...
</pre></div><div>But why is zpool iostat -v showing my cache devices using up ~1.25T (4 x ~314G), which is close to the 1.3T l2size rather than the 806G l2asize?<br><pre>
                            capacity     operations    bandwidth
pool                     alloc   free   read  write   read  write
-----------------------  -----  -----  -----  -----  -----  -----
[snip]

cache                        -      -      -      -      -      -
  c2t500117310015D579d0   313G  59.4G     19     15   711K   833K
  c2t50011731001631FDd0   314G  58.1G     18     15   712K   836K
  c12t500117310015D59Ed0  314G  58.8G     19     15   710K   835K
  c12t500117310015D54Ed0  313G  59.7G     18     15   709K   832K
-----------------------  -----  -----  -----  -----  -----  -----
</pre>
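For what it's worth, the two kstats imply a healthy compression ratio. A quick sanity check (the values are pasted in from above rather than read live; on illumos you'd fetch them with something like `kstat -p zfs:0:arcstats:l2_size`):

```shell
# Effective L2ARC compression ratio from the two kstats quoted above.
l2_size=1374605708288     # logical (uncompressed) bytes cached in L2ARC
l2_asize=864682956800     # bytes actually allocated on the cache devices
awk -v s="$l2_size" -v a="$l2_asize" 'BEGIN { printf "ratio = %.2fx\n", s / a }'
# prints: ratio = 1.59x
```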
<br></div><div>What's with the discrepancy? Is zpool iostat calculating the free capacity incorrectly now (my cache drives are 400GB)?<br></div></div>
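To put numbers on the discrepancy, here is the arithmetic using the figures quoted above (pasted in as constants, not read from the system):

```shell
# Compare the summed `zpool iostat` alloc column against the two kstats,
# everything expressed in GiB.
awk 'BEGIN {
    gib = 1073741824                     # bytes per GiB
    alloc    = 313 + 314 + 314 + 313     # cache-device alloc column, GiB
    l2_size  = 1374605708288 / gib       # logical L2ARC bytes -> GiB
    l2_asize =  864682956800 / gib       # allocated (compressed) bytes -> GiB
    printf "alloc=%dG l2_size=%dG l2_asize=%dG\n", alloc, l2_size, l2_asize
}'
# prints: alloc=1254G l2_size=1280G l2_asize=805G
```

So the per-device allocations add up to roughly l2_size, not l2_asize, which is exactly the discrepancy in question.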