[OmniOS-discuss] Strange ARC reads numbers

Filip Marvan filip.marvan at aira.cz
Mon May 26 07:36:02 UTC 2014


Hello,

 

Just for information: after two weeks, the number of ARC accesses came back to the same high level as before the data deletion (you can see that in the attached screenshot).

I also tried deleting the same amount of data on a different storage server, and the ARC accesses dropped in the same way as on the first pool.

 

Interesting.

 

Filip Marvan

From: Richard Elling [mailto:richard.elling at richardelling.com] 
Sent: Thursday, May 08, 2014 12:47 AM
To: Filip Marvan
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] Strange ARC reads numbers

 

On May 7, 2014, at 1:44 AM, Filip Marvan <filip.marvan at aira.cz> wrote:

Hi Richard,

 

thank you for your reply.

 

1. The workload is still the same or very similar. The zvols we deleted from our pool had been disconnected from the KVM server a few days before, so the only change was that we deleted those zvols with all their snapshots.

2. As you wrote, our customers are fine for now :) We monitor all our virtual servers running from that storage server, and there is no noticeable change in workload or latencies.

 

good, then there might not be an actual problem, just a puzzle :-)

3. That could be the reason, of course. But the graph only contains data from the arcstat.pl script. We can see that arcstat reports heavy read accesses every 5 seconds (probably some update of the ARC after ZFS writes data to disks from the ZIL? All of them are marked as "cache hits" by the arcstat script), with only a few ARC accesses between those 5-second periods. Before we deleted those zvols (about 0.7 TB of data from a 10 TB pool, which has 5 TB of free space) there were about 40k accesses every 5 seconds; now there are no more than 2k accesses every 5 seconds.
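
For reference, the numbers above come from sampling with arcstat.pl; a minimal invocation along these lines shows the 5-second bursts clearly. The field list is only an illustration and assumes a stock arcstat.pl:

   # sample ARC activity once per second; "read" counts total ARC accesses,
   # "hits"/"miss" split them, "hit%" is the cache hit ratio
   ./arcstat.pl -f time,read,hits,miss,hit% 1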

 

This is expected behaviour for older ZFS releases that used a txg_timeout of 5 seconds. You should see a burst of write activity around that timeout, and it can include reads for zvols. Unfortunately, the zvol code is not very efficient and you will see a lot more reads than you expect.
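
A quick way to confirm the transaction group timeout on an OmniOS/illumos box is to read the kernel tunable with mdb. This assumes the variable is named zfs_txg_timeout, as in recent illumos; older releases used a different name (txg_time):

   # print the current txg timeout in seconds (run as root)
   echo zfs_txg_timeout/D | mdb -k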

 -- richard

Most of our zvols have an 8K volblocksize (including the deleted zvols); only a few have 64K. Unfortunately I have no data about the read sizes before that change. But we have two more storage servers with similarly high ARC read accesses every 5 seconds, as on the first pool before the deletion. Maybe I should try to delete some data on those pools and see what happens, with more detailed monitoring.
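
Checking which volblocksize each remaining zvol uses is straightforward; "tank" below is only a placeholder pool name:

   # list the volblocksize of every zvol on the pool
   zfs get -r -t volume volblocksize tank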

 

Thank you,

Filip

  _____  

From: Richard Elling [mailto:richard.elling at richardelling.com] 
Sent: Wednesday, May 07, 2014 3:56 AM
To: Filip Marvan
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] Strange ARC reads numbers

 

Hi Filip,

 

There are two primary reasons for reduction in the number of ARC reads.

            1. the workload isn't reading as much as it used to

            2. the latency of reads has increased

            3. your measurement is b0rken

there are three reasons...

 

The data you shared clearly shows a reduction in reads, but doesn't contain the answers to the cause. Usually, if #2 is the case, then the phone will be ringing with angry customers on the other end.

 

If the above 3 are not the case, then perhaps it is something more subtle. The arcstat read counters do not record the size of the reads. Getting the read size for zvols is a little tricky; you can infer it from the pool statistics in iostat. The subtlety here is that if the volblocksize is different between the old and new zvols, then the number of (block) reads will be different for the same workload.
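
A rough sketch of that inference, with "tank" as a placeholder pool name: zpool iostat reports both operations and bandwidth per interval, so the average read size is approximately the read bandwidth divided by the read operations.

   # 5-second samples of pool read/write ops and bandwidth;
   # average read size per interval ~= read bandwidth / read ops
   zpool iostat tank 5

For example, a single 64K application read served from an 8K-volblocksize zvol becomes eight 8K block reads, while the same read from a 64K-volblocksize zvol is one block read, so the raw access counts differ even for an identical workload.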

 -- richard

 

--
Richard.Elling at RichardElling.com
+1-760-896-4422


Attachment: arcread_back_dikobraz2.png (image/png, 18616 bytes)
URL: <https://omniosce.org/ml-archive/attachments/20140526/1a3c8e4c/attachment-0002.png>
Attachment: arcread_dikobraz1.png (image/png, 18991 bytes)
URL: <https://omniosce.org/ml-archive/attachments/20140526/1a3c8e4c/attachment-0003.png>