[OmniOS-discuss] Strange ARC reads numbers

Filip Marvan filip.marvan at aira.cz
Wed May 7 08:44:08 UTC 2014


Hi Richard,

thank you for your reply.

1. Workload is still the same or very similar. The zvols we deleted from our pool had been disconnected from the KVM server a few days earlier, so the only change was that we deleted those zvols together with all their snapshots.
2. As you wrote, our customers are fine for now :) We monitor all the virtual servers running from that storage server, and there is no noticeable change in workload or latencies.
3. That could be the reason, of course. But the graph contains only data from the arcstat.pl script. We can see that arcstat reports heavy read accesses every 5 seconds (probably some update of the ARC after ZFS writes data from the ZIL to disk? All of them are marked as "cache hits" by the arcstat script), with only a few ARC accesses between those 5-second bursts. Before we deleted the zvols (about 0.7 TB of data from a 10 TB pool, which has 5 TB of free space) there were about 40k accesses every 5 seconds; now there are no more than 2k. Roughly how we sample is sketched below.
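
For reference, this is roughly how we sample (invocation from memory; the exact field names may differ between arcstat.pl versions):

    # print ARC reads, hits and misses every 5 seconds
    ./arcstat.pl -f time,read,hits,miss,hit% 5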

Most of our zvols have an 8K volblocksize (including the deleted zvols); only a few have 64K. Unfortunately, I have no data about the read sizes before that change. But we have two more storage servers with similarly high ARC read accesses every 5 seconds, just as on the first pool before the deletion. Maybe I should try deleting some data on those pools and watch what happens with more detailed monitoring.
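
For that more detailed monitoring, I will probably try to infer the average read size from the pool statistics as you suggest, something along these lines (a rough sketch; "tank" stands in for our pool name):

    # per-device read/write operations and bandwidth every 5 seconds;
    # read bandwidth divided by read operations ~= average read size
    zpool iostat -v tank 5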

Thank you,
Filip


________________________________
From: Richard Elling [mailto:richard.elling at richardelling.com]
Sent: Wednesday, May 07, 2014 3:56 AM
To: Filip Marvan
Cc: omnios-discuss at lists.omniti.com
Subject: Re: [OmniOS-discuss] Strange ARC reads numbers

Hi Filip,

There are two primary reasons for a reduction in the number of ARC reads.
            1. the workload isn't reading as much as it used to
            2. the latency of reads has increased
            3. your measurement is b0rken
there are three reasons...

The data you shared clearly shows a reduction in reads, but it doesn't reveal the cause.
Usually, if #2 is the case, the phone will be ringing with angry customers
on the other end.

If the above 3 are not the case, then perhaps it is something more subtle. The arcstat reads
counter does not record the size of each read. Getting the read size for zvols is a little tricky;
you can infer it from the pool statistics in iostat. The subtlety here is that if the volblocksize
differs between the old and new zvols, then the number of (block) reads will differ
for the same workload.
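For example, the same 1 MB of reads is 128 block reads at volblocksize=8K but only 16 at 64K -- an 8x difference in the read count for identical bytes moved.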
 -- richard
