hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3170) Add more useful metrics for write latency
Date Tue, 03 Jul 2012 20:52:35 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406034#comment-13406034 ]

Todd Lipcon commented on HDFS-3170:

+  public final static long NANOSECONDS_PER_MILLISECOND = 1000000;

Instead, use {{TimeUnit.convert}}?

But actually, I think we should change the fsync metric to nanos or microseconds anyway. It's
a pretty new one that hasn't been in any releases yet, so I think it's reasonable for us to
change its units now for consistency.

What do you think about expressing all of these metrics in microseconds? They seem a more natural
unit for admins to understand, whereas nanoseconds are far more precise than the timers we're using.
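For illustration, the {{TimeUnit}} conversion suggested above could look roughly like this (the method and variable names here are hypothetical, not from the patch):

```java
import java.util.concurrent.TimeUnit;

public class AckTimeUnits {
    // Replaces a hand-rolled NANOSECONDS_PER_MILLISECOND constant
    // with the JDK's built-in TimeUnit conversions.
    static long nanosToMicros(long nanos) {
        return TimeUnit.NANOSECONDS.toMicros(nanos);
    }

    public static void main(String[] args) {
        long ackTimeNanos = 2500000L; // hypothetical measured ack latency
        System.out.println(nanosToMicros(ackTimeNanos)); // prints 2500
    }
}
```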


+                      LOG.debug("Calculated invalid ack time: " + ackTimeNanos + "ns.");
This should be guarded with an {{isDebugEnabled()}} check to avoid a possible perf hit
if for some reason it gets triggered often.
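The guard might look like this minimal sketch (using {{java.util.logging}} here only to keep the example self-contained; with the DN's Commons Logging the check would be {{LOG.isDebugEnabled()}}):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedDebug {
    private static final Logger LOG = Logger.getLogger(GuardedDebug.class.getName());

    static void checkAckTime(long ackTimeNanos) {
        if (ackTimeNanos < 0) {
            // The guard skips the string concatenation entirely
            // when debug-level logging is disabled.
            if (LOG.isLoggable(Level.FINE)) {
                LOG.fine("Calculated invalid ack time: " + ackTimeNanos + "ns.");
            }
        }
    }

    public static void main(String[] args) {
        checkAckTime(-5L);
        System.out.println("ok"); // FINE is disabled by default, so no debug output
    }
}
```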

In the case where it's triggered, shouldn't we still add an RTT=0 sample to the metric?


+    final long ackEnqueueTimeNanos;

Rename to {{ackEnqueueNanoTime}} -- otherwise it sounds like it's actually an elapsed time
duration, rather than a timestamp. Or perhaps {{ackEnqueueTimestampNanos}}.
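To illustrate the naming distinction (variable names hypothetical):

```java
public class NanoTimeNaming {
    public static void main(String[] args) throws InterruptedException {
        // A *NanoTime suffix signals a point-in-time reading from the clock...
        final long ackEnqueueNanoTime = System.nanoTime();
        Thread.sleep(1);
        // ...while a *Nanos suffix reads naturally as an elapsed duration.
        long ackTimeNanos = System.nanoTime() - ackEnqueueNanoTime;
        System.out.println(ackTimeNanos > 0); // prints true
    }
}
```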

Otherwise looking good!

> Add more useful metrics for write latency
> -----------------------------------------
>                 Key: HDFS-3170
>                 URL: https://issues.apache.org/jira/browse/HDFS-3170
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>    Affects Versions: 2.0.0-alpha
>            Reporter: Todd Lipcon
>            Assignee: Matthew Jacobs
>         Attachments: hdfs-3170.txt, hdfs-3170.txt
> Currently, the only write-latency related metric we expose is the total amount of time
> taken by opWriteBlock. This is practically useless, since (a) different blocks may be wildly
> different sizes, and (b) if the writer is only generating data slowly, it will make a block
> write take longer by no fault of the DN. I would like to propose two new metrics:
> 1) *flush-to-disk time*: count how long it takes for each call to flush an incoming packet
> to disk (including the checksums). In most cases this will be close to 0, as it only flushes
> to buffer cache, but if the backing block device enters congested writeback, it can take much
> longer, which provides an interesting metric.
> 2) *round trip to downstream pipeline node*: track the round trip latency for the part
> of the pipeline between the local node and its downstream neighbors. When we add a new packet
> to the ack queue, save the current timestamp. When we receive an ack, update the metric based
> on how long since we sent the original packet. This gives a metric of the total RTT through
> the pipeline. If we also include this metric in the ack to upstream, we can subtract the amount
> of time due to the later stages in the pipeline and have an accurate count of this particular
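For what it's worth, the round-trip bookkeeping described in (2) could be sketched roughly as follows; the class and method names are hypothetical, not from the patch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the proposed pipeline-RTT metric: record System.nanoTime()
// when a packet is enqueued for ack, and compute the elapsed time
// when its ack arrives.
public class PipelineRttTracker {
    private final ConcurrentMap<Long, Long> enqueueNanoTime =
        new ConcurrentHashMap<Long, Long>();

    void onPacketEnqueued(long seqno, long nowNanos) {
        enqueueNanoTime.put(seqno, nowNanos);
    }

    // Returns the RTT in nanoseconds, or -1 if the packet was unknown.
    long onAckReceived(long seqno, long nowNanos) {
        Long start = enqueueNanoTime.remove(seqno);
        return (start == null) ? -1 : nowNanos - start;
    }

    public static void main(String[] args) {
        PipelineRttTracker t = new PipelineRttTracker();
        t.onPacketEnqueued(1L, 1000L);
        System.out.println(t.onAckReceived(1L, 3500L)); // prints 2500
    }
}
```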

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

