hadoop-hdfs-issues mailing list archives

From "Matthew Jacobs (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3170) Add more useful metrics for write latency
Date Sun, 01 Jul 2012 18:36:46 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13404783#comment-13404783 ]

Matthew Jacobs commented on HDFS-3170:
--------------------------------------

Thanks, Todd and Andy.

I agree about the timing issues; I'll use System.nanoTime() rather than milliseconds, since
we're interested in capturing sub-millisecond latency. The reason I didn't use nanoTime() from
the beginning was to stay consistent with the fsync metric, though now I'm thinking it would
be better to report fsync in nanoseconds as well. And if I want to re-use the timestamp taken
after the flush() and before the sync(), I'll have to use the high-resolution clock for all
of these timestamps anyway. In that case, I'd rename fsync to fsyncNanos or something similar.
If you think it would be better to keep fsync in milliseconds, I'll still use the monotonic
clock and just report the metrics in milliseconds.
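For illustration, here's a minimal sketch of the monotonic-clock approach (class and method names are made up for this example, not the actual DataNode metric names):

```java
// Sketch: timing an operation with the monotonic clock. System.nanoTime()
// is unaffected by wall-clock adjustments (NTP steps, etc.), unlike
// System.currentTimeMillis(), and has sub-millisecond resolution.
public class FlushTimer {
    public static long timeFlushNanos(Runnable flush) {
        long start = System.nanoTime();
        flush.run();
        return System.nanoTime() - start; // elapsed nanoseconds
    }

    public static void main(String[] args) {
        long nanos = timeFlushNanos(() -> { /* flush to disk here */ });
        // Reporting in nanoseconds preserves sub-millisecond latencies;
        // dividing by 1_000_000 first would collapse most flushes to 0 ms.
        System.out.println("fsyncNanos=" + nanos);
    }
}
```

Keeping the raw nanosecond value and converting only at report time means the same timestamps can serve either a fsyncNanos metric or a millisecond-granularity one.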

What do you think?

You're right regarding the ack math; I worked through that incorrectly. Fortunately it's a
simple fix.

I'll post an updated patch soon.
                
> Add more useful metrics for write latency
> -----------------------------------------
>
>                 Key: HDFS-3170
>                 URL: https://issues.apache.org/jira/browse/HDFS-3170
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>    Affects Versions: 2.0.0-alpha
>            Reporter: Todd Lipcon
>            Assignee: Matthew Jacobs
>         Attachments: hdfs-3170.txt
>
>
> Currently, the only write-latency related metric we expose is the total amount of time
taken by opWriteBlock. This is practically useless, since (a) different blocks may be wildly
different sizes, and (b) if the writer is only generating data slowly, it will make a block
write take longer through no fault of the DN. I would like to propose two new metrics:
> 1) *flush-to-disk time*: count how long it takes for each call to flush an incoming packet
to disk (including the checksums). In most cases this will be close to 0, as it only flushes
to buffer cache, but if the backing block device enters congested writeback, it can take much
longer, which provides an interesting metric.
> 2) *round trip to downstream pipeline node*: track the round trip latency for the part
of the pipeline between the local node and its downstream neighbors. When we add a new packet
to the ack queue, save the current timestamp. When we receive an ack, update the metric based
on how long since we sent the original packet. This gives a metric of the total RTT through
the pipeline. If we also include this metric in the ack to upstream, we can subtract the time
due to the later stages of the pipeline and get an accurate measure of this particular
link.
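The second metric described above can be sketched roughly as follows (the class, method names, and in-order-ack assumption are illustrative only, not HDFS internals):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of metric (2): record a monotonic timestamp when a packet is
// added to the ack queue, then compute the round-trip time when its ack
// arrives. Subtracting the downstream time reported in the ack isolates
// the latency of the local link.
public class AckRttTracker {
    // Each entry is {seqno, sendTimeNanos}; acks are assumed in order.
    private final Deque<long[]> ackQueue = new ArrayDeque<>();

    public synchronized void packetSent(long seqno) {
        ackQueue.addLast(new long[] { seqno, System.nanoTime() });
    }

    // Returns this link's share of the RTT: total pipeline RTT minus the
    // time attributed to downstream stages (as carried in the ack).
    public synchronized long ackReceived(long seqno, long downstreamNanos) {
        long[] entry = ackQueue.removeFirst();
        if (entry[0] != seqno) {
            throw new IllegalStateException("out-of-order ack: " + seqno);
        }
        long totalRttNanos = System.nanoTime() - entry[1];
        return totalRttNanos - downstreamNanos;
    }
}
```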

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
