hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5320) Add datanode caching metrics
Date Mon, 21 Oct 2013 02:33:43 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800319#comment-13800319 ]

Colin Patrick McCabe commented on HDFS-5320:
--------------------------------------------

You may be able to start the caching process, but then fail to complete it.  It seems like
with this patch, {{numBlocksFailedToCache}} would not be incremented in this case.  Why not
put the metric inside {{FsDatasetCache}} itself?  That way, if the caching task failed to
complete, it could simply increment the enclosing statistic.
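The suggestion above can be sketched as follows. This is a hypothetical illustration, not the actual HDFS patch: the class and method names (`FsDatasetCacheSketch`, `cacheBlock`) are invented for the example, and the point is only that the failure counter lives inside the cache component, so a caching task that starts but fails to complete still gets counted.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only; real FsDatasetCache uses Hadoop's metrics2
// framework rather than a bare AtomicLong.
class FsDatasetCacheSketch {
    private final AtomicLong numBlocksFailedToCache = new AtomicLong();

    /** Simulates a caching task; 'succeeds' stands in for the mmap/mlock outcome. */
    boolean cacheBlock(long blockId, boolean succeeds) {
        if (!succeeds) {
            // The task was started but failed to complete: increment the
            // enclosing statistic here, at the point of failure, so the
            // caller does not have to remember to do it.
            numBlocksFailedToCache.incrementAndGet();
            return false;
        }
        return true;
    }

    long getNumBlocksFailedToCache() {
        return numBlocksFailedToCache.get();
    }
}
```

Because the increment happens inside the cache itself, every failure path is covered regardless of which caller initiated the caching.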

This might be a dumb question, but do we have to do anything special to get a "moving window"
view of this exposed to clients?  Really what they're interested in is the number of caching failures
in the last 5 minutes, 50 minutes, 5 days, etc.  It would be nice to make sure we're giving
clients what they need in that regard.
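One generic way to get such a moving-window view is a bucketed or event-pruning counter, sketched below. This is an assumption-laden illustration, not Hadoop's metrics2 API (which has its own quantile/rolling-average support); the `WindowedCounter` name and its explicit-timestamp interface are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sliding-window counter: counts events whose timestamps fall
// within the last windowMillis. Timestamps are passed in explicitly so the
// logic is easy to test deterministically.
class WindowedCounter {
    private final long windowMillis;
    // Each entry is a single event's timestamp, oldest first.
    private final Deque<Long> events = new ArrayDeque<>();

    WindowedCounter(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    synchronized void increment(long nowMillis) {
        events.addLast(nowMillis);
    }

    /** Number of events recorded within the last windowMillis, as of nowMillis. */
    synchronized long count(long nowMillis) {
        // Evict events that have aged out of the window.
        while (!events.isEmpty() && events.peekFirst() <= nowMillis - windowMillis) {
            events.removeFirst();
        }
        return events.size();
    }
}
```

A real implementation would use fixed-size buckets rather than one entry per event to bound memory, and would expose one counter per window size (5 min, 1 hr, and so on).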

> Add datanode caching metrics
> ----------------------------
>
>                 Key: HDFS-5320
>                 URL: https://issues.apache.org/jira/browse/HDFS-5320
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: HDFS-4949
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>            Priority: Minor
>         Attachments: hdfs-5320-1.patch, hdfs-5320-2.patch
>
>
> It'd be good to hook up datanode metrics for # (blocks/bytes) (cached/uncached/failed
> to cache) over different time windows (eternity/1hr/10min/1min).



--
This message was sent by Atlassian JIRA
(v6.1#6144)
