hadoop-hdfs-issues mailing list archives

From "Li Bo (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-8411) Add bytes count metrics to datanode for ECWorker
Date Thu, 15 Oct 2015 06:33:06 GMT

     [ https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

Li Bo updated HDFS-8411:
    Attachment: HDFS-8411-002.patch

     When some datanodes are corrupted, all of their blocks must be reconstructed by other healthy
datanodes. The resulting network traffic is very high, so we may want to track it. We can record
the bytes read and written by each datanode. In fact, I think HDFS-8529 (block counts) and HDFS-8410 (time
consumed) are not necessary: the time cost can be estimated from the bytes read and
written, and a block-count metric is not very meaningful when there are a lot of small files. We
can adjust the metrics as future requirements arise.
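A minimal sketch of the kind of byte counters described above, using plain Java `AtomicLong` fields as a stand-in for Hadoop's Metrics2 `MutableCounterLong`. The class and method names here are illustrative, not the actual HDFS-8411 patch:

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified stand-in for the DataNode metrics proposed in this issue:
// track bytes read and bytes written during EC block reconstruction.
// In the real DataNode these would be @Metric-annotated MutableCounterLong
// fields registered with the Metrics2 system.
class ECReconstructionMetrics {
    private final AtomicLong bytesRead = new AtomicLong();
    private final AtomicLong bytesWritten = new AtomicLong();

    // Called from the reconstruction read path for each chunk fetched
    // from a local or remote datanode.
    void incrBytesRead(long n)    { bytesRead.addAndGet(n); }

    // Called after a reconstructed block is written out.
    void incrBytesWritten(long n) { bytesWritten.addAndGet(n); }

    long getBytesRead()    { return bytesRead.get(); }
    long getBytesWritten() { return bytesWritten.get(); }
}
```

With counters like these, the time cost of reconstruction can be estimated from the byte totals, which is why the comment argues separate time and block-count metrics are unnecessary.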

> Add bytes count metrics to datanode for ECWorker
> ------------------------------------------------
>                 Key: HDFS-8411
>                 URL: https://issues.apache.org/jira/browse/HDFS-8411
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Li Bo
>            Assignee: Li Bo
>         Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch
> This is a sub-task of HDFS-7674. It counts the amount of data read from local
or remote datanodes for decoding work, as well as the amount of data written to local or
remote datanodes.

This message was sent by Atlassian JIRA
