hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9719) Refactoring ErasureCodingWorker into smaller reusable constructs
Date Tue, 05 Apr 2016 13:46:25 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15226273#comment-15226273 ]

Hadoop QA commented on HDFS-9719:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} | {color:red} HDFS-9719 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12792504/HDFS-9719-v7.patch |
| JIRA Issue | HDFS-9719 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15067/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Refactoring ErasureCodingWorker into smaller reusable constructs
> ----------------------------------------------------------------
>
>                 Key: HDFS-9719
>                 URL: https://issues.apache.org/jira/browse/HDFS-9719
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>         Attachments: HDFS-9719-v1.patch, HDFS-9719-v2.patch, HDFS-9719-v3.patch, HDFS-9719-v4.patch, HDFS-9719-v5.patch, HDFS-9719-v6.patch, HDFS-9719-v7.patch
>
>
> This proposes refactoring {{ErasureCodingWorker}} into smaller constructs that can be reused
> elsewhere, such as block group checksum computing on the datanode side. As discussed in
> HDFS-8430 and implemented in the HDFS-9694 patch, checksum computing for striped block groups
> would be distributed to the datanodes in the group, where missing or corrupted data blocks must
> be reconstructable so that the block checksum can be recomputed. Most of the code needed for
> this already exists in the current ErasureCodingWorker and could be reused to avoid duplication.
> Fortunately, we have very good and complete tests, which will make the refactoring much easier.
> The refactoring will also help a lot with subsequent phase II tasks for non-striped erasure
> coded files and blocks. (An illustrative sketch of the intended reuse follows below.)
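
For illustration only, here is a minimal Java sketch of the kind of extraction the issue describes: a shared reconstruction construct that both the existing recovery path and a block-group checksum computer could depend on. The names below (StripedBlockReconstructor, RecoveryWorker, BlockGroupChecksumComputer) are hypothetical and are not the classes actually introduced by the HDFS-9719 patch or present in the Hadoop APIs.

{code:java}
// Hypothetical sketch; names are illustrative, not real Hadoop classes.
import java.util.zip.CRC32;

/** Reusable construct: decodes a missing block of a striped block group
 *  from the surviving blocks, regardless of who needs the result. */
interface StripedBlockReconstructor {
  byte[] reconstruct(int missingBlockIndex);
}

/** Existing consumer: background recovery of a lost striped block. */
class RecoveryWorker {
  private final StripedBlockReconstructor reconstructor;

  RecoveryWorker(StripedBlockReconstructor reconstructor) {
    this.reconstructor = reconstructor;
  }

  void recover(int missingBlockIndex) {
    byte[] data = reconstructor.reconstruct(missingBlockIndex);
    // ... write the reconstructed block out to the target datanode
  }
}

/** New consumer (per HDFS-8430 / HDFS-9694): block group checksum
 *  computation on the datanode, which must tolerate a missing block. */
class BlockGroupChecksumComputer {
  private final StripedBlockReconstructor reconstructor;

  BlockGroupChecksumComputer(StripedBlockReconstructor reconstructor) {
    this.reconstructor = reconstructor;
  }

  long checksumOf(byte[] localBlock, int blockIndex, boolean missing) {
    // Reuse the shared reconstructor instead of duplicating decode logic.
    byte[] data = missing ? reconstructor.reconstruct(blockIndex) : localBlock;
    CRC32 crc = new CRC32();
    crc.update(data, 0, data.length);
    return crc.getValue();
  }
}
{code}

The point of the sketch is only the dependency structure: once the read-and-decode logic lives behind its own construct, the checksum path can consume it without copying code out of ErasureCodingWorker.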



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
