hadoop-mapreduce-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-4917) multiple BlockFixer should be supported in order to improve scalability and reduce too much work on single BlockFixer
Date Sat, 02 May 2015 04:24:11 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14524598#comment-14524598 ]

Hadoop QA commented on MAPREDUCE-4917:
--------------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12563471/MAPREDUCE-4917.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f1a152c |
| Console output | https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5505/console |


This message was automatically generated.

> multiple BlockFixer should be supported in order to improve scalability and reduce too much work on single BlockFixer
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4917
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4917
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: contrib/raid
>    Affects Versions: 0.22.0
>            Reporter: Jun Jin
>            Assignee: Jun Jin
>              Labels: patch
>             Fix For: 0.22.0
>
>         Attachments: MAPREDUCE-4917.1.patch, MAPREDUCE-4917.2.patch
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> The current implementation can only run a single BlockFixer, since fsck (in RaidDFSUtil.getCorruptFiles) checks the whole DFS file system; if multiple BlockFixers were launched, they would all do the same work and try to fix the same files.
> The change/fix will mainly be in BlockFixer.java and RaidDFSUtil.getCorruptFiles(), to enable fsck to check the different paths defined in separate Raid.xml files for a single RaidNode/BlockFixer.


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
