hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4799) Corrupt replica can be prematurely removed from corruptReplicas map
Date Wed, 08 May 2013 02:43:16 GMT

    https://issues.apache.org/jira/browse/HDFS-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651571#comment-13651571

Hadoop QA commented on HDFS-4799:

{color:green}+1 overall{color}.  Here are the results of testing the latest attachment 
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 2 new or modified
test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of
javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version
1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number
of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4363//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4363//console

This message is automatically generated.
> Corrupt replica can be prematurely removed from corruptReplicas map
> -------------------------------------------------------------------
>                 Key: HDFS-4799
>                 URL: https://issues.apache.org/jira/browse/HDFS-4799
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.0.4-alpha
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-4799.txt, hdfs-4799-unittest.txt
> We saw the following sequence of events in a cluster result in losing the most recent
genstamp of a block:
> - client is writing to a pipeline of 3
> - the pipeline had nodes fail over some period of time, such that it left 3 old-genstamp
replicas on the original three nodes, having recruited 3 new replicas with a later genstamp.
> -- so, we have 6 total replicas in the cluster, three with old genstamps on downed nodes,
and 3 with the latest genstamp
> - cluster reboots, and the nodes with old genstamps blockReport first. The replicas are
correctly added to the corrupt replicas map since they have a too-old genstamp
> - the nodes with the new genstamp block report. When the last of them block reports, chooseExcessReplicates
is called and incorrectly decides to remove the three good replicas, leaving only the old-genstamp
replicas.

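The replica-counting mistake described above can be sketched in a few lines. This is a hypothetical, simplified model of the scenario, not the actual NameNode code: the method names (buggyExcess, correctExcess) and the genstamp lists are illustrative only. The point is that if excess-replica selection counts stale-genstamp (corrupt) replicas as live, six reported replicas against a replication factor of three look like three excess replicas, and the "excess" chosen for removal can be the good ones.

```java
import java.util.List;

// Hypothetical sketch of the HDFS-4799 scenario. These names are
// illustrative and do not correspond to real NameNode classes.
public class GenstampScenario {

    // Buggy accounting: every reported replica counts as live,
    // including the ones already in the corrupt-replicas map.
    static int buggyExcess(List<Long> reportedGenstamps, int replication) {
        return reportedGenstamps.size() - replication;
    }

    // Correct accounting: replicas with a stale genstamp stay corrupt
    // and never count toward the replication factor.
    static int correctExcess(List<Long> reportedGenstamps,
                             long latestGenstamp, int replication) {
        long live = reportedGenstamps.stream()
                .filter(gs -> gs == latestGenstamp)
                .count();
        return (int) Math.max(0, live - replication);
    }

    public static void main(String[] args) {
        // Three stale-genstamp (1L) replicas block-report first,
        // then three replicas with the latest genstamp (2L).
        List<Long> reported = List.of(1L, 1L, 1L, 2L, 2L, 2L);

        // Buggy path sees 6 replicas, picks 3 to delete.
        System.out.println("buggy excess = " + buggyExcess(reported, 3));

        // Correct path sees only 3 live replicas, deletes nothing.
        System.out.println("correct excess = "
                + correctExcess(reported, 2L, 3));
    }
}
```

Prints `buggy excess = 3` and `correct excess = 0`: the buggy accounting manufactures three "excess" replicas to remove even though only three valid copies exist.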
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
