hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (HADOOP-109) Blocks are not replicated when...
Date Wed, 28 Feb 2007 20:19:50 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doug Cutting resolved HADOOP-109.
---------------------------------

       Resolution: Fixed
    Fix Version/s: 0.12.0

This was fixed as part of HADOOP-940.

> Blocks are not replicated when...
> ---------------------------------
>
>                 Key: HADOOP-109
>                 URL: https://issues.apache.org/jira/browse/HADOOP-109
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.1.0
>            Reporter: Konstantin Shvachko
>         Assigned To: Konstantin Shvachko
>             Fix For: 0.12.0
>
>
> When a block is under-replicated, the namenode places it in the
> FSNamesystem.neededReplications list.
> When a datanode D1 sends a getBlockwork() request to the namenode, the namenode
> selects another node D2 (which it believes is up and running) where the new replica
> of the under-replicated block will be stored.
> The namenode then removes the block from the neededReplications list, places it in
> the pendingReplications list, and asks D1 to replicate the block to D2.
> If D2 is in fact down, the replication fails and is never retried, because the block
> is no longer in the neededReplications list but in the pendingReplications list,
> which the namenode never checks.
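
For readers unfamiliar with this bookkeeping, here is a minimal sketch of the flow described above. It is illustrative only: the class name ReplicationTrackerSketch and its methods are invented for this example and are not the real FSNamesystem code. The last method is the step the report says is missing, i.e. some path from pendingReplications back to neededReplications so a failed transfer can be retried.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative stand-in for the namenode's replication bookkeeping
// described in HADOOP-109; not the actual FSNamesystem implementation.
class ReplicationTrackerSketch {

    // Blocks that still need another replica.
    private final Set<String> neededReplications = new HashSet<String>();

    // Blocks for which a replication request has been handed to a datanode,
    // mapped to the chosen target. In the buggy behavior described above,
    // nothing ever re-examines this map.
    private final Map<String, String> pendingReplications =
        new HashMap<String, String>();

    void markUnderReplicated(String blockId) {
        neededReplications.add(blockId);
    }

    // Called when a datanode (D1) asks for work: move the block from
    // neededReplications to pendingReplications and return the target (D2)
    // that D1 should copy the block to.
    String assignReplicationWork(String blockId, String targetDatanode) {
        if (neededReplications.remove(blockId)) {
            pendingReplications.put(blockId, targetDatanode);
            return targetDatanode;
        }
        return null;
    }

    // The missing piece: if the transfer to the target fails (for example,
    // the target is down), put the block back into neededReplications so it
    // can be retried. Without a call like this, the block sits in
    // pendingReplications forever.
    void replicationFailed(String blockId) {
        if (pendingReplications.remove(blockId) != null) {
            neededReplications.add(blockId);
        }
    }
}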

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

