hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3161) 20 Append: Excluded DN replica from recovery should be removed from DN.
Date Tue, 17 Apr 2012 02:52:16 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13255269#comment-13255269 ]

Uma Maheswara Rao G commented on HDFS-3161:
-------------------------------------------

Todd, this situation can occur, but we have seen the problem only in append, because recovery is skipped when an entry is present in ongoingCreates.

In this case:
bq. DN1 already has blk_N_GS1 in its ongoingCreates map

The block transfer will happen successfully as part of replication, and a reader should be able to read the block with the newer genstamp. If there is no append for this file, there won't be any recoveries. So we need not worry about skipping the DN from recovery if the entry is present in ongoingCreates.
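
To make that behaviour concrete, here is a minimal sketch of the skip check (illustrative names only: RecoverySkipSketch, ActiveEntry and participateInRecovery are stand-ins, not the actual 0.20-append FSDataset code; ongoingCreates and wasRecoveredOnStartup() are the pieces discussed in this issue):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch only; names are stand-ins, not the real DataNode code. */
class RecoverySkipSketch {
  /** Stand-in for the per-block entry kept in ongoingCreates. */
  static class ActiveEntry {
    final boolean recoveredOnStartup;
    ActiveEntry(boolean recoveredOnStartup) { this.recoveredOnStartup = recoveredOnStartup; }
    boolean wasRecoveredOnStartup() { return recoveredOnStartup; }
  }

  private final Map<Long, ActiveEntry> ongoingCreates = new HashMap<>();

  /** Returns true if this DN should take part in recovery of blockId. */
  synchronized boolean participateInRecovery(long blockId) {
    ActiveEntry info = ongoingCreates.get(blockId);
    if (info != null && info.wasRecoveredOnStartup()) {
      // Skipped: the stale replica and its ongoingCreates entry survive
      // until the DataNode restarts, which is the behaviour described above.
      return false;
    }
    return true;
  }
}
{code}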

The only issue is that the block and its ongoingCreates entry will not be cleared until we restart the cluster.

Anyway, let me confirm whether readers are able to read properly, and I will write a test for your scenario.

Do you see any other problems?

{quote}
but I agree that, if a replication request happens for a block with a higher genstamp, it
should interrupt the old block's ongoingCreate. If the replication request is a lower genstamp,
it should be ignored.
{quote}
If we really want to address this case, then I feel this would be the fix.
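
A minimal sketch of what that fix could look like (illustrative names only: OngoingCreatesSketch, ActiveEntry and onReplicatedBlock are stand-ins, not the actual FSDataset API): when a replicated block arrives with a newer genstamp, interrupt and remove the stale ongoingCreates entry; with a lower genstamp, ignore the request.

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch only; names are stand-ins, not the real DataNode code. */
class OngoingCreatesSketch {
  /** Stand-in for the per-block in-progress write tracked in ongoingCreates. */
  static class ActiveEntry {
    final long genStamp;
    ActiveEntry(long genStamp) { this.genStamp = genStamp; }
    void interruptWriters() { /* interrupt stale writer threads in the real DN */ }
  }

  private final Map<Long, ActiveEntry> ongoingCreates = new HashMap<>();

  /** Called when a replication request delivers a replica of blockId with genStamp. */
  synchronized void onReplicatedBlock(long blockId, long genStamp) throws IOException {
    ActiveEntry old = ongoingCreates.get(blockId);
    if (old != null) {
      if (genStamp > old.genStamp) {
        // Newer genstamp: interrupt the stale ongoing create and remove its
        // entry so the replicated block cleanly replaces the old replica.
        old.interruptWriters();
        ongoingCreates.remove(blockId);
      } else {
        // Lower (or equal) genstamp: ignore the replication request.
        throw new IOException("Ignoring replica with older genstamp " + genStamp
            + " for block " + blockId);
      }
    }
    // ... proceed with writing the replicated block to disk ...
  }
}
{code}

Throwing on the lower-genstamp case mirrors the "should be ignored" part of the quote; logging and returning would work equally well.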
                
> 20 Append: Excluded DN replica from recovery should be removed from DN.
> -----------------------------------------------------------------------
>
>                 Key: HDFS-3161
>                 URL: https://issues.apache.org/jira/browse/HDFS-3161
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>            Reporter: suja s
>            Priority: Critical
>             Fix For: 1.0.3
>
>
> 1) DN1->DN2->DN3 are in the pipeline.
> 2) The client is killed abruptly.
> 3) One DN has restarted, say DN3.
> 4) In DN3, info.wasRecoveredOnStartup() will be true.
> 5) NN recovery is triggered; DN3 is skipped from recovery due to the above check.
> 6) Now DN1 and DN2 have blocks with generation stamp 2, DN3 has the older generation stamp, say 1, and DN3 still has this block's entry in ongoingCreates.
> 7) As part of recovery the file has been closed and got only two live replicas (from DN1 and DN2).
> 8) So the NN issued the command for replication. Now DN3 also has the replica with the newer generation stamp.
> 9) Now DN3 contains 2 replicas on disk, and one entry in ongoingCreates referring to the blocksBeingWritten directory.
> When we call append/lease recovery, it may again skip this node for that recovery, as the blockId entry is still present in ongoingCreates with startup recovery true.
> It may keep up this dance for every recovery.
> And this stale replica will not be cleaned until we restart the cluster. The actual replica will be transferred to this node only through the replication process.
> Also, those replicated blocks will unnecessarily get invalidated after the next recoveries....
