hadoop-hdfs-issues mailing list archives

From "Vinay (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas
Date Sat, 23 Nov 2013 14:41:41 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830682#comment-13830682 ]

Vinay commented on HDFS-5557:
-----------------------------

Great finding, Kihwal.
The patch looks quite good, and the test fails without the actual fix.
I have the following comments:

{code}+      DFSTestUtil.createFile(fileSys, file, 68000000L, (short)numDataNodes, 0L);{code}
Why exactly did you use *68000000L* here? Did you want to write more than one block?
If yes, then you might have to set the block size to 64MB; the default is 128MB now in trunk. A sketch of that setup follows below.
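
For reference, a minimal sketch of forcing 64MB blocks in the test (assuming the usual Configuration/MiniDFSCluster setup; DFS_BLOCK_SIZE_KEY is the standard key in DFSConfigKeys):
{code}
// Sketch only: with a 64MB (67108864-byte) block size, a 68000000-byte
// file spans two blocks, which seems to be the intent here.
Configuration conf = new HdfsConfiguration();
conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 64L * 1024 * 1024);
// ... start the MiniDFSCluster with this conf, obtain fileSys, then:
DFSTestUtil.createFile(fileSys, file, 68000000L, (short) numDataNodes, 0L);
{code}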

bq. If the last block is completed, but the penultimate block is not because of this issue, the file won't be closed.
It would be better to add a test for this too. You can reproduce it by failing the last packet of only the penultimate block. For that, you might need to change the Mockito statement in the test, and one more line in DFSOutputStream.java, to the following:
{code}Mockito.when(faultInjector.failPacket()).thenReturn(true, false);{code}
and 
{code}if (isLastPacketInBlock && DFSClientFaultInjector.get().failPacket()){code}
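
Put together, the test setup might look roughly like this (a sketch; exactly how the mock injector is installed depends on how the patch exposes DFSClientFaultInjector, here assumed to be via a settable static instance):
{code}
// Sketch: thenReturn(true, false) makes the first failPacket() call return
// true (failing the penultimate block's last packet) and all later calls
// return false (so the last block's last packet goes through).
DFSClientFaultInjector faultInjector = Mockito.mock(DFSClientFaultInjector.class);
DFSClientFaultInjector.instance = faultInjector; // assumption: settable static field
Mockito.when(faultInjector.failPacket()).thenReturn(true, false);
{code}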


I also looked at the patch for HDFS-5558.
I think it will not solve the issue mentioned there, i.e. the crash of the LeaseManager monitor
thread; that fix actually takes effect in the flow of the client's completeFile() call, not in lease recovery.
That change might be required in this issue instead, to block the client from committing the last block. A rough sketch of the completeFile() flow I mean is below.
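
For clarity, the client-side flow I mean is roughly the retry loop in DFSOutputStream.completeFile() (a simplified sketch; the real method also bounds the retries, and the ClientProtocol.complete() signature varies across versions):
{code}
// Sketch: the client keeps asking the NameNode to complete the file;
// complete() returns false while the last block is not yet committed
// and minimally replicated, so the client backs off and retries.
boolean fileComplete = false;
while (!fileComplete) {
  fileComplete = namenode.complete(src, dfsClient.clientName, lastBlock);
  if (!fileComplete) {
    Thread.sleep(400); // back off before retrying
  }
}
{code}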

> Write pipeline recovery for the last packet in the block may cause rejection of valid replicas
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5557
>                 URL: https://issues.apache.org/jira/browse/HDFS-5557
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.23.9, 2.3.0
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Critical
>         Attachments: HDFS-5557.patch, HDFS-5557.patch
>
>
> When a block is reported from a data node while the block is under construction (i.e. not committed or completed), BlockManager calls BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported replica state. But BlockManager is calling it with the stored block, not the reported block. This causes the recorded replicas' gen stamp to be that of the BlockInfoUnderConstruction itself, not the one from the reported replica.
> When a pipeline recovery is done for the last packet of a block, the incremental block reports with the new gen stamp may arrive before the client calls updatePipeline(). If this happens, these replicas will be incorrectly recorded with the old gen stamp and get removed later. The result is a close or addAdditionalBlock failure.
> If the last block is completed, but the penultimate block is not because of this issue, the file won't be closed. If this file is not cleared but the client goes away, the lease manager will try to recover the lease/block, at which point it will crash. I will file a separate jira for this shortly.
> The worst case is rejecting all the good replicas and accepting a bad one. In this case, the block will be completed, but the data cannot be read until the next full block report containing one of the valid replicas is received.
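
For context, the fix described above boils down to a one-argument change at the reporting site in BlockManager (a simplified sketch with hypothetical variable names; the actual signature differs across versions):
{code}
// storedBlockUC is the NameNode's BlockInfoUnderConstruction, which still
// carries the old gen stamp; reportedBlock carries the datanode's new one.

// Buggy: the replica is recorded with the stored block's old gen stamp
storedBlockUC.addReplicaIfNotPresent(node, storedBlockUC, reportedState);

// Fixed: pass the reported block so its new gen stamp is recorded
storedBlockUC.addReplicaIfNotPresent(node, reportedBlock, reportedState);
{code}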



--
This message was sent by Atlassian JIRA
(v6.1#6144)
