hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2668) Incorrect assertion in BlockManager when block report arrives shortly after invalidation decision
Date Sun, 18 Dec 2011 17:52:30 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13171910#comment-13171910 ]

Uma Maheswara Rao G commented on HDFS-2668:
-------------------------------------------

I confirmed the issue: I added a RuntimeException throw in BlockManager#processReportedBlock.

{code}
    // Ignore replicas already scheduled to be removed from the DN
    if (invalidateBlocks.contains(dn.getStorageID(), block)) {
      assert storedBlock.findDatanode(dn) < 0 : "Block " + block
        + " in recentInvalidatesSet should not appear in DN " + dn;

      // Added for this experiment: fail hard, even when assertions are
      // disabled, if the replica is still associated with the DN.
      if (storedBlock.findDatanode(dn) >= 0) {
        throw new RuntimeException("Block already added into invalidateBlocks."
            + " But still this block associated with DN...."
            + " storedBlock.findDatanode(dn) =" + storedBlock.findDatanode(dn));
      }
      return storedBlock;
    }
{code}
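
For context on why I added an explicit throw next to the existing assert: a Java assertion is skipped unless the JVM runs with -ea, while a RuntimeException always fires and, as the stack trace below shows, is propagated back to the DataNode as a RemoteException. A standalone illustration of the difference (hypothetical class, not HDFS code):

{code}
// Hypothetical standalone class, only to show assert vs. explicit throw.
public class AssertVsThrow {
  static void check(int findDatanodeResult) {
    // No-op unless the JVM is started with -ea.
    assert findDatanodeResult < 0 : "skipped silently when assertions are disabled";
    // Always runs, so the violated condition is surfaced unconditionally.
    if (findDatanodeResult >= 0) {
      throw new RuntimeException("replica still associated with the DN: " + findDatanodeResult);
    }
  }

  public static void main(String[] args) {
    check(1); // throws RuntimeException regardless of whether -ea is set
  }
}
{code}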

After this, I ran the attached test.
Below are the logs that prove the issue.

2011-12-18 23:02:42,066 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(220)) - ugi=uma (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/tmp/testBadBlockReportOnTransfer/file1	dst=null	perm=null
All blocks of file /tmp/testBadBlockReportOnTransfer/file1 verified to have replication factor 3
2011-12-18 23:02:42,073 INFO  blockmanagement.BlockManager (BlockManager.java:setReplication(1814)) - Decreasing replication from 3 to 1 for /tmp/testBadBlockReportOnTransfer/file1
2011-12-18 23:02:42,073 INFO  hdfs.StateChange (InvalidateBlocks.java:add(77)) - BLOCK* InvalidateBlocks: add blk_5137102758256792519_1001 to 127.0.0.1:54432
2011-12-18 23:02:42,073 INFO  hdfs.StateChange (BlockManager.java:chooseExcessReplicates(1954)) - BLOCK* chooseExcessReplicates: (127.0.0.1:54432, blk_5137102758256792519_1001) is added to recentInvalidateSets
2011-12-18 23:02:42,073 INFO  hdfs.StateChange (InvalidateBlocks.java:add(77)) - BLOCK* InvalidateBlocks: add blk_5137102758256792519_1001 to 127.0.0.1:54418
2011-12-18 23:02:42,073 INFO  hdfs.StateChange (BlockManager.java:chooseExcessReplicates(1954)) - BLOCK* chooseExcessReplicates: (127.0.0.1:54418, blk_5137102758256792519_1001) is added to recentInvalidateSets
2011-12-18 23:02:42,076 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(220)) - ugi=uma (auth:SIMPLE)	ip=/127.0.0.1	cmd=setReplication	src=/tmp/testBadBlockReportOnTransfer/file1	dst=null	perm=null

..................
...................
2011-12-18 23:02:43,343 WARN  datanode.DataNode (BPOfferService.java:offerService(537)) - RemoteException in offerService
java.lang.RuntimeException: java.lang.RuntimeException: Block already added into invalidateBlocks. But still this block associated with DN.... storedBlock.findDatanode(dn) =1
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:1498)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1418)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1328)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1303)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:847)
	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:130)
	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:16189)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:417)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:834)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1605)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1)
	at java.security.AccessController.doPrivileged(Native Method)

                
> Incorrect assertion in BlockManager when block report arrives shortly after invalidation decision
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-2668
>                 URL: https://issues.apache.org/jira/browse/HDFS-2668
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.23.0
>            Reporter: Todd Lipcon
>         Attachments: TestToReproduceHDFS-2668.patch
>
>
> I haven't written a test case to verify this yet, but I believe the following assertion is incorrect:
> {code}
>      // Ignore replicas already scheduled to be removed from the DN
>      if(invalidateBlocks.contains(dn.getStorageID(), block)) {
>        assert storedBlock.findDatanode(dn) < 0 : "Block " + block
>          + " in recentInvalidatesSet should not appear in DN " + dn;
> {code}
> The problem is that, when a block is invalidated due to over-replication, it is not immediately removed from the block map. So, if a block report arrives just after a block has been marked as invalidated, but before the block is actually deleted, I think this assertion will trigger incorrectly.
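
One way to reconcile the check with this timing window, sketched under the assumption that such replicas should simply be skipped while they wait for deletion (an illustration only, not the patch committed for this issue):

{code}
    // Sketch (assumption, not the committed fix): the replica is already
    // queued in invalidateBlocks, so it may legitimately still appear in the
    // blocks map until the DataNode processes the invalidation command.
    // Skip it without asserting that findDatanode(dn) < 0.
    if (invalidateBlocks.contains(dn.getStorageID(), block)) {
      return storedBlock;
    }
{code}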


        
