hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1444) Block allocation method does not check pendingCreates for duplicate block ids
Date Mon, 18 Jun 2007 22:47:26 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doug Cutting updated HADOOP-1444:
---------------------------------

       Resolution: Fixed
    Fix Version/s: 0.14.0
           Status: Resolved  (was: Patch Available)

I just committed this.  Thanks, Dhruba!

Note: your patch included some improvements to the descriptions in conf/hadoop-default.xml,
but these seemed unrelated to this issue. I assume they were intended for some other issue
and were included accidentally, so I ignored them.

> Block allocation method does not check pendingCreates for duplicate block ids
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-1444
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1444
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.14.0
>
>         Attachments: duplicateBlockId3.patch
>
>
> The HDFS namenode allocates a new random block id when requested. It then checks the blocksMap
to verify whether this block id is already in use. If it is, the namenode generates another
random id and repeats the process. When it finds a block id that does not exist in the
blocksMap, it stores that id in pendingCreateBlocks and returns it to the requesting client.
> The above check for detecting duplicate block ids should check pendingCreateBlocks as well.
> A related problem exists when a file is deleted. Deleting a file causes all of its blocks
to be removed from the blocksMap immediately. The block ids move to recentInvalidateSets
and are sent to the corresponding datanodes as part of the responses to subsequent heartbeats.
So there is a time window during which a block exists on a datanode but not in the blocksMap.
If, during this window, the random block id generator produces an id that exists on a datanode
but not in the blocksMap, the namenode will fail to detect that it is a duplicate.
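The allocation loop described above can be sketched as follows. This is a hypothetical, simplified illustration, not the actual FSNamesystem code: the class name `BlockIdAllocator`, the method names, and the use of plain `HashSet`s are assumptions for clarity. The point is that the collision check must cover both the committed blocksMap and the in-flight pendingCreateBlocks set.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Hypothetical sketch of the namenode's block id allocation check.
// Names and data structures are illustrative, not Hadoop's real ones.
class BlockIdAllocator {
    private final Set<Long> blocksMap = new HashSet<>();           // ids of committed blocks
    private final Set<Long> pendingCreateBlocks = new HashSet<>(); // ids handed out but not yet committed
    private final Random rand = new Random();

    // Generate a random id, retrying until it collides with neither
    // a committed block nor a pending create. Checking only blocksMap
    // (the original bug) could return an id already in flight.
    long allocateBlockId() {
        long id = rand.nextLong();
        while (blocksMap.contains(id) || pendingCreateBlocks.contains(id)) {
            id = rand.nextLong();
        }
        pendingCreateBlocks.add(id);
        return id;
    }

    // Once the client finishes creating the block, move the id
    // from the pending set into the committed map.
    void commit(long id) {
        pendingCreateBlocks.remove(id);
        blocksMap.add(id);
    }
}
```

Note that this sketch does not address the second, delete-related race: an id removed from blocksMap but still queued in recentInvalidateSets (and still present on a datanode) would pass both checks here.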

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

