hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-3015) DataNode should clean up temporary files when writeBlock fails.
Date Thu, 13 Mar 2008 23:37:24 GMT
DataNode should clean up temporary files when writeBlock fails.
---------------------------------------------------------------

                 Key: HADOOP-3015
                 URL: https://issues.apache.org/jira/browse/HADOOP-3015
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.15.3
            Reporter: Raghu Angadi



Once a DataNode starts receiving a block but fails to complete the transfer, it leaves the temporary block files in its temp directory. Because of these leftover files, the same block cannot be written to this node for the next hour.

The DataNode should delete these files so that the next write attempt can succeed.
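A minimal sketch of the cleanup-on-failure pattern this asks for; the class and method names here (TempBlockCleanupSketch, receiveBlockData, tmpBlockFile) are hypothetical and not the actual DataNode code:

import java.io.File;
import java.io.IOException;

public class TempBlockCleanupSketch {

  /** Receives a block into a temporary file; deletes the file if the transfer fails. */
  void writeBlock(File tmpBlockFile) throws IOException {
    boolean success = false;
    try {
      receiveBlockData(tmpBlockFile);   // may throw if the client or upstream node dies
      success = true;
    } finally {
      if (!success && tmpBlockFile.exists()) {
        // Remove the partial block so a later attempt to write the same
        // block to this node is not rejected.
        if (!tmpBlockFile.delete()) {
          System.err.println("Could not delete " + tmpBlockFile);
        }
      }
    }
  }

  private void receiveBlockData(File dst) throws IOException {
    // Placeholder for the actual data transfer.
    throw new IOException("transfer failed");
  }
}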

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

