hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-142) In 0.20, move blocks being written into a blocksBeingWritten directory
Date Tue, 27 Apr 2010 22:11:38 GMT

     [ https://issues.apache.org/jira/browse/HDFS-142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-142:
-----------------------------

    Attachment: hdfs-142-commitBlockSynchronization-unknown-datanode.txt
                hdfs-142-testcases.txt

Uploading two more patches for 0.20 append:
 - hdfs-142-commitBlockSynchronization-unknown-datanode.txt fixes a case where FSN.getDatanode
was throwing an UnregisteredDatanodeException because one of the original recovery targets had
departed the cluster (in this case it had been replaced by a new DN with the same storage but a
different port). This exception was causing commitBlockSynchronization to fail after removing the
old block from blocksMap but before putting in the new one, making both the old and new blocks
inaccessible and causing any further nextGenerationStamp calls to fail. A rough sketch of this
ordering problem is included after the list below.
- hdfs-142-testcases.txt includes two new test cases:
-- testRecoverFinalizedBlock stops a writer just before it calls completeFile() and then has
another client recover the file
-- testDatanodeFailsToCommit() injects an IOE when the DN calls commitBlockSynchronization
for the first time, to make sure that the retry succeeds even though updateBlocks() was already
called during the first synchronization attempt.
-- These tests pass after applying Sam's patch to fix refinalization of a finalized block.
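
For reference, here is a rough, simplified sketch of the ordering problem the first patch
addresses. This is not the actual 0.20 FSNamesystem code; the names (blocksMap, getDatanode,
commitBlockSynchronization) only loosely mirror it, and the data structures are stand-ins:

{code}
// Hypothetical, self-contained illustration of the failure mode: the old block
// is removed from blocksMap, then the lookup of a departed recovery target
// throws, so the new block is never inserted and both become inaccessible.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class CommitSyncSketch {
  static Map<String, String> blocksMap = new HashMap<>();  // blockId -> state
  static Set<String> liveDatanodes = Set.of("dn1:50010");  // dn2 has left the cluster

  static String getDatanode(String id) throws IOException {
    if (!liveDatanodes.contains(id)) {
      // stand-in for UnregisteredDatanodeException
      throw new IOException("Unregistered datanode: " + id);
    }
    return id;
  }

  static void commitBlockSynchronization(String oldBlock, String newBlock,
                                         String[] recoveryTargets) throws IOException {
    blocksMap.remove(oldBlock);            // old block removed first ...
    for (String target : recoveryTargets) {
      getDatanode(target);                 // ... throws for a departed target ...
    }
    blocksMap.put(newBlock, "committed");  // ... so the new block is never added
  }

  public static void main(String[] args) {
    blocksMap.put("blk_1", "under recovery");
    try {
      commitBlockSynchronization("blk_1", "blk_1_newGS",
                                 new String[] {"dn1:50010", "dn2:50010"});
    } catch (IOException e) {
      // Neither blk_1 nor blk_1_newGS is in blocksMap now, and subsequent
      // nextGenerationStamp/recovery attempts keep failing from this state.
      System.out.println("blocksMap after failed commit: " + blocksMap);
    }
  }
}
{code}

The attached patch contains the actual fix for this path.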

> In 0.20, move blocks being written into a blocksBeingWritten directory
> ----------------------------------------------------------------------
>
>                 Key: HDFS-142
>                 URL: https://issues.apache.org/jira/browse/HDFS-142
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>            Priority: Blocker
>         Attachments: appendQuestions.txt, deleteTmp.patch, deleteTmp2.patch, deleteTmp5_20.txt,
> deleteTmp5_20.txt, deleteTmp_0.18.patch, handleTmp1.patch, hdfs-142-commitBlockSynchronization-unknown-datanode.txt,
> HDFS-142-deaddn-fix.patch, HDFS-142-finalize-fix.txt, hdfs-142-minidfs-fix-from-409.txt, HDFS-142-multiple-blocks-datanode-exception.patch,
> hdfs-142-testcases.txt, HDFS-142_20.patch, testfileappend4-deaddn.txt
>
>
> Before 0.18, when a Datanode restarts, it deletes files under the data-dir/tmp directory since
> these files are no longer valid. But in 0.18 it incorrectly moves these files to the normal
> directory, making them valid blocks. One of the following would work:
> - remove the tmp files during upgrade, or
> - if the files under /tmp are in pre-18 format (i.e. no generation), delete them.
> Currently the effect of this bug is that these files end up failing block verification and
> eventually get deleted, but they cause incorrect over-replication at the namenode before that.
> Also it looks like our policy regarding the treatment of files under tmp needs to be defined
> better. Right now there are probably one or two more bugs with it. Dhruba, please file them if
> you remember.
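
As an aside on the second option in the description above (delete the pre-18-format files under
tmp), a hypothetical sketch of that check at Datanode startup might look like the following. The
file-name patterns here are assumptions, not the verified on-disk layout:

{code}
// Hypothetical sketch: remove tmp files that are in pre-0.18 format, i.e. meta
// files whose names carry no generation stamp. The name pattern is an assumption.
import java.io.File;
import java.util.regex.Pattern;

public class TmpDirCleanup {
  // assumed pre-0.18 meta name: blk_<id>.meta (no _<genstamp> before .meta)
  private static final Pattern PRE_018_META = Pattern.compile("blk_-?\\d+\\.meta");

  static void cleanPre018TmpFiles(File tmpDir) {
    File[] files = tmpDir.listFiles();
    if (files == null) {
      return;                                   // tmp dir may not exist
    }
    for (File f : files) {
      if (PRE_018_META.matcher(f.getName()).matches()) {
        // no generation stamp => cannot be a valid block under 0.18+, so drop it
        if (!f.delete()) {
          System.err.println("Could not delete " + f);
        }
      }
    }
  }
}
{code}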

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

