hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5741) In Datanode, update block may fail due to length inconsistency
Date Mon, 11 May 2009 17:39:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12708125#action_12708125 ]

Hairong Kuang commented on HADOOP-5741:

In this case, there are no ongoing writes. The problem is that the data has not yet been
flushed to disk when getBlockMetaDataInfo is called, whereas updateBlock flushes and closes
the file before reading the length. Hence the inconsistency.
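
The effect is easy to reproduce outside the datanode. Below is a minimal, self-contained
sketch (hypothetical class and file names, not actual datanode code) showing that bytes
held in a user-space buffer are not reflected in the on-disk file length until the stream
is flushed or closed:

    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;

    public class LengthMismatchDemo {
      public static void main(String[] args) throws Exception {
        File blockFile = File.createTempFile("blk_", ".data");
        BufferedOutputStream out =
            new BufferedOutputStream(new FileOutputStream(blockFile));
        out.write(new byte[1024]);  // bytes still sit in the Java-side buffer

        // A length query before any flush, as in the getBlockMetaDataInfo
        // case above, sees the stale on-disk length.
        long lengthBeforeFlush = blockFile.length();  // 0

        // Closing the stream flushes the buffer, as updateBlock does, so a
        // second query sees a different length.
        out.close();
        long lengthAfterClose = blockFile.length();   // 1024

        System.out.println(lengthBeforeFlush + " != " + lengthAfterClose);
        blockFile.delete();
      }
    }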

> In Datanode, update block may fail due to length inconsistency
> --------------------------------------------------------------
>                 Key: HADOOP-5741
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5741
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Tsz Wo (Nicholas), SZE
> When a primary datanode tries to recover a block, it calls getBlockMetaDataInfo(..) on
> each datanode to obtain information such as the block length. Then, it calls updateBlock(..).
> The block length returned by getBlockMetaDataInfo(..) may be obtained from an unclosed
> local block file F. However, updateBlock(..) first closes F (if F is open) and then gets
> the length. These two lengths may be different. In that case, updateBlock(..) throws an
> exception.
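
For context, here is a simplified sketch of the two-step recovery flow described above.
Every type and method name below is a hypothetical stand-in paraphrasing the
inter-datanode protocol, not the actual Hadoop interface:

    import java.io.IOException;
    import java.util.List;

    interface DatanodeStub {
      // Step 1: report the replica's length; this may read a still-open,
      // unflushed block file F, so the value can be stale.
      long getBlockLength(long blockId) throws IOException;

      // Step 2: close F (flushing any buffered bytes), re-read the length,
      // and throw if it disagrees with the expected length.
      void updateBlock(long blockId, long newLength) throws IOException;
    }

    class RecoverySketch {
      static long recoverBlock(long blockId, List<DatanodeStub> targets)
          throws IOException {
        // Gather the length each replica reports before anything is flushed.
        long minLength = Long.MAX_VALUE;
        for (DatanodeStub dn : targets) {
          minLength = Math.min(minLength, dn.getBlockLength(blockId));
        }
        // Ask each replica to update to the agreed length. Because the close
        // inside updateBlock can change the on-disk length, the mismatch
        // surfaces here as an exception.
        for (DatanodeStub dn : targets) {
          dn.updateBlock(blockId, minLength);
        }
        return minLength;
      }
    }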

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
