hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3448) CLONE - Why open method in class DFSClient would compare old LocatedBlocks and new LocatedBlocks?
Date Mon, 21 May 2012 16:32:42 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280267#comment-13280267 ]

Todd Lipcon commented on HDFS-3448:

Good catch. Want to submit a patch file against branch-1? A unit test would also be appreciated.
Since append() is not supported in branch-1, I think a good unit test would be an addition
to TestFileConcurrentReader that causes the writer to trigger pipeline recovery after opening
the file. Maybe something like:
- writer opens file for write
- writer writes some bytes and hflushes
- reader opens the file but does not read any bytes (this causes getBlockLocations() to be
fetched without actually talking to the DNs)
- shut down one of three DNs
- writer writes some more data and hflushes again: this should cause the genstamp to increase
- reader tries to read.

I think this should trigger the bug you found. Thanks!
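The scenario above can be sketched in miniature. This is a simplified model, not the real org.apache.hadoop.hdfs classes: the Block class here is a hypothetical stand-in carrying only a block id and a generation stamp, and the "pipeline recovery" is simulated by hand. It shows why the branch-1 comparison trips on a recovered block even though the file was never swapped.

```java
import java.util.Arrays;
import java.util.List;

public class GenstampBumpDemo {
    // Hypothetical stand-in for the HDFS block: just the two fields the
    // comparison cares about.
    static final class Block {
        final long blockId;
        final long genStamp;
        Block(long blockId, long genStamp) {
            this.blockId = blockId;
            this.genStamp = genStamp;
        }
    }

    public static void main(String[] args) {
        // Reader fetches block locations once (the "open but don't read" step).
        List<Block> oldBlocks = Arrays.asList(new Block(1001L, 1L));

        // A DN is shut down; the next hflush triggers pipeline recovery, which
        // bumps the generation stamp but keeps the same block id.
        List<Block> newBlocks = Arrays.asList(new Block(1001L, 2L));

        Block oldB = oldBlocks.get(0);
        Block newB = newBlocks.get(0);

        // A check that compares the whole block (id + genstamp) flags a change
        // after a mere pipeline recovery...
        boolean strictMismatch =
            oldB.blockId != newB.blockId || oldB.genStamp != newB.genStamp;
        // ...while comparing only the block id does not.
        boolean idOnlyMismatch = oldB.blockId != newB.blockId;

        System.out.println("strict comparison flags a change: " + strictMismatch);
        System.out.println("id-only comparison flags a change: " + idOnlyMismatch);
    }
}
```

Running this prints `true` for the strict comparison and `false` for the id-only one, which is exactly the spurious "Blocklist ... has changed!" the reader would hit.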
> CLONE - Why open method in class DFSClient would compare old LocatedBlocks and new LocatedBlocks?
> -------------------------------------------------------------------------------------------------
>                 Key: HDFS-3448
>                 URL: https://issues.apache.org/jira/browse/HDFS-3448
>             Project: Hadoop HDFS
>          Issue Type: Wish
>          Components: hdfs client
>    Affects Versions: 1.0.1
>            Reporter: Li Junjun
>            Assignee: Todd Lipcon
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>  I think there are two situations.
> 1. If the file has been swapped with another file, we check the blockId; it is correct to
> throw an exception.
> 2. But if the file has not been swapped, only appended, we should check just the blockId
> and not care about the block's generation stamp, because we did in fact get the right,
> updated block list: a file in HDFS can't be truncated.
> So how about we do it like this?
> if (oldIter.next().getBlock().getBlockId() != newIter.next().getBlock().getBlockId()) {
>   throw new IOException("Blocklist for " + src + " has changed!");
> }
> After all, between two calls to openInfo() the file can be swapped and then appended, so
> we should not ignore the under-construction file.
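The proposed check can be sketched as a self-contained method. Again the Block class is a simplified stand-in (the real code goes through LocatedBlocks and getBlock().getBlockId()); the point is only that the loop compares block ids and ignores generation stamps, so a pipeline recovery passes while a genuine file swap still throws.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class BlockIdCheckDemo {
    // Hypothetical stand-in for the HDFS block type.
    static final class Block {
        final long blockId;
        final long genStamp;
        Block(long id, long gs) { blockId = id; genStamp = gs; }
    }

    // The proposed comparison: block ids only, generation stamps ignored.
    static void checkBlockList(List<Block> oldBlocks, List<Block> newBlocks,
                               String src) throws IOException {
        Iterator<Block> oldIter = oldBlocks.iterator();
        Iterator<Block> newIter = newBlocks.iterator();
        while (oldIter.hasNext() && newIter.hasNext()) {
            if (oldIter.next().blockId != newIter.next().blockId) {
                throw new IOException("Blocklist for " + src + " has changed!");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Same id, bumped genstamp (pipeline recovery after append): accepted.
        checkBlockList(Arrays.asList(new Block(1001L, 1L)),
                       Arrays.asList(new Block(1001L, 2L)), "/f");
        System.out.println("recovered block accepted");

        // Different id (file swapped with another file): rejected.
        try {
            checkBlockList(Arrays.asList(new Block(1001L, 1L)),
                           Arrays.asList(new Block(2002L, 1L)), "/f");
        } catch (IOException e) {
            System.out.println("swapped file rejected: " + e.getMessage());
        }
    }
}
```

This matches the reporter's last point: even if the file is swapped and then appended to between two openInfo() calls, the block ids differ, so the swap is still detected without comparing genstamps.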

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

