hadoop-hdfs-issues mailing list archives

From "Li Junjun (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-3448) CLONE - Why open method in class DFSClient would compare old LocatedBlocks and new LocatedBlocks?
Date Mon, 21 May 2012 03:30:40 GMT

     [ https://issues.apache.org/jira/browse/HDFS-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Li Junjun updated HDFS-3448:
----------------------------

    Description: 
 I think there are two situations.
1. The file has been swapped with another file. We check the block ID; in that case it is correct to throw an exception.
2. The file has not been swapped but has been appended to. We should check only the block ID and not care about the block's generation stamp, because in fact we got the right, updated block list: a file in HDFS cannot be truncated.

So how about we do it like this?

if (oldIter.next().getBlock().getBlockId() != newIter.next().getBlock().getBlockId()) {
  throw new IOException("Blocklist for " + src + " has changed!");
}

After all, between two calls to openInfo() the file can be swapped and then appended to, so we should not ignore the under-construction file.
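The proposed check can be sketched as a self-contained Java example. Note that the Block class and checkBlockList method below are simplified, hypothetical stand-ins for Hadoop's org.apache.hadoop.hdfs.protocol.LocatedBlock and the loop in DFSClient.openInfo(), not the real API; the point is only to show that comparing block IDs accepts an appended file while still rejecting a swapped one.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

public class BlockListCheck {
    // Hypothetical stand-in for an HDFS block: an ID plus a generation stamp.
    static final class Block {
        final long blockId;
        final long genStamp;
        Block(long blockId, long genStamp) {
            this.blockId = blockId;
            this.genStamp = genStamp;
        }
    }

    // Throws only when a block ID differs (file swapped), not when only the
    // generation stamp changed (file appended to).
    static void checkBlockList(List<Block> oldBlocks, List<Block> newBlocks,
                               String src) throws IOException {
        Iterator<Block> oldIter = oldBlocks.iterator();
        Iterator<Block> newIter = newBlocks.iterator();
        while (oldIter.hasNext() && newIter.hasNext()) {
            if (oldIter.next().blockId != newIter.next().blockId) {
                throw new IOException("Blocklist for " + src + " has changed!");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Appended file: same IDs, last block's generation stamp bumped -> accepted.
        checkBlockList(
            List.of(new Block(1, 100), new Block(2, 100)),
            List.of(new Block(1, 100), new Block(2, 101)), "/f");

        // Swapped file: a block ID differs -> rejected.
        boolean thrown = false;
        try {
            checkBlockList(List.of(new Block(1, 100)),
                           List.of(new Block(9, 100)), "/f");
        } catch (IOException e) {
            thrown = true;
        }
        System.out.println(thrown);
    }
}
```

Running main prints "true": the appended case passes silently and the swapped case throws.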

  was:
This is in the package of org.apache.hadoop.hdfs, DFSClient.openInfo():
if (locatedBlocks != null) {
        Iterator<LocatedBlock> oldIter = locatedBlocks.getLocatedBlocks().iterator();
        Iterator<LocatedBlock> newIter = newInfo.getLocatedBlocks().iterator();
        while (oldIter.hasNext() && newIter.hasNext()) {
          if (! oldIter.next().getBlock().equals(newIter.next().getBlock())) {
            throw new IOException("Blocklist for " + src + " has changed!");
          }
        }
      }
Why do we need to compare the old LocatedBlocks and the new LocatedBlocks, and in what case does that happen?
Why not just "this.locatedBlocks = newInfo" directly?

    
> CLONE - Why open method in class DFSClient would compare old LocatedBlocks and new LocatedBlocks?
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3448
>                 URL: https://issues.apache.org/jira/browse/HDFS-3448
>             Project: Hadoop HDFS
>          Issue Type: Wish
>            Reporter: Li Junjun
>            Assignee: Todd Lipcon
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
>  I think there are two situations.
> 1. The file has been swapped with another file. We check the block ID; in that case it is correct to throw an exception.
> 2. The file has not been swapped but has been appended to. We should check only the block ID and not care about the block's generation stamp, because in fact we got the right, updated block list: a file in HDFS cannot be truncated.
> So how about we do it like this?
> if (oldIter.next().getBlock().getBlockId() != newIter.next().getBlock().getBlockId()) { throw new IOException("Blocklist for " + src + " has changed!"); }
> After all, between two calls to openInfo() the file can be swapped and then appended to, so we should not ignore the under-construction file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
