hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.
Date Mon, 09 Apr 2012 17:53:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13250002#comment-13250002 ]

Todd Lipcon commented on HDFS-3222:
-----------------------------------

bq. My point is, even though client flushed the data, DNs will not report to NN right. Did you check the test above?
Right, but the client reports to the NN. So, the client could report the number of bytes hflushed,
and the NN could fill in the last block with that information when it persists it.
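(Purely as an illustration of that idea -- this is not an existing protocol call, and every name below is made up:)

{code}
import java.io.IOException;

// Hypothetical sketch only: if the client reported how many bytes it had hflushed,
// the NN could persist that as the last block's length instead of leaving it unknown.
// No such protocol method exists today; the interface and method names are illustrative.
interface HflushLengthHint {
  /** Client -> NN hint: at least numBytes of the last block of src are durable. */
  void reportHflushedLength(String src, long lastBlockId, long numBytes)
      throws IOException;
}
{code}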

bq. You mean we will retry until we get the locations?
Yea -- treat it the same as we treat a corrupt file.
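Roughly something like the following on the client side -- just a sketch, assuming it lives in DFSInputStream (which has a dfsClient handle) and uses the trunk DFSClient#getLocatedBlocks / LocatedBlocks#getLastLocatedBlock calls; the retry count and interval are illustrative, not a proposed patch:

{code}
// Sketch: retry fetching the last block's locations instead of silently
// treating its length as 0. Retry budget and backoff are illustrative only.
private LocatedBlock waitForLastBlockLocations(String src) throws IOException {
  final int maxRetries = 3;
  final long retryIntervalMs = 4000L;
  for (int i = 0; i < maxRetries; i++) {
    LocatedBlocks blocks = dfsClient.getLocatedBlocks(src, 0, Long.MAX_VALUE);
    LocatedBlock last = blocks.getLastLocatedBlock();
    if (last == null || last.getLocations().length > 0) {
      return last; // either no partial block, or we have somewhere to read it from
    }
    try {
      Thread.sleep(retryIntervalMs);
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      throw new InterruptedIOException("Interrupted waiting for block locations");
    }
  }
  // Same spirit as the corrupt-file case: fail loudly rather than report a short length.
  throw new IOException("Could not obtain locations for the last block of " + src);
}
{code}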

{quote}
1) The client wants to read some partial data which exists in the first block itself.
2) open may try to get the complete length, and that will block if we retry until the DNs report to the NN.
3) But those DNs may really be down for a long time.

In this case, we cannot read even up to the specified length, even though it is less than the start offset of the partial block.
{quote}

That's true. Is it possible for us to change the client code to defer this code path until
either (a) the client wants to read from the partial block, or (b) the client explicitly asks
for the file length?
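For example -- just a sketch with hypothetical field and method names, not actual DFSInputStream code:

{code}
// Sketch only: defer resolving the under-construction last block's length until
// a caller actually needs it. All names here are hypothetical.
private long completeBlocksLength;      // total length of the complete blocks
private long lastBlockLength;           // filled in once locations are available
private boolean lastBlockResolved = false;

public long getFileLength() throws IOException {
  if (!lastBlockResolved) {
    resolveLastBlockLength();           // case (b): may retry/block until DNs report in
  }
  return completeBlocksLength + lastBlockLength;
}

private void checkBeforeRead(long targetPos) throws IOException {
  // case (a): only force resolution if the read actually reaches into the partial block
  if (targetPos >= completeBlocksLength && !lastBlockResolved) {
    resolveLastBlockLength();
  }
}
{code}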

Alternatively, maybe this is so rare that it doesn't matter, and it's OK to disallow reading
from an unrecovered file whose last block is missing all of its block locations after a restart.
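If we went that way, the check in the length-recovery path could be as simple as something like this -- sketch only, not the attached patch:

{code}
// Sketch: if the NN reported no locations at all for the under-construction last
// block, fail the open instead of letting a 0 return be taken as "block is empty".
if (locatedblock.getLocations().length == 0) {
  throw new IOException("No locations reported for last (partial) block "
      + locatedblock.getBlock() + "; cannot determine its visible length");
}
{code}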
                
> DFSInputStream#openInfo should not silently get the length as 0 when locations length
is zero for last partial block.
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3222
>                 URL: https://issues.apache.org/jira/browse/HDFS-3222
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 1.0.3, 2.0.0, 3.0.0
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
>         Attachments: HDFS-3222-Test.patch
>
>
> I have seen one situation with an HBase cluster.
> The scenario is as follows:
> 1) 1.5 blocks had been written and synced.
> 2) Suddenly the cluster was restarted.
> The reader opened the file and tried to get the length. By this time, the DNs containing the partial block had not yet reported to the NN, so the number of locations for this partial block was 0. In this case, DFSInputStream assumes one block size as the final size.
> The reader then also assumes that one block size is the final length, sets its end marker there, and ends up reading only partial data. Because of this, the HMaster could not replay the complete edits.
> This actually happened with the 0.20 version. Looking at the code, the same problem should be present in trunk as well.
> {code}
>     int replicaNotFoundCount = locatedblock.getLocations().length;
>     
>     for(DatanodeInfo datanode : locatedblock.getLocations()) {
> ..........
> ..........
>  // Namenode told us about these locations, but none know about the replica
>     // means that we hit the race between pipeline creation start and end.
>     // we require all 3 because some other exception could have happened
>     // on a DN that has it.  we want to report that error
>     if (replicaNotFoundCount == 0) {
>       return 0;
>     }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
