hadoop-hdfs-issues mailing list archives

From "John George (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-1907) BlockMissingException upon concurrent read and write: reader was doing file position read while writer is doing write without hflush
Date Fri, 03 Jun 2011 18:31:48 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13043949#comment-13043949
] 

John George commented on HDFS-1907:
-----------------------------------

If the last block is in progress, then readPastEnd will be true (I think), because locatedBlocks.getFileLength()
will return only the length of the "committed" blocks, and hence offset + length will be greater than
the committed length.

But in that case the computed length will be negative, and that is not good.
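
To make the arithmetic concrete, here is a hypothetical sketch (illustrative variable names, not the actual DFSInputStream code): if getFileLength() reflects only committed blocks, a reader positioned inside the in-progress last block trips the past-end check, and the remaining-bytes computation goes negative.

```java
public class LengthSketch {
    public static void main(String[] args) {
        // Assumed scenario: one committed 64MB block, last block still in progress.
        long committedLength = 64L * 1024 * 1024; // what getFileLength() would report
        long offset = committedLength + 1024;     // reader is inside the in-progress block
        int requested = 1024 * 10000;             // byteToReadThisRound from the report

        // offset + requested exceeds the committed length, so the read looks past-end
        boolean readPastEnd = offset + requested > committedLength;
        // bytes available relative to the committed length comes out negative
        long available = committedLength - offset;
        System.out.println("readPastEnd=" + readPastEnd + " available=" + available);
    }
}
```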

> BlockMissingException upon concurrent read and write: reader was doing file position read while writer is doing write without hflush
> ------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1907
>                 URL: https://issues.apache.org/jira/browse/HDFS-1907
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.23.0
>         Environment: Run on a real cluster. Using the latest 0.23 build.
>            Reporter: CW Chung
>            Assignee: John George
>             Fix For: 0.23.0
>
>         Attachments: HDFS-1907-2.patch, HDFS-1907-3.patch, HDFS-1907-4.patch, HDFS-1907-5.patch, HDFS-1907-5.patch, HDFS-1907.patch
>
>
> BlockMissingException is thrown under this test scenario:
> Two different processes do concurrent file r/w: one reads and the other writes the same file
>   - the writer keeps writing to the file
>   - the reader repeatedly does positional reads from the beginning of the file to the visible end of the file
> The reader is basically doing:
>   byteRead = in.read(currentPosition, buffer, 0, byteToReadThisRound);
> where currentPosition=0, buffer is a byte array, and byteToReadThisRound = 1024*10000;
> Usually it does not fail right away. I have to read, close the file, and re-open the same file a few times to reproduce the problem. I'll post a test program to repro this problem after I've cleaned up my current test program a bit.
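
The reader loop above can be sketched as follows. This is a local-filesystem analogue using java.nio (file name and sizes are illustrative; a real repro needs an HDFS cluster): in HDFS the equivalent call is the positional read FSDataInputStream.read(long position, byte[] buffer, int offset, int length). Locally a read past EOF simply returns -1, whereas in HDFS the reader's cached block locations may not cover the in-progress last block, which is what surfaces as BlockMissingException.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PositionalReadSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical temp file standing in for the HDFS file under concurrent write.
        Path file = Files.createTempFile("hdfs1907", ".dat");
        try {
            Files.write(file, new byte[4096]); // pretend the writer has produced 4096 bytes

            try (FileChannel in = FileChannel.open(file, StandardOpenOption.READ)) {
                ByteBuffer buffer = ByteBuffer.allocate(8192);
                long position = 0; // positional read: does not move a shared file pointer
                long total = 0;
                int n;
                // Read from the beginning of the file to the currently visible end,
                // as the reader in the report does on each round.
                while ((n = in.read(buffer, position + total)) > 0) {
                    total += n;
                    buffer.clear();
                }
                System.out.println("total bytes read: " + total);
            }
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```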

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
