hadoop-mapreduce-issues mailing list archives

From "Jens Rabe (JIRA)" <j...@apache.org>
Subject [jira] [Created] (MAPREDUCE-6216) Seeking backwards in MapFiles does not always correctly sync the underlying SequenceFile, resulting in "File is corrupt" exceptions
Date Thu, 15 Jan 2015 10:20:34 GMT
Jens Rabe created MAPREDUCE-6216:
------------------------------------

             Summary: Seeking backwards in MapFiles does not always correctly sync the underlying
SequenceFile, resulting in "File is corrupt" exceptions
                 Key: MAPREDUCE-6216
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6216
             Project: Hadoop Map/Reduce
          Issue Type: Bug
    Affects Versions: 2.4.1
            Reporter: Jens Rabe
            Priority: Critical


On some occasions, when reading MapFiles generated by MapFileOutputFormat with BZip2 BLOCK
compression, calling getClosest(key, value, true) on the MapFile.Reader throws an IOException
with the message "File is corrupt!". Running "hdfs fsck" reports that everything is OK, and
the underlying data and index files can also be read correctly with a SequenceFile.Reader.
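
The read path that triggers it looks roughly like the following (a minimal sketch, assuming
LongWritable keys and BytesWritable values; the MapFile directory path and the lookup key are
illustrative only):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.MapFile;

    public class GetClosestRepro {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hypothetical MapFile directory produced by MapFileOutputFormat
        Path dir = new Path("/data/part-r-00000");
        MapFile.Reader reader = new MapFile.Reader(dir, conf);
        try {
          LongWritable key = new LongWritable(12345L); // arbitrary lookup key
          BytesWritable value = new BytesWritable();
          // Seeks to the entry at or before the key; with BZip2 BLOCK-compressed
          // data this sometimes throws "java.io.IOException: File is corrupt!"
          reader.getClosest(key, value, true);
        } finally {
          reader.close();
        }
      }
    }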

The exception happens in the readBlock() method of the SequenceFile.Reader class.

My guess is that, since MapFile.Reader's seekInternal() method calls seek() instead of sync(),
the offsets in the index file must point to sync positions. When the exception occurs, the
position the cursor is moved to is not a valid one.
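
To illustrate the difference (a sketch against the public SequenceFile.Reader API rather than
the MapFile.Reader internals; the path and offset are hypothetical): seek() positions the
reader at an exact byte offset and trusts that a record or block starts there, while sync()
scans forward from the offset to the next sync marker, so reading resumes at a valid boundary.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;

    public class SeekVsSync {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hypothetical data file inside a MapFile directory
        Path data = new Path("/data/part-r-00000/data");
        SequenceFile.Reader reader =
            new SequenceFile.Reader(conf, SequenceFile.Reader.file(data));
        try {
          long offset = 123456L; // hypothetical offset taken from the MapFile index

          // seek(): the offset is trusted as-is; if it falls inside a compressed
          // block, the next read fails in readBlock() with "File is corrupt!"
          reader.seek(offset);

          // sync(): skips forward to the next sync marker at or after the offset,
          // so the reader always resumes at a valid block boundary.
          reader.sync(offset);
        } finally {
          reader.close();
        }
      }
    }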

So I think the culprit is the generation of the index file when the MapFiles are written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
