hadoop-mapreduce-issues mailing list archives

From "Jens Rabe (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MAPREDUCE-6216) Seeking backwards in MapFiles does not always correctly sync the underlying SequenceFile, resulting in "File is corrupt" exceptions
Date Thu, 15 Jan 2015 10:45:35 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jens Rabe updated MAPREDUCE-6216:
---------------------------------
    Description: 
On some occasions, when reading MapFiles that were generated by MapFileOutputFormat with
BZIP2 BLOCK compression, calling getClosest(key, value, true) on the MapFile reader causes an
IOException to be thrown with the message "File is corrupt!". Running "hdfs fsck" reports
that everything is OK, and the underlying data and index files can also be read correctly
with a SequenceFile.Reader.
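
A minimal sketch of the access pattern that triggers this. The path, class name and the IntWritable/Text key/value types are placeholders, not taken from the report; the MapFile directory is assumed to have been written with BZIP2 BLOCK compression:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class GetClosestRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder path to a MapFile directory (contains "data" and "index")
    // produced by MapFileOutputFormat with BZIP2 BLOCK compression.
    Path dir = new Path("/data/part-r-00000");
    try (MapFile.Reader reader = new MapFile.Reader(dir, conf)) {
      IntWritable key = new IntWritable();   // placeholder key type
      Text value = new Text();               // placeholder value type

      // A forward lookup works fine.
      key.set(500000);
      reader.getClosest(key, value, true);

      // Seeking backwards to an earlier key is what intermittently fails
      // with "File is corrupt!" from SequenceFile.Reader.readBlock().
      key.set(100);
      reader.getClosest(key, value, true);   // may throw IOException here
    }
  }
}
{code}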

The exception happens in the readBlock() method of the SequenceFile.Reader class.

My guess is that, because MapFile.Reader's seekInternal() method calls seek() instead of sync(),
there is no check that the cursor actually ends up at a valid position.
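
For comparison, SequenceFile.Reader itself exposes both operations: seek() trusts the caller to supply an exact record/block boundary, while sync() scans forward to the next sync marker. A hedged illustration of the difference; the file path, offset and key/value types below are placeholders:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeekVsSync {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path data = new Path("/data/part-r-00000/data");   // placeholder path
    try (SequenceFile.Reader r =
             new SequenceFile.Reader(conf, SequenceFile.Reader.file(data))) {
      long pos = 123456L;   // placeholder offset, e.g. taken from a MapFile index entry

      // sync() repositions to the next sync marker at or after pos,
      // so reading always resumes at a valid boundary.
      r.sync(pos);
      System.out.println("after sync: " + r.getPosition());

      // seek() jumps to pos blindly; if pos is not an exact record/block
      // start, the following read can fail with "File is corrupt!".
      r.seek(pos);
      IntWritable key = new IntWritable();  // placeholder key/value types
      Text val = new Text();
      r.next(key, val);
    }
  }
}
{code}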

  was:
On some occasions, when reading MapFiles that were generated by MapFileOutputFormat with
BZIP2 BLOCK compression, calling getClosest(key, value, true) on the MapFile reader causes an
IOException to be thrown with the message "File is corrupt!". Running "hdfs fsck" reports
that everything is OK, and the underlying data and index files can also be read correctly
with a SequenceFile.Reader.

The exception happens in the readBlock() method of the SequenceFile.Reader class.

My guess is that, because MapFile.Reader's seekInternal() method calls seek() instead of sync(),
the entries in the index file must point to "synced" positions. When the exception occurs,
the position the cursor is moved to is not valid.

So I think the culprit is the generation of the index files when MapFiles are output.
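
If the index generation is the culprit, it should be detectable by checking each index entry against the data file directly. A rough verification sketch, assuming placeholder IntWritable/Text key/value types and relying on the MapFile index being a SequenceFile of (key, LongWritable position) pairs:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class CheckIndexPositions {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path dir = new Path("/data/part-r-00000");            // placeholder path
    Path index = new Path(dir, MapFile.INDEX_FILE_NAME);  // "index"
    Path data = new Path(dir, MapFile.DATA_FILE_NAME);    // "data"

    try (SequenceFile.Reader idx =
             new SequenceFile.Reader(conf, SequenceFile.Reader.file(index));
         SequenceFile.Reader dat =
             new SequenceFile.Reader(conf, SequenceFile.Reader.file(data))) {
      IntWritable idxKey = new IntWritable();     // placeholder key type
      LongWritable position = new LongWritable(); // data-file offset stored in the index
      IntWritable dataKey = new IntWritable();
      Text dataVal = new Text();

      while (idx.next(idxKey, position)) {
        try {
          // seek() straight to the recorded offset, as seekInternal() does,
          // and try to read a record from there.
          dat.seek(position.get());
          dat.next(dataKey, dataVal);
        } catch (java.io.IOException e) {
          System.out.println("bad index entry: key=" + idxKey
              + " pos=" + position + " (" + e.getMessage() + ")");
        }
      }
    }
  }
}
{code}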


> Seeking backwards in MapFiles does not always correctly sync the underlying SequenceFile, resulting in "File is corrupt" exceptions
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6216
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6216
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.4.1
>            Reporter: Jens Rabe
>            Priority: Critical
>              Labels: mapfile, sequencefile
>
> On some occasions, when reading MapFiles that were generated by MapFileOutputFormat
with BZIP2 BLOCK compression, calling getClosest(key, value, true) on the MapFile reader causes
an IOException to be thrown with the message "File is corrupt!". Running "hdfs fsck" reports
that everything is OK, and the underlying data and index files can also be read correctly
with a SequenceFile.Reader.
> The exception happens in the readBlock() method of the SequenceFile.Reader class.
> My guess is that, because MapFile.Reader's seekInternal() method calls seek() instead
of sync(), there is no check that the cursor actually ends up at a valid position.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
