hadoop-common-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6663) BlockDecompressorStream get EOF exception when decompressing the file compressed from empty file
Date Sat, 03 Apr 2010 20:19:27 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853192#action_12853192 ]

Todd Lipcon commented on HADOOP-6663:
-------------------------------------

+1, I've seen this issue in production as well. The fix and test case look good, except please
add the Apache license header to the test case, and preferably update the test case to JUnit 4 style.
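
For context, a minimal JUnit 4 style round-trip test for the empty-file case could look roughly
like the sketch below. It assumes a block-based CompressionCodec implementation (for example the
hadoop-lzo LzoCodec) is available on the classpath; the test class name and the codec class name
are illustrative, and this is not the test attached to the issue.

    import static org.junit.Assert.assertEquals;

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.util.ReflectionUtils;
    import org.junit.Test;

    public class TestEmptyFileDecompression {

      @Test
      public void testEmptyInputRoundTrip() throws Exception {
        Configuration conf = new Configuration();
        // Illustrative codec class name; substitute whichever block-based codec you use.
        CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(
            conf.getClassByName("com.hadoop.compression.lzo.LzoCodec"), conf);

        // "Compress" an empty input: open the compressed stream and close it immediately.
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        OutputStream out = codec.createOutputStream(compressed);
        out.close();

        // Decompressing the result should yield normal EOF (-1),
        // not throw java.io.EOFException.
        InputStream in = codec.createInputStream(
            new ByteArrayInputStream(compressed.toByteArray()));
        assertEquals(-1, in.read());
        in.close();
      }
    }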

> BlockDecompressorStream get EOF exception when decompressing the file compressed from empty file
> ------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6663
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6663
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.20.2
>            Reporter: Xiao Kang
>         Attachments: BlockDecompressorStream.java.patch, BlockDecompressorStream.patch
>
>
> An empty file can be compressed using BlockCompressorStream, which is for block-based compression algorithms such as LZO. However, when decompressing the compressed file, BlockDecompressorStream gets an EOF exception.
> Here is a typical exception stack:
> java.io.EOFException
> at org.apache.hadoop.io.compress.BlockDecompressorStream.rawReadInt(BlockDecompressorStream.java:125)
> at org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:96)
> at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:82)
> at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74)
> at java.io.InputStream.read(InputStream.java:85)
> at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
> at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:134)
> at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:39)
> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:186)
> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:170)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
> at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:18)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
> at org.apache.hadoop.mapred.Child.main(Child.java:196)
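
The trace above shows the failure in rawReadInt(), which reads the 4-byte length header of the
next compressed block. As a hedged illustration of the general principle only (not the attached
patch), a block-format reader can treat end-of-stream at a block boundary as normal EOF while
still treating a truncated header as an error. The helper class and method names below are
hypothetical, not Hadoop's actual code.

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical helper, not Hadoop's BlockDecompressorStream itself.
    class BlockHeaderReader {
      private final InputStream in;

      BlockHeaderReader(InputStream in) {
        this.in = in;
      }

      // Returns the next block's declared length, or -1 if the stream ends
      // cleanly at a block boundary (e.g. the compressed form of an empty file).
      int readBlockLength() throws IOException {
        int first = in.read();
        if (first < 0) {
          return -1;  // clean EOF between blocks: end of data, not an error
        }
        // Once part of a header has been read, a short read really is corrupt
        // input, so let DataInputStream raise EOFException for a truncated header.
        DataInputStream rest = new DataInputStream(in);
        return (first << 24)
            | (rest.readUnsignedByte() << 16)
            | (rest.readUnsignedByte() << 8)
            | rest.readUnsignedByte();
      }
    }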

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

