hadoop-common-dev mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4640) Add ability to split text files compressed with lzo
Date Thu, 13 Nov 2008 00:33:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12647139#action_12647139 ]

Chris Douglas commented on HADOOP-4640:
---------------------------------------

Good idea.
* On LzopCodec: removing the unused bufferSize field is clearly useful. The condition guarded
by decompressedWholeBlock belongs in close() rather than verifyChecksum, though... right? It
would be better to finish reading the block and verify the checksum than to ignore it.
* LzopCodec was removed from the default list of codecs, per HADOOP-4030
* +1 for an OutputFormat
* The size of each block (including checksums) depends on the checksum algorithms enabled in
the file header; LzoTextInputFormat::index assumes exactly one four-byte checksum per block,
which may not hold:
{noformat}
+        is.seek(pos + compressedBlockSize + 4); // crc int?
{noformat}
* Each RecordReader doesn't need to slurp and sort the full index. If each FileSplit were
guaranteed to point to the beginning of a block, all the splits could be generated by the
client using the index.
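
To make the checksum point concrete, a sketch of deriving the per-block skip distance from the
header flags. The flag values come from the lzop on-disk format; the class and method names are
illustrative, not from the patch:

```java
// Sketch: derive the per-block checksum count from the lzop header flags
// instead of hard-coding a single 4-byte CRC after the compressed payload.
// Flag values follow the lzop file format; names here are hypothetical.
public class LzopChecksumCount {
  static final int F_ADLER32_D = 0x00000001; // adler32 over decompressed data
  static final int F_ADLER32_C = 0x00000002; // adler32 over compressed data
  static final int F_CRC32_D   = 0x00000100; // crc32 over decompressed data
  static final int F_CRC32_C   = 0x00000200; // crc32 over compressed data

  /** Number of 4-byte checksum words that can follow each block. */
  static int checksumsPerBlock(int headerFlags) {
    int mask = F_ADLER32_D | F_ADLER32_C | F_CRC32_D | F_CRC32_C;
    return Integer.bitCount(headerFlags & mask);
  }

  /** Bytes spanned by one block: 8-byte length header, payload, checksums. */
  static long blockSpan(int headerFlags, long compressedBlockSize) {
    return 8 + compressedBlockSize + 4L * checksumsPerBlock(headerFlags);
  }
}
```

One caveat the sketch ignores: lzop omits the compressed-data checksums when a block is stored
uncompressed, so a full indexer would also compare the compressed and uncompressed lengths
before counting the C-side checksums.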
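
And a sketch of the client-side split generation suggested above, assuming the index is simply a
sorted list of file offsets, one per block (an assumption about the patch's index layout, not a
confirmed format):

```java
// Sketch: build one (start, length) split per indexed lzo block on the
// client, so every split begins at a block boundary and no RecordReader
// has to read and sort the whole index itself.
import java.util.ArrayList;
import java.util.List;

public class LzoSplitSketch {
  /** Turn sorted block offsets into (start, length) pairs for FileSplits. */
  static List<long[]> splitsFromIndex(long[] blockOffsets, long fileLength) {
    List<long[]> splits = new ArrayList<long[]>();
    for (int i = 0; i < blockOffsets.length; i++) {
      long start = blockOffsets[i];
      long end = (i + 1 < blockOffsets.length) ? blockOffsets[i + 1]
                                               : fileLength;
      splits.add(new long[] { start, end - start });
    }
    return splits;
  }
}
```

A real getSplits() would of course coalesce adjacent blocks up to the target split size rather
than emit one split per block, but the boundary guarantee is the same.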

> Add ability to split text files compressed with lzo
> ---------------------------------------------------
>
>                 Key: HADOOP-4640
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4640
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: io, mapred
>            Reporter: Johan Oskarsson
>            Assignee: Johan Oskarsson
>            Priority: Trivial
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4640.patch
>
>
> Right now any file compressed with lzop will be processed by one mapper. This is a shame,
> since the lzo algorithm would be very suitable for large log files and similar common Hadoop
> data sets. The compression ratio is not the best out there, but the decompression speed is
> amazing. Since lzo writes compressed data in blocks, it would be possible to make an input
> format that can split the files.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

