hadoop-common-issues mailing list archives

From "Aaron Kimball (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6708) New file format for very large records
Date Fri, 16 Apr 2010 01:21:25 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12857635#action_12857635 ]

Aaron Kimball commented on HADOOP-6708:
---------------------------------------

I'm not sure what you mean by this optimization. Can you please explain further?

What's the relationship between "blocks" and "chunks" in a TFile? It sounds like a record
can span multiple chunks. Is a record fully contained within a single block? And if the codec
compresses an 8 GB record down to, say, 2 GB, does skipping past that record still require
seeking chunk-wise through the compressed data?

I do plan on using compression. Given the very large record lengths I'm designing for, I expect
it's acceptable to compress each record individually. The current writeup doesn't yet propose
an elegant way to handle compression, but I'm leaning toward writing out a table of compressed
record lengths at the end of the file.
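
To make that concrete, here's a rough sketch of the kind of writer I have in mind. The class
and method names are placeholders for illustration, not the API in the attached design. Each
record is deflate-compressed on its own, and the footer holds the compressed length of every
record plus a trailing pointer so a reader can seek straight to the table:

{code:java}
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.DeflaterOutputStream;

/**
 * Sketch only: compresses each record individually and appends a table of
 * compressed record lengths at the end of the file, followed by an 8-byte
 * pointer to the start of that table. Not the proposed API.
 */
public class LobFileWriterSketch {

  private final FileOutputStream fileOut;
  private final FileChannel channel;
  private final List<Long> compressedLengths = new ArrayList<Long>();

  public LobFileWriterSketch(String path) throws IOException {
    this.fileOut = new FileOutputStream(path);
    this.channel = fileOut.getChannel();
  }

  /** Deflate-compress one record and record how many compressed bytes it used. */
  public void writeRecord(InputStream record) throws IOException {
    long start = channel.position();
    DeflaterOutputStream deflater = new DeflaterOutputStream(fileOut);
    byte[] buf = new byte[64 * 1024];
    int n;
    while ((n = record.read(buf)) != -1) {
      deflater.write(buf, 0, n);
    }
    deflater.finish();  // flush this record's codec state without closing the file
    compressedLengths.add(channel.position() - start);
  }

  /** Write the length table and a trailing pointer to it, then close the file. */
  public void close() throws IOException {
    long tableStart = channel.position();
    DataOutputStream out = new DataOutputStream(fileOut);
    out.writeInt(compressedLengths.size());
    for (long len : compressedLengths) {
      out.writeLong(len);
    }
    out.writeLong(tableStart);  // reader seeks to (fileLength - 8) to find the table
    out.close();
  }
}
{code}

A reader would seek to (fileLength - 8) to find the table offset, read the length table, and
then sum compressed lengths to locate any record's bytes without decompressing the records
before it.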

> New file format for very large records
> --------------------------------------
>
>                 Key: HADOOP-6708
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6708
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: io
>            Reporter: Aaron Kimball
>            Assignee: Aaron Kimball
>         Attachments: lobfile.pdf
>
>
> A file format that handles multi-gigabyte records efficiently, with lazy disk access

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
