Message-ID: <11754477.861271380885955.JavaMail.jira@thor>
Date: Thu, 15 Apr 2010 21:21:25 -0400 (EDT)
From: "Aaron Kimball (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] Commented: (HADOOP-6708) New file format for very large records
In-Reply-To: <23756292.153431271370052193.JavaMail.jira@thor>

    [ https://issues.apache.org/jira/browse/HADOOP-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12857635#action_12857635 ]
Aaron Kimball commented on HADOOP-6708:
---------------------------------------

I'm not sure what you mean by this optimization. Can you please explain further? What's the relationship between "blocks" and "chunks" in a TFile? It sounds like a record can span multiple chunks. Is a record fully contained in a block? If an 8 GB record compresses down to, say, 2 GB, will reading it still require skipping chunk-wise through the compressed data?

I do plan on using compression. Given the very large record lengths I'm designing for, I expect that it's acceptable to compress each record individually. The current writeup doesn't propose how to handle compression elegantly, but I'm leaning toward writing out a table of compressed record lengths at the end of the file.

> New file format for very large records
> --------------------------------------
>
>                 Key: HADOOP-6708
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6708
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: io
>            Reporter: Aaron Kimball
>            Assignee: Aaron Kimball
>         Attachments: lobfile.pdf
>
>
> A file format that handles multi-gigabyte records efficiently, with lazy disk access
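The trailing length-table layout could look something like the sketch below: each record is deflate-compressed independently, and the file ends with a table of compressed lengths plus a fixed-width pointer to that table, so a reader can seek straight to record i without decompressing its predecessors. Everything here (the `LobSketch` name, the in-memory byte-array "file", the exact trailer layout) is a hypothetical illustration, not the actual design in lobfile.pdf:

```java
import java.io.*;
import java.util.zip.*;

public class LobSketch {
    // Write each record compressed independently; append a table of
    // compressed lengths, then an 8-byte pointer to the table's offset.
    static byte[] write(byte[][] records) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        long[] lengths = new long[records.length];
        for (int i = 0; i < records.length; i++) {
            ByteArrayOutputStream comp = new ByteArrayOutputStream();
            DeflaterOutputStream def = new DeflaterOutputStream(comp);
            def.write(records[i]);
            def.finish();
            byte[] c = comp.toByteArray();
            lengths[i] = c.length;
            out.write(c);
        }
        long tableOffset = out.size();          // table starts after last record
        out.writeInt(records.length);
        for (long len : lengths) out.writeLong(len);
        out.writeLong(tableOffset);             // trailer: locate the table
        return bos.toByteArray();
    }

    // Read record idx: find the table via the trailer, sum the lengths of
    // earlier records to get idx's offset, then inflate just that record.
    static byte[] read(byte[] file, int idx) throws IOException {
        long tableOffset = new DataInputStream(
            new ByteArrayInputStream(file, file.length - 8, 8)).readLong();
        DataInputStream tab = new DataInputStream(new ByteArrayInputStream(
            file, (int) tableOffset, file.length - (int) tableOffset));
        tab.readInt();                          // record count (unused here)
        long off = 0, len = 0;
        for (int i = 0; i <= idx; i++) {
            len = tab.readLong();
            if (i < idx) off += len;            // skip earlier compressed records
        }
        InflaterInputStream inf = new InflaterInputStream(
            new ByteArrayInputStream(file, (int) off, (int) len));
        ByteArrayOutputStream rec = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int r;
        while ((r = inf.read(buf)) > 0) rec.write(buf, 0, r);
        return rec.toByteArray();
    }
}
```

The point of the trailer is lazy access: a reader that wants only record i touches the table and one compressed span, never the 8 GB of data in between.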