hadoop-common-dev mailing list archives

From "ryan rawson (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3315) New binary file format
Date Sat, 31 Jan 2009 21:36:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12669256#action_12669256 ]

ryan rawson commented on HADOOP-3315:
-------------------------------------

Hi,

I'm evaluating the use of tfile in hbase, with an eye on performance. I ran a simple seek program
under a profiler and saw something really weird that I'd like to call your attention to:

  18.9% - 56,948 ms - 12,640,778 inv. org.apache.hadoop.io.file.tfile.Chunk$ChunkDecoder.close
  - 9.1% - 27,357 ms - 12,642,117 inv. org.apache.hadoop.io.file.tfile.Chunk$ChunkDecoder.skip
  -- 3.0% - 9,107 ms - 12,642,117 inv. org.apache.hadoop.io.file.tfile.Chunk$ChunkDecoder.checkEOF
  -- 1.1% - 3,252 ms - 12,642,117 inv. java.io.DataInputStream.skip
  -- 1.0% - 3,143 ms - 12,642,117 inv. java.lang.Math.min
  - 5.9% - 17,826 ms - 25,282,895 inv. org.apache.hadoop.io.file.tfile.Chunk$ChunkDecoder.checkEOF
  -- 2.0% - 5,980 ms - 25,282,895 inv. org.apache.hadoop.io.file.tfile.Chunk$ChunkDecoder.isClosed

It turns out that nearly 19% of my time is spent closing a block?!  The code for the method is:
    public void close() throws IOException {
      if (closed == false) {
        try {
          while (!checkEOF()) {
            skip(Integer.MAX_VALUE);
          }
        }
        finally {
          closed = true;
        }
      }
    }

This seems kind of weird: why do we read the rest of the block on close instead of just closing
it? A rough sketch of the alternative I have in mind is below.
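
To make the question concrete, here is a hypothetical sketch of the kind of close() I would have
expected. The names (LazyChunkDecoder, remaining(), etc.) are made up for illustration and are not
the actual TFile internals; the point is that close() just flags the decoder as closed, and the
enclosing block reader repositions the underlying stream in a single seek instead of draining the
remainder of the chunk through skip()/checkEOF():

    import java.io.IOException;

    // Hypothetical sketch only -- not the real Chunk.ChunkDecoder.
    class LazyChunkDecoder {
      private boolean closed = false;
      private long remainingInChunk;   // bytes of this chunk not yet consumed

      LazyChunkDecoder(long chunkLength) {
        this.remainingInChunk = chunkLength;
      }

      // How far the enclosing reader must advance to reach the next chunk.
      long remaining() {
        return remainingInChunk;
      }

      // Note: no draining of the underlying stream here.
      public void close() throws IOException {
        closed = true;
      }
    }

    // The enclosing reader could then do something like (pseudo-usage):
    //
    //   long leftover = decoder.remaining();
    //   decoder.close();
    //   in.seek(in.getPos() + leftover);   // one reposition instead of many skips

That way a seek-heavy workload that opens and abandons many chunks would not pay for reading the
tail of every chunk it touches.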

Thanks!


> New binary file format
> ----------------------
>
>                 Key: HADOOP-3315
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3315
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: io
>            Reporter: Owen O'Malley
>            Assignee: Amir Youssefi
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-3315_20080908_TFILE_PREVIEW_WITH_LZO_TESTS.patch, HADOOP-3315_20080915_TFILE.patch, hadoop-trunk-tfile.patch, hadoop-trunk-tfile.patch, TFile Specification 20081217.pdf
>
>
> SequenceFile's block compression format is too complex and requires 4 codecs to compress or decompress. It would be good to have a file format that only needs 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

