hbase-issues mailing list archives
From "Phabricator (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-5521) Move compression/decompression to an encoder specific encoding context
Date Mon, 05 Mar 2012 21:31:58 GMT

    [ https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222621#comment-13222621 ]

Phabricator commented on HBASE-5521:
------------------------------------

mbautin has requested changes to the revision "HBASE-5521 [jira] Move compression/decompression
to an encoder specific encoding context".

  Yongqiang: thanks for taking on this refactoring project. I have added some comments inline.

  What is "columnar encoded block format", by the way? Also, it is not totally clear to me
why we would need to unite compression and encoding steps in order to support "columnar encoded"
block format.

  This requires a test plan and new unit tests. Please add test cases for "encoding context"
implementations.


INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:58 Why do we need
two separate compressKeyValues functions? What is the difference in use cases? Overloaded
methods make the interface confusing.

  Add postEncoding javadoc.
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:179-183 Javadoc
please.
  src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java:53 Why do we need
to take encoding as an explicit parameter? Can we figure it out from dataBlockEncoder?
  src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java:323-327
Is this actually what you wanted to check? As written, this code accepts any subclass of
HFileBlockDefaultEncodingContext. If you want to accept only instances of exactly the
HFileBlockDefaultEncodingContext class, you need

    encodingCxt.getClass() == HFileBlockDefaultEncodingContext.class

  However, why do we need to enforce such a constraint? Would not an arbitrary implementation
of HFileBlockEncodingContext be acceptable here?
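
  To illustrate the distinction being raised here, a minimal, self-contained sketch (the class
names below are stand-ins, not the actual HBase types): an instanceof test also matches
subclasses, whereas a getClass() comparison matches only the exact class.

```java
// Hypothetical stand-in classes illustrating instanceof vs. getClass() checks.
public class ContextCheckDemo {
    static class DefaultContext { }                     // stands in for HFileBlockDefaultEncodingContext
    static class CustomContext extends DefaultContext { } // stands in for a user-supplied subclass

    // True for DefaultContext and any of its subclasses.
    static boolean acceptsSubclasses(Object ctx) {
        return ctx instanceof DefaultContext;
    }

    // True only for an object whose runtime class is exactly DefaultContext.
    static boolean exactClassOnly(Object ctx) {
        return ctx != null && ctx.getClass() == DefaultContext.class;
    }

    public static void main(String[] args) {
        Object sub = new CustomContext();
        System.out.println(acceptsSubclasses(sub)); // true
        System.out.println(exactClassOnly(sub));    // false
    }
}
```

  The instanceof form is the looser constraint; if any implementation of the encoding-context
interface would do, even that check may be unnecessary.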

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1396 Code style: add spaces
after //. Capitalize the first word in a sentence. In short: be consistent with the existing
style of the surrounding code.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1327-1328 Why did you remove
the comment about the peek-into-next-block-header optimization?

  here we already read -> here we have already read

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java:194-195 Why
do we need to instantiate the "encoding context" for every encoding operation?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1253 Why do we need to have
a separate "default encoding context" instance here?

  Can you make the default context a singleton (or a per-compression-type singleton) and use
the relevant unique instance instead of this field?
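
  One way to sketch the per-compression-type singleton suggested above (all names here are
illustrative, not the actual HBase API): cache one default context per compression type in an
EnumMap and hand out the same instance on every request.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of a per-compression-type singleton cache for the
// default encoding context; class and enum names are illustrative only.
public class DefaultContextCache {
    enum Compression { NONE, GZ, LZO }

    static class DefaultEncodingContext {
        final Compression compression;
        DefaultEncodingContext(Compression c) { this.compression = c; }
    }

    private static final Map<Compression, DefaultEncodingContext> CACHE =
        new EnumMap<>(Compression.class);

    // Return the unique context for a compression type, creating it at most once.
    static synchronized DefaultEncodingContext forCompression(Compression c) {
        return CACHE.computeIfAbsent(c, DefaultEncodingContext::new);
    }

    public static void main(String[] args) {
        // The same instance is reused rather than allocated per block write.
        System.out.println(forCompression(Compression.GZ)
                == forCompression(Compression.GZ)); // true
    }
}
```

  This avoids re-instantiating the context on every encoding operation, at the cost of requiring
the default context to be stateless or safely shareable across writers.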
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1357 Code style: space after
"if".

REVISION DETAIL
  https://reviews.facebook.net/D2097

BRANCH
  svn

                
> Move compression/decompression to an encoder specific encoding context
> ----------------------------------------------------------------------
>
>                 Key: HBASE-5521
>                 URL: https://issues.apache.org/jira/browse/HBASE-5521
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: He Yongqiang
>            Assignee: He Yongqiang
>         Attachments: HBASE-5521.1.patch, HBASE-5521.D2097.1.patch
>
>
> As part of working on HBASE-5313, we want to add a new columnar encoder/decoder. It makes
> sense to move compression to be part of the encoder/decoder:
> 1) a scanner for a columnar encoded block can do lazy decompression down to a specific part
> of a KeyValue object;
> 2) it avoids an extra byte-array copy from the encoder to the HBlock writer.
> If no encoder is specified for a writer, HBlock.Writer will use a default compression context
> to do something very similar to today's code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
