hbase-issues mailing list archives

From "Phabricator (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)
Date Wed, 18 Jan 2012 22:10:44 GMT

    [ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188741#comment-13188741

Phabricator commented on HBASE-4218:

Kannan has commented on the revision "[jira] [HBASE-4218] HFile data block encoding framework
and delta encoding implementation".

  another partial round of comments.

  src/main/java/org/apache/hadoop/hbase/io/deltaencoder/PrefixKeyDeltaEncoder.java:45 rename
offset to prevKeyOffset for clarity.
  src/main/java/org/apache/hadoop/hbase/io/deltaencoder/DeltaEncoderAlgorithms.java:115 has
-> have
  src/main/java/org/apache/hadoop/hbase/io/deltaencoder/DeltaEncoderAlgorithms.java:124 DeltaEncoder
-> DeltaEncoders
  src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java:93 Another related question/clarification
needed on the spec of this feature...

  If delta encoding is on in the cache, is the blocksize setting for the CF based on the encoded
size or the un-encoded size?

  [Personally, I think the encoded size should be used for the blocksize. But can you clarify
either way.]
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:196 getDeltaEncodingId()
and getDeltaEncodedId() seem to be identical, but for their names.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockDeltaEncoder.java:78 operate ->
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockDeltaEncoder.java:112 I wasn't
clear what "useEncodedScanner" is meant for. Currently, it seems to be used by HFileReaderV1.
Could you clarify the purpose of this...
  src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java:98 remove " in cache"
  src/main/java/org/apache/hadoop/hbase/KeyValue.java:1901 sounds like b.length should be
l on this line also.

  Is this a pre-existing bug?
  src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java:84 Why 2 * ClassSize.REFERENCE?
This change adds one reference to the encoding enum, correct?
  src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java:752 We can use a MutableInteger
here to avoid creating lots of Integer objects. But, since this is just for a test, not a
big deal.
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:30 KeyValue ->
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:32 iterated always
-> always iterated
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:79 Must the implementation
make a deep copy? Or is it legal for the implementation to have the returned ByteBuffer
point to a byte array in the input "block"?
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:129 confusing comment.
"same key" as what?

  Should comment be something like:

  "Seek to specified key in the block."

  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:137 blockSeekTo
--> seekToKeyInBlock, perhaps?
  src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java:87 this
logic appears to assume that the target ("this") we are copying into was positioned at the
previous key. No?


> Data Block Encoding of KeyValues  (aka delta encoding / prefix compression)
> ---------------------------------------------------------------------------
>                 Key: HBASE-4218
>                 URL: https://issues.apache.org/jira/browse/HBASE-4218
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.94.0
>            Reporter: Jacek Migdal
>            Assignee: Mikhail Bautin
>              Labels: compression
>             Fix For: 0.94.0
>         Attachments: 0001-Delta-encoding-fixed-encoded-scanners.patch, 0001-Delta-encoding.patch,
4218-2012-01-14.txt, 4218-v16.txt, 4218.txt, D447.1.patch, D447.10.patch, D447.11.patch, D447.12.patch,
D447.13.patch, D447.14.patch, D447.15.patch, D447.16.patch, D447.17.patch, D447.18.patch,
D447.19.patch, D447.2.patch, D447.20.patch, D447.21.patch, D447.22.patch, D447.23.patch, D447.24.patch,
D447.3.patch, D447.4.patch, D447.5.patch, D447.6.patch, D447.7.patch, D447.8.patch, D447.9.patch,
Data-block-encoding-2011-12-23.patch, Delta-encoding-2012-01-17_11_09_09.patch, Delta-encoding.patch-2011-12-22_11_52_07.patch,
Delta-encoding.patch-2012-01-05_15_16_43.patch, Delta-encoding.patch-2012-01-05_16_31_44.patch,
Delta-encoding.patch-2012-01-05_16_31_44_copy.patch, Delta-encoding.patch-2012-01-05_18_50_47.patch,
Delta-encoding.patch-2012-01-07_14_12_48.patch, Delta-encoding.patch-2012-01-13_12_20_07.patch,
Delta_encoding_with_memstore_TS.patch, open-source.diff
> A compression for keys. Keys are sorted in HFile and they are usually very similar. Because
of that, it is possible to design better compression than general-purpose algorithms.
> It is an additional step designed to be used in memory. It aims to save memory in the cache
as well as to speed up seeks within HFileBlocks. It should improve performance a lot if key
lengths are larger than value lengths. For example, it makes a lot of sense to use it when
the value is a counter.
> Initial tests on real data (key length = ~90 bytes, value length = 8 bytes) show that
I could achieve a decent level of compression:
>  key compression ratio: 92%
>  total compression ratio: 85%
>  LZO on the same data: 85%
>  LZO after delta encoding: 91%
> All this while having much better performance (20-80% faster decompression than LZO). Moreover,
it should allow far more efficient seeking, which should improve performance a bit.
> It seems that simple compression algorithms are good enough. Most of the savings are
due to prefix compression, int128 encoding, timestamp diffs and bitfields to avoid duplication.
That way, comparisons of compressed data can be much faster than a byte comparator (thanks
to prefix compression and bitfields).
> In order to implement it in HBase two important changes in design will be needed:
> -solidify interface to HFileBlock / HFileReader Scanner to provide seeking and iterating;
access to uncompressed buffer in HFileBlock will have bad performance
> -extend comparators to support comparison assuming that the first N bytes are equal (or some
fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
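The prefix-compression idea in the description above (store, for each key, only the length of the prefix it shares with the previous key plus the new suffix) can be sketched as follows. This is a minimal illustration of the technique, not the DeltaEncoder/DataBlockEncoder API from the patch; the class name `PrefixDeltaSketch` and the sample keys are invented:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class PrefixDeltaSketch {
    /** Encodes sorted keys as (commonPrefixLen, suffixLen, suffix bytes) records. */
    static byte[] encode(List<byte[]> sortedKeys) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        byte[] prev = new byte[0];
        for (byte[] key : sortedKeys) {
            int common = commonPrefix(prev, key);
            out.writeShort(common);                  // bytes shared with the previous key
            out.writeShort(key.length - common);     // length of the remaining suffix
            out.write(key, common, key.length - common);
            prev = key;
        }
        return bos.toByteArray();
    }

    /** Length of the common byte prefix of a and b. */
    static int commonPrefix(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length), i = 0;
        while (i < n && a[i] == b[i]) i++;
        return i;
    }

    public static void main(String[] args) throws IOException {
        // Invented sample keys; HFile keys of adjacent cells typically share a long prefix.
        List<byte[]> keys = List.of(
            "row0001/cf:counter".getBytes(StandardCharsets.UTF_8),
            "row0002/cf:counter".getBytes(StandardCharsets.UTF_8),
            "row0002/cf:status".getBytes(StandardCharsets.UTF_8));
        int raw = keys.stream().mapToInt(k -> k.length).sum();
        byte[] enc = encode(keys);
        System.out.println("raw=" + raw + " encoded=" + enc.length); // prints raw=53 encoded=48
    }
}
```

A decoder walks the records in order, rebuilding each key from the previous one, which is why an encoded block can only be iterated forward from its start; a seek skips records by comparing only the suffix bytes once the shared prefix is known to match, which is the comparator extension the description asks for.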

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

