hbase-issues mailing list archives

From "Jacek Migdal (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-4218) Delta Encoding of KeyValues (aka prefix compression)
Date Mon, 22 Aug 2011 17:13:29 GMT

    [ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13088827#comment-13088827 ]

Jacek Migdal commented on HBASE-4218:
-------------------------------------

Regarding variable-byte encoding: there is another option besides VInt and FInt: use the same
integer width within each block, but let that width differ across blocks (see the sketch after
this list).
* exploits the similarity of data within a given block
* usually takes the same space as VInt
* needs only a few branches when decoding
* downside: the KeyValue format is not uniform across all of the data
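
A minimal sketch of the per-block width idea, assuming a made-up helper class (this is not the
HBase API or the actual patch): pick the narrowest byte width that fits every length in the
block, write it once in the block header, and then emit each length with exactly that many
bytes.

    import java.io.ByteArrayOutputStream;

    /** Sketch: per-block fixed-width encoding of lengths (hypothetical, not HBase code). */
    public class BlockFixedWidthInts {

        /** Smallest number of bytes that can hold every value in the block. */
        static int widthFor(int[] values) {
            int max = 0;
            for (int v : values) {
                max = Math.max(max, v);
            }
            int width = 1;
            while (max > 0xFF) {      // grow the width until the largest value fits
                max >>>= 8;
                width++;
            }
            return width;
        }

        /** One header byte with the width, then each value big-endian in that width. */
        static byte[] encode(int[] values) {
            int width = widthFor(values);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            out.write(width);                     // block-local width, written once
            for (int v : values) {
                for (int shift = (width - 1) * 8; shift >= 0; shift -= 8) {
                    out.write((v >>> shift) & 0xFF);
                }
            }
            return out.toByteArray();
        }

        public static void main(String[] args) {
            int[] keyLengths = {90, 88, 91, 87};  // similar lengths within one block
            byte[] encoded = encode(keyLengths);
            // 1 header byte + 4 values * 1 byte each = 5 bytes; VInt would also use one
            // byte per value here, but decoding is a fixed-size read per value with no
            // per-value branching.
            System.out.println(encoded.length);
        }
    }

Decoding is then a plain offset computation (header + i * width), which is where the "few
branches" point above comes from.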

Having said that, many KeyValues have only a few distinct sizes, which allows even more
efficient encoding. On the other hand, when values get longer their lengths vary a lot; but in
that case keys are a tiny percentage of the whole file, so any savings from variable-byte
encoding would be insignificant. Your mileage may vary.
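
For illustration only, one way to exploit the small number of distinct sizes (a hypothetical
sketch, not what the patch does): keep the handful of lengths seen in a block in a tiny
per-block table and store only a small index per KeyValue.

    import java.util.ArrayList;
    import java.util.List;

    /** Sketch: per-block dictionary of the few distinct lengths (hypothetical). */
    class LengthDictionary {
        private final List<Integer> distinct = new ArrayList<>();

        /** Returns a small code for the length, adding it to the table on first sight. */
        int codeFor(int length) {
            int idx = distinct.indexOf(length);
            if (idx < 0) {
                distinct.add(length);
                idx = distinct.size() - 1;
            }
            return idx;   // with, say, at most 4 distinct lengths this fits in 2 bits
        }

        /** Reverse lookup while decoding the block. */
        int lengthFor(int code) {
            return distinct.get(code);
        }
    }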

> Delta Encoding of KeyValues  (aka prefix compression)
> -----------------------------------------------------
>
>                 Key: HBASE-4218
>                 URL: https://issues.apache.org/jira/browse/HBASE-4218
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>            Reporter: Jacek Migdal
>              Labels: compression
>
> A compression for keys. Keys are sorted in HFile and they are usually very similar. Because
> of that, it is possible to design better compression than general-purpose algorithms.
> It is an additional step designed to be used in memory. It aims to save memory in the cache
> as well as to speed up seeks within HFileBlocks. It should improve performance a lot if key
> lengths are larger than value lengths. For example, it makes a lot of sense to use it when
> the value is a counter.
> Initial tests on real data (key length = ~90 bytes, value length = 8 bytes) show that
> I could achieve a decent level of compression:
>  key compression ratio: 92%
>  total compression ratio: 85%
>  LZO on the same data: 85%
>  LZO after delta encoding: 91%
> All this while having much better performance (decompression 20-80% faster than LZO). Moreover,
> it should allow far more efficient seeking, which should improve performance a bit.
> It seems that simple compression algorithms are good enough. Most of the savings are
> due to prefix compression, int128 encoding, timestamp diffs, and bitfields to avoid duplication.
> That way, comparisons of compressed data can be much faster than a byte comparator (thanks
> to prefix compression and bitfields).
> In order to implement it in HBase, two important design changes will be needed:
> - solidify the interface to HFileBlock / HFileReader Scanner to provide seeking and iterating;
>   access to the uncompressed buffer in HFileBlock will have bad performance
> - extend comparators to support comparison assuming that the first N bytes are equal (or that
>   some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
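
As an aside on the prefix-compression idea described above, a minimal sketch with made-up
names (not the actual HBASE-4218 patch): since keys within an HFile block are sorted, each key
can be stored as the length of the prefix it shares with the previous key plus only the
differing suffix, and a comparator that already knows those shared bytes match can skip
re-comparing them.

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    /** Sketch: prefix (delta) encoding of sorted keys (hypothetical, not the HBase patch). */
    public class PrefixEncodeSketch {

        /** Number of leading bytes the two keys share. */
        static int commonPrefix(byte[] prev, byte[] cur) {
            int n = Math.min(prev.length, cur.length);
            int i = 0;
            while (i < n && prev[i] == cur[i]) {
                i++;
            }
            return i;
        }

        public static void main(String[] args) {
            byte[][] keys = {
                "row0001/cf:col1".getBytes(StandardCharsets.UTF_8),
                "row0001/cf:col2".getBytes(StandardCharsets.UTF_8),
                "row0002/cf:col1".getBytes(StandardCharsets.UTF_8),
            };
            byte[] prev = new byte[0];
            for (byte[] key : keys) {
                int shared = commonPrefix(prev, key);
                byte[] suffix = Arrays.copyOfRange(key, shared, key.length);
                // Stored form: (shared prefix length, suffix bytes) instead of the full key.
                System.out.println(shared + " shared + " + suffix.length + " suffix bytes");
                prev = key;
            }
        }
    }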

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
