hbase-issues mailing list archives

From "Matt Corgan (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)
Date Wed, 18 Jan 2012 06:10:40 GMT

    [ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188294#comment-13188294 ]

Matt Corgan commented on HBASE-4218:
------------------------------------

I used plenty of memory and a warmup run so that all measured reads were served
out of the OS page cache and the HBase block cache.  I was trying to measure compression ratio
and CPU performance assuming the data set is very hot and cached nearly 100%.  If you're
IO-bound, then that 50% CPU difference shouldn't matter much, like you said.  It strikes me
that bringing IO into the test is really just testing the effective size of the block cache,
which you can already do by adjusting the block cache size in hbase-site.  The CPU efficiency
difference would get drowned out.
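
The knob I mean is hfile.block.cache.size (the fraction of the region server heap given to the
block cache).  Just to make it concrete, here is a minimal sketch of setting it programmatically
instead of in hbase-site.xml; the 0.4 below is only an illustrative value, not a recommendation:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BlockCacheSizing {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Same setting as hfile.block.cache.size in hbase-site.xml:
        // the fraction of the region server heap reserved for the block cache.
        conf.setFloat("hfile.block.cache.size", 0.4f);
      }
    }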

A scenario I have where data is 100% cached is chronological log ("event") data (sharded 16
ways) where the last ~2 days fit in memory.  We add different secondary index tables to the
primary table depending on the reports we want to generate.  When scanning those secondary
indexes we pull millions of rows from the primary table in random order.  The better the compression,
the more days of log events we can hold in memory, and the better the CPU efficiency, the
faster we can do the random reads.
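
Roughly, the access pattern looks like this (a sketch only; the table and column names are made
up for illustration):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class IndexedRandomRead {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable index = new HTable(conf, "events_by_report_key");   // hypothetical secondary index
        HTable events = new HTable(conf, "events");                // hypothetical primary table

        // Sequential scan over one report's slice of the secondary index.
        Scan scan = new Scan(Bytes.toBytes("report123"));
        List<Get> gets = new ArrayList<Get>();
        ResultScanner scanner = index.getScanner(scan);
        for (Result r : scanner) {
          // Each index cell is assumed to hold the primary-table row key.
          gets.add(new Get(r.getValue(Bytes.toBytes("i"), Bytes.toBytes("pk"))));
        }
        scanner.close();

        // Millions of random-order point reads against the primary table,
        // ideally all served out of the block cache.
        Result[] rows = events.get(gets);
        System.out.println("fetched " + rows.length + " rows");

        index.close();
        events.close();
      }
    }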
                
> Data Block Encoding of KeyValues  (aka delta encoding / prefix compression)
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-4218
>                 URL: https://issues.apache.org/jira/browse/HBASE-4218
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.94.0
>            Reporter: Jacek Migdal
>            Assignee: Mikhail Bautin
>              Labels: compression
>             Fix For: 0.94.0
>
>         Attachments: 0001-Delta-encoding-fixed-encoded-scanners.patch, 0001-Delta-encoding.patch,
4218-2012-01-14.txt, 4218-v16.txt, 4218.txt, D447.1.patch, D447.10.patch, D447.11.patch, D447.12.patch,
D447.13.patch, D447.14.patch, D447.15.patch, D447.16.patch, D447.17.patch, D447.18.patch,
D447.19.patch, D447.2.patch, D447.20.patch, D447.21.patch, D447.22.patch, D447.23.patch, D447.24.patch,
D447.3.patch, D447.4.patch, D447.5.patch, D447.6.patch, D447.7.patch, D447.8.patch, D447.9.patch,
Data-block-encoding-2011-12-23.patch, Delta-encoding-2012-01-17_11_09_09.patch, Delta-encoding.patch-2011-12-22_11_52_07.patch,
Delta-encoding.patch-2012-01-05_15_16_43.patch, Delta-encoding.patch-2012-01-05_16_31_44.patch,
Delta-encoding.patch-2012-01-05_16_31_44_copy.patch, Delta-encoding.patch-2012-01-05_18_50_47.patch,
Delta-encoding.patch-2012-01-07_14_12_48.patch, Delta-encoding.patch-2012-01-13_12_20_07.patch,
Delta_encoding_with_memstore_TS.patch, open-source.diff
>
>
> A compression scheme for keys. Keys are sorted in an HFile and are usually very similar. Because
> of that, it is possible to design better compression than general-purpose algorithms.
> It is an additional step designed to be used in memory. It aims to save memory in the cache
> as well as speed up seeks within HFileBlocks. It should improve performance a lot if key
> lengths are larger than value lengths. For example, it makes a lot of sense to use it when
> the value is a counter.
> Initial tests on real data (key length = ~90 bytes, value length = 8 bytes) show that
> I could achieve a decent level of compression:
>  key compression ratio: 92%
>  total compression ratio: 85%
>  LZO on the same data: 85%
>  LZO after delta encoding: 91%
> At the same time, the performance is much better (20-80% faster decompression than LZO). Moreover,
> it should allow far more efficient seeking, which should improve performance a bit.
> It seems that simple compression algorithms are good enough. Most of the savings are
> due to prefix compression, int128 encoding, timestamp diffs, and bitfields to avoid duplication.
> That way, comparisons of compressed data can be much faster than with a byte comparator (thanks
> to prefix compression and bitfields).
> In order to implement it in HBase, two important design changes will be needed:
> - solidify the interface to HFileBlock / HFileReader Scanner to provide seeking and iterating;
> access to the uncompressed buffer in HFileBlock will have bad performance
> - extend comparators to support comparison assuming that the first N bytes are equal (or that
> some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
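
To make the prefix-compression idea above concrete, here is a minimal sketch (not the format used
by the patches on this issue) of encoding a block of sorted keys by the prefix each key shares
with the previous one; the real encoders also delta-encode timestamps and use bitfields, and
would use varints rather than the fixed-width lengths below:

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.List;

    public class PrefixEncoderSketch {

      // Encode each key as: common-prefix length with the previous key,
      // suffix length, suffix bytes.  Sorted input keeps the shared prefixes long.
      public static byte[] encode(List<byte[]> sortedKeys) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        byte[] previous = new byte[0];
        for (byte[] key : sortedKeys) {
          int common = commonPrefixLength(previous, key);
          out.writeShort(common);               // bytes shared with the previous key
          out.writeShort(key.length - common);  // length of the remaining suffix
          out.write(key, common, key.length - common);
          previous = key;
        }
        out.flush();
        return bytes.toByteArray();
      }

      static int commonPrefixLength(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        int i = 0;
        while (i < n && a[i] == b[i]) {
          i++;
        }
        return i;
      }
    }

A seek within such a block can exploit the stored prefix lengths: if the previous entry matched the
search key on its first N bytes, the comparison against the next entry only needs to start at byte
min(N, common), which is the "first N bytes are equal" comparator change mentioned above.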

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
