hbase-issues mailing list archives

From "Nick Dimiduk (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-11331) [blockcache] lazy block decompression
Date Thu, 21 Aug 2014 17:17:11 GMT

     [ https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Nick Dimiduk updated HBASE-11331:
---------------------------------

    Attachment: v03-20g-045g-true.pdf
                v03-20g-045g-false.pdf

Sharing some more test results. These are from running the PerfEval randomRead test on a single-machine
cluster deployment. The RegionServer gets a 20g heap out of the 24g available on the box, and the
table is snappy compressed. The idea is to demonstrate a best-case scenario for this patch: there's
enough RAM to hold the whole data set compressed, but not decompressed. The {{true}} run below
shows some IO wait, so my math on the compression ratio may be slightly off.

Values in this table come from the "average" values reported in the attached screenshots.
I've chosen some of the more critical metrics, but it's all there for reference. Let me know
if there's a metric I'm missing; I can add it to the report (if OpenTSDB collects it, that
is).
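
For reference, the setting toggled between the two runs is {{hbase.block.data.cachecompressed}}, which goes in hbase-site.xml on the RegionServers. A minimal fragment (defaults assumed everywhere else):

```xml
<!-- hbase-site.xml: keep data blocks compressed in the block cache,
     decompressing lazily on read (the "true" column below). -->
<property>
  <name>hbase.block.data.cachecompressed</name>
  <value>true</value>
</property>
```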

|| ||hbase.block.data.cachecompressed=false||hbase.block.data.cachecompressed=true||delta||
|hbase.regionserver.server.Get_num_ops|423|4.93k|{color:green}1065%{color}|
|hbase.regionserver.server.Get_mean|19.14 ms|1.00 ms|{color:green}-94%{color}|
|hbase.regionserver.server.Get_99th_percentile|182.58 ms|33.17 ms|{color:green}-81%{color}|
|hbase.regionserver.jvmmetrics.GcTimeMillis|27.73 ms|401.16 ms|{color:red}1346%{color}|
|proc.loadavg.1min|11.55|7.82|{color:green}-32%{color}|
|proc.stat.cpu.percpu{type=iowait}|358.43|211.83|{color:green}-40%{color}|
|hbase.regionserver.server.blockCacheCount|181.66k|722.55k|{color:green}297%{color}|

> [blockcache] lazy block decompression
> -------------------------------------
>
>                 Key: HBASE-11331
>                 URL: https://issues.apache.org/jira/browse/HBASE-11331
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: Nick Dimiduk
>            Assignee: Nick Dimiduk
>         Attachments: HBASE-11331.00.patch, HBASE-11331.01.patch, HBASE-11331.02.patch,
HBASE-11331.03.patch, HBASE-11331LazyBlockDecompressperfcompare.pdf, lazy-decompress.02.0.pdf,
lazy-decompress.02.1.json, lazy-decompress.02.1.pdf, v03-20g-045g-false.pdf, v03-20g-045g-true.pdf
>
>
> Maintaining data in its compressed form in the block cache will greatly increase our
effective blockcache size and should show a meaningful improvement in cache hit rates in
well-designed applications. The idea here is to lazily decompress/decrypt blocks when they're
consumed, rather than as soon as they're pulled off of disk.
> This is related to, but less invasive than, HBASE-8894.
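
To illustrate the idea in the quoted description, here's a toy sketch (not HBase code; class and method names are made up, and java.util.zip's DEFLATE stands in for snappy): the cache holds blocks in their on-disk compressed form and only inflates a block when a reader consumes it, trading decompression CPU for cache capacity.

```java
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical sketch of lazy block decompression: blocks are cached
// compressed, and inflated only at read time.
public class LazyBlockCache {
    private final Map<String, byte[]> cache = new HashMap<>();

    // Cache the block exactly as it came off disk, still compressed.
    public void put(String key, byte[] compressedBlock) {
        cache.put(key, compressedBlock);
    }

    // Decompress only when the block is actually consumed.
    public byte[] get(String key) throws Exception {
        byte[] compressed = cache.get(key);
        if (compressed == null) {
            return null;
        }
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            if (n == 0 && inflater.needsInput()) {
                break;  // truncated input; stop rather than spin
            }
            out.write(buf, 0, n);
        }
        inflater.end();
        return out.toByteArray();
    }

    // Helper standing in for the on-disk compression codec.
    public static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }
}
```

The GcTimeMillis jump in the table follows from this design: every read now allocates a fresh decompressed copy instead of reusing a cached uncompressed block.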



--
This message was sent by Atlassian JIRA
(v6.2#6252)
