hbase-dev mailing list archives

From "Victor Xu (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HBASE-14061) Support CF-level Storage Policy
Date Sun, 12 Jul 2015 06:39:04 GMT
Victor Xu created HBASE-14061:

             Summary: Support CF-level Storage Policy
                 Key: HBASE-14061
                 URL: https://issues.apache.org/jira/browse/HBASE-14061
             Project: HBase
          Issue Type: Improvement
          Components: HFile, regionserver
         Environment: hadoop-2.6.0
            Reporter: Victor Xu

After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934],
I wrote a patch to implement CF-level storage policy.
My main goal is to improve random-read performance for some really hot data, which is usually
located in a certain column family of a big table.

$ hbase shell
> alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}
> alter 'TABLE_NAME', {NAME => 'CF_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}

HDFS's setStoragePolicy only takes effect on hfiles newly created inside a configured directory,
so I had to create sub-directories (one per CF) under each region's .tmp directory and set the
storage policy on those directories.
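The per-CF temp-directory layout described above can be sketched as below. This is only an illustration of the directory structure (the path names and the helper are hypothetical, not HBase's actual layout code); in the patch the policy would then be set on each such directory so that new hfiles inherit it.

```java
import java.nio.file.*;

public class CfTmpDirs {
    // Hypothetical helper: one .tmp sub-directory per column family,
    // e.g. <region>/.tmp/<cf>. New hfiles written under this directory
    // pick up whatever storage policy was set on it.
    static Path cfTmpDir(Path regionDir, String family) {
        return regionDir.resolve(".tmp").resolve(family);
    }

    public static void main(String[] args) throws Exception {
        Path region = Files.createTempDirectory("region");
        Path cfDir = cfTmpDir(region, "hot_cf");
        Files.createDirectories(cfDir);
        // Print the layout relative to the region directory.
        System.out.println(region.relativize(cfDir));
    }
}
```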

Besides, I had to upgrade the hadoop version to 2.6.0 because dfs.getStoragePolicy cannot easily be
invoked via reflection, and I needed this API to finish my unit test.
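The asymmetry above is that a setter taking only standard types is easy to call reflectively on an unknown filesystem class, whereas a getter whose return type does not exist on older classpaths is not. A minimal, self-contained sketch of the reflective-setter technique (the StubFileSystem class and path are stand-ins, not Hadoop's actual classes):

```java
import java.lang.reflect.Method;

// Stand-in for a filesystem class resolved only at runtime. Because
// setStoragePolicy is void and takes only String arguments, it can be
// invoked reflectively without compiling against any Hadoop types.
class StubFileSystem {
    String lastPolicy;
    public void setStoragePolicy(String path, String policy) {
        lastPolicy = policy;
    }
}

public class ReflectionSketch {
    public static void main(String[] args) throws Exception {
        StubFileSystem fs = new StubFileSystem();
        // Look up the method by name and signature, then invoke it.
        Method m = fs.getClass().getMethod("setStoragePolicy",
                String.class, String.class);
        m.invoke(fs, "/hbase/data/ns/table/region/.tmp/cf", "ALL_SSD");
        System.out.println(fs.lastPolicy);
    }
}
```

A getStoragePolicy counterpart would return a policy object of a class unknown at compile time, which is why reflection is much more awkward there and a real 2.6.0 dependency was needed for the unit test.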
