hbase-dev mailing list archives

From "Sean Busbey (JIRA)" <j...@apache.org>
Subject [jira] [Reopened] (HBASE-14061) Support CF-level Storage Policy
Date Mon, 09 Jan 2017 23:02:58 GMT

     [ https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey reopened HBASE-14061:

This causes the build against the Hadoop 3 profile to fail, and it will also break the build
against Hadoop 2.8 once that release happens, due to conflicting method signatures between
HFileSystem here and FileSystem / FilterFileSystem in Hadoop as of HADOOP-12161.

The problem methods are part of Hadoop's Public/Stable interface, so they're unlikely to
be removed.
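For context, the failure mode is the usual Java one: when a superclass later gains a method whose signature is incompatible with one a subclass already declares, the subclass no longer compiles. A minimal self-contained sketch of the mechanism (the class names below are stand-ins, not the real Hadoop/HBase classes):

```java
// Stand-ins for FileSystem / HFileSystem; NOT the real Hadoop classes.
class BaseFs {
    // Imagine this method is added upstream, much as HADOOP-12161 added
    // setStoragePolicy(Path, String) to FileSystem.
    public void setStoragePolicy(String path, String policy) {
        System.out.println("applied " + policy + " to " + path);
    }
}

class WrappingFs extends BaseFs {
    // Had this pre-existing method used, say, a boolean return type, javac
    // would now reject the class ("return type boolean is not compatible
    // with void"). Matching the upstream signature exactly fixes the build.
    @Override
    public void setStoragePolicy(String path, String policy) {
        super.setStoragePolicy(path, policy);
    }
}

public class SignatureClashDemo {
    public static void main(String[] args) {
        new WrappingFs().setStoragePolicy("/hbase/data/t1/cf", "ALL_SSD");
    }
}
```

This is why an addendum that aligns the HFileSystem method signatures with the new FileSystem ones is the straightforward fix.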

Please post an addendum ASAP.

I have filed HBASE-17441 for the failure of hadoopcheck to catch this in precommit.

> Support CF-level Storage Policy
> -------------------------------
>                 Key: HBASE-14061
>                 URL: https://issues.apache.org/jira/browse/HBASE-14061
>             Project: HBase
>          Issue Type: Sub-task
>          Components: HFile, regionserver
>         Environment: hadoop-2.6.0
>            Reporter: Victor Xu
>            Assignee: Yu Li
>             Fix For: 2.0.0
>         Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch, HBASE-14061.v3.patch,
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934],
I wrote a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot data, which
is usually located in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a configured
directory, so I had to make sub-directories (one per cf) in the region's .tmp directory and
set the storage policy on them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because dfs.getStoragePolicy cannot
easily be invoked via reflection, and I needed this api to finish my unit test.
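The compatibility trick alluded to here, calling a possibly-absent HDFS API without a compile-time dependency, is typically done via reflection. A minimal sketch of that pattern, using a stand-in class (not the real DistributedFileSystem) so it is self-contained:

```java
import java.lang.reflect.Method;

// Stand-in for DistributedFileSystem, so the sketch is self-contained;
// in real code the target would be the Hadoop FileSystem instance.
class FakeDfs {
    public void setStoragePolicy(String path, String policy) {
        System.out.println("set " + policy + " on " + path);
    }
}

public class ReflectiveStoragePolicy {
    // Look up and invoke setStoragePolicy if present; silently skip when the
    // running Hadoop version does not provide it.
    static void trySetPolicy(Object fs, String path, String policy) {
        try {
            Method m = fs.getClass().getMethod("setStoragePolicy",
                    String.class, String.class);
            m.invoke(fs, path, policy);
        } catch (ReflectiveOperationException e) {
            // Method absent or inaccessible on this Hadoop version: no-op.
        }
    }

    public static void main(String[] args) {
        trySetPolicy(new FakeDfs(), "/region/.tmp/cf", "ONE_SSD");
    }
}
```

Invoking a setter this way degrades gracefully on old Hadoop, but reading a policy back (getStoragePolicy) returns a Hadoop-specific type, which is why reflection is much more awkward for it, as the description notes.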

This message was sent by Atlassian JIRA
