hbase-user mailing list archives

From Amandeep Khurana <ama...@gmail.com>
Subject Re: Best practices for custom filter class distribution?
Date Wed, 27 Jun 2012 17:54:40 GMT
Currently, you have to compile your filter into a jar, put it on all the region servers, and restart the RS process. I
don't believe there is an easier way to do it right now. And I agree, it's not entirely
desirable to have to restart the cluster to install a custom filter.

You can combine multiple filters into a FilterList and configure it in one of two modes:
MUST_PASS_ALL (logical AND) or MUST_PASS_ONE (logical OR). Did you try that?

List<Filter> myList = new ArrayList<Filter>();
// add your individual filters to myList, then wrap them:
FilterList myFilterList = new FilterList(FilterList.Operator.MUST_PASS_ALL, myList);
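
For instance, here's a rough sketch (against the 0.90-era client API) of combining two of the
stock filters -- a PrefixFilter on the row key and a SingleColumnValueFilter -- with
MUST_PASS_ALL and attaching the result to a Scan. The family, qualifier, and value names
("cf", "status", "user_", "active") are just placeholders:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterListExample {
  public static Scan buildScan() {
    // Keep only rows whose key starts with "user_" ...
    Filter rowPrefix = new PrefixFilter(Bytes.toBytes("user_"));

    // ... and whose cf:status column equals "active".
    Filter statusEquals = new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("status"),
        CompareOp.EQUAL, Bytes.toBytes("active"));

    // MUST_PASS_ALL behaves like a boolean AND across the list;
    // MUST_PASS_ONE would behave like an OR.
    List<Filter> filters = new ArrayList<Filter>();
    filters.add(rowPrefix);
    filters.add(statusEquals);
    FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL, filters);

    Scan scan = new Scan();
    scan.setFilter(filterList);
    return scan;
  }
}

That covers basic AND/OR combinations without writing a custom filter, though anything
fancier still means deploying your own class to the region servers.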

On Wednesday, June 27, 2012 at 1:47 PM, Evan Pollan wrote:

> What're the current best practices for making custom Filter implementation
> classes available to the region servers? My cluster is running 0.90.4 from
> the CDH3U3 distribution, FWIW.
> I searched around and didn't find anything other than "add your filter to
> the region server's classpath." I'm hoping there's support for something
> that doesn't involve actually installing jar files on each region server,
> updating each region server's configuration, and doing a rolling restart of
> the whole cluster...
> I did find this still-outstanding bug requesting parity between HDFS-based
> co-processor class loading and filter class loading:
> https://issues.apache.org/jira/browse/HBASE-1936.
> How are folks handling this?
> The stock filters are fairly limited, especially without the ability (at
> least AFAIK) to combine the existing filters together via basic boolean
> algebra, so I can't do much without writing my own filter(s).
> thanks,
> Evan
