hbase-user mailing list archives

From Scott Cinnamond <scinnam...@gmail.com>
Subject Re: Best practices for custom filter class distribution?
Date Wed, 27 Jun 2012 20:29:35 GMT
Agree with the above comment on FilterList. You can create an
"expression tree" of arbitrary depth by nesting FilterList instances, and
HBase seems to navigate and process it very nicely for both row and column
filters.
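
To make that concrete, here is a minimal sketch of nesting FilterList to
express (status == "active") AND (type == "gold" OR type == "silver") with
stock filters; the "cf" family and the "status"/"type" qualifiers and values
are made-up names for illustration, not anything from this thread:

    // Nested FilterList as an expression tree; column names are hypothetical.
    import java.util.Arrays;

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
    import org.apache.hadoop.hbase.filter.Filter;
    import org.apache.hadoop.hbase.filter.FilterList;
    import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NestedFilterExample {
      public static Scan buildScan() {
        // Leaf filter: status == "active"
        Filter statusActive = new SingleColumnValueFilter(
            Bytes.toBytes("cf"), Bytes.toBytes("status"),
            CompareOp.EQUAL, Bytes.toBytes("active"));

        // Leaf filters: type == "gold", type == "silver"
        Filter typeGold = new SingleColumnValueFilter(
            Bytes.toBytes("cf"), Bytes.toBytes("type"),
            CompareOp.EQUAL, Bytes.toBytes("gold"));
        Filter typeSilver = new SingleColumnValueFilter(
            Bytes.toBytes("cf"), Bytes.toBytes("type"),
            CompareOp.EQUAL, Bytes.toBytes("silver"));

        // Inner OR node of the tree
        FilterList typeOr = new FilterList(FilterList.Operator.MUST_PASS_ONE,
            Arrays.asList(typeGold, typeSilver));

        // Outer AND node wrapping a leaf and the OR subtree
        FilterList root = new FilterList(FilterList.Operator.MUST_PASS_ALL,
            Arrays.asList(statusActive, typeOr));

        Scan scan = new Scan();
        scan.setFilter(root);
        return scan;
      }
    }

The same pattern extends to deeper trees: any FilterList can itself be a
child of another FilterList.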

On Wed, Jun 27, 2012 at 2:33 PM, Michael Segel <michael_segel@hotmail.com> wrote:

> One way...
>
> Create an NFS-mountable directory for your cluster and mount it on all of
> the DNs.
> You can then either place a symbolic link to the jar in /usr/lib/hadoop/lib
> or add the jar to the classpath in /etc/hadoop/conf/hadoop-env.sh
> (assuming Cloudera).
>
>
> On Jun 27, 2012, at 12:47 PM, Evan Pollan wrote:
>
> > What're the current best practices for making custom Filter
> > implementation classes available to the region servers?  My cluster is
> > running 0.90.4 from the CDH3U3 distribution, FWIW.
> >
> > I searched around and didn't find anything other than "add your filter to
> > the region server's classpath."  I'm hoping there's support for something
> > that doesn't involve actually installing jar files on each region server,
> > updating each region server's configuration, and doing a rolling restart
> > of the whole cluster...
> >
> > I did find this still-outstanding bug requesting parity between
> > HDFS-based co-processor class loading and filter class loading:
> > https://issues.apache.org/jira/browse/HBASE-1936.
> >
> > How are folks handling this?
> >
> > The stock filters are fairly limited, especially without the ability (at
> > least AFAIK) to combine the existing filters together via basic boolean
> > algebra, so I can't do much without writing my own filter(s).
> >
> >
> > thanks,
> > Evan
>
>
