hbase-user mailing list archives

From James Taylor <jtay...@salesforce.com>
Subject Re: deploy Salesforce Phoenix coprocessor to hbase/lib?
Date Tue, 10 Sep 2013 23:39:34 GMT
When a table is created with Phoenix, its HBase table is configured
with the Phoenix coprocessors. We do not specify a jar path, so the
Phoenix jar that contains the coprocessor implementation classes must
be on the classpath of the region server.
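
For illustration, here's roughly what that looks like through the plain
HBase admin API (just a sketch, not our actual install code; the table
name and column family are placeholders, and ScanRegionObserver stands
in for the set of Phoenix coprocessor classes):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTableWithCoprocessor {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("MY_TABLE");
    desc.addFamily(new HColumnDescriptor("0"));
    // Class name only, no jar path: the class is resolved from the
    // region server's own classpath (e.g. a jar under hbase/lib).
    desc.addCoprocessor("com.salesforce.phoenix.coprocessor.ScanRegionObserver");

    admin.createTable(desc);
    admin.close();
  }
}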

In addition to coprocessors, Phoenix relies on custom filters, which
are also in the Phoenix jar. In theory you could put the jar in HDFS,
use the relatively new HBase feature to load custom filters from HDFS,
and issue alter table calls for existing Phoenix HBase tables to
reconfigure the coprocessors. When new Phoenix tables are created,
though, they wouldn't have this jar path.
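
A sketch of what that alter could look like via the Java admin API
(the HDFS jar location is hypothetical, and ScanRegionObserver again
stands in for whichever coprocessors the table is configured with; in
practice you'd repeat this for each one):

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class AlterTableCoprocessorFromHdfs {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    byte[] tableName = "MY_TABLE".getBytes();
    admin.disableTable(tableName);

    HTableDescriptor desc = admin.getTableDescriptor(tableName);
    // This overload takes an explicit jar path, so the jar can live in
    // HDFS instead of on every region server's local classpath.
    desc.addCoprocessor(
        "com.salesforce.phoenix.coprocessor.ScanRegionObserver",
        new Path("hdfs:///phoenix/phoenix-client.jar"),  // hypothetical path
        Coprocessor.PRIORITY_USER,
        Collections.<String, String>emptyMap());

    admin.modifyTable(tableName, desc);
    admin.enableTable(tableName);
    admin.close();
  }
}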

FYI, we're looking into modifying our install procedure to do the
above; see https://github.com/forcedotcom/phoenix/issues/216 if folks
are interested in contributing.


On Sep 10, 2013, at 2:41 PM, Tianying Chang <tichang@ebaysf.com> wrote:

> Hi,
> Since this is not an HBase system-level jar, but more like user code,
> should we deploy it under hbase/lib? It seems we can use "alter" to add
> the coprocessor for a particular user table. So can I put the jar file
> anywhere that is accessible, e.g. hdfs:/myPath?
> My customer said there is no need to run the "alter" command. Instead,
> as long as I put the jar into hbase/lib, then when the Phoenix client
> makes a read call, it will add the coprocessor attr to the table being
> read. That seems suspicious. Does the Phoenix client call "alter" under
> the covers for the client already?
> Anyone know about this?
> Thanks,
> Tian-Ying
