hbase-user mailing list archives

From Tom Brown <tombrow...@gmail.com>
Subject Re: How to deploy coprocessor via HDFS
Date Mon, 27 Oct 2014 21:42:46 GMT
I'm not sure how to tell if it is a region endpoint or a region server
endpoint.

I have not had to explicitly associate the coprocessor with the table
before (it is loaded via "hbase.coprocessor.region.classes" in
hbase-site.xml), so it might be a region server endpoint. However, the
coprocessor code knows to which table the request applies, so it might be a
region endpoint.
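
For context, a coprocessor loaded via "hbase.coprocessor.region.classes" is configured system-wide, roughly like this in hbase-site.xml (the classname below is a placeholder, not the actual class in question):

```xml
<!-- Loads the endpoint on every region of every table. A jar configured
     this way must be on the region server's local classpath; this property
     does not load jars from HDFS. Classname is hypothetical. -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>com.foo.FooEndpoint</value>
</property>
```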

If it helps, this is a 0.94.x cluster (and upgrading isn't doable right
now).

Can both types of endpoint be loaded from HDFS, or just the table-based one?

--Tom

On Mon, Oct 27, 2014 at 3:31 PM, Gary Helmling <ghelmling@gmail.com> wrote:

> Hi Tom,
>
> First off, are you talking about a region endpoint (vs. master
> endpoint or region server endpoint)?
>
> As long as you are talking about a region endpoint, the endpoint
> coprocessor can be configured as a table coprocessor, the same as a
> RegionObserver.  You can see an example and description in the HBase
> guide: http://hbase.apache.org/book/ch13s03.html
>
> From the HBase shell:
>
>   hbase> alter 't1',
>     'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
>
> The arguments are: the HDFS path to the jar, the coprocessor classname,
> the load priority, and optional key=value parameters.  Arguments are
> separated by a '|' character.
>
> Using this configuration, your endpoint class should be loaded from
> the jar file in HDFS.  If it's not loaded, you can check the
> regionserver log of any of the servers hosting the table's regions.
> Just search for your endpoint classname and you should find an error
> message of what went wrong.
>
>
>
> On Mon, Oct 27, 2014 at 2:03 PM, Tom Brown <tombrown52@gmail.com> wrote:
> > Is it possible to deploy an endpoint coprocessor via HDFS or must I
> > distribute the jar file to each regionserver individually?
> >
> > In my testing, it appears the endpoint coprocessors cannot be loaded from
> > HDFS, though I'm not at all sure I'm doing it right (are delimiters ":" or
> > "|"? when I use "hdfs:///", does that map to the root HDFS path or the
> > HBase HDFS path? etc.).
> >
> > I have attempted to google this, and have not found any clear answer.
> >
> > Thanks in advance!
> >
> > --Tom
>
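
The log check Gary describes can be sketched as a quick grep over the regionserver logs (the log directory and classname below are assumptions for illustration, not values from this thread):

```shell
# Sketch only: search the regionserver logs for the endpoint classname to
# find any coprocessor load errors. LOG_DIR and CLASS are assumptions;
# adjust both for your installation.
LOG_DIR="${HBASE_LOG_DIR:-/var/log/hbase}"
CLASS="com.foo.FooEndpoint"
grep -rin "$CLASS" "$LOG_DIR" 2>/dev/null || echo "no log entries for $CLASS"
```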
