hbase-user mailing list archives

From Gary Helmling <ghelml...@gmail.com>
Subject Re: How to deploy coprocessor via HDFS
Date Mon, 27 Oct 2014 21:31:07 GMT
Hi Tom,

First off, are you talking about a region endpoint (vs. master
endpoint or region server endpoint)?

As long as you are talking about a region endpoint, the endpoint
coprocessor can be configured as a table coprocessor, the same as a
RegionObserver.  You can see an example and description in the HBase
guide: http://hbase.apache.org/book/ch13s03.html

From the HBase shell:

  hbase> alter 't1',
    'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'

The fields, separated by a '|' character, are: the HDFS path to the
jar, the coprocessor classname, the load priority, and optional
key=value parameters (comma-separated).
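Putting that together, a deploy might look like the sketch below. The
jar name, table name, and classname are the placeholders from the
example above, not real artifacts; also note that on many HBase
versions the table must be disabled before an alter and re-enabled
afterwards.

```shell
# Copy the coprocessor jar into HDFS (jar name and target path are
# placeholders from the example above).
hadoop fs -put foo.jar /foo.jar

# Attach it as a table coprocessor. The disable/enable wrapper is
# needed on HBase versions without online schema change.
echo "disable 't1'
alter 't1', 'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
enable 't1'" | hbase shell
```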

Using this configuration, your endpoint class should be loaded from
the jar file in HDFS.  If it's not loaded, you can check the
regionserver log of any of the servers hosting the table's regions.
Just search for your endpoint classname and you should find an error
message explaining what went wrong.
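For example, the log check might look like this, assuming a default
log4j setup; the log directory varies by install, so the path below is
only a guess.

```shell
# On a regionserver hosting one of the table's regions, search its log
# for the endpoint classname to find any load errors.
grep com.foo.FooRegionObserver /var/log/hbase/hbase-*-regionserver-*.log
```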



On Mon, Oct 27, 2014 at 2:03 PM, Tom Brown <tombrown52@gmail.com> wrote:
> Is it possible to deploy an endpoint coprocessor via HDFS or must I
> distribute the jar file to each regionserver individually?
>
> In my testing, it appears the endpoint coprocessors cannot be loaded from
> HDFS, though I'm not at all sure I'm doing it right (are delimiters ":" or
> "|", when I use "hdfs:///" does that map to the root hdfs path or the hbase
> hdfs path, etc).
>
> I have attempted to google this, and have not found any clear answer.
>
> Thanks in advance!
>
> --Tom
