hbase-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13907) Document how to deploy a coprocessor
Date Tue, 11 Aug 2015 08:15:46 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681411#comment-14681411

Hadoop QA commented on HBASE-13907:

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  against master branch at commit 3d5801602da7cde1f20bdd4b898e8b3cac77f2a3.
  ATTACHMENT ID: 12749764

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+0 tests included{color}.  The patch appears to be a documentation patch
that doesn't require tests.

    {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions
(2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of
javac compiler warnings.

    {color:green}+1 protoc{color}.  The applied patch does not increase the total number of
protoc compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 checkstyle{color}.  The applied patch does not increase the total number
of checkstyle errors.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version
2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number
of release audit warnings.

    {color:green}+1 lineLengths{color}.  The patch does not introduce lines longer than 100 characters.

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

     {color:red}-1 core tests{color}.  The patch failed these unit tests:

     {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):
	at org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat2.testWritingPEData(TestHFileOutputFormat2.java:335)
	at org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:384)
	at org.apache.hadoop.hbase.mapreduce.TestCellCounter.testCellCounterForCompleteTable(TestCellCounter.java:299)
	at org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat.testWithMapReduceImpl(TestTableSnapshotInputFormat.java:247)
	at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase.testWithMapReduce(TableSnapshotInputFormatTestBase.java:112)
	at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase.testWithMapReduceSingleRegion(TableSnapshotInputFormatTestBase.java:91)

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/15038//testReport/
Findbugs (version 2.0.3) warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/15038//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/15038//artifact/patchprocess/checkstyle-aggregate.html

  Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/15038//console

This message is automatically generated.

> Document how to deploy a coprocessor
> ------------------------------------
>                 Key: HBASE-13907
>                 URL: https://issues.apache.org/jira/browse/HBASE-13907
>             Project: HBase
>          Issue Type: Bug
>          Components: documentation
>            Reporter: Misty Stanley-Jones
>            Assignee: Misty Stanley-Jones
>         Attachments: HBASE-13907-1.patch, HBASE-13907-2.patch, HBASE-13907-v3.patch,
> Capture this information:
> > Where are the dependencies located for these classes? Is there a path on HDFS or
local disk that dependencies need to be placed so that each RegionServer has access to them?
> It is suggested to bundle them into a single jar so that the RS can load the whole jar and
resolve dependencies. If you are not able to do that, you need to place the dependencies on the
RegionServers' classpath so that they are loaded during RS startup. Do either of these options work for
you? Btw, you can place the coprocessors/filters into the path specified by hbase.dynamic.jars.dir
[1], so that they are loaded dynamically by RegionServers when the class is accessed (or you
can place them on the RS classpath too, so that they are loaded during RS JVM startup).
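For reference, a minimal hbase-site.xml fragment setting hbase.dynamic.jars.dir might look like the sketch below; the HDFS path is only an example, not a required location:

```xml
<!-- hbase-site.xml: directory scanned for dynamically loadable jars.
     The path shown here is a hypothetical example. -->
<property>
  <name>hbase.dynamic.jars.dir</name>
  <value>hdfs:///hbase/dynamic-jars</value>
</property>
```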
> > How would one deploy these using an automated system? (puppet/chef/ansible/etc)
> You can probably use these tools to automate shipping the jars to the above locations.
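As an illustration only, an Ansible task along these lines could push the jar into the dynamic-jars directory; the paths, jar name, and user are hypothetical placeholders, not values from this issue:

```yaml
# Hypothetical Ansible task: copy a coprocessor jar into the
# hbase.dynamic.jars.dir location on HDFS (all paths are examples).
- name: Deploy coprocessor jar to HDFS
  command: >
    hdfs dfs -put -f /tmp/my-coprocessor.jar
    /hbase/dynamic-jars/my-coprocessor.jar
  become: true
  become_user: hbase
```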
> > Tests our developers have done suggest that simply disabling a coprocessor, replacing
the jar with a different version, and enabling the coprocessor again does not load the newest
version. With that in mind, how does one know which version is currently deployed and enabled
without resorting to parsing `hbase shell` output or restarting hbase?
> Actually this is a design issue with the current classloader. You can't reload a class in
a JVM unless you delete all current references to it. Since the current JVM (classloader)
holds a reference to it, you can't overwrite it unless you kill the JVM, which is equivalent to
restarting it. So the older class stays loaded. For this to work, the classloader
design would have to change. As a workaround, you can rename the coprocessor class in the
new version of the jar, and the RS will load it properly.
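As a sketch of that workaround, the hbase shell sequence below attaches a renamed coprocessor class from a new jar using the standard `jar-path|class|priority|args` attribute format; the table name, jar path, and class name are placeholders:

```ruby
# Hypothetical hbase shell session: attach v2 of a coprocessor
# under a new class name so the RS loads the fresh class.
disable 'mytable'
alter 'mytable', METHOD => 'table_att',
  'coprocessor' => 'hdfs:///hbase/dynamic-jars/my-coproc-v2.jar|com.example.MyObserverV2|1001|'
enable 'mytable'
```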
> > Where does logging go, and how does one access it? Does logging need to be configured
in a certain way?
> Can you please specify which logging you are referring to?
> > Where is a good location to place configuration files?
> Same question as above: are these HBase configs or something else? If HBase configs, are they
client-side (gateway) or server-side?

This message was sent by Atlassian JIRA
