phoenix-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-5070) NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in secure setup
Date Wed, 19 Dec 2018 19:29:00 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725299#comment-16725299 ]

Hadoop QA commented on PHOENIX-5070:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12952387/PHOENIX-5070.patch
  against master branch at commit 69d2d76923ae7a070373431bb774188d3d6c5571.
  ATTACHMENT ID: 12952387

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:red}-1 release audit{color}.  The applied patch generated 1 release audit warning (more than the master's current 0 warnings).

    {color:green}+1 lineLengths{color}.  The patch does not introduce lines longer than 100 characters.

     {color:red}-1 core tests{color}.  The patch failed these unit tests:
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ViewIndexIT
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/2220//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/2220//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/2220//console

This message is automatically generated.

> NPE when upgrading Phoenix 4.13.0 to Phoenix 4.14.1 with hbase-1.x branch in secure setup
> -----------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5070
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5070
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.0, 4.14.1
>            Reporter: Monani Mihir
>            Assignee: Monani Mihir
>            Priority: Blocker
>             Fix For: 4.15.0
>
>         Attachments: PHOENIX-5070-4.x-HBase-1.3.01.patch, PHOENIX-5070-4.x-HBase-1.3.02.patch, PHOENIX-5070-4.x-HBase-1.3.03.patch, PHOENIX-5070.patch
>
>
> PhoenixAccessController populates accessControllers during calls like loadTable before it checks whether the current user has all required permissions for the given HBase table and schema.
> With [PHOENIX-4661|https://issues.apache.org/jira/browse/PHOENIX-4661], this population step was removed for the preGetTable call only. Because of this, when we upgrade Phoenix from 4.13.0 to 4.14.1, we get an NPE on accessControllers in PhoenixAccessController#getUserPermissions.
>  Here is the exception stack trace:
>  
> {code:java}
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NullPointerException
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:598)
> at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16357)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8354)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2208)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2190)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.lang.NullPointerException
> at org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:409)
> at org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:403)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
> at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:453)
> at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:434)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:210)
> at org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:403)
> at org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:482)
> at org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:104)
> at org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:161)
> at org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:81)
> at org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preGetTable(PhoenixMetaDataCoprocessorHost.java:157)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:563)
> ... 9 more
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1291)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:231)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:340)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:35542)
> at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1734)
> ... 13 more
> {code}
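
The NPE at PhoenixAccessController.getUserPermissions is consistent with accessControllers never being populated on the preGetTable path. Below is a minimal, self-contained sketch of the kind of lazy-initialization guard that avoids this failure mode; the class and member names (AccessControllerGuardSketch, AccessChecker, getAccessControllers, userHasPermission) are hypothetical stand-ins for illustration only, not the actual Phoenix code or the committed patch.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Standalone illustration of the failure mode described above and a guard against it.
// NOT the real PhoenixAccessController; all names here are hypothetical stand-ins.
public class AccessControllerGuardSketch {

    /** Hypothetical stand-in for an HBase AccessController-style permission checker. */
    interface AccessChecker {
        boolean hasTablePermission(String user, String table);
    }

    // Populated lazily. In the reported bug, the preGetTable path skipped the usual
    // population step, so the equivalent list was still null when it was iterated.
    private List<AccessChecker> accessControllers;

    /** Ensures the controller list is populated before anyone iterates it. */
    private synchronized List<AccessChecker> getAccessControllers() {
        if (accessControllers == null) {
            accessControllers = new ArrayList<>();
            // A real coprocessor would discover the installed access controllers here;
            // this sketch just registers a permissive dummy checker.
            accessControllers.add((user, table) -> true);
        }
        return accessControllers;
    }

    /** Guarded permission check: goes through the initializer, never the raw field. */
    public boolean userHasPermission(String user, String table) {
        for (AccessChecker checker : getAccessControllers()) { // never null here
            if (!checker.hasTablePermission(user, table)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        AccessControllerGuardSketch sketch = new AccessControllerGuardSketch();
        // Without the guard, iterating a null list would throw the same kind of
        // NullPointerException shown in the stack trace above.
        System.out.println(sketch.userHasPermission("upgradeUser", "SYSTEM.CATALOG"));
    }
}
{code}

The point of the guard is simply that every caller reaches the list through getAccessControllers() rather than the raw field, so a code path that skips the usual population step (as preGetTable did here) cannot observe a null list.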



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
