phoenix-dev mailing list archives

From "Pedro Boado (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (PHOENIX-4372) Distribution of Apache Phoenix 4.13 for CDH 5.11.2
Date Sun, 19 Nov 2017 15:23:01 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258510#comment-16258510 ]

Pedro Boado edited comment on PHOENIX-4372 at 11/19/17 3:22 PM:
----------------------------------------------------------------

Only two tests are failing now in {{NeedsOwnMiniClusterTest}} after removing the dependency:

This one passes when run on its own, so it looks like one of the previous tests did not shut down properly:
{code}
[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 7.848 s <<< FAILURE! - in org.apache.phoenix.end2end.SystemTablePermissionsIT
[ERROR] testNamespaceMappedSystemTables(org.apache.phoenix.end2end.SystemTablePermissionsIT)  Time elapsed: 6.713 s  <<< ERROR!
java.io.IOException: Shutting down
        at org.apache.phoenix.end2end.SystemTablePermissionsIT.testNamespaceMappedSystemTables(SystemTablePermissionsIT.java:162)
Caused by: java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterAddress already in use
        at org.apache.phoenix.end2end.SystemTablePermissionsIT.testNamespaceMappedSystemTables(SystemTablePermissionsIT.java:162)
Caused by: java.net.BindException: Port in use: 0.0.0.0:60010
        at org.apache.phoenix.end2end.SystemTablePermissionsIT.testNamespaceMappedSystemTables(SystemTablePermissionsIT.java:162)
Caused by: java.net.BindException: Address already in use
        at org.apache.phoenix.end2end.SystemTablePermissionsIT.testNamespaceMappedSystemTables(SystemTablePermissionsIT.java:162)

[ERROR] testSystemTablePermissions(org.apache.phoenix.end2end.SystemTablePermissionsIT)  Time elapsed: 1.133 s  <<< ERROR!
java.io.IOException: Shutting down
        at org.apache.phoenix.end2end.SystemTablePermissionsIT.testSystemTablePermissions(SystemTablePermissionsIT.java:104)
Caused by: java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterAddress already in use
        at org.apache.phoenix.end2end.SystemTablePermissionsIT.testSystemTablePermissions(SystemTablePermissionsIT.java:104)
Caused by: java.net.BindException: Port in use: 0.0.0.0:60010
        at org.apache.phoenix.end2end.SystemTablePermissionsIT.testSystemTablePermissions(SystemTablePermissionsIT.java:104)
Caused by: java.net.BindException: Address already in use
        at org.apache.phoenix.end2end.SystemTablePermissionsIT.testSystemTablePermissions(SystemTablePermissionsIT.java:104)
{code}
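The root cause here is the master info server failing to bind its default port (0.0.0.0:60010), presumably still held by a cluster from an earlier test. A minimal, self-contained sketch of that failure mode, using plain java.net rather than the HBase code:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class BindConflictDemo {
    public static void main(String[] args) throws IOException {
        // First socket grabs an ephemeral port, standing in for the
        // info server left behind by a test that did not shut down.
        try (ServerSocket first = new ServerSocket(0)) {
            int port = first.getLocalPort();
            // A second bind to the same port fails just like the log above.
            try (ServerSocket second = new ServerSocket(port)) {
                System.out.println("unexpected: second bind succeeded");
            } catch (IOException e) {
                System.out.println("caught " + e.getClass().getSimpleName());
            }
        }
    }
}
```

If this keeps recurring, setting {{hbase.master.info.port}} to -1 in the test cluster configuration disables the master info server entirely; I believe the mini cluster honours that property, but I haven't verified it on this branch yet.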

... and ...

{code}
[ERROR] Tests run: 33, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 337.164 s <<< FAILURE! - in org.apache.phoenix.end2end.IndexScrutinyToolIT
[ERROR] testOutputInvalidRowsToFile[1](org.apache.phoenix.end2end.IndexScrutinyToolIT)  Time elapsed: 5.59 s  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: The last block in /tmp/957a3a39-5921-4fac-a761-292c44e1b8e9/T000043.T000044/part-m-00000 is not full; last block size = 65 but file block size = 134217728
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInternal(FSNamesystem.java:2193)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInt(FSNamesystem.java:2134)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concat(FSNamesystem.java:2096)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.concat(NameNodeRpcServer.java:818)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.concat(AuthorizationProviderProxyClientProtocol.java:280)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.concat(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2220)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2214)

        at org.apache.phoenix.end2end.IndexScrutinyToolIT.testOutputInvalidRowsToFile(IndexScrutinyToolIT.java:457)

[ERROR] testOutputInvalidRowsToFile[2](org.apache.phoenix.end2end.IndexScrutinyToolIT)  Time elapsed: 10.789 s  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: The last block in /tmp/7a9cf8df-d7c4-4604-a103-5ba0da12d58d/T000076.T000077/part-m-00000 is not full; last block size = 65 but file block size = 134217728
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInternal(FSNamesystem.java:2193)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInt(FSNamesystem.java:2134)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concat(FSNamesystem.java:2096)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.concat(NameNodeRpcServer.java:818)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.concat(AuthorizationProviderProxyClientProtocol.java:280)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.concat(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2220)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2214)

        at org.apache.phoenix.end2end.IndexScrutinyToolIT.testOutputInvalidRowsToFile(IndexScrutinyToolIT.java:457)
{code}

... I don't have a clue about this error yet. 
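For reference, if I read the NameNode message right, {{FSNamesystem.concatInternal}} on this Hadoop version refuses a source file whose last block is not a full block, and the 65-byte map output trips that check against the default 128 MB block size. A hypothetical sketch of that arithmetic (my own helper names, not the actual HDFS code):

```java
public class ConcatPrecondition {
    // Mimics the check behind "The last block in <file> is not full":
    // a source file is only accepted if its size is an exact multiple
    // of the block size.
    static String check(String path, long fileSize, long blockSize) {
        long lastBlock = fileSize % blockSize;
        if (lastBlock != 0) {
            return "The last block in " + path + " is not full; last block size = "
                    + lastBlock + " but file block size = " + blockSize;
        }
        return "ok";
    }

    public static void main(String[] args) {
        // 65-byte map output against the default 128 MB block size, as in the log
        System.out.println(check("part-m-00000", 65L, 134217728L));
    }
}
```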



> Distribution of Apache Phoenix 4.13 for CDH 5.11.2
> --------------------------------------------------
>
>                 Key: PHOENIX-4372
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4372
>             Project: Phoenix
>          Issue Type: Task
>    Affects Versions: 4.13.0
>            Reporter: Pedro Boado
>            Priority: Minor
>              Labels: cdh
>         Attachments: PHOENIX-4372-v2.patch, PHOENIX-4372.patch
>
>
> Changes required on top of branch 4.13-HBase-1.2 for creating a parcel of Apache Phoenix 4.13.0 for CDH 5.11.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
