hbase-builds mailing list archives

From: Apache Jenkins Server <jenk...@builds.apache.org>
Subject: Build failed in Jenkins: HBase-Trunk_matrix » latest1.8,yahoo-not-h2 #1284
Date: Sun, 24 Jul 2016 03:25:03 GMT
See <https://builds.apache.org/job/HBase-Trunk_matrix/jdk=latest1.8,label=yahoo-not-h2/1284/changes>

Changes:

[syuanjiangdev] HBASE-16008 A robust way to deal with early termination of HBCK (Stephen

------------------------------------------
[...truncated 40789 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.wal.TestSecureWAL
Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 342.819 sec <<< FAILURE! - in org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
testSnapshotStateAfterMerge(org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient)  Time elapsed: 133.256 sec  <<< ERROR!
org.apache.hadoop.hbase.client.ScannerTimeoutException: 60065ms passed since the last invocation, timeout is currently set to 60000
	at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:433)
	at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:363)
	at org.apache.hadoop.hbase.client.ClientSimpleScanner.next(ClientSimpleScanner.java:51)
	at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:776)
	at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:702)
	at org.apache.hadoop.hbase.MetaTableAccessor.getTableRegionsAndLocations(MetaTableAccessor.java:624)
	at org.apache.hadoop.hbase.MetaTableAccessor.getTableRegions(MetaTableAccessor.java:447)
	at org.apache.hadoop.hbase.client.HBaseAdmin.getTableRegions(HBaseAdmin.java:2219)
	at org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.waitRegionsAfterMerge(TestFlushSnapshotFromClient.java:523)
	at org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.testSnapshotStateAfterMerge(TestFlushSnapshotFromClient.java:332)
Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '66'. This can happen due to any of the following reasons: a) Scanner id given is wrong, b) Scanner lease expired because of long wait between consecutive client checkins, c) Server may be closing down, d) RegionServer restart during upgrade. If the issue is due to reason (b), a possible fix would be increasing the value of 'hbase.client.scanner.timeout.period' configuration.
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2599)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:38435)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
	at org.apache.hadoop.hbase.ipc.AsyncCall.setFailed(AsyncCall.java:159)
	at org.apache.hadoop.hbase.ipc.AsyncServerResponseHandler.channelRead0(AsyncServerResponseHandler.java:81)
	at org.apache.hadoop.hbase.ipc.AsyncServerResponseHandler.channelRead0(AsyncServerResponseHandler.java:38)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '66'. This can happen due to any of the following reasons: a) Scanner id given is wrong, b) Scanner lease expired because of long wait between consecutive client checkins, c) Server may be closing down, d) RegionServer restart during upgrade. If the issue is due to reason (b), a possible fix would be increasing the value of 'hbase.client.scanner.timeout.period' configuration.
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2599)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:38435)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)

	at org.apache.hadoop.hbase.ipc.AsyncServerResponseHandler.createRemoteException(AsyncServerResponseHandler.java:124)
	at org.apache.hadoop.hbase.ipc.AsyncServerResponseHandler.channelRead0(AsyncServerResponseHandler.java:76)
	at org.apache.hadoop.hbase.ipc.AsyncServerResponseHandler.channelRead0(AsyncServerResponseHandler.java:38)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:326)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1320)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:905)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:563)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:504)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:418)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:390)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
	at java.lang.Thread.run(Thread.java:745)
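
The UnknownScannerException above names the relevant knob, 'hbase.client.scanner.timeout.period', and the scan exceeded the 60000 ms default by 65 ms. As a minimal, hypothetical sketch only (the class name and the 120000 ms value are illustrative choices, not taken from this build), the client-side setting could be raised before opening a connection:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ScannerTimeoutExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Default is 60000 ms, which is what the scan above exceeded (60065 ms).
        // 120000 ms is an arbitrary example value, not a recommendation from this log.
        conf.setInt("hbase.client.scanner.timeout.period", 120000);
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
          // Scanners opened through this connection use the longer timeout.
        }
      }
    }

The same key would normally also be raised in hbase-site.xml on the cluster so the RegionServer's scanner lease period matches the client expectation.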

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.752 sec - in org.apache.hadoop.hbase.wal.TestSecureWAL
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.wal.TestDefaultWALProviderWithHLogKey
Running org.apache.hadoop.hbase.wal.TestFSHLogProvider
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.386 sec - in org.apache.hadoop.hbase.wal.TestDefaultWALProviderWithHLogKey
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.749 sec - in org.apache.hadoop.hbase.wal.TestFSHLogProvider
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 73.979 sec - in org.apache.hadoop.hbase.wal.TestBoundedRegionGroupingStrategy
Running org.apache.hadoop.hbase.wal.TestWALSplitCompressed
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.463 sec - in org.apache.hadoop.hbase.wal.TestWALSplit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.zookeeper.TestZKLeaderManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.295 sec - in org.apache.hadoop.hbase.zookeeper.TestZKLeaderManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.zookeeper.TestZooKeeperNodeTracker
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.626 sec - in org.apache.hadoop.hbase.zookeeper.TestZooKeeperNodeTracker
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.zookeeper.TestRecoverableZooKeeper
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.638 sec - in org.apache.hadoop.hbase.zookeeper.TestRecoverableZooKeeper
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.zookeeper.TestZKMulti
Running org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock
Running org.apache.hadoop.hbase.zookeeper.TestZooKeeperACL
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.46 sec - in org.apache.hadoop.hbase.zookeeper.TestZooKeeperACL
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.235 sec - in org.apache.hadoop.hbase.zookeeper.TestZKMulti
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.zookeeper.TestHQuorumPeer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.1 sec - in org.apache.hadoop.hbase.zookeeper.TestHQuorumPeer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.TestIOFencing
Running org.apache.hadoop.hbase.constraint.TestConstraint
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.447 sec - in org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.TestMovedRegionsCleaner
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.095 sec - in org.apache.hadoop.hbase.TestMovedRegionsCleaner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.814 sec - in org.apache.hadoop.hbase.constraint.TestConstraint
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient
Running org.apache.hadoop.hbase.backup.TestHFileArchiving
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.059 sec - in org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.698 sec - in org.apache.hadoop.hbase.wal.TestWALSplitCompressed
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.tool.TestCanaryTool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.894 sec - in org.apache.hadoop.hbase.TestIOFencing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.ipc.TestProtoBufRpc
Running org.apache.hadoop.hbase.TestHColumnDescriptorDefaultVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.389 sec - in org.apache.hadoop.hbase.ipc.TestProtoBufRpc
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.ipc.TestRpcClientLeaks
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.065 sec - in org.apache.hadoop.hbase.TestHColumnDescriptorDefaultVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Running org.apache.hadoop.hbase.ipc.TestHBaseClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.61 sec - in org.apache.hadoop.hbase.ipc.TestHBaseClient
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.261 sec - in org.apache.hadoop.hbase.ipc.TestRpcClientLeaks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 73.15 sec - in org.apache.hadoop.hbase.tool.TestCanaryTool
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.114 sec - in org.apache.hadoop.hbase.backup.TestHFileArchiving
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 366.797 sec - in org.apache.hadoop.hbase.io.encoding.TestChangingEncoding

Results :

Failed tests: 
  TestHashTable.testHashTable:107 test job failed expected:<0> but was:<1>
Tests in error: 
  TestFromClientSide3.testAdvancedConfigOverride:150->Object.wait:460->Object.wait:-2 » TestTimedOut
  TestHRegionServerBulkLoad.testAtomicBulkLoad:355 » NullPointer
  TestFlushSnapshotFromClient.testSnapshotStateAfterMerge:332->waitRegionsAfterMerge:523 » ScannerTimeout

Tests run: 1921, Failures: 1, Errors: 3, Skipped: 26

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache HBase ...................................... SUCCESS [  4.639 s]
[INFO] Apache HBase - Checkstyle ......................... SUCCESS [  0.710 s]
[INFO] Apache HBase - Resource Bundle .................... SUCCESS [  0.217 s]
[INFO] Apache HBase - Annotations ........................ SUCCESS [  0.156 s]
[INFO] Apache HBase - Protocol ........................... SUCCESS [  3.527 s]
[INFO] Apache HBase - Common ............................. SUCCESS [01:50 min]
[INFO] Apache HBase - Procedure .......................... SUCCESS [01:53 min]
[INFO] Apache HBase - Client ............................. SUCCESS [ 45.983 s]
[INFO] Apache HBase - Hadoop Compatibility ............... SUCCESS [  8.445 s]
[INFO] Apache HBase - Hadoop Two Compatibility ........... SUCCESS [ 11.401 s]
[INFO] Apache HBase - Prefix Tree ........................ SUCCESS [ 12.027 s]
[INFO] Apache HBase - Server ............................. FAILURE [  01:30 h]
[INFO] Apache HBase - Testing Util ....................... SKIPPED
[INFO] Apache HBase - Thrift ............................. SKIPPED
[INFO] Apache HBase - RSGroup ............................ SKIPPED
[INFO] Apache HBase - Shell .............................. SKIPPED
[INFO] Apache HBase - Integration Tests .................. SKIPPED
[INFO] Apache HBase - Examples ........................... SKIPPED
[INFO] Apache HBase - Rest ............................... SKIPPED
[INFO] Apache HBase - External Block Cache ............... SKIPPED
[INFO] Apache HBase - Spark .............................. SKIPPED
[INFO] Apache HBase - Assembly ........................... SKIPPED
[INFO] Apache HBase - Shaded ............................. SKIPPED
[INFO] Apache HBase - Shaded - Client .................... SKIPPED
[INFO] Apache HBase - Shaded - Server .................... SKIPPED
[INFO] Apache HBase - Archetypes ......................... SKIPPED
[INFO] Apache HBase - Exemplar for hbase-client archetype  SKIPPED
[INFO] Apache HBase - Exemplar for hbase-shaded-client archetype  SKIPPED
[INFO] Apache HBase - Archetype builder .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:35 h
[INFO] Finished at: 2016-07-24T03:22:39+00:00
[INFO] Final Memory: 71M/2603M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (secondPartTestsExecution) on project hbase-server: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/HBase-Trunk_matrix/jdk=latest1.8,label=yahoo-not-h2/ws/hbase-server/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hbase-server
Build step 'Invoke top-level Maven targets' marked build as failure
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script  : # Run zombie detector script
./dev-support/zombie-detector.sh --jenkins ${BUILD_ID}
[yahoo-not-h2] $ /bin/bash -xe /tmp/hudson2937129689862708013.sh
+ ./dev-support/zombie-detector.sh --jenkins 1284
Sun Jul 24 03:22:41 UTC 2016 We're ok: there is no zombie test


    {color:green}+1 zombies{color}. No zombie tests found running at the end of the build.
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Archiving artifacts
Recording test results
[FINDBUGS] Skipping publisher since build result is FAILURE
[CHECKSTYLE] Skipping publisher since build result is FAILURE
