hbase-dev mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Re: [VOTE] The first hbase-2.0.0-beta-1 Release Candidate is available
Date Wed, 10 Jan 2018 10:31:15 GMT
I know this RC sank, but I am still running it on my cluster, so here is a new
issue I just hit...

Any idea what this can be? I see it on only one of my nodes...

2018-01-10 05:22:55,786 WARN  [regionserver/node8.com/192.168.23.2:16020]
wal.AsyncFSWAL: create wal log writer
hdfs://node2.com:8020/hbase/WALs/node8.com,16020,1515579724994/node8.com%2C16020%2C1515579724994.1515579743134
failed, retry = 6
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
syscall:getsockopt(..) failed: Connexion refusée [Connection refused]: /192.168.23.2:50010
at
org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown
Source)
Caused by:
org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException:
syscall:getsockopt(..) failed: Connexion refusée [Connection refused]
... 1 more
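The underlying error is ECONNREFUSED ("Connexion refusée" is its French-locale rendering): at the moment the RS opens the WAL output stream, nothing is accepting connections on the DataNode port 50010 of node8. A minimal sketch of the same failure at the socket level, using only the local loopback and a deliberately unused port (nothing HBase-specific):

```python
import socket

# Grab a port that nothing is listening on: bind to port 0, note the
# kernel-assigned number, then close the listener again.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
unused_port = probe.getsockname()[1]
probe.close()

# Connecting to it now fails the same way the RS fails against 50010.
refused = False
try:
    socket.create_connection(("127.0.0.1", unused_port), timeout=2)
except ConnectionRefusedError as e:
    refused = True
    print("connect failed:", e)
```

So the first thing worth checking on node8 is whether the DataNode process is actually up and bound to 50010 while the RS is retrying.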


From the same node, if I run ls while the RS is starting, I can see the related
directory:


hbase@node8:~/hbase-2.0.0-beta-1/logs$ /home/hadoop/hadoop-2.7.5/bin/hdfs
dfs -ls /hbase/WALs/
Found 35 items
...
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node1.com,16020,1515579724884
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node3.com,16020,1515579738916
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node4.com,16020,1515579717193
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node5.com,16020,1515579724586
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node6.com,16020,1515579724999
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node7.com,16020,1515579725681
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:23 /hbase/WALs/
node8.com,16020,1515579724994



And after the RS retries many times and fails, the directory is gone:
hbase@node8:~/hbase-2.0.0-beta-1/logs$ /home/hadoop/hadoop-2.7.5/bin/hdfs
dfs -ls /hbase/WALs/
Found 34 items
...
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node1.com,16020,1515579724884
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node3.com,16020,1515579738916
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node4.com,16020,1515579717193
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node5.com,16020,1515579724586
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node6.com,16020,1515579724999
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:22 /hbase/WALs/
node7.com,16020,1515579725681
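Set-differencing the two listings (before and after the retries) pins down which server directory disappeared; a small sketch using the directory names from the output above:

```python
# WAL directories visible while the RS was starting...
before = {
    "node1.com,16020,1515579724884",
    "node3.com,16020,1515579738916",
    "node4.com,16020,1515579717193",
    "node5.com,16020,1515579724586",
    "node6.com,16020,1515579724999",
    "node7.com,16020,1515579725681",
    "node8.com,16020,1515579724994",
}
# ...and after the retries failed.
after = {
    "node1.com,16020,1515579724884",
    "node3.com,16020,1515579738916",
    "node4.com,16020,1515579717193",
    "node5.com,16020,1515579724586",
    "node6.com,16020,1515579724999",
    "node7.com,16020,1515579725681",
}
print(before - after)  # {'node8.com,16020,1515579724994'}
```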




2018-01-10 05:23:46,177 ERROR [regionserver/node8.com/192.168.23.2:16020]
regionserver.HRegionServer: ***** ABORTING region server
node8.com,16020,1515579724994:
Unhandled: Failed to create wal log writer
hdfs://node2.com:8020/hbase/WALs/node8.com,16020,1515579724994/node8.com%2C16020%2C1515579724994.1515579743134
after retrying 10 time(s) *****
java.io.IOException: Failed to create wal log writer
hdfs://node2.com:8020/hbase/WALs/node8.com,16020,1515579724994/node8.com%2C16020%2C1515579724994.1515579743134
after retrying 10 time(s)
at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:663)
at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:130)
at
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:766)
at
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:504)
at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<init>(AsyncFSWAL.java:264)
at
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:69)
at
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:44)
at
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:139)
at
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:55)
at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:244)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2123)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1315)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1196)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1008)
at java.lang.Thread.run(Thread.java:748)
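The abort is the end of a bounded retry loop: writer creation is retried (the `retry = 6` warnings above) and given up after 10 attempts, at which point the RS aborts. The control flow is roughly the following (a hedged sketch of the pattern visible in the log, with a hypothetical helper name, not the actual AsyncFSWAL code):

```python
import time

def create_writer_with_retries(create, max_retries=10, delay_secs=1.0):
    """Retry a writer factory until it succeeds or the budget is spent."""
    for attempt in range(max_retries):
        try:
            return create()
        except OSError as e:
            # Mirrors the WARN lines: "create wal log writer ... failed, retry = N"
            print(f"create wal log writer failed, retry = {attempt}: {e}")
            time.sleep(delay_secs)
    # Mirrors the ABORTING message once the budget is exhausted.
    raise IOError(
        f"Failed to create wal log writer after retrying {max_retries} time(s)"
    )
```

With a DataNode that only comes up partway through, the loop logs a few warnings and then succeeds; with one that never answers, it exhausts the budget and raises, which is what the ABORTING message reports.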


...


2018-01-10 05:23:46,324 INFO  [regionserver/node8.com/192.168.23.2:16020]
regionserver.HRegionServer: regionserver/node8.com/192.168.23.2:16020
exiting
2018-01-10 05:23:46,324 ERROR [main] regionserver.HRegionServerCommandLine:
Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
at
org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
at
org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3016)

Which is very surprising, because I can clearly see the directory being
created.

Another attempt here, where I even looked one step deeper and can see the
generated file:
2018-01-10 05:27:58,116 WARN  [regionserver/node8.com/192.168.23.2:16020]
wal.AsyncFSWAL: create wal log writer
hdfs://node2.com:8020/hbase/WALs/node8.com,16020,1515580031417/node8.com%2C16020%2C1515580031417.1515580037373
failed, retry = 7
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
syscall:getsockopt(..) failed: Connexion refusée [Connection refused]: /192.168.23.2:50010
at
org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(..)(Unknown
Source)
Caused by:
org.apache.hbase.thirdparty.io.netty.channel.unix.Errors$NativeConnectException:
syscall:getsockopt(..) failed: Connexion refusée [Connection refused]
... 1 more
2018-01-10 05:28:08,210 INFO  [regionserver/node8.com/192.168.23.2:16020]
util.FSHDFSUtils: Recover lease on dfs file
/hbase/WALs/node8.com,16020,1515580031417/node8.com%2C16020%2C1515580031417.1515580037373
2018-01-10 05:28:08,228 INFO  [regionserver/node8.com/192.168.23.2:16020]
util.FSHDFSUtils: Failed to recover lease, attempt=0 on
file=/hbase/WALs/node8.com,16020,1515580031417/node8.com%2C16020%2C1515580031417.1515580037373
after 17ms

hbase@node8:~/hbase-2.0.0-beta-1/logs$ /home/hadoop/hadoop-2.7.5/bin/hdfs
dfs -ls -R /hbase/WALs/ | grep node8
drwxr-xr-x   - hbase supergroup          0 2018-01-10 05:28 /hbase/WALs/node8.com,16020,1515580031417
-rw-r--r--   3 hbase supergroup          0 2018-01-10 05:28 /hbase/WALs/node8.com,16020,1515580031417/node8.com%2C16020%2C1515580031417.1515580037373


But it still says it fails. Any clue? All other nodes are working fine.
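For readers puzzled by the file names: `%2C` is just the URL-encoded comma, and the long numbers are epoch-millisecond timestamps (the RS start time and the WAL roll time). Decoding the name from the log above:

```python
from datetime import datetime, timezone
from urllib.parse import unquote

encoded = "node8.com%2C16020%2C1515580031417.1515580037373"
name = unquote(encoded)  # %2C -> ","
print(name)  # node8.com,16020,1515580031417.1515580037373

# host , port , RS start time (ms) . WAL roll time (ms)
start_ms = int(name.split(",")[2].split(".")[0])
print(datetime.fromtimestamp(start_ms / 1000, tz=timezone.utc))
```

The decoded start time lands on 2018-01-10 around 10:27 UTC, i.e. 05:27 in the UTC-5 timestamps of the log, matching the attempt above.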

2018-01-09 16:25 GMT-05:00 Stack <stack@duboce.net>:

> On Tue, Jan 9, 2018 at 10:07 AM, Andrew Purtell <apurtell@apache.org>
> wrote:
>
> > I just vetoed the RC because TestMemstoreLABWithoutPool always fails for
> > me. It was the same with the last RC too. My Java is Oracle Java 8u144
> > running on x64 Linux (Ubuntu xenial). Let me know if you need me to
> provide
> > the test output.
> >
> >
> Ok. I can't make it fail. I'm going to disable it and file an issue where
> we can work on figuring out what is different here.
>
> Thanks A,
>
> St.Ack
>
>
>
> >
> > On Tue, Jan 9, 2018 at 9:31 AM, Stack <stack@duboce.net> wrote:
> >
> > > I put up a new RC JMS. It still has flakies (though Duo fixed
> > > TestFromClientSide...). Was thinking that we could release beta-1
> though
> > it
> > > has flakies. We'll keep working on cutting these down as we approach
> GA.
> > > St.Ack
> > >
> > > On Sun, Jan 7, 2018 at 10:02 PM, Stack <stack@duboce.net> wrote:
> > >
> > > > On Sun, Jan 7, 2018 at 3:14 AM, Jean-Marc Spaggiari <
> > > > jean-marc@spaggiari.org> wrote:
> > > >
> > > >> Ok, thanks Stack. I will keep it running all day long until I get a
> > > >> successful one. Is it useful if I report all the failures? Or just a
> > > >> waste of time? Here is the last failure:
> > > >>
> > > >> [INFO] Results:
> > > >> [INFO]
> > > >> [ERROR] Failures:
> > > >> [ERROR]   TestFromClientSide.testCheckAndDeleteWithCompareOp:4982
> > > >> expected:<false> but was:<true>
> > > >> [ERROR] Errors:
> > > >> [ERROR]   TestDLSAsyncFSWAL>AbstractTestDLS.testThreeRSAbort:401 »
> > > >> TableNotFound Region ...
> > > >> [INFO]
> > > >> [ERROR] Tests run: 3585, Failures: 1, Errors: 1, Skipped: 44
> > > >> [INFO]
> > > >>
> > > >>
> > > >>
> > > > Thanks for bringing up flakies. If we look at the nightlies' run, we
> > can
> > > > get the current list. Probably no harm if all tests pass once in a
> > while
> > > > (smile).
> > > >
> > > > Looking at your findings,
> > > > TestFromClientSide.testCheckAndDeleteWithCompareOp
> > > > looks to be new to beta-1. It's a cranky one. I'm looking at it. Might
> > > > punt to beta-2 if I can't figure it out by tomorrow. HBASE-19731.
> > > >
> > > > TestDLSAsyncFSWAL is a flakey that unfortunately passes locally.
> > > >
> > > > Let me see what others we have...
> > > >
> > > > S
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >> JMS
> > > >>
> > > >> 2018-01-07 1:55 GMT-05:00 Apekshit Sharma <appy@cloudera.com>:
> > > >>
> > > >> > bq. Don't you think we have enough branches already mighty Appy?
> > > >> > Yeah we do...sigh.
> > > >> >
> > > >> >
> > > >> > idk about that. But don't we need a *patch* branch branch-2.0
> > > >> > (just like branch-1.4) where we "make backwards-compatible bug
> > > >> > fixes" and a *minor* branch branch-2 where we "add functionality
> > > >> > in a backwards-compatible manner"?
> > > >> > Quotes are from
> > > >> > http://hbase.apache.org/book.html#hbase.versioning.post10.
> > > >> > I stumbled on this issue when thinking about backporting
> > > >> > https://issues.apache.org/jira/browse/HBASE-17436 for 2.1.
> > > >> >
> > > >> > -- Appy
> > > >> >
> > > >> >
> > > >> > On Sat, Jan 6, 2018 at 4:11 PM, stack <saint.ack@gmail.com>
> wrote:
> > > >> >
> > > >> > > It is not you. There are a bunch of flakies we need to fix. This
> > > >> > > latter one is for sure flakey. Let me take a look. Thanks, JMS.
> > > >> > >
> > > >> > > S
> > > >> > >
> > > >> > > On Jan 6, 2018 5:57 PM, "Jean-Marc Spaggiari" <
> > > >> jean-marc@spaggiari.org>
> > > >> > > wrote:
> > > >> > >
> > > >> > > I might not be doing the right magic to get that run... If
> > > >> > > someone is able to get all the tests to pass, can you please
> > > >> > > share the command you run?
> > > >> > >
> > > >> > > Thanks,
> > > >> > >
> > > >> > > JMS
> > > >> > >
> > > >> > >
> > > >> > > [INFO] Results:
> > > >> > > [INFO]
> > > >> > > [ERROR] Failures:
> > > >> > > [ERROR]   TestFromClientSide.testCheckAndDeleteWithCompareO
> p:4982
> > > >> > > expected:<false> but was:<true>
> > > >> > > [ERROR]
> > > >> > > org.apache.hadoop.hbase.master.assignment.TestMergeTableRegi
> > > >> onsProcedure
> > > >> > > .testMergeRegionsConcurrently(org.apache.hadoop.hbase.
> > master.assig
> > > >> > > nment.TestMergeTableRegionsProcedure)
> > > >> > > [ERROR]   Run 1:
> > > >> > > TestMergeTableRegionsProcedure.setup:111->resetProcExecutorT
> > > >> estingKillFl
> > > >> > > ag:138
> > > >> > > expected executor to be running
> > > >> > > [ERROR]   Run 2:
> > > >> > > TestMergeTableRegionsProcedure.tearDown:128->
> > > >> > > resetProcExecutorTestingKillFl
> > > >> > > ag:138
> > > >> > > expected executor to be running
> > > >> > > [INFO]
> > > >> > > [ERROR]
> > > >> > > org.apache.hadoop.hbase.master.assignment.TestMergeTableRegi
> > > >> onsProcedure
> > > >> > > .testMergeTwoRegions(org.apache.hadoop.hbase.master.
> > assignment.Tes
> > > >> > > tMergeTableRegionsProcedure)
> > > >> > > [ERROR]   Run 1:
> > > >> > > TestMergeTableRegionsProcedure.setup:111->resetProcExecutorT
> > > >> estingKillFl
> > > >> > > ag:138
> > > >> > > expected executor to be running
> > > >> > > [ERROR]   Run 2:
> > > >> > > TestMergeTableRegionsProcedure.tearDown:128->
> > > >> > > resetProcExecutorTestingKillFl
> > > >> > > ag:138
> > > >> > > expected executor to be running
> > > >> > > [INFO]
> > > >> > > [ERROR]
> > > >> > > org.apache.hadoop.hbase.master.assignment.TestMergeTableRegi
> > > >> onsProcedure
> > > >> > .
> > > >> > > testRecoveryAndDoubleExecution(org.apache.hadoop.hbase.
> master.ass
> > > >> > > ignment.TestMergeTableRegionsProcedure)
> > > >> > > [ERROR]   Run 1:
> > > >> > > TestMergeTableRegionsProcedure.setup:111->resetProcExecutorT
> > > >> estingKillFl
> > > >> > > ag:138
> > > >> > > expected executor to be running
> > > >> > > [ERROR]   Run 2:
> > > >> > > TestMergeTableRegionsProcedure.tearDown:128->
> > > >> > > resetProcExecutorTestingKillFl
> > > >> > > ag:138
> > > >> > > expected executor to be running
> > > >> > > [INFO]
> > > >> > > [ERROR]
> > > >> > > org.apache.hadoop.hbase.master.assignment.TestMergeTableRegi
> > > >> onsProcedure
> > > >> > .
> > > >> > > testRollbackAndDoubleExecution(org.apache.hadoop.hbase.
> master.ass
> > > >> > > ignment.TestMergeTableRegionsProcedure)
> > > >> > > [ERROR]   Run 1:
> > > >> > > TestMergeTableRegionsProcedure.testRollbackAndDoubleExecution
> :272
> > > >> > > expected:<true> but was:<false>
> > > >> > > [ERROR]   Run 2:
> > > >> > > TestMergeTableRegionsProcedure.tearDown:128->
> > > >> > > resetProcExecutorTestingKillFl
> > > >> > > ag:138
> > > >> > > expected executor to be running
> > > >> > > [INFO]
> > > >> > > [ERROR]   TestSnapshotQuotaObserverChore.testSnapshotSize:276
> > > Waiting
> > > >> > > timed
> > > >> > > out after [30 000] msec
> > > >> > > [ERROR]
> > > >> > >  TestHRegionWithInMemoryFlush>TestHRegion.testWritesWhileScan
> > > >> ning:3813
> > > >> > > expected null, but was:<org.apache.hadoop.hbase.
> > > >> > NotServingRegionException:
> > > >> > > testWritesWhileScanning,,1515277468063.468265483817cb6da6320
> > > >> 26ba5b306f6.
> > > >> > > is
> > > >> > > closing>
> > > >> > > [ERROR] Errors:
> > > >> > > [ERROR]   TestDLSAsyncFSWAL>AbstractTestDLS.
> testThreeRSAbort:401
> > »
> > > >> > > TableNotFound testThr...
> > > >> > > [ERROR]
> > > >> > > org.apache.hadoop.hbase.master.balancer.
> > TestRegionsOnMasterOptions.
> > > >> > > testRegionsOnAllServers(org.apache.hadoop.hbase.master.
> balancer.
> > > >> > > TestRegionsOnMasterOptions)
> > > >> > > [ERROR]   Run 1:
> > > >> > > TestRegionsOnMasterOptions.testRegionsOnAllServers:94->
> > > >> > > checkBalance:207->Object.wait:-2
> > > >> > > » TestTimedOut
> > > >> > > [ERROR]   Run 2: TestRegionsOnMasterOptions.
> > testRegionsOnAllServers
> > > »
> > > >> > > Appears to be stuck in t...
> > > >> > > [INFO]
> > > >> > > [INFO]
> > > >> > > [ERROR] Tests run: 3604, Failures: 7, Errors: 2, Skipped: 44
> > > >> > > [INFO]
> > > >> > >
> > > >> > >
> > > >> > > 2018-01-06 15:52 GMT-05:00 Jean-Marc Spaggiari <
> > > >> jean-marc@spaggiari.org
> > > >> > >:
> > > >> > >
> > > >> > > > Deleted the class to get all the tests running. Was running
> > > >> > > > on the RC1 from the tar.
> > > >> > > >
> > > >> > > > I now get these ones failing.
> > > >> > > >
> > > >> > > > [ERROR] Failures:
> > > >> > > > [ERROR]   TestFavoredStochasticLoadBalan
> > > >> cer.test2FavoredNodesDead:352
> > > >> > > > Balancer did not run
> > > >> > > > [ERROR]   TestRegionMergeTransactionOnCl
> > > >> uster.testCleanMergeReference:
> > > >> > > 284
> > > >> > > > hdfs://localhost:45311/user/jmspaggi/test-data/7c269e83-
> > > >> > > > 5982-449e-8cf8-6babaaaa4c7c/data/default/
> > testCleanMergeReference/
> > > >> > > > f1bdc6441b090dbacb391c74eaf0d1d0
> > > >> > > > [ERROR] Errors:
> > > >> > > > [ERROR]   TestDLSAsyncFSWAL>AbstractTestDLS.
> > testThreeRSAbort:401
> > > »
> > > >> > > > TableNotFound Region ...
> > > >> > > > [INFO]
> > > >> > > > [ERROR] Tests run: 3604, Failures: 2, Errors: 1, Skipped: 44
> > > >> > > >
> > > >> > > >
> > > >> > > > I have not been able to get all the tests to pass locally for
> > > >> > > > a while :(
> > > >> > > >
> > > >> > > > JM
> > > >> > > >
> > > >> > > > 2018-01-06 15:05 GMT-05:00 Ted Yu <yuzhihong@gmail.com>:
> > > >> > > >
> > > >> > > >> Looks like you didn't include HBASE-19666 which would be in
> the
> > > >> next
> > > >> > RC.
> > > >> > > >>
> > > >> > > >> On Sat, Jan 6, 2018 at 10:52 AM, Jean-Marc Spaggiari <
> > > >> > > >> jean-marc@spaggiari.org> wrote:
> > > >> > > >>
> > > >> > > >> > Trying with a different command line (mvn test -P runAllTests
> > > >> > > >> > -Dsurefire.secondPartThreadCount=12
> > > >> > > >> > -Dtest.build.data.basedirectory=/ram4g)
> > > >> > > >> > I get all of these failing. How are you able to get
> > > >> > > >> > everything to pass???
> > > >> > > >> >
> > > >> > > >> > [INFO] Results:
> > > >> > > >> > [INFO]
> > > >> > > >> > [ERROR] Failures:
> > > >> > > >> > [ERROR]   TestDefaultCompactSelection.
> > testCompactionRatio:74->
> > > >> > TestCom
> > > >> > > >> > pactionPolicy.compactEquals:182->TestCompactionPolicy.
> > > >> > > compactEquals:201
> > > >> > > >> > expected:<[[4, 2, 1]]> but was:<[[]]>
> > > >> > > >> > [ERROR]   TestDefaultCompactSelection.
> > > >> > testStuckStoreCompaction:145->T
> > > >> > > >> > estCompactionPolicy.compactEquals:182->
> TestCompactionPolicy.
> > > >> > > >> compactEquals:201
> > > >> > > >> > expected:<[[]30, 30, 30]> but was:<[[99, 30, ]30, 30, 30]>
> > > >> > > >> > [INFO]
> > > >> > > >> > [ERROR] Tests run: 1235, Failures: 2, Errors: 0, Skipped: 4
> > > >> > > >> >
> > > >> > > >> > Second run:
> > > >> > > >> > [INFO] Results:
> > > >> > > >> > [INFO]
> > > >> > > >> > [ERROR] Failures:
> > > >> > > >> > [ERROR]   TestDefaultCompactSelection.
> > testCompactionRatio:74->
> > > >> > > >> > TestCompactionPolicy.compactEquals:182->
> TestCompactionPolicy
> > > >> > > >> .compactEquals:201
> > > >> > > >> > expected:<[[4, 2, 1]]> but was:<[[]]>
> > > >> > > >> > [ERROR]   TestDefaultCompactSelection.
> > > >> > testStuckStoreCompaction:145->
> > > >> > > >> > TestCompactionPolicy.compactEquals:182->
> TestCompactionPolicy
> > > >> > > >> .compactEquals:201
> > > >> > > >> > expected:<[[]30, 30, 30]> but was:<[[99, 30, ]30, 30, 30]>
> > > >> > > >> > [INFO]
> > > >> > > >> > [ERROR] Tests run: 1235, Failures: 2, Errors: 0, Skipped: 4
> > > >> > > >> >
> > > >> > > >> > Then again:
> > > >> > > >> >
> > > >> > > >> > [INFO] Results:
> > > >> > > >> > [INFO]
> > > >> > > >> > [ERROR] Failures:
> > > >> > > >> > [ERROR]   TestDefaultCompactSelection.
> > testCompactionRatio:74->
> > > >> > > >> > TestCompactionPolicy.compactEquals:182->
> TestCompactionPolicy
> > > >> > > >> .compactEquals:201
> > > >> > > >> > expected:<[[4, 2, 1]]> but was:<[[]]>
> > > >> > > >> > [ERROR]   TestDefaultCompactSelection.
> > > >> > testStuckStoreCompaction:145->
> > > >> > > >> > TestCompactionPolicy.compactEquals:182->
> TestCompactionPolicy
> > > >> > > >> .compactEquals:201
> > > >> > > >> > expected:<[[]30, 30, 30]> but was:<[[99, 30, ]30, 30, 30]>
> > > >> > > >> > [INFO]
> > > >> > > >> > [ERROR] Tests run: 1235, Failures: 2, Errors: 0, Skipped: 4
> > > >> > > >> > [INFO]
> > > >> > > >> > [INFO] ------------------------------
> > > >> ------------------------------
> > > >> > > >> > ------------
> > > >> > > >> > [INFO] Reactor Summary:
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> > Sounds like it's always the exact same result. Do I have a
> > > >> > > >> > way to exclude this TestCompactionPolicy test from the run?
> > > >> > > >> >
> > > >> > > >> > Here are more details from the last failure:
> > > >> > > >> > ------------------------------
> ------------------------------
> > > >> > > >> > -------------------
> > > >> > > >> > Test set: org.apache.hadoop.hbase.regionserver.
> > > >> > > TestDefaultCompactSelec
> > > >> > > >> tion
> > > >> > > >> > ------------------------------
> ------------------------------
> > > >> > > >> > -------------------
> > > >> > > >> > Tests run: 4, Failures: 2, Errors: 0, Skipped: 0, Time
> > elapsed:
> > > >> > 1.323
> > > >> > > s
> > > >> > > >> > <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.
> > > >> > > >> > TestDefaultCompactSelection
> > > >> > > >> > testStuckStoreCompaction(org.apache.hadoop.hbase.
> regionserve
> > > >> > > >> r.TestDefaultCompactSelection)
> > > >> > > >> > Time elapsed: 1.047 s  <<< FAILURE!
> > > >> > > >> > org.junit.ComparisonFailure: expected:<[[]30, 30, 30]> but
> > > >> > was:<[[99,
> > > >> > > >> 30,
> > > >> > > >> > ]30, 30, 30]>
> > > >> > > >> >         at org.apache.hadoop.hbase.regionserver.
> > > >> > > >> > TestDefaultCompactSelection.testStuckStoreCompaction(
> > > >> > > >> > TestDefaultCompactSelection.java:145)
> > > >> > > >> >
> > > >> > > >> > testCompactionRatio(org.apache.hadoop.hbase.
> regionserver.Tes
> > > >> > > >> tDefaultCompactSelection)
> > > >> > > >> > Time elapsed: 0.096 s  <<< FAILURE!
> > > >> > > >> > org.junit.ComparisonFailure: expected:<[[4, 2, 1]]> but
> > > >> was:<[[]]>
> > > >> > > >> >         at org.apache.hadoop.hbase.regionserver.
> > > >> > > >> > TestDefaultCompactSelection.testCompactionRatio(
> > > >> > > >> > TestDefaultCompactSelection.java:74)
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> > 2018-01-06 12:53:53,240 WARN
> [StoreOpener-22ce1d683ba4b6b93
> > > >> > > >> 73a3c541ebab2a2-1]
> > > >> > > >> > util.CommonFSUtils(536): FileSystem doesn't support
> > > >> > setStoragePolicy;
> > > >> > > >> > HDFS-6584, HDFS-9345 not available. This is normal and
> > expected
> > > >> on
> > > >> > > >> earlier
> > > >> > > >> > Hadoop versions.
> > > >> > > >> > java.lang.NoSuchMethodException: org.apache.hadoop.fs.
> > > >> > > LocalFileSystem.
> > > >> > > >> > setStoragePolicy(org.apache.hadoop.fs.Path,
> > java.lang.String)
> > > >> > > >> >         at java.lang.Class.getDeclaredMethod(Class.java:
> > 2130)
> > > >> > > >> >         at org.apache.hadoop.hbase.util.CommonFSUtils.
> > > >> > > >> > invokeSetStoragePolicy(CommonFSUtils.java:528)
> > > >> > > >> >         at org.apache.hadoop.hbase.util.CommonFSUtils.
> > > >> > > setStoragePolicy(
> > > >> > > >> > CommonFSUtils.java:518)
> > > >> > > >> >         at org.apache.hadoop.hbase.region
> > > >> server.HRegionFileSystem.
> > > >> > > >> > setStoragePolicy(HRegionFileSystem.java:193)
> > > >> > > >> >         at org.apache.hadoop.hbase.
> > regionserver.HStore.<init>(
> > > >> > > >> > HStore.java:250)
> > > >> > > >> >         at org.apache.hadoop.hbase.regionserver.HRegion.
> > > >> > > >> > instantiateHStore(HRegion.java:5497)
> > > >> > > >> >         at org.apache.hadoop.hbase.
> > > regionserver.HRegion$1.call(
> > > >> > > >> > HRegion.java:1002)
> > > >> > > >> >         at org.apache.hadoop.hbase.
> > > regionserver.HRegion$1.call(
> > > >> > > >> > HRegion.java:999)
> > > >> > > >> >         at java.util.concurrent.FutureTas
> > > >> k.run(FutureTask.java:266)
> > > >> > > >> >         at java.util.concurrent.Executors$RunnableAdapter.
> > > >> > > >> > call(Executors.java:511)
> > > >> > > >> >         at java.util.concurrent.FutureTas
> > > >> k.run(FutureTask.java:266)
> > > >> > > >> >         at java.util.concurrent.
> > ThreadPoolExecutor.runWorker(
> > > >> > > >> > ThreadPoolExecutor.java:1149)
> > > >> > > >> >         at java.util.concurrent.
> > ThreadPoolExecutor$Worker.run(
> > > >> > > >> > ThreadPoolExecutor.java:624)
> > > >> > > >> >         at java.lang.Thread.run(Thread.java:748)
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> > 2018-01-06 12:53:53,322 DEBUG [main]
> util.CommonFSUtils(538):
> > > >> > > FileSystem
> > > >> > > >> > doesn't support setStoragePolicy; HDFS-6584, HDFS-9345 not
> > > >> > available.
> > > >> > > >> This
> > > >> > > >> > is normal and expected on earlier Hadoop versions.
> > > >> > > >> > java.lang.NoSuchMethodException: org.apache.hadoop.fs.
> > > >> > > LocalFileSystem.
> > > >> > > >> > setStoragePolicy(org.apache.hadoop.fs.Path,
> > java.lang.String)
> > > >> > > >> >         at java.lang.Class.getDeclaredMethod(Class.java:
> > 2130)
> > > >> > > >> >         at org.apache.hadoop.hbase.util.CommonFSUtils.
> > > >> > > >> > invokeSetStoragePolicy(CommonFSUtils.java:528)
> > > >> > > >> >         at org.apache.hadoop.hbase.util.CommonFSUtils.
> > > >> > > setStoragePolicy(
> > > >> > > >> > CommonFSUtils.java:518)
> > > >> > > >> >         at org.apache.hadoop.hbase.region
> > > >> server.HRegionFileSystem.
> > > >> > > >> > setStoragePolicy(HRegionFileSystem.java:193)
> > > >> > > >> >         at org.apache.hadoop.hbase.
> > regionserver.HStore.<init>(
> > > >> > > >> > HStore.java:250)
> > > >> > > >> >         at org.apache.hadoop.hbase.regionserver.
> > > >> > TestCompactionPolicy.
> > > >> > > >> > initialize(TestCompactionPolicy.java:109)
> > > >> > > >> >         at org.apache.hadoop.hbase.regionserver.
> > > >> > > >> > TestCompactionPolicy.setUp(TestCompactionPolicy.java:69)
> > > >> > > >> >         at sun.reflect.NativeMethodAccessorImpl.
> > invoke0(Native
> > > >> > > Method)
> > > >> > > >> >         at sun.reflect.NativeMethodAccessorImpl.invoke(
> > > >> > > >> > NativeMethodAccessorImpl.java:62)
> > > >> > > >> >         at sun.reflect.DelegatingMethodAccessorImpl.
> invoke(
> > > >> > > >> > DelegatingMethodAccessorImpl.java:43)
> > > >> > > >> >         at java.lang.reflect.Method.
> invoke(Method.java:498)
> > > >> > > >> >         at org.junit.runners.model.FrameworkMethod$1.
> > > >> > > runReflectiveCall(
> > > >> > > >> > FrameworkMethod.java:50)
> > > >> > > >> >         at org.junit.internal.runners.mod
> > > >> el.ReflectiveCallable.run(
> > > >> > > >> > ReflectiveCallable.java:12)
> > > >> > > >> >         at org.junit.runners.model.FrameworkMethod.
> > > >> > invokeExplosively(
> > > >> > > >> > FrameworkMethod.java:47)
> > > >> > > >> >         at org.junit.internal.runners.
> statements.RunBefores.
> > > >> > > >> > evaluate(RunBefores.java:24)
> > > >> > > >> >         at org.junit.internal.runners.
> > > >> > statements.RunAfters.evaluate(
> > > >> > > >> > RunAfters.java:27)
> > > >> > > >> >         at org.junit.runners.ParentRunner.runLeaf(
> > > >> > > ParentRunner.java:325
> > > >> > > >> )
> > > >> > > >> >         at org.junit.runners.BlockJUnit4ClassRunner.
> > runChild(
> > > >> > > >> > BlockJUnit4ClassRunner.java:78)
> > > >> > > >> >         at org.junit.runners.BlockJUnit4ClassRunner.
> > runChild(
> > > >> > > >> > BlockJUnit4ClassRunner.java:57)
> > > >> > > >> >         at org.junit.runners.ParentRunner$3.run(
> > > >> > > ParentRunner.java:290)
> > > >> > > >> >         at org.junit.runners.ParentRunner$1.schedule(
> > > >> > > ParentRunner.java:
> > > >> > > >> 71)
> > > >> > > >> >         at org.junit.runners.ParentRunner.runChildren(
> > > >> > > >> > ParentRunner.java:288)
> > > >> > > >> >         at org.junit.runners.ParentRunner.access$000(
> > > >> > > ParentRunner.java:
> > > >> > > >> 58)
> > > >> > > >> >         at org.junit.runners.ParentRunner$2.evaluate(
> > > >> > > >> > ParentRunner.java:268)
> > > >> > > >> >         at org.junit.runners.
> ParentRunner.run(ParentRunner.
> > > >> > java:363)
> > > >> > > >> >         at org.junit.runners.Suite.
> runChild(Suite.java:128)
> > > >> > > >> >         at org.junit.runners.Suite.runChild(Suite.java:27)
> > > >> > > >> >         at org.junit.runners.ParentRunner$3.run(
> > > >> > > ParentRunner.java:290)
> > > >> > > >> >         at org.junit.runners.ParentRunner$1.schedule(
> > > >> > > ParentRunner.java:
> > > >> > > >> 71)
> > > >> > > >> >         at org.junit.runners.ParentRunner.runChildren(
> > > >> > > >> > ParentRunner.java:288)
> > > >> > > >> >         at org.junit.runners.ParentRunner.access$000(
> > > >> > > ParentRunner.java:
> > > >> > > >> 58)
> > > >> > > >> >         at org.junit.runners.ParentRunner$2.evaluate(
> > > >> > > >> > ParentRunner.java:268)
> > > >> > > >> >         at org.junit.runners.
> ParentRunner.run(ParentRunner.
> > > >> > java:363)
> > > >> > > >> >         at org.apache.maven.surefire.
> > junitcore.JUnitCore.run(
> > > >> > > >> > JUnitCore.java:55)
> > > >> > > >> >         at org.apache.maven.surefire.
> > > junitcore.JUnitCoreWrapper.
> > > >> > > >> > createRequestAndRun(JUnitCoreWrapper.java:137)
> > > >> > > >> >         at org.apache.maven.surefire.
> > > junitcore.JUnitCoreWrapper.
> > > >> > > >> > executeEager(JUnitCoreWrapper.java:107)
> > > >> > > >> >         at org.apache.maven.surefire.
> > > junitcore.JUnitCoreWrapper.
> > > >> > > >> > execute(JUnitCoreWrapper.java:83)
> > > >> > > >> >         at org.apache.maven.surefire.
> > > junitcore.JUnitCoreWrapper.
> > > >> > > >> > execute(JUnitCoreWrapper.java:75)
> > > >> > > >> >         at org.apache.maven.surefire.juni
> > > >> tcore.JUnitCoreProvider.
> > > >> > > >> > invoke(JUnitCoreProvider.java:159)
> > > >> > > >> >         at org.apache.maven.surefire.booter.ForkedBooter.
> > > >> > > >> > invokeProviderInSameClassLoader(ForkedBooter.java:373)
> > > >> > > >> >         at org.apache.maven.surefire.booter.ForkedBooter.
> > > >> > > >> > runSuitesInProcess(ForkedBooter.java:334)
> > > >> > > >> >         at org.apache.maven.surefire.boot
> > > >> er.ForkedBooter.execute(
> > > >> > > >> > ForkedBooter.java:119)
> > > >> > > >> >         at org.apache.maven.surefire.
> > booter.ForkedBooter.main(
> > > >> > > >> > ForkedBooter.java:407)
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> > 2018-01-06 12:53:53,398 INFO  [main]
> > > hbase.ResourceChecker(172):
> > > >> > > after:
> > > >> > > >> > regionserver.TestDefaultCompactSelection#
> testStuckStoreCompa
> > > >> ction
> > > >> > > >> > Thread=11 (was 7)
> > > >> > > >> > Potentially hanging thread: Monitor thread for TaskMonitor
> > > >> > > >> >         java.lang.Thread.sleep(Native Method)
> > > >> > > >> >         org.apache.hadoop.hbase.monitoring.TaskMonitor$
> > > >> > > >> > MonitorRunnable.run(TaskMonitor.java:302)
> > > >> > > >> >         java.lang.Thread.run(Thread.java:748)
> > > >> > > >> >
> > > >> > > >> > Potentially hanging thread: org.apache.hadoop.fs.
> > > >> > > FileSystem$Statistics$
> > > >> > > >> > StatisticsDataReferenceCleaner
> > > >> > > >> >         java.lang.Object.wait(Native Method)
> > > >> > > >> >         java.lang.ref.ReferenceQueue.
> > > remove(ReferenceQueue.java:
> > > >> > 143)
> > > >> > > >> >         java.lang.ref.ReferenceQueue.
> > > remove(ReferenceQueue.java:
> > > >> > 164)
> > > >> > > >> >         org.apache.hadoop.fs.FileSystem$Statistics$
> > > >> > > >> > StatisticsDataReferenceCleaner.run(FileSystem.java:3063)
> > > >> > > >> >         java.lang.Thread.run(Thread.java:748)
> > > >> > > >> >
> > > >> > > >> > Potentially hanging thread: LruBlockCacheStatsExecutor
> > > >> > > >> >         sun.misc.Unsafe.park(Native Method)
> > > >> > > >> >         java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> > > >> > > >> >         java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> > > >> > > >> >         java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
> > > >> > > >> >         java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> > > >> > > >> >         java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> > > >> > > >> >         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> > > >> > > >> >         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > >> > > >> >         java.lang.Thread.run(Thread.java:748)
> > > >> > > >> >
> > > >> > > >> > Potentially hanging thread: StoreOpener-22ce1d683ba4b6b9373a3c541ebab2a2-1.LruBlockCache.EvictionThread
> > > >> > > >> >         java.lang.Object.wait(Native Method)
> > > >> > > >> >         org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:894)
> > > >> > > >> >         java.lang.Thread.run(Thread.java:748)
> > > >> > > >> >  - Thread LEAK? -, OpenFileDescriptor=232 (was 232),
> > > >> > > >> > MaxFileDescriptor=1048576 (was 1048576), SystemLoadAverage=204 (was 204),
> > > >> > > >> > ProcessCount=273 (was 273), AvailableMemoryMB=4049 (was 4132)
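The "Potentially hanging thread" / "Thread LEAK?" report above comes from a per-test resource check that snapshots the set of live threads before and after a test and flags any survivor that appeared during it. A minimal sketch of that idea in plain Java follows; the class and thread names are hypothetical illustrations, not HBase's actual resource-checker code:

```java
import java.util.Set;
import java.util.TreeSet;

// Hedged sketch of a thread-leak check: snapshot live thread names
// before the "test", snapshot again after it, and report any thread
// that appeared and is still alive.
class ThreadLeakSketch {
    static Set<String> liveThreadNames() {
        Set<String> names = new TreeSet<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            names.add(t.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        Set<String> before = liveThreadNames();

        // Simulate a test that leaves a worker thread running behind it.
        Thread leaked = new Thread(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException e) {
                // Exit quietly when cleaned up below.
            }
        }, "LruBlockCacheStatsExecutor-sketch");
        leaked.setDaemon(true);
        leaked.start();

        Set<String> after = liveThreadNames();
        after.removeAll(before);
        System.out.println("Potentially hanging threads: " + after);

        leaked.interrupt(); // clean up the simulated leak
    }
}
```

The real checker additionally prints each survivor's stack trace (the frames quoted above), which is how you can tell an executor merely idling in DelayedWorkQueue.take from a thread that is genuinely stuck.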
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> > Full log attached
> > > >> > > >> >
> > > >> > > >> > Thanks,
> > > >> > > >> >
> > > >> > > >> > JMS
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >> > 2018-01-06 9:34 GMT-05:00 Mike Drob <mdrob@apache.org>:
> > > >> > > >> >
> > > >> > > >> >> I can reproduce the issue locally. IIRC this is related to the
> > > >> > > >> >> version of Java being used, but we can discuss in more detail on
> > > >> > > >> >> the JIRA.
> > > >> > > >> >>
> > > >> > > >> >> https://issues.apache.org/jira/browse/HBASE-19721
> > > >> > > >> >>
> > > >> > > >> >> Thanks, JMS!
> > > >> > > >> >>
> > > >> > > >> >> On Sat, Jan 6, 2018 at 6:42 AM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:
> > > >> > > >> >>
> > > >> > > >> >> > How are you guys able to get the tests running?
> > > >> > > >> >> >
> > > >> > > >> >> > For me it keeps failing on TestReversedScannerCallable.
> > > >> > > >> >> >
> > > >> > > >> >> > I tried many times; it always fails in the same place. I'm
> > > >> > > >> >> > running on a 4GB tmpfs. Details are below. Am I doing something
> > > >> > > >> >> > wrong?
> > > >> > > >> >> >
> > > >> > > >> >> > JM
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> > ./dev-support/hbasetests.sh runAllTests
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> > [INFO] Running org.apache.hadoop.hbase.client.TestOperation
> > > >> > > >> >> > [INFO]
> > > >> > > >> >> > [INFO] Results:
> > > >> > > >> >> > [INFO]
> > > >> > > >> >> > [ERROR] Errors:
> > > >> > > >> >> > [ERROR]   TestReversedScannerCallable.unnecessary Mockito stubbings » UnnecessaryStubbing
> > > >> > > >> >> > [INFO]
> > > >> > > >> >> > [ERROR] Tests run: 245, Failures: 0, Errors: 1, Skipped: 8
> > > >> > > >> >> > [INFO]
> > > >> > > >> >> > [INFO]
> > > >> > > >> >> > ------------------------------------------------------------------------
> > > >> > > >> >> > [INFO] Reactor Summary:
> > > >> > > >> >> > [INFO]
> > > >> > > >> >> > [INFO] Apache HBase ....................................... SUCCESS [  1.409 s]
> > > >> > > >> >> > [INFO] Apache HBase - Checkstyle .......................... SUCCESS [  1.295 s]
> > > >> > > >> >> > [INFO] Apache HBase - Build Support ....................... SUCCESS [  0.038 s]
> > > >> > > >> >> > [INFO] Apache HBase - Error Prone Rules ................... SUCCESS [  1.069 s]
> > > >> > > >> >> > [INFO] Apache HBase - Annotations ......................... SUCCESS [  1.450 s]
> > > >> > > >> >> > [INFO] Apache HBase - Build Configuration ................. SUCCESS [  0.073 s]
> > > >> > > >> >> > [INFO] Apache HBase - Shaded Protocol ..................... SUCCESS [ 14.292 s]
> > > >> > > >> >> > [INFO] Apache HBase - Common .............................. SUCCESS [01:51 min]
> > > >> > > >> >> > [INFO] Apache HBase - Metrics API ......................... SUCCESS [  2.878 s]
> > > >> > > >> >> > [INFO] Apache HBase - Hadoop Compatibility ................ SUCCESS [ 12.216 s]
> > > >> > > >> >> > [INFO] Apache HBase - Metrics Implementation .............. SUCCESS [  7.206 s]
> > > >> > > >> >> > [INFO] Apache HBase - Hadoop Two Compatibility ............ SUCCESS [ 12.440 s]
> > > >> > > >> >> > [INFO] Apache HBase - Protocol ............................ SUCCESS [  0.074 s]
> > > >> > > >> >> > [INFO] Apache HBase - Client .............................. FAILURE [02:10 min]
> > > >> > > >> >> > [INFO] Apache HBase - Zookeeper ........................... SKIPPED
> > > >> > > >> >> > [INFO] Apache HBase - Replication ......................... SKIPPED
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> > -------------------------------------------------------------------------------
> > > >> > > >> >> > Test set: org.apache.hadoop.hbase.client.TestReversedScannerCallable
> > > >> > > >> >> > -------------------------------------------------------------------------------
> > > >> > > >> >> > Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.515 s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestReversedScannerCallable
> > > >> > > >> >> > unnecessary Mockito stubbings(org.apache.hadoop.hbase.client.TestReversedScannerCallable)  Time elapsed: 0.014 s  <<< ERROR!
> > > >> > > >> >> > org.mockito.exceptions.misusing.UnnecessaryStubbingException:
> > > >> > > >> >> >
> > > >> > > >> >> > Unnecessary stubbings detected in test class: TestReversedScannerCallable
> > > >> > > >> >> > Clean & maintainable test code requires zero unnecessary code.
> > > >> > > >> >> > Following stubbings are unnecessary (click to navigate to relevant line of code):
> > > >> > > >> >> >   1. -> at org.apache.hadoop.hbase.client.TestReversedScannerCallable.setUp(TestReversedScannerCallable.java:66)
> > > >> > > >> >> >   2. -> at org.apache.hadoop.hbase.client.TestReversedScannerCallable.setUp(TestReversedScannerCallable.java:68)
> > > >> > > >> >> > Please remove unnecessary stubbings. More info: javadoc for UnnecessaryStubbingException class.
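The UnnecessaryStubbingException above is Mockito's strict-stubs check at work: every when(...) stubbing is recorded, every real invocation marks its stubbing as used, and at the end of the test any stubbing that was never exercised (here, the ones set up at TestReversedScannerCallable.java:66 and :68) fails the run. A minimal sketch of that bookkeeping in plain Java, using a hypothetical miniature stub recorder rather than the real Mockito API:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hedged sketch of Mockito's strict-stubs bookkeeping (hypothetical
// names, not the real Mockito API): record every stubbing, mark each
// one as used when invoked, and report the leftovers afterwards.
class MiniStrictStubs {
    private final Map<String, String> answers = new LinkedHashMap<>();
    private final Set<String> used = new HashSet<>();

    void when(String call, String answer) {
        answers.put(call, answer);
    }

    String invoke(String call) {
        used.add(call);
        return answers.get(call);
    }

    // Mirrors the UnnecessaryStubbingException report: stubbings that
    // were configured but never exercised by the test.
    List<String> unnecessaryStubbings() {
        List<String> unused = new ArrayList<>(answers.keySet());
        unused.removeAll(used);
        return unused;
    }

    public static void main(String[] args) {
        MiniStrictStubs stubs = new MiniStrictStubs();
        stubs.when("conn.getRegionLocation()", "loc1");
        stubs.when("conn.getConfiguration()", "conf"); // never invoked below
        stubs.invoke("conn.getRegionLocation()");
        System.out.println(stubs.unnecessaryStubbings()); // prints: [conn.getConfiguration()]
    }
}
```

The fix the exception asks for is usually just deleting the unused stubbings; when a shared setUp intentionally stubs more than every test uses, Mockito also offers a lenient() escape hatch per stubbing.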
> > > >> > > >> >> >
> > > >> > > >> >> >
> > > >> > > >> >> > 2018-01-06 0:44 GMT-05:00 stack <saint.ack@gmail.com>:
> > > >> > > >> >> >
> > > >> > > >> >> > > On Jan 5, 2018 4:44 PM, "Apekshit Sharma" <appy@cloudera.com> wrote:
> > > >> > > >> >> > >
> > > >> > > >> >> > > bq. Care needs to be exercised backporting. Bug fixes only
> > > >> > > >> >> > > please. If in doubt, ping me, the RM, please. Thanks.
> > > >> > > >> >> > >
> > > >> > > >> >> > > In that case, shouldn't we branch out branch-2.0? We can then
> > > >> > > >> >> > > do normal backports to branch-2 and only bug fixes to branch-2.0.
> > > >> > > >> >> > >
> > > >> > > >> >> > > Don't you think we have enough branches already, mighty Appy?
> > > >> > > >> >> > >
> > > >> > > >> >> > > No new features on branch-2? New features are in master/3.0.0 only?
> > > >> > > >> >> > >
> > > >> > > >> >> > > S
> > > >> > > >> >> > >
> > > >> > > >> >> > >
> > > >> > > >> >> > >
> > > >> > > >> >> > >
> > > >> > > >> >> > >
> > > >> > > >> >> > >
> > > >> > > >> >> > > On Fri, Jan 5, 2018 at 9:48 AM, Andrew Purtell <apurtell@apache.org> wrote:
> > > >> > > >> >> > >
> > > >> > > >> >> > > > TestMemstoreLABWithoutPool is a flake, not a consistent fail.
> > > >> > > >> >> > > >
> > > >> > > >> >> > > >
> > > >> > > >> >> > > > On Fri, Jan 5, 2018 at 7:18 AM, Stack <stack@duboce.net> wrote:
> > > >> > > >> >> > > >
> > > >> > > >> >> > > > > On Thu, Jan 4, 2018 at 2:24 PM, Andrew Purtell <apurtell@apache.org> wrote:
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > > > This one is probably my fault:
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > > > TestDefaultCompactSelection
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > > > HBASE-19406
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > > Balazs fixed it above, HBASE-19666
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > > > It can easily be reverted. The failure of interest
> > > >> > > >> >> > > > > > is TestMemstoreLABWithoutPool.testLABChunkQueueWithMultipleMSLABs.
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > > This seems fine. Passes in nightly
> > > >> > > >> >> > > > > https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/171/testReport/org.apache.hadoop.hbase.regionserver/TestMemstoreLABWithoutPool/
> > > >> > > >> >> > > > > and locally against the tag. It fails consistently for you, Andrew?
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > > > > Should all unit tests pass on a beta? I think so, at
> > > >> > > >> >> > > > > > > least if the failures are 100% repeatable.
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > > This is fair. Let me squash this RC and roll another.
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > > Will put it up in a few hours.
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > > Thanks,
> > > >> > > >> >> > > > > S
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > > > > > -0
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > > Checked sums and signatures: ok
> > > >> > > >> >> > > > > > > RAT check: ok
> > > >> > > >> >> > > > > > > Built from source: ok (8u144)
> > > >> > > >> >> > > > > > > Ran unit tests: some failures (8u144)
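The "Checked sums and signatures" step above boils down to recomputing the artifact's digest and comparing it byte-for-byte against the published one (the signature half is then verified with gpg against the RM's key). A minimal sketch of the checksum half in Java; the demo writes a throwaway file, and in a real check you would point sha512Hex at the downloaded tarball and compare against the .sha512 file published next to it:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Recompute a file's SHA-512 digest as lowercase hex, for comparison
// with a published checksum.
class ChecksumCheck {
    static String sha512Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Demo on a throwaway file; for a real RC, diff this output
        // against the published .sha512 in the dist directory.
        Path demo = Files.createTempFile("rc-demo", ".bin");
        Files.write(demo, "abc".getBytes());
        System.out.println(sha512Hex(demo));
        Files.delete(demo);
    }
}
```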
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > > [ERROR]   TestDefaultCompactSelection.testCompactionRatio:74->TestCompactionPolicy.compactEquals:182->TestCompactionPolicy.compactEquals:201
> > > >> > > >> >> > > > > > > expected:<[[4, 2, 1]]> but was:<[[]]>
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > > [ERROR]   TestDefaultCompactSelection.testStuckStoreCompaction:145->TestCompactionPolicy.compactEquals:182->TestCompactionPolicy.compactEquals:201
> > > >> > > >> >> > > > > > > expected:<[[]30, 30, 30]> but was:<[[99, 30, ]30, 30, 30]>
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > > [ERROR]   TestMemstoreLABWithoutPool.testLABChunkQueueWithMultipleMSLABs:143
> > > >> > > >> >> > > > > > > All the chunks must have been cleared
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > > On Fri, Dec 29, 2017 at 10:15 AM, Stack <stack@duboce.net> wrote:
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > >> The first release candidate for HBase 2.0.0-beta-1 is up at:
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >>  https://dist.apache.org/repos/dist/dev/hbase/hbase-2.0.0-beta-1-RC0/
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> Maven artifacts are available from a staging directory here:
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >>  https://repository.apache.org/content/repositories/orgapachehbase-1188
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> All was signed with my key at 8ACC93D2 [1]
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> I tagged the RC as 2.0.0-beta-1-RC0 (0907563eb72697b394b8b960fe54887d6ff304fd)
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> hbase-2.0.0-beta-1 is our first beta release. It includes all
> > > >> > > >> >> > > > > > >> that was in previous alphas (new assignment manager, offheap
> > > >> > > >> >> > > > > > >> read/write path, in-memory compactions, etc.). The APIs and
> > > >> > > >> >> > > > > > >> feature-set are sealed.
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> hbase-2.0.0-beta-1 is a not-for-production preview of
> > > >> > > >> >> > > > > > >> hbase-2.0.0. It is meant for devs and downstreamers to test
> > > >> > > >> >> > > > > > >> drive and flag us if we messed up on anything ahead of our
> > > >> > > >> >> > > > > > >> rolling GAs. We are particularly interested in hearing from
> > > >> > > >> >> > > > > > >> Coprocessor developers.
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> The list of features addressed in 2.0.0 so far can be found
> > > >> > > >> >> > > > > > >> here [3]. There are thousands. The list of ~2k+ fixes
> > > >> > > >> >> > > > > > >> exclusively in 2.0.0 can be found here [4] (my JIRA JQL foo is
> > > >> > > >> >> > > > > > >> a bit dodgy -- forgive any mistakes).
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> I've updated our overview doc. on the state of 2.0.0 [6].
> > > >> > > >> >> > > > > > >> We'll do one more beta, 2.0.0-beta-2, before we put up our
> > > >> > > >> >> > > > > > >> first 2.0.0 Release Candidate by the end of January. Its focus
> > > >> > > >> >> > > > > > >> will be making it so users can do a rolling upgrade onto
> > > >> > > >> >> > > > > > >> hbase-2.x from hbase-1.x (plus any bug fixes found running
> > > >> > > >> >> > > > > > >> beta-1). Here is the list of what we have targeted so far for
> > > >> > > >> >> > > > > > >> beta-2 [5]. Check it out.
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> One known issue is that the User API has not been properly
> > > >> > > >> >> > > > > > >> filtered, so it shows more than just InterfaceAudience Public
> > > >> > > >> >> > > > > > >> content (HBASE-19663, to be fixed by beta-2).
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> Please take this beta for a spin. Please vote on whether it is
> > > >> > > >> >> > > > > > >> OK to put out this RC as our first beta (note CHANGES has not
> > > >> > > >> >> > > > > > >> yet been updated). Let the VOTE be open for 72 hours (Monday).
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> Thanks,
> > > >> > > >> >> > > > > > >> Your 2.0.0 Release Manager
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >> 1. http://pgp.mit.edu/pks/lookup?op=get&search=0x9816C7FC8ACC93D2
> > > >> > > >> >> > > > > > >> 3. https://goo.gl/scYjJr
> > > >> > > >> >> > > > > > >> 4. https://goo.gl/dFFT8b
> > > >> > > >> >> > > > > > >> 5. https://issues.apache.org/jira/projects/HBASE/versions/12340862
> > > >> > > >> >> > > > > > >> 6. https://docs.google.com/document/d/1WCsVlnHjJeKUcl7wHwqb4z9iEu_ktczrlKHK8N4SZzs/
> > > >> > > >> >> > > > > > >>
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > > --
> > > >> > > >> >> > > > > > > Best regards,
> > > >> > > >> >> > > > > > > Andrew
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > > > Words like orphans lost among the crosstalk,
> > > meaning
> > > >> > torn
> > > >> > > >> from
> > > >> > > >> >> > > > truth's
> > > >> > > >> >> > > > > > > decrepit hands
> > > >> > > >> >> > > > > > >    - A23, Crosstalk
> > > >> > > >> >> > > > > > >
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > > >
> > > >> > > >> >> > > > >
> > > >> > > >> >> > > >
> > > >> > > >> >> > > >
> > > >> > > >> >> > > >
> > > >> > > >> >> > > >
> > > >> > > >> >> > >
> > > >> > > >> >> > >
> > > >> > > >> >> > >
> > > >> > > >> >> > > --
> > > >> > > >> >> > >
> > > >> > > >> >> > > -- Appy
> > > >> > > >> >> > >
> > > >> > > >> >> >
> > > >> > > >> >>
> > > >> > > >> >
> > > >> > > >> >
> > > >> > > >>
> > > >> > > >
> > > >> > > >
> > > >> > >
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >>
> > > >
> > > >
> > >
> >
> >
> >
> >
>
