Date: Wed, 21 Jan 2015 06:27:34 +0000 (UTC)
From: "Hadoop QA (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-12864) IntegrationTestTableSnapshotInputFormat fails

    [ https://issues.apache.org/jira/browse/HBASE-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285241#comment-14285241 ]

Hadoop QA commented on HBASE-12864:
-----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12693498/hbase-12864_v1.patch
  against master branch at commit 9bdb81f0a1db308a8a452379455b6bbfe70ea20d.
  ATTACHMENT ID: 12693498

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

    {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors.

    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.

    {color:green}+1 site{color}. The mvn site goal succeeds with this patch.

    {color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//artifact/patchprocess/checkstyle-aggregate.html

Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/12525//console

This message is automatically generated.
> IntegrationTestTableSnapshotInputFormat fails
> ---------------------------------------------
>
>                 Key: HBASE-12864
>                 URL: https://issues.apache.org/jira/browse/HBASE-12864
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>             Fix For: 1.0.0, 2.0.0, 1.1.0
>
>         Attachments: hbase-12864_v1.patch
>
>
> IntegrationTestTableSnapshotInputFormat fails, first with:
> {code}
> 2015-01-15 03:56:36,175 INFO [main] mapreduce.Job: Task Id : attempt_1420685782128_0080_m_000014_2, Status : FAILED
> Error: java.io.IOException: java.lang.NoClassDefFoundError: com/yammer/metrics/core/MetricsRegistry
> 	at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:858)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:756)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:729)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4885)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4851)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4822)
> 	at org.apache.hadoop.hbase.client.ClientSideRegionScanner.<init>(ClientSideRegionScanner.java:60)
> 	at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl$RecordReader.initialize(TableSnapshotInputFormatImpl.java:190)
> 	at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.initialize(TableSnapshotInputFormat.java:139)
> 	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:545)
> 	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:783)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {code}
> and then, when that is fixed, with:
> {code}
> 2015-01-15 04:15:58,576|beaver.machine|INFO|28451|139674165233408|MainThread|Error: java.io.IOException: java.lang.IllegalStateException: bucketCacheSize <= 0; Check hbase.bucketcache.size setting and/or server java heap size
> 2015-01-15 04:15:58,576|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:858)
> 2015-01-15 04:15:58,576|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:756)
> 2015-01-15 04:15:58,577|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:729)
> 2015-01-15 04:15:58,577|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4885)
> 2015-01-15 04:15:58,577|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4851)
> 2015-01-15 04:15:58,577|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4822)
> 2015-01-15 04:15:58,577|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.client.ClientSideRegionScanner.<init>(ClientSideRegionScanner.java:60)
> 2015-01-15 04:16:22,764|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:491)
> 2015-01-15 04:16:22,764|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:536)
> 2015-01-15 04:16:22,764|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:186)
> 2015-01-15 04:16:22,764|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:250)
> 2015-01-15 04:16:22,764|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3762)
> 2015-01-15 04:16:22,765|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:832)
> 2015-01-15 04:16:22,765|beaver.machine|INFO|28451|139674165233408|MainThread|at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:829)
> 2015-01-15 04:16:22,765|beaver.machine|INFO|28451|139674165233408|MainThread|at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 2015-01-15 04:16:22,765|beaver.machine|INFO|28451|139674165233408|MainThread|at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 2015-01-15 04:16:22,765|beaver.machine|INFO|28451|139674165233408|MainThread|at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 2015-01-15 04:16:22,765|beaver.machine|INFO|28451|139674165233408|MainThread|at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 2015-01-15 04:16:22,766|beaver.machine|INFO|28451|139674165233408|MainThread|at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 2015-01-15 04:16:22,766|beaver.machine|INFO|28451|139674165233408|MainThread|at java.lang.Thread.run(Thread.java:745)
> {code}
> [~ndimiduk] do you know about the second failure? We can set the block cache size to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
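For reference, below is a minimal sketch (not the attached hbase-12864_v1.patch) of a TableSnapshotInputFormat job that touches the two points raised by the stack traces: passing addDependencyJars=true so HBase's dependency jars are shipped with the job (the first trace shows the yammer metrics class missing from the task classpath), and forcing the block cache size to 0 in the job configuration, which is one reading of the suggestion in the comment above. The snapshot name, restore directory, and the no-op mapper are placeholders, and whether zeroing these cache properties alone is enough to avoid the bucketCacheSize failure depends on what cluster-side cache configuration reaches the mapper.

{code}
// Hedged sketch only -- not the attached patch. Snapshot name, restore dir, and
// the no-op mapper are placeholders for illustration.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;

public class SnapshotScanJob {

  /** No-op mapper; it only exercises the ClientSideRegionScanner open path. */
  public static class NoOpMapper extends TableMapper<NullWritable, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context) {
      // intentionally empty
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Suggestion from the comment above: force the block cache size to 0 for the
    // client-side region open. Both properties are standard HBase cache settings;
    // that this is sufficient to avoid the bucketCacheSize failure is an assumption.
    conf.setFloat("hfile.block.cache.size", 0f);
    conf.unset("hbase.bucketcache.ioengine");

    Job job = Job.getInstance(conf, "table-snapshot-scan");
    job.setJarByClass(SnapshotScanJob.class);

    // addDependencyJars = true ships HBase's dependency jars with the job; the
    // first stack trace indicates the metrics class was not on the task classpath.
    TableMapReduceUtil.initTableSnapshotMapperJob(
        "my_snapshot",                       // placeholder snapshot name
        new Scan(),                          // full scan of the snapshot
        NoOpMapper.class,
        NullWritable.class, NullWritable.class,
        job,
        true,                                // addDependencyJars
        new Path("/tmp/snapshot-restore"));  // placeholder restore dir on HDFS

    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}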