Subject: Re: Test failure on the master branch
From: Danushka Menikkumbura <danushka.menikkumbura@gmail.com>
To: dev@crunch.apache.org
Date: Tue, 14 Oct 2014 10:28:43 -0400

Thanks, J!

On Tue, Oct 14, 2014 at 9:57 AM, Josh Wills wrote:

> I'm still rocking 1.7. Will give 1.8 a whirl this evening.
>
> J
>
> On Tue, Oct 14, 2014 at 6:53 AM, Danushka Menikkumbura
> <danushka.menikkumbura@gmail.com> wrote:
>
> > I am on Ubuntu 14.04.1 LTS and Java 1.8.0_20.
> >
> > BTW, maybe it is a result of commit 3f98411364cec32a0a8c6681dfaabd43caa4dd60?
> >
> > Thanks,
> > Danushka
> >
> > On Tue, Oct 14, 2014 at 9:44 AM, Josh Wills wrote:
> >
> > > No, master is compatible with Hadoop 1; the error you're seeing is
> > > caused by the HBase testing code being flaky. I've experienced the
> > > flaky HFileTargetIT test on my machine before, but never on a regular
> > > basis. Can you give me the basics of your setup -- OS, Java version,
> > > etc.?
> > >
> > > On Tue, Oct 14, 2014 at 6:38 AM, Danushka Menikkumbura
> > > <danushka.menikkumbura@gmail.com> wrote:
> > >
> > > > No, I don't.
> > > >
> > > > Does that mean the master branch is not compatible with Hadoop 1?
> > > >
> > > > Thanks,
> > > > Danushka
> > > >
> > > > On Tue, Oct 14, 2014 at 9:26 AM, Josh Wills wrote:
> > > >
> > > > > Hrm, okay. Do you get it if you use the -Dcrunch.platform=2 option?
> > > > >
> > > > > On Tue, Oct 14, 2014 at 6:23 AM, Danushka Menikkumbura
> > > > > <danushka.menikkumbura@gmail.com> wrote:
> > > > >
> > > > > > Yes, I get it every time I try to build the HBase module.
> > > > > >
> > > > > > FYI: org.apache.crunch.io.hbase.HFileTargetIT is the test in error.
> > > > > >
> > > > > > Thanks,
> > > > > > Danushka
> > > > > >
> > > > > > On Tue, Oct 14, 2014 at 9:15 AM, Josh Wills wrote:
> > > > > >
> > > > > > > That can happen intermittently if the local HBase cluster gets
> > > > > > > hung up -- do you get it regularly (i.e., every time you run)?
> > > > > > >
> > > > > > > J
> > > > > > >
> > > > > > > On Tue, Oct 14, 2014 at 6:08 AM, Danushka Menikkumbura
> > > > > > > <danushka.menikkumbura@gmail.com> wrote:
> > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I am getting the following test failure while building Crunch.
> > > > > > > > Do you have an idea as to what the issue might be here?
> > > > > > > >
> > > > > > > > 34787 [Thread-2057] INFO org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob - Job status available at: http://localhost:8080/
> > > > > > > > 44548 [M:0;danushka:38318.oldLogCleaner] ERROR org.apache.hadoop.hbase.client.HConnectionManager - Connection not found in the list, can't delete it (connection key=HConnectionKey{properties={hbase.zookeeper.quorum=localhost, hbase.rpc.timeout=60000, hbase.zookeeper.property.clientPort=57963, zookeeper.znode.parent=/hbase, hbase.client.retries.number=350, hbase.client.pause=100}, username='danushka'}). May be the key was modified?
> > > > > > > > java.lang.Exception
> > > > > > > >         at org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:493)
> > > > > > > >         at org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(HConnectionManager.java:429)
> > > > > > > >         at org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner.stop(ReplicationLogCleaner.java:141)
> > > > > > > >         at org.apache.hadoop.hbase.master.cleaner.CleanerChore.cleanup(CleanerChore.java:276)
> > > > > > > >         at org.apache.hadoop.hbase.Chore.run(Chore.java:94)
> > > > > > > >         at java.lang.Thread.run(Thread.java:745)
> > > > > > > >
> > > > > > > > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.625 sec
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Danushka
>
> --
> Director of Data Science
> Cloudera
> Twitter: @josh_wills
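
For anyone digging into the ERROR line above: HBase caches connections in a map keyed by an HConnectionKey built from a snapshot of configuration properties (the ones printed in the log), so a deleteConnection call that runs after one of those properties has changed can no longer find the cached entry, which is what "May be the key was modified?" hints at. The following is a simplified sketch of that lookup-by-config-snapshot behavior in plain Java, not HBase code; the class, field, and method names here are invented for illustration.

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Simplified illustration (not HBase code) of why a cleaner thread can log
// "Connection not found in the list, can't delete it ... May be the key was
// modified?": cached connections are looked up by a key derived from selected
// configuration values, so mutating those values between create and delete
// makes the delete miss the cache entry.
public class ConnectionCacheSketch {

  // Stand-in for HConnectionKey: equality is based on selected config values.
  static final class ConnectionKey {
    final String quorum;
    final String clientPort;

    ConnectionKey(Map<String, String> conf) {
      this.quorum = conf.get("hbase.zookeeper.quorum");
      this.clientPort = conf.get("hbase.zookeeper.property.clientPort");
    }

    @Override public boolean equals(Object o) {
      if (!(o instanceof ConnectionKey)) return false;
      ConnectionKey k = (ConnectionKey) o;
      return Objects.equals(quorum, k.quorum) && Objects.equals(clientPort, k.clientPort);
    }

    @Override public int hashCode() {
      return Objects.hash(quorum, clientPort);
    }
  }

  private final Map<ConnectionKey, String> connections = new HashMap<>();

  void create(Map<String, String> conf) {
    connections.put(new ConnectionKey(conf), "connection");
  }

  void delete(Map<String, String> conf) {
    if (connections.remove(new ConnectionKey(conf)) == null) {
      System.err.println("Connection not found in the list, can't delete it. "
          + "May be the key was modified?");
    }
  }

  public static void main(String[] args) {
    ConnectionCacheSketch cache = new ConnectionCacheSketch();
    Map<String, String> conf = new HashMap<>();
    conf.put("hbase.zookeeper.quorum", "localhost");
    conf.put("hbase.zookeeper.property.clientPort", "57963");
    cache.create(conf);

    // Something changes a key property between creation and shutdown...
    conf.put("hbase.zookeeper.property.clientPort", "2181");

    // ...so the cleanup lookup misses and the message above is printed.
    cache.delete(conf);
  }
}

Running main prints the same style of message as the log line in the thread, which is why the error reads as shutdown-time cache noise rather than a data-path problem.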
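
Separately, since the failure is intermittent rather than deterministic, one common way to keep a flaky integration test such as HFileTargetIT from derailing otherwise healthy builds is to retry it a bounded number of times. The sketch below is a generic JUnit 4 rule, not something taken from the Crunch codebase; the class name RetryFlaky and the attempt count are made up for illustration.

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Hypothetical helper, not part of Crunch: re-runs a test up to maxAttempts
// times and only fails if every attempt fails.
public class RetryFlaky implements TestRule {
  private final int maxAttempts;

  public RetryFlaky(int maxAttempts) {
    this.maxAttempts = maxAttempts;
  }

  @Override
  public Statement apply(final Statement base, final Description description) {
    return new Statement() {
      @Override
      public void evaluate() throws Throwable {
        Throwable last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
          try {
            base.evaluate();   // run the test body once
            return;            // success: stop retrying
          } catch (Throwable t) {
            last = t;
            System.err.println(description.getDisplayName()
                + " failed on attempt " + attempt + " of " + maxAttempts);
          }
        }
        throw last;            // all attempts failed: surface the last error
      }
    };
  }
}

A test class would opt in with something like @Rule public RetryFlaky retry = new RetryFlaky(3); whether retrying is preferable to chasing the underlying mini-cluster flakiness is, of course, a judgment call.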