From: Stack <saint.ack@gmail.com>
To: hbase-user@hadoop.apache.org
Subject: Re: running unit test based on HBaseClusterTestCase
Date: Tue, 15 Dec 2009 09:59:23 -0800

Do you have the hadoop jars in your Eclipse classpath?

Stack

On Dec 14, 2009, at 10:58 PM, Guohua Hao wrote:

> Hello All,
>
> In my own application, I have a unit test case that extends
> HBaseClusterTestCase in order to test some of my operations against an
> HBase cluster. I override the setUp() method in my test case, and the
> override begins with a call to super.setUp().
>
> When I try to run my unit test from within Eclipse, I get the
> following error:
>
> java.lang.NoSuchMethodError:
>     org.apache.hadoop.security.UserGroupInformation.setCurrentUser(Lorg/apache/hadoop/security/UserGroupInformation;)V
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:236)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:119)
>   at org.apache.hadoop.hbase.HBaseClusterTestCase.setUp(HBaseClusterTestCase.java:123)
>
> I included hadoop-0.20.1-core.jar in my classpath, since this jar
> contains the org.apache.hadoop.security.UserGroupInformation class.
>
> Could anybody give me a hint on how to solve this problem?
>
> Thank you very much,
> Guohua
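[A NoSuchMethodError like the one above usually means the class was loaded from a different (often older) jar than the one the code was compiled against, e.g. two hadoop jars on the Eclipse classpath. One way to check which jar is actually supplying a class is to ask the JVM for its code source. This is a minimal, self-contained diagnostic sketch, not code from the thread; the class name WhichJar is made up. Run it on the test's classpath with "org.apache.hadoop.security.UserGroupInformation" as the argument; with no argument it demonstrates itself on java.lang.String.]

```java
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) throws Exception {
        // Pass the fully qualified name of the class you suspect is being
        // picked up from the wrong jar, e.g.
        //   org.apache.hadoop.security.UserGroupInformation
        String name = args.length > 0 ? args[0] : "java.lang.String";
        Class<?> c = Class.forName(name);
        // The CodeSource records where the class was loaded from; it is
        // null for classes supplied by the bootstrap class loader.
        CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println(name + " loaded from: "
                + (src == null || src.getLocation() == null
                        ? "bootstrap classpath"
                        : src.getLocation()));
    }
}
```

[If the printed location is not the hadoop-0.20.1-core.jar you expect, an older hadoop jar earlier in the Eclipse build path is shadowing it.]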