From: praveenesh kumar <praveenesh@gmail.com>
To: user@hbase.apache.org
Date: Mon, 6 Jun 2011 12:41:56 +0530
Subject: Re: Does Hadoop 0.20.2 and HBase 0.90.3 compatible ??

Hello guys!

I copied the hadoop-core-append jar from the hbase/lib folder into the
hadoop folder, replacing hadoop-0.20.2-core.jar, as suggested in the
following link:

http://www.apacheserver.net/Using-Hadoop-bundled-in-lib-directory-HBase-at1136240.htm

I believe this is what the link, and Andy, described. If I am doing
something wrong, kindly tell me. But now, after swapping in that jar, I am
not able to run my Hadoop.
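Roughly, this is what I did (hadoop lives in /usr/local/hadoop/hadoop on my
nodes; the hbase path and the exact name of the append jar below may differ
on other setups):

    # back up the stock Hadoop 0.20.2 core jar
    cd /usr/local/hadoop/hadoop
    mv hadoop-0.20.2-core.jar hadoop-0.20.2-core.jar.orig

    # drop in the append-branch core jar shipped in HBase 0.90.3's lib/
    cp /usr/local/hbase/lib/hadoop-core-0.20-append-*.jar .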
I am getting the following exception messages on my screen:

ub13: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName
ub13: Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.PlatformName
ub13:     at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
ub13:     at java.security.AccessController.doPrivileged(Native Method)
ub13:     at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
ub13:     at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
ub13:     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
ub13:     at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
ub13: Could not find the main class: org.apache.hadoop.util.PlatformName. Program will exit.
ub13: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/datanode/DataNode
ub13: starting secondarynamenode, logging to /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-ub13.out
ub13: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName
ub13: Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.PlatformName
ub13:     at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
ub13:     at java.security.AccessController.doPrivileged(Native Method)
ub13:     at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
ub13:     at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
ub13:     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
ub13:     at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
ub13: Could not find the main class: org.apache.hadoop.util.PlatformName. Program will exit.
ub13: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode

Have I done something wrong? Please guide me!

Thanks,
Praveenesh

On Fri, Jun 3, 2011 at 8:17 PM, Cosmin Lehene wrote:

> Also, I think we should make it clearer (on the front page) that HBase
> requires the append branch, and provide a link to it as well.
>
> Cosmin
>
> On Jun 3, 2011, at 3:23 PM, Brian Bockelman wrote:
>
> > A meta-comment here (triggered by Praveenesh's comments about not
> > wanting to re-install):
> >
> > If you have to install the same software on more than one machine, you
> > really ought to automate it. In the long run, you will save more time
> > being able to roll out changes instantly than by applying them by hand.
> > In all likelihood, if you are going to be working with a piece of
> > software (Hadoop-based or not!), you will re-install it a few times.
> >
> > The install of HDFS should take roughly the same amount of time on 2,
> > 20, or 200 nodes.
> >
> > Brian
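For what it's worth, to automate the jar rollout the way Brian suggests, I
imagine something like the sketch below (the jar name is illustrative;
HADOOP_HOME is from my setup, and the master node needs the same treatment
since conf/slaves does not list it):

    #!/bin/bash
    # Push the replacement core jar to every slave node and move the stock
    # 0.20.2 jar out of the way, so only one core jar stays on the classpath.
    HADOOP_HOME=/usr/local/hadoop/hadoop
    JAR=hadoop-core-0.20-append.jar    # illustrative name
    while read -r host; do
        scp "$JAR" "$host:$HADOOP_HOME/"
        # -n keeps ssh from eating the rest of the slaves file on stdin
        ssh -n "$host" "mv $HADOOP_HOME/hadoop-0.20.2-core.jar /tmp/"
    done < "$HADOOP_HOME/conf/slaves"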
> > On Jun 3, 2011, at 6:47 AM, Andrew Purtell wrote:
> >
> >>> Is *Hadoop 0.20.2 also not compatible with Hbase 0.90.3 ???*
> >>
> >> In a strict sense they are compatible, but without append support HBase
> >> cannot guarantee that the last block of a write-ahead log is synced to
> >> disk, so in some failure cases edits will be lost. With append support,
> >> the "hole" left by these failure cases is closed. The append branch also
> >> adds a change to lease recovery that allows the HBase Master to take
> >> ownership of regionserver logs in order to split them quickly. (Without
> >> this patch you may be waiting 10 minutes for lease recovery...) So these
> >> differences are clearly important for durability and for fault recovery
> >> in a realistic time frame.
> >>
> >> A full reinstallation is not necessary.
> >>
> >> You can take the Hadoop core jar packaged in lib/ of HBase 0.90.3 and
> >> replace every Hadoop core jar on your cluster with it.
> >>
> >> OR
> >>
> >> You can compile 0.20-append and just replace the Hadoop core jar
> >> everywhere with the result, including the one in the HBase lib/.
> >>
> >> - Andy
> >>
> >> --- On Fri, 6/3/11, praveenesh kumar wrote:
> >>
> >>> From: praveenesh kumar
> >>> Subject: Does Hadoop 0.20.2 and HBase 0.90.3 compatible ??
> >>> To: common-user@hadoop.apache.org, user@hbase.apache.org
> >>> Date: Friday, June 3, 2011, 3:37 AM
> >>>
> >>> Guys,
> >>>
> >>> I am in a very big confusion. I really need your feedback and
> >>> suggestions.
> >>>
> >>> The scenario is like this:
> >>>
> >>> I set up a *Hadoop 0.20.2 cluster* of *12 nodes*.
> >>>
> >>> Then I set up an *HBase 0.90.3* *12-node cluster* on top of it.
> >>>
> >>> But after all that experimenting and struggling, I read the following
> >>> SHOCKING line on my HBase web UI:
> >>>
> >>> --- *"You are currently running the HMaster without HDFS append
> >>> support enabled. This may result in data loss. Please see the HBase
> >>> wiki for details."*
> >>>
> >>> And when I searched more about it, I found Michael G. Noll's article
> >>> saying that *Hadoop 0.20.2 and HBase 0.90.2 are not compatible*.
> >>>
> >>> Is *Hadoop 0.20.2 also not compatible with Hbase 0.90.3 ???*
> >>>
> >>> So does it mean I have to re-install hadoop 0.20-append if I want to
> >>> use HBase?
> >>>
> >>> I did a lot of struggling to reach this stage. Do I have to do all of
> >>> it again??
> >>>
> >>> Is there any other workaround, so that I don't have to re-install
> >>> everything again??
> >>>
> >>> Please help!!! :-(
> >>>
> >>> Thanks,
> >>> Praveenesh
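If I go with Andy's second option instead, my understanding (untested on my
side; the checkout URL and build target are my best guess from what I have
read) is roughly:

    # check out the Hadoop append branch and build a core jar from it
    svn checkout \
        http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append \
        branch-0.20-append
    cd branch-0.20-append
    ant jar    # the built core jar should land under build/

and then that one jar replaces the core jar everywhere, including HBase's
lib/, as Andy describes.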