Subject: Re: Setting up MapReduce 2 on a test cluster
From: Keith Stevens
To: common-user@hadoop.apache.org
Date: Sun, 11 Mar 2012 21:34:53 -0700

Just to double check that I'm reading the logs correctly: it's the
hadoop-hdfs-0.23.0-cdh4b1.jar specifically that I need, yes? The logs for
my local nodes report that it is included in the classpath for the
ResourceManager and the NodeManagers. Is there some other task that might
be missing the jar?
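For what it's worth, the jar itself can also be checked for the class
directly; something like the following (the path is from my tarball layout
and may differ on other installs) should print a match if the class is
really in there:

    # list the HDFS jar's contents and look for the class the tasks fail to load
    jar tf /usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-0.23.0-cdh4b1.jar \
        | grep DistributedFileSystem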
On Sun, Mar 11, 2012 at 9:26 PM, Keith Stevens wrote:

> Hi Harsh,
>
> Thanks for getting back to me on this on a Sunday.
>
> Your guess is the same as mine, but I'm not sure where this is happening
> or how.
>
> I installed this manually using the tarballs because the cluster I'm
> working on is mostly cut off from the internet. I also can't seem to
> install createrepo to create a local yum repository.
>
> Is there a way to install the cdh4 packages using just yum install if I
> downloaded them all? I tried to do this, but yum says the yarn rpm
> depends on the hadoop rpm and vice versa.
>
> Thanks,
> --Keith
>
>
> On Sun, Mar 11, 2012 at 8:53 PM, Harsh J wrote:
>
>> Hey Keith,
>>
>> You're most likely missing the HDFS jar somehow. I use the package
>> installation and am able to run the following successfully:
>>
>> hadoop jar /usr/lib/hadoop/hadoop-mapreduce-examples.jar randomwriter
>> -Dmapreduce.job.user.name=$USER
>> -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
>> -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912
>> -Ddfs.block.size=536870912 -libjars
>> /usr/lib/hadoop/hadoop-mapreduce-client-jobclient.jar output
>>
>> My `hadoop classpath` looks like:
>>
>> /etc/hadoop/conf:/usr/lib/hadoop:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*:/usr/lib/hadoop/share/hadoop/common/*:/usr/lib/hadoop/share/hadoop/hdfs/*:/usr/lib/hadoop/share/hadoop/mapreduce/*
>>
>> How have you installed this?
>>
>> On Mon, Mar 12, 2012 at 6:48 AM, Keith Stevens wrote:
>> > Hi All,
>> >
>> > I've been trying to set up Cloudera's CDH4 Beta 1 release of MapReduce
>> > 2.0 on a small cluster for testing, but I'm not having much luck
>> > getting things running. I've been following the guide at
>> > http://hadoop.apache.org/common/docs/r0.23.1/hadoop-yarn/hadoop-yarn-site/ClusterSetup.html
>> > to configure everything. HDFS seems to be working properly in that I
>> > can access the file system, load files, and read them.
>> >
>> > However, running jobs doesn't seem to work correctly. I'm trying to
>> > run just a sample job with
>> >
>> > hadoop jar
>> > /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.0-cdh4b1.jar
>> > randomwriter -Dmapreduce.job.user.name=$USER
>> > -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
>> > -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912
>> > -Ddfs.block.size=536870912 -libjars
>> > /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.0-cdh4b1.jar
>> > output
>> >
>> > When it runs I get a ClassNotFoundException for
>> > org.apache.hadoop.hdfs.DistributedFileSystem on the local node running
>> > the task. I have fs.hdfs.impl set to
>> > org.apache.hadoop.hdfs.DistributedFileSystem (the exact entry is shown
>> > below), which I believe is correct, but I'm not sure why the node
>> > isn't finding the class.
>> >
>> > In my setup, everything is located under /usr/local/hadoop on all the
>> > nodes, and all the relevant environment variables point to that
>> > directory. So when the local nodes start up they include this:
>> >
>> > -classpath
>> > /usr/local/hadoop/conf:/usr/local/hadoop/conf:/usr/local/hadoop/conf:/usr/local/hadoop-0.23.0-cdh4b1/sbin/..:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../lib/*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/common/lib/*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/common/*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/hdfs:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/hdfs/lib/*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/conf/nm-config/log4j.properties
>> >
>> > which looks correct, so I'm not exactly sure where the problem is
>> > coming from.
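>> >
>> > For reference, fs.hdfs.impl is set in my core-site.xml with the usual
>> > property block, roughly:
>> >
>> >   <property>
>> >     <name>fs.hdfs.impl</name>
>> >     <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
>> >   </property>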
>> >
>> > Any suggestions on what might be wrong or how to further diagnose the
>> > problem would be greatly appreciated.
>> >
>> > Thanks!
>> > --Keith
>>
>> --
>> Harsh J