From: Keith Stevens
To: common-user@hadoop.apache.org
Date: Sun, 11 Mar 2012 18:18:53 -0700
Subject: Setting up MapReduce 2 on a test cluster

Hi All,

I've been trying to set up Cloudera's CDH4 Beta 1 release of MapReduce 2.0 on a small cluster for testing, but I'm not having much luck getting things running. I've been following the guide at http://hadoop.apache.org/common/docs/r0.23.1/hadoop-yarn/hadoop-yarn-site/ClusterSetup.html to configure everything. HDFS seems to be working properly, in that I can access the file system, load files, and read them. Running jobs, however, doesn't work correctly. I'm trying to run a sample job with:

  hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.0-cdh4b1.jar randomwriter \
    -Dmapreduce.job.user.name=$USER \
    -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory \
    -Dmapreduce.randomwriter.bytespermap=10000 \
    -Ddfs.blocksize=536870912 \
    -Ddfs.block.size=536870912 \
    -libjars /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.0-cdh4b1.jar \
    output

When it runs I get a ClassNotFoundException for org.apache.hadoop.hdfs.DistributedFileSystem on the local node running the task. I have fs.hdfs.impl set to org.apache.hadoop.hdfs.DistributedFileSystem, which I believe is correct, but I'm not sure why the node isn't finding the class.
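For reference, here is roughly how that property looks in my core-site.xml (just a sketch of the relevant fragment; the property name and value are the standard Hadoop keys mentioned above):

```xml
<!-- Fragment of core-site.xml: maps the hdfs:// scheme to its
     FileSystem implementation class. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```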
In my setup, everything is located under /usr/local/hadoop on all the nodes, and all the relevant environment variables point to that directory. When the local nodes start up they include this classpath:

  /usr/local/hadoop/conf:/usr/local/hadoop/conf:/usr/local/hadoop/conf:/usr/local/hadoop-0.23.0-cdh4b1/sbin/..:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../lib/*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/common/lib/*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/common/*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/hdfs:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/hdfs/lib/*:/usr/local/hadoop-0.23.0-cdh4b1/sbin/../share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/conf/nm-config/log4j.properties

which looks correct, so I'm not exactly sure where the problem is coming from. Any suggestions on what might be wrong, or how to further diagnose the problem, would be greatly appreciated.

Thanks!
--Keith
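In case it helps anyone reproduce the check, this is a small helper I've been using to confirm the missing class is actually present in a jar on disk before blaming the classpath (the helper name and the /usr/local/hadoop path are just my own choices, not anything from the Hadoop distribution):

```shell
# find_class_jar DIR CLASSFILE: print every jar under DIR whose
# listing contains the given class file path.
find_class_jar() {
  dir=$1
  classfile=$2
  # unzip -l prints a jar's table of contents; grep -q checks for the entry.
  find "$dir" -name '*.jar' 2>/dev/null | while read -r jar; do
    if unzip -l "$jar" 2>/dev/null | grep -q "$classfile"; then
      echo "$jar"
    fi
  done
}

# Check my install tree for the class the task can't load.
find_class_jar /usr/local/hadoop org/apache/hadoop/hdfs/DistributedFileSystem.class
```

If this prints the hadoop-hdfs jar but the task still fails, that would point at the NodeManager's classpath rather than a missing jar.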