From: Sisu Xi
Date: Sun, 13 Jul 2014 00:58:53 -0500
Subject: hadoop multinode, only master node doing the work
To: user@hadoop.apache.org

Hi, all:

I am new to Hadoop. I followed the tutorial at
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
and installed Hadoop 2.2.0 on two 4-core Ubuntu 12.04 machines.

I can start the pi example, but only the master node does any work (I checked top on each machine). The two nodes appear to be configured correctly, because I can also launch the job from the slave node, and still only the master node does the actual work. I have tried different numbers of mappers for the pi program, and the result is the same.

Is there anything else I can check? My configuration files (the same on each host) are below.

Thanks very much!

Sisu

---------yarn-site.xml-------

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>

---------------hdfs-site.xml--------------------

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/xisisu/mydata/hdfs/namenode</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/xisisu/mydata/hdfs/datanode</value>
</property>

-------------core-site.xml-------------

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>

------------------mapred-site.xml-----------------

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.</description>
</property>
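(Not part of the original message: one quick sanity check, sketched below, is to ask the ResourceManager and NameNode which nodes have actually registered. If only the master shows up, the slave's NodeManager/DataNode never joined the cluster, which would explain why all containers run on the master. The commands assume the Hadoop 2.x binaries are on the PATH and the daemons are running; hostnames will differ per cluster.)

```shell
# List the NodeManagers currently registered with the ResourceManager.
# Expect one line per worker node; a single entry (the master) means the
# slave's NodeManager never registered.
yarn node -list

# Print the cluster report from the NameNode; the DataNode count near the
# top shows how many workers HDFS can see.
hdfs dfsadmin -report
```

If the slave is missing from both lists, its NodeManager/DataNode logs (under `$HADOOP_HOME/logs` on the slave) usually show why registration failed, e.g. a hostname that does not resolve to the master.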
--
Sisu Xi, PhD Candidate
http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130