From: Sisu Xi <xisisu@gmail.com>
Date: Sun, 13 Jul 2014 12:53:09 -0500
Subject: Re: hadoop multinode, only master node doing the work
To: user@hadoop.apache.org

Hi, Sam:

Thanks for your help! You are right: it seems only one node is running.
Here is the output:

    xisisu@slave-01:/usr/local/hadoop$ hadoop dfsadmin -report
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.

    OpenJDK 64-Bit Server VM warning: You have loaded library
    /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled
    stack guard. The VM will try to fix the stack guard now.
    It's highly recommended that you fix the library with
    'execstack -c <libfile>', or link it with '-z noexecstack'.

    14/07/13 12:50:14 WARN util.NativeCodeLoader: Unable to load native-hadoop
    library for your platform... using builtin-java classes where applicable

    Configured Capacity: 14068822016 (13.10 GB)
    Present Capacity: 8425232425 (7.85 GB)
    DFS Remaining: 8423534592 (7.85 GB)
    DFS Used: 1697833 (1.62 MB)
    DFS Used%: 0.02%
    Under replicated blocks: 21
    Blocks with corrupt replicas: 0
    Missing blocks: 0

    -------------------------------------------------
    Datanodes available: 1 (1 total, 0 dead)

    Live datanodes:
    Name: 172.16.20.170:50010 (master)
    Hostname: master
    Decommission Status : Normal
    Configured Capacity: 14068822016 (13.10 GB)
    DFS Used: 1697833 (1.62 MB)
    Non DFS Used: 5643589591 (5.26 GB)
    DFS Remaining: 8423534592 (7.85 GB)
    DFS Used%: 0.01%
    DFS Remaining%: 59.87%
    Last contact: Sun Jul 13 12:50:13 CDT 2014

I get the same output when I run the command on the slave node.

Is there anything I am missing in the config files?

Thanks very much!

Sisu


On Sun, Jul 13, 2014 at 1:36 AM, Kilaru, Sambaiah <Sambaiah_Kilaru@intuit.com> wrote:

> Hi Sisu Xi,
>
> On the master node, can you run
>
>     hadoop dfsadmin -report
>
> and check that it lists all the slave nodes? You can also check the
> NameNode web UI, which should list all datanodes, including the slaves.
> Check the RM UI as well: the slave nodes should be listed there too.
>
> Thanks,
> Sam
>
> From: Sisu Xi <xisisu@gmail.com>
> Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
> Date: Sunday, July 13, 2014 at 11:28 AM
> To: "user@hadoop.apache.org" <user@hadoop.apache.org>
> Subject: hadoop multinode, only master node doing the work
>
> Hi, all:
>
> I am new to hadoop. I followed the tutorial at
>
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
>
> and installed hadoop 2.2.0 on two 4-core Ubuntu 12.04 machines.
>
> I can start the pi program; however, only the master node is doing the
> work (I checked top on each machine).
> The two nodes seem to be configured correctly, because I can start the
> program on the slave node as well, and still only the master node does
> the actual work.
> I have tried different numbers of mappers for the pi program, and the
> result is the same.
>
> Is there anything else I can check?
>
> At the end are my config files, which are the same on each host.
>
> Thanks very much!
>
> Sisu
>
> ---------yarn-site.xml-------
>
> <property>
>   <name>yarn.nodemanager.aux-services</name>
>   <value>mapreduce_shuffle</value>
> </property>
>
> <property>
>   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> </property>
>
> <property>
>   <name>yarn.resourcemanager.address</name>
>   <value>master:8032</value>
> </property>
>
> ---------------hdfs-site.xml--------------------
>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
> </property>
>
> <property>
>   <name>dfs.namenode.name.dir</name>
>   <value>file:/home/xisisu/mydata/hdfs/namenode</value>
> </property>
>
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <value>file:/home/xisisu/mydata/hdfs/datanode</value>
> </property>
>
> -------------core-site.xml-------------
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://master:9000</value>
> </property>
>
> ------------------mapred-site.xml-----------------
>
> <property>
>   <name>mapreduce.framework.name</name>
>   <value>yarn</value>
> </property>
>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>master:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
>
> --
> Sisu Xi, PhD Candidate
> http://www.cse.wustl.edu/~xis/
> Department of Computer Science and Engineering
> Campus Box 1045
> Washington University in St. Louis
> One Brookings Drive
> St. Louis, MO 63130

--
Sisu Xi, PhD Candidate
http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130
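One configuration detail worth checking for this symptom, offered as an assumption rather than a confirmed diagnosis: the yarn-site.xml above sets only yarn.resourcemanager.address. In Hadoop 2.2 the ResourceManager's scheduler and resource-tracker endpoints default to 0.0.0.0, so a NodeManager on the slave may try to register against localhost and never appear in the RM. A sketch of the extra properties, using the Hadoop 2.2 default port numbers, to be placed in yarn-site.xml on every node:

```xml
<!-- Sketch only: points slave NodeManagers at the master's ResourceManager.
     8030 and 8031 are the Hadoop 2.2 default ports for these services. -->
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
```

If the slave's NodeManager still does not show up in the RM UI after restarting YARN, its log under the Hadoop logs directory is usually the next place to look.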
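The dfsadmin report in the thread shows "Datanodes available: 1" with only the master listed, so the slave's DataNode never registered. On a larger cluster it can help to extract the registered node names from the report text programmatically; a minimal sketch, where the function name and sample text are illustrative but the line format follows the report shown above:

```python
import re

def live_datanodes(report):
    """Return the node names listed as live datanodes in the
    output of `hdfs dfsadmin -report`."""
    # Each live-datanode section begins with a line such as:
    #   Name: 172.16.20.170:50010 (master)
    # Capture the parenthesized node name on each such line.
    return re.findall(r"^Name:\s*\S+\s+\((\S+)\)", report, flags=re.MULTILINE)

# Sample taken from the report in this thread.
sample = """\
Datanodes available: 1 (1 total, 0 dead)

Live datanodes:
Name: 172.16.20.170:50010 (master)
Hostname: master
Decommission Status : Normal
"""

print(live_datanodes(sample))  # prints ['master']: only the master registered
```

A healthy two-node cluster should yield both the master and the slave here; a missing name means that node's DataNode process is down or cannot reach the NameNode.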