From: Chris Embree <cembree@gmail.com>
Reply-To: chris@embree.us
To: user@hadoop.apache.org
Date: Wed, 8 May 2013 16:49:58 -0400
Subject: Re: Rack Aware Hadoop cluster

Here is a sample I stole from the web and modified slightly... I think.

#!/bin/bash
# Note: ar=( $line ) below is a bash array, so this must run under bash, not sh.
HADOOP_CONF=/etc/hadoop/conf

while [ $# -gt 0 ] ; do
  nodeArg=$1
  exec< "${HADOOP_CONF}/rack_info.txt"
  result=""
  while read line ; do
    ar=( $line )
    if [ "${ar[0]}" = "$nodeArg" ] ; then
      result="${ar[1]}"
    fi
  done
  shift
  if [ -z "$result" ] ; then
    echo -n "/default/rack "
  else
    echo -n "$result "
  fi
done

The rack_info.txt file contains all hostnames AND IP addresses for each node:

10.10.10.10  /dc1/rack1
10.10.10.11  /dc1/rack2
datanode1    /dc1/rack1
datanode2    /dc1/rack2
... etc.

On Wed, May 8, 2013 at 1:38 PM, Adam Faris <afaris@linkedin.com> wrote:
> Look between the <code> blocks starting at line 1336.  http://lnkd.in/rJsqpV
> Some day it will get included in the documentation with a future Hadoop
> release. :)
>
> -- Adam
>
> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com> wrote:
>
> > If anybody has a sample (topology.script.file.name) script, please share it.
> >
> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com> wrote:
> > @chris, I have tested it outside. It is working fine.
> >
> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <lfedotov@hortonworks.com> wrote:
> > Error in script.
> >
> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <cembree@gmail.com> wrote:
> > Your script has an error in it. Please test your script using both IP
> > addresses and names, outside of Hadoop.
> >
> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com> wrote:
> > I have done this and found the following error in the log -
> >
> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
> > org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
> >       at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
> >       at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
> >       at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
> >       at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
> >       at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
> >       at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
> >       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
> >       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
> >       at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
> >       at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
> >       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> >       at java.security.AccessController.doPrivileged(Native Method)
> >       at javax.security.auth.Subject.doAs(Subject.java:415)
> >       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> > 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
> >
> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lfedotov@hortonworks.com> wrote:
> > You can put this parameter in core-site.xml or hdfs-site.xml.
> > Both are parsed during HDFS startup.
> >
> > Leonid
> >
> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com> wrote:
> > Hello everyone,
> >     I was searching for how to make the Hadoop cluster rack-aware, and I
> > found out from here
> > http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness
> > that we can do this by setting the "topology.script.file.name" property.
> > But it is not written where to put this:
> >
> > <property>
> >   <name>topology.script.file.name</name>
> >   <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> > </property>
> >
> > I mean, in which configuration file?
> > I am using hadoop-2.0.3-alpha.
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270