hadoop-common-user mailing list archives

From Uma Maheswara Rao G <mahesw...@huawei.com>
Subject RE: Hadoop configuration
Date Fri, 23 Dec 2011 11:31:33 GMT
Hi Humayun,

 Let's assume you have JT, TT1, TT2, TT3.

  Now you should configure /etc/hosts like the example below:

      10.18.xx.1 JT

      10.18.xx.2 TT1

      10.18.xx.3 TT2

      10.18.xx.4 TT3

   Configure the same set of entries on all the machines, so that all task trackers can talk to each other
by hostname correctly. Also, please remove the extra entries (localhost.localdomain localhost humayun) from your files.
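A quick way to check the mapping (a sketch; the hostnames JT, TT1, TT2, TT3 follow the example above, so substitute your own) is to verify name resolution on every node:

```shell
# Run on each node: every hostname in the cluster should resolve
# to the address you put in /etc/hosts.
for host in JT TT1 TT2 TT3; do
  if getent hosts "$host" > /dev/null; then
    echo "$host resolves"
  else
    echo "$host does NOT resolve"
  fi
done
```

If any name fails to resolve on any node, the task trackers on that node will not be able to fetch map output from the others.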

I have seen that others have already suggested many links for the regular configuration items. Hopefully
those are clear to you now.

Hope this helps.




From: Humayun kabir [humayun0156@gmail.com]
Sent: Thursday, December 22, 2011 10:34 PM
To: common-user@hadoop.apache.org; Uma Maheswara Rao G
Subject: Re: Hadoop configuration

Hello Uma,

Thanks for your cordial and quick reply. It would be great if you could explain what you suggested
we do. Right now we are running on the following setup.

We are using Hadoop on VirtualBox. When it is a single node, it works fine even for datasets
larger than the default block size. But in the case of a multinode cluster (2 nodes) we are facing
some problems. We are able to ping both "Master->Slave" and "Slave->Master".
When the input dataset is smaller than the default block size (64 MB), it works fine,
but when the input dataset is larger than the default block size, it shows 'too much
fetch failure' in the reduce stage.
Here is the output link

This is our /etc/hosts file:

humayun # Added by NetworkManager
localhost.localdomain localhost
::1 humayun localhost6.localdomain6 localhost6 humayun

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
master slave
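For comparison, a corrected /etc/hosts for a 2-node master/slave cluster would look something like the sketch below (the 192.168.1.x addresses are placeholders, not from the thread; use the actual IPs of your VirtualBox VMs, and keep the hostname off the 127.0.0.1 line):

```
127.0.0.1      localhost
192.168.1.10   master
192.168.1.11   slave
```

The same file should be installed on both machines, per the advice above.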



On 22 December 2011 15:47, Uma Maheswara Rao G <maheswara@huawei.com> wrote:

Hey Humayun,

 To solve the "too many fetch failures" problem, you should configure the host mapping correctly.
Each tasktracker should be able to ping each of the others.

From: Humayun kabir [humayun0156@gmail.com]
Sent: Thursday, December 22, 2011 2:54 PM
To: common-user@hadoop.apache.org
Subject: Hadoop configuration

Someone please help me configure Hadoop: core-site.xml,
hdfs-site.xml, mapred-site.xml, etc.
Please provide some examples; it is badly needed. I am running a 2-node
cluster, and when I run the wordcount example it fails with "too
much fetch failure".
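For reference, a minimal sketch of the three files for a small cluster of this era (Hadoop 0.20/1.x property names; the hostname `master` and the ports 9000/9001 are assumptions, so substitute your NameNode/JobTracker host):

```xml
<!-- core-site.xml : default filesystem (NameNode address) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml : replication factor for a 2-node cluster -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

<!-- mapred-site.xml : JobTracker address -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
```

Each fragment goes in its own file under conf/ on every node; restart the daemons after changing them.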
