From: Humayun kabir <humayun0156@gmail.com>
To: common-user@hadoop.apache.org
Date: Mon, 26 Dec 2011 22:53:26 +0600
Subject: Re: Hadoop configuration

Hi Uma,

Thanks a lot. At last it is running without errors. Thank you very much
for your suggestion.

On 26 December 2011 20:04, Uma Maheswara Rao G wrote:

> Hey Humayun,
> Looks like your hostname is still not resolving properly. Even though
> you configured the hostnames as master, slave, etc., it is picking up
> "humayun" as the hostname.
> Just edit the /etc/HOSTNAME file with the correct hostname you are
> expecting here.
> To confirm whether it is resolving properly or not, you can do the
> steps below:
>
> #hostname
> ............................ // should print the hostname correctly here (ex: master)
> #hostname -i
> ............................ // should resolve to the correct IP here (ex: master's IP)
>
> Also make sure slave and slave1 are pingable from each other.
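> For example, to name the first VM "master" (hostname and IP taken from
> the /etc/hosts quoted later in this thread; note the file is
> /etc/HOSTNAME on SuSE-style systems but /etc/hostname on Debian/Ubuntu):
>
> # echo master > /etc/HOSTNAME
> # hostname master
> # hostname -i
> 192.168.60.1
>
> The last command should print the machine's real address, not
> 127.0.0.1 or 127.0.1.1; if it prints a loopback address, the hosts
> mapping is still wrong and the tasktrackers will keep failing to
> reach each other.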
> Regards,
> Uma
>
> ________________________________________
> From: Humayun kabir [humayun0156@gmail.com]
> Sent: Saturday, December 24, 2011 9:51 PM
> To: common-user@hadoop.apache.org
> Subject: Re: Hadoop configuration
>
> I've checked my log files, but I don't understand why this error
> occurs. Here are my log files; please give me some suggestions.
>
> jobtracker.log <http://paste.ubuntu.com/781181/>
> namenode.log <http://paste.ubuntu.com/781183/>
> datanode.log (1st machine) <http://paste.ubuntu.com/781176/>
> datanode.log (2nd machine) <http://paste.ubuntu.com/781195/>
> tasktracker.log (1st machine) <http://paste.ubuntu.com/781192/>
> tasktracker.log (2nd machine) <http://paste.ubuntu.com/781197/>
>
> On 24 December 2011 15:26, Joey Krabacher wrote:
>
> > Have you checked your log files for any clues?
> >
> > --Joey
> >
> > On Sat, Dec 24, 2011 at 3:15 AM, Humayun kabir wrote:
> > > Hi Uma,
> > >
> > > Thank you very much for your tips. We tried it on 3 nodes in
> > > VirtualBox as you suggested, but we are still facing problems.
> > > Here are all our configuration files for all the nodes. Please
> > > take a look and show us some ways to solve this; it would be
> > > great if you could help us in this regard.
> > >
> > > core-site.xml <http://pastebin.com/Twn5edrp>
> > > hdfs-site.xml <http://pastebin.com/k4hR4GE9>
> > > mapred-site.xml <http://pastebin.com/gZuyHswS>
> > > /etc/hosts <http://pastebin.com/5s0yhgnj>
> > > output <http://paste.ubuntu.com/780807/>
> > >
> > > Hope you will understand and extend your helping hand towards us.
> > >
> > > Have a nice day.
> > >
> > > Regards,
> > > Humayun
> > >
> > > On 23 December 2011 17:31, Uma Maheswara Rao G wrote:
> > >
> > >> Hi Humayun,
> > >>
> > >> Let's assume you have JT, TT1, TT2 and TT3.
> > >>
> > >> You should then configure /etc/hosts like the example below:
> > >>
> > >> 10.18.xx.1 JT
> > >> 10.18.xx.2 TT1
> > >> 10.18.xx.3 TT2
> > >> 10.18.xx.4 TT3
> > >>
> > >> Configure the same set on all the machines, so that all the task
> > >> trackers can talk to each other by hostname correctly. Also,
> > >> please remove these entries from your files:
> > >>
> > >> 127.0.0.1 localhost.localdomain localhost
> > >> 127.0.1.1 humayun
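> > >> After removing them, a minimal two-node /etc/hosts (reusing the
> > >> master/slave names and 192.168.60.x addresses from your earlier
> > >> mail; substitute your real IPs) would be identical on both
> > >> machines:
> > >>
> > >> 127.0.0.1 localhost
> > >> 192.168.60.1 master
> > >> 192.168.60.2 slave
> > >>
> > >> The important point is that no loopback line maps a machine's
> > >> real hostname; otherwise a tasktracker advertises a 127.x.x.x
> > >> address, the reducers on other nodes fail to fetch map output
> > >> from it, and you get exactly the "too many fetch failures"
> > >> symptom.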
> > >> I have seen that others have already suggested many links for
> > >> the regular configuration items, so I hope you are clear about
> > >> those.
> > >>
> > >> Hope it will help...
> > >>
> > >> Regards,
> > >> Uma
> > >>
> > >> ________________________________
> > >> From: Humayun kabir [humayun0156@gmail.com]
> > >> Sent: Thursday, December 22, 2011 10:34 PM
> > >> To: common-user@hadoop.apache.org; Uma Maheswara Rao G
> > >> Subject: Re: Hadoop configuration
> > >>
> > >> Hello Uma,
> > >>
> > >> Thanks for your cordial and quick reply. It would be great if
> > >> you could explain what you suggested to do. Right now we are
> > >> running with the following configuration.
> > >>
> > >> We are using Hadoop on VirtualBox. When it is a single node it
> > >> works fine, even for datasets larger than the default block
> > >> size. But in the case of a multinode cluster (2 nodes) we are
> > >> facing problems. We are able to ping both "Master->Slave" and
> > >> "Slave->Master". When the input dataset is smaller than the
> > >> default block size (64 MB) it works fine, but when the input
> > >> dataset is larger than the default block size it shows 'too much
> > >> fetch failure' in the reduce state.
> > >> Here is the output link:
> > >> http://paste.ubuntu.com/707517/
> > >>
> > >> This is our /etc/hosts file:
> > >>
> > >> 192.168.60.147 humayun # Added by NetworkManager
> > >> 127.0.0.1 localhost.localdomain localhost
> > >> ::1 humayun localhost6.localdomain6 localhost6
> > >> 127.0.1.1 humayun
> > >>
> > >> # The following lines are desirable for IPv6 capable hosts
> > >> ::1 localhost ip6-localhost ip6-loopback
> > >> fe00::0 ip6-localnet
> > >> ff00::0 ip6-mcastprefix
> > >> ff02::1 ip6-allnodes
> > >> ff02::2 ip6-allrouters
> > >> ff02::3 ip6-allhosts
> > >>
> > >> 192.168.60.1 master
> > >> 192.168.60.2 slave
> > >>
> > >> Regards,
> > >>
> > >> -Humayun.
> > >>
> > >> On 22 December 2011 15:47, Uma Maheswara Rao G wrote:
> > >>
> > >> Hey Humayun,
> > >>
> > >> To solve the "too many fetch failures" problem, you should
> > >> configure the host mapping correctly: each tasktracker should be
> > >> able to ping every other one.
> > >>
> > >> Regards,
> > >> Uma
> > >> ________________________________________
> > >> From: Humayun kabir [humayun0156@gmail.com]
> > >> Sent: Thursday, December 22, 2011 2:54 PM
> > >> To: common-user@hadoop.apache.org
> > >> Subject: Hadoop configuration
> > >>
> > >> Could someone please help me configure Hadoop (core-site.xml,
> > >> hdfs-site.xml, mapred-site.xml, etc.)? Please provide some
> > >> examples; it is badly needed. I run a 2-node cluster, and when I
> > >> run the wordcount example it fails with "too much fetch failure".
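For reference, a minimal version of the three files asked about above,
for a two-node cluster with hosts named "master" and "slave", could look
like this (the hostname and the 9000/9001 ports are conventional
examples for Hadoop 0.20/1.x, not values taken from the pastebins linked
in the thread):

    conf/core-site.xml:
      <configuration>
        <property>
          <name>fs.default.name</name>
          <value>hdfs://master:9000</value>
        </property>
      </configuration>

    conf/hdfs-site.xml:
      <configuration>
        <property>
          <name>dfs.replication</name>
          <value>2</value>
        </property>
      </configuration>

    conf/mapred-site.xml:
      <configuration>
        <property>
          <name>mapred.job.tracker</name>
          <value>master:9001</value>
        </property>
      </configuration>

With /etc/hosts on every node mapping "master" and "slave" to their real
addresses, as discussed above, the tasktrackers can resolve each other
by hostname and the wordcount example should finish without fetch
failures.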