From: Daniel Watrous <dwmaillist@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 28 Sep 2015 08:15:37 -0500
Subject: Re: Problem running example (wrong IP address)
Mailing list: user@hadoop.apache.org

Thanks to Namikaze for pointing out that I should have sent the namenode log as a pastebin: http://pastebin.com/u33bBbgu

On Mon, Sep 28, 2015 at 8:02 AM, Daniel Watrous wrote:

> I have posted the namenode logs here:
> https://gist.github.com/dwatrous/dafaa7695698f36a5d93
>
> Thanks for all the help.
>
> On Sun, Sep 27, 2015 at 10:28 AM, Brahma Reddy Battula <brahmareddy.battula@hotmail.com> wrote:
>
>> Thanks for sharing the logs.
>>
>> The problem is interesting. Can you please post the namenode logs and the dual-IP configuration? (I am thinking there is a problem with the gateway when sending requests from the 52.1 segment to the 51.1 segment.)
>>
>> Thanks and regards,
>> Brahma Reddy Battula
>>
>> ------------------------------
>> Date: Fri, 25 Sep 2015 12:19:00 -0500
>> Subject: Re: Problem running example (wrong IP address)
>> From: dwmaillist@gmail.com
>> To: user@hadoop.apache.org
>>
>> hadoop-master http://pastebin.com/yVF8vCYS
>> hadoop-data1 http://pastebin.com/xMEdf01e
>> hadoop-data2 http://pastebin.com/prqd02eZ
>>
>> On Fri, Sep 25, 2015 at 11:53 AM, Brahma Reddy Battula <brahmareddy.battula@hotmail.com> wrote:
>>
>> Sorry, I am not able to access the logs. Could you please post them in a pastebin, or attach the DN logs for 192.168.51.6 (since your question is why the IP is different) and the namenode logs here?
>>
>> Thanks and regards,
>> Brahma Reddy Battula
>>
>> ------------------------------
>> Date: Fri, 25 Sep 2015 11:16:55 -0500
>> Subject: Re: Problem running example (wrong IP address)
>> From: dwmaillist@gmail.com
>> To: user@hadoop.apache.org
>>
>> Brahma,
>>
>> Thanks for the reply. I'll keep this conversation here in the user list.
>> The /etc/hosts file is identical on all three nodes:
>>
>> hadoop@hadoop-data1:~$ cat /etc/hosts
>> 127.0.0.1 localhost
>> 192.168.51.4 hadoop-master
>> 192.168.52.4 hadoop-data1
>> 192.168.52.6 hadoop-data2
>>
>> hadoop@hadoop-data2:~$ cat /etc/hosts
>> 127.0.0.1 localhost
>> 192.168.51.4 hadoop-master
>> 192.168.52.4 hadoop-data1
>> 192.168.52.6 hadoop-data2
>>
>> hadoop@hadoop-master:~$ cat /etc/hosts
>> 127.0.0.1 localhost
>> 192.168.51.4 hadoop-master
>> 192.168.52.4 hadoop-data1
>> 192.168.52.6 hadoop-data2
>>
>> Here are the startup logs for all three nodes:
>> https://gist.github.com/dwatrous/7241bb804a9be8f9303f
>> https://gist.github.com/dwatrous/bcd85cda23d6eca3a68b
>> https://gist.github.com/dwatrous/922c4f773aded0137fa3
>>
>> Thanks for your help.
>>
>> On Fri, Sep 25, 2015 at 10:33 AM, Brahma Reddy Battula <brahmareddy.battula@huawei.com> wrote:
>>
>> It seems the DN started on all three machines but failed on hadoop-data1 (192.168.52.4).
>>
>> 192.168.51.6: reporting its IP as 192.168.51.1. Can you please check the /etc/hosts file on 192.168.51.6? (192.168.51.1 might be configured in /etc/hosts.)
>>
>> 192.168.52.4: datanode startup might have failed (you can check this node's logs).
>>
>> 192.168.51.4: datanode startup succeeded; this is the master node.
>>
>> Thanks & regards,
>> Brahma Reddy Battula
>>
>> ------------------------------
>> *From:* Daniel Watrous [dwmaillist@gmail.com]
>> *Sent:* Friday, September 25, 2015 8:41 PM
>> *To:* user@hadoop.apache.org
>> *Subject:* Re: Problem running example (wrong IP address)
>>
>> I'm still stuck on this and posted it to Stack Overflow:
>> http://stackoverflow.com/questions/32785256/hadoop-datanode-binds-wrong-ip-address
>>
>> Thanks,
>> Daniel
>>
>> On Fri, Sep 25, 2015 at 8:28 AM, Daniel Watrous wrote:
>>
>> I could really use some help here. As you can see from the output below, the two attached datanodes are identified with a non-existent IP address. Can someone tell me how that address gets selected, or how to set it explicitly? Also, why are both datanodes shown under the same name/IP?
>>
>> hadoop@hadoop-master:~$ hdfs dfsadmin -report
>> Configured Capacity: 84482326528 (78.68 GB)
>> Present Capacity: 75745546240 (70.54 GB)
>> DFS Remaining: 75744862208 (70.54 GB)
>> DFS Used: 684032 (668 KB)
>> DFS Used%: 0.00%
>> Under replicated blocks: 0
>> Blocks with corrupt replicas: 0
>> Missing blocks: 0
>> Missing blocks (with replication factor 1): 0
>>
>> -------------------------------------------------
>> Live datanodes (2):
>>
>> Name: 192.168.51.1:50010 (192.168.51.1)
>> Hostname: hadoop-data1
>> Decommission Status : Normal
>> Configured Capacity: 42241163264 (39.34 GB)
>> DFS Used: 303104 (296 KB)
>> Non DFS Used: 4302479360 (4.01 GB)
>> DFS Remaining: 37938380800 (35.33 GB)
>> DFS Used%: 0.00%
>> DFS Remaining%: 89.81%
>> Configured Cache Capacity: 0 (0 B)
>> Cache Used: 0 (0 B)
>> Cache Remaining: 0 (0 B)
>> Cache Used%: 100.00%
>> Cache Remaining%: 0.00%
>> Xceivers: 1
>> Last contact: Fri Sep 25 13:25:37 UTC 2015
>>
>>
>> Name: 192.168.51.4:50010 (hadoop-master)
>> Hostname: hadoop-master
>> Decommission Status : Normal
>> Configured Capacity: 42241163264 (39.34 GB)
>> DFS Used: 380928 (372 KB)
>> Non DFS Used: 4434300928 (4.13 GB)
>> DFS Remaining: 37806481408 (35.21 GB)
>> DFS Used%: 0.00%
>> DFS Remaining%: 89.50%
>> Configured Cache Capacity: 0 (0 B)
>> Cache Used: 0 (0 B)
>> Cache Remaining: 0 (0 B)
>> Cache Used%: 100.00%
>> Cache Remaining%: 0.00%
>> Xceivers: 1
>> Last contact: Fri Sep 25 13:25:38 UTC 2015
>>
>>
>> On Thu, Sep 24, 2015 at 5:05 PM, Daniel Watrous wrote:
>>
>> The IP address is clearly wrong, but I'm not sure how it gets set. Can someone tell me how to configure it to choose a valid IP address?
>>
>> On Thu, Sep 24, 2015 at 3:26 PM, Daniel Watrous wrote:
>>
>> I just noticed that both datanodes appear to have chosen that IP address and bound that port for HDFS communication.
>>
>> http://screencast.com/t/OQNbrWFF
>>
>> Any idea why this would be?
Is there some way to specify which IP/hostname should be used for that?
>>
>> On Thu, Sep 24, 2015 at 3:11 PM, Daniel Watrous wrote:
>>
>> When I try to run a MapReduce example, I get the following error:
>>
>> hadoop@hadoop-master:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 10 30
>> Number of Maps = 10
>> Samples per Map = 30
>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Exception in createBlockOutputStream
>> java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.51.1:50010
>>         at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1334)
>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Abandoning BP-852923283-127.0.1.1-1443119668806:blk_1073741825_1001
>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.51.1:50010,DS-45f6e06d-752e-41e8-ac25-ca88bce80d00,DISK]
>> 15/09/24 20:04:28 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 65357ms (threshold=30000ms)
>> Wrote input for Map #0
>>
>> I'm not sure why it's trying to access 192.168.51.1:50010, which isn't even a valid IP address in my setup.
>>
>> Daniel
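[Archive note] A detail relevant to the question "how does that address get selected": by default the NameNode identifies a DataNode by the source address of the DataNode's connection to it, and on a multi-homed host that source address is chosen by the kernel's routing table, not by /etc/hosts entries alone. The small sketch below (Python for illustration only; the function name `source_ip_toward` is invented here, and port 50010 simply mirrors the DataNode transfer port from the report above) shows how to check which local address the kernel would pick toward a given peer:

```python
import socket

def source_ip_toward(peer_ip, port=50010):
    """Return the local address the kernel would use to reach peer_ip.

    connect() on a UDP socket sends no packets; it only consults the
    routing table and records which local interface/address would be
    used for this destination.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((peer_ip, port))
        return s.getsockname()[0]
    finally:
        s.close()

if __name__ == "__main__":
    # On a dual-homed host, different peers can yield different local
    # addresses -- the effect seen in this thread, where the route
    # toward the NameNode selects an unexpected interface address.
    print(source_ip_toward("127.0.0.1"))
```

Running this on each node with the NameNode's address as the peer shows the address that node would register under; if it prints an unexpected interface (such as a gateway-side address like 192.168.51.1 here), the routing configuration, rather than /etc/hosts, is the place to look.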