hadoop-mapreduce-user mailing list archives

From Brahma Reddy Battula <brahmareddy.batt...@huawei.com>
Subject RE: Problem running example (wrong IP address)
Date Fri, 25 Sep 2015 15:33:43 GMT
It looks like the DataNode was started on three machines and failed on hadoop-data1 (192.168.52.4). Node by node:


192.168.51.6: this node is reporting its IP as 192.168.51.1. Can you please check the
/etc/hosts file on 192.168.51.6? It is likely that 192.168.51.1 is configured there.
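
For example (hypothetical entries; your file will differ), an /etc/hosts like this on that node would cause the DataNode to resolve its own hostname to the wrong address:

hadoop@hadoop-data1:~$ cat /etc/hosts
127.0.0.1      localhost
192.168.51.1   hadoop-data1     # wrong: should be this node's real IP, 192.168.51.6

If you find an entry like that, point the hostname at the node's real address and restart the DataNode.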

192.168.52.4: the DataNode startup might have failed (you can check the logs on this node).
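
For example (the log path assumes a default install; the file name includes the user and hostname):

grep -iE 'error|exception' $HADOOP_HOME/logs/hadoop-*-datanode-*.log | tail -n 20

A BindException or UnknownHostException there would explain the failed startup.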

192.168.51.4: the DataNode started successfully. This is the master node.
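
If fixing /etc/hosts alone doesn't help, hdfs-site.xml has properties that control which name a DataNode registers with and whether clients connect by hostname rather than by the reported IP. A minimal sketch (the hostname value is an example; set each node's real hostname):

<property>
  <name>dfs.datanode.hostname</name>
  <value>hadoop-data1</value>  <!-- example value: this node's real hostname -->
</property>
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>  <!-- clients resolve DataNodes by hostname instead of the reported IP -->
</property>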




Thanks & Regards

 Brahma Reddy Battula




________________________________
From: Daniel Watrous [dwmaillist@gmail.com]
Sent: Friday, September 25, 2015 8:41 PM
To: user@hadoop.apache.org
Subject: Re: Problem running example (wrong IP address)

I'm still stuck on this and posted it to stackoverflow:
http://stackoverflow.com/questions/32785256/hadoop-datanode-binds-wrong-ip-address

Thanks,
Daniel

On Fri, Sep 25, 2015 at 8:28 AM, Daniel Watrous <dwmaillist@gmail.com> wrote:
I could really use some help here. As you can see from the output below, the two attached
datanodes are identified with a non-existent IP address. Can someone tell me how that address
gets selected, or how to explicitly set it? Also, why are both datanodes shown under the same name/IP?

hadoop@hadoop-master:~$ hdfs dfsadmin -report
Configured Capacity: 84482326528 (78.68 GB)
Present Capacity: 75745546240 (70.54 GB)
DFS Remaining: 75744862208 (70.54 GB)
DFS Used: 684032 (668 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.51.1:50010 (192.168.51.1)
Hostname: hadoop-data1
Decommission Status : Normal
Configured Capacity: 42241163264 (39.34 GB)
DFS Used: 303104 (296 KB)
Non DFS Used: 4302479360 (4.01 GB)
DFS Remaining: 37938380800 (35.33 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Sep 25 13:25:37 UTC 2015


Name: 192.168.51.4:50010 (hadoop-master)
Hostname: hadoop-master
Decommission Status : Normal
Configured Capacity: 42241163264 (39.34 GB)
DFS Used: 380928 (372 KB)
Non DFS Used: 4434300928 (4.13 GB)
DFS Remaining: 37806481408 (35.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.50%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Sep 25 13:25:38 UTC 2015



On Thu, Sep 24, 2015 at 5:05 PM, Daniel Watrous <dwmaillist@gmail.com> wrote:
The IP address is clearly wrong, but I'm not sure how it gets set. Can someone tell me how
to configure it to choose a valid IP address?

On Thu, Sep 24, 2015 at 3:26 PM, Daniel Watrous <dwmaillist@gmail.com> wrote:
I just noticed that both datanodes appear to have chosen that IP address and bound that port
for HDFS communication.

http://screencast.com/t/OQNbrWFF

Any idea why this would be? Is there some way to specify which IP/hostname should be used
for that?
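
For reference, what address each DataNode is actually listening on can be checked on each node with something like (50010 is the default DataNode transfer port):

sudo netstat -tlnp | grep 50010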

On Thu, Sep 24, 2015 at 3:11 PM, Daniel Watrous <dwmaillist@gmail.com> wrote:
When I try to run a map reduce example, I get the following error:

hadoop@hadoop-master:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 10 30
Number of Maps  = 10
Samples per Map = 30
15/09/24 20:04:28 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.51.1:50010
        at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1334)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
15/09/24 20:04:28 INFO hdfs.DFSClient: Abandoning BP-852923283-127.0.1.1-1443119668806:blk_1073741825_1001
15/09/24 20:04:28 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.51.1:50010,DS-45f6e06d-752e-41e8-ac25-ca88bce80d00,DISK]
15/09/24 20:04:28 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 65357ms (threshold=30000ms)
Wrote input for Map #0

I'm not sure why it's trying to access 192.168.51.1:50010, which isn't even a valid IP address in my setup.

Daniel




