hadoop-common-user mailing list archives

From umer arshad <m_umer_ars...@hotmail.com>
Subject RE: hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink
Date Tue, 01 Sep 2009 13:31:41 GMT

I have resolved the issue:
What I did:

1) Ran '/etc/init.d/iptables stop' --> stopped the firewall
2) Set SELINUX=disabled in the '/etc/selinux/config' file --> disabled SELinux
It worked for me after these two changes; a rough sketch of the commands is below.
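
In case it helps anyone else, here is roughly what that boils down to on each node (the chkconfig, sed, setenforce and telnet lines are extra steps to make the change permanent and to verify it, not commands quoted verbatim from my session):

------------------------------------
# as root, on every node in the cluster
/etc/init.d/iptables stop        # stop the firewall right away
chkconfig iptables off           # keep it off after reboots

# disable SELinux permanently (takes effect on the next boot) ...
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# ... and drop it to permissive mode immediately, without a reboot
setenforce 0

# from the NameNode/client, verify that a DataNode's data-transfer
# port (50010) is now reachable, e.g.:
telnet 192.168.1.11 50010
------------------------------------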
thanks,
--umer

> From: m_umer_arshad@hotmail.com
> To: common-user@hadoop.apache.org
> Subject: hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink
> Date: Mon, 31 Aug 2009 23:35:31 +0000
> 
> 
> Hi,
> 
> I have set up an 8-node private Hadoop cluster with the following IP addresses:
> 
> 192.168.1.10 (master)
> 192.168.1.11 
> 192.168.1.12  
> 192.168.1.13
> 192.168.1.14
> 192.168.1.15
> 192.168.1.16
> 192.168.1.17
> 
> Address 192.168.1.10, the master, is running NN+JT, and all the other nodes are slaves, i.e. running DN+TT.
> I am trying to put data onto HDFS using the command: hadoop dfs -put 8GB_input 8GB_input
> 
> I have noticed that some blocks are not replicated/placed on the nodes with IP addresses
> 192.168.1.11, 192.168.1.15, and 192.168.1.16, and I get the following error messages:
> ------------------------------------
> $ hadoop dfs -put 8GB_input 8GB_input
> 09/08/31 18:25:45 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.11:50010
> 09/08/31 18:25:45 INFO hdfs.DFSClient: Abandoning block blk_-8575812198227241296_1001
> 09/08/31 18:25:51 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.16:50010
> 09/08/31 18:25:51 INFO hdfs.DFSClient: Abandoning block blk_-2932256218448902464_1001
> 09/08/31 18:25:57 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.11:50010
> 09/08/31 18:25:57 INFO hdfs.DFSClient: Abandoning block blk_-1014449966480421244_1001
> 09/08/31 18:26:03 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.16:50010
> 09/08/31 18:26:03 INFO hdfs.DFSClient: Abandoning block blk_7193173823538206978_1001
> 09/08/31 18:26:09 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2731)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)
> 
> 09/08/31 18:26:09 WARN hdfs.DFSClient: Error Recovery for block blk_7193173823538206978_1001 bad datanode[2] nodes == null
> 09/08/31 18:26:09 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/umer/8GB_input" - Aborting...
> put: Bad connect ack with firstBadLink 192.168.1.16:50010
> -------------------------------------------------
> 
> Sometimes the input file is replicated successfully (excluding these three nodes), and sometimes the copy process (the 'dfs -put' command above) simply terminates.
> 
> NOTE: My replication factor = 3. 
> 
> I can see that all the machines are up and running via the web UI at http://192.168.1.10:50070.
> 
> I would be grateful for any suggestions/comments in this regard.
> 
> thanks,
> --umer
> 
> 
> 
> 
