hadoop-hdfs-user mailing list archives

From TS chia <the.ts.c...@gmail.com>
Subject Re: bin/hadoop fs -copyFromLocal fails when 1 datanode download
Date Sun, 08 Jan 2012 10:02:30 GMT
Hi K.S, James
Thanks!

I had dfs.replication as 3.
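
(For reference, a file's actual replication can be checked with fs -stat; if I
remember right, %r prints the replication factor. The path below is just an
example:)

  bin/hadoop fs -stat %r /user/coka/somedir/somefile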

http://namenodeHost:50070/dfshealth.jsp did detect that a node was down, but it
took quite a while, 1 to 2 minutes. At times, it was almost instant.

Are there any tuning parameters for this?
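
(From what I can tell, the dead-node timeout is driven by the heartbeat
settings below. Property names and defaults are from my reading of the
1.x-era hdfs-default.xml, so please correct me if I have them wrong:)

  <!-- hdfs-site.xml -->
  <property>
    <name>dfs.heartbeat.interval</name>
    <value>3</value>            <!-- seconds between datanode heartbeats -->
  </property>
  <property>
    <name>heartbeat.recheck.interval</name>
    <value>300000</value>       <!-- ms between namenode liveness checks -->
  </property>
  <!-- A datanode is declared dead after roughly
       2 * heartbeat.recheck.interval + 10 * dfs.heartbeat.interval,
       i.e. 2*300s + 10*3s = 10.5 minutes with these defaults. -->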

Cheers
TS

On Fri, Jan 6, 2012 at 3:58 AM, James Warren <james.warren@stanfordalumni.org> wrote:

> Alternatively, it could depend on the replication factor of the file
> you're attempting to download.  If you're not using replication (which is a
> distinct possibility for a small cluster) and the file has a block on the
> datanode you shut down... well, I'd expect exceptions such as those you're
> encountering.
>
> HTH,
> -jw
>
> On Thu, Jan 5, 2012 at 11:37 AM, Bejoy Ks <bejoy.hadoop@gmail.com> wrote:
>
>> Hi
>>       After you stopped one of your datanodes, did you check whether it
>> was shown as a dead node in the HDFS report? You can view and confirm this
>> in the dead nodes list at http://namenodeHost:50070/dfshealth.jsp. The
>> error could be happening because the datanode is not yet marked as dead.
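>>
>> (The same information is available from the command line, e.g.:
>>
>>   bin/hadoop dfsadmin -report
>>
>> which prints each datanode's status along with a dead-node count.)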
>>
>> Regards
>> Bejoy.K.S
>>
>>
>> On Thu, Jan 5, 2012 at 9:53 PM, TS chia <the.ts.chia@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I am new to Hadoop. I was able to get 3 datanodes running and working.
>>> I deliberately shut down one datanode and executed
>>> "bin/hadoop fs -copyFromLocal ../hadoop.sh
>>> /user/coka/somedir/slave02-datanodeDown" to see what happens.
>>>
>>> The execution fails with the exception below.
>>>
>>> Why is that so?
>>>
>>> Thanks in advance.
>>>
>>> Cheers
>>> TS
>>>
>>>
>>> 12/01/05 15:41:40 INFO hdfs.DFSClient: Exception in
>>> createBlockOutputStream java.io.IOException: Bad connect ack with
>>> firstBadLink as 192.160.1.103:50010
>>> 12/01/05 15:41:40 INFO hdfs.DFSClient: Abandoning block
>>> blk_-4266327738363863152_1008
>>> 12/01/05 15:41:40 INFO hdfs.DFSClient: Waiting to find target node:
>>> 192.160.1.4:50010
>>> 12/01/05 15:41:49 INFO hdfs.DFSClient: Exception in
>>> createBlockOutputStream java.io.IOException: Bad connect ack with
>>> firstBadLink as 192.160.1.103:50010
>>> 12/01/05 15:41:49 INFO hdfs.DFSClient: Abandoning block
>>> blk_1476878908693006667_1008
>>> 12/01/05 15:42:13 INFO hdfs.DFSClient: Exception in
>>> createBlockOutputStream java.io.IOException: Bad connect ack with
>>> firstBadLink as 192.160.1.103:50010
>>> 12/01/05 15:42:13 INFO hdfs.DFSClient: Abandoning block
>>> blk_-7529382935054091141_1008
>>> 12/01/05 15:42:13 INFO hdfs.DFSClient: Waiting to find target node:
>>> 192.160.1.4:50010
>>> 12/01/05 15:42:22 INFO hdfs.DFSClient: Exception in
>>> createBlockOutputStream java.io.IOException: Bad connect ack with
>>> firstBadLink as 192.160.1.103:50010
>>> 12/01/05 15:42:22 INFO hdfs.DFSClient: Abandoning block
>>> blk_375095740848321980_1008
>>> 12/01/05 15:42:28 WARN hdfs.DFSClient: DataStreamer Exception:
>>> java.io.IOException: Unable to create new block.
>>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3002)
>>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
>>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)
>>>
>>> 12/01/05 15:42:28 WARN hdfs.DFSClient: Error Recovery for block
>>> blk_375095740848321980_1008 bad datanode[1] nodes == null
>>> 12/01/05 15:42:28 WARN hdfs.DFSClient: Could not get block locations.
>>> Source file "/user/coka/somedir/slave02-datanodeDown" - Aborting...
>>> copyFromLocal: Bad connect ack with firstBadLink as 192.160.1.103:50010
>>> 12/01/05 15:42:28 ERROR hdfs.DFSClient: Exception closing file
>>> /user/coka/somedir/slave02-datanodeDown : java.io.IOException: Bad connect
>>> ack with firstBadLink as 192.160.1.103:50010
>>> java.io.IOException: Bad connect ack with firstBadLink as
>>> 192.160.1.103:50010
>>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:3068)
>>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2983)
>>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
>>> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)
>>>
>>>
>>
>
