hadoop-hdfs-user mailing list archives

From Jitendra Yadav <jeetuyadav200...@gmail.com>
Subject Re: Strange error on Datanodes
Date Mon, 02 Dec 2013 15:54:43 GMT
Which Hadoop distro are you using? It would also be good if you could share
the logs from the datanode on which the data block
(blk_-2927699636194035560_63092) exists, and from the namenodes as well.

Regards
Jitendra


On Mon, Dec 2, 2013 at 9:13 PM, Siddharth Tiwari
<siddharth.tiwari@live.com>wrote:

> Hi Jeet
>
> I have a cluster of 25 nodes: 4 admin nodes and 21 datanodes,
> with 2 NameNodes, 2 JobTrackers, 3 ZooKeepers, and 3 QJNs.
>
> If you could help me understand what kind of logs you want, I will
> provide them. Do you need hdfs-site.xml, core-site.xml, and
> mapred-site.xml?
>
>
> **------------------------**
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God.” *
> *"Maybe other people will try to limit me but I don't limit myself"*
>
>
> ------------------------------
> Date: Mon, 2 Dec 2013 21:09:03 +0530
> Subject: Re: Strange error on Datanodes
> From: jeetuyadav200890@gmail.com
> To: user@hadoop.apache.org
>
>
> Hi,
>
> Can you share some more logs from the datanodes? Could you please also
> share the conf and the cluster size?
>
> Regards
> Jitendra
>
>
> On Mon, Dec 2, 2013 at 8:49 PM, Siddharth Tiwari <
> siddharth.tiwari@live.com> wrote:
>
> Hi team
>
> I see the following errors on datanodes. What is the reason for this,
> and how can it be resolved?
>
> 2013-12-02 13:11:36,441 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-1854340821-10.238.9.151-1385733655875:blk_-2927699636194035560_63092
> java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.238.10.43:54040 remote=/10.238.10.43:50010]
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:156)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:117)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:83)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:83)
> 	at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:169)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:114)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:694)
> 2013-12-02 13:12:06,572 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> 2013-12-02 13:12:06,581 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.IOException: All datanodes 10.238.10.43:50010 are bad. Aborting...
> 2013-12-02 13:12:06,581 WARN org.apache.hadoop.mapred.Child: Error running child
> java.io.IOException: All datanodes 10.238.10.43:50010 are bad. Aborting...
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:959)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:779)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
> 2013-12-02 13:12:06,583 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
>
>
>
>
>
>
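
A note on the trace above: the 65000 ms figure matches the HDFS client read timeout, which in Hadoop 2.x defaults to 60 s (`dfs.client.socket-timeout`) plus a per-pipeline extension. If the datanode is simply slow to ack (e.g. under heavy I/O) rather than dead, one common mitigation is raising the read and write socket timeouts in hdfs-site.xml. This is only a sketch; the values below are illustrative, not recommendations, and should be tuned for the cluster:

```xml
<!-- hdfs-site.xml fragment (sketch; values are illustrative assumptions) -->
<!-- Client-side read timeout: raise from the 60 s default to 120 s -->
<property>
  <name>dfs.client.socket-timeout</name>
  <value>120000</value> <!-- milliseconds -->
</property>
<!-- Datanode write timeout for the pipeline, raised to match -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>120000</value> <!-- milliseconds -->
</property>
```

Raising timeouts only masks the symptom if the real cause is a failing disk or an overloaded datanode, so the datanode logs for 10.238.10.43 are still worth checking first.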
