hadoop-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: DataNode Timeout exceptions.
Date Wed, 27 May 2015 00:29:23 GMT
bq. All datanodes 112.123.123.123:50010 are bad. Aborting...

How many datanodes do you have?

Can you check the datanode and namenode logs?
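
One thing to note: the 65000 ms read timeout in the stack trace normally comes from the HDFS client socket timeout (dfs.client.socket-timeout, 60000 ms by default, plus a 5000 ms extension per pipeline node), not from mapreduce.task.timeout, so raising the MapReduce task timeout will not affect it. If long GC pauses are indeed the cause, you could try raising the HDFS socket timeouts in hdfs-site.xml instead. The values below are only illustrative, not a recommendation:

    <property>
      <name>dfs.client.socket-timeout</name>
      <value>300000</value>
    </property>
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>300000</value>
    </property>

These apply on the client and datanode side respectively; restart the affected daemons after changing them.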

Cheers

On Tue, May 26, 2015 at 5:00 PM, S.L <simpleliving016@gmail.com> wrote:

> Hi All,
>
> I am on Apache Hadoop YARN 2.3.0, and lately I have been seeing these
> exceptions frequently. Can someone tell me the root cause of this issue?
>
> I have set the property in mapred-site.xml as follows; is there any
> other property that I also need to set?
>
>     <property>
>       <name>mapreduce.task.timeout</name>
>       <value>1800000</value>
>       <description>
>       The timeout value for tasks. I set this because the JVMs might be
> busy in GC, and this is causing timeouts in Hadoop tasks.
>       </description>
>     </property>
>
>
>
> 15/05/26 02:06:53 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor
> exception  for block
> BP-1751673171-112.123.123.123-1431824104307:blk_1073749395_8571
> java.net.SocketTimeoutException: 65000 millis timeout while waiting for
> channel to be ready for read. ch :
> java.nio.channels.SocketChannel[connected local=/112.123.123.123:35398
> remote=/112.123.123.123:50010]
> at
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
> at
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1881)
> at
> org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:726)
> 15/05/26 02:06:53 INFO mapreduce.JobSubmitter: Cleaning up the staging
> area /tmp/hadoop-yarn/staging/df/.staging/job_1431824165463_0221
> 15/05/26 02:06:54 WARN security.UserGroupInformation:
> PriviledgedActionException as:df (auth:SIMPLE) cause:java.io.IOException:
> All datanodes 112.123.123.123:50010 are bad. Aborting...
> 15/05/26 02:06:54 WARN security.UserGroupInformation:
> PriviledgedActionException as:df (auth:SIMPLE) cause:java.io.IOException:
> All datanodes 112.123.123.123:50010 are bad. Aborting...
> Exception in thread "main" java.io.IOException: All datanodes
> 112.123.123.123:50010 are bad. Aborting...
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1023)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:838)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:483)
>
