hadoop-mapreduce-user mailing list archives

From marius <m.die0...@googlemail.com>
Subject Re: sendChunks error
Date Mon, 20 Jul 2015 14:50:36 GMT
Hi,

I tried reinstalling Hadoop on all nodes; it is now a five-node setup 
(4 slaves, 1 slave/master). It still gives me the same error on all nodes, 
though the error is not constant: it comes and goes from time to time. 
This is the log from one datanode:
http://pastebin.com/SQd0G5tF

It is still Hadoop 2.6.0 on CentOS 7; the hardware varies from node to 
node.

These are my configs:
http://pastebin.com/Fmi8bafT

Greetings Marius



On 17.07.2015 at 18:15, Ted Yu wrote:
> bq. IOException: Die Verbindung wurde vom Kommunikationspartner 
> zurückgesetzt
>
> Looks like the above means 'The connection was reset by the 
> communication partner'
>
> Which Hadoop release do you use?
>
> Can you pastebin more of the datanode log?
>
> Thanks
>
> On Fri, Jul 17, 2015 at 9:11 AM, marius <m.die0123@googlemail.com 
> <mailto:m.die0123@googlemail.com>> wrote:
>
>     Hi,
>
>     when I tried to run some jobs on my Hadoop cluster, I found the
>     following error in my datanode logs
>     (the German means 'connection reset by peer'):
>
>     2015-07-17 16:33:45,671 ERROR
>     org.apache.hadoop.hdfs.server.datanode.DataNode:
>     BlockSender.sendChunks() exception:
>     java.io.IOException: Die Verbindung wurde vom
>     Kommunikationspartner zurückgesetzt
>             at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>             at
>     sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:443)
>             at
>     sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:575)
>             at
>     org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
>             at
>     org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
>             at
>     org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:728)
>             at
>     org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:496)
>             at
>     org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>             at
>     org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>             at
>     org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>             at java.lang.Thread.run(Thread.java:745)
>
>     I already googled this but could not find anything.
>     The error appears several times, then vanishes and the
>     job proceeds normally; the job does not fail. This happens on
>     various nodes. I already formatted my namenode, but that did not fix it.
>
>     Thanks and greetings
>
>     Marius
>
>
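The German IOException in the quoted trace is the localized ECONNRESET message: the remote HDFS client aborted the TCP connection while the DataNode's transferTo() was still streaming block data. A minimal, hypothetical Java sketch of that failure mode, using plain sockets rather than Hadoop code (the class name, port choice, and buffer sizes are invented for illustration):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResetDemo {
    // Reproduce "connection reset by peer": the reader aborts the TCP
    // connection (RST) while the sender is still writing, which is what
    // a DataNode sees when an HDFS client stops reading a block early.
    public static IOException demo() throws Exception {
        try (ServerSocket srv = new ServerSocket(0)) {  // ephemeral port
            Thread client = new Thread(() -> {
                try (Socket c = new Socket("127.0.0.1", srv.getLocalPort())) {
                    // SO_LINGER with timeout 0 makes close() send an RST
                    // instead of a normal FIN shutdown.
                    c.setSoLinger(true, 0);
                } catch (IOException ignored) { }
            });
            client.start();
            try (Socket conn = srv.accept()) {
                client.join();
                OutputStream out = conn.getOutputStream();
                byte[] chunk = new byte[65536];
                // Keep streaming until the RST surfaces as an exception
                // ("Connection reset" or "Broken pipe", depending on timing).
                for (int i = 0; i < 100; i++) {
                    out.write(chunk);
                }
            } catch (IOException e) {
                return e;
            }
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        IOException e = demo();
        System.out.println(e == null ? "no error" : e.getMessage());
    }
}
```

Here the client's SO_LINGER(0) close sends a TCP RST, so the server's ongoing writes eventually fail with the same kind of "connection reset" SocketException the DataNode logs; on the Hadoop side this is generally a symptom of the client (or a network device between them) dropping the connection, not of on-disk block corruption.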

