hadoop-common-dev mailing list archives

From "Marius (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HADOOP-12260) BlockSender.sendChunks() exception
Date Thu, 23 Jul 2015 08:57:04 GMT
Marius created HADOOP-12260:

             Summary: BlockSender.sendChunks() exception
                 Key: HADOOP-12260
                 URL: https://issues.apache.org/jira/browse/HADOOP-12260
             Project: Hadoop Common
          Issue Type: Bug
    Affects Versions: 2.7.1, 2.6.0
         Environment: OS: CentOS Linux release 7.1.1503 (Core) 
Kernel: 3.10.0-229.1.2.el7.x86_64
            Reporter: Marius

I was running some streaming jobs with Avro files on my Hadoop cluster. They performed poorly,
so I checked the logs of my datanodes and found this:

The cluster is running on CentOS machines:
CentOS Linux release 7.1.1503 (Core)
This is the kernel:
3.10.0-229.1.2.el7.x86_64
No one on the user list replied, and I could not find anything helpful on the internet apart
from disk failure, which is unlikely to be the cause here: several machines are affected, and
it is improbable that all of their disks would fail at the same time.
The error is not reported on the console when running a job; it occurs from time to time,
disappears, and then comes back again.
The block size of the cluster is the default value.

This is my command:
hadoop jar hadoop-streaming-2.7.1.jar -files mapper.py,reducer.py,avro-1.7.7.jar,avro-mapred-1.7.7-hadoop2.jar
-D mapreduce.job.reduces=15 -libjars avro-1.7.7.jar,avro-mapred-1.7.7-hadoop2.jar
-input /Y/Y1.avro -output /htest/output -mapper mapper.py -reducer reducer.py -inputformat
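The actual mapper.py and reducer.py are not included in the report. For context, a minimal sketch of what a streaming mapper over Avro input might look like: with the avro-mapred "Avro as text" input format, each input line arrives on stdin as a JSON-encoded record, so the mapper parses JSON and emits tab-separated key/value pairs. The field name "key" is an assumption; the real schema is not shown.

```python
#!/usr/bin/env python
# Hypothetical sketch of the streaming mapper (not the reporter's actual script).
# Assumes Avro records are delivered as one JSON object per line on stdin.
import json
import sys


def map_lines(lines):
    """Yield one tab-separated (key, count) pair per JSON-encoded record."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        # "key" is an assumed field name; substitute a field from the real schema.
        yield "%s\t1" % record.get("key", "NULL")


if __name__ == "__main__":
    for pair in map_lines(sys.stdin):
        print(pair)
```

Hadoop Streaming then groups the emitted pairs by key before handing them to reducer.py, so a mapper only needs to write `key<TAB>value` lines to stdout.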


This message was sent by Atlassian JIRA
