hadoop-common-user mailing list archives

From abhishek sharma <absha...@gmail.com>
Subject resolution to Hadoop error: mapred.JobClient: Error reading task output
Date Sun, 24 Jan 2010 02:43:12 GMT
Hi all,

I had sent a query yesterday asking about the following error

WARN mapred.JobClient: Error reading task output
http://<machine.domainname>:50060/tasklog?plaintext=true&taskid=attempt_201001221644_0001_r_000001_2&filter=stdout
INFO mapred.JobClient: Task Id : attempt_201001221644_0001_r_000001_2, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)

I found the reason for it at
http://stackoverflow.com/questions/2091287/error-in-hadoop-mapreduce

The answer is pasted below:

One reason Hadoop produces this error is that the directory containing
the task log files has become too full. This is a limit of the ext3
filesystem, which allows at most 32000 links per inode; since each
subdirectory consumes one link on its parent, a single directory can
hold only about 32000 subdirectories.

Check how full your logs directory is in: {hadoop-home}/logs/userlogs
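To see how close you are to the limit, you can simply count the per-task directories under userlogs. A minimal sketch, using a scratch directory as a stand-in for {hadoop-home}/logs/userlogs (substitute your real path):

```shell
# Scratch stand-in for {hadoop-home}/logs/userlogs -- replace with the
# real path on your cluster.
LOGS=$(mktemp -d)
for i in 1 2 3 4 5; do
  mkdir "$LOGS/attempt_$i"   # simulate a few per-task log directories
done

# Count the entries in the directory.
COUNT=$(ls -1 "$LOGS" | wc -l)
echo "userlogs holds $COUNT directories"

# ext3 allows roughly 32000 links per inode, so warn well before that.
if [ "$COUNT" -gt 31000 ]; then
  echo "WARNING: approaching the ext3 subdirectory limit"
fi

rm -rf "$LOGS"
```

Run against the real userlogs path, a count anywhere near 32000 means you have hit (or are about to hit) this limit.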

A simple test for this problem is to try creating a directory from
the command line, for example:

$ mkdir {hadoop-home}/logs/userlogs/testdir

If userlogs already holds too many directories, the OS will refuse to
create another one and report that there are too many links.

Thanks,
Abhishek
