hadoop-common-dev mailing list archives

From abhishek sharma <absha...@gmail.com>
Subject resolution to Hadoop error: mapred.JobClient: Error reading task output
Date Sun, 24 Jan 2010 02:43:12 GMT
Hi all,

I had sent a query yesterday asking about the following error

WARN mapred.JobClient: Error reading task
INFO mapred.JobClient: Task Id : attempt_201001221644_0001_r_000001_2,
Status : FAILED java.io.IOException: Task process exit with nonzero
status of 1. at

I found the reason for it online; the answer is pasted below:

One reason Hadoop produces this error is that the directory containing
the task log files accumulates too many subdirectories. This runs into
a limit of the ext3 filesystem, which allows at most 32000 links per
inode (so a single directory can hold at most 31998 subdirectories).

Check how many entries have built up in your logs directory: {hadoop-home}/logs/userlogs
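A quick way to do that check from the shell (a minimal sketch; the install path is an assumption, so substitute your own {hadoop-home}):

```shell
# Hedged sketch: count the entries under a Hadoop userlogs directory.
# Ext3 caps a directory at 31998 subdirectories (32000 links per inode,
# minus the "." entry and the parent's link).
count_entries() {
  ls -1 "$1" 2>/dev/null | wc -l
}

# Assumed install path for illustration only; adjust to your setup:
count_entries /usr/local/hadoop/logs/userlogs
```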

A simple test for this problem is to try to create a new directory
there from the command line, for example: $ mkdir {hadoop-home}/logs/userlogs/testdir

If userlogs already holds too many directories, the OS will fail to
create it and report "Too many links".
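The link accounting behind that limit can be seen directly (a generic sketch on a scratch directory, not specific to Hadoop): a directory's hard-link count is 2 plus one link per immediate subdirectory, and it is that count which hits ext3's 32000 ceiling.

```shell
# Sketch: a directory's link count is 2 ("." plus the parent's entry)
# plus one ".." link per subdirectory, so ext3's 32000-link cap allows
# at most 31998 subdirectories. Demonstrated on a throwaway directory:
dir=$(mktemp -d)
mkdir "$dir/a" "$dir/b" "$dir/c"
links=$(stat -c %h "$dir")   # 2 + 3 subdirectories = 5
echo "$links"
rm -r "$dir"
```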

