hadoop-common-user mailing list archives

From Raj V <rajv...@yahoo.com>
Subject Re: Reduce Error
Date Thu, 09 Dec 2010 04:00:06 GMT

Go through the jobtracker, find the node that handled
attempt_201012061426_0001_m_000292_0, and figure out
whether there are filesystem or permission problems.
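A quick sketch of the checks to run on that node. The path below is an assumption; substitute whatever your mapred.local.dir actually points at, and run as the user that owns the TaskTracker:

```shell
# Hypothetical local dir; replace with your real mapred.local.dir value.
DIR="${MAPRED_LOCAL_DIR:-/home/hadoop/mapred/local}"

df -h "$DIR"        # is there free space on the underlying device?
ls -ld "$DIR"       # is it owned and writable by the hadoop user?

# Confirm the TaskTracker user can actually create files there.
touch "$DIR/.write_test" && rm "$DIR/.write_test" && echo "writable"
```

If the last line does not print "writable", the DiskChecker exception is most likely a permission or full-disk problem on that node.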


From: Adarsh Sharma <adarsh.sharma@orkash.com>
To: common-user@hadoop.apache.org
Sent: Wed, December 8, 2010 7:48:47 PM
Subject: Re: Reduce Error

Ted Yu wrote:
> Any chance mapred.local.dir is under /tmp and part of it got cleaned up ?
> On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma <adarsh.sharma@orkash.com>wrote:
>> Dear all,
>> Did anyone encounter the below error while running a job in Hadoop? It occurs
>> in the reduce phase of the job.
>> attempt_201012061426_0001_m_000292_0:
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any
>> valid local directory for
>> It states that it is not able to locate a file that is created in
>>  mapred.local.dir of Hadoop.
>> Thanks in Advance for any sort of information regarding this.
>> Best Regards
>> Adarsh Sharma
Hi Ted,

My mapred.local.dir is in the /home/hadoop directory. I also checked it in the
/hdd2-2 directory, where we have lots of space.

Would mapred.map.tasks affect this?

I checked with the default and also with 80 maps and 16 reduces, as I have 8 slaves.

mapred.local.dir:
<description>The local directory where MapReduce stores intermediate
data files. May be a comma-separated list of directories on different devices
in order to spread disk i/o. Directories that do not exist are ignored.</description>

mapred.system.dir:
<description>The shared directory where MapReduce stores control files.</description>
Let me know if you need any further information.

Thanks & Regards

Adarsh Sharma