hadoop-hdfs-user mailing list archives

From Chris Nauroth <cnaur...@hortonworks.com>
Subject Re: Could not find any valid local directory for jobcache EXCEPTION
Date Thu, 21 May 2015 17:55:51 GMT
Based on this stack trace, I'm guessing that you're running a 1.x version
of Hadoop.

The TaskTracker uses a set of local directories on the node to store
submitted job files during task execution.  These directories are
configured in mapred-site.xml in the property named mapred.local.dir.  The
DiskErrorException means that even after trying every directory configured
in mapred.local.dir, the TaskTracker couldn't find a place to store the
files.  Possible root causes are misconfiguration, permissions on the
local directories blocking access, full disks, or failed disks that have
been remounted read-only.
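To make the fallback behavior concrete, here is a minimal sketch (not the
actual Hadoop implementation; the function name and error text are
illustrative) of the kind of loop the TaskTracker runs over its configured
local directories, and why it gives up with a DiskErrorException only after
every directory has been rejected:

```python
import os
import shutil

def pick_local_dir(local_dirs, needed_bytes):
    """Illustrative sketch: try each configured local directory in turn,
    skipping any that is missing, unwritable, or too full, and fail only
    when none is usable -- roughly what produces the
    'Could not find any valid local directory' error."""
    for d in local_dirs:
        if not os.path.isdir(d):
            continue  # misconfigured or missing directory
        if not os.access(d, os.W_OK):
            continue  # permissions problem, or disk remounted read-only
        if shutil.disk_usage(d).free < needed_bytes:
            continue  # disk full
        return d
    raise OSError("Could not find any valid local directory")
```

Checking each configured directory by hand against those same three
conditions (exists, writable, has free space) is usually the fastest way to
find which one is causing the exception.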

I hope this helps.

--Chris Nauroth

On 5/21/15, 3:51 AM, "Marko Dinic" <marko.dinic@nissatech.com> wrote:

>I'm new to Hadoop and I'm getting the following exception when I try to
>run my job on Hadoop cluster:
>org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
>any valid local directory for jobcache/job_201409031055_3865/jars/job.jar
>     at ...
>     at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1381)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java
>Can anyone please tell me what seems to be the problem?
>Best regards,
