hadoop-common-user mailing list archives

From Ross Boucher <bouc...@apple.com>
Subject Re: Running Custom Job
Date Fri, 21 Sep 2007 18:00:47 GMT
I ran into this problem again yesterday, and thought I'd share the
cause for the record.

After looking through my logs, I saw a bunch of exceptions from
attempts to bind to ports that were already in use.  As a result, the
namenode and jobtracker were failing to launch, which I had not
noticed.  In fact, jps reported that they were not running.  But in
reality they were running as some sort of zombie processes on my
system, which blocked the startup.  After killing those processes,
everything went back to working well.
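For anyone hitting the same symptom: jps can miss a stale or detached JVM, so it's worth cross-checking with ps and with the port itself before restarting the daemons. A minimal sketch (the port and grep pattern are examples, not taken from my setup; 50070 is the default NameNode web UI port in this era of Hadoop):

```shell
#!/bin/sh
# Cross-check for stale Hadoop daemons that jps does not report.
# 1) Look for the JVM by name (the [N] trick keeps grep out of its own results):
#      ps aux | grep '[N]ameNode'
# 2) See what is actually holding the port (may need root on some systems):
#      lsof -i TCP:50070 -sTCP:LISTEN

# Once you have a suspect PID, kill -0 sends no signal; it only tests
# whether that PID still exists, so you can confirm before killing it.
pid=$$   # our own PID here, so this sketch is self-contained
if kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is alive"
else
  echo "process $pid is gone"
fi
```

If the process really is a zombie holding the port, a plain kill (or kill -9 as a last resort) on that PID, followed by restarting the daemons, is what cleared it up for me.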

On Sep 19, 2007, at 2:59 PM, Owen O'Malley wrote:

>
> On Sep 19, 2007, at 2:30 PM, Ross Boucher wrote:
>
>> Specifically, the job starts, and then each task that is scheduled  
>> fails, with the following error:
>>
>> Error initializing task_0007_m_000063_0:
>> java.io.IOException: /DFS_ROOT/tmp/mapred/system/submit_i849v1/ 
>> job.xml: No such file or directory
>
> Look at the configuration of your mapred.system.dir. It MUST be the  
> same on both the cluster and submitting node. Note that  
> mapred.system.dir must be in the default file system, which must  
> also be the same on the cluster and submitting node. Note that  
> there is a jira (HADOOP-1100) that would have the cluster pass the  
> system directory to the client, which would get rid of this issue.
>
> -- Owen
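For the record, the configuration Owen describes lives in hadoop-site.xml, and the same values must appear on both the cluster and the submitting node. A hypothetical sketch (host name, port, and path here are made-up examples, not from my cluster):

```xml
<configuration>
  <!-- The default file system: must be identical on the cluster
       and on the node that submits jobs. -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
  <!-- Must live in the default file system above, and must also be
       identical on both sides, or task initialization fails with
       "job.xml: No such file or directory" as seen here. -->
  <property>
    <name>mapred.system.dir</name>
    <value>/tmp/mapred/system</value>
  </property>
</configuration>
```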

