incubator-mesos-dev mailing list archives

From "Jessica J (JIRA)" <>
Subject [jira] [Commented] (MESOS-206) Long-running jobs on Hadoop framework do not run to completion
Date Thu, 28 Jun 2012 14:53:44 GMT


Jessica J commented on MESOS-206:

Yeah, there are a number of these errors for multiple tasks that receive resources, start,
and finally arrive at the TASK_FINISHED state. The JobTracker shows the error I pasted above
for each "unknown" task; the master log says:

I0628 09:48:01.400383 25789 master.cpp:956] Status update from slave(1)@[slave-ip]:59707:
task [task #] of framework 201206280753-36284608-5050-25784-0001 is now in state TASK_FINISHED
W0628 09:48:01.400524 25789 master.cpp:988] Status update from slave(1)@[slave-ip]:59707 ([slave
hostname]): error, couldn't lookup task [task #]

These status updates come from multiple slave nodes as well, so it's not just a single node.
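
To line up the two views, a minimal sketch along these lines (the class and its logging are hypothetical, not part of the Hadoop framework code; it only assumes the statusUpdate() callback and TaskStatus type from the Mesos Java bindings) could record what the scheduler itself sees for each task:

import org.apache.mesos.Protos.TaskState;
import org.apache.mesos.Protos.TaskStatus;

public final class StatusUpdateLogger {

  // Hypothetical helper, not part of the Hadoop-on-Mesos code: call this from
  // the framework's Scheduler.statusUpdate() callback so the scheduler's view
  // of each task can be compared against the master log above ("is now in
  // state TASK_FINISHED" vs. "couldn't lookup task").
  public static void log(TaskStatus status) {
    String taskId = status.getTaskId().getValue();
    TaskState state = status.getState();
    System.err.printf("scheduler saw task %s in state %s%n", taskId, state);

    // A TASK_FINISHED update the master can no longer match to a task suggests
    // the master dropped the framework's tasks (e.g. on deactivation) before
    // these updates were processed.
    if (state == TaskState.TASK_FINISHED) {
      System.err.printf("task %s finished on the scheduler side%n", taskId);
    }
  }
}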

The first exception I see in the JobTracker's logs is a FileNotFoundException:

12/06/28 08:17:30 INFO mapred.TaskInProgress: Error from attempt_201206280805_0002_r_000014_1:
Error initializing attempt_201206280805_0002_r_000014_1: File does not exist: hdfs://namenode:54310/scratch/hadoop/mapred/staging/jessicaj/.staging/job_201206280805_0002/job.jar
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(
        at org.apache.hadoop.fs.FileUtil.copy(
        at org.apache.hadoop.fs.FileUtil.copy(
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(
        at org.apache.hadoop.mapred.TaskTracker$
        at Method)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(
        at org.apache.hadoop.mapred.TaskTracker$

However, the JobTracker starts scheduling tasks "with 0 map slots and 0 reduce slots" (my
first indication that something is wrong) a full 5 minutes before this exception occurs, so
I'm not sure how the two correlate.
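
One way to rule a premature staging cleanup in or out would be a quick standalone check with the Hadoop FileSystem API (a sketch only; the class name is made up and the path is simply copied from the exception above) to see whether job.jar is still in HDFS while attempts keep failing:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical standalone check, not part of either project: ask HDFS whether
// the staging job.jar from the FileNotFoundException above still exists. If it
// is already gone while the JobTracker is still handing out attempts, the
// staging directory was cleaned up before task localization finished.
public final class StagingJarCheck {
  public static void main(String[] args) throws Exception {
    // Path copied from the JobTracker log above.
    Path jobJar = new Path("hdfs://namenode:54310/scratch/hadoop/mapred/staging/"
        + "jessicaj/.staging/job_201206280805_0002/job.jar");

    Configuration conf = new Configuration();
    FileSystem fs = jobJar.getFileSystem(conf);

    System.out.println(jobJar + (fs.exists(jobJar) ? " exists" : " is missing"));
  }
}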

> Long-running jobs on Hadoop framework do not run to completion
> --------------------------------------------------------------
>                 Key: MESOS-206
>                 URL:
>             Project: Mesos
>          Issue Type: Bug
>          Components: framework
>            Reporter: Jessica J
>            Priority: Blocker
> When I run the MPI and Hadoop frameworks simultaneously with long-running jobs, the Hadoop
> jobs fail to complete. The MPI job, which is shorter, completes normally, and the Hadoop framework
> continues for a while, but eventually, although it appears to still be running, it stops making
> progress on the jobs. The JobTracker keeps running, but each line of output indicates no map
> or reduce tasks are actually being executed:
> 12/06/08 10:55:41 INFO mapred.FrameworkScheduler: Assigning tasks for [slavehost] with
> 0 map slots and 0 reduce slots
> I've examined the master's log and noticed this:
> I0608 10:40:43.106740  6317 master.cpp:681] Deactivating framework 201206080825-36284608-5050-6311-0000
> as requested by scheduler(1)@[my-ip]:59317
> The framework ID is that of the Hadoop framework. This message is followed by messages
> indicating the slaves "couldn't lookup task [#]" and "couldn't lookup framework 201206080825-36284608-5050-6311-0000."
> At first I thought this error was a fluke, since it does not happen with shorter-running
> jobs or with the Hadoop framework running independently (i.e., no MPI), but I have now
> consistently reproduced it 4 times.
> UPDATE: I just had the same issue occur when running Hadoop + Mesos without the MPI framework
> running simultaneously.

This message is automatically generated by JIRA.

