hadoop-mapreduce-issues mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-2177) The wait for spill completion should call Condition.awaitNanos(long nanosTimeout)
Date Mon, 08 Nov 2010 21:38:07 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12929747#action_12929747 ]

Chris Douglas commented on MAPREDUCE-2177:
------------------------------------------

The progress reporting during the merge is not on every record emitted. For jobs with combiners
that emit far fewer records than they consume, it's possible that the framework fails to report
progress, though (1) IIRC it reports at least once for every partition and (2) that wouldn't
explain why the job is taking so much longer for a particular spill.
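
For illustration only, here is a minimal sketch of the combiner-side situation described above, written against the old 0.20 mapred API: one reduce() call can consume a huge number of values before emitting a single record, so the sketch pings the reporter itself. The class name and the reporting interval are invented for this example:

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical combiner that emits far fewer records than it
    // consumes; it reports progress periodically itself instead of
    // relying on the framework's per-record accounting.
    public class SummingCombiner extends MapReduceBase
        implements Reducer<Text, LongWritable, Text, LongWritable> {
      public void reduce(Text key, Iterator<LongWritable> values,
                         OutputCollector<Text, LongWritable> output,
                         Reporter reporter) throws IOException {
        long sum = 0;
        long seen = 0;
        while (values.hasNext()) {
          sum += values.next().get();
          if (++seen % 10000 == 0) {
            reporter.progress();  // liveness ping only; no counters touched
          }
        }
        output.collect(key, new LongWritable(sum));
      }
    }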

Adding some reporting in the reader could make sense, but we could use more information. Adding
progress reporting only to prevent the job from being killed may be the wrong fix.

bq. But since we don't know how long each call to writer.append() / combinerRunner.combine() would take, there is no guarantee that we can prevent this issue from happening.

If the task is stuck, then it should be killed. I agree that the timeout mechanism's granularity
is too coarse to measure all progress, but the overhead of measuring every event is too high
to be the default.
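
(As an aside, per-event measurement is usually made affordable by throttling the reports rather than emitting one per event. A hypothetical helper, not anything that exists in MapTask, might look like this:)

    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical rate-limited progress reporter: the per-event cost is
    // one System.nanoTime() call; reporter.progress() runs at most once
    // per interval.
    public class ThrottledProgress {
      private final Reporter reporter;
      private final long intervalNanos;
      private long lastReport = System.nanoTime();

      public ThrottledProgress(Reporter reporter, long intervalMillis) {
        this.reporter = reporter;
        this.intervalNanos = intervalMillis * 1000000L;
      }

      public void maybeReport() {
        long now = System.nanoTime();
        if (now - lastReport >= intervalNanos) {
          reporter.progress();
          lastReport = now;
        }
      }
    }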

bq. Reporting progress from a thread that isn't blocked by long write to disk or combiner call is one option. We can put some limit on the total amount of time spillDone.awaitNanos() calls take in the following loop:

Again, _that_ thread isn't making progress. It shouldn't prevent the task from getting killed
if the merge is truly stuck.
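
For reference, a rough sketch of what an awaitNanos()-based wait with a bounded budget might look like. The spillLock/spillDone/kvstart/kvend/reporter fields are assumed to be those of the surrounding MapTask.MapOutputBuffer; the budget parameter and the exception on expiry are assumptions of this sketch, not part of any proposed patch:

    // Hypothetical variant of the wait loop quoted in the issue below.
    // Condition.awaitNanos() returns an estimate of the nanoseconds
    // remaining, so a value <= 0 means the budget is exhausted.
    private void waitForSpill(long budgetNanos)
        throws IOException, InterruptedException {
      long remaining = budgetNanos;
      spillLock.lock();
      try {
        while (kvstart != kvend) {
          reporter.progress();  // keep the TaskTracker lease alive
          if (remaining <= 0L) {
            throw new IOException("Spill did not complete within the budget");
          }
          remaining = spillDone.awaitNanos(remaining);
        }
      } finally {
        spillLock.unlock();
      }
    }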

Ted, please provide some details on the job you're running (with a combiner? do re-executions succeed? does this happen on particular machines? do other tasks complete normally while another is in this state?).

> The wait for spill completion should call Condition.awaitNanos(long nanosTimeout)
> ---------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2177
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2177
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: tasktracker
>    Affects Versions: 0.20.2
>            Reporter: Ted Yu
>
> We sometimes saw map task timeouts in cdh3b2. Here is the log from one of the map tasks:
> 2010-11-04 10:34:23,820 INFO org.apache.hadoop.mapred.MapTask: Spilling map output: buffer full= true
> 2010-11-04 10:34:23,820 INFO org.apache.hadoop.mapred.MapTask: bufstart = 119534169; bufend = 59763857; bufvoid = 298844160
> 2010-11-04 10:34:23,820 INFO org.apache.hadoop.mapred.MapTask: kvstart = 438913; kvend = 585320; length = 983040
> 2010-11-04 10:34:41,615 INFO org.apache.hadoop.mapred.MapTask: Finished spill 3
> 2010-11-04 10:35:45,352 INFO org.apache.hadoop.mapred.MapTask: Spilling map output: buffer full= true
> 2010-11-04 10:35:45,547 INFO org.apache.hadoop.mapred.MapTask: bufstart = 59763857; bufend = 298837899; bufvoid = 298844160
> 2010-11-04 10:35:45,547 INFO org.apache.hadoop.mapred.MapTask: kvstart = 585320; kvend = 731585; length = 983040
> 2010-11-04 10:45:41,289 INFO org.apache.hadoop.mapred.MapTask: Finished spill 4
> Note how long the last spill took.
> In MapTask.java, the following code waits for the spill to finish:
> while (kvstart != kvend) { reporter.progress(); spillDone.await(); }
> The code in trunk is similar.
> There is no timeout mechanism for Condition.await(). If the SpillThread takes a long time before calling spillDone.signal(), we would see a timeout.
> Condition.awaitNanos(long nanosTimeout) should be called instead.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

