hadoop-mapreduce-issues mailing list archives

From "Jason Lowe (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MAPREDUCE-5043) Fetch failure processing can cause AM event queue to backup and eventually OOM
Date Sat, 02 Mar 2013 22:03:13 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Lowe updated MAPREDUCE-5043:

    Attachment: MAPREDUCE-5043.patch

Patch to implement the proposed approach.  No unit tests since this is an optimization.

Did some rudimentary performance testing on a sleep job with 1000 maps and 1000 reduces,
forcing the reducer's Fetcher to always fail.  With this job profile fetch failure processing
appears to be well over 10x faster.  Before the patch, processing a fetch failure for this
job profile took around 10 msec per map attempt in the event.  After the patch, processing
a single map attempt in a fetch failure event took well under a millisecond, and processing
30+ map attempts took around 1 millisecond total.
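The shape of the optimization can be sketched as below. This is a minimal illustration only: the classes, fields, and method names here are hypothetical stand-ins for the pattern described in the issue, not the actual Hadoop MRAppMaster code.

```java
// Illustrative sketch, NOT the real Hadoop classes: the fetch-failure path
// needs only the attempt's phase, so expose it directly instead of building
// (and then discarding) a full task attempt report.
public class PhaseLookupSketch {
    enum Phase { STARTING, MAP, SHUFFLE, SORT, REDUCE, CLEANUP }

    static class Report {
        Phase phase;
        long[] counters;   // the bulk of the conversion cost in the real code
    }

    static class TaskAttempt {
        private final Phase phase = Phase.SHUFFLE;
        private final long[] counters = new long[128];

        // Before: the only way to learn the phase was to build the whole
        // report, counters and all, then keep one field.
        Report getReport() {
            Report r = new Report();
            r.phase = phase;
            r.counters = counters.clone();   // expensive per-call conversion
            return r;
        }

        // After: a cheap accessor for the single field the caller needs,
        // skipping report construction entirely.
        Phase getPhase() {
            return phase;
        }
    }

    public static void main(String[] args) {
        TaskAttempt attempt = new TaskAttempt();
        Phase before = attempt.getReport().phase;   // builds the full report
        Phase after = attempt.getPhase();           // plain field read
        System.out.println(before + " " + after);   // SHUFFLE SHUFFLE
    }
}
```

Since the cheap accessor does no per-call allocation or conversion, calling it once per reduce per failed map stays inexpensive even for large jobs, which matches the measured speedup above.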
> Fetch failure processing can cause AM event queue to backup and eventually OOM
> ------------------------------------------------------------------------------
>                 Key: MAPREDUCE-5043
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5043
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mr-am
>    Affects Versions: 0.23.7, 2.0.4-beta
>            Reporter: Jason Lowe
>            Assignee: Jason Lowe
>            Priority: Blocker
>         Attachments: MAPREDUCE-5043.patch
> Saw an MRAppMaster with a 3G heap OOM.  While investigating another running instance of it,
> we saw the UI in a weird state where the task table and task attempt tables on the job overview
> page weren't consistent.  The AM log showed the AsyncDispatcher had hundreds of thousands
> of events in the event queue, and jstacks showed it spending a lot of time in fetch failure
> processing.  It turns out fetch failure processing is currently *very* expensive, with a triple
> {{for}} loop where the inner loop calls the quite-expensive {{TaskAttempt.getReport}}.
> That method ends up type-converting the entire task report, counters and all, and performing
> locale conversions among other things.  It does this for every reduce task in the job, for
> every map task that failed.  And when it's done building up the large task report, it pulls
> out one field, the phase, then throws the report away.
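The cost shape described above can be sketched as follows. The loop nesting and counts here are hypothetical, chosen only to show why the work multiplies; this is not the actual Hadoop code.

```java
// Illustrative sketch of the described cost shape: a full report is built
// once per reduce, per failed map attempt, per queued fetch-failure event,
// even though only one field of each report is ever read.
public class TripleLoopCostSketch {
    static int reportsBuilt = 0;

    // Stand-in for TaskAttempt.getReport(): in the real code this converts
    // the entire report, counters, locales and all.
    static String buildFullReport() {
        reportsBuilt++;
        return "SHUFFLE";
    }

    public static void main(String[] args) {
        int fetchFailureEvents = 2;    // events waiting in the queue (assumed)
        int mapAttemptsPerEvent = 3;   // failed map attempts per event (assumed)
        int reduces = 1000;            // reduce tasks in the job

        for (int e = 0; e < fetchFailureEvents; e++) {
            for (int m = 0; m < mapAttemptsPerEvent; m++) {
                for (int r = 0; r < reduces; r++) {
                    String phase = buildFullReport();  // expensive inner call
                    // ...only `phase` is consumed; the rest is thrown away
                }
            }
        }
        // 2 * 3 * 1000 = 6000 full report conversions for a handful of failures
        System.out.println(reportsBuilt);
    }
}
```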
> While the AM is busy processing fetch failures, task attempts continue to send events
> to the AM, including memory-expensive events like status updates which include the counters.
> These back up in the AsyncDispatcher event queue, and eventually even an AM with a large heap
> will run out of memory and crash, or expire because it is thrashing in garbage collection.
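The queue-backup failure mode can be demonstrated in miniature. This sketch uses a plain `LinkedBlockingQueue` rather than the real AsyncDispatcher; the event counts and payload size are arbitrary stand-ins for counter-laden status updates.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch (not the real AsyncDispatcher): when event handling is
// slower than event arrival, an unbounded queue grows without limit, which is
// how the AM heap eventually fills with pending status-update events.
public class QueueBackupSketch {
    public static void main(String[] args) {
        LinkedBlockingQueue<int[]> queue = new LinkedBlockingQueue<>();

        // Task attempts enqueue 1000 "status update" events, each carrying a
        // payload standing in for its counters...
        for (int i = 0; i < 1000; i++) {
            queue.offer(new int[256]);
        }
        // ...while the dispatcher, stuck in slow fetch-failure processing,
        // drains only 10 in the same interval.
        for (int i = 0; i < 10; i++) {
            queue.poll();
        }
        System.out.println("backlog=" + queue.size());  // backlog=990
    }
}
```

Each retained event pins its payload in the heap, so the backlog, not the steady-state load, determines memory use; making fetch-failure handling fast keeps the consumer ahead of the producers.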

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
