hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-4018) limit memory usage in jobtracker
Date Fri, 03 Oct 2008 06:44:44 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-4018:
-------------------------------------

    Attachment: maxSplits9.patch

Incorporated all review comments except one.

If the limit check fails, the JobTracker raises an exception after marking the job as failed.
Amar had commented that it is not necessary to mark the job status as "failed" before raising
the exception. However, I have seen that unless this status is set, a job that is expected to
fail does not actually fail, and the unit test then fails.
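
For context, here is a minimal sketch of the behavior described above, assuming a simplified
run-state field; the class, field, and method names are illustrative only, not the ones used
in maxSplits9.patch:

    import java.io.IOException;

    // Hypothetical sketch only: names are illustrative, not taken from the actual patch.
    class JobInProgressSketch {
        static final int RUNNING = 1, FAILED = 2;
        private volatile int runState = RUNNING;

        // Called during job initialization with the number of tasks and the configured limit.
        void checkTaskLimit(int numTasks, int maxTasksPerJob) throws IOException {
            if (maxTasksPerJob > 0 && numTasks > maxTasksPerJob) {
                // Mark the job failed *before* throwing; otherwise the job never reaches
                // the failed state and a test that expects the job to fail does not pass.
                runState = FAILED;
                throw new IOException("Job has " + numTasks
                    + " tasks, which exceeds the configured limit of " + maxTasksPerJob);
            }
        }

        int getRunState() {
            return runState;
        }
    }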

> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: maxSplits.patch, maxSplits2.patch, maxSplits3.patch, maxSplits4.patch, maxSplits5.patch, maxSplits6.patch, maxSplits7.patch, maxSplits8.patch, maxSplits9.patch
>
>
> We have seen instances where a user submitted a job with many thousands of mappers. The
> JobTracker was running with a 3GB heap, but that was still not enough to prevent memory
> thrashing caused by garbage collection; effectively, the JobTracker was unable to serve
> jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job via a configurable
> parameter. Are there other things that consume large amounts of memory in the JobTracker?
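
Regarding the configurable limit proposed above, here is a minimal sketch of how such a
per-job task limit could be read from the job configuration; the property name and default
value are illustrative assumptions, not necessarily what the committed patch uses:

    import org.apache.hadoop.mapred.JobConf;

    // Hypothetical sketch: the property name and default below are illustrative.
    class TaskLimitConfigSketch {
        // Returns the configured per-job task limit; a non-positive value means "no limit".
        static int getMaxTasksPerJob(JobConf conf) {
            return conf.getInt("mapred.jobtracker.maxtasks.per.job", -1);
        }
    }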

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

