hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-4018) limit memory usage in jobtracker
Date Thu, 09 Oct 2008 06:34:46 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-4018:
-------------------------------------

    Fix Version/s: 0.19.0

Thanks, Amar, for reviewing it. I am marking this for 0.19 because the limit is very necessary
for clusters that run permanent JobTrackers (i.e., not using HOD); otherwise a single erroneous
job could swamp the entire cluster. The fix is very low-risk, so I am proposing that it
go into the 0.19 branch.

> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.19.0
>
>         Attachments: maxSplits.patch, maxSplits10.patch, maxSplits2.patch, maxSplits3.patch,
maxSplits4.patch, maxSplits5.patch, maxSplits6.patch, maxSplits7.patch, maxSplits8.patch,
maxSplits9.patch
>
>
> We have seen instances where a user submitted a job with many thousands of mappers. The
JobTracker was running with a 3 GB heap, but that was still not enough to prevent memory
thrashing from garbage collection; effectively, the JobTracker could no longer serve jobs and
had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job. This could be
a configurable parameter. Are there other things that eat huge globs of memory in the
JobTracker?
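
The proposal above amounts to a guard at job-submission time. As a minimal sketch of the
idea (the configuration key name, the "value <= 0 means unlimited" convention, and the
helper method below are assumptions for illustration, not necessarily what the attached
patches implement):

    import java.io.IOException;

    public class MaxTasksPerJobCheck {
        // Hypothetical config key; a value <= 0 means "no limit" in this sketch.
        static final String MAX_TASKS_KEY = "mapred.jobtracker.maxtasks.per.job";

        // Run at submission time, before any per-task bookkeeping objects are
        // allocated, so an oversized job is rejected instead of exhausting the heap.
        static void checkTaskLimit(int numMaps, int numReduces, int maxTasksPerJob)
                throws IOException {
            int totalTasks = numMaps + numReduces;
            if (maxTasksPerJob > 0 && totalTasks > maxTasksPerJob) {
                throw new IOException("Job rejected: " + totalTasks
                    + " tasks requested, but " + MAX_TASKS_KEY
                    + " is set to " + maxTasksPerJob);
            }
        }
    }

Rejecting the job before its per-task objects are created is what keeps one oversized job
from consuming the JobTracker heap and degrading the whole cluster.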

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

