hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4523) Enhance how memory-intensive user tasks are handled
Date Thu, 30 Oct 2008 20:22:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12644088#action_12644088 ]

Owen O'Malley commented on HADOOP-4523:
---------------------------------------

This jira isn't very clear. What are you proposing to change? Is it to make mapred.tasktracker.tasks.maxmemory
pluggable? If so, I'd propose making an interface like:

{code}
abstract class MemoryPlugin {
  abstract long getVirtualMemorySize(Configuration conf);
}
{code}

and you configure an implementation of it. (mapred.server.memory.plugin ?)
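To make the shape of the proposal concrete, here is a minimal sketch of such a plugin and one possible implementation. This is illustrative only: the Configuration class below is a small stand-in for org.apache.hadoop.conf.Configuration (so the example is self-contained), and ConfiguredMemoryPlugin is a hypothetical implementation, not anything in Hadoop.

{code}
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.hadoop.conf.Configuration, just enough for the sketch.
class Configuration {
  private final Map<String, String> props = new HashMap<>();

  void set(String key, String value) { props.put(key, value); }

  long getLong(String key, long defaultValue) {
    String v = props.get(key);
    return v == null ? defaultValue : Long.parseLong(v);
  }
}

// The pluggable interface from the comment above.
abstract class MemoryPlugin {
  abstract long getVirtualMemorySize(Configuration conf);
}

// Hypothetical implementation: read a fixed limit from configuration,
// using the existing property named in the comment; -1 means "not set".
class ConfiguredMemoryPlugin extends MemoryPlugin {
  @Override
  long getVirtualMemorySize(Configuration conf) {
    return conf.getLong("mapred.tasktracker.tasks.maxmemory", -1L);
  }
}

public class MemoryPluginExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("mapred.tasktracker.tasks.maxmemory", "2147483648"); // 2 GB
    MemoryPlugin plugin = new ConfiguredMemoryPlugin();
    System.out.println(plugin.getVirtualMemorySize(conf)); // prints 2147483648
  }
}
{code}

The TaskTracker would then instantiate whichever MemoryPlugin subclass the proposed mapred.server.memory.plugin property names, e.g. via reflection.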


> Enhance how memory-intensive user tasks are handled
> ---------------------------------------------------
>
>                 Key: HADOOP-4523
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4523
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.19.0
>            Reporter: Vivek Ratan
>            Assignee: Vinod K V
>
> HADOOP-3581 monitors each Hadoop task to see if its memory usage (which includes usage
> of any tasks spawned by it and so on) is within a per-task limit. If the task's memory usage
> goes over its limit, the task is killed. This, by itself, is not enough to prevent badly behaving
> jobs from bringing down nodes. What is also needed is the ability to make sure that the sum
> total of VM usage of all Hadoop tasks does not exceed a certain limit.
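The node-wide check the description asks for amounts to summing each running task's (process-tree) VM usage and comparing against a TaskTracker-wide limit. A minimal sketch, assuming illustrative names and a made-up 4 GB node limit, neither of which is anything in Hadoop:

{code}
import java.util.HashMap;
import java.util.Map;

public class NodeMemoryCheck {
  // Total virtual memory the node allows across all tasks (illustrative value).
  static final long NODE_LIMIT = 4L * 1024 * 1024 * 1024; // 4 GB

  // True when the sum of per-task usage stays within the node-wide limit.
  static boolean withinNodeLimit(Map<String, Long> taskVmemBytes) {
    long total = 0;
    for (long usage : taskVmemBytes.values()) {
      total += usage;
    }
    return total <= NODE_LIMIT;
  }

  public static void main(String[] args) {
    Map<String, Long> usage = new HashMap<>();
    usage.put("attempt_1", 1L * 1024 * 1024 * 1024); // 1 GB
    usage.put("attempt_2", 2L * 1024 * 1024 * 1024); // 2 GB
    System.out.println(withinNodeLimit(usage)); // prints true (3 GB <= 4 GB)
  }
}
{code}

When the check fails, the TaskTracker would need a policy for which task to kill, which is part of what this issue leaves open.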

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

