hadoop-common-dev mailing list archives

From "Hemanth Yamijala (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3581) Prevent memory intensive user tasks from taking down nodes
Date Thu, 17 Jul 2008 11:03:32 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12614295#action_12614295 ]

Hemanth Yamijala commented on HADOOP-3581:
------------------------------------------

bq. A user should specify the MAX RAM in GB or MB that the tasks will use.

+1. I think that is much easier for a user to specify. 

Here's what I propose with respect to the configuration variables:

- mapred.tasktracker.tasks.maxmemory: Cumulative memory that can be used by all map/reduce tasks running on a node.
- mapred.map.task.maxmemory: (Overridable per job) Maximum memory any map task of a job can take. Defaults to mapred.tasktracker.tasks.maxmemory / number of slots on the node.
- mapred.reduce.task.maxmemory: (Overridable per job) Maximum memory any reduce task of a job can take. Defaults to mapred.tasktracker.tasks.maxmemory / number of slots on the node.

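To make the defaults concrete, here is a minimal sketch of how a TaskTracker could derive the per-task default from the proposed cumulative limit. This is purely illustrative; the class and method names below are hypothetical and not taken from any attached patch.

// Illustrative only: hypothetical helper showing how the proposed defaults
// (mapred.tasktracker.tasks.maxmemory divided by the number of slots on the
// node) could be computed. Not part of the actual patch.
public class TaskMemoryDefaults {

  /**
   * @param tasksMaxMemory value of mapred.tasktracker.tasks.maxmemory, in bytes
   * @param mapSlots       number of map slots configured on the node
   * @param reduceSlots    number of reduce slots configured on the node
   * @return default per-task memory limit in bytes
   */
  public static long defaultPerTaskMemory(long tasksMaxMemory,
                                          int mapSlots, int reduceSlots) {
    int totalSlots = mapSlots + reduceSlots;
    if (totalSlots <= 0) {
      throw new IllegalArgumentException("Node must have at least one slot");
    }
    // Same default for map and reduce tasks: cumulative limit / total slots.
    return tasksMaxMemory / totalSlots;
  }

  public static void main(String[] args) {
    // Example: 8 GB cumulative limit, 2 map slots and 2 reduce slots on the node
    long limit = 8L * 1024 * 1024 * 1024;
    System.out.println(defaultPerTaskMemory(limit, 2, 2)); // prints 2147483648 (2 GB)
  }
}
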
Thoughts? Specifically, on the default values, is it OK to give map tasks and reduce tasks the same maximum memory? Or should we divide the available memory so that reduce tasks get more (say, twice as much) than map tasks?

> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
>                 Key: HADOOP-3581
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3581
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod Kumar Vavilapalli
>         Attachments: patch_3581_0.1.txt
>
>
> Sometimes user Map/Reduce applications can get extremely memory intensive, maybe due
> to some inadvertent bugs in the user code, or the amount of data processed. When this happens,
> the user tasks start to interfere with the proper execution of other processes on the node,
> including other Hadoop daemons like the DataNode and TaskTracker. Thus, the node would become
> unusable for any Hadoop tasks. There should be a way to prevent such tasks from bringing down
> the node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

