hadoop-common-dev mailing list archives

From "Vinod K V (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3581) Prevent memory intensive user tasks from taking down nodes
Date Tue, 09 Sep 2008 04:53:44 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod K V updated HADOOP-3581:
------------------------------

    Release Note: Added the ability to kill process-trees that transgress memory limits, and
modified the TaskTracker to use this for controlling tasks; the TT uses the configuration
parameters introduced in HADOOP-3759. In addition,
mapred.tasktracker.taskmemorymanager.monitoring-interval specifies the interval the TT waits
between successive cycles of monitoring tasks' memory usage, and
mapred.tasktracker.procfsbasedprocesstree.sleeptime-before-sigkill specifies how long the TT
waits before sending a SIGKILL to a process-tree that has overrun its memory limits, after it
has been sent a SIGTERM.
    Hadoop Flags: [Reviewed]
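The escalation described in the release note — signal the whole process-tree with SIGTERM,
wait a configurable grace period, then SIGKILL any survivors — can be sketched as follows.
This is an illustrative Python sketch, not Hadoop's actual (Java) implementation; the function
name and the default grace period are assumptions.

```python
import os
import signal
import subprocess
import time

def kill_process_tree(pgid, sleeptime_before_sigkill=5.0):
    """SIGTERM a process group, then SIGKILL it after a grace period.

    Mirrors the TaskTracker behaviour described above; the name and the
    5-second default are illustrative, not Hadoop's real API.
    """
    try:
        os.killpg(pgid, signal.SIGTERM)   # ask the tree to exit cleanly
    except ProcessLookupError:
        return                            # group already gone
    time.sleep(sleeptime_before_sigkill)  # grace period before escalating
    try:
        os.killpg(pgid, signal.SIGKILL)   # force-kill any survivors
    except ProcessLookupError:
        pass                              # tree exited during the grace period

# Usage: start the child in its own process group (as a task launcher would),
# so that killpg reaches the whole tree rather than just the direct child.
proc = subprocess.Popen(["sleep", "60"], preexec_fn=os.setsid)
kill_process_tree(os.getpgid(proc.pid), sleeptime_before_sigkill=0.5)
print(proc.wait() < 0)  # negative return code => terminated by a signal
```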

> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
>                 Key: HADOOP-3581
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3581
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod K V
>         Attachments: HADOOP-3581-final.txt, HADOOP-3581.20080901.2.txt, HADOOP-3581.20080902.txt,
>                      HADOOP-3581.20080904.txt, HADOOP-3581.20080905.txt, HADOOP-3581.20080908.txt,
>                      HADOOP-3581.6.0.txt, patch_3581_0.1.txt, patch_3581_3.3.txt, patch_3581_4.3.txt,
>                      patch_3581_4.4.txt, patch_3581_5.0.txt, patch_3581_5.2.txt
>
>
> Sometimes user Map/Reduce applications can become extremely memory intensive, perhaps due
> to inadvertent bugs in the user code or to the amount of data processed. When this happens,
> the user tasks start to interfere with the proper execution of other processes on the node,
> including other Hadoop daemons such as the DataNode and TaskTracker, making the node
> unusable for any Hadoop tasks. There should be a way to prevent such tasks from bringing
> down the node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

