hadoop-common-dev mailing list archives

From "Vinod Kumar Vavilapalli (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3581) Prevent memory intensive user tasks from taking down nodes
Date Wed, 03 Sep 2008 06:43:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12627920#action_12627920 ]

Vinod Kumar Vavilapalli commented on HADOOP-3581:
-------------------------------------------------

The first findBugs warning was already explained and cannot be avoided.
bq. The warning "Hard coded reference to an absolute pathname in org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessList()"
It refers to the absolute path "/proc", which is unavoidable: /proc is the fixed mount point of the Linux procfs that this class reads.
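To illustrate why the hard-coded path is inherent, here is a minimal sketch of the kind of enumeration getProcessList() has to perform; the class name ProcListSketch and the exact filtering logic are assumptions for illustration, not the actual Hadoop code.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: procfs exposes one directory per process, named by
// its numeric PID, under the fixed mount point "/proc". Any enumeration of
// running processes must therefore reference that absolute path.
public class ProcListSketch {
    private static final String PROCFS = "/proc"; // fixed Linux mount point

    public static List<Integer> getProcessList() {
        List<Integer> pids = new ArrayList<Integer>();
        String[] entries = new File(PROCFS).list();
        if (entries == null) {
            return pids; // procfs not available (e.g. non-Linux platform)
        }
        for (String name : entries) {
            // Only purely numeric entries are process directories.
            if (name.matches("\\d+")) {
                pids.add(Integer.valueOf(name));
            }
        }
        return pids;
    }

    public static void main(String[] args) {
        System.out.println("processes seen: " + getProcessList().size());
    }
}
```

On any Linux host the list is non-empty, since at least the JVM's own PID directory exists under /proc.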

The second warning suggests making the member class ProcessTreeInfo private and static. The current
class hierarchy is TaskTracker (non-static) -> TaskMemoryManagerThread (non-static) -> ProcessTreeInfo.
ProcessTreeInfo is related only to TaskMemoryManagerThread, so we wish to leave it there.
And because TaskMemoryManagerThread is itself a non-static inner class, Java does not allow its member class ProcessTreeInfo to be declared static.
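The restriction can be shown with a stripped-down sketch of the nesting described above; the field and method here are placeholders, not the real members of these classes.

```java
// Hypothetical sketch of the three-level nesting: TaskTracker holds a
// non-static inner class TaskMemoryManagerThread, which in turn holds
// the member class ProcessTreeInfo. The real Hadoop classes are far larger.
public class TaskTracker {
    class TaskMemoryManagerThread {
        // Because TaskMemoryManagerThread is an inner (non-static) class,
        // the Java language of this era (pre-Java 16) forbids declaring a
        // static member class here: "static class ProcessTreeInfo" would
        // be rejected by javac with a compile-time error.
        class ProcessTreeInfo {
            long memoryLimit; // placeholder field for illustration
        }

        ProcessTreeInfo newInfo(long limit) {
            ProcessTreeInfo info = new ProcessTreeInfo();
            info.memoryLimit = limit;
            return info;
        }
    }

    public static void main(String[] args) {
        TaskTracker tt = new TaskTracker();
        TaskMemoryManagerThread thread = tt.new TaskMemoryManagerThread();
        System.out.println(thread.newInfo(1024L).memoryLimit);
    }
}
```

So the only way to satisfy findBugs here would be to also make TaskMemoryManagerThread static, which changes more than this patch intends.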

The findBugs warnings are unavoidable. Patch is committable.

> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
>                 Key: HADOOP-3581
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3581
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod Kumar Vavilapalli
>         Attachments: HADOOP-3581-final.txt, HADOOP-3581.20080901.2.txt, HADOOP-3581.20080902.txt,
>                      HADOOP-3581.6.0.txt, patch_3581_0.1.txt, patch_3581_3.3.txt, patch_3581_4.3.txt,
>                      patch_3581_4.4.txt, patch_3581_5.0.txt, patch_3581_5.2.txt
>
>
> Sometimes user Map/Reduce applications can get extremely memory intensive, maybe due
> to some inadvertent bugs in the user code, or the amount of data processed. When this
> happens, the user tasks start to interfere with the proper execution of other processes
> on the node, including other Hadoop daemons like the DataNode and TaskTracker. Thus,
> the node would become unusable for any Hadoop tasks. There should be a way to prevent
> such tasks from bringing down the node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

