hadoop-common-dev mailing list archives

From "Vinod Kumar Vavilapalli (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3581) Prevent memory intensive user tasks from taking down nodes
Date Fri, 05 Sep 2008 04:21:44 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Vinod Kumar Vavilapalli updated HADOOP-3581:
--------------------------------------------

    Attachment: HADOOP-3581.20080905.txt

bq. In the addTask and removeTask API, the synchronization is not done while checking whether
tasksToBeAdded and tasksToBeRemoved contain the object or not. I think this synchronization
is required, because the run method will access and modify these data structures.
We actually don't need these checks, because nowhere in our code is a task added to these data
structures twice. Removed the checks and attached a new patch.
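To illustrate the point above, here is a minimal sketch of the pattern being discussed (the class and field names are hypothetical, not the actual patch code): producers append to the pending lists under the shared lock, and the contains() check can be dropped as long as callers never enqueue the same task twice.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a monitor thread periodically drains tasksToBeAdded,
// while other threads append to it. All mutation happens under the list's
// monitor; no contains() check is done because each task is added only once.
public class TaskListSketch {
    private final List<String> tasksToBeAdded = new ArrayList<String>();

    // Called when a task starts; callers guarantee each taskId is added once.
    public void addTask(String taskId) {
        synchronized (tasksToBeAdded) {
            tasksToBeAdded.add(taskId); // no duplicate check needed
        }
    }

    // Called from the monitoring thread's run() loop: atomically take a
    // snapshot of the pending tasks and clear the shared list.
    public List<String> drainAdded() {
        synchronized (tasksToBeAdded) {
            List<String> snapshot = new ArrayList<String>(tasksToBeAdded);
            tasksToBeAdded.clear();
            return snapshot;
        }
    }
}
```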

bq. In task cleanup, do we need synchronization when the PID directory is added to the cleaner
thread's queue?
Not needed. The cleaner thread uses a LinkedBlockingQueue, which handles locking internally.
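A small sketch of why no external synchronization is needed (class and method names here are illustrative, not from the patch): java.util.concurrent.LinkedBlockingQueue takes its own internal locks on every offer/take, so producers and the cleaner thread can touch the queue concurrently without extra locking.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical stand-in for the cleaner thread's queue of PID directories.
public class CleanupQueueSketch {
    private final LinkedBlockingQueue<String> pidDirsToClean =
            new LinkedBlockingQueue<String>();

    // Producer side: safe from any thread, because LinkedBlockingQueue
    // acquires its internal put-lock inside offer().
    public void enqueue(String pidDir) {
        pidDirsToClean.offer(pidDir);
    }

    // Consumer side (the cleaner thread): take() blocks until an
    // element is available, using the queue's internal take-lock.
    public String next() throws InterruptedException {
        return pidDirsToClean.take();
    }
}
```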

> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
>                 Key: HADOOP-3581
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3581
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod Kumar Vavilapalli
>         Attachments: HADOOP-3581-final.txt, HADOOP-3581.20080901.2.txt, HADOOP-3581.20080902.txt,
HADOOP-3581.20080904.txt, HADOOP-3581.20080905.txt, HADOOP-3581.6.0.txt, patch_3581_0.1.txt,
patch_3581_3.3.txt, patch_3581_4.3.txt, patch_3581_4.4.txt, patch_3581_5.0.txt, patch_3581_5.2.txt
>
>
> Sometimes user Map/Reduce applications can become extremely memory intensive, perhaps due
> to inadvertent bugs in the user code or to the amount of data processed. When this happens,
> the user tasks start to interfere with the proper execution of other processes on the node,
> including other Hadoop daemons like the DataNode and TaskTracker, making the node unusable
> for any Hadoop tasks. There should be a way to prevent such tasks from bringing down the
> node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

