hadoop-common-dev mailing list archives

From "Vinod Kumar Vavilapalli (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3581) Prevent memory intensive user tasks from taking down nodes
Date Sat, 26 Jul 2008 10:41:31 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod Kumar Vavilapalli updated HADOOP-3581:
--------------------------------------------

    Attachment: patch_3581_3.3.txt

Attaching patch for review. It still doesn't have test-cases and documentation.

Notes:
 - TaskMemoryManagerThread: This is a thread in the TaskTracker that manages the memory usage
of tasks running under this TT. It is responsible for killing any task-trees that over-step
memory limits. It uses MONITORING_INTERVAL, the interval for which TaskMemoryManager sleeps
before initiating another memory-management cycle. The default value is 300ms; we still need
to pick an appropriate, but small, value for this.
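A minimal sketch of one such management cycle, assuming an illustrative class and method names (not the patch's actual API); the real thread would read per-tree memory totals from procfs and kill the offending trees rather than just report them:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of a TaskMemoryManagerThread cycle; names are assumptions.
class MemoryMonitor {
    // interval the thread sleeps between memory-management cycles (patch default: 300 ms)
    static final long MONITORING_INTERVAL = 300;

    // per-task memory limit in bytes, keyed by task id
    final Map<String, Long> limits = new HashMap<>();
    // stand-in for the per-tree memory totals the real code reads from procfs
    final Map<String, Long> usage = new HashMap<>();

    // one cycle: report the task trees that have over-stepped their limits
    List<String> overLimitTasks() {
        List<String> offenders = new ArrayList<>();
        for (Map.Entry<String, Long> e : limits.entrySet()) {
            long used = usage.getOrDefault(e.getKey(), 0L);
            if (used > e.getValue()) {
                offenders.add(e.getKey()); // the real thread would kill this tree
            }
        }
        return offenders;
    }
}
```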
 - TaskMemoryManagerThread tracks tasks using objects of the abstract class ProcessTree.
Currently, we only have an implementation for Linux and Cygwin - ProcfsBasedProcessTree, a
proc file-system based ProcessTree.
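For reference, the kind of information such a procfs-based tree needs per process is the ppid (to build the tree) and vsize (to total memory), both available in /proc/&lt;pid&gt;/stat. A hedged sketch of parsing those fields, following the proc(5) field layout (ProcfsBasedProcessTree's actual parsing may differ):

```java
// Hedged sketch: extract pid, ppid and vsize from a /proc/<pid>/stat line.
// Field positions follow proc(5); the class name is illustrative.
class ProcStat {
    final int pid, ppid;
    final long vsize;

    ProcStat(String statLine) {
        // field 2 (comm) may contain spaces, so split only after the last ')'
        int close = statLine.lastIndexOf(')');
        pid = Integer.parseInt(statLine.substring(0, statLine.indexOf(' ')));
        String[] rest = statLine.substring(close + 2).split(" ");
        // rest[0] is field 3 (state); field k maps to rest[k - 3]
        ppid = Integer.parseInt(rest[1]);   // field 4: parent pid
        vsize = Long.parseLong(rest[20]);   // field 23: virtual memory size in bytes
    }
}
```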
 - For managing memory, ProcfsBasedProcessTree needs the pid of the root task to begin with.
For this, the way tasks are started is changed so as to store the pid of the started task
process in a temporary PidFile (by echoing $$). With this, we are doing away with the earlier
proposal of writing native code to get the pid, which would involve another external library.
Using shell features to get the pid is straightforward, simple to incorporate and doesn't need
multiple implementations.
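The shell trick can be sketched as follows - a hypothetical helper that builds the wrapper command (the path, helper name, and use of exec are assumptions for illustration, not necessarily what the patch does): the shell writes its own pid ($$) to the PidFile, and exec replaces the shell with the task so the recorded pid is the root of the task's process tree.

```java
// Hypothetical wrapper builder; the patch's actual command construction may differ.
class PidFileCommand {
    static String wrap(String pidFile, String taskCommand) {
        // $$ expands to the launching shell's pid; exec replaces the shell with
        // the task process, so the recorded pid is the task-tree root.
        return "echo $$ > " + pidFile + "; exec " + taskCommand;
    }
}
```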
 - PidFiles reside in the PIDDIR of the TaskTracker's work-space. They are removed once a
task's process-tree is killed or finishes.
 - Processes that survive the initial SIGTERM are killed by sending a subsequent SIGKILL after
SLEEP_TIME_BEFORE_SIGKILL. This is currently set to 5 secs, but should be changed to an
appropriate value; the main downside of too large a value is that it leaves rogue tasks enough
time to keep expanding their memory usage beyond the set limits.
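A minimal sketch of this two-step kill, shelling out to the kill utility via ProcessBuilder (the patch's actual process-killing code may work differently); the grace period is taken as a parameter here so it can be exercised quickly, with 5000 ms as the current patch value:

```java
import java.io.IOException;

// Illustrative SIGTERM-then-SIGKILL escalation; class and method names are assumptions.
class TreeKiller {
    // current patch value for the grace period between SIGTERM and SIGKILL
    static final long SLEEP_TIME_BEFORE_SIGKILL = 5000; // ms

    // send the named signal to a single pid via the kill utility
    static void signal(String pid, String sig) throws IOException, InterruptedException {
        new ProcessBuilder("kill", "-" + sig, pid).start().waitFor();
    }

    // polite SIGTERM, a grace period, then SIGKILL for any survivor
    static void destroy(String pid, long graceMs) throws IOException, InterruptedException {
        signal(pid, "TERM");
        Thread.sleep(graceMs);
        signal(pid, "KILL");
    }
}
```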
 - All three configuration parameters default to Long.MAX_VALUE (memory management disabled
by default).
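In other words, Long.MAX_VALUE acts as an "unset" sentinel; a sketch of the check (the helper and its name are illustrative, not from the patch):

```java
// Hypothetical helper: a limit left at its Long.MAX_VALUE default means
// memory management is disabled for that scope.
class MemLimits {
    static final long DISABLED = Long.MAX_VALUE; // default for all three parameters

    static boolean managementEnabled(long configuredLimit) {
        return configuredLimit != DISABLED;
    }
}
```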
 - Zombie process-trees: We continue to manage non-empty process-trees even after their root
processes (Tasks) exit, so as to take care of rogue tasks that may silently fork off offspring
before exiting.

TODO:
 - Deprecate all of the ulimit business - i.e. deprecate the mapred.child.ulimit feature
provided by HADOOP-2765. We may still want to retain it for limiting other things like open
files, but HADOOP-3675 should automatically provide such a task-setup feature. Comments?
 - Incorporate some of the methods in ProcfsBasedProcessTree (isEmpty, isZombie, reconstruct,
etc.) into ProcessTree?

Also, please comment on a bunch of other minor TODOs marked in the patch.

Testing:
Tested the patch on a Linux cluster
 - with no limits (all three parameters left unspecified),
 - with only the TT limit set (tasks get default limits), and
 - with user-configured per-job limits (which override the TT's limits). TaskMemoryManager
works as desired in all the above scenarios.

> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
>                 Key: HADOOP-3581
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3581
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod Kumar Vavilapalli
>         Attachments: patch_3581_0.1.txt, patch_3581_3.3.txt
>
>
> Sometimes user Map/Reduce applications can get extremely memory intensive, maybe due
to some inadvertent bugs in the user code, or the amount of data processed. When this happens,
the user tasks start to interfere with the proper execution of other processes on the node,
including other Hadoop daemons like the DataNode and TaskTracker. Thus, the node would become
unusable for any Hadoop tasks. There should be a way to prevent such tasks from bringing down
the node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

