hadoop-common-dev mailing list archives

From "Brice Arnould (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3581) Prevent memory intensive user tasks from taking down nodes
Date Tue, 15 Jul 2008 09:45:31 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613568#action_12613568 ]

Brice Arnould commented on HADOOP-3581:
---------------------------------------

bq. Implementation in the wrapper implies tracking per task, and thus we will not have a global picture of resource usage at the TaskTracker level. Further, it is a set-once-and-run kind of mechanism: before launching the tasks themselves, we will have to declare the limits within which tasks can run. If we wish to make these limits dynamic, we will need an extra communication pipe between the wrapper and the TaskTracker.
Could we solve this by adding extra arguments specifying the JobId and the UserId, so that the script can do per-job/per-user accounting?
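
For illustration, a minimal sketch of what I mean. The wrapper name ({{task-wrapper.sh}}) and the flag names are made up for the example; only the idea of prepending the identity arguments matters:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class WrapperCommand {
    // Prepend job/user identity so the wrapper can do per-job/per-user
    // accounting on its own, without an extra pipe to the TaskTracker.
    public static List<String> build(String jobId, String userId,
                                     List<String> taskJvmCommand) {
        List<String> cmd = new ArrayList<String>();
        cmd.add("task-wrapper.sh");            // hypothetical wrapper script
        cmd.add("--job-id");  cmd.add(jobId);  // made-up flag names
        cmd.add("--user-id"); cmd.add(userId);
        cmd.addAll(taskJvmCommand);            // the real task command line
        return cmd;                            // hand off to ProcessBuilder
    }
}
{code}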

bq. We could not find an out-of-the-box OS solution to curtail the memory limits of a process and its descendants. Specifically, ulimit did not seem to handle processes spawned from a parent whose memory limit was set.
The wrapper I proposed before could solve this problem as a side effect (via {{/etc/security/limits.conf}}). But it might not be portable, so your solution may be the better fit for this case.
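
To make that concrete, here is a rough, Linux-only sketch of how one could total the resident memory of a process and all its descendants by walking {{/proc}} (field positions follow proc(5); the class and method names are made up):

{code:java}
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class ProcTreeMemory {
    /** Total resident memory, in bytes, of rootPid and its descendants. */
    public static long treeRssBytes(int rootPid) throws Exception {
        long pageSize = 4096L;  // assume 4 KiB pages for the sketch
        Map<Integer, Integer> parentOf = new HashMap<Integer, Integer>();
        Map<Integer, Long> rssPages = new HashMap<Integer, Long>();
        for (File d : new File("/proc").listFiles()) {
            if (!d.getName().matches("\\d+")) continue;  // pid dirs only
            try {
                String stat = new String(
                        Files.readAllBytes(Paths.get(d.getPath(), "stat")));
                // comm (field 2) may contain spaces, so split after ')'
                String[] f = stat.substring(stat.lastIndexOf(')') + 2).split(" ");
                int pid = Integer.parseInt(d.getName());
                parentOf.put(pid, Integer.parseInt(f[1]));  // ppid (field 4)
                rssPages.put(pid, Long.parseLong(f[21]));   // rss (field 24)
            } catch (Exception e) { /* process exited mid-scan; skip it */ }
        }
        long totalPages = 0;
        for (Integer pid : parentOf.keySet()) {
            // count pid's pages if rootPid appears on its ancestor chain
            for (int p = pid; p > 0;
                 p = parentOf.containsKey(p) ? parentOf.get(p) : 0) {
                if (p == rootPid) { totalPages += rssPages.get(pid); break; }
            }
        }
        return totalPages * pageSize;
    }
}
{code}

A TaskTracker-side monitor could poll something like this per task and kill the whole tree when it crosses the configured limit, which is precisely what ulimit cannot do across fork boundaries.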

bq. One impact of HADOOP-3675 on this work is that, when the mechanism to launch a task becomes pluggable, the way we monitor memory per task might need to change as well. So, for example, if we have a task-per-thread implementation of a task runner, it would be difficult to monitor memory per task because it is in the same process space, right? In fact, the proposal in this patch works only if the task is launched in a separate process.
I'm afraid that much of this functionality will not be available for threaded tasks anyway. My next proposal will include a fallback mechanism, so you shouldn't have to take this into account.
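
To show what I mean by a fallback, a very rough sketch; the {{TaskRunner}} interface and its method names here are hypothetical, not the actual HADOOP-3675 API:

{code:java}
public class MemoryGuard {
    /** Hypothetical view of a pluggable task launcher. */
    interface TaskRunner {
        boolean hasOwnProcess();  // false for a thread-per-task runner
        int getPid();             // meaningful only when hasOwnProcess()
    }

    static void maybeMonitor(TaskRunner runner) {
        if (!runner.hasOwnProcess()) {
            // Fallback: a threaded task shares the TaskTracker's address
            // space, so per-task memory cannot be measured or limited.
            // Degrade gracefully instead of failing the task.
            return;
        }
        // ... attach procfs-based monitoring to runner.getPid() ...
    }
}
{code}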

PS: I'm in quite a hurry, please excuse my English :-/

> Prevent memory intensive user tasks from taking down nodes
> ----------------------------------------------------------
>
>                 Key: HADOOP-3581
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3581
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod Kumar Vavilapalli
>         Attachments: patch_3581_0.1.txt
>
>
> Sometimes user Map/Reduce applications can get extremely memory intensive, maybe due
> to some inadvertent bugs in the user code, or the amount of data processed. When this happens,
> the user tasks start to interfere with the proper execution of other processes on the node,
> including other Hadoop daemons like the DataNode and TaskTracker. Thus, the node would become
> unusable for any Hadoop tasks. There should be a way to prevent such tasks from bringing down
> the node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

