hadoop-mapreduce-issues mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1221) Kill tasks on a node if the free physical memory on that machine falls below a configured threshold
Date Wed, 24 Feb 2010 23:15:28 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12838074#action_12838074 ]

Arun C Murthy commented on MAPREDUCE-1221:
------------------------------------------

Scott, as I said, it is reasonable to track virtual memory, physical memory, or both.
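
For what it's worth, both numbers are cheap to get at on Linux. A minimal sketch, assuming a procfs layout (illustrative only, not the TaskTracker's actual monitor), that reads the virtual (VmSize) and physical/resident (VmRSS) footprint of a process:

{code:java}
// Illustrative only -- not TaskTracker code. Reads both the virtual
// (VmSize) and physical/resident (VmRSS) footprint of a process from
// the Linux procfs.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ProcMemReader {
  /** Returns {VmSize, VmRSS} in kB for pid, or -1 for a field not found. */
  public static long[] readKb(int pid) throws IOException {
    long vmSize = -1, vmRss = -1;
    BufferedReader r =
        new BufferedReader(new FileReader("/proc/" + pid + "/status"));
    try {
      String line;
      while ((line = r.readLine()) != null) {
        if (line.startsWith("VmSize:")) {        // total virtual memory
          vmSize = kbField(line);
        } else if (line.startsWith("VmRSS:")) {  // resident physical memory
          vmRss = kbField(line);
        }
      }
    } finally {
      r.close();
    }
    return new long[] { vmSize, vmRss };
  }

  // Lines look like "VmRSS:     123456 kB".
  private static long kbField(String line) {
    String[] parts = line.trim().split("\\s+");
    return Long.parseLong(parts[1]);
  }
}
{code}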

bq. The other main reason that we want to do this is that per task virtual-memory-limit is an API change for our users.

A new feature might imply change, no? 

OTOH you could get away with simply setting the default values to be reasonable for a wide variety of uses, so users do not have to do anything.
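
For example, something along these lines in mapred-site.xml. Note that mapred.child.java.opts is an existing knob, while the physical-memory key below is purely hypothetical, since its final name depends on this patch:

{code:xml}
<!-- mapred-site.xml: cluster-wide defaults so individual jobs need no changes -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value> <!-- existing per-task JVM heap cap -->
</property>
<property>
  <!-- hypothetical key: whatever name this patch finally settles on -->
  <name>mapred.tasktracker.physicalmemory.reserved.mb</name>
  <value>2048</value> <!-- keep this much physical memory free per node -->
</property>
{code}

With cluster-wide defaults like these, jobs pick up the new limit without touching their own configuration.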

bq. I think it may not be that bad that we kill the task.

Like I said, the problem is that there is no predictability. What if a job gets unlucky and its 4th attempt gets killed because it happened to run on a node where a rogue task of some other job ... again, predictability is very important. Penalizing the right task is equally important.

> Kill tasks on a node if the free physical memory on that machine falls below a configured threshold
> ---------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-1221
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1221
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: tasktracker
>    Affects Versions: 0.22.0
>            Reporter: dhruba borthakur
>            Assignee: Scott Chen
>             Fix For: 0.22.0
>
>         Attachments: MAPREDUCE-1221-v1.patch, MAPREDUCE-1221-v2.patch, MAPREDUCE-1221-v3.patch
>
>
> The TaskTracker currently supports killing tasks if the virtual memory of a task exceeds a set of configured thresholds. I would like to extend this feature to enable killing tasks if the physical memory used by that task exceeds a certain threshold.
> On a certain operating system (guess?), if user-space processes start using lots of memory, the machine hangs and dies quickly. This means that we would like to prevent map-reduce jobs from triggering this condition. From my understanding, killing based on virtual-memory limits (HADOOP-5883) was designed to address this problem. This works well when most map-reduce jobs are Java jobs with well-defined -Xmx parameters that specify the max virtual memory for each task. On the other hand, if each task forks off mappers/reducers written in other languages (Python/PHP, etc.), the total virtual memory usage of the process subtree varies greatly. In these cases, it is better to kill tasks using physical-memory limits.
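
To make that last point concrete: a physical-memory check has to charge the whole process subtree to the task, not just the task's JVM. A rough sketch of that accounting, assuming Linux procfs (illustrative only; the real monitoring presumably builds on the TaskTracker's existing procfs-based process-tree code):

{code:java}
// Rough sketch only: sum resident (physical) memory over a task's entire
// process subtree via /proc, so forked python/php children are charged
// to the task that spawned them.
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SubtreeRss {

  /** Total VmRSS in kB of rootPid plus all of its live descendants. */
  public static long subtreeRssKb(int rootPid) {
    // Build a parent -> children map from a single scan of /proc.
    Map<Integer, List<Integer>> children = new HashMap<Integer, List<Integer>>();
    File[] entries = new File("/proc").listFiles();
    if (entries == null) return 0;
    for (File d : entries) {
      if (!d.getName().matches("\\d+")) continue;
      int pid = Integer.parseInt(d.getName());
      int ppid = ppidOf(pid);
      if (ppid < 0) continue;  // process exited mid-scan
      List<Integer> kids = children.get(ppid);
      if (kids == null) {
        kids = new ArrayList<Integer>();
        children.put(ppid, kids);
      }
      kids.add(pid);
    }
    // Depth-first walk of the subtree, accumulating VmRSS.
    long totalKb = 0;
    List<Integer> stack = new ArrayList<Integer>();
    stack.add(rootPid);
    while (!stack.isEmpty()) {
      int pid = stack.remove(stack.size() - 1);
      totalKb += rssKbOf(pid);
      List<Integer> kids = children.get(pid);
      if (kids != null) stack.addAll(kids);
    }
    return totalKb;
  }

  /** Parent pid from /proc/<pid>/stat; ppid is the second field after the
   *  parenthesized command name. Returns -1 if the process vanished. */
  private static int ppidOf(int pid) {
    try {
      BufferedReader r = new BufferedReader(new FileReader("/proc/" + pid + "/stat"));
      try {
        String line = r.readLine();
        String[] rest = line.substring(line.lastIndexOf(')') + 2).split("\\s+");
        return Integer.parseInt(rest[1]);  // rest[0] is the state char
      } finally {
        r.close();
      }
    } catch (Exception e) {
      return -1;
    }
  }

  /** VmRSS in kB from /proc/<pid>/status, or 0 if the process vanished. */
  private static long rssKbOf(int pid) {
    try {
      BufferedReader r = new BufferedReader(new FileReader("/proc/" + pid + "/status"));
      try {
        String line;
        while ((line = r.readLine()) != null) {
          if (line.startsWith("VmRSS:")) {
            return Long.parseLong(line.trim().split("\\s+")[1]);
          }
        }
      } finally {
        r.close();
      }
    } catch (IOException e) {
      // fall through: count a vanished process as 0
    }
    return 0;
  }
}
{code}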

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

