hadoop-common-dev mailing list archives

From "Vinod K V (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-4523) Enhance how memory-intensive user tasks are handled
Date Wed, 05 Nov 2008 09:39:45 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod K V updated HADOOP-4523:
------------------------------

    Attachment: HADOOP-4523-200811-05.txt

Attaching a patch. This
 - makes TaskMemoryManagerThread observe total memory usage across all tasks. If total
usage crosses the overall limit, the TT tries to kill any tasks that cross their individual
task limits. If it cannot find such tasks, it kills the task with the least progress, found
via TaskTracker.findTaskToKill(), which is already used in the case of an overflowing disk.
This method first tries to find the reduce task with the least progress; otherwise it returns
the map task with the least progress.
 - marks tasks killed for transgressing their individual limits as failed; tasks killed
for any other reason are marked as killed.
 - includes testTasksWithinTTLimits, testTaskBeyondIndividualLimitsAndTotalUsageBeyondTTLimits
and testTaskBeyondIndividualLimitsButTotalUsageWithinTTLimits. Couldn't write a test to check
the killing of a task with the least progress; simulating this situation proved very difficult.
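The two-step policy above can be sketched roughly as follows. This is a minimal
illustration of the decision logic only, not the actual TaskTracker or
TaskMemoryManagerThread code; the class and field names here are hypothetical.

```java
import java.util.*;

// Illustrative sketch of the kill-selection policy: when total usage exceeds
// the TT-wide limit, first kill tasks over their individual limits; failing
// that, kill the least-progress reduce, else the least-progress map.
// Names (KillPolicy, Task, choose) are hypothetical, not Hadoop internals.
public class KillPolicy {
    static class Task {
        final String id;
        final boolean isReduce;
        final double progress;   // 0.0 .. 1.0
        final long memUsed;      // bytes, including spawned processes
        final long memLimit;     // per-task limit, bytes
        Task(String id, boolean isReduce, double progress, long memUsed, long memLimit) {
            this.id = id; this.isReduce = isReduce; this.progress = progress;
            this.memUsed = memUsed; this.memLimit = memLimit;
        }
    }

    /** Returns ids of tasks to kill; empty if total usage is within the TT limit. */
    static List<String> choose(List<Task> tasks, long totalLimit) {
        long total = 0;
        for (Task t : tasks) total += t.memUsed;
        if (total <= totalLimit) return Collections.emptyList();

        // Step 1: any task over its individual limit (these would be marked failed).
        List<String> overLimit = new ArrayList<>();
        for (Task t : tasks)
            if (t.memUsed > t.memLimit) overLimit.add(t.id);
        if (!overLimit.isEmpty()) return overLimit;

        // Step 2: findTaskToKill() analogue — least-progress reduce, else least-progress map.
        Task victim = null;
        for (Task t : tasks)
            if (t.isReduce && (victim == null || t.progress < victim.progress)) victim = t;
        if (victim == null)
            for (Task t : tasks)
                if (victim == null || t.progress < victim.progress) victim = t;
        return victim == null ? Collections.emptyList() : Collections.singletonList(victim.id);
    }
}
```

Note that in step 2 only one victim is chosen per check, whereas step 1 kills every
task that is over its own limit, matching the distinction between failed and killed above.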

> Enhance how memory-intensive user tasks are handled
> ---------------------------------------------------
>
>                 Key: HADOOP-4523
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4523
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.19.0
>            Reporter: Vivek Ratan
>            Assignee: Vinod K V
>         Attachments: HADOOP-4523-200811-05.txt
>
>
> HADOOP-3581 monitors each Hadoop task to see if its memory usage (which includes usage
of any tasks spawned by it and so on) is within a per-task limit. If the task's memory usage
goes over its limit, the task is killed. This, by itself, is not enough to prevent badly behaving
jobs from bringing down nodes. What is also needed is the ability to make sure that the sum
total of VM usage of all Hadoop tasks does not exceed a certain limit.
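The per-task usage described above includes processes the task spawns, which on
Linux amounts to summing VM usage over the task's process tree via /proc. The
sketch below only illustrates that idea with a trivial VmSize parser; it is not
Hadoop's actual procfs-based process-tree implementation, and the method names
are hypothetical.

```java
import java.util.List;

// Illustrative sketch: total VM usage of a task as the sum of VmSize over the
// status dumps of its process tree (/proc/<pid>/status on Linux). Names are
// hypothetical; not Hadoop's actual implementation.
public class VmUsage {
    /** Parse the VmSize line of a /proc/<pid>/status dump; returns bytes, or 0 if absent. */
    static long parseVmSizeBytes(String statusText) {
        for (String line : statusText.split("\n")) {
            if (line.startsWith("VmSize:")) {
                // Format: "VmSize:   123456 kB"
                String[] parts = line.trim().split("\\s+");
                return Long.parseLong(parts[1]) * 1024L;
            }
        }
        return 0L;
    }

    /** A task's usage is the sum over its whole process tree. */
    static long totalVmBytes(List<String> statusDumps) {
        long total = 0;
        for (String s : statusDumps) total += parseVmSizeBytes(s);
        return total;
    }
}
```

It is this summed figure that would be compared against the per-task limit, and
the sum of these figures across all running tasks against the TT-wide limit.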

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

