hadoop-common-dev mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4305) repeatedly blacklisted tasktrackers should get declared dead
Date Fri, 31 Oct 2008 11:05:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12644263#action_12644263 ]

Steve Loughran commented on HADOOP-4305:

I'd be happiest if there was some way of reporting this to some policy component that made
the right decision, because the action you take on a managed-VM cluster is different from
Hadoop on physical hardware. On physical hardware, you blacklist and maybe trigger a reboot,
or you start running well-known health tasks to see which parts of the system appear healthy.
On a VM cluster you just delete that node and create a new one; there is no need to faff
around with the state of the VM if it is a task-only VM, but if it's also a datanode you
have to decommission it first.
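To make the idea concrete, here is a minimal sketch of what such a pluggable policy
component could look like. Everything here is hypothetical -- no such interface exists in
Hadoop; it only illustrates the JobTracker reporting a blacklisting and letting
deployment-specific policy pick the response:

    /** Hypothetical pluggable policy for handling blacklisted trackers. */
    public interface TrackerFailurePolicy {

      /** Possible responses to a blacklisted tracker. */
      enum Action { BLACKLIST_ONLY, REBOOT_HOST, RUN_HEALTH_CHECKS,
                    REPLACE_VM, DECOMMISSION_THEN_REPLACE }

      /**
       * Called each time a tracker is blacklisted.
       *
       * @param trackerName    host being blacklisted
       * @param blacklistCount how many times it has been blacklisted so far
       * @param hostsDataNode  whether the same host also runs a DataNode
       */
      Action onBlacklisted(String trackerName, int blacklistCount,
                           boolean hostsDataNode);
    }

    /** Example policy for a managed-VM cluster, as described above. */
    class VmClusterPolicy implements TrackerFailurePolicy {
      public Action onBlacklisted(String trackerName, int blacklistCount,
                                  boolean hostsDataNode) {
        // Task-only VM: destroy it and provision a replacement.
        // If it also hosts a DataNode, decommission it first so HDFS
        // can re-replicate its blocks.
        return hostsDataNode ? Action.DECOMMISSION_THEN_REPLACE
                             : Action.REPLACE_VM;
      }
    }

A physical-cluster policy would implement the same interface but return
BLACKLIST_ONLY, REBOOT_HOST, or RUN_HEALTH_CHECKS instead.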

> repeatedly blacklisted tasktrackers should get declared dead
> ------------------------------------------------------------
>                 Key: HADOOP-4305
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4305
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Christian Kunz
>            Assignee: Amareshwari Sriramadasu
>             Fix For: 0.20.0
> When running a batch of jobs it often happens that the same tasktrackers are blacklisted
> again and again. This can slow job execution considerably, in particular when tasks fail
> because of timeout.
> It would make sense to no longer assign any tasks to such tasktrackers and to declare
> them dead.
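An illustrative sketch (not the actual HADOOP-4305 patch) of the bookkeeping the
description asks for: count how often each tracker is blacklisted across jobs and declare
it dead once a threshold is crossed. The class name and threshold are hypothetical:

    import java.util.HashMap;
    import java.util.Map;

    /** Hypothetical per-tracker blacklist counter; names are illustrative. */
    class BlacklistTracker {
      private static final int MAX_BLACKLISTS = 4;   // hypothetical limit
      private final Map<String, Integer> counts = new HashMap<String, Integer>();

      /** Record one more blacklisting; true means declare the tracker dead. */
      synchronized boolean recordBlacklisting(String trackerName) {
        int n = counts.containsKey(trackerName) ? counts.get(trackerName) : 0;
        counts.put(trackerName, n + 1);
        return n + 1 >= MAX_BLACKLISTS;
      }
    }

Once recordBlacklisting returns true, the JobTracker would stop assigning tasks to that
tracker and treat it as dead rather than merely blacklisted for the current job.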

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
