hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1324) FSError encountered by one running task should not be fatal to other tasks on that node
Date Tue, 08 May 2007 11:24:17 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12494259 ]

Hadoop QA commented on HADOOP-1324:
-----------------------------------

Integrated in Hadoop-Nightly #82 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/82/)

> FSError encountered by one running task should not be fatal to other tasks on that node
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1324
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1324
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.12.3
>            Reporter: Devaraj Das
>         Assigned To: Arun C Murthy
>             Fix For: 0.13.0
>
>         Attachments: HADOOP-1324_20070507_1.patch
>
>
> Currently, if one task encounters an FSError, it reports it to the TaskTracker, and the
> TaskTracker reinitializes itself, effectively losing the state of all the other running
> tasks too. This can probably be improved, especially after the fix for HADOOP-1252. The
> TaskTracker should probably avoid reinitializing itself and instead get blacklisted for
> that job. Other tasks should be allowed to continue as long as they can (completing
> successfully, or failing, whether due to disk problems or otherwise).
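
The proposed behavior can be sketched roughly as follows. This is a hypothetical illustration, not actual Hadoop code: the class and method names (TaskTrackerSketch, reportFsError, etc.) are invented for clarity, and real TaskTracker state is far richer. The point is the contrast with the old behavior, where an FSError triggered a full reinitialization that discarded every running task.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the proposed FSError handling: fail only the
// reporting task and blacklist the tracker for that task's job, instead
// of reinitializing the whole TaskTracker.
class TaskTrackerSketch {
    enum State { RUNNING, FAILED }

    // taskId -> current state of each task this tracker is running
    private final Map<String, State> tasks = new HashMap<>();
    // jobs for which this tracker has asked to be blacklisted
    private final Set<String> blacklistedJobs = new HashSet<>();

    void launchTask(String taskId) {
        tasks.put(taskId, State.RUNNING);
    }

    // Old behavior: an FSError wiped all tracker state. Proposed behavior:
    // mark only the reporting task as failed and blacklist this tracker
    // for that task's job; other tasks keep running until they complete
    // or fail on their own.
    void reportFsError(String taskId, String jobId) {
        tasks.put(taskId, State.FAILED);
        blacklistedJobs.add(jobId);
    }

    State stateOf(String taskId) {
        return tasks.get(taskId);
    }

    boolean isBlacklistedFor(String jobId) {
        return blacklistedJobs.contains(jobId);
    }
}
```

Under this sketch, a second task on the same node is unaffected by its sibling's FSError; the JobTracker would simply stop scheduling that job's tasks on the blacklisted node.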

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

