hadoop-common-dev mailing list archives

From "Sharad Agarwal (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (HADOOP-153) skip records that throw exceptions
Date Mon, 07 Jul 2008 06:57:32 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12610897#action_12610897 ]

sharadag edited comment on HADOOP-153 at 7/6/08 11:55 PM:
----------------------------------------------------------------

Had an offline discussion with Eric and Devaraj, and we came up with the following:
- Let this issue handle the case of crashes and hangs. For the case of catching the exception
for Java tasks, filed a separate JIRA: HADOOP-3700.
- The design depends on how frequent we assume failures will be. At this point, design for
INFREQUENT failures; this simplifies the design. Bad records can also be maintained by the
JobTracker (as pointed out by Enis), since the number of bad records is expected to be quite low.
- Failing bad jobs fast is crucial to avoid wasting Grid resources. Thresholds should be
defined so that we identify bad jobs early enough, say a maximum of 10% of the maps can fail.
We also need to make sure that we re-execute failed tasks VERY FAST. (A sketch of such a
threshold check follows this list.)
- Apart from bad data, task crashes could be due to bad user code (like out of memory) or
bad nodes. To isolate these cases, on failure re-execute on another node, as we do now. If it
fails AGAIN, then re-execute a third time, this time in a special mode where we report every
record completion to the TaskTracker (see the skip-mode sketch below).
- For the case of Streaming, the streaming task would have to write the processed record
count to stderr as a framework counter, to take advantage of this feature (see the counter
sketch below).
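
A minimal sketch of the fail-fast threshold check. The class and field names
(FailFastPolicy, maxFailurePercent) are illustrative, not actual JobTracker code:

{code:java}
// Hypothetical fail-fast policy; names are illustrative, not real JobTracker fields.
public class FailFastPolicy {
  private final int maxFailurePercent; // e.g. 10, ideally configurable per job

  public FailFastPolicy(int maxFailurePercent) {
    this.maxFailurePercent = maxFailurePercent;
  }

  /** True if the job should be killed early instead of wasting Grid slots. */
  public boolean shouldFailJob(int failedMaps, int totalMaps) {
    if (totalMaps == 0) {
      return false;
    }
    // Integer arithmetic: fail once failures exceed maxFailurePercent of all maps.
    return failedMaps * 100 > totalMaps * maxFailurePercent;
  }
}
{code}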
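
And a sketch of the special third-attempt skip mode. The RecordProgress callback
is hypothetical; it stands in for a per-record report to the TaskTracker, which
is not an existing API:

{code:java}
import java.io.IOException;

// Sketch of the third-attempt "skip mode": reporting after EVERY record lets
// the framework pinpoint the record that caused a crash (last reported index
// plus one). All interfaces here are hypothetical stand-ins.
public class SkipModeRunner<K, V> {

  public interface RecordProgress {
    void recordCompleted(long recordIndex) throws IOException;
  }

  public interface RecordSource<K, V> {
    boolean next(K key, V value) throws IOException;
  }

  public interface RecordSink<K, V> {
    void process(K key, V value) throws IOException;
  }

  public long run(RecordSource<K, V> in, RecordSink<K, V> task,
                  RecordProgress progress, K key, V value) throws IOException {
    long done = 0;
    while (in.next(key, value)) {
      task.process(key, value);
      done++;
      progress.recordCompleted(done); // on a crash, record done + 1 is the suspect
    }
    return done;
  }
}
{code}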
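
For the Streaming case, a task can use the existing stderr counter convention
("reporter:counter:<group>,<counter>,<amount>"). The group and counter names
below are illustrative; shown here as an identity mapper written in Java:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Identity streaming mapper that reports its processed-record count via the
// stderr counter convention. Group/counter names are illustrative only.
public class CountingStreamMapper {
  public static void main(String[] args) throws Exception {
    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line); // pass the record through unchanged
      // Bump the framework counter by one for every record processed.
      System.err.println("reporter:counter:SkipRecords,PROCESSED,1");
    }
  }
}
{code}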






> skip records that throw exceptions
> ----------------------------------
>
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Sharad Agarwal
>         Attachments: skipRecords_wip1.patch
>
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations
> should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen
> under RecordWriter.write().  Cancelling partial output could be hard.  So such output
> errors will still result in task failure.
> This behaviour should be optional, but enabled by default.  A count of errors per task
> and job should be maintained and displayed in the web ui.  Perhaps if some percentage
> of records (>50%?) result in exceptions then the task should fail.  This would stop
> jobs early that are misconfigured or have buggy code.
> Thoughts?
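
A minimal sketch of the skip behaviour the description asks for, built around a
hypothetical SkippableReader with a resync hook; not actual Hadoop code:

{code:java}
import java.io.IOException;

// Sketch of the optional skip behaviour: catch exceptions thrown under
// next(), count them, and resync to a subsequent record. SkippableReader
// and skipToNextRecord() are hypothetical.
public class SkippingReader<K, V> {

  public interface SkippableReader<K, V> {
    boolean next(K key, V value) throws IOException;
    void skipToNextRecord() throws IOException; // resync at the next record boundary
  }

  private final SkippableReader<K, V> reader;
  private long errorCount = 0; // per-task error count, shown in the web ui

  public SkippingReader(SkippableReader<K, V> reader) {
    this.reader = reader;
  }

  public boolean next(K key, V value) throws IOException {
    while (true) {
      try {
        return reader.next(key, value); // genuine I/O errors still propagate
      } catch (RuntimeException e) {
        errorCount++;              // log and count the bad record
        reader.skipToNextRecord(); // then try the next one
      }
    }
  }

  public long getErrorCount() {
    return errorCount;
  }
}
{code}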

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

