hadoop-common-dev mailing list archives

From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-153) skip records that throw exceptions
Date Sun, 09 Mar 2008 20:05:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12576814#action_12576814 ]

Devaraj Das commented on HADOOP-153:

bq. Mappers and Reducers should be able to tolerate killer records, just like the reader.
As I mentioned earlier, for the reader I have an API in RecordReader for querying whether
the RecordReader is side-effect free. If the RecordReader is not side-effect free (e.g., if
the next method cannot safely proceed after the current record read throws an exception),
then the task is declared failed, and the offending record is skipped on the task retry.
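A minimal sketch of that decision, in deliberately simplified Java; `isSideEffectFree()` and all the surrounding types here are hypothetical stand-ins for illustration, not the actual Hadoop interfaces:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-in for Hadoop's RecordReader; the
// isSideEffectFree() query is the API described above, not a real method.
interface RecordReader {
    String next() throws Exception;   // returns null at end of input
    boolean isSideEffectFree();       // safe to keep reading after a throw?
}

// A toy reader over an array; records equal to "BAD" simulate killer records.
// Its position advances even when next() throws, so it is side-effect free.
class ArrayRecordReader implements RecordReader {
    private final String[] records;
    private int pos = 0;
    ArrayRecordReader(String[] records) { this.records = records; }
    public String next() throws Exception {
        if (pos >= records.length) return null;
        String r = records[pos++];
        if (r.equals("BAD")) throw new Exception("corrupt record at " + (pos - 1));
        return r;
    }
    public boolean isSideEffectFree() { return true; }
}

public class SkipDemo {
    // If the reader is side-effect free, skip killer records and keep going;
    // otherwise rethrow, i.e. take the task-failure-and-retry path.
    static List<String> readSkipping(RecordReader reader) throws Exception {
        List<String> good = new ArrayList<>();
        while (true) {
            try {
                String r = reader.next();
                if (r == null) return good;
                good.add(r);
            } catch (Exception e) {
                if (!reader.isSideEffectFree()) throw e;  // task failure path
                // side-effect free: log and skip the offending record
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readSkipping(
            new ArrayRecordReader(new String[] {"a", "BAD", "b"})));
    }
}
```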
If we were to support the same strategy for the Mapper/Reducer, we would need to handle a
similar problem. Also, since a Mapper/Reducer can throw exceptions for reasons not necessarily
related to bad input, we probably need the user to tell us whether to continue invoking the
map/reduce methods after an exception. In addition, the user could specify which exceptions
are fatal and should always fail the task (e.g., OOM).
Makes sense?
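The user-supplied policy described above could be sketched roughly like this; `ExceptionPolicy` and its method names are made up for illustration and are not Hadoop's actual configuration API:

```java
import java.util.Set;

// Hypothetical policy object: the user opts in to skipping, and may declare
// exception classes that are always fatal to the task (e.g. OutOfMemoryError).
public class ExceptionPolicy {
    private final boolean skipOnException;                 // user opt-in
    private final Set<Class<? extends Throwable>> fatal;   // always-fail classes

    ExceptionPolicy(boolean skipOnException, Set<Class<? extends Throwable>> fatal) {
        this.skipOnException = skipOnException;
        this.fatal = fatal;
    }

    /** true = skip the record and keep invoking map/reduce; false = fail the task. */
    boolean shouldContinue(Throwable t) {
        if (!skipOnException) return false;
        for (Class<? extends Throwable> c : fatal) {
            if (c.isInstance(t)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        ExceptionPolicy p = new ExceptionPolicy(true, Set.of(OutOfMemoryError.class));
        System.out.println(p.shouldContinue(new RuntimeException("bad record")));
        System.out.println(p.shouldContinue(new OutOfMemoryError()));
    }
}
```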

> skip records that throw exceptions
> ----------------------------------
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Devaraj Das
>             Fix For: 0.17.0
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations
> should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen
> under RecordWriter.write().  Cancelling partial output could be hard.  So such output errors
> will still result in task failure.
> This behaviour should be optional, but enabled by default.  A count of errors per task
> and job should be maintained and displayed in the web ui.  Perhaps if some percentage of records
> (>50%?) result in exceptions then the task should fail.  This would stop jobs early that
> are misconfigured or have buggy code.
> Thoughts?
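The per-task error accounting the issue proposes might look like the following sketch; the class name and the 50% threshold are illustrative only, not Hadoop's implementation:

```java
// Count failed records per task, and once more than some fraction of the
// processed records have thrown (50% here), signal that the task should fail,
// so misconfigured or buggy jobs stop early rather than churning through
// the whole input.
public class ErrorBudget {
    private long processed = 0;
    private long failed = 0;
    private final double failFraction;

    ErrorBudget(double failFraction) { this.failFraction = failFraction; }

    /** Record one attempt; returns true once the task should be failed. */
    boolean recordAttempt(boolean threwException) {
        processed++;
        if (threwException) failed++;
        return (double) failed / processed > failFraction;
    }

    public static void main(String[] args) {
        ErrorBudget b = new ErrorBudget(0.5);
        boolean fail = false;
        // 2 good records, then 3 bad ones: 3/5 = 60% > 50%, so the task fails.
        boolean[] outcomes = {false, false, true, true, true};
        for (boolean bad : outcomes) fail = b.recordAttempt(bad);
        System.out.println(fail);
    }
}
```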

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
