hadoop-common-dev mailing list archives

From "Runping Qi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-153) skip records that throw exceptions
Date Fri, 21 Apr 2006 01:22:05 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-153?page=comments#action_12375457 ] 

Runping Qi commented on HADOOP-153:


Exceptions in the map and reduce functions that are implemented by the user should be handled
by the user within the functions.
In the current implementation of the SequenceFile record reader, it is hard to skip to the
next record if an exception occurs while reading a record.
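The skip-on-exception behavior under discussion can be sketched as a reader that catches per-record failures and moves on. This is a minimal illustration in plain Java, not Hadoop's actual RecordReader API; the class name, the Function-based parser, and the record source are all assumptions for the example.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch (not Hadoop's RecordReader API): a reader that skips
// records whose deserialization throws, instead of failing the whole task.
public class SkippingReader<T> {
    private final Iterator<String> raw;
    private final Function<String, T> parse; // may throw on a corrupt record
    private long skipped = 0;

    public SkippingReader(List<String> records, Function<String, T> parse) {
        this.raw = records.iterator();
        this.parse = parse;
    }

    /** Returns the next successfully parsed record, or null at end of input. */
    public T next() {
        while (raw.hasNext()) {
            String record = raw.next();
            try {
                return parse.apply(record);
            } catch (RuntimeException e) {
                skipped++; // log-and-skip rather than propagating the failure
            }
        }
        return null;
    }

    public long skippedCount() { return skipped; }
}
```

The hard part Runping points at is not this loop but the seek: a real sequential-file reader must find the start of the next record after a partial, failed read, which this in-memory sketch sidesteps by keeping records pre-split.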

> skip records that throw exceptions
> ----------------------------------
>          Key: HADOOP-153
>          URL: http://issues.apache.org/jira/browse/HADOOP-153
>      Project: Hadoop
>         Type: New Feature

>   Components: mapred
>     Versions: 0.2
>     Reporter: Doug Cutting
>     Assignee: Doug Cutting
>      Fix For: 0.2

> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations
should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen
under RecordWriter.write().  Cancelling partial output could be hard.  So such output errors
will still result in task failure.
> This behaviour should be optional, but enabled by default.  A count of errors per task
and job should be maintained and displayed in the web ui.  Perhaps if some percentage of records
(>50%?) result in exceptions then the task should fail.  This would stop jobs early that
are misconfigured or have buggy code.
> Thoughts?
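The proposal's error accounting (a per-task count of failed records, with the task failing once some percentage, perhaps >50%, of records throw) might look like the following sketch. The class name is illustrative, not Hadoop API, and the minimum-record warm-up guard is an added assumption so one early failure does not abort a task before the rate means anything.

```java
// Hypothetical sketch of the proposed per-task error accounting: count
// record-level exceptions and signal task failure once more than 50% of
// records seen so far have thrown.
public class TaskErrorCounter {
    private static final int MIN_RECORDS = 10; // assumed warm-up before trusting the rate
    private long records = 0;
    private long errors = 0;

    /** Call once per record; pass true if the user's map/reduce code threw. */
    public void record(boolean threw) {
        records++;
        if (threw) errors++;
    }

    /** True once the observed error rate exceeds 50% (after MIN_RECORDS). */
    public boolean shouldFailTask() {
        return records >= MIN_RECORDS && errors * 2 > records;
    }

    /** Exposed for the per-task/per-job counts shown in the web UI. */
    public long errorCount() { return errors; }
}
```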

This message is automatically generated by JIRA.