hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-153) skip records that throw exceptions
Date Fri, 22 Feb 2008 19:21:19 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12571542#action_12571542 ]

Doug Cutting commented on HADOOP-153:
-------------------------------------

> I am tending to handle this by only keeping track of how RecordReader.next behaves.

I'm not sure what you mean by that.

There are two kinds of places where user code might throw per-record exceptions:

# under RecordReader#next() or RecordWriter#write().  Depending on the RecordReader/RecordWriter
implementation and the exception, it may or may not be possible to call next() or write()
again.  Either is likely to leave the stream mid-object: a reader of binary input might get
badly out of sync, and a writer of binary output might generate badly corrupt data.  To address
this correctly, we either need to change the contracts of next() and write(), or we need to
add new methods that re-sync these to object boundaries (a sketch of such a re-sync method
follows this list).

# under Mapper#map() or Reducer#reduce().  Exceptions here can be ignored without causing
anything worse than data loss, so we can safely proceed without worrying about corruption
(a catch-and-count sketch also follows below).
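
For the first case, here is a minimal, self-contained sketch of what re-syncing to an object
boundary could look like.  It is not tied to any real Hadoop reader: the record format (a
4-byte length, the payload, then a trailing sync marker) and the skipToNextRecord() method are
invented for illustration, but an extended RecordReader contract could expose something similar.

{code:java}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.Arrays;

/** Hypothetical reader of [length][payload][SYNC] records that can re-sync itself. */
public class ResyncableReader {

  /** Invented 4-byte marker written after every record. */
  private static final byte[] SYNC =
      { (byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE };
  private static final int MAX_RECORD = 1 << 20;

  private final DataInputStream in;

  public ResyncableReader(DataInputStream in) {
    this.in = in;
  }

  /** Reads one record; throws if the stream looks corrupt mid-record. */
  public byte[] next() throws IOException {
    int len = in.readInt();
    if (len < 0 || len > MAX_RECORD) {
      // A corrupt length means we no longer know where this record ends.
      throw new IOException("corrupt record length: " + len);
    }
    byte[] payload = new byte[len];
    in.readFully(payload);
    byte[] marker = new byte[SYNC.length];
    in.readFully(marker);
    if (!Arrays.equals(marker, SYNC)) {
      throw new IOException("record not followed by sync marker");
    }
    return payload;
  }

  /**
   * After a failed next(), the stream may be mid-record.  Scan forward,
   * byte by byte, until a sync marker has been consumed, so that the
   * following next() starts on a record boundary again.
   */
  public void skipToNextRecord() throws IOException {
    int matched = 0;
    while (matched < SYNC.length) {
      int b = in.read();
      if (b == -1) {
        throw new EOFException("hit end of stream while re-syncing");
      }
      if ((byte) b == SYNC[matched]) {
        matched++;
      } else {
        matched = ((byte) b == SYNC[0]) ? 1 : 0;
      }
    }
  }
}
{code}

A caller can then try next() and, on any exception other than EOFException, call
skipToNextRecord() and continue with the following record, which is the behaviour the issue
asks RecordReader implementations to provide.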

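For the second case, here is a similar sketch (again, not an actual patch) of a wrapper that
catches per-record exceptions from a user-supplied Mapper, logs and counts them, and fails the
task once too many records have gone bad.  It assumes the old org.apache.hadoop.mapred
interfaces roughly as they look today; the SkippingMapper class, its Counter enum, the wiring
of the delegate, and the 50% threshold are all made up for illustration.

{code:java}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

/** Hypothetical wrapper: delegates to a user Mapper, skipping bad records. */
public class SkippingMapper<K1, V1, K2, V2> extends MapReduceBase
    implements Mapper<K1, V1, K2, V2> {

  private static final Log LOG = LogFactory.getLog(SkippingMapper.class);

  /** Hypothetical counters that would be surfaced in the web ui. */
  public enum Counter { PROCESSED_RECORDS, SKIPPED_RECORDS }

  private final Mapper<K1, V1, K2, V2> delegate;
  private long processed = 0;
  private long skipped = 0;

  public SkippingMapper(Mapper<K1, V1, K2, V2> delegate) {
    this.delegate = delegate;
  }

  public void map(K1 key, V1 value, OutputCollector<K2, V2> output,
                  Reporter reporter) throws IOException {
    processed++;
    reporter.incrCounter(Counter.PROCESSED_RECORDS, 1);
    try {
      delegate.map(key, value, output, reporter);
    } catch (IOException e) {
      // Likely thrown while writing output; partial output can't be
      // cancelled, so let the task fail as before.
      throw e;
    } catch (RuntimeException e) {
      // A failure inside map() costs only this record: log, count, go on.
      skipped++;
      LOG.warn("Skipping record with key " + key, e);
      reporter.incrCounter(Counter.SKIPPED_RECORDS, 1);
      if (processed >= 100 && skipped * 2 > processed) {
        // More than half the records are bad: the job is probably
        // misconfigured or buggy, so fail fast.
        throw new IOException("too many bad records: " + skipped
            + " of " + processed);
      }
    }
  }
}
{code}

The same wrapping would work for Reducer#reduce(); output-side exceptions are deliberately
rethrown, since partial output cannot be cancelled.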

> skip records that throw exceptions
> ----------------------------------
>
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Devaraj Das
>             Fix For: 0.17.0
>
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations
should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen
under RecordWriter.write().  Cancelling partial output could be hard.  So such output errors
will still result in task failure.
> This behaviour should be optional, but enabled by default.  A count of errors per task
and job should be maintained and displayed in the web ui.  Perhaps if some percentage of records
(>50%?) result in exceptions then the task should fail.  This would stop misconfigured or
buggy jobs early.
> Thoughts?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

