hadoop-common-dev mailing list archives

From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-153) skip records that throw exceptions
Date Tue, 29 Apr 2008 08:57:58 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12592982#action_12592982 ]

Devaraj Das commented on HADOOP-153:
------------------------------------

bq. In the beginning of this jira it was mentioned that problems from RecordReader.next()
were also covered by it. I take it from these comments that this is no longer the case.
It seems to me that if a RecordReader can skip a record reliably (which is the support this
jira requires from record readers), then it will also be able to avoid throwing exceptions,
since it can obviously catch any exception and invoke the skip-to-the-next-record logic
within the body of next().

I didn't mean that (not sure about Doug). Today, map tasks work by invoking the RecordReader's
next method for each record. If the implementation of next can handle corrupt records, the
framework doesn't need to get involved. However, if the implementation of next is not capable
of handling bad records, it is most likely going to throw an exception, and the proposal in
this jira for handling bad input starts with that assumption. The framework catches that
exception, and the RecordReader's new interface (see points 1, 2, and 3 in
http://issues.apache.org/jira/browse/HADOOP-153?focusedCommentId=12574404#action_12574404)
comes in handy to recover from it. For cases like OOM, where continuing the current task
execution is not safe, the framework notifies the JobTracker about the bad records, and on
the next retry those records are skipped (point 0 in the previously mentioned URL).
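
To make that flow concrete, here is a minimal sketch of what the framework-side wrapping of
next() could look like. Since there is no patch yet, everything here is an assumption:
SkippingReader and skipToNextRecord() are hypothetical stand-ins for the recovery interface
proposed in points 1-3 of the linked comment.

{code:java}
import java.io.IOException;
import org.apache.hadoop.mapred.RecordReader;

// Hypothetical framework-side wrapper: retries reader.next() after a
// per-record failure. skipToNextRecord() is a stand-in for the recovery
// interface proposed in points 1-3 of the linked comment.
public class SkippingReader<K, V> {
  private final RecordReader<K, V> reader;
  private long badRecords = 0;           // would feed the per-task error count

  public SkippingReader(RecordReader<K, V> reader) {
    this.reader = reader;
  }

  public boolean next(K key, V value) throws IOException {
    while (true) {
      try {
        return reader.next(key, value);  // normal path
      } catch (RuntimeException e) {     // assumed to signal a corrupt record
        badRecords++;
        if (!skipToNextRecord()) {
          throw e;                       // cannot recover: fail the task
        }                                // else: loop and try the next record
      }
    }
  }

  private boolean skipToNextRecord() {
    // Placeholder only: the real mechanism is whatever the proposed
    // RecordReader interface ends up providing; no such API exists yet.
    return false;
  }
}
{code}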

At this point, since we don't have a patch for this jira, I don't see it preempting
HADOOP-3144. That said, it would be nice to have a patch for this jira that handles the
general case.

> skip records that throw exceptions
> ----------------------------------
>
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Devaraj Das
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations
> should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen
> under RecordWriter.write(). Cancelling partial output could be hard, so such output
> errors will still result in task failure.
> This behaviour should be optional, but enabled by default. A count of errors per task
> and job should be maintained and displayed in the web UI. Perhaps if some percentage
> of records (>50%?) result in exceptions then the task should fail. This would stop
> misconfigured or buggy jobs early.
> Thoughts?
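
For illustration, here is a rough sketch of the behaviour described above, written as an
old-API MapRunner-style loop. The SKIPPED_RECORDS counter name, the 50% cut-off, and the
class name are assumptions, not taken from any existing patch.

{code:java}
import java.io.IOException;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

// Illustrative sketch only: skip records whose map() call throws,
// count them for the web UI, and fail the task past a 50% threshold.
public class SkippingMapLoop<K1, V1, K2, V2> {
  public void run(RecordReader<K1, V1> reader, Mapper<K1, V1, K2, V2> mapper,
                  OutputCollector<K2, V2> output, Reporter reporter)
      throws IOException {
    K1 key = reader.createKey();
    V1 value = reader.createValue();
    long total = 0, failed = 0;
    while (reader.next(key, value)) {
      total++;
      try {
        mapper.map(key, value, output, reporter);
      } catch (RuntimeException e) {                      // log and count, don't die
        failed++;
        reporter.incrCounter("Task", "SKIPPED_RECORDS", 1);
        if (2 * failed > total) {                         // >50% bad: fail early
          throw new RuntimeException(
              "Too many bad records: " + failed + " of " + total, e);
        }
      }
    }
  }
}
{code}

Note that only RuntimeExceptions are swallowed here; IOExceptions, such as those from
RecordWriter.write(), still propagate and fail the task, as the description requires.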

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

