hadoop-common-dev mailing list archives

From "Joydeep Sen Sarma (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-153) skip records that throw exceptions
Date Tue, 29 Apr 2008 04:39:56 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12592950#action_12592950 ]

Joydeep Sen Sarma commented on HADOOP-153:
------------------------------------------

> That is, even though recordreaders might not have the logic to determine whether a particular record is corrupt, the framework can do it with some help from the reader

> If the record reader can identify bad records and skip them, then the framework need not get involved

At the beginning of this jira it was mentioned that exceptions thrown from RecordReader.next() were also covered here. I take it from these comments that this is no longer the case. It seems to me that if a record reader can skip a bad record reliably (which is the support this jira asks of record readers), then it can also avoid throwing exceptions altogether, since it can catch any exception inside the next() body and invoke the same logic to skip to the next record.
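
For illustration, a minimal sketch of that idea (hypothetical class and helper names, not the actual LineRecordReader code), assuming the old mapred API:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.RecordReader;

    // Hypothetical sketch: a reader whose next() never lets a per-record failure
    // escape -- it catches the exception, realigns on the next record, and retries.
    public abstract class SkippingRecordReader
        implements RecordReader<LongWritable, Text> {

      public boolean next(LongWritable key, Text value) throws IOException {
        while (true) {
          try {
            return readOneRecord(key, value);   // hypothetical helper: parse one record, false at EOF
          } catch (RuntimeException corrupt) {
            // bad record: drop it, move to the next record boundary, try again
            skipToNextRecordBoundary();         // hypothetical helper
          }
        }
      }

      // helpers a concrete reader would have to supply
      protected abstract boolean readOneRecord(LongWritable key, Text value) throws IOException;
      protected abstract void skipToNextRecordBoundary() throws IOException;
    }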

Just wanted to make sure, since this jira was mentioned as something that might preempt HADOOP-3144 (putting basic corruption detection code into LineRecordReader), but that doesn't seem to be the case.

> skip records that throw exceptions
> ----------------------------------
>
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Devaraj Das
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen under RecordWriter.write().  Cancelling partial output could be hard.  So such output errors will still result in task failure.
> This behaviour should be optional, but enabled by default.  A count of errors per task and job should be maintained and displayed in the web ui.  Perhaps if some percentage of records (>50%?) result in exceptions then the task should fail.  This would stop jobs early that are misconfigured or have buggy code.
> Thoughts?
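
For illustration only, a rough sketch (hypothetical names, not actual MapRunner code) of the framework-side behaviour described above: exceptions from map() are logged and counted, and the task fails once more than half of the records seen so far have thrown:

    import java.io.IOException;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.RecordReader;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical sketch of the proposed framework behaviour for map tasks.
    public class SkippingMapRunner {
      private static final Log LOG = LogFactory.getLog(SkippingMapRunner.class);

      void run(RecordReader<LongWritable, Text> reader,
               Mapper<LongWritable, Text, Text, Text> mapper,
               OutputCollector<Text, Text> output,
               Reporter reporter) throws IOException {
        LongWritable key = reader.createKey();
        Text value = reader.createValue();
        long total = 0, bad = 0;
        while (reader.next(key, value)) {       // assumes the reader already skips unreadable records
          total++;
          try {
            mapper.map(key, value, output, reporter);
          } catch (Exception e) {
            bad++;
            LOG.warn("Error processing record " + total + ", skipping", e);  // logged, not fatal ...
            // a real implementation would also bump a per-task counter here for the web ui
            if (bad * 2 > total) {              // ... unless more than ~50% of records have failed
              throw new IOException("Too many bad records: " + bad + " of " + total);
            }
          }
        }
      }
    }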

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

