hadoop-common-dev mailing list archives

From "Enis Soztutar (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-153) skip records that throw exceptions
Date Tue, 11 Mar 2008 11:58:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577411#action_12577411 ]

Enis Soztutar commented on HADOOP-153:
--------------------------------------

bq. The tricky bit will be identifying the failed record number, no? The naive approach would
be to have the child report after each record has been processed, so that the parent can then
know, when it crashes, which record it was on. But that would probably be too expensive.

In the original MapReduce paper, the record number is kept in a global variable which is then
passed to the master in a UDP packet. The master decides that the record is malformed
if it sees more than one failure report for the same record number.
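
For concreteness, a rough sketch of the paper's mechanism (the paper uses a C++ signal handler that sends a "last gasp" UDP packet containing the sequence number; the Java translation and the LastGaspReporter name below are illustrative assumptions, not Hadoop code):

{code:java}
// Illustrative sketch only: before processing each record the task updates
// currentRecordNo; on failure it fires a best-effort UDP report to the master.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

class LastGaspReporter {
  // Updated before each record is processed, read on failure.
  static volatile long currentRecordNo = -1;

  static void reportFailure(InetAddress master, int port) {
    try (DatagramSocket socket = new DatagramSocket()) {
      byte[] payload = ByteBuffer.allocate(Long.BYTES)
                                 .putLong(currentRecordNo).array();
      // Fire-and-forget: the packet may be lost, which is why the master
      // only skips a record after more than one report for the same number.
      socket.send(new DatagramPacket(payload, payload.length, master, port));
    } catch (Exception ignored) {
      // Best-effort; the task is dying anyway.
    }
  }
}
{code}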

I think we may be able to catch the exception in the Child process and make an IPC call to the TT
(which in turn reports this to the JT). There may be situations in which the IPC will fail; for those
tasks we can adopt the above strategy of reporting the record number in the subsequent re-execution
of the task, to find out exactly which record makes the computation fail.
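
A sketch of what that Child-side loop might look like (the TaskUmbilical interface and its reportBadRecord method are invented names standing in for the proposed Child-to-TT IPC; the RecordReader/Mapper types are the existing org.apache.hadoop.mapred interfaces):

{code:java}
import java.io.IOException;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

class SkippingMapRunner {
  /** Hypothetical Child-to-TaskTracker IPC, named for illustration only. */
  interface TaskUmbilical {
    void reportBadRecord(long recordNo, Throwable cause) throws IOException;
  }

  static <K1, V1, K2, V2> void run(RecordReader<K1, V1> in,
                                   Mapper<K1, V1, K2, V2> mapper,
                                   OutputCollector<K2, V2> out,
                                   Reporter reporter,
                                   TaskUmbilical umbilical) throws IOException {
    K1 key = in.createKey();
    V1 value = in.createValue();
    long recordNo = 0;
    while (in.next(key, value)) {
      try {
        mapper.map(key, value, out, reporter);
      } catch (Exception e) {
        try {
          // Tell the TT which record failed; the TT forwards it to the JT.
          umbilical.reportBadRecord(recordNo, e);
        } catch (IOException ipcFailed) {
          // IPC failed: fall back to the paper's strategy above -- let the
          // task die and identify the record on the next execution.
          throw new RuntimeException(e);
        }
      }
      recordNo++;
    }
  }
}
{code}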


> skip records that throw exceptions
> ----------------------------------
>
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Devaraj Das
>             Fix For: 0.17.0
>
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations
> should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen
> under RecordWriter.write().  Cancelling partial output could be hard.  So such output errors
> will still result in task failure.
> This behaviour should be optional, but enabled by default.  A count of errors per task
> and job should be maintained and displayed in the web ui.  Perhaps if some percentage of records
> (>50%?) result in exceptions then the task should fail.  This would stop jobs early that
> are misconfigured or have buggy code.
> Thoughts?
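
A rough illustration of the skipping behaviour the description asks for (SkippingRecordReader is a hypothetical wrapper, not Hadoop code; its error counter stands in for the per-task count the web ui would display):

{code:java}
import java.io.IOException;
import org.apache.hadoop.mapred.RecordReader;

class SkippingRecordReader<K, V> implements RecordReader<K, V> {
  private final RecordReader<K, V> raw;
  private long records = 0, errors = 0;

  SkippingRecordReader(RecordReader<K, V> raw) { this.raw = raw; }

  public boolean next(K key, V value) throws IOException {
    while (true) {
      records++;
      try {
        return raw.next(key, value);  // false means end of split
      } catch (IOException e) {
        errors++;                     // would also feed a per-task counter
        if (errors * 2 > records) {   // >50% bad: fail the task early
          throw new IOException("too many malformed records", e);
        }
        // Otherwise skip: the underlying reader is assumed to have advanced
        // to the start of a subsequent record, as the description requires.
      }
    }
  }

  public K createKey() { return raw.createKey(); }
  public V createValue() { return raw.createValue(); }
  public long getPos() throws IOException { return raw.getPos(); }
  public float getProgress() throws IOException { return raw.getProgress(); }
  public void close() throws IOException { raw.close(); }
}
{code}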

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

