hadoop-common-dev mailing list archives

From "Enis Soztutar (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-153) skip records that throw exceptions
Date Mon, 10 Mar 2008 12:21:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12576961#action_12576961 ]

Enis Soztutar commented on HADOOP-153:
--------------------------------------

Honestly, I think the extra complexity (especially the RecordReader API change, which would
break a lot of legacy code) is not justified for solving this issue. To be more explicit, the
framework does not need to "continue from where it was" once a task throws an exception. I
think we can just do step 0 (as defined above) and re-execute the task from the beginning,
but this time skipping the problematic record. Resuming the execution from where we left off
and completely re-executing the task should not differ much in cost, since the number of
tasks is assumed to be high enough that each individual task is small.
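
To make the idea concrete, here is a minimal, self-contained sketch of the retry-and-skip
loop. Every name in it is hypothetical, not an existing Hadoop API: the first attempt reports
the failing record index, and the re-execution (from the beginning, as proposed) skips it. In
the real framework the set of known-bad indices would live in the TIP at the JobTracker and
be handed to the next task attempt.

{code}
// Hypothetical sketch only. None of these names are existing Hadoop APIs.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SkipOnRetrySketch {

  /** Carries the failing record index up to the (hypothetical) framework. */
  static class RecordFailedException extends RuntimeException {
    final long recordIndex;
    RecordFailedException(long recordIndex, Throwable cause) {
      super("record " + recordIndex + " failed", cause);
      this.recordIndex = recordIndex;
    }
  }

  /** Runs the task from the beginning, skipping known-bad indices. */
  static void runTask(List<String> records, Set<Long> knownBad) {
    long index = 0;
    for (String record : records) {
      if (!knownBad.contains(index)) {
        try {
          process(record);                 // the user's map() would run here
        } catch (RuntimeException e) {
          throw new RecordFailedException(index, e);
        }
      }
      index++;
    }
  }

  static void process(String record) {
    if (record.contains("bad")) {
      throw new IllegalArgumentException("cannot parse: " + record);
    }
    System.out.println("processed: " + record);
  }

  public static void main(String[] args) {
    List<String> input = Arrays.asList("a", "bad-record", "c");
    Set<Long> knownBad = new HashSet<Long>();
    while (true) {
      try {
        runTask(input, knownBad);          // full re-execution each attempt
        return;
      } catch (RecordFailedException e) {
        knownBad.add(e.recordIndex);       // remember the index for the retry
      }
    }
  }
}
{code}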

I propose:
1. Define the concept of a failed record number that is set by tasks and propagated to the
JobTracker on task failures. This becomes part of the TIP object at the JobTracker.
2. Define an API in JobConf and a configuration item in hadoop-site to [dis]allow skipping
records (can be merged with 3 below; see the configuration sketch after this list).
3. Define an API in JobConf and a configuration item in hadoop-site to set the maximum number
(or percentage) of records that can be skipped.
4. Do not change the RecordReader/Writer interfaces.
5. The recovery/skip above is done on a best-effort basis. That is, the worst case is that
tasks fail!
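
For points 2 and 3, the user-facing part could be as small as two configuration properties.
This is a hypothetical sketch: the property names below do not exist anywhere; only the
JobConf.setBoolean()/setInt() setters (inherited from Configuration) are real.

{code}
import org.apache.hadoop.mapred.JobConf;

public class SkipConfigSketch {
  public static void configureSkipping(JobConf conf) {
    // Point 2: [dis]allow skipping (hypothetical property name).
    conf.setBoolean("mapred.skip.records", true);
    // Point 3: cap how many records a task may skip before it is failed
    // outright (hypothetical property name; a percentage would work too).
    conf.setInt("mapred.skip.records.max", 10);
  }
}
{code}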
 

So this functionality will be completely transparent to the application code, except for
setting the configuration. If the user code is not side-effect free, then the user has to
deal with that, because tasks can already be executed more than once for various reasons
(for example, speculative execution).

Any thoughts?

> skip records that throw exceptions
> ----------------------------------
>
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Devaraj Das
>             Fix For: 0.17.0
>
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations
> should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen
> under RecordWriter.write().  Cancelling partial output could be hard.  So such output errors
> will still result in task failure.
> This behaviour should be optional, but enabled by default.  A count of errors per task
> and job should be maintained and displayed in the web ui.  Perhaps if some percentage of records
> (>50%?) result in exceptions then the task should fail.  This would stop jobs early that
> are misconfigured or have buggy code.
> Thoughts?
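
As a concrete reading of the quoted description, the skip-inside-next() variant could look
roughly like the wrapper below. This is only a sketch against the old
org.apache.hadoop.mapred.RecordReader interface; the error counting and the 50% cut-off are
illustrative, not an existing API.

{code}
import java.io.IOException;
import org.apache.hadoop.mapred.RecordReader;

/** Sketch: wraps a RecordReader and skips records whose next() throws. */
public class SkippingRecordReader<K, V> implements RecordReader<K, V> {
  private final RecordReader<K, V> delegate;
  private long records = 0;
  private long errors = 0;

  public SkippingRecordReader(RecordReader<K, V> delegate) {
    this.delegate = delegate;
  }

  public boolean next(K key, V value) throws IOException {
    while (true) {
      records++;
      try {
        return delegate.next(key, value);   // normal case: no skipping
      } catch (IOException e) {
        errors++;                           // count and skip this record
        // Fail early if most records are bad (the >50% idea above);
        // wait for a few records so one error does not kill the task.
        if (records >= 10 && errors * 2 > records) {
          throw new IOException("too many bad records: "
              + errors + " of " + records);
        }
        // Loop again: the underlying reader is assumed to have advanced
        // to the start of a subsequent record (best effort).
      }
    }
  }

  public K createKey() { return delegate.createKey(); }
  public V createValue() { return delegate.createValue(); }
  public long getPos() throws IOException { return delegate.getPos(); }
  public float getProgress() throws IOException { return delegate.getProgress(); }
  public void close() throws IOException { delegate.close(); }
}
{code}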


