From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-153) skip records that throw exceptions
Date Tue, 06 May 2008 12:40:56 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12594570#action_12594570 ]

Devaraj Das commented on HADOOP-153:
------------------------------------

So here are some thoughts, after some discussion with others, on how to handle app-level faults.
Comments welcome.

Java Maps
In this case, we know immediately which record couldn't be processed, and depending on
the type of exception the map method threw, we can decide whether or not to continue (the
user can tell us which exceptions are fatal; we could also have a couple of defaults like
OOM). If we decide not to continue, the task can be reexecuted in the *same tasktracker
slot*, and this time that record is skipped. In order to know which records should be
skipped on reexecution, the task, as part of the progress/ping RPC, tells the TaskTracker
the record number of the last successfully processed record; the set of bad record numbers
is then passed to the task upon reexecution, and the task simply skips those records.
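
For concreteness, a minimal sketch of that loop follows. This is a sketch only: the Mapper
and Umbilical interfaces, the fatal-exception set, and all names here are hypothetical
stand-ins, not actual Hadoop APIs.

{code:java}
import java.util.Set;
import java.util.SortedSet;

/** Hypothetical sketch of the Java-map skipping scheme; not a real Hadoop API. */
public class SkippingRunner {

  /** Stand-in for the user's map method. */
  interface Mapper {
    void map(long recordNo, String record) throws Exception;
  }

  /** Stand-in for the progress/ping RPC back to the TaskTracker. */
  interface Umbilical {
    void ping(long lastProcessedRecordNo);
  }

  private final Set<Class<? extends Exception>> fatal; // user-declared fatal exceptions
  private final SortedSet<Long> knownBad;              // bad records from earlier attempts

  SkippingRunner(Set<Class<? extends Exception>> fatal, SortedSet<Long> knownBad) {
    this.fatal = fatal;
    this.knownBad = knownBad;
  }

  void run(Iterable<String> records, Mapper mapper, Umbilical umbilical) throws Exception {
    long recordNo = -1;
    for (String record : records) {
      recordNo++;
      if (knownBad.contains(recordNo)) {
        continue; // skip records that failed on earlier attempts
      }
      try {
        mapper.map(recordNo, record);
      } catch (Exception e) {
        if (fatal.contains(e.getClass())) {
          // Fatal: rethrow so the TaskTracker can add recordNo to the bad
          // set and reexecute the task in the same tasktracker slot.
          throw e;
        }
        // Non-fatal: log and move on to the next record.
      }
      umbilical.ping(recordNo); // report the last successfully processed record
    }
  }
}
{code}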

Streaming
In this case, the Java parent notifies the TaskTracker of the last successfully processed
record. The "last successfully processed" record here refers to the record that was sent
to the streaming child process just before the crash was detected. The same TaskTracker
then reexecutes the task, and this time the Java parent skips that record, assuming it was
the one on which the process crashed. If the process crashes even now, the task is
reexecuted again, and this time the Java parent skips the last 2 records. This could go
on, with every reexecution skipping the last 2*exec-count records (where exec-count is the
number of reexecutions so far). This gives us a range within which the faulty record lies.
Upon the first successful reexecution, the TaskTracker passes the range on to the
JobTracker, and the user can then debug his input and/or the program that processes it. An
alternative strategy is to binary-search for the offending record.
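
Taking the 2*exec-count formula literally, the window arithmetic would look something like
this (a sketch; the class and method names are made up):

{code:java}
/** Illustrative arithmetic for the expanding skip window; names are invented. */
public class SkipWindow {

  /** Inclusive [start, end] range of records to skip on the given reexecution. */
  static long[] recordsToSkip(int execCount, long lastSentRecord) {
    long width = Math.min(2L * execCount, lastSentRecord + 1);
    return new long[] { lastSentRecord - width + 1, lastSentRecord };
  }

  public static void main(String[] args) {
    // Suppose the crash was detected after record 1000 was sent to the child.
    for (int exec = 1; exec <= 3; exec++) {
      long[] r = recordsToSkip(exec, 1000);
      System.out.printf("reexecution %d skips records %d..%d%n", exec, r[0], r[1]);
    }
  }
}
{code}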

Pipes
Exactly the same approach as for Streaming applies here. The one point to note is that if
we let the user application tell us whenever it has successfully processed a record
(similar to the status/progress calls to the Java parent), it would substantially narrow
down the records that need to be skipped on reexecution.
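
To illustrate why the acknowledgment helps: with an explicit per-record ack, the suspect
range collapses from a 2*exec-count window to just the records between the last ack and
the last record sent. A tiny sketch (all names invented):

{code:java}
/** Hypothetical per-record acknowledgment tracker for the Pipes parent. */
public class AckTracker {
  private long lastSent = -1;  // last record the Java parent wrote to the child
  private long lastAcked = -1; // last record the child confirmed it processed

  void sent(long recordNo) { lastSent = recordNo; }

  void acked(long recordNo) { lastAcked = recordNo; } // the child's status up-call

  /** On a crash, only records after lastAcked and up to lastSent are suspect. */
  long[] suspectRange() {
    return new long[] { lastAcked + 1, lastSent };
  }
}
{code}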

> skip records that throw exceptions
> ----------------------------------
>
>                 Key: HADOOP-153
>                 URL: https://issues.apache.org/jira/browse/HADOOP-153
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>            Assignee: Devaraj Das
>
> MapReduce should skip records that throw exceptions.
> If the exception is thrown under RecordReader.next() then RecordReader implementations
> should automatically skip to the start of a subsequent record.
> Exceptions in map and reduce implementations can simply be logged, unless they happen
> under RecordWriter.write().  Cancelling partial output could be hard.  So such output
> errors will still result in task failure.
> This behaviour should be optional, but enabled by default.  A count of errors per task
> and job should be maintained and displayed in the web ui.  Perhaps if some percentage of
> records (>50%?) result in exceptions then the task should fail.  This would stop jobs
> early that are misconfigured or have buggy code.
> Thoughts?
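
As an aside, the error-percentage cutoff in the description above could be as simple as
the following (a sketch; the 50% threshold and the 100-record warm-up are assumptions,
nothing decided on this issue):

{code:java}
/** Hypothetical per-task error counter for the >50% cutoff floated above. */
public class RecordErrorGuard {
  private long processed; // records attempted so far
  private long failed;    // records that threw an exception

  /** Count one record; returns true while the task should keep going. */
  boolean keepGoing(boolean threwException) {
    processed++;
    if (threwException) {
      failed++;
    }
    // Don't judge on a tiny sample; after 100 records, fail the task once
    // more than half of the records seen so far have thrown.
    return processed < 100 || failed * 2 <= processed;
  }
}
{code}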

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

