hadoop-mapreduce-issues mailing list archives

From "Siddharth Seth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-4819) AM can rerun job after reporting final job status to the client
Date Thu, 03 Jan 2013 09:22:15 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13542814#comment-13542814 ]

Siddharth Seth commented on MAPREDUCE-4819:
-------------------------------------------

Bobby, Jason, along with trying to ensure that a commit does not happen twice, I think there
is value in committing the job history file before changing the job status to SUCCESS, primarily
so that the RPC behaves consistently. Otherwise the RPC can see temporary final states if the AM
crashes during the history file persist, and it won't be able to retrieve counters or other job
status until the next AM attempt. This does have the drawback of a small performance hit, though,
and it also makes job history a critical part of a job.
Using separate files for marking success / failure - I am guessing this is to have a smaller
chance of a failing persist, compared to persisting events via the HistoryFile, which may
already have a backlog of events?
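
As a rough sketch of the ordering being suggested (the method names below are hypothetical
stand-ins, not the actual MRAppMaster API - only the ordering is the point):

{code}
import java.io.IOException;

// Sketch only: the ordering matters, the method names are made up.
class JobFinishOrdering {
  void finishSuccessfully() throws IOException {
    persistHistoryFile();  // 1. durable record first: a crash after this is recoverable
    markJobSucceeded();    // 2. only now can the client RPC observe SUCCESS
    unregisterFromRM();    // 3. unregister last, so the RM never reruns a finished job
  }

  void persistHistoryFile() throws IOException { /* flush + close the history file */ }
  void markJobSucceeded() { /* transition the internal job state */ }
  void unregisterFromRM() { /* finishApplicationMaster() against the RM */ }
}
{code}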

Wondering if it's possible to achieve the same checks via the CommitterEventHandler instead
of checking in the MRAppMaster class, i.e. follow the regular recovery path, except that the
CommitterEventHandler emits success / failed / abort events depending on the presence of these
files / (history events).
Alternately, the current implementation could be simplified by using a custom RMCommunicator
that does not depend on JobImpl, i.e. just the history copier and an RMCommunicator to unregister
from the RM.
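
For illustration, the recovery-time decision could look something like the sketch below (the
marker file names and the enum are assumptions, not the patch's identifiers; only the
FileSystem calls are real Hadoop API):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: marker names are made up for the sketch.
class CommitRecoveryCheck {
  enum PriorCommit { SUCCEEDED, FAILED, UNKNOWN }

  PriorCommit check(FileSystem fs, Path stagingDir) throws IOException {
    // Failure is checked before success, per the patch comment further down.
    if (fs.exists(new Path(stagingDir, "COMMIT_FAIL"))) {
      return PriorCommit.FAILED;
    }
    if (fs.exists(new Path(stagingDir, "COMMIT_SUCCESS"))) {
      return PriorCommit.SUCCEEDED;
    }
    return PriorCommit.UNKNOWN;  // no marker: follow the regular recovery path
  }
}
{code}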

Comments on the current patch
- If the last AM attempt were to crash, the output data exists (since the _SUCCESS_ file
exists), but the RPC will not see SUCCESS.
- While the new AM is running, it will not be able to handle status, counter, etc. requests.
This seems a little problematic if a success has already been reported over RPC by the previous
AM. Since this AM is dealing with the history file, could it possibly return information
from the history file?
Committing history before SUCCESS may help with the previous two points.

- If the recovered AppMaster is not the last retry, it looks like the RM unregistration will
not happen. (isLastAMRetry)
- Is a KILLED status also required? A KILLED during commit should not be reported as FAILED.
- The check for commitSuccess / commitFailure in the AM: the failure check can happen before
the success check (low chance, but a success file could be created followed by an RPC failure).
- CommitEventHandler.touchz could throw an exception if the file already exists, to prevent
lost AMs from committing; see the sketch below. (Maybe not required after MAPREDUCE-4832?)
- The historyService creation can move into the common if (copyHistory) check.
- I don't think "AMStartedEvent" can be ignored: the history server will have no info about
past AMs. I think only the current AM's event needs to be ignored.

Wondering if it's possible to use HDFS dirs and timestamps to coordinate between an active
AM and lost AMs.
Also, are HDFS dir operations cheaper than file create operations (NN only / NN + DN)? Not
sure if mkdir / 0-length file creation are NN-only ops.
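
On the touchz point above, a minimal sketch using an exclusive create (FileSystem.create with
overwrite=false is real API; the marker path handling around it is assumed):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class Markers {
  // Create a zero-length marker, failing if it already exists, so a lost
  // AM cannot silently re-commit: create(path, false) throws when the path
  // is already present. Since no block data is written, this is a
  // metadata-only NameNode operation, as is mkdirs - no DataNodes involved.
  static void touchz(FileSystem fs, Path p) throws IOException {
    fs.create(p, false).close();
  }
}
{code}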

                
> AM can rerun job after reporting final job status to the client
> ---------------------------------------------------------------
>
>                 Key: MAPREDUCE-4819
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4819
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mr-am
>    Affects Versions: 0.23.3, 2.0.1-alpha
>            Reporter: Jason Lowe
>            Assignee: Bikas Saha
>            Priority: Critical
>         Attachments: MAPREDUCE-4819.1.patch, MAPREDUCE-4819.2.patch, MAPREDUCE-4819.3.patch, MR-4819-bobby-trunk.txt, MR-4819-bobby-trunk.txt
>
>
> If the AM reports the final job status to the client but then crashes before unregistering
> with the RM, the RM can run another AM attempt. Currently, AM re-attempts assume that the
> previous attempts did not reach a final job state, which causes the job to rerun (from
> scratch, if the output format doesn't support recovery).
> Re-running the job when we've already told the client the final status of the job is
> bad for a number of reasons. If the job failed, it's confusing at best, since the client was
> already told the job failed but the subsequent attempt could succeed. If the job succeeded,
> there could be data loss: a subsequent job launched by the client could try to consume the
> job's output as input just as the re-attempt starts removing output files in preparation for
> the output commit.

