hadoop-mapreduce-issues mailing list archives

From "Vinod Kumar Vavilapalli (Updated) (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MAPREDUCE-3656) Sort job on 350 scale is consistently failing with latest MRV2 code
Date Thu, 12 Jan 2012 19:31:38 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3656:

    Status: Open  (was: Patch Available)

Looks good overall. A couple of minor comments: 
 - The comment saying "Timing can cause this to happen [..]" needs to be updated/removed
 - For handling the case where JVM is unregistered before it gets a task, we should remove
it from {{launchedJVMs}} during unregister. Once we do this, we should think about synchronization
issues carefully.
 - In getTask(), why do we need both of the checks {{jvmIDToActiveAttemptMap.containsKey(wJvmID)}}
and {{jvmIDToActiveAttemptMap.get(wJvmID) == null}}?
 - We went through a couple of iterations on this part of the code, so let us make sure things
are fine by running the AMScalability benchmark (100K maps) once.
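On the getTask() point above: with a standard {{java.util.HashMap}}, {{containsKey()}} and a null-check on {{get()}} give different answers only when a key is explicitly mapped to null. A minimal sketch of that distinction (the map name and keys here are hypothetical, not the actual MRV2 code):

```java
import java.util.HashMap;
import java.util.Map;

public class MapCheckDemo {
    // Builds a map where a registered JVM can temporarily map to null
    // (i.e. registered, but no task attempt assigned yet).
    static Map<String, String> buildMap() {
        Map<String, String> m = new HashMap<>();
        m.put("jvm_1", null);
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> jvmToAttempt = buildMap();

        // containsKey() distinguishes "key present with null value"
        // from "key absent"; get() == null alone cannot.
        System.out.println(jvmToAttempt.containsKey("jvm_1")); // true: registered
        System.out.println(jvmToAttempt.get("jvm_1") == null); // true: no attempt yet

        System.out.println(jvmToAttempt.containsKey("jvm_2")); // false: never registered
        System.out.println(jvmToAttempt.get("jvm_2") == null); // also true
    }
}
```

So both checks are meaningful only if the map can legitimately hold null values; if it never does, one of the two is redundant.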
> Sort job on 350 scale is consistently failing with latest MRV2 code 
> --------------------------------------------------------------------
>                 Key: MAPREDUCE-3656
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3656
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster, mrv2, resourcemanager
>    Affects Versions: 0.23.1
>            Reporter: Karam Singh
>            Assignee: Siddharth Seth
>            Priority: Blocker
>             Fix For: 0.23.1
>         Attachments: MR3656.txt
> With the code checked out over the last two days, the sort job at 350-node scale with
16800 maps and 680 reduces has been failing consistently for around the last 6 runs.
> When around 50% of the maps are complete, the job suddenly jumps to the failed state.
> On looking at the NM log, we found that the RM sent a stop-container request to the NM for the AM container.
> But at INFO level in the RM log, we could not find why the RM is killing the AM when the job itself was not killed.
> One thing found common across the failed AM logs is:
> org.apache.hadoop.yarn.state.InvalidStateTransitonException
> but with different events each time.
> For example, one log says:
> {code}
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: TA_UPDATE
> {code}
> Whereas another log says:
> {code}
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: JOB_COUNTER_UPDATE
> {code}

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

