hadoop-mapreduce-issues mailing list archives

From "Jason Lowe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-4797) LocalContainerAllocator can loop forever trying to contact the RM
Date Wed, 14 Nov 2012 02:26:12 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496769#comment-13496769 ]

Jason Lowe commented on MAPREDUCE-4797:

The code looks like it will only try to connect so many times before giving up, but there's
a bug in LocalContainerAllocator.heartbeat:

AllocateResponse allocateResponse = scheduler.allocate(allocateRequest);
AMResponse response;
try {
  response = allocateResponse.getAMResponse();
  // Reset retry count if no exception occurred.
  retrystartTime = System.currentTimeMillis();
} catch (Exception e) {

Note that the try block surrounds only the retrieval of the response *after* the {{allocate}}
RPC call, so the exception is actually thrown outside the try and never reaches the retry
count logic here.  The exception then bubbles up to the RMCommunicator allocator thread,
which, if the exception isn't a {{YarnException}}, simply loops around to try again, forever.
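A minimal, self-contained sketch of the fix being suggested: the RPC call itself must sit inside the try so that connection failures hit the retry-window logic. The class, interface, and interval names below are hypothetical stand-ins; the real code works with {{AllocateResponse}} and the RMCommunicator internals.

```java
// Hypothetical sketch: the allocate RPC inside the try block, with a
// time-based retry window that is reset only on success.
public class RetryWindowSketch {
    static long retryStartTime = System.currentTimeMillis();
    static final long RETRY_INTERVAL = 5_000; // assumed retry window, in ms

    interface Scheduler {
        String allocate(String request) throws Exception;
    }

    static String heartbeat(Scheduler scheduler) throws Exception {
        try {
            // RPC call inside the try: a connection failure is caught below,
            // unlike the buggy version where only getAMResponse() was guarded.
            String response = scheduler.allocate("allocateRequest");
            // Reset the retry window only after a successful call.
            retryStartTime = System.currentTimeMillis();
            return response;
        } catch (Exception e) {
            if (System.currentTimeMillis() - retryStartTime >= RETRY_INTERVAL) {
                // Retried long enough; fail instead of looping forever.
                throw new Exception(
                    "Could not contact RM after " + RETRY_INTERVAL + " ms", e);
            }
            // Transient failure inside the window: skip this heartbeat
            // and let the next one retry.
            return null;
        }
    }
}
```

With this shape, a non-YarnException connection error is handled where the retry bookkeeping lives, rather than escaping to the allocator thread.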
> LocalContainerAllocator can loop forever trying to contact the RM
> -----------------------------------------------------------------
>                 Key: MAPREDUCE-4797
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4797
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster
>    Affects Versions: 0.23.3, 2.0.1-alpha
>            Reporter: Jason Lowe
> If LocalContainerAllocator has trouble communicating with the RM, it can end up retrying
> forever when the error is not a YarnException.
> This can be particularly bad if the connection went down because the cluster was reset,
> such that the RM and NM have lost track of the process, so nothing else will ever kill
> it.  In this scenario, the looping AM pelts the RM with connection requests every second
> using a stale token, and the RM logs the SASL exceptions over and over.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
