hadoop-yarn-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (YARN-68) NodeManager will refuse to shutdown indefinitely due to container log aggregation
Date Fri, 31 Aug 2012 19:13:08 GMT

     [ https://issues.apache.org/jira/browse/YARN-68?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daryn Sharp updated YARN-68:
----------------------------

    Attachment: YARN-68.patch

Try much harder to shut down the aggregators.  Will stop all the threads in the thread pool
instead of assuming every aggregator has an active thread.  Better exception handling and
setting of state to make it harder to get into a bad state.  It's not perfect because jammed
threads can still block shutdown/restart, but the improved logic makes it much less likely.
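
For context, "stop all the threads in the thread pool" is roughly the standard
java.util.concurrent shutdown pattern sketched below. This is only an illustrative
sketch, not the actual YARN-68 patch; the pool, its size, and the 10-second bound
are assumptions for the example.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class AggregatorPoolShutdownSketch {
      // Hypothetical pool standing in for the NM's log-aggregation worker threads.
      private final ExecutorService aggregatorPool = Executors.newFixedThreadPool(4);

      /** Stop every worker in the pool instead of waiting on each aggregator individually. */
      public void stopAggregators() {
        aggregatorPool.shutdown();  // stop accepting new aggregation tasks
        try {
          // Bounded wait; the 10-second limit is an assumption for illustration only.
          if (!aggregatorPool.awaitTermination(10, TimeUnit.SECONDS)) {
            // Interrupt all workers, including ones stuck waiting on a dead application.
            aggregatorPool.shutdownNow();
          }
        } catch (InterruptedException ie) {
          aggregatorPool.shutdownNow();
          Thread.currentThread().interrupt();  // preserve the caller's interrupt status
        }
      }
    }

As the comment above notes, threads that ignore interrupts can still jam shutdown;
shutdownNow() only interrupts workers, it cannot force them to exit.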
                
> NodeManager will refuse to shutdown indefinitely due to container log aggregation
> ---------------------------------------------------------------------------------
>
>                 Key: YARN-68
>                 URL: https://issues.apache.org/jira/browse/YARN-68
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 0.23.3
>         Environment: QE
>            Reporter: patrick white
>            Assignee: Daryn Sharp
>         Attachments: YARN-68.patch
>
>
> The nodemanager is able to get into a state where containermanager.logaggregation.AppLogAggregatorImpl
> will apparently wait indefinitely for log aggregation to complete for an application, even if
> that application has abnormally terminated and is no longer present.
> Observed behavior is that an attempt to stop the nodemanager daemon will return but have no
> effect; the nm log continually displays messages similar to this:
> [Thread-1]2012-08-21 17:44:07,581 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
> Waiting for aggregation to complete for application_1345221477405_2733
> The only recovery we found to work was to 'kill -9' the nm process.
> What exactly causes the NM to enter this state is unclear, but we see this behavior reliably
> when the NM has run a task which failed. For example, when debugging oozie distcp actions and
> having a distcp map task fail, the NM that was running the container will enter this state
> where a shutdown on that NM never completes; 'never' in this case was waiting for 2 hours
> before killing the nodemanager process.
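
To make the failure mode concrete: the hang reported above is consistent with an unbounded
wait on an aggregator that never signals completion. The snippet below is a hypothetical
illustration of that shape only, not the actual AppLogAggregatorImpl code.

    // Hypothetical sketch of the hang: a stop path that waits with no timeout
    // blocks forever if aggregation for a dead application never completes.
    public class UnboundedWaitSketch {
      private final Object lock = new Object();
      private boolean aggregationDone = false;  // never set if the application vanished

      public void stop() throws InterruptedException {
        synchronized (lock) {
          while (!aggregationDone) {
            lock.wait();  // no timeout: daemon shutdown hangs here indefinitely
          }
        }
      }
    }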

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
