hadoop-yarn-issues mailing list archives

From "Jason Lowe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-2314) ContainerManagementProtocolProxy can create thousands of threads for a large cluster
Date Fri, 18 Jul 2014 14:07:05 GMT

    [ https://issues.apache.org/jira/browse/YARN-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066376#comment-14066376 ]

Jason Lowe commented on YARN-2314:
----------------------------------

While there is cache mismanagement going on as described above, a bigger issue is how this
cache interacts with the ClientCache in the RPC layer and how Connection instances behave.
Despite this cache's intent to limit the number of connected NMs, calling stopProxy
does *not* mean the connection and corresponding IPC client thread are removed.  Closing a
proxy will only shut down threads if there are *no* other instances of that protocol proxy
currently open.  See ClientCache.stopClient for details.  Given that the whole point of the
ContainerManagementProtocolProxy cache is to preserve at least one reference to the Client,
the IPC Client stop method will never be called in practice and IPC client threads will never
be explicitly torn down as a result of calling stopProxy.
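
To make the reference counting concrete, the logic is shaped roughly like the sketch below.  This is illustrative only, not the actual ClientCache source; the class and field names are made up:

{code:java}
// Illustrative sketch of the reference-counting behavior in ClientCache,
// not the actual Hadoop source.  The point: stopClient only stops the
// underlying Client (and its connection threads) when the last proxy
// referencing it goes away.
import java.util.HashMap;
import java.util.Map;

class RefCountedClientCache {
  static class CachedClient {
    int refCount = 0;
    void stopThreads() { /* tear down connections and IPC client threads */ }
  }

  private final Map<String, CachedClient> clients =
      new HashMap<String, CachedClient>();

  synchronized CachedClient getClient(String factoryKey) {
    CachedClient client = clients.get(factoryKey);
    if (client == null) {
      client = new CachedClient();
      clients.put(factoryKey, client);
    }
    client.refCount++;          // each proxy bumps the count
    return client;
  }

  synchronized void stopClient(String factoryKey, CachedClient client) {
    client.refCount--;
    if (client.refCount <= 0) { // only the LAST stopProxy actually stops threads
      clients.remove(factoryKey);
      client.stopThreads();
    }
    // Because ContainerManagementProtocolProxy keeps cached proxies around,
    // the count never hits zero here, so the threads are never torn down.
  }
}
{code}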

As for Connection instances within the IPC Client, outside of erroneous operation they will
only shut down if either they reach their idle timeout or are explicitly told to stop via Client.stop,
and the latter will never be called in practice per above.  That means the number of IPC client
threads lingering around is solely dictated by how fast we're connecting to new nodes and
how long the IPC idle timeout is.  By default this timeout is 10 seconds, and an AM running
a wide-spread large job on a large, idle cluster can easily allocate containers for and connect
to all of the nodes in less than 10 seconds.  That means we can still have thousands of IPC
client threads despite ContainerManagementProtocolProxy's efforts to limit the number of connections.
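
As a rough back-of-envelope (the connect rate below is made up; only the relationship matters), the peak number of lingering connection threads is roughly the rate of new NM connections times the idle timeout:

{code:java}
// Back-of-envelope only; the rate here is made up for illustration.
public class PeakIpcThreadEstimate {
  public static void main(String[] args) {
    double newNmConnectsPerSecond = 400; // hypothetical AM ramp-up rate
    double idleTimeoutSeconds = 10;      // ipc.client.connection.maxidletime default
    long peakThreads = (long) (newNmConnectsPerSecond * idleTimeoutSeconds);
    // ~4000 connection threads alive at once, regardless of the proxy cache size
    System.out.println("approx peak IPC client threads = " + peakThreads);
  }
}
{code}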

In simplest terms this is a regression of MAPREDUCE-3333.  That patch explicitly tuned the
IPC timeout of ContainerManagement proxies to zero so they would be torn down as soon as we
finished the first call.  I've verified that setting the IPC timeout to zero prevents the
explosion of IPC client threads.  That's sort of a ham-fisted fix since it brings the whole
point of the NM proxy cache into question.  We would be keeping the proxy objects around,
but the connection to the NM would need to be re-established each time we reused it.  Not
sure the cache would be worth much at that point.  If we want to explicitly manage the number
of outstanding NM connections without forcing the connections to shut down on each IPC call
then I think we need help from the IPC layer itself.  As I mentioned above, I don't think
there's an exposed mechanism to close an individual connection of an IPC Client.
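
For reference, the MAPREDUCE-3333-style tuning boils down to handing the proxy code a conf with the IPC idle timeout forced to zero, roughly like the sketch below (the helper class and method names are made up):

{code:java}
// Sketch of the MAPREDUCE-3333-style workaround: force the IPC idle timeout
// to zero so the connection is torn down as soon as the call completes.
// The helper class/method names are made up for illustration.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

public class ZeroIdleTimeoutConf {
  public static Configuration wrap(Configuration base) {
    Configuration conf = new Configuration(base);
    conf.setInt(
        CommonConfigurationKeysPublic.IPC_CLIENT_CONNECTION_MAXIDLETIME_KEY, 0);
    return conf;
  }
}
{code}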

So to sum up, we can fix the cache management bugs described in the first comment, but that
alone will not prevent thousands of IPC client threads from co-existing.  We either need to
set the IPC timeout to 0 (which brings the utility of the NM proxy cache into question) or
change the IPC layer to allow us to close individual Client connections.
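
If we go the second route, the kind of hook I'm thinking of would look roughly like the following.  To be clear, this is a purely hypothetical sketch; nothing like it exists on org.apache.hadoop.ipc.Client today:

{code:java}
// Purely hypothetical sketch of the kind of IPC-layer hook discussed above;
// no such method exists in org.apache.hadoop.ipc.Client today.
public interface ConnectionCloser {
  /**
   * Close the single connection (and its thread) associated with the given
   * remote address, leaving the rest of the Client's connections alone.
   * Returns true if a connection was found and closed.
   */
  boolean closeConnection(java.net.InetSocketAddress remoteAddress);
}
{code}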

> ContainerManagementProtocolProxy can create thousands of threads for a large cluster
> ------------------------------------------------------------------------------------
>
>                 Key: YARN-2314
>                 URL: https://issues.apache.org/jira/browse/YARN-2314
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 2.1.0-beta
>            Reporter: Jason Lowe
>            Priority: Critical
>
> ContainerManagementProtocolProxy has a cache of NM proxies, and the size of this cache
> is configurable.  However the cache can grow far beyond the configured size when running on
> a large cluster and blow AM address/container limits.  More details in the first comment.



