hadoop-yarn-issues mailing list archives

From "Jason Lowe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-5501) Container Pooling in YARN
Date Thu, 09 Feb 2017 14:27:41 GMT

    [ https://issues.apache.org/jira/browse/YARN-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859587#comment-15859587

Jason Lowe commented on YARN-5501:

Thanks for the detailed answers.  I highly recommend these get captured in a new revision
of the design document along with diagrams to help others come up to speed and join the discussion.
 Otherwise we have a wall of text in this JIRA that becomes essential reading for anyone
trying to understand what is really being proposed.

bq. If the app can provide some hints as to when it is good to consider a container pre-initialized
then when the container finishes we can carry out the required operations to go back to the
pre-init state.

My thinking is the mere fact that the container finishes is indicative that it is ready for
reuse.  Maybe there's an explicit API they call at the end of the task or whatever, but the
same has to be done for this existing design as well -- we need to know when the container
is ready to be reused.  The main difference I see in this approach is that there isn't an
explicit 'pre-init' step where the users or admins need to premeditate what will be run. 
Instead the first run of the app framework performs the same as it does today, but subsequent
runs are faster since they can reuse those cached containers.  Seems to me the most difficult
part of this is coming up with an efficient container request protocol so YARN can know when
it can reuse an old, cached container and when it cannot.  The existing proposal works around
this by requiring the containers to be setup beforehand as special resource types, but that
won't work for a general container caching approach.
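One way to picture that matching problem: a finished container is only safe to reuse if its launch context (localized resources, environment, launch command) matches the incoming request exactly. A minimal sketch of such a cache follows; every class, method, and field name here is hypothetical for illustration and is not an actual YARN API:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch: cache finished containers keyed by a fingerprint of
// their launch context, and hand one out only on an exact fingerprint match.
public class ContainerReuseCache {
    // Fingerprint = everything that must match for safe reuse:
    // localized resources, environment, and launch command.
    static String fingerprint(String resources, String env, String command) {
        return resources + "|" + env + "|" + command;
    }

    private final Map<String, Queue<String>> pool = new HashMap<>();

    // Called when a container finishes: its completion is taken as the
    // signal that it is ready for reuse.
    public void release(String containerId, String fp) {
        pool.computeIfAbsent(fp, k -> new ArrayDeque<>()).add(containerId);
    }

    // Called on a start-container request: reuse a cached container whose
    // fingerprint matches, otherwise return null and fall back to a cold launch.
    public String acquire(String fp) {
        Queue<String> q = pool.get(fp);
        return (q == null || q.isEmpty()) ? null : q.poll();
    }
}
```

The hard part the sketch glosses over is choosing what goes into the fingerprint: too coarse and an incompatible container gets reused, too fine and nothing ever matches.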

bq. What you are saying makes sense, but in that case container resizing won't work as well.

I don't believe it does for cases where we're trying to resize something like a JVM.  I honestly
don't know of any real-world use cases for it today, but my guess is that they either have
a small front-end launcher that can re-launch something when they are told (via a completely
separate, app-specific channel) that the resize is about to occur/has occurred, or they have
explicit memory caching that they can trivially grow or shrink (again, completely outside
of any help from YARN).

bq. For our scenarios resource constraints are enforced via job objects or cgroups so things
are ok.

They certainly are enforced, but how does the app know about the new constraints so they can
either avoid getting shot or take advantage of the new space?  Simply updating the cgroup
is not going to be sufficient.  Either the process will OOM because it slams into the new
lower limit (potentially instantly if it is already bigger than the new limit) or it will
be completely oblivious that it now has acres of memory that it can use.  If it tried to use
it before it would fail, so how does it know it grew?  For example, the JVM can't magically
do this unless the app is doing some sort of explicit off-heap memory management via direct
buffers, etc. and is told about its memory limit.  Simply updating the cgroup setting doesn't
seem to be a sufficient communication channel here, so I'm curious how that's all you need
to do for your scenario.
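To make the objection concrete: under cgroup v1 the limit change is literally just a file write, and nothing about that write reaches the contained process. A sketch, assuming a cgroup v1 memory controller layout:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: resizing a cgroup v1 memory limit is a single file write; nothing
// in this operation tells the contained process that its budget changed.
public class CgroupResize {
    public static void setMemoryLimit(Path cgroupDir, long bytes) throws IOException {
        // The kernel enforces the new limit immediately...
        Files.write(cgroupDir.resolve("memory.limit_in_bytes"),
                    Long.toString(bytes).getBytes());
        // ...but the process inside the cgroup receives no notification.
        // If it already sits above a lowered limit it gets OOM-killed;
        // if the limit was raised it has no way to discover the headroom.
        // A separate, application-visible channel (signal, RPC, env file)
        // is still needed to communicate the change.
    }
}
```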

It seems to me there needs to be a predictable sequence here, and it's not the same for growing
and shrinking.  When we are growing the memory of a container it's a bit simpler.  The container
needs to be signaled about the resizing event _after_ the resizing has occurred at the enforcement
level (i.e.: cgroup).  This could simply be part of the code that the application runs as
part of reusing the container.  However if we are shrinking the memory of the container then
we need to signal the container _before_ the resizing has occurred _and_ we need to know when
the container process(es) have had a chance to react to that signal and get under the new,
lower limit before actually enforcing it (i.e.: at the cgroup level).  Otherwise the only
way I can see this working is if the container resizing isn't generalized but requires the
container code to always shrink itself to a minimal acceptable level (i.e.: the smallest that
will ever be requested) after running each app's task so every case essentially becomes the
container growth scenario.
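The asymmetric ordering above can be sketched as a small state machine; the interfaces here are hypothetical stand-ins for the cgroup write and the app-notification channel, not YARN APIs:

```java
// Sketch of the ordering argument: grow = enforce first, then notify;
// shrink = notify first, wait for the app to get under the new limit,
// then enforce.
public class ResizeSequencer {
    interface Enforcer { void setLimit(long bytes); }           // e.g. the cgroup write
    interface AppChannel {
        void notifyNewLimit(long bytes);                        // signal the app
        boolean waitUntilUnder(long bytes, long timeoutMs);     // app confirms it shrank
    }

    private final Enforcer enforcer;
    private final AppChannel app;
    private long current;

    ResizeSequencer(Enforcer e, AppChannel a, long initial) {
        this.enforcer = e; this.app = a; this.current = initial;
    }

    // Growing: enforcement can never hurt the app, so enforce first.
    // Shrinking: enforcing first risks an instant OOM, so notify and wait.
    public boolean resize(long newLimit, long timeoutMs) {
        if (newLimit >= current) {
            enforcer.setLimit(newLimit);
            app.notifyNewLimit(newLimit);
        } else {
            app.notifyNewLimit(newLimit);
            if (!app.waitUntilUnder(newLimit, timeoutMs)) {
                return false;  // app could not shrink in time; do not enforce
            }
            enforcer.setLimit(newLimit);
        }
        current = newLimit;
        return true;
    }
}
```

The alternative described at the end of the paragraph, always shrinking to a floor after each task, simply removes the shrink branch by making every resize a grow.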

> Container Pooling in YARN
> -------------------------
>                 Key: YARN-5501
>                 URL: https://issues.apache.org/jira/browse/YARN-5501
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Arun Suresh
>            Assignee: Hitesh Sharma
>         Attachments: Container Pooling - one pager.pdf
> This JIRA proposes a method for reducing the container launch latency in YARN. It introduces
a notion of pooling *Unattached Pre-Initialized Containers*.
> Proposal in brief:
> * Have a *Pre-Initialized Container Factory* service within the NM to create these unattached
pre-initialized containers.
> * The NM would then advertise these containers as special resource types (this should
be possible via YARN-3926).
> * When a start container request is received by the node manager for launching a container
requesting this specific type of resource, it will take one of these unattached pre-initialized
containers from the pool, and use it to service the container request.
> * Once the request is complete, the pre-initialized container would be released and ready
to serve another request.
> This capability would help reduce container launch latencies and thereby allow for development
of more interactive applications on YARN.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org
