hadoop-yarn-issues mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-371) Resource-centric compression in AM-RM protocol limits scheduling
Date Tue, 05 Feb 2013 21:47:14 GMT

    [ https://issues.apache.org/jira/browse/YARN-371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13571763#comment-13571763 ]

Arun C Murthy commented on YARN-371:

bq. 100,000 tasks at 150 bytes each (name of 3 nodes plus some extra stuff) is going to take
about 15MB to transfer

That seems reasonable, but it makes assumptions such as 3 nodes per task etc. That isn't necessarily
true... e.g. several MR apps have inputs with a higher replication factor - I've personally
fixed the JobTracker in the past so it wouldn't fall over for (rogue?) jobs like this.

More importantly - this becomes 15MB per application in the RM. Now, multiply that by 10,000
concurrent apps... or 100,000 apps as we scale.
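
For concreteness, a quick back-of-envelope check of those numbers (a sketch, assuming the
150 bytes/task and 100,000 tasks/app from the quote above):

{code:java}
public class RmMemoryEstimate {
  public static void main(String[] args) {
    long bytesPerTask = 150L;        // assumed request footprint per task
    long tasksPerApp = 100_000L;
    long perApp = bytesPerTask * tasksPerApp;   // ~15 MB per application

    long concurrentApps = 10_000L;
    long rmTotal = perApp * concurrentApps;     // ~150 GB across the RM

    System.out.printf("per app: ~%d MB, RM total: ~%d GB%n",
        perApp / 1_000_000, rmTotal / 1_000_000_000);
  }
}
{code}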

Also, it's 15MB of protobuf (PB) on the wire, which in Java land (i.e. as objects) is probably much more.


Anyway, my point is simple: I'd like to see use-cases where the current protocol is insufficient.
Even then, I value the ability to scale gracefully over a couple of corner-case features...


Given we are very close to stabilizing YARN, all of these changes look like, at best, a 3.x change
to me.

It's perfectly fine to experiment as Bobby says; just be aware of the current design centre and
context, particularly since the proposed model was explicitly rejected when we designed
the current system.

It's fine to change it at some point; it just needs a very, very good reason to do so. Frankly,
I'd like to explore the edges of the current system before we change it.
> Resource-centric compression in AM-RM protocol limits scheduling
> ----------------------------------------------------------------
>                 Key: YARN-371
>                 URL: https://issues.apache.org/jira/browse/YARN-371
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: api, resourcemanager, scheduler
>    Affects Versions: 2.0.2-alpha
>            Reporter: Sandy Ryza
>            Assignee: Sandy Ryza
> Each AMRM heartbeat consists of a list of resource requests. Currently, each resource
request consists of a container count, a resource vector, and a location, which may be a node,
a rack, or "*". When an application wishes to request that a task run in multiple locations, it
must issue a request for each location.  This means that for a node-local task, it must issue
three requests, one at the node-level, one at the rack-level, and one with * (any). These
requests are not linked with each other, so when a container is allocated for one of them,
the RM has no way of knowing which others to get rid of. When a node-local container is allocated,
this is handled by decrementing the number of requests on that node's rack and in *. But when
the scheduler allocates a task with a node-local request on its rack, the request on the node
is left there.  This can cause delay-scheduling to try to assign a container on a node that
nobody cares about anymore.
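
To make the current behaviour concrete, here is a minimal sketch of what an AM has to send
today for a single node-local task, using the ResourceRequest record helpers from later
Hadoop 2.x releases (the host and rack names are hypothetical):

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class NodeLocalAsk {
  public static void main(String[] args) {
    Priority pri = Priority.newInstance(1);
    Resource cap = Resource.newInstance(1024, 1);  // 1 GB, 1 vcore

    // One node-local task requires three unlinked requests:
    List<ResourceRequest> asks = Arrays.asList(
        ResourceRequest.newInstance(pri, "host1", cap, 1),               // node
        ResourceRequest.newInstance(pri, "/rack1", cap, 1),              // rack
        ResourceRequest.newInstance(pri, ResourceRequest.ANY, cap, 1));  // "*"

    // If the rack- or *-level request is satisfied first, the RM cannot
    // tell that the node-level request above is now stale.
    for (ResourceRequest r : asks) {
      System.out.println(r.getResourceName());
    }
  }
}
{code}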
> Additionally, unless I am missing something, the current model does not allow requests
for containers only on a specific node or specific rack. While this is not a use case for
MapReduce currently, it is conceivable that it might be something useful to support in the
future, for example to schedule long-running services that persist state in a particular location,
or for applications that generally care less about latency than data-locality.
> Lastly, the ability to understand which requests are for the same task will possibly
allow future schedulers to make more intelligent scheduling decisions, as well as permit a
more exact understanding of request load.
> I would propose the tweak of allowing a single ResourceRequest to encapsulate all the
location information for a task.  So instead of just a single location, a ResourceRequest
would contain an array of locations, including nodes that it would be happy with, racks that
it would be happy with, and possibly *.  Side effects of this change would be a reduction
in the amount of data that needs to be transferred in a heartbeat, as well as in the RM's
memory footprint, because what used to be different requests for the same task are now able
to share some common data.
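
As a hedged sketch (this is not an actual YARN API; the field names are purely illustrative),
the proposed record might look something like:

{code:java}
import java.util.List;

import org.apache.hadoop.yarn.api.records.Resource;

// Hypothetical: one request carries every acceptable location for a task,
// so the RM can retire all of them the moment the task is placed.
public class MultiLocationResourceRequest {
  private int priority;
  private Resource capability;   // resource vector (memory, vcores, ...)
  private int numContainers;
  private List<String> nodes;    // nodes the task would be happy with
  private List<String> racks;    // racks the task would be happy with
  private boolean relaxToAny;    // whether "*" is also acceptable
  // getters/setters omitted for brevity
}
{code}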
> While this change breaks compatibility, if it is going to happen, it makes sense to do
it now, before YARN becomes beta.

