hadoop-yarn-issues mailing list archives

From "Wangda Tan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-5734) OrgQueue for easy CapacityScheduler queue configuration management
Date Thu, 08 Dec 2016 22:23:59 GMT

    [ https://issues.apache.org/jira/browse/YARN-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15733575#comment-15733575 ]

Wangda Tan commented on YARN-5734:

Thanks [~jhung] / [~mshen] / [~zhouyejoe] / [~zhz] for pushing this forward.

A couple of questions regarding the design:

*1) How to handle bad configuration update?*

The existing design updates the config first, and then notifies the scheduler to do the update. But how do we avoid update failures? IIUC, PluggablePolicy is added to validate the config, but does that mean we have to duplicate some validation logic from the scheduler into PluggablePolicy?

I have an idea that might simplify the overall process:

MutableConfigurationManager always maintains the latest in-use config (version=X):

a. When a queue admin requests to update some fields, it merges the latest in-use config and the newly updated fields into a new configuration proposal (version=X+1). By using ConfigurationProvider, it can get either a new CapacitySchedulerConfiguration (CS) or a new AllocationConfiguration.
b. Then it calls the scheduler.reinitialize(...) API, and the scheduler uses exactly the same logic to validate the configuration (including CS#parseQueue, etc.).
c. If b succeeds, write the ver=X+1 config to the state store and respond to the client that the operation succeeded. The latest in-use config is updated to X+1.
d. If b fails, report the failure to the client, and the newly updated fields are simply discarded.
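The steps above could be sketched roughly like this (the class, the merge/validate helpers, and the in-memory state store are hypothetical stand-ins for the real YARN components, not actual APIs):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the validate-then-persist update flow from steps a-d above.
// All names here are illustrative; validate() stands in for
// scheduler.reinitialize(...) re-running the normal parsing/validation.
public class MutableConfigManagerSketch {
    private Map<String, String> latestInUse = new HashMap<>();           // version X
    private int version = 0;
    private final Map<Integer, Map<String, String>> store = new HashMap<>(); // state-store stand-in

    // a. Merge the latest in-use config with the newly updated fields.
    private Map<String, String> merge(Map<String, String> updates) {
        Map<String, String> proposal = new HashMap<>(latestInUse);
        proposal.putAll(updates);
        return proposal;
    }

    // b. Stand-in for the scheduler's own validation logic: reject bad values.
    private void validate(Map<String, String> proposal) {
        for (Map.Entry<String, String> e : proposal.entrySet()) {
            if (e.getKey().endsWith(".capacity")) {
                float cap = Float.parseFloat(e.getValue());
                if (cap < 0 || cap > 100) {
                    throw new IllegalArgumentException("bad capacity: " + cap);
                }
            }
        }
    }

    // c/d. Persist version X+1 only if validation succeeds.
    public boolean update(Map<String, String> updates) {
        Map<String, String> proposal = merge(updates);   // version X+1 proposal
        try {
            validate(proposal);
        } catch (RuntimeException ex) {
            return false;                 // d. report failure; proposal discarded
        }
        version++;                        // c. latest in-use config is now X+1...
        store.put(version, proposal);     //    ...and written to the state store
        latestInUse = proposal;
        return true;
    }

    public int getVersion() { return version; }
    public String get(String key) { return latestInUse.get(key); }
}
```

The key point of the sketch is that a bad proposal never reaches the store: the version only advances after validation passes.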

This proposal should still fit the existing overall architecture. The good things are: it avoids the PluggablePolicy implementation (which may require duplicating queue config validation logic), and it avoids writing a bad config to the store.

*2) I think the existing design, which supports using two sources of configuration at the same time, is a little confusing. For example:*
- An admin sets up a cluster from scratch and the RM saves the xml file to the store, but the admin could continue editing capacity-scheduler.xml on disk and call rmadmin -refreshQueues. What should happen?

To me this should not be allowed:
- The existing -refreshQueues was added because, under the configuration-file based solution, the content in the file and in memory could differ; -refreshQueues is a way to sync the two.
- On the other hand, the store-based solution doesn't need the refreshQueues CLI at all, because the content in the store and in memory should always be in sync.

So I would prefer to add an option to yarn-site.xml to explicitly specify which config source the scheduler will use. If the file-based solution is specified, no dynamic queue update operations will be allowed. If the store-based solution is specified, the refreshQueues CLI will not be allowed.
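Such a switch could look like the following in yarn-site.xml (the property name and values here are purely illustrative assumptions, not an existing YARN property):

```xml
<!-- Hypothetical property: selects the scheduler's configuration source.
     "file"  : read capacity-scheduler.xml from disk; -refreshQueues allowed,
               dynamic update APIs disabled.
     "store" : read config from the state store; dynamic update APIs allowed,
               -refreshQueues disabled. -->
<property>
  <name>yarn.scheduler.configuration.source</name>
  <value>store</value>
</property>
```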

If an admin wants to load the configuration from an xml file while setting up the cluster, or wants to switch from xml-file based config to store-based config, we can provide a CLI to load an XML file and save it to the store.


> OrgQueue for easy CapacityScheduler queue configuration management
> ------------------------------------------------------------------
>                 Key: YARN-5734
>                 URL: https://issues.apache.org/jira/browse/YARN-5734
>             Project: Hadoop YARN
>          Issue Type: New Feature
>            Reporter: Min Shen
>            Assignee: Min Shen
>         Attachments: OrgQueue_API-Based_Config_Management_v1.pdf, OrgQueue_Design_v0.pdf
> The current xml-based configuration mechanism in CapacityScheduler makes it very inconvenient to apply any changes to the queue configurations. We see 2 main drawbacks in the file-based configuration mechanism:
> # It makes it very inconvenient to automate queue configuration updates. For example, in our cluster setup, we leverage the queue mapping feature from YARN-2411 to route users to their dedicated organization queues. It can be extremely cumbersome to keep updating the config file to manage the very dynamic mapping between users and organizations.
> # Even if a user has admin permission on one specific queue, that user is unable to make any queue configuration changes such as resizing the subqueues, changing queue ACLs, or creating new queues. All these operations need to be performed in a centralized manner by the cluster administrator.
> With these current limitations, we realized the need for a more flexible configuration mechanism that allows queue configurations to be stored and managed more dynamically. We developed the feature internally at LinkedIn; it introduces the concept of MutableConfigurationProvider. What it essentially does is provide a set of configuration mutation APIs that allow queue configurations to be updated externally through a set of REST APIs. When performing queue configuration changes, the queue ACLs will be honored, which means only queue administrators can make configuration changes to a given queue. MutableConfigurationProvider is implemented as a pluggable interface, and we have one implementation of this interface which is based on the Derby embedded database.
> This feature has been deployed on LinkedIn's Hadoop cluster for a year now, and has gone through several iterations of gathering feedback from users and improving accordingly. With this feature, cluster administrators are able to automate many of the queue configuration management tasks, such as setting queue capacities to adjust cluster resources between queues based on established resource consumption patterns, or updating the user-to-queue mappings. We have attached our design documentation to this ticket and would like to receive feedback from the community regarding how to best integrate it with the latest version of YARN.
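As a rough illustration, the pluggable MutableConfigurationProvider described above might look something like this (the method names and signatures are assumptions for illustration, not the actual LinkedIn implementation):

```java
import java.util.Map;

// Illustrative sketch of a pluggable mutable-configuration interface; the
// method names below are assumptions, not the actual YARN/LinkedIn API.
public interface MutableConfigurationProviderSketch {

    // Apply a batch of key/value mutations on behalf of `user`.
    // Implementations are expected to check the queue ACLs first, so that
    // only administrators of the affected queue can change it.
    void mutateConfiguration(String user, Map<String, String> updates)
            throws SecurityException;

    // Return the current effective configuration.
    Map<String, String> getConfiguration();
}
```

A REST endpoint would then translate HTTP requests into mutateConfiguration calls, with the backing store (e.g. embedded Derby) hidden behind the implementation.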

This message was sent by Atlassian JIRA

