hadoop-yarn-issues mailing list archives

From "Billie Rinaldi (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API
Date Thu, 11 Jan 2018 18:59:00 GMT

    [ https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16322749#comment-16322749

Billie Rinaldi commented on YARN-7605:

bq. Back to the comment about the getStatus structure, do you still want the returned value of
a stopped service to be partial information, or similar to a running application?

bq. If HDFS instability can cause RM instability, that sort of cluster downtime issue sounds more
critical to me than this partial information. Because this endpoint can be called very frequently
while the app is accepted (clients like to poll every second or so, waiting for the app to run),
that essentially means the RM will hit HDFS for every getStatus call before the app is running.
Unless a concrete use case calls for complete information while the app is accepted or completed,
I would prefer adding this later, once a proper caching implementation is built. Just my opinion, Gour
Saha , Billie Rinaldi ?

I've also seen the REST endpoint hit frequently to determine when the app has gone from accepted
to running. I think the problem here is that the status call returns a combined spec
+ status object. We've discussed moving towards status returning just a status object (I see
this including service and component states and container information) and having a separate
call that would retrieve the spec. This seems like it would solve both issues, because the
HDFS (or whatever storage) retrieval could be made for the spec retrieval only. I would
rather move towards removing unneeded information from the AM status retrieval than towards
adding unneeded information to the nonexistent-AM status retrieval.
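The split sketched above could look like the following minimal example. The record shapes and field names here are illustrative assumptions, not the actual YARN services API:

```python
import json

# Illustrative in-memory service record. In a real AM, "status" would be
# served from the AM's own state, while "spec" could live in HDFS (or
# another store) and be fetched only when explicitly requested.
SERVICE = {
    "spec": {
        "name": "sleeper",
        "components": [{"name": "sleep", "number_of_containers": 2}],
    },
    "status": {
        "state": "STABLE",
        "components": [{"name": "sleep", "state": "STABLE"}],
        "containers": ["container_e01_0001_01_000002"],
    },
}

def get_status(service_name):
    """Status-only call: cheap, safe for clients to poll every second."""
    return json.dumps(SERVICE["status"])

def get_spec(service_name):
    """Spec call: may involve a storage read, so clients invoke it rarely."""
    return json.dumps(SERVICE["spec"])
```

With this split, the frequent accepted-to-running polls never touch storage, which addresses the HDFS-load concern quoted above.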

> Implement doAs for Api Service REST API
> ---------------------------------------
>                 Key: YARN-7605
>                 URL: https://issues.apache.org/jira/browse/YARN-7605
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>             Fix For: yarn-native-services
>         Attachments: YARN-7605.001.patch, YARN-7605.004.patch, YARN-7605.005.patch, YARN-7605.006.patch,
YARN-7605.007.patch, YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, YARN-7605.011.patch,
YARN-7605.012.patch, YARN-7605.013.patch, YARN-7605.014.patch, YARN-7605.015.patch
> In YARN-7540, all client entry points for the API service were centralized to use the REST API
instead of making direct file system and Resource Manager RPC calls.  This change helped
centralize YARN metadata under the yarn user, instead of crawling through every user's
home directory to find metadata.  The next step is to make sure "doAs" calls work properly
for the API Service.  The metadata is stored by the yarn user, but the actual workload still needs
to be performed as the end user, hence the API service must authenticate the end user's Kerberos
credential and perform a doAs call when requesting containers via ServiceClient.
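As a rough illustration of the doAs flow described in the issue, the caller authenticates with its own Kerberos credential while naming the end user in a query parameter, and the server then performs the work as that user. The host, path, and parameter name below are assumptions for illustration, not the patch's actual wire format:

```python
from urllib.parse import urlencode

# Hypothetical API service base URL; the real endpoint is defined by the
# YARN native services REST API.
API_BASE = "http://rm-host:8088/app/v1/services"

def build_doas_url(service_name, end_user):
    # The caller authenticates as itself (e.g. via SPNEGO); the server,
    # after verifying the credential, requests containers as `end_user`
    # via a doAs call in ServiceClient. The "doAs" parameter name is an
    # assumption here.
    return "%s/%s?%s" % (API_BASE, service_name, urlencode({"doAs": end_user}))
```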

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org
