hadoop-yarn-issues mailing list archives

From "Junping Du (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3039) [Aggregator wireup] Implement ATS app-aggregator service discovery
Date Thu, 12 Mar 2015 18:51:39 GMT

    [ https://issues.apache.org/jira/browse/YARN-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359163#comment-14359163
] 

Junping Du commented on YARN-3039:
----------------------------------

Thanks [~sjlee0] for the review and many good comments! I addressed most of them in the v5 patch and
will post it soon.
bq. I see some methods are marked as Stable (e.g. AggregatorNodemanagerProtocol), but I feel
that’s bit premature. Can we mark these still unstable or evolving? Note that at least at
this point even the class names can change.
Nice catch! Let's mark them as evolving instead.

bq. While we’re on the API annotations, I notice that the annotations are on methods rather
than classes themselves. I usually set them on the classes with the understanding that the
entire class is unstable, for example. Which is a more common practice?
We can have a class-level annotation that acts as the default audience and stability for every
method, and each method can carry its own annotation to override the class-level one. In most
cases the class-level annotation is more public and stable than those of individual methods,
because the class is the first-class contract with end users and other components (otherwise they
would have concerns about using it). For example, if we need to add a new, not-yet-stable API to a
protocol class marked as stable, we shouldn't regress the whole class from stable to evolving or
unstable; we can instead mark just the new method as unstable or evolving. Make sense?
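As a minimal sketch of that convention, assuming Hadoop's InterfaceAudience/InterfaceStability annotations (the class and method names here are made up for illustration, not taken from the patch):
{code:java}
// Hypothetical protocol, only to illustrate class-level defaults vs. method-level overrides.
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Public
@InterfaceStability.Stable
public interface ExampleProtocol {

  // Inherits the class-level Public/Stable contract.
  void establishedCall();

  // A new, not-yet-settled API: override at the method level instead of
  // downgrading the whole class from Stable.
  @InterfaceStability.Unstable
  void newExperimentalCall();
}
{code}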

bq. I feel that the default for NM_AGGREGATOR_SERVICE_THREAD_COUNT doesn't have to be as high
as 20 as the traffic will be pretty low; 5?
5 sounds good. Will update.

bq. Since createTimelineClient(ApplicationId) was introduced only on this branch, we should
be able to just replace it instead of adding a new deprecated method, no?
I didn't check the history when createTimelineClient(ApplicationId) was added. If it was introduced
only on this branch, we can simply remove it. For the other createTimelineClient() methods, we can
mark them as deprecated if they showed up in previous releases, since newInstance() is the more
standard way of doing factory work.
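As a rough sketch of that deprecation pattern (the class below is a stand-in, not the real TimelineClient):
{code:java}
// Stand-in class, only to show the shape of the change.
public abstract class ExampleTimelineClient {

  /** @deprecated appeared in earlier releases, so keep it but point callers at newInstance(). */
  @Deprecated
  public static ExampleTimelineClient createTimelineClient() {
    return newInstance();
  }

  /** Preferred factory entry point going forward. */
  public static ExampleTimelineClient newInstance() {
    return new ExampleTimelineClient() { };
  }
}
{code}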

bq. putObjects() Not sure if I understand the comment “timelineServiceAddress couldn’t
have been initialized”; can’t putObjects() be called in a steady state? If so, shouldn’t
we check if timelineServiceAddress is not null before proceeding to loop and wait for the
value to come in? Otherwise, we would introduce a 1 second latency in every put call even
in a steady state?
In a steady state there is no initialization delay, because timelineServiceAddress is already
available in the TimelineClient. The cost only applies to the first event the TimelineClient posts,
or after timelineServiceAddress gets invalidated (because of a failure or other reasons). We
designed it this way so that TimelineClient handles service discovery itself rather than making the
caller figure it out.


bq. maxRetries -> retries might be a better variable name?
Updated.

bq. It might be good to create a small helper method for polling for the timelineServiceAddress
value
You are right that we can abstract a helper method to make the logic more concise. Updated with a
pollTimelineServiceAddress() method.
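A rough sketch of what that helper could look like (the constants and field here are assumed values, not the exact TimelineClientImpl code); a steady-state put skips the loop entirely, and the wait only happens before the address is first discovered or after it has been reset:
{code:java}
// Illustrative sketch only.
public class PollSketch {

  // Written by the single updater thread, read by putters; see the volatile point below.
  private volatile String timelineServiceAddress;

  private static final long POLL_INTERVAL_MS = 1000;  // assumed value
  private static final int MAX_POLLS = 30;            // assumed value

  String pollTimelineServiceAddress() throws InterruptedException {
    int polls = 0;
    // In a steady state the address is already set, so this loop is skipped entirely.
    while (timelineServiceAddress == null && polls++ < MAX_POLLS) {
      // First post, or address reset after a failure: wait for the next heartbeat
      // to deliver the (new) aggregator address.
      Thread.sleep(POLL_INTERVAL_MS);
    }
    return timelineServiceAddress;
  }
}
{code}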

bq. Not sure if we need a while loop for needRetry; it either succeeds (at which point needRetry
becomes false and you exit normally) or it doesn’t (in which case we go into the exception
handling and we try only once to get the value). Basically I’m not sure whether this retry
code is what you meant to do?
That's actually a bug: the try-catch should be inside the while loop so we can tolerate a post
failure for a number of retries (the address could be stale and in the process of being updated by
the RM). Thanks for identifying this.
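A minimal sketch of the corrected control flow (names are illustrative), with the try-catch inside the loop:
{code:java}
import java.io.IOException;

public class RetrySketch {

  interface Poster {
    void post() throws IOException;  // may fail if the cached aggregator address is stale
  }

  static void postWithRetries(Poster poster, int retries)
      throws IOException, InterruptedException {
    boolean needRetry = true;
    while (needRetry) {
      try {
        poster.post();
        needRetry = false;           // success: leave the loop normally
      } catch (IOException e) {
        if (retries-- <= 0) {
          throw e;                   // retries exhausted, surface the failure
        }
        Thread.sleep(1000L);         // give the RM heartbeat a chance to refresh the address
      }
    }
  }
}
{code}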

bq. I think it may be enough to make timelineServiceAddress volatile instead of making getter/setter
synchronized.
Agree. Given that only one thread can update it today, volatile should be safe enough.

bq. doPostingObject() has duplicate code with putObjects(); can we consider ways to eliminate
code duplication? I know it calls different methods deep inside the implementation, but there
should be a way to reduce code duplication.
Agree. In the new patch (v5), I will abstract all the common logic into helper methods.

bq. typo: ILLEAGAL_NUMBER_MESSAGE -> ILLEGAL_NUMBER_MESSAGE, Context.java, We might want
to update the comment (or method names) a little bit. NodeManager.java, removeRegisteredAggregators()
-> removeRegisteredAggregator() (should be singular)
Updated for these comments.

bq. We need removeKnownAggregator() as well; otherwise we’d blow the NM memory. Perhaps
what we need is more like setKnownAggregators() instead of add? Essentially NM would need
to replace its knowledge of the known aggregators every time RM updates it via heartbeat,
right?
I was thinking of a shortcut where registeredAggregators go into knownAggregators directly, so
local aggregators don't have to wait for the info to come back from the RM and the NM's
TimelineClient doesn't have to check two lists. So let's keep the add logic here instead of set,
and I will include that shortcut in the v5 patch.
For removing known or registered aggregators, I was thinking of deferring the failure cases to
another JIRA (given that this patch is big enough already). However, I will add handling for the
normal case, e.g. the end of an application.
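A rough sketch of that bookkeeping, with illustrative names and String keys rather than the actual Context/NodeManager code:
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AggregatorMapsSketch {

  // appId -> aggregator service address (String keys used here only to keep the sketch small).
  private final ConcurrentMap<String, String> registeredAggregators = new ConcurrentHashMap<>();
  private final ConcurrentMap<String, String> knownAggregators = new ConcurrentHashMap<>();

  public void addRegisteredAggregator(String appId, String address) {
    registeredAggregators.put(appId, address);
    // Shortcut: a locally registered aggregator is immediately "known" as well,
    // so the NM's TimelineClient only ever checks one map.
    knownAggregators.put(appId, address);
  }

  public void removeRegisteredAggregator(String appId) {
    // Invoked on the normal application-finish path so the maps don't grow without bound.
    registeredAggregators.remove(appId);
    knownAggregators.remove(appId);
  }
}
{code}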

bq. NodeStatusUpdaterImpl.java, I’m not sure if all the add/remove***Aggregators() methods
are called at the right time. Especially, I’m not sure if we’re handling the case of removing
an aggregator (e.g. app finished). When an app finishes and the per-app aggregator is shut
down, NM needs to be notified, remove it from its registered aggregators, and let RM know
that it’s gone so that it can be removed from the RM state store as well. How is this being
handled? It’s not clear to me. At least I don’t see any calls for removeRegisteredAggregators().
As said above, I will add handling in the v5 patch that is triggered by the application-finish
event on the NM side. On the RM side, the info is already simply removed when the
finishApplicationMaster event is received.

bq. ResourceTrackerService.java (see above), Shouldn’t we handle a situation where the aggregator
is removed? updateAppAggregatorsMap() seems to deal with adding/updating an aggregator only.
See above comments.

bq. Any race condition in updateAppAggregatorsMap() when one AM (app attempt) goes down and
another AM (app attempt) comes up?
Hmm. That's a very good point. I think we should remove the aggregator info (from the NM and RM) in
the AM failure case and make sure the failover happens only after the aggregator gets cleaned up.
We have a dedicated JIRA for handling failover cases; I will add a TODO in the v5 patch.

bq. RMAppImpl.java, Would this be backward compatible from the RM state store perspective?
I don't think compatibility is a problem: ApplicationDataProto is a protobuf object, and the new
field for aggregatorAddress is optional.

bq. TimelineAggregatorsCollection.java, What value will be set as timelineRestServerBindAddress?
It appears to be just the value of TIMELINE_SERVICE_BIND_HOST. That doesn't sound right to
me, For each node, we need to get the fully qualified hostname of the node. It doesn’t look
like that’s the case here?
Another very good point. I think we need the resolved address here. Updated in the v5 patch.
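For example, a minimal sketch of resolving the node's fully qualified hostname instead of publishing the raw bind-host value (the port below is just a placeholder, not a real default):
{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolvedAddressSketch {
  public static void main(String[] args) throws UnknownHostException {
    int restServerPort = 8188;  // placeholder value
    // Resolve this node's fully qualified hostname rather than using the configured bind host.
    String host = InetAddress.getLocalHost().getCanonicalHostName();
    String timelineRestServerBindAddress = host + ":" + restServerPort;
    System.out.println(timelineRestServerBindAddress);
  }
}
{code}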

bq. Would it be a good idea to provide an RPC method to query for the updated aggregator information?
Waiting for the heartbeats is OK, but if it fails, it might be easier/better to query (e.g.
NM discovering the aggregator address).
Let's hold off on adding a new API until we figure out the failure cases later (in the other filed
JIRA). Even then, we can still make a heartbeat request, which is also an RPC call, just without
having to wait for another heartbeat interval.

> [Aggregator wireup] Implement ATS app-aggregator service discovery
> -------------------------------------------------------------------
>
>                 Key: YARN-3039
>                 URL: https://issues.apache.org/jira/browse/YARN-3039
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Sangjin Lee
>            Assignee: Junping Du
>         Attachments: Service Binding for applicationaggregator of ATS (draft).pdf, Service Discovery For Application Aggregator of ATS (v2).pdf, YARN-3039-no-test.patch, YARN-3039-v2-incomplete.patch, YARN-3039-v3-core-changes-only.patch, YARN-3039-v4.patch
>
>
> Per design in YARN-2928, implement ATS writer service discovery. This is essential for off-node clients to send writes to the right ATS writer. This should also handle the case of AM failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
