hadoop-yarn-issues mailing list archives

From "Craig Welch (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-1680) availableResources sent to applicationMaster in heartbeat should exclude blacklistedNodes free memory.
Date Wed, 06 May 2015 01:49:01 GMT

    [ https://issues.apache.org/jira/browse/YARN-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529738#comment-14529738 ]

Craig Welch commented on YARN-1680:
-----------------------------------

bq. I think we should stop adding such application-specific logic into RM, application can
have very varied resource request, for example

On the whole I think that's a reasonable perspective, but I'm not sure this is the right place
to "draw the line in the sand".  It isn't clear to me that this will be particularly costly,
and the deadlock issues are quite real.  Further, it seems to me that node-label-specific calculations
are very much of the same cloth as this - the same type of problem and cost - and they are
in the scheduler.  So why not this also?  And if this doesn't belong in the scheduler, I'd
suggest that node-label-specific headroom logic probably doesn't belong there either.

bq. In short term, treat the headroom just a hint, like what Karthik Kambatla mentioned

I think that's a nice idea, but it's no substitute for accurate headroom.  It may keep
these cases from leading to deadlock, but there will be a cost: the job will be slowed as it
reacts after the fact to allocation failures, so job completion will be delayed.  Better than a
deadlock, but not as good as if it had received accurate headroom and could have avoided the
reactive delay.  There may be other issues with that change as well; I'm not sure it should
be undertaken lightly or that we should take a dependency on it to solve this issue.

bq. In longer term, support headroom calculation in client-side utils, maybe AMRMClient is
a good place.

Or at least divide the headroom calculation between the scheduler and "elsewhere".  This brings
back the earlier question: do we have a good place to do this which is shared among the
AM implementations, so that we don't end up with duplicated logic?  I'm still skeptical that this
is really the right time and place to begin making that transition - but assuming we want
to at some point, it's worth seeing if we have a good place to do it.  Is it AMRMClient (is
this the same as the AMRMClientLibrary for purposes of this discussion?)



> availableResources sent to applicationMaster in heartbeat should exclude blacklistedNodes free memory.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: YARN-1680
>                 URL: https://issues.apache.org/jira/browse/YARN-1680
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacityscheduler
>    Affects Versions: 2.2.0, 2.3.0
>         Environment: SuSE 11 SP2 + Hadoop-2.3 
>            Reporter: Rohith
>            Assignee: Craig Welch
>         Attachments: YARN-1680-WIP.patch, YARN-1680-v2.patch, YARN-1680-v2.patch, YARN-1680.patch
>
>
> There are 4 NodeManagers with 8GB each; total cluster capacity is 32GB. Cluster slow start
is set to 1.
> A running job's reducer tasks occupy 29GB of the cluster. One NodeManager (NM-4) became
unstable (3 map tasks got killed), so the MRAppMaster blacklisted the unstable NodeManager (NM-4). All reducer
tasks are now running in the cluster.
> The MRAppMaster does not preempt the reducers because the headroom used in the reducer-preemption
calculation includes the blacklisted nodes' memory. This makes the job hang forever (the ResourceManager
does not assign any new containers on blacklisted nodes, but the availableResources it returns is
computed from total cluster free memory).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
