hadoop-mapreduce-user mailing list archives

From Girish Lingappa <glinga...@pivotal.io>
Subject Re: Memory consumption by AM
Date Thu, 23 Oct 2014 17:08:28 GMT

If you are using 2.2, one option is to limit the number of concurrent
applications that get launched by setting a property in the scheduler
configuration: look for yarn.scheduler.capacity.maximum-applications.
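
For reference, a minimal sketch of how that property could look in
capacity-scheduler.xml; the value of 20 here is purely illustrative
(the shipped default is far higher), so tune it to your cluster size:

```xml
<!-- capacity-scheduler.xml: caps the total number of applications
     (pending + running) the Capacity Scheduler will accept.
     The value 20 is an illustrative assumption, not a recommendation. -->
<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>20</value>
</property>
```

After changing this you would refresh the queues (e.g. `yarn rmadmin
-refreshQueues`) rather than restart the ResourceManager.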

You will find a similar setting for the fair scheduler as well:
maxRunningApps.
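
If you were on the Fair Scheduler instead, a comparable cap can be set
per queue in the allocation file; the queue name and limit below are
illustrative assumptions:

```xml
<!-- fair-scheduler.xml (allocation file): per-queue cap on concurrently
     running applications. "default" and 10 are illustrative values. -->
<allocations>
  <queue name="default">
    <maxRunningApps>10</maxRunningApps>
  </queue>
</allocations>
```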

On Thu, Oct 23, 2014 at 12:46 AM, Jakub Stransky <stransky.ja@gmail.com> wrote:

> Hello experienced users,
> we are new to Hadoop and are therefore running a nearly default
> configuration, including the scheduler - which I guess is the Capacity
> Scheduler by default.
> Lately we ran into the following behaviour on the cluster. We use
> Apache Oozie to submit various data pipelines, and we have a single
> customer for our cluster. Several jobs were submitted, so YARN
> allocated a container to run an AM for each of them, but after that
> allocation there were not enough remaining resources to run any
> mappers/reducers, so the cluster was effectively deadlocked: all
> resources were consumed by AMs, and all of them were waiting for
> resources.
> We are using HDP 2.0, hence Hadoop 2.2.0. Is there any way to prevent
> this from happening?
> Thanks for suggestions
> Jakub
