hadoop-hdfs-user mailing list archives

From ch huang <justlo...@gmail.com>
Subject Re: issue about capacity scheduler
Date Thu, 05 Dec 2013 00:56:39 GMT
If I have 40 GB of cluster memory and
"yarn.scheduler.capacity.maximum-am-resource-percent" is set to 0.1, does
that mean that when I launch an ApplicationMaster I need to allocate 4 GB to
it? If so, why does increasing this value cause more ApplicationMasters to
run concurrently, instead of fewer?
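
To illustrate the point of confusion: the property caps the share of cluster resources available to all ApplicationMasters combined, not the size allocated to a single AM, which is why raising it lets more AMs run at once. A minimal sketch of that arithmetic, assuming an illustrative per-AM container size of 1 GB (a value not stated anywhere in this thread):

```python
# Hedged sketch of how yarn.scheduler.capacity.maximum-am-resource-percent
# bounds ApplicationMaster resources. The 1 GB per-AM size is an assumption
# for illustration only.
import math

cluster_memory_gb = 40   # total cluster memory, from the question above
max_am_percent = 0.1     # maximum-am-resource-percent setting
am_size_gb = 1           # assumed memory requested per AM container

# The property caps the memory usable by ALL AMs combined, not per AM.
am_pool_gb = cluster_memory_gb * max_am_percent           # 4.0 GB total pool
max_concurrent_ams = math.floor(am_pool_gb / am_size_gb)  # 4 AMs at once

print(am_pool_gb, max_concurrent_ams)  # -> 4.0 4
```

With a larger percent (say 0.2) the pool doubles to 8 GB, so twice as many 1 GB AMs fit, which matches the observed behavior.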

On Thu, Dec 5, 2013 at 5:04 AM, Jian He <jhe@hortonworks.com> wrote:

> You can probably try increasing
> "yarn.scheduler.capacity.maximum-am-resource-percent".
> This controls the maximum number of concurrently running AMs.
> Thanks,
> Jian
> On Wed, Dec 4, 2013 at 1:33 AM, ch huang <justlooks@gmail.com> wrote:
>> hi, maillist:
>>                  I use the YARN framework with the capacity scheduler, and
>> I have two queues: one for Hive and the other for big MR jobs.
>> The Hive queue works fine, because Hive tasks are very fast. But suppose
>> user A submits two big MR jobs: the first big job eats all the resources
>> belonging to the queue, so the other big MR job has to wait until the
>> first job finishes. How can I let the same user's MR jobs run in
>> parallel?
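
A hedged sketch of what the relevant part of capacity-scheduler.xml might look like for the setup described above; the queue names (hive, batch) and all capacity values are assumptions for illustration, not taken from this thread:

```xml
<configuration>
  <!-- Share of cluster resources that all ApplicationMasters combined
       may use; raising it allows more AMs (i.e. more concurrent jobs).
       0.2 here is an illustrative value, not a recommendation. -->
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.2</value>
  </property>

  <!-- Two queues as described in the question; names and capacity
       percentages are assumed for illustration. -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>hive,batch</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.hive.capacity</name>
    <value>30</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.batch.capacity</name>
    <value>70</value>
  </property>
</configuration>
```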
