hadoop-hdfs-user mailing list archives

From Jian He <...@hortonworks.com>
Subject Re: issue about capacity scheduler
Date Wed, 04 Dec 2013 21:04:11 GMT
You can probably try increasing
"yarn.scheduler.capacity.maximum-am-resource-percent".
This controls the maximum fraction of cluster resources that can be used to
run ApplicationMasters, and therefore how many AMs can run concurrently.
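For reference, a minimal sketch of the setting in capacity-scheduler.xml (the value 0.5 is only an illustration; the default is 0.1, and it can also be set per queue):

```xml
<!-- capacity-scheduler.xml: example values, not defaults -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <!-- Fraction of cluster resources that may be used by ApplicationMasters.
       Raising it from the default 0.1 lets more applications (and hence
       more AMs) run concurrently within a queue. -->
  <value>0.5</value>
</property>
```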

Thanks,
Jian


On Wed, Dec 4, 2013 at 1:33 AM, ch huang <justlooks@gmail.com> wrote:

> hi, maillist:
>                  I use the YARN framework with the capacity scheduler, and I
> have two queues: one for Hive and the other for big MR jobs.
> The Hive queue works fine, because Hive tasks are very fast. But suppose
> user A submits two big MR jobs: the first big job eats
> all the resources belonging to the queue, so the other big MR job has to
> wait until the first job finishes. How can I let the same user's MR jobs
> run in parallel?
>

