hadoop-mapreduce-dev mailing list archives

From Arun Murthy <...@hortonworks.com>
Subject Re: max concurrent mapper/reducer in hadoop
Date Fri, 22 Jul 2011 16:50:22 GMT
Moving to mapreduce-dev@, bcc general@.

Yes, as described in the bug, the CapacityScheduler supports high-RAM
jobs, which is a better model for shared multi-tenant clusters. The
hadoop-0.20.203 release from Apache has the most current and tested
version of the CapacityScheduler.
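For readers finding this thread later: a sketch of what high-RAM job support under the CapacityScheduler looks like in the 0.20.203 line. The property names follow the 0.20.203 capacity-scheduler documentation; the memory values are illustrative assumptions, not recommendations.

```xml
<!-- mapred-site.xml: cluster-wide memory-based slot accounting.
     Values below are placeholder assumptions for an example cluster. -->
<property>
  <name>mapred.cluster.map.memory.mb</name>
  <value>2048</value>  <!-- memory represented by one map slot -->
</property>
<property>
  <name>mapred.cluster.max.map.memory.mb</name>
  <value>8192</value>  <!-- largest memory a single map task may request -->
</property>

<!-- Per-job: a "high-RAM" job requests more than one slot's worth of
     memory, and the CapacityScheduler reserves multiple slots per task. -->
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>4096</value>
</property>
```

With these example numbers, a job requesting 4096 MB per map task occupies two 2048 MB slots per task, which is how the scheduler effectively throttles concurrency for memory-heavy jobs instead of a hard per-job task cap.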

Arun

Sent from my iPhone

On Jul 22, 2011, at 9:36 AM, Liang Chenmin <chenminl@cs.cmu.edu> wrote:

> Hi all,
>    I am using the hadoop 0.20.2 CDH3 version. The old method of setting the max
> concurrent mappers/reducers in code no longer works. I saw a patch about this,
> but its current status is "Won't Fix". Is there any update on this? I
> am using the Fair Scheduler; should I use the Capacity Scheduler instead?
> https://issues.apache.org/jira/browse/HADOOP-5170
>
> Thanks,
> chenmin liang
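For those who stay on the Fair Scheduler, per-pool limits are the usual substitute for the per-job setting that HADOOP-5170 declined to add. Below is a sketch of an allocations file, assuming a CDH3-era Fair Scheduler that supports `maxMaps`/`maxReduces`; the pool name and numbers are hypothetical.

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml, referenced by mapred.fairscheduler.allocation.file.
     Pool name "etl" and the limits are hypothetical examples. -->
<allocations>
  <pool name="etl">
    <maxMaps>20</maxMaps>       <!-- at most 20 concurrent map tasks in this pool -->
    <maxReduces>5</maxReduces>  <!-- at most 5 concurrent reduce tasks -->
  </pool>
</allocations>
```

Jobs would then be submitted into the capped pool, e.g. with `-Dmapred.fairscheduler.pool=etl` on the command line.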
