hadoop-hdfs-user mailing list archives

From "Naganarasimha G R (Naga)" <garlanaganarasi...@huawei.com>
Subject RE: Is there any way to limit the concurrent running mappers per job?
Date Wed, 22 Apr 2015 11:37:37 GMT
Hi Zhe,

AFAIK there is no explicit requirement to support MR clients limiting the number of containers/tasks
for a given job at any given point in time.
In fact, as explained earlier, an admin can control this through queue capacity, max capacity, and
user-specific capacity configurations.
Is there a particular use case where you want to control this from the client side instead of through
queue configurations?
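
For example, with the CapacityScheduler an admin can cap a queue so that all jobs submitted to it together never exceed a fixed share of the cluster. A minimal sketch; the queue name "limited" and the 20% figures are illustrative, not taken from this thread:

```xml
<!-- capacity-scheduler.xml: cap the "limited" queue so jobs submitted
     to it can never use more than 20% of the cluster, even when idle
     capacity is available elsewhere. Queue name/values are examples. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,limited</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.limited.capacity</name>
    <value>20</value>
  </property>
  <property>
    <!-- hard cap: prevents the queue from borrowing idle capacity -->
    <name>yarn.scheduler.capacity.root.limited.maximum-capacity</name>
    <value>20</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>80</value>
  </property>
</configuration>
```

A job would then be submitted with -Dmapreduce.job.queuename=limited. Note this caps the queue, not an individual job; two jobs in the same queue share the 20%.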

Regards,
Naga

________________________________
From: Zhe Li [allenlee.lz@gmail.com]
Sent: Wednesday, April 22, 2015 16:38
To: user@hadoop.apache.org
Subject: Re: Is there any way to limit the concurrent running mappers per job?

Thanks Naga for your reply.

Does the community have a plan to support a per-job limit in the future?

Thanks.

On Tue, Apr 21, 2015 at 3:49 PM, Naganarasimha G R (Naga) <garlanaganarasimha@huawei.com<mailto:garlanaganarasimha@huawei.com>>
wrote:
Hi Sanjeev,
YARN already supports mapping deprecated configuration names to their new equivalents, so even if "mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob"
is used it would have the same behavior.
Also note that "mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob"
is a JobTracker config, so it has no impact on YARN.
The only way to configure this is through the schedulers, by limiting the headroom of a user or application.
Please refer to
http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html &
http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/FairScheduler.html.
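
With the FairScheduler, the equivalent knob is a per-queue cap in the allocation file. A sketch; the queue name and limit values are illustrative, not from this thread:

```xml
<!-- fair-scheduler.xml allocation file: cap the queue's total resources
     and the number of concurrently running apps. Values are examples. -->
<allocations>
  <queue name="limited">
    <maxResources>20480 mb,10 vcores</maxResources>
    <maxRunningApps>2</maxRunningApps>
  </queue>
</allocations>
```

As with the CapacityScheduler, this limits a queue rather than a single job, so a dedicated queue per constrained workload is the usual workaround.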

Regards,
Naga
________________________________
From: Sanjeev Tripurari [sanjeev.tripurari@inmobi.com<mailto:sanjeev.tripurari@inmobi.com>]
Sent: Tuesday, April 21, 2015 11:54
To: user@hadoop.apache.org<mailto:user@hadoop.apache.org>
Subject: Re: Is there any way to limit the concurrent running mappers per job?

Hi,

Check if this works for you:
mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob

Some properties were renamed in the YARN implementation; see the deprecated-properties list:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/DeprecatedProperties.html

-Sanjeev



On Tue, Apr 21, 2015 at 4:32 AM, Zhe Li <allenlee.lz@gmail.com<mailto:allenlee.lz@gmail.com>>
wrote:
Hi, after upgrading to Hadoop 2 (YARN), I found that 'mapred.jobtracker.taskScheduler.maxRunningTasksPerJob'
no longer works, right?

One workaround is to use a queue to limit it, but that is not easy to control from the job submitter.
Is there any way to limit the concurrent running mappers per job?
Have there been any documents or discussions about this before?

BTW, is there any way to search this mailing list before I post a new question?

Thanks very much.


_____________________________________________________________
The information contained in this communication is intended solely for the use of the individual
or entity to whom it is addressed and others authorized to receive it. It may contain confidential
or legally privileged information. If you are not the intended recipient you are hereby notified
that any disclosure, copying, distribution or taking any action in reliance on the contents
of this information is strictly prohibited and may be unlawful. If you have received this
communication in error, please notify us immediately by responding to this email and then
delete it from your system. The firm is neither liable for the proper and complete transmission
of the information contained in this communication nor for any delay in its receipt.

