hadoop-mapreduce-user mailing list archives

From Jakub Stransky <stransky...@gmail.com>
Subject Re: How to limit the number of containers requested by a pig script?
Date Tue, 21 Oct 2014 06:42:35 GMT
Hello,

As far as I understand, you cannot directly control the number of mappers. The
number of reducers you can control via the PARALLEL keyword. The number of
containers on a node is determined by a combination of settings:
yarn.nodemanager.resource.memory-mb, which is set at the cluster level, and the
per-job properties mapreduce.map.memory.mb and mapreduce.reduce.memory.mb,
which can be overridden from within your script.
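As a rough sketch, these settings could be applied at the top of a Pig script
like this (the memory values, relation names, and paths here are only
placeholders for illustration):

```pig
-- Per-job container memory in MB (example values, tune for your cluster)
SET mapreduce.map.memory.mb 2048;
SET mapreduce.reduce.memory.mb 4096;

-- Default reducer count for all blocking operators (GROUP, JOIN, ORDER, ...)
SET default_parallel 10;

logs = LOAD 'input/logs' AS (user:chararray, bytes:long);

-- PARALLEL on a single operator overrides default_parallel for that step
grouped = GROUP logs BY user PARALLEL 5;
totals  = FOREACH grouped GENERATE group, SUM(logs.bytes);

STORE totals INTO 'output/totals';
```

Note that default_parallel and PARALLEL only affect reduce-side parallelism;
the number of map tasks is still driven by the input splits.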

Hope this helps

On 21 October 2014 07:31, Sunil S Nandihalli <sunil.nandihalli@gmail.com>
wrote:

> Hi Everybody,
>  I would like to know how I can limit the number of concurrent containers
> requested (and used, of course) by my pig-script (not as a yarn queue
> configuration or some such stuff. I want to limit it from outside on a
> per-job basis. I would ideally like to set the number in my pig-script.)
> Can I do this?
> Thanks,
> Sunil.
>



-- 
Jakub Stransky
cz.linkedin.com/in/jakubstransky
