hadoop-mapreduce-user mailing list archives

From Steve Lewis <lordjoe2...@gmail.com>
Subject Any way to limit the total tasks running on a node in 0.202
Date Thu, 10 Nov 2011 03:06:27 GMT
Hadoop lets you cap the maximum number of mappers and the maximum number of
reducers running on a node, but under 0.20.2 I do not see a way to stop
mappers and reducers from running together, with the combined total
exceeding either individual limit.

I find that when my mappers are about 50% done, the system kicks off
reducers. I have raised the child JVM heap in mapred.child.java.opts because
I have been hitting GC limits, and the values work well when I am running
6 mappers OR 6 reducers. But when my mappers are halfway done I see
6 mappers AND 6 reducers running, and that strains the total memory on the
node.

How can I keep the total tasks on a node under control without limiting the
maximum mappers and reducers to half the total I want?
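For context, a sketch of the per-slot settings being described, as they appear in mapred-site.xml in 0.20.2 (the slot counts and heap size here are illustrative, not the poster's actual values):

```xml
<!-- mapred-site.xml: per-TaskTracker limits (illustrative values) -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>6</value>   <!-- cap on concurrent map tasks per node -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>6</value>   <!-- cap on concurrent reduce tasks per node -->
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>   <!-- heap for each child task JVM -->
</property>
```

Note that these two caps are independent, so a node can run up to 6 maps and 6 reduces at the same time; there is no combined-total slot property in 0.20.2, which is exactly the problem described. A related knob is mapred.reduce.slowstart.completed.maps, which delays reducer launch until a given fraction of maps have finished, though it only defers the overlap rather than capping the total.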

Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com
