hadoop-common-user mailing list archives

From Marc Sturlese <marc.sturl...@gmail.com>
Subject Dealing with Jobs with different memory and slots requirements
Date Tue, 16 Nov 2010 00:29:07 GMT

I have a Hadoop test cluster (12 nodes) and I am running several different MapReduce
jobs. These jobs are executed sequentially, as the input of each one is the
output of the previous one.
I am wondering if there is a way to manage the memory of the nodes per job.
Some jobs use all the reduce slots of my cluster but not much memory per task;
those scale well. Others don't use all the reduce slots (and can't be
parallelized any further) and would be much faster if I could assign more
memory to them. The only way I see to do something like that is to shut down
the cluster, change the nodes' configuration, and start it up again, which is
pretty dirty...
It would be good if, in the same cluster, I could have some nodes with fewer
reduce slots but more memory per slot, and I could tell a job to use those
nodes... but I don't think that's possible.
Maybe I am not dealing with the problem in the right way... Any suggestion
or advice?
Thanks in advance 
