hadoop-common-user mailing list archives

From Leo Leung <lle...@ddn.com>
Subject RE: Question on MapReduce
Date Fri, 11 May 2012 17:48:32 GMT
Nope - you'll need to tune the config on that specific super node to give it more M/R slots (this is for 1.0.x).
That does not mean the JobTracker will be eager to stuff that super node with all the M/R tasks at hand.
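
For example (a rough sketch - the values below are just placeholders for a 4-core box; the right numbers depend on your memory and workload), the per-TaskTracker slot counts live in mapred-site.xml on that node:

  <!-- max map slots on this TaskTracker; illustrative value for a 4-core node -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <!-- max reduce slots on this TaskTracker; illustrative value only -->
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>

The TaskTracker only reads these at startup, so restart it on that node after the change.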

It still goes through the scheduler; the Capacity Scheduler is most likely what you have (check your config).
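
To check which scheduler you're running (assuming 1.0.x), look at mapred.jobtracker.taskScheduler in mapred-site.xml on the JobTracker - something like:

  <!-- which scheduler the JobTracker uses; this value selects the Capacity Scheduler -->
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
  </property>

If the property is absent you're on the default FIFO scheduler (org.apache.hadoop.mapred.JobQueueTaskScheduler).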

IMO, if the data locality is not going to be there, your cluster is going to suffer from network I/O.


-----Original Message-----
From: Satheesh Kumar [mailto:nkseam@gmail.com] 
Sent: Friday, May 11, 2012 9:51 AM
To: common-user@hadoop.apache.org
Subject: Question on MapReduce

Hi,

I am a newbie on Hadoop and have a quick question on optimal compute vs.
storage resources for MapReduce.

If I have a multiprocessor node with 4 processors, will Hadoop schedule a higher number of Map or Reduce tasks on that system than on a uni-processor system? In other words, does Hadoop detect denser systems and schedule more tasks on multiprocessor systems?

If yes, does that imply it makes sense to attach higher-capacity storage, to hold a larger number of blocks, on systems with denser compute?

Any insights will be very useful.

Thanks,
Satheesh
