hadoop-common-dev mailing list archives

From Jason Venner <ja...@attributor.com>
Subject Re: Using pseudo-distributed operation on multiple core CPU
Date Thu, 24 Apr 2008 03:07:25 GMT
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>*2*</value>
  <description>The maximum number of map tasks that will be run
  simultaneously by a task tracker.
  </description>
</property>

<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>*2*</value>
  <description>The maximum number of reduce tasks that will be run
  simultaneously by a task tracker.
  </description>
</property>


Put the above in your hadoop-site.xml and change the *2* to the number 
of CPUs you have; that will essentially do what you are asking.
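
If it helps when picking that value, here is a minimal Java sketch (not 
part of the original reply, just the standard JDK) that prints the number 
of processors visible to the JVM, which is the number to substitute for *2*:

// CoreCount.java -- minimal sketch using only the standard JDK.
// Prints the processor count the JVM sees on this machine; use that
// value in place of *2* in the two properties above.
public class CoreCount {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available cores: " + cores);
    }
}

After editing hadoop-site.xml you will likely need to restart the 
tasktracker (e.g. stop-all.sh / start-all.sh in pseudo-distributed mode) 
for the new maximums to take effect.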

L. Mark Hall wrote:
> Sorry, I sent this earlier from the wrong account.
>
> Hello to the Hadoop Core development group,
>
> I am interested in whether or not I could use Hadoop in 
> pseudo-distributed mode to launch multiple client processes on all 
> cores of a multi-core processor and then re-assemble the results.
>
> Map-reduce is probably overkill for this kind of thing, but I have 
> very limited engineering resources and can't really start from scratch 
> with pipes or sockets or an MPI implementation.  If Hadoop can be 
> configured to do this out of the box, that would be very useful.  I am 
> also dumping my results into a Java database (H2), so it would be nice 
> to keep the entire application in Java.
>
> I would greatly appreciate any information that could be provided.
>
> Mark
>
> LMH_medchemist

-- 
Jason Venner
Attributor - Publish with Confidence <http://www.attributor.com/>
Attributor is hiring Hadoop Wranglers and coding wizards; contact us if 
interested.
