hadoop-common-user mailing list archives

From Devaraj Das <d...@yahoo-inc.com>
Subject Re: How to coordinate nodes of different computing powers in a same cluster?
Date Wed, 24 Dec 2008 10:01:38 GMT



On 12/24/08 3:20 PM, "Aaron Kimball" <aaron@cloudera.com> wrote:

> Jeremy,
> 
> A clarification: there is currently no mechanism in Hadoop to slot
> particular tasks on particular nodes. Hadoop does not take into account a
> particular node's suitability for a given task; if one node has more CPU,
> and another node has more IO, you cannot indicate that certain tasks should
> be done on the CPU-intense nodes, and others on the IO-intense nodes.
> 
> Speculative execution, though, means that any tasks which are "left behind"
> near the end of a job will be re-executed in parallel on multiple other
> "empty" nodes which are waiting for the full job to complete. Hopefully,
> it'll also pick a "correct" node for the task via this secondary random
> placement, if it didn't do so in the first apportioning of tasks. By default,
> I think map task speculation is enabled, but reduce task speculation is
> disabled.
> 
By default, speculative execution is enabled for both maps and reduces. But
yes, the current implementation of speculative execution has some shortcomings
that https://issues.apache.org/jira/browse/HADOOP-2141 is trying to address
(including trying to avoid scheduling speculative tasks on slow machines).
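
For reference, speculative execution can be toggled per job. The property
names below are the ones used by Hadoop releases of this era; this is a
sketch, and exact defaults can vary between versions:

```xml
<!-- mapred-site.xml, or set per job via JobConf -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>true</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>true</value>
</property>
```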
The other thing to note is that faster machines will execute more tasks than
slower machines when there are lots of tasks to execute, since each machine
pulls a new task from the JobTracker as soon as it finishes its current ones.
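
This pull-based behavior can be illustrated with a toy simulation (hypothetical
code, not Hadoop itself): nodes with different per-task speeds repeatedly pull
tasks from a shared queue, and the faster node naturally ends up running more
of them.

```python
import heapq

def simulate_pull_scheduling(task_count, node_speeds):
    """Toy model of pull-based scheduling: each node pulls the next task
    from a shared queue as soon as it finishes its current one.
    node_speeds maps node name -> seconds per task.
    Returns a dict of node name -> number of tasks executed."""
    # Event queue of (time the node becomes free, node name).
    events = [(0.0, name) for name in node_speeds]
    heapq.heapify(events)
    counts = {name: 0 for name in node_speeds}
    remaining = task_count
    while remaining > 0:
        t, name = heapq.heappop(events)
        counts[name] += 1          # node pulls and runs one task
        remaining -= 1
        # Node becomes free again after its per-task time elapses.
        heapq.heappush(events, (t + node_speeds[name], name))
    return counts

# A node 10x faster pulls roughly 10x the tasks when work is plentiful.
counts = simulate_pull_scheduling(110, {"fast": 1.0, "slow": 10.0})
print(counts)
```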

> - Aaron
> 
> On Wed, Dec 24, 2008 at 1:12 AM, Devaraj Das <ddas@yahoo-inc.com> wrote:
> 
>> You can enable speculative execution for your jobs.
>> 
>> 
>> On 12/24/08 10:25 AM, "Jeremy Chow" <coderplay@gmail.com> wrote:
>> 
>>> Hi list,
>>> I've come up against a scenario like this,  to finish a same task, one of
>> my
>>> hadoop cluster only needs 5 seconds, and another one needs more than 2
>>> minutes.
>>> It's a common phenomenon that will decrease the parallelism of our system
>>> due to the faster one will wait the slower one. How to coordinate those
>>> nodes of different computing powers in a same cluster?
>>> 
>>> Thanks,
>>> Jeremy
>> 
>> 
>> 


