hadoop-hdfs-user mailing list archives

From German Florez-Larrahondo <german...@samsung.com>
Subject RE: Fault tolerance and Speculative Execution
Date Thu, 18 Jul 2013 17:37:38 GMT
Also, a simple explanation of how speculative execution works and what the
key settings are can be found here:

In addition, there used to be other parameters (slownodethreshold,
slowtaskthreshold, and speculativecap),
but I believe they were deprecated...
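For reference, in the Hadoop 1.x line those throttles appeared roughly as the
following mapred-site.xml properties. This is a sketch from memory of the 1.x
mapred-default configuration, not verified against a live cluster; check your
release's mapred-default.xml for the exact names and defaults.

```xml
<!-- Sketch of the speculative-execution throttles mentioned above,
     as they appeared in Hadoop 1.x; values are the commonly cited
     defaults and should be verified against your release. -->
<property>
  <name>mapreduce.job.speculative.speculativecap</name>
  <value>0.1</value> <!-- max fraction of running tasks speculated at once -->
</property>
<property>
  <name>mapreduce.job.speculative.slowtaskthreshold</name>
  <value>1.0</value> <!-- std. deviations slower than the mean before a task counts as slow -->
</property>
<property>
  <name>mapreduce.job.speculative.slownodethreshold</name>
  <value>1.0</value> <!-- std. deviations slower than average before a tracker counts as slow -->
</property>
```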

-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com] 
Sent: Thursday, July 18, 2013 12:11 PM
To: <user@hadoop.apache.org>
Subject: Re: Fault tolerance and Speculative Execution

What you describe in the first paragraph is not true.

Speculative execution API toggles are listed in the documentation:
and in the mapred-default page in property form:
http://hadoop.apache.org/docs/stable/mapred-default.html. Speculative
execution is enabled by default.
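For Hadoop 1.x the property-form toggles look like the following. A minimal
sketch, assuming the 1.x property names from mapred-default (later releases
renamed them to mapreduce.map.speculative and mapreduce.reduce.speculative):

```xml
<!-- mapred-site.xml (or per-job -D overrides): disable speculative
     execution for map and reduce tasks; both default to true in 1.x. -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```

The same toggles are also available programmatically on the old API's
JobConf, e.g. conf.setMapSpeculativeExecution(false) and
conf.setReduceSpeculativeExecution(false).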

On Thu, Jul 18, 2013 at 10:32 PM, Sundeep Kambhampati
<kambhamp@cse.ohio-state.edu> wrote:
> Hi all,
> Is it true that Hadoop 'always' starts the same map task multiple times
> in order to be fault tolerant? I.e., the same task is launched on several
> machines so that even if a node fails the task is still
> available on another node, and in case no node fails the redundant task
> that finishes late is killed.
> If it is true, how can I change the configuration so that Hadoop does
> or does not do this?
> Speculative execution, on the other hand, does what I explained above
> (redundant map tasks), but only after all the map tasks have been
> scheduled: if some nodes are free, it starts redundant map tasks for
> those which are running slow. Is that always true? How do I change this
> configuration to enable/disable it?
> I am using Hadoop-1.1.2, in case the version matters.
> I would really appreciate it if someone could help me with this. Thank you.
> Regards
> Sundeep

Harsh J
