hadoop-common-user mailing list archives

From oj...@doc.ic.ac.uk
Subject Re: Error reporting from map function
Date Tue, 31 Jul 2007 14:36:23 GMT
Well, I don't think it will be too much of a problem for me, since I'll only
be running this one type of job. The problem I have is that I can only
throw IOExceptions out of the map function, so if a task fails for some
reason other than my numerical calculation error, I have no way of knowing
which it was. I'd like to retry if it's a communication problem, but
terminate if it's a calculation problem within my function.
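The closest I've come is tagging the failure in the exception itself,
something like this (just a sketch against the old org.apache.hadoop.mapred
API; FatalCalculationException and compute() are names I made up):

    import java.io.IOException;

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class CalcMapper extends MapReduceBase implements Mapper {

        // Still an IOException as far as the framework is concerned,
        // but the subclass name shows up in the task logs, so it's at
        // least possible to tell a deterministic calculation failure
        // from a transient communication one after the fact.
        public static class FatalCalculationException extends IOException {
            public FatalCalculationException(String msg) { super(msg); }
        }

        public void map(WritableComparable key, Writable value,
                        OutputCollector output, Reporter reporter)
                throws IOException {
            try {
                double result = compute(((Text) value).toString());
                output.collect(key, new Text(Double.toString(result)));
            } catch (ArithmeticException e) {
                // Deterministic numerical failure: retrying won't help,
                // so wrap it in the marker type and let the task die.
                throw new FatalCalculationException("calculation failed: " + e);
            }
            // IOExceptions from DFS/communication propagate unchanged
            // and the framework retries the task as usual.
        }

        // Stand-in for the real numerical code; integer division by
        // zero throws ArithmeticException.
        private double compute(String input) {
            long n = Long.parseLong(input.trim());
            return 1000 / n;
        }
    }

That at least makes the two failure modes distinguishable in the task
logs, but it doesn't change the retry behaviour: the JobTracker re-runs
the failed task up to the configured max attempts either way.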

I'm getting the feeling this isn't possible?

Quoting Jeroen Verhagen <jeroenverhagen@gmail.com>:

> Hi,
>
> On 7/30/07, Anthony D. Urso <anthonyu@killa.net> wrote:
>> Call JobConf.setMaxMapAttempts(0) in the job conf, then throw an exception
>> when your mapper fails.  This should kill the entire job instantly, since
>> the job tracker will allow no mapper failures.
>
> Wouldn't this cause all other running and future jobs to stop
> attempting to recover from an error? Or do all jobs have copies of the
> original job conf?
>
> --
>
> regards,
>
> Jeroen
>
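For what it's worth, Anthony's suggestion would look roughly like this in
the job driver. Each submission builds its own JobConf, so I'd expect it to
affect only that one job, not other running or future jobs. (A sketch;
CalcDriver is an illustrative name, and I suspect the value wants to be 1
rather than 0, since the count seems to include the first attempt.)

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class CalcDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(CalcDriver.class);
            conf.setJobName("calc");
            conf.setMapperClass(CalcMapper.class);
            conf.setInputPath(new Path(args[0]));
            conf.setOutputPath(new Path(args[1]));

            // No retries: a single failed attempt fails the task, and
            // enough failed tasks fail the job.
            conf.setMaxMapAttempts(1);

            JobClient.runJob(conf);
        }
    }

Of course that kills the job on any failure, communication or calculation
alike, which is exactly the distinction I'm trying to make.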



