hadoop-common-user mailing list archives

From oj...@doc.ic.ac.uk
Subject Re: Error reporting from map function
Date Mon, 30 Jul 2007 23:09:27 GMT
Thanks Anthony, it's good to know it can be done! However, I was hoping
to be able to report the numerical error from my map function. With the
approach you suggest, would there be any way to access the exception
thrown? I'm running the map/reduce job from a GUI, so I would rather
have an error box come up than just have an exception appear on the
command line. I'd also like to be able to differentiate between a job
that fails because of this numerical error in the map task and a job
that fails because, say, the namenode crashes.
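
One way to get both things (a sketch only, not something from this thread): have the mapper increment a custom counter just before throwing, submit the job asynchronously with JobClient.submitJob() instead of the blocking runJob(), and poll the returned RunningJob from the GUI. After the job finishes, a non-zero counter distinguishes the numerical failure from an infrastructure failure such as a namenode crash. This assumes a Hadoop release whose RunningJob exposes getCounters(); the ErrorCounter enum and showErrorBox() helper are made up for illustration.

```java
// Sketch only: assumes the old org.apache.hadoop.mapred API and a
// release where RunningJob exposes getCounters().
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class GuiJobRunner {

    // Hypothetical counter, shared with the mapper, which would call
    // reporter.incrCounter(ErrorCounter.NUMERICAL_ERROR, 1) before throwing.
    public enum ErrorCounter { NUMERICAL_ERROR }

    public static void runFromGui(JobConf conf) throws Exception {
        JobClient client = new JobClient(conf);
        RunningJob job = client.submitJob(conf);  // non-blocking, unlike runJob()

        while (!job.isComplete()) {
            Thread.sleep(1000);  // poll; a GUI could update a progress bar here
        }

        if (!job.isSuccessful()) {
            long numErrors =
                job.getCounters().getCounter(ErrorCounter.NUMERICAL_ERROR);
            if (numErrors > 0) {
                showErrorBox("Map task hit a numerical error");
            } else {
                showErrorBox("Job failed for another reason (e.g. namenode crash)");
            }
        }
    }

    // Stand-in for a real GUI dialog.
    private static void showErrorBox(String msg) {
        System.err.println(msg);
    }
}
```

Because submitJob() returns immediately, the polling loop can run on a background thread so the GUI stays responsive.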


Quoting "Anthony D. Urso" <anthonyu@killa.net>:

> Call JobConf.setMaxMapAttempts(0) in the job conf, then throw an exception
> when your mapper fails.  This should kill the entire job instantly, since
> the job tracker will allow no mapper failures.
> Cheers,
> Anthony
> On Mon, Jul 30, 2007 at 09:42:09PM +0100, ojh06@doc.ic.ac.uk wrote:
>> Hi,
>> Apologies for yet another question from me, but here goes!
>> I've written a map task that will on occasion not compute the correct
>> result. This can easily be detected, at which point I'd like the map
>> task to report the error and terminate the entire map/reduce job. Does
>> anyone know of a way I can do this?
>> I've been looking around the archives and the API, and the only thing
>> that comes close is the Reporter class, but I think that only reports
>> status and doesn't actually allow control of the job?
>> Any help much appreciated as ever,
>> Cheers,
>> Ollie
> --
>  Au
>  PGP Key ID: 0x385B44CB
>  Fingerprint: 9E9E B116 DB2C D734 C090  E72F 43A0 95C4 385B 44CB
>     "Maximus vero fugiens a quodam Urso, milite Romano, interemptus est"
>     ["Maximus, however, while fleeing, was slain by a certain Ursus, a Roman soldier"]
>                                                - Getica 235
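
Anthony's suggestion might look like the following (a sketch against the old mapred API, with a hypothetical compute() standing in for the real numerical routine). One caveat worth hedging: on some Hadoop versions the setting counts total attempts rather than retries, so 1 rather than 0 may be the value that disables re-execution.

```java
// Sketch of the fail-fast approach: cap map attempts and throw on a bad result.
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class FailFastMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        double result = compute(value);  // hypothetical numerical routine
        if (Double.isNaN(result)) {
            // With retries disabled, failing this attempt takes the job down.
            throw new IOException("numerical error on record " + key);
        }
        output.collect(new Text("ok"), new Text(Double.toString(result)));
    }

    private double compute(Text value) { return 0.0; }  // placeholder

    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(FailFastMapper.class);
        conf.setMapperClass(FailFastMapper.class);
        conf.setMaxMapAttempts(0);  // per Anthony; some versions need 1 (attempts, not retries)
        JobClient.runJob(conf);     // blocking; throws if the job fails
    }
}
```

With the blocking runJob(), the thrown exception surfaces on the submitting side, which is the behavior Ollie observed on the command line; a GUI would need to catch it there.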
