hadoop-hdfs-user mailing list archives

From Silvina Caíno Lores <silvi.ca...@gmail.com>
Subject Re: How to obtain the exception actually failed the job on Mapper or Reducer at runtime?
Date Wed, 11 Dec 2013 07:43:12 GMT

You can check the userlogs directory, where the job and attempt logs are
stored. For each attempt you should have a stderr, stdout and syslog file.
The first two hold the program's output on each stream (useful for
debugging), while the last contains execution details logged by the
Hadoop framework itself.

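As a rough sketch of that inspection step (the paths here are assumptions: on a real cluster the attempt logs sit under `$HADOOP_LOG_DIR/userlogs/<job_id>/<attempt_id>/`, and the exact location depends on your hadoop-env.sh and Hadoop version; a mock layout is built below so the commands can be tried anywhere):

```shell
# Build a tiny mock of the userlogs layout; on a real cluster LOG_ROOT
# would instead be "$HADOOP_LOG_DIR/userlogs" and already be populated.
LOG_ROOT=$(mktemp -d)/userlogs
ATTEMPT=$LOG_ROOT/job_201312110001_0001/attempt_201312110001_0001_m_000000_0
mkdir -p "$ATTEMPT"
printf 'java.io.IOException: broken input record\n' > "$ATTEMPT/stderr"
: > "$ATTEMPT/stdout"
: > "$ATTEMPT/syslog"

# Find the first exception each attempt reported in its log files.
grep -r -m 1 -E '[A-Za-z0-9.]+(Exception|Error)' "$LOG_ROOT"
```

The same grep over the real userlogs tree is usually the quickest way to pull the failing stack trace out of many attempt directories at once.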
Hope it helps.
On 11 Dec 2013 03:59, "Kan Tao" <ken.taokan@gmail.com> wrote:

> Hi guys,
> Does anyone know how to ‘capture’ the exception which actually failed the
> job running on Mapper or Reducer at runtime? It seems Hadoop is designed to
> be fault tolerant, so failed tasks are automatically rerun a certain number
> of times and the real problem is not exposed unless you look into the error
> log. In my use case, I would like to capture the exception and respond
> differently based on the type of the exception.
> Thanks in advance.
> Regards,
> Ken
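For the "respond differently based on the type" part of the question, one framework-free sketch is to classify the exception name found in a task's diagnostics text and map it to a response. Everything here is illustrative: the class name, the regex, and the response labels are made up, and obtaining the diagnostics string from the job client is not shown.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: classify a failed attempt by the first Throwable name that
// appears in its diagnostics/stderr text, then pick a response.
// The diagnostics string would come from the job client or the attempt's
// log files; here it is just passed in as a plain String.
public class FailureClassifier {
    // Matches a fully qualified Throwable name, e.g. java.io.IOException.
    private static final Pattern EXCEPTION =
            Pattern.compile("([\\w.]+(?:Exception|Error))");

    public static String classify(String diagnostics) {
        Matcher m = EXCEPTION.matcher(diagnostics);
        if (!m.find()) {
            return "UNKNOWN";                  // no recognizable name
        }
        String name = m.group(1);
        if (name.endsWith("OutOfMemoryError")) {
            return "RETRY_WITH_MORE_MEMORY";   // hypothetical response
        }
        if (name.endsWith("IOException")) {
            return "CHECK_INPUT_DATA";         // hypothetical response
        }
        return "FAIL_FAST";
    }

    public static void main(String[] args) {
        String diag = "java.io.IOException: broken record in Mapper.map";
        System.out.println(classify(diag));    // prints CHECK_INPUT_DATA
    }
}
```

The dispatch itself would live in the driver program after the job finishes (or after polling task completion events), not inside the Mapper, since a task that throws is simply rerun by the framework up to the configured attempt limit.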
