hadoop-common-user mailing list archives

From Sofia Georgiakaki <geosofie_...@yahoo.com>
Subject Re: many killed tasks, long execution time
Date Fri, 23 Sep 2011 14:04:26 GMT
Mr. Bobby, thank you for your reply.
The IOException was related to speculative execution. My reducers create some files on HDFS, so in some cases multiple task attempts tried to write to the same file.
I turned speculative execution off for the reduce tasks, and that problem was solved.
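For reference, turning it off looks roughly like this (a minimal sketch against the old org.apache.hadoop.mapred API of 0.20.x; the driver class name and everything except the speculative-execution call are placeholders, not my actual job setup):

    import org.apache.hadoop.mapred.JobConf;

    public class RTreeJobDriver {                          // placeholder driver class
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(RTreeJobDriver.class);
            // The reducers write files on HDFS, so a speculative duplicate attempt
            // would race on the same path; disable reduce-side speculation only.
            conf.setReduceSpeculativeExecution(false);
            // Equivalent property form:
            // conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
            // ... set mapper/reducer classes, input/output paths, then submit ...
        }
    }

Map-side speculation is left on, since the mappers do not write any shared files.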

However, the major problem with the long execution time remains. I now assume that the killed map tasks are also due to speculative execution, so the source of the slowdown must lie somewhere else.

I noticed that the average map task time (as well as the time at which the longest-running mapper finishes) increases as I increase the number of reducers! Is this normal? The input is always the same, as is the number of map tasks (158 map tasks executed on the 12-node cluster; each node has capacity for 4 map tasks).
In addition, the job performs fine when the number of reducers is in the range 2-12, but when I increase the reducers further, performance gets progressively worse...
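For context, the only thing that changes between these runs is the reducer count, set on the same JobConf as in the sketch above (the value 40 is just an example):

    conf.setNumReduceTasks(40);
    // or, if the job driver goes through ToolRunner, on the command line:
    //   -D mapred.reduce.tasks=40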

Any ideas would be helpful!
Thank you!





________________________________
From: Robert Evans <evans@yahoo-inc.com>
To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>; Sofia Georgiakaki
<geosofie_tuc@yahoo.com>
Sent: Friday, September 23, 2011 4:28 PM
Subject: Re: many killed tasks, long execution time

Can you include the complete stack trace of the IOException you are seeing?

--Bobby Evans

On 9/23/11 2:15 AM, "Sofia Georgiakaki" <geosofie_tuc@yahoo.com> wrote:




Good morning!

I would be grateful if anyone could help me with a serious problem I'm facing.
I'm trying to run a Hadoop job on a 12-node cluster (48 task capacity), and I run into problems with large input data (10-20 GB), which get worse when I increase the number of reducers.
Many tasks get killed (for example 25 out of the 148 map tasks, and 15 out of 40 reducers)
and the job struggles to finish.

The job is heavy in general, as it builds an R-tree on HDFS.
During the reduce phase I also create and write some binary files on HDFS using FSDataOutputStream, and I noticed that sometimes tasks fail to write correctly to their particular binary file, throwing an IOException when they execute dataFileOut.write(m_buffer).
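The write itself is essentially of this shape (a trimmed sketch: only the FSDataOutputStream usage and the dataFileOut.write(m_buffer) call reflect the real code; the class name, path, and buffer size are simplified placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NodeWriteSketch {                          // placeholder class
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path nodeFile = new Path("/rtree/node-0001.bin"); // placeholder path
            byte[] m_buffer = new byte[4096];                 // serialized node data
            FSDataOutputStream dataFileOut = fs.create(nodeFile);
            dataFileOut.write(m_buffer);  // the call that sometimes throws an IOException
            dataFileOut.close();
        }
    }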

I'm using version 0.20.203, and I had also tested the code on 0.20.2 before (facing the same problems with killed tasks!).


I would appreciate any advice or ideas, as I have to finish my diploma thesis (it has already taken me a year, and I hope it won't take longer).

Thank you very much in advance
Sofia