hadoop-mapreduce-user mailing list archives

From Mayuresh <mayuresh.kshirsa...@gmail.com>
Subject Debugging killed task attempts
Date Thu, 02 Jun 2011 09:36:58 GMT
Hi,

I am trying to scan around 4,600,000 rows of HBase data, using Hive to query
them. I start the job with around 25 maps, and there are 11 nodes in my
cluster, each running 2 maps at a time.
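
For context, the 2-maps-per-node limit comes from the TaskTracker slot
setting in mapred-site.xml (Hadoop 0.20 property name; this is what I have
configured):

    <property>
      <!-- at most 2 map tasks run concurrently on each TaskTracker -->
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>2</value>
    </property>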

Scanning all of this data took around 7 minutes with 7 nodes. However, after
I added 4 more nodes, it is taking even more time. For the map task that is
taking the longest, I see the following:

attempt_201106011013_0010_m_000009_0
    Task attempt:    /default-rack/domU-12-31-39-0F-75-13.compute-1.internal
    Cleanup attempt: /default-rack/domU-12-31-39-0F-75-13.compute-1.internal
    Status:          KILLED (100.00%)
    Started:         2-Jun-2011 08:53:16
    Finished:        2-Jun-2011 09:01:48 (8mins, 32sec)

and

attempt_201106011013_0010_m_000009_1
    Task attempt:    /default-rack/ip-10-196-198-48.ec2.internal
    Status:          SUCCEEDED (100.00%)
    Started:         2-Jun-2011 08:57:28
    Finished:        2-Jun-2011 09:01:44 (4mins, 15sec)
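
One thing I notice from the timestamps: attempt _1 started on a different
node about 4 minutes after _0, while _0 was still running, and _0 was killed
within seconds of _1 succeeding. That pattern looks like speculative
execution to me, though I am not sure. If that guess is right, I assume I
could turn it off per session from the Hive CLI (property names are the
Hadoop 0.20 ones):

    -- guess on my part: if the duplicate attempt is speculative execution,
    -- these session-level settings should stop the extra attempts
    set mapred.map.tasks.speculative.execution=false;
    set mapred.reduce.tasks.speculative.execution=false;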

The first attempt ran for 8 mins 32 secs before getting killed. I checked the
DataNode logs, and all I see there is some data coming in and some going out.
Can someone point me to how I can debug what exactly was going on, and how I
can avoid such long, non-useful task attempts from being run?
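
For reference, here is what I was planning to try next (Hadoop 0.20-style
commands; I am not sure the tasklog URL parameter is right for my version):

    # overall job status, and task completion events (shows KILLED attempts)
    hadoop job -status job_201106011013_0010
    hadoop job -events job_201106011013_0010 0 100

    # per-attempt stdout/stderr/syslog via the TaskTracker web UI on port 50060:
    # http://domU-12-31-39-0F-75-13.compute-1.internal:50060/tasklog?attemptid=attempt_201106011013_0010_m_000009_0&all=true
    # (the parameter may be taskid= instead of attemptid= on older versions)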

Thanks,
-Mayuresh
