hadoop-common-user mailing list archives

From Chris Schneider <Sch...@TransPac.com>
Subject Re: Getting past a HADOOP-5233 issue?
Date Thu, 16 Sep 2010 20:33:48 GMT
Hi Gang,

>I'm running 0.19.1, and unfortunately I've apparently just been bitten by
>https://issues.apache.org/jira/browse/HADOOP-5233. (Note: I usually run
>0.19.2 and will soon be running on 0.20.2, but had trouble finding a
>public m1.large AMI for Amazon EC2.)
>
>My map-only job has written 22971 parts (of 22972) to my output sequence
>file in DFS. I'm wondering whether anyone has a good idea for unsticking
>this map task by hand so that (ideally) it will write its output to the
>sequence file and I can move on. It wouldn't be the end of the world to
>kill the job and go on without this output, but I thought I'd ask first
>in case someone had a good idea.

Just in case anyone else has a similar problem, I happened across this command just before
I was about to kill my job:

hadoop job [-fail-task <task-id>]

I was able to fail the offending task (specifying the task attempt ID rather than the task
ID, of course - see https://issues.apache.org/jira/browse/MAPREDUCE-985); a new attempt was
immediately launched, and the job completed successfully. Hooray!
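For anyone hitting the same MAPREDUCE-985 confusion: the usage string says <task-id>, but
-fail-task actually wants an *attempt* ID. A quick sketch of the ID translation (the job and
task IDs below are made-up placeholders, not from my cluster):

```shell
# Hypothetical IDs for illustration. The first attempt's ID is the task ID
# with the "task_" prefix swapped for "attempt_" and an "_0" suffix appended.
task_id="task_201009161234_0001_m_022971"
attempt_id="attempt_${task_id#task_}_0"
echo "$attempt_id"    # attempt_201009161234_0001_m_022971_0

# Then, on the cluster:
#   hadoop job -fail-task "$attempt_id"
```

Later attempts end in _1, _2, etc.; the JobTracker web UI shows which attempt is actually stuck.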


- Chris
Chris Schneider
Bixo Labs, Inc.
