hadoop-hdfs-user mailing list archives

From Robert Dyer <psyb...@gmail.com>
Subject Re: Job cleanup
Date Wed, 17 Apr 2013 08:44:41 GMT
I think the problem is I need to report progress() from my cleanup task.
How can I do this?

The commitJob() in my custom OutputCommitter only provides an
org.apache.hadoop.mapreduce.JobContext, which has no getProgressible()
like the old org.apache.hadoop.mapred.JobContext did.
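(Aside, not from the original thread: one commonly suggested workaround for a long-running commit is a background heartbeat thread that pings progress() while the output move is in flight. Below is a minimal, dependency-free sketch of that pattern; the Progressable interface here is just a stand-in for org.apache.hadoop.util.Progressable, and the sleep stands in for the real commit work.)

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for org.apache.hadoop.util.Progressable; in a real task you
// would wire this to the context's progress() call instead.
interface Progressable {
    void progress();
}

// Pings progress() on a daemon thread so the framework does not decide
// the task is hung while a long commit/rename is running.
class ProgressHeartbeat implements AutoCloseable {
    private final Thread thread;
    private volatile boolean running = true;

    ProgressHeartbeat(Progressable target, long intervalMillis) {
        thread = new Thread(() -> {
            while (running) {
                target.progress();
                try {
                    Thread.sleep(intervalMillis);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        thread.setDaemon(true);
        thread.start();
    }

    @Override
    public void close() throws InterruptedException {
        running = false;
        thread.interrupt();
        thread.join();
    }
}

public class HeartbeatDemo {
    public static void main(String[] args) throws Exception {
        AtomicInteger pings = new AtomicInteger();
        try (ProgressHeartbeat hb =
                 new ProgressHeartbeat(pings::incrementAndGet, 10)) {
            Thread.sleep(100); // simulate a long output move
        }
        System.out.println(pings.get() > 0); // prints true
    }
}
```

The try-with-resources ensures the heartbeat stops as soon as the commit work finishes, so the task does not keep reporting progress after it is done.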


On Sat, Apr 13, 2013 at 2:35 PM, Robert Dyer <psybers@gmail.com> wrote:

> What does the job cleanup task do?  My understanding was that it just
> cleans up any intermediate/temporary files and moves the reducer output
> to the output directory.  Does it do more?
> One of my jobs runs, all maps and reduces finish, but then the job cleanup
> task never finishes.  Instead it gets killed several times until the entire
> job gets killed:
> Task attempt_201303272327_0772_m_000105_0 failed to report status for 600 seconds. Killing!
> I suppose that since my reducers generate around 20 GB of output, moving
> it perhaps takes too long?
> Is it possible to disable speculative execution *only* for the cleanup
> task?
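(Aside, not from the original thread: the 600-second limit in the log line above is the mapred.task.timeout property, measured in milliseconds. If reporting progress from the cleanup task isn't feasible, raising that timeout is the blunt alternative; note it applies job-wide, not to the cleanup task alone. A sketch of the setting:)

```xml
<!-- In the job's configuration (or mapred-site.xml).
     Value is in milliseconds; this raises the no-progress limit
     from the default 10 minutes to 30. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value>
</property>
```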
