hadoop-mapreduce-user mailing list archives

From Ed Mazur <ma...@cs.umass.edu>
Subject Re: Map output compression leads to JVM crash (0.20.0)
Date Mon, 26 Oct 2009 04:14:56 GMT
I'm not sure if this is the count you're asking about, but only 83
task attempts were made in my last run.  The error happens on every
task, so the job fails quickly.

Ed

On Mon, Oct 26, 2009 at 12:00 AM, Amogh Vasekar <amogh@yahoo-inc.com> wrote:
> Hi,
> Can you let us know if the count of attempt_* IDs is 32k - 1? I remember
> reading about a similar error some time back.
>
> Amogh
>
>
> On 10/26/09 9:06 AM, "Ed Mazur" <mazur@cs.umass.edu> wrote:
>
> I'm having problems on 0.20.0 when map output compression is enabled.
> Map tasks complete (TaskRunner: Task 'attempt_*' done), but it looks
> like the JVM running the task crashes immediately after.  Here's the
> TaskTracker log:
>
> java.io.IOException: Task process exit with nonzero status of 134.
>     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
>
> There's one such error per task attempt, and each one also produces a JRE
> error report file:
>
> http://pastebin.com/f590087f0
>
> This was using DefaultCodec.  I observed similar results with GzipCodec.
>
> Ed Mazur
>
>
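For reference, on 0.20.x map output compression is normally enabled through
the job configuration, roughly as in this sketch (an illustration only; the
thread does not show the actual job setup, and the class name below is
hypothetical):

    import org.apache.hadoop.io.compress.DefaultCodec;
    import org.apache.hadoop.mapred.JobConf;

    public class MapOutputCompressionExample {
        public static void configure(JobConf conf) {
            // Same effect as setting mapred.compress.map.output=true
            conf.setCompressMapOutput(true);
            // Same effect as setting mapred.map.output.compression.codec;
            // DefaultCodec is the zlib-based codec mentioned above.
            conf.setMapOutputCompressorClass(DefaultCodec.class);
        }
    }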
