hadoop-common-user mailing list archives

From "Miles Osborne" <mi...@inf.ed.ac.uk>
Subject Re: Pipes task being killed
Date Wed, 05 Mar 2008 18:58:18 GMT
Is this also true for streaming?

Miles
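
As far as I can tell, yes: a streaming task talks back to the framework
over stderr. Lines of the form reporter:status:<message> are parsed as
status updates and count as progress, so emitting one periodically keeps
a long-running task alive. A rough sketch of a streaming reducer doing
this (the 10,000-record interval is picked arbitrarily):

    #include <iostream>
    #include <string>

    // Streaming task sketch: pass records through and emit a
    // "reporter:status:<message>" line to stderr every 10,000 records
    // so the TaskTracker keeps seeing progress.
    int main() {
      std::string line;
      long n = 0;
      while (std::getline(std::cin, line)) {
        // ... expensive per-record computation here ...
        std::cout << line << "\n";
        if (++n % 10000 == 0) {
          std::cerr << "reporter:status:processed " << n << " records\n";
        }
      }
      return 0;
    }

Anything else written to stderr just ends up in the task's stderr log.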

On 05/03/2008, Richard Kasperski <rkasper@yahoo-inc.com> wrote:
>
> I think you just need to write to stderr. My understanding is that
> Hadoop is happy as long as input is being consumed, output is being
> generated, or status is being reported (see the sketch below the
> quoted thread).
>
>
> Rahul Sood wrote:
> > Hi,
> >
> > We have a Pipes C++ application where the Reduce task does a lot of
> > computation. After some time the task gets killed by the Hadoop
> > framework. The job output shows the following error:
> >
> > Task task_200803051654_0001_r_000000_0 failed to report status for 604
> > seconds. Killing!
> >
> > Is there any way to send a heartbeat to the TaskTracker from a Pipes
> > application? I believe this is possible in Java using
> > org.apache.hadoop.util.Progress, and we're looking for something
> > equivalent in the C++ Pipes API.
> >
> > -Rahul Sood
> > rsood@yahoo-inc.com
> >
> >
> >
>
>
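
For the archives: in the C++ Pipes API the heartbeat lives on the context
object passed to reduce(). HadoopPipes::TaskContext exposes both
progress() and setStatus(const std::string&), and calling either one
periodically resets the task timeout. A minimal sketch (the class name
and the 10,000-value reporting interval are made up for illustration):

    #include <sstream>
    #include "hadoop/Pipes.hh"

    // Reducer sketch: heavy per-value computation, but it reports
    // progress often enough that the TaskTracker does not kill it.
    class LongRunningReducer : public HadoopPipes::Reducer {
    public:
      LongRunningReducer(HadoopPipes::TaskContext& context) {}

      void reduce(HadoopPipes::ReduceContext& context) {
        long processed = 0;
        while (context.nextValue()) {
          // ... expensive computation on context.getInputValue() ...
          if (++processed % 10000 == 0) {
            std::ostringstream status;
            status << "processed " << processed << " values";
            context.setStatus(status.str());  // doubles as a heartbeat
            context.progress();               // bare heartbeat, no message
          }
        }
      }
    };

The status string also shows up in the web UI, which makes long reduces
easier to watch. The blunter alternative is raising mapred.task.timeout
(the "604 seconds" above looks like its 600,000 ms default plus some
polling slack), but reporting progress is the cleaner fix.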


-- 
The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.
