hadoop-mapreduce-user mailing list archives

From MONTMORY Alain <alain.montm...@thalesgroup.com>
Subject exception related to logging (0.21.0)
Date Tue, 18 Jan 2011 08:58:30 GMT
Hi everybody,

When running map/reduce jobs on 0.21.0, I get this exception:

java.lang.NullPointerException
	at org.apache.hadoop.mapred.TaskLogAppender.flush(TaskLogAppender.java:69)
	at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:222)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:219)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
	at org.apache.hadoop.mapred.Child.main(Child.java:211)

In the mailing list archives, I have seen another user who had the same problem (2010-06-15),
but I can't tell whether a solution was ever found.

Thank you; any ideas appreciated.

[@@THALES GROUP RESTRICTED@@]

-----Original Message-----
From: Harsh J [mailto:qwertymaniac@gmail.com]
Sent: Sunday, January 2, 2011 12:06
To: mapreduce-user@hadoop.apache.org
Subject: Re: 0.20.2 : Running Chained jobs using JobControl

Although for complex workflows, one should check out Oozie or Azkaban.

On Sun, Jan 2, 2011 at 1:55 PM, Hari Sreekumar <hsreekumar@clickable.com> wrote:
> Can't we run chained jobs like this?
> boolean j1 = job1.waitForCompletion(..);
> if (j1) job2.waitForCompletion(..);
>
> and setting up the jobs such that job1's output dir is job2's input dir?
> Thanks,
> Hari
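The pattern Hari quotes can be sketched in valid Java. Since a real `org.apache.hadoop.mapreduce.Job` only runs against a cluster, the sketch below uses a hypothetical `FakeJob` stand-in whose `waitForCompletion` simply reports success or failure; only the gating logic is the point.

```java
// Sketch of success-gated job chaining. FakeJob is a hypothetical stand-in
// for org.apache.hadoop.mapreduce.Job, which cannot run outside a cluster.
public class ChainSketch {

    // Minimal stand-in: the "succeeds" flag plays the role of the job's
    // final status that the real waitForCompletion(boolean) would return.
    static class FakeJob {
        private final boolean succeeds;
        FakeJob(boolean succeeds) { this.succeeds = succeeds; }
        boolean waitForCompletion(boolean verbose) { return succeeds; }
    }

    // Runs job2 only if job1 succeeded; returns overall chain success.
    static boolean runChain(FakeJob job1, FakeJob job2) {
        boolean j1 = job1.waitForCompletion(true);
        if (j1) {
            return job2.waitForCompletion(true);
        }
        return false; // job1 failed: never start job2
    }

    public static void main(String[] args) {
        System.out.println(runChain(new FakeJob(true), new FakeJob(true)));  // true
        System.out.println(runChain(new FakeJob(false), new FakeJob(true))); // false
    }
}
```

In the real driver, "setting up the jobs such that job1's output dir is job2's input dir" means configuring `FileOutputFormat`/`FileInputFormat` paths on the two jobs before calling `runChain`.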

Yes, this could work for simple success/failure based chaining
(although it makes the driver code look a tad messy?).

This is what JobControl aims to provide from within the Hadoop
libraries themselves, plus the ability to have more control over the
dependent, waiting jobs.
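A toy, self-contained model of the scheduling behavior described above: each job holds a list of jobs it depends on and becomes runnable only once all of them have finished. The class and method names below are hypothetical stand-ins; the real API lives in `org.apache.hadoop.mapred.jobcontrol` (`Job.addDependingJob`, `JobControl`).

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of JobControl-style scheduling: a job is launched only once
// every job it depends on is done. Names here are hypothetical stand-ins
// for the org.apache.hadoop.mapred.jobcontrol classes.
public class JobControlSketch {

    static class ControlledJob {
        final String name;
        final List<ControlledJob> deps = new ArrayList<>();
        boolean done = false;

        ControlledJob(String name) { this.name = name; }

        // Mirrors Job.addDependingJob(Job): this job must wait for 'dep'.
        void addDependingJob(ControlledJob dep) { deps.add(dep); }

        boolean ready() {
            for (ControlledJob d : deps) {
                if (!d.done) return false;
            }
            return true;
        }
    }

    // Repeatedly launch whatever is ready until every job has run;
    // returns the order in which jobs were started.
    static List<String> runAll(List<ControlledJob> jobs) {
        List<String> order = new ArrayList<>();
        int remaining = jobs.size();
        while (remaining > 0) {
            boolean progressed = false;
            for (ControlledJob j : jobs) {
                if (!j.done && j.ready()) {
                    order.add(j.name); // "run" the job
                    j.done = true;
                    remaining--;
                    progressed = true;
                }
            }
            if (!progressed) {
                throw new IllegalStateException("dependency cycle among jobs");
            }
        }
        return order;
    }
}
```

The real `JobControl` additionally tracks failed jobs and marks their dependents as failed without running them, which is the extra control over waiting jobs mentioned above.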

-- 
Harsh J
www.harshj.com
