Subject: Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology
From: fab wol <darkwolli32@gmail.com>
To: user@hadoop.apache.org
Date: Fri, 11 Oct 2013 14:52:29 +0200

This line:

2013-10-11 10:24:53,033 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:java.io.IOException: java.lang.NullPointerException

indicates, imho, that the user "mapred" is being used for execution (fyi: I am submitting the job from the CLI on another node: hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar wordcount hdfs_input_path hdfs_output_path). The file permissions for this file are:

-rwxr-x--x 1 mapred hadoop 1382 Oct 10 15:02 topology.py*

I temporarily set the permissions to 777 to see if anything changes, but it didn't. I checked only the jobtracker; are the other nodes important for this as well?

Thanks in advance, especially for the quick response!
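For anyone checking the same thing: whether the execute bit actually blocks a user can be sketched with a stand-in file (substitute the real topology.py path from the log when running this on the JobTracker host):

```shell
# Sketch with a stand-in file; the real path is the topology.py from the log.
script=$(mktemp /tmp/topology.XXXXXX)

chmod 640 "$script"   # no execute bit anywhere -> fork would fail with error=13
if [ -x "$script" ]; then echo "executable"; else echo "not executable"; fi

chmod 751 "$script"   # -rwxr-x--x, the mode shown in the listing above
if [ -x "$script" ]; then echo "executable"; else echo "not executable"; fi

rm -f "$script"
```

Note that the mode shown above (-rwxr-x--x, i.e. 751) already grants execute to everyone, so if the error persists it may be a parent directory denying traversal rather than the file itself.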
Wolli

2013/10/11 DSuiter RDX <dsuiter@rdx.com>:

> The user running the job (which might not be your username, depending on
> your setup) does not appear to have execute permission on the jobtracker's
> cluster-topology Python script. I'm basing this on the lines:
>
> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping:
> Exception running /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py 10.160.25.249
> java.io.IOException: Cannot run program
> "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in
> directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"):
> java.io.IOException: error=13, Permission denied
>
> So checking the permissions on that file, determining which user kicks off
> your job (which depends on how you submit it), and making sure that user
> has execute permission on that file will probably fix this.
>
> If you are using a management console such as Cloudera SCM to submit jobs,
> they run as an application user: Flume services run under the "Flume" user,
> HBase jobs typically run under the HBase user, and so on. It can cause some
> surprises if you do not expect it.
>
> Devin Suiter
> Jr. Data Solutions Software Engineer
> 100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
> Google Voice: 412-256-8556 | www.rdx.com
>
>
> On Fri, Oct 11, 2013 at 7:59 AM, fab wol <darkwolli32@gmail.com> wrote:
>
>> Hey everyone, I've been supplied with a decent ten-node CDH 4.4 cluster,
>> only 7 days old, and someone tried some HBase stuff on it. Now I wanted to
>> try some MR stuff on it, but starting a job is already not possible (even
>> the wordcount example).
>> The error log of the jobtracker is 700k lines long, but it consists
>> mainly of these lines, repeated:
>>
>> 2013-10-11 10:24:53,033 INFO org.apache.hadoop.mapred.JobTracker: Lost tracker 'tracker_z-asanode02:localhost/127.0.0.1:53712'
>> 2013-10-11 10:24:53,033 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:java.io.IOException: java.lang.NullPointerException
>> 2013-10-11 10:24:53,034 INFO org.apache.hadoop.ipc.Server: IPC Server handler 22 on 8021, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@13b31acd, true, true, true, -1), rpc version=2, client version=32, methodsFingerPrint=-159967141 from 10.160.25.250:44389: error: java.io.IOException: java.lang.NullPointerException
>> java.io.IOException: java.lang.NullPointerException
>>     at org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2751)
>>     at org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>     at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>     at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>     at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:396)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py 10.160.25.249
>> java.io.IOException: Cannot run program "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"): java.io.IOException: error=13, Permission denied
>>     at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>>     at org.apache.hadoop.util.Shell.run(Shell.java:188)
>>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>>     at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
>>     at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
>>     at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>     at org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
>>     at org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>>     at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>>     at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>>     at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:396)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>> Caused by: java.io.IOException: java.io.IOException: error=13, Permission denied
>>     at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>     at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>>     at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>     ... 21 more
>>
>> It doesn't matter whether it is a plain Hadoop job or an Oozie-submitted
>> job; there seems to be something wrong in the basic configuration. Anyone
>> have an idea?
>>
>> Cheers
>> Wolli
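For context on what the JobTracker is trying to run there: a rack-topology script just maps each IP/hostname argument to a rack path on stdout. A minimal sketch (the host-to-rack mapping below is an assumed example; the real file is generated by Cloudera Manager):

```shell
#!/bin/sh
# Minimal rack-topology script sketch. Hadoop invokes it with one or more
# IPs/hostnames as arguments and expects one rack path per argument on
# stdout. The mapping here is hypothetical, not the CDH-generated one.
for host in "$@"; do
  case "$host" in
    10.160.25.249) echo /rack1 ;;
    10.160.25.250) echo /rack2 ;;
    *)             echo /default-rack ;;
  esac
done
```

The script must be executable by the user the JobTracker runs as, which is exactly what the error=13 in the trace above is complaining about.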