Subject: Re: Job initialization failed: java.lang.NullPointerException at resolveAndAddToTopology
From: DSuiter RDX <dsuiter@rdx.com>
To: user@hadoop.apache.org
Date: Fri, 11 Oct 2013 08:19:54 -0400

The user running the job (which might not be your username, depending on your setup) does not appear to have execute permission on the jobtracker's cluster-topology Python script. I'm basing this on the lines:

2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py 10.160.25.249
java.io.IOException: Cannot run program "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"): java.io.IOException: error=13, Permission denied

So checking the permissions on that file, determining which user is kicking off your job (which depends on how you submit it), and making sure that user has execute permission on that file will probably fix this.
If you are using a management console such as Cloudera SCM, submitted jobs run as an application user: Flume services run under the "flume" user, HBase jobs typically run under the "hbase" user, and so on. This can cause some surprises if you do not expect it.

*Devin Suiter*
Jr. Data Solutions Software Engineer
100 Sandusky Street | 2nd Floor | Pittsburgh, PA 15212
Google Voice: 412-256-8556 | www.rdx.com


On Fri, Oct 11, 2013 at 7:59 AM, fab wol wrote:

> Hey everyone, I've been supplied with a decent ten-node CDH 4.4 cluster,
> only 7 days old, and someone has tried some HBase stuff on it. Now I wanted
> to try some MR stuff on it, but starting a job is already not possible (even
> the wordcount example). The jobtracker's error log is 700k lines long, but
> it consists mainly of these lines, repeated:
>
> 2013-10-11 10:24:53,033 INFO org.apache.hadoop.mapred.JobTracker: Lost tracker 'tracker_z-asanode02:localhost/127.0.0.1:53712'
> 2013-10-11 10:24:53,033 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:java.io.IOException: java.lang.NullPointerException
> 2013-10-11 10:24:53,034 INFO org.apache.hadoop.ipc.Server: IPC Server handler 22 on 8021, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@13b31acd, true, true, true, -1), rpc version=2, client version=32, methodsFingerPrint=-159967141 from 10.160.25.250:44389: error: java.io.IOException: java.lang.NullPointerException
> java.io.IOException: java.lang.NullPointerException
>     at org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2751)
>     at org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>     at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>     at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>     at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
> 2013-10-11 10:24:53,035 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py 10.160.25.249
> java.io.IOException: Cannot run program "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER/topology.py" (in directory "/run/cloudera-scm-agent/process/556-mapreduce-JOBTRACKER"): java.io.IOException: error=13, Permission denied
>     at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
>     at org.apache.hadoop.util.Shell.run(Shell.java:188)
>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
>     at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:242)
>     at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:180)
>     at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>     at org.apache.hadoop.mapred.JobTracker.resolveAndAddToTopology(JobTracker.java:2750)
>     at org.apache.hadoop.mapred.JobTracker.addNewTracker(JobTracker.java:2731)
>     at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3227)
>     at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2931)
>     at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:474)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
> Caused by: java.io.IOException: java.io.IOException: error=13, Permission denied
>     at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>     at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>     at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>     ... 21 more
>
> It doesn't matter whether it is a pure Hadoop job or an Oozie-submitted job;
> there seems to be something wrong in the basic configuration. Anyone an idea?
>
> Cheers
> Wolli