Subject: Re: yarn jobhistory server not displaying all jobs
From: Matt K <matvey1414@gmail.com>
To: user@hadoop.apache.org, Ravi Prakash
Date: Tue, 27 Jan 2015 09:39:51 -0500

Thanks Ravi! This helps.

On Mon, Jan 26, 2015 at 2:22 PM, Ravi Prakash <ravihoo@ymail.com> wrote:

> Hi Matt!
>
> Take a look at the mapreduce.jobhistory.* configuration parameters here
> for the delay in moving finished jobs to the HistoryServer:
>
> https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
>
> I've seen this error "hadoop is not allowed to impersonate hadoop" when
> I tried configuring hadoop proxy users.
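For concreteness, the hand-off Ravi describes is controlled by a few
mapreduce.jobhistory.* properties in mapred-site.xml. Below is a minimal
sketch of the most relevant ones, shown with the defaults documented in the
mapred-default.xml page linked above:

    <!-- mapred-site.xml: a minimal sketch; values are the documented defaults -->
    <property>
      <!-- How often completed-job files are moved from the intermediate
           directory into the done directory served by the HistoryServer -->
      <name>mapreduce.jobhistory.move.interval-ms</name>
      <value>180000</value>
    </property>
    <property>
      <!-- Where MapReduce ApplicationMasters first drop history files -->
      <name>mapreduce.jobhistory.intermediate-done-dir</name>
      <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
    </property>
    <property>
      <!-- Where the HistoryServer keeps and serves finished jobs -->
      <name>mapreduce.jobhistory.done-dir</name>
      <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
    </property>

With the default three-minute move interval, finished jobs should reach the
HistoryServer quickly, so a gap of 12 hours suggests the move itself is
failing rather than merely lagging, possibly due to the impersonation error
in the stack trace quoted below.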
> On Friday, January 23, 2015 10:43 AM, Matt K <matvey1414@gmail.com> wrote:
>
> Hello,
>
> I am having an issue with YARN's JobHistory Server, which is making it
> painful to debug jobs. The latest jobs (from the last 12 hours or so) are
> missing from the JobHistory Server, but present in the ResourceManager
> YARN UI. I am seeing only 8 jobs in the JobHistory Server, but 15 in the
> YARN UI.
>
> Not much useful stuff in the logs. Every few hours, this exception pops up
> in mapred-hadoop-historyserver.log, but I don't know if it's related.
>
> 2015-01-23 03:41:40,003 WARN org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService: Could not process job files
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hadoop
>         at org.apache.hadoop.ipc.Client.call(Client.java:1409)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1362)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:219)
>         at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1137)
>         at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1127)
>         at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1117)
>         at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:264)
>         at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:231)
>         at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:224)
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1290)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:296)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:764)
>         at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.buildJobIndexInfo(KilledHistoryService.java:196)
>         at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.access$100(KilledHistoryService.java:85)
>         at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler$1.run(KilledHistoryService.java:128)
>         at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler$1.run(KilledHistoryService.java:125)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>         at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.run(KilledHistoryService.java:125)
>         at java.util.TimerThread.mainLoop(Timer.java:555)
>         at java.util.TimerThread.run(Timer.java:505)
>
> Has anyone run into this before?
>
> Thanks,
> -Matt

--
www.calcmachine.com - easy online calculator.
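A footnote on the stack trace above: the AuthorizationException is Hadoop's
proxy-user check refusing to let the superuser "hadoop" impersonate other
users. That check is governed by the hadoop.proxyuser.* settings in
core-site.xml. A minimal sketch follows; the superuser name is taken from
the error message, and the wildcard values are illustrative only (far too
permissive for a production cluster):

    <!-- core-site.xml: a minimal sketch; wildcard values are illustrative -->
    <property>
      <!-- Hosts from which the "hadoop" user may impersonate others -->
      <name>hadoop.proxyuser.hadoop.hosts</name>
      <value>*</value>
    </property>
    <property>
      <!-- Groups whose members the "hadoop" user may impersonate -->
      <name>hadoop.proxyuser.hadoop.groups</name>
      <value>*</value>
    </property>

The NameNode reads these settings at startup, so changes typically require
a restart or a refresh via "hdfs dfsadmin -refreshSuperUserGroupsConfiguration".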
Thanks Ravi! This helps.
<= br>
On Mon, Jan 26, 2015 at 2:22 PM, Ravi Prakash= <ravihoo@ymail.com> wrote:
Hi Matt!

Ta= ke a look at the mapreduce.jobhistory.* configuration parameters here for t= he delay in moving finished jobs to the HistoryServer:

I've seen this error "hadoop is not allowed to impersonat= e hadoop" when I tried configuring hadoop proxy users


On Friday, Janua= ry 23, 2015 10:43 AM, Matt K <matvey1414@gmail.com> wrote:

Hello,

I am an issue with Yarn's JobHistory Server, which = is making it painful to debug jobs. The latest jobs (from the last 12 hours= or so) are missing from the JobHistory Server, but present in ResourceMana= ger Yarn UI. I am seeing 8 jobs only in the JobHistory, and 15 in Yarn UI.<= /div>

Not much useful stuff in the logs. Every few hours= , this exception pops up in mapred-hadoop-historyserver.log, but I don'= t know if it's related.

2015-01-23 03:41:40,003 WARN org.apache.hadoop.mapreduce.v2.hs.KilledH= istoryService: Could not process job files
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.autho= rize.AuthorizationException): User: hadoop is not allowed to impersonate ha= doop
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.ipc.Client.call(Clien= t.java:1409)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.ipc.Client.call(Clien= t.java:1362)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.ipc.ProtobufRpcEngine= $Invoker.invoke(ProtobufRpcEngine.java:206)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at com.sun.proxy.$Proxy9.getBlockLocations= (Unknown Source)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at sun.reflect.GeneratedMethodAccessor18.i= nvoke(Unknown Source)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at sun.reflect.DelegatingMethodAccessorImp= l.invoke(DelegatingMethodAccessorImpl.java:43)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at java.lang.reflect.Method.invoke(Method.= java:606)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.io.retry.RetryInvocat= ionHandler.invokeMethod(RetryInvocationHandler.java:186)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.io.retry.RetryInvocat= ionHandler.invoke(RetryInvocationHandler.java:102)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at com.sun.proxy.$Proxy9.getBlockLocations= (Unknown Source)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.protocolPB.Clien= tNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTrans= latorPB.java:219)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DFSClient.callGe= tBlockLocations(DFSClient.java:1137)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DFSClient.getLoc= atedBlocks(DFSClient.java:1127)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DFSClient.getLoc= atedBlocks(DFSClient.java:1117)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DFSInputStream.f= etchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:264)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DFSInputStream.o= penInfo(DFSInputStream.java:231)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DFSInputStream.&= lt;init>(DFSInputStream.java:224)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DFSClient.open(D= FSClient.java:1290)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DistributedFileS= ystem$3.doCall(DistributedFileSystem.java:300)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DistributedFileS= ystem$3.doCall(DistributedFileSystem.java:296)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.fs.FileSystemLinkReso= lver.resolve(FileSystemLinkResolver.java:81)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.hdfs.DistributedFileS= ystem.open(DistributedFileSystem.java:296)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.fs.FileSystem.open(Fi= leSystem.java:764)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.mapreduce.v2.hs.Kille= dHistoryService$FlagFileHandler.buildJobIndexInfo(KilledHistoryService.java= :196)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.mapreduce.v2.hs.Kille= dHistoryService$FlagFileHandler.access$100(KilledHistoryService.java:85)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.mapreduce.v2.hs.Kille= dHistoryService$FlagFileHandler$1.run(KilledHistoryService.java:128)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.mapreduce.v2.hs.Kille= dHistoryService$FlagFileHandler$1.run(KilledHistoryService.java:125)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at java.security.AccessController.doPrivil= eged(Native Method)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at javax.security.auth.Subject.doAs(Subjec= t.java:415)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.security.UserGroupInf= ormation.doAs(UserGroupInformation.java:1548)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at org.apache.hadoop.mapreduce.v2.hs.Kille= dHistoryService$FlagFileHandler.run(KilledHistoryService.java:125)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at java.util.TimerThread.mainLoop(Timer.ja= va:555)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 at java.util.TimerThread.run(Timer.java:50= 5)

Has anyone ran into this before?

Thanks,
-Matt


<= /blockquote>


--
www.calcmachine.co= m - easy online calculator.
--001a113401383de7ef050da33a25--