List-Id: user@hive.apache.org
Subject: Re: LeaseExpiredException in Hive
Date: Thu, 12 May 2016 13:11:53 +0530
From: "sreebalineni ."
To: user@hive.apache.org

Please do test in debug mode. I suspect it could be an access issue on that path, or not enough space.

On May 12, 2016 12:54 PM, "Arun Vasu" wrote:
> Hi All,
>
> I was running 10 Hive queries (INSERT INTO TABLE ...) in parallel, and it
> failed with an exception saying a few of the MR jobs could not find the
> required files.
> I found the likely reason for this issue here:
> http://stackoverflow.com/questions/7559880/leaseexpiredexception-no-lease-error-on-hdfs
>
> I am calling the Hive queries using Scalding's Execution.sequence(...);
> the exception trace is given below.
> It would be really great if someone could shed some light on this issue
> and how to solve it.
>
> 12/05/2016 4:36:03 PM INFO: parquet.hadoop.InternalParquetRecordWriter:
> Flushing mem columnStore to file. allocated memory: 80,530,611
>
> 16/05/12 16:42:07 WARN hdfs.DFSClient: DataStreamer Exception
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
> No lease on
> /tmp/hive-dintg/hive_2016-05-12_16-25-33_755_4680854916521511758-7/-mr-10005/0/emptyFile
> (inode 456570980): File does not exist. Holder
> DFSClient_NONMAPREDUCE_-2077717826_17 does not have any open files.
> > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3416) > > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3218) > > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3100) > > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:636) > > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188) > > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476) > > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) > > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) > > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) > > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) > > at java.security.AccessController.doPrivileged(Native Method) > > at javax.security.auth.Subject.doAs(Subject.java:415) > > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) > > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) > > > at org.apache.hadoop.ipc.Client.call(Client.java:1411) > > at org.apache.hadoop.ipc.Client.call(Client.java:1364) > > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > > at com.sun.proxy.$Proxy16.addBlock(Unknown Source) > > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391) > > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) > > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:606) > > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) > > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > > at com.sun.proxy.$Proxy17.addBlock(Unknown Source) > > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473) > > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290) > > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536) > > 16/05/12 16:42:07 ERROR hdfs.DFSClient: Failed to close inode 456570980 > > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): > No lease on > /tmp/hive-acoe_omnds_dintg/hive_2016-05-12_16-25-33_755_4680854916521511758-7/-mr-10005/0/emptyFile > (inode 456570980): File does not exist. Holder > DFSClient_NONMAPREDUCE_-2077717826_17 does not have any open files. 
> > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3416) > > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3218) > > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3100) > > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:636) > > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188) > > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476) > > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) > > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) > > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) > > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) > > at java.security.AccessController.doPrivileged(Native Method) > > at javax.security.auth.Subject.doAs(Subject.java:415) > > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642) > > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) > > > at org.apache.hadoop.ipc.Client.call(Client.java:1411) > > at org.apache.hadoop.ipc.Client.call(Client.java:1364) > > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > > at com.sun.proxy.$Proxy16.addBlock(Unknown Source) > > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391) > > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) > > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:606) > > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) > > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > > at com.sun.proxy.$Proxy17.addBlock(Unknown Source) > > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473) > > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290) > > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536) > > 16/05/12 16:42:07 WARN hdfs.DFSClient: DataStreamer Exception > > > -- > Thanks, > Arun
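A note on the trace above: both exceptions point at temp files under a shared scratch path (`/tmp/hive-dintg/...`), which is consistent with the known failure mode from the linked Stack Overflow question, where parallel jobs writing to and cleaning up the same scratch directory invalidate each other's HDFS leases. One commonly suggested workaround is to give each parallel Hive session its own `hive.exec.scratchdir` (a real Hive configuration property). The sketch below only builds the per-query CLI invocations; the query strings and paths are illustrative, not taken from the thread:

```python
# Hypothetical sketch: build one `hive` CLI invocation per query, each with
# a unique hive.exec.scratchdir, so concurrent jobs do not share (and
# clean up) the same temp files under /tmp.
import uuid


def build_hive_commands(queries):
    """Return one argv list per query, each with its own scratch directory."""
    commands = []
    for query in queries:
        # Unique scratch dir per session avoids lease collisions on shared temp files.
        scratch = "/tmp/hive-scratch-%s" % uuid.uuid4().hex
        commands.append(
            ["hive", "--hiveconf", "hive.exec.scratchdir=%s" % scratch, "-e", query]
        )
    return commands


cmds = build_hive_commands(
    ["INSERT INTO TABLE t1 SELECT * FROM s1",
     "INSERT INTO TABLE t2 SELECT * FROM s2"]
)
# Each invocation carries a distinct scratch directory.
scratch_dirs = [c[2] for c in cmds]
assert len(set(scratch_dirs)) == len(scratch_dirs)
```

Each argv list could then be launched with `subprocess.Popen` (or the equivalent in the caller's job runner); whether this fully resolves the Scalding `Execution.sequence` case would need testing against the actual cluster.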