From: hadoop hive <hadoophive@gmail.com>
To: user@hadoop.apache.org
Date: Sat, 2 Aug 2014 23:20:58 +0530
Subject: Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

Hey, try raising the ulimit to 64k for the user that runs the query, and
increase the task timeout in the scheduler, which is currently set to 600
sec. Check the JT logs as well for further issues.
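Something along these lines (just a rough sketch, not exact values for your
cluster -- I'm assuming the query runs as usnm123, the user in your log
paths, "query.hql" is only a placeholder for your script, and property names
can differ between distributions):

# On every worker node, raise the open-files limit in /etc/security/limits.conf
# (log out and back in afterwards so it takes effect):
#   usnm123  soft  nofile  65536
#   usnm123  hard  nofile  65536
# Verify as that user:
ulimit -n

# Raise the task timeout to 20 minutes (the value is in milliseconds),
# either per session in the Hive CLI:
#   SET mapred.task.timeout=1200000;
# or when launching the script:
hive --hiveconf mapred.task.timeout=1200000 -f query.hql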
Thanks

On Aug 2, 2014 11:09 PM, "Ana Gillan" <ana.gillan@gmail.com> wrote:
> I'm not sure which user is fetching the data, but I'm assuming no one
> changed that from the default. The data isn't huge in size, just in number,
> so I suppose the open files limit is not the issue?
>
> I'm running the job again with mapred.task.timeout=1200000, but containers
> are still being killed in the same way, just without the timeout message.
> And it somehow massively slowed down the machine as well, so even typing
> commands took a long time.
>
> I'm not sure what you mean by which stage it's getting killed on. If you
> mean the command-line progress counters, it's always on Stage-1.
> Also, this is the end of the container log for the killed container.
> Failed and killed jobs always start fine with lots of these "processing
> file" and "processing alias" statements, but then suddenly warn about a
> DataStreamer exception and are then killed with an error, which is the same
> as the warning. I'm not sure if this exception is the actual issue or if
> it's just a knock-on effect of something else.
>
> 2014-08-02 17:47:38,618 INFO [main]
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file
> hdfs://clustnm:8020/user/usnm123/foldernm/fivek/2w63.xml.gz
> 2014-08-02 17:47:38,641 INFO [main]
> org.apache.hadoop.hive.ql.exec.MapOperator: Processing alias
> foldernm_xml_load for file hdfs://clustnm:8020/user/usnm123/foldernm/fivek
> 2014-08-02 17:47:38,932 INFO [main]
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file
> hdfs://clustnm:8020/user/usnm123/foldernm/fivek/2w67.xml.gz
> 2014-08-02 17:47:38,989 INFO [main]
> org.apache.hadoop.hive.ql.exec.MapOperator: Processing alias
> foldernm_xml_load for file hdfs://clustnm:8020/user/usnm123/foldernm/fivek
> 2014-08-02 17:47:42,675 INFO [main]
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file
> hdfs://clustnm:8020/user/usnm123/foldernm/fivek/2w6i.xml.gz
> 2014-08-02 17:47:42,888 INFO [main]
> org.apache.hadoop.hive.ql.exec.MapOperator: Processing alias
> foldernm_xml_load for file hdfs://clustnm:8020/user/usnm123/foldernm/fivek
> 2014-08-02 17:47:45,416 WARN [Thread-8] org.apache.hadoop.hdfs.DFSClient:
> DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
> No lease on
> /tmp/hive-usnm123/hive_2014-08-02_17-41-52_914_251548734850890001/_task_tmp.-ext-10001/_tmp.000006_0:
> File does not exist. Holder
> DFSClient_attempt_1403771939632_0409_m_000006_0_303479000_1 does not have
> any open files.
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2398)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2217)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2137)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1240)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 2014-08-02 17:47:45,417 ERROR [Thread-3] org.apache.hadoop.hdfs.DFSClient:
> Failed to close file
> /tmp/hive-usnm123/hive_2014-08-02_17-41-52_914_251548734850890001/_task_tmp.-ext-10001/_tmp.000006_0
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
> No lease on
> /tmp/hive-usnm123/hive_2014-08-02_17-41-52_914_251548734850890001/_task_tmp.-ext-10001/_tmp.000006_0:
> File does not exist. Holder
> DFSClient_attempt_1403771939632_0409_m_000006_0_303479000_1 does not have
> any open files.
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2398)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2217)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2137)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1240)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
>
>
> Thanks a lot for your attention!
>
>
> From: hadoop hive <hadoophive@gmail.com>
> Reply-To: <user@hadoop.apache.org>
> Date: Saturday, 2 August 2014 17:36
> To: <user@hadoop.apache.org>
> Subject: Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)
>
> 32k seems fine for the mapred user (I hope you are using that user for
> fetching your data), but if you have huge data on your system you can try 64k.
>
> Did you try increasing your timeout from 600 sec to something like 20 mins?
>
> Can you also check at which stage it is getting hung or killed?
>
> Thanks
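P.S. To see what limit the tasks actually get before and after the change
(again only a sketch -- <pid> is a placeholder for a running task or
TaskTracker/NodeManager process on a worker node):

# As the user that runs the tasks, on a worker node:
ulimit -n
# Or inspect a running process directly:
grep 'open files' /proc/<pid>/limits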