hadoop-user mailing list archives

From Brahma Reddy Battula <brahmareddy.batt...@huawei.com>
Subject RE: LeaseExpiredException: No lease on /user/biadmin/analytic-root/SX5XPWPPDPQH/.
Date Tue, 18 Oct 2016 11:09:41 GMT
Can you check the NameNode logs to see whether this file was deleted or renamed (possibly via its
parent folder) before this reducer ran?
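A quick way to check this is to search the NameNode audit log for delete or rename operations on the output path. A minimal sketch follows; the sample log entries, paths, and the audit-log location mentioned in the comments are illustrative assumptions, not real output (the exact format and location vary by Hadoop distribution):

```shell
#!/bin/sh
# Sketch: filter NameNode audit-log entries for delete/rename commands.
# The sample entries below are made-up stand-ins; on a live cluster you
# would run something like:
#   grep -E 'cmd=(delete|rename)' /var/log/hadoop-hdfs/hdfs-audit.log
# (hypothetical path -- check your distribution's log directory).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2016-10-17 10:01:02 FSNamesystem.audit: allowed=true ugi=biadmin cmd=create src=/user/biadmin/out/part-r-2 dst=null
2016-10-17 10:01:05 FSNamesystem.audit: allowed=true ugi=biadmin cmd=delete src=/user/biadmin/out dst=null
2016-10-17 10:01:07 FSNamesystem.audit: allowed=true ugi=biadmin cmd=rename src=/user/biadmin/out dst=/user/biadmin/out.bak
EOF
# Keep only the destructive operations that could invalidate an open lease.
MATCHES=$(grep -E 'cmd=(delete|rename)' "$LOG")
echo "$MATCHES"
rm -f "$LOG"
```

If a delete or rename of the output directory shows up between the file's create and the reducer's close, that explains the lost lease.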




--Brahma Reddy Battula

From: Zhang Jianfeng [mailto:jzhang.ch@gmail.com]
Sent: 18 October 2016 18:55
To: Gaurav Kumar
Cc: user.hadoop; Rakesh Radhakrishnan
Subject: Re: LeaseExpiredException: No lease on /user/biadmin/analytic-root/SX5XPWPPDPQH/.

Thanks Gaurav. In my case I called the HDFS API directly to write the reducer result into HDFS,
without using Spark.

2016-10-17 23:24 GMT+08:00 Gaurav Kumar <gauravkumar37@gmail.com>:

Hi,

Please also check whether you are writing a coalesced RDD. I encountered the same error while
writing a coalesced RDD/DataFrame to HDFS. If that is your case, please use repartition instead.

Sent from OnePlus 3

Thanks & Regards,
Gaurav Kumar

On Oct 17, 2016 11:22 AM, "Zhang Jianfeng" <jzhang.ch@gmail.com>
wrote:
Thanks Rakesh for your kind help. Actually, during the job only one reducer output file (for
example part-r-2) hit this error; the other reducers worked fine.

Best Regards,
Jian Feng

2016-10-17 11:49 GMT+08:00 Rakesh Radhakrishnan <rakeshr@apache.org>:
Hi Jian Feng,

Could you please check your code for any possibility of simultaneous access to the same file?
This situation mostly happens when multiple clients try to access the same file.
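HDFS allows only one writer per file at a time: the writer holds a lease, and the LeaseExpiredException means the NameNode no longer considers this client the lease holder. As a loose local analogy only (not the HDFS mechanism itself), an exclusive flock(1) lock shows why a second concurrent writer gets rejected. This sketch assumes Linux with util-linux flock; the temp file and sleep durations are arbitrary:

```shell
#!/bin/sh
# Loose analogy: an exclusive flock(1) lock stands in for the HDFS
# single-writer lease. Requires util-linux flock (Linux).
F=$(mktemp)
# First "client" grabs the lock and holds it for 2 seconds.
( flock -x 9; sleep 2 ) 9>"$F" &
sleep 0.5
# Second "client" tries a non-blocking exclusive lock and is refused,
# just as a second HDFS writer finds the lease held by someone else.
if flock -n "$F" -c 'true'; then
  RESULT=acquired
else
  RESULT=rejected
fi
echo "second writer: $RESULT"
wait
rm -f "$F"
```

In HDFS the same thing happens at the NameNode: if two clients (or two tasks of the same job) open the same path for write, the second open can steal or invalidate the first one's lease, and the first client then fails on close/complete.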

Code Reference:- https://github.com/apache/hadoop/blob/branch-2.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L2737

Best Regards,
Rakesh
Intel

On Mon, Oct 17, 2016 at 7:16 AM, Zhang Jianfeng <jzhang.ch@gmail.com>
wrote:
Hi ,

    I hit a weird error. On our Hadoop cluster (2.2.0), a LeaseExpiredException is occasionally
thrown.

The stacktrace is as below:


org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on /user/biadmin/analytic-root/SX5XPWPPDPQH/.executions/.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2737)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2801)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2783)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:611)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:428)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59586)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(AccessController.java:310)
at javax.security.auth.Subject.doAs(Subject.java:573)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy7.complete(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy7.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:371)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:1894)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:1881)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:71)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:104)
at java.io.FilterOutputStream.close(FilterOutputStream.java:154)
Any help will be appreciated!

--
Best Regards,
Jian Feng



