From: "Dmitry Goldenberg (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Wed, 25 Jan 2017 14:02:26 +0000 (UTC)
Subject: [jira] [Created] (HDFS-11367) AlreadyBeingCreatedException "current leaseholder is trying to recreate file" when trying to append to file

Dmitry Goldenberg created HDFS-11367:
----------------------------------------

             Summary: AlreadyBeingCreatedException "current leaseholder is trying to recreate file" when trying to append to file
                 Key: HDFS-11367
                 URL: https://issues.apache.org/jira/browse/HDFS-11367
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs-client
    Affects Versions: 2.5.0
         Environment: Red Hat Enterprise Linux Server release 6.8
            Reporter: Dmitry Goldenberg

We have code which creates a file in HDFS and continuously appends lines to it, then closes the file at the end. This is done by a single dedicated thread.

We specifically instrumented the code to make sure that only one client/thread ever writes to the file, because we were seeing "current leaseholder is trying to recreate file" errors.

For some background, see for example: https://community.cloudera.com/t5/Storage-Random-Access-HDFS/How-to-append-files-to-HDFS-with-Java-quot-current-leaseholder/m-p/41369

This issue is critical to us, as any such error terminates a mission-critical application in production.
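For reference, a minimal sketch of the single-writer create-then-append pattern described above (the class name and file path are illustrative, not the actual com.myco code; a running HDFS cluster and its configuration on the classpath are assumed):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SingleWriterAppender {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/data/records_20170125_1.txt");

        // Create the file once, from the single dedicated writer thread.
        try (FSDataOutputStream out = fs.create(path, false /* overwrite */)) {
            out.write("first line\n".getBytes(StandardCharsets.UTF_8));
        }

        // Re-open for append. This is the call that intermittently fails with
        // AlreadyBeingCreatedException, even though no other client/thread
        // holds the file open.
        try (FSDataOutputStream out = fs.append(path)) {
            out.write("next line\n".getBytes(StandardCharsets.UTF_8));
            out.hflush(); // make the appended bytes visible to readers
        }
    }
}
```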
Intermittently, we see the below exception, regardless of the fact that all our code does is create the file, keep appending, then close:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /data/records_20170125_1.txt for DFSClient_NONMAPREDUCE_-167421175_1 for client 1XX.2XX.1XX.XXX because current leaseholder is trying to recreate file.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3075)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2905)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3189)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3153)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:612)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.append(AuthorizationProviderProxyClientProtocol.java:125)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:414)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at org.apache.hadoop.ipc.Client.call(Client.java:1411)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy24.append(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy24.append(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:282)
        at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1586)
        at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1626)
        at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1614)
        at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:313)
        at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:309)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:309)
        at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
        at com.myco.MyAppender.getOutputStream(MyAppender.java:147)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)