Subject: Re: PathIsNotEmptyDirectoryException in Namenode HDFS log when using Jobmanager HA in YARN
From: Stephan Ewen
To: user@flink.apache.org
Date: Tue, 11 Oct 2016 17:59:12 +0200

Hi!

I think to some extent this is expected. There is some cleanup code that deletes files and then issues a delete request for the parent directory. It relies on the fact that HDFS only removes the parent directory if it is empty, i.e. after the last file in it has been deleted.

Is this a problem right now, or just confusing behavior?
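For reference, the pattern looks roughly like the following. This is only a minimal sketch against the plain Hadoop FileSystem API, not the actual Flink cleanup code, and the file name under /flink/recovery is made up:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParentDirCleanupSketch {

    public static void main(String[] args) throws IOException {
        // Assumes fs.defaultFS points at the HDFS cluster; the recovery
        // directory matches the one from the report, the file name is
        // hypothetical.
        FileSystem fs = FileSystem.get(new Configuration());
        Path stateFile = new Path("/flink/recovery/some-state-handle");
        Path parent = stateFile.getParent();

        // 1) Delete the state file itself (non-recursive delete).
        fs.delete(stateFile, false);

        // 2) Opportunistically try to delete the parent directory, again
        //    non-recursively. HDFS removes it only if it is now empty;
        //    otherwise the NameNode rejects the call with
        //    PathIsNotEmptyDirectoryException, which is the INFO line
        //    that shows up in the NameNode log.
        try {
            fs.delete(parent, false);
        } catch (IOException e) {
            // Expected while other recovery files are still present
            // (surfaces as PathIsNotEmptyDirectoryException, possibly
            // wrapped in a RemoteException). The directory is kept.
        }
    }
}

So the exception on the NameNode side is the normal "directory still in use, keep it" outcome of that second call, not a failed cleanup of the files themselves.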
Greetings,
Stephan

On Tue, Oct 11, 2016 at 5:25 PM, static-max wrote:

> Hi,
>
> I get many (multiple times per minute) errors in my Namenode HDFS logfile:
>
> 2016-10-11 17:17:07,596 INFO ipc.Server (Server.java:logException(2401)) - IPC Server handler 295 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.delete from datanode1:34872 Call#2361 Retry#0
> org.apache.hadoop.fs.PathIsNotEmptyDirectoryException: `/flink/recovery is non empty': Directory is not empty
>         at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:89)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3829)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1071)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:619)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
>
> That is the directory I configured for Jobmanager HA. I deleted it before starting the YARN session, but that did not help. The folder gets created by Flink without problems.
>
> I'm using the latest Flink master (commit 6731ec1), built for Hadoop 2.7.3.
>
> Any idea is highly appreciated. Thanks a lot!