Subject: Re: libhdfs force close hdfsFile
From: Chris Nauroth
To: Ken Huang, user@hadoop.apache.org
Date: Fri, 26 Feb 2016 20:55:42 +0000
Hello Ken,

The closest thing to what you're requesting is in the Java API: there is the slightly dodgy, semi-private, we-hope-only-HBase-calls-it method DistributedFileSystem#recoverLease. This is capable of telling the NameNode to recover the lease (and ultimately close the file if necessary) based on any specified path. This method is not exposed through libhdfs though, and just so it's clear, I wouldn't recommend using it even if it was.

When I hear questions like this, it's often because an application is writing to a file at a certain path and there is a desire for recoverability if the application terminates prematurely, such as due to a server crash. Users would like another process to be able to take over right away and start writing to the file again, but the NameNode won't allow this until after expiration of the old client's lease. Is this the use case you had in mind?

If so, then a pattern that can work well is for the application to create and write to a unique temporary file name instead of the final destination path. Then, after writing all data, the application renames the temporary file to the desired final destination. Since the leases are tracked on the file paths being written, the old client's lease on its temporary file won't block the new client from writing to a different temporary file.
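A minimal local-filesystem sketch of that write-to-temp-then-rename pattern, in Python. The helper name is made up for illustration, and the local filesystem stands in for HDFS here; with libhdfs you would use the analogous open/write/rename calls against the cluster.

```python
import os
import uuid

def write_atomically(final_path, data):
    # Write to a unique temporary name so that, in HDFS terms, a dead
    # writer's lease on its temp file never blocks a replacement writer
    # that picks a different temp name for the same final destination.
    tmp_path = "%s.tmp.%s" % (final_path, uuid.uuid4().hex)
    with open(tmp_path, "wb") as f:
        f.write(data)
    # Rename only after all data is written.  Note one semantic
    # difference: os.replace overwrites an existing destination, while
    # HDFS's FileSystem.rename fails if the destination already exists.
    os.replace(tmp_path, final_path)

write_atomically("output.dat", b"all the records")
```

A replacement process that crashes mid-write leaves behind only an orphaned `*.tmp.*` file, which can be garbage-collected later; the final path is either absent or complete.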

--Chris Nauroth

From: Ken Huang <dnionhkx@gmail.com>
Date: Thursday, February 25, 2016 at 5:49 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: libhdfs force close hdfsFile

Hi,

Does anyone know how to close an hdfsFile while the connection between the hdfs client and the NameNode is lost?

Thanks
Ken Huang