hadoop-hdfs-user mailing list archives

From Chris Nauroth <cnaur...@hortonworks.com>
Subject Re: libhdfs force close hdfsFile
Date Fri, 26 Feb 2016 20:55:42 GMT
Hello Ken,

The closest thing to what you're requesting is in the Java API: the slightly dodgy,
semi-private, we-hope-only-HBase-calls-it method DistributedFileSystem#recoverLease.  This
is capable of telling the NameNode to recover the lease (and ultimately close the file if
necessary) for any specified path.  This method is not exposed through libhdfs though,
and just so it's clear, I wouldn't recommend using it even if it were.
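For reference only (again, not a recommendation), a minimal sketch of what calling that method looks like from Java. It assumes hadoop-client on the classpath, a reachable cluster configured via the default Configuration, and a hypothetical path `/data/stuck-file.log`:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoveryExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical path of a file whose writer died holding the lease.
        Path path = new Path("/data/stuck-file.log");
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Asks the NameNode to begin lease recovery.  Returns true if the
            // file is already closed (or recovery completed immediately),
            // false if recovery is still in progress and must be polled.
            boolean closed = dfs.recoverLease(path);
            System.out.println("file closed: " + closed);
        }
    }
}
```

Note that recoverLease is asynchronous from the caller's perspective: a false return means you would have to poll (e.g. via isFileClosed) before another writer can safely open the file.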

When I hear questions like this, it's often because an application is writing to a file at
a certain path and there is a desire for recoverability if the application terminates prematurely,
such as due to a server crash.  Users would like another process to be able to take over right
away and start writing to the file again, but the NameNode won't allow this until after expiration
of the old client's lease.  Is this the use case you had in mind?

If so, then a pattern that can work well is for the application to create and write to a unique
temporary file name instead of the final destination path.  Then, after writing all data,
the application renames the temporary file to the desired final destination.  Since the leases
are tracked on the file paths being written, the old client's lease on its temporary file
won't block the new client from writing to a different temporary file.
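The pattern above can be sketched as follows. This is a sketch, not a drop-in implementation: the paths are hypothetical, and it assumes hadoop-client on the classpath and a running cluster:

```java
import java.io.IOException;
import java.util.UUID;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteThenRename {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical final destination path.
        Path finalPath = new Path("/data/output/part-0000");
        // Unique temporary name: a crashed writer's lease is held on ITS
        // temp file, so it never blocks a new writer using a different one.
        Path tmpPath = new Path(
            "/data/output/.part-0000." + UUID.randomUUID() + ".tmp");
        try (FSDataOutputStream out = fs.create(tmpPath)) {
            out.writeBytes("record data\n");
        }
        // Publish the completed file by renaming it into place.
        if (!fs.rename(tmpPath, finalPath)) {
            throw new IOException("rename failed: " + tmpPath + " -> " + finalPath);
        }
    }
}
```

A restarted process simply picks a fresh temp name and starts over; any half-written temp file left by the crashed writer can be cleaned up later, after its lease expires.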

--Chris Nauroth

From: Ken Huang <dnionhkx@gmail.com>
Date: Thursday, February 25, 2016 at 5:49 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: libhdfs force close hdfsFile


Does anyone know how to close an hdfsFile when the connection between the hdfsClient and
the NameNode is lost?

Ken Huang
