hadoop-hdfs-dev mailing list archives

From Uma Maheswara Rao G 72686 <mahesw...@huawei.com>
Subject Re: FileSystem.close() using Threads !!!
Date Wed, 15 Jun 2011 15:35:48 GMT
Hi Karthik,

  FileSystem caches its instances. The cache key is formed from the URI scheme and
authority in the configuration, so every call to FileSystem.get(conf) with the same
configuration returns the same cached object.

  Because both threads share one object, you can close it only after all operations on it
are finished; closing it in one thread also closes it for the other. So you can't close it
the way you are doing.

  Instead, either:

  (or) have each thread initialize and use its own DistributedFileSystem instance, or

  (or) use FileSystem.newInstance(conf), which bypasses the cache and returns a fresh
instance on every call.

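For example, the newInstance() approach could look like the sketch below. This is a minimal illustration, not tested code: the HDFS paths and class name are placeholders, and it assumes the Hadoop client libraries are on the classpath and a cluster configuration is available.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerThreadFsExample {
    public static void main(String[] args) throws Exception {
        final Configuration conf = new Configuration();

        Runnable task = () -> {
            FileSystem fs = null;
            try {
                // newInstance() bypasses the FileSystem cache, so this object
                // is private to the thread and safe to close in finally.
                fs = FileSystem.newInstance(conf);
                // placeholder paths
                fs.copyToLocalFile(new Path("/user/karthik/src"),
                                   new Path("/tmp/dest"));
                fs.delete(new Path("/user/karthik/src"), true);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try {
                    if (fs != null) fs.close(); // closes only this instance
                } catch (Exception ignore) { }
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```

With FileSystem.get(conf) the same code would fail, because both threads would hold the one cached instance and the first close() would invalidate it for the other thread.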

Regards,
Uma Mahesh 


----- Original Message -----
From: karthik tunga <karthik.tunga@gmail.com>
Date: Wednesday, June 15, 2011 8:39 pm
Subject: FileSystem.close() using Threads !!!
To: hdfs-dev@hadoop.apache.org

> Hi,
> 
> I have 2 threads that copy a file from HDFS and then delete the
> directory after copying the file.
> 
> In both threads I use "FileSystem hdfs = FileSystem.get(conf);"
> Once I finish copying and deleting, I close the filesystem
> (hdfs.close() in the finally block).
> 
> If one of the threads does a FileSystem.close() (while the other thread
> is still copying), the other thread stops copying and throws an error:
> 
> 
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:226)
>         at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:67)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1678)
>         at java.io.FilterInputStream.close(FilterInputStream.java:155)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:58)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1216)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197)
> 
> 
> Should I NOT do FileSystem.close() in the finally block? How do I
> solve this issue?
> 
> 
> Cheers,
> Karthik
> 
