hadoop-hdfs-issues mailing list archives

From "amith (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3545) DFSClient leak due to malfunctioning of FileSystem Cache
Date Tue, 19 Jun 2012 16:19:43 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396896#comment-13396896 ]

amith commented on HDFS-3545:
-----------------------------

Currently, to fix this defect, we would need to return the FileSystem object from the cache
whenever the UGI credentials match, but such a FileSystem object can lead to the wrong token
being used when setting up connections, causing a security-related issue. See HADOOP-6564.
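
For illustration, here is a minimal sketch (hypothetical names, not the actual Hadoop
source) of why two UGI instances for the same user name still miss the cache: the cache
key includes the UGI, and UGI equality is identity-based on the underlying Subject, so
every freshly created UGI produces a distinct key.

    // Minimal sketch (hypothetical names, not the actual Hadoop source) of
    // how an identity-based UGI in the cache key defeats caching.
    import java.net.URI;
    import java.util.HashMap;
    import java.util.Map;

    class FileSystemCacheSketch {

        // Stand-in for UserGroupInformation: no equals()/hashCode()
        // override, so two instances for the same user name are never
        // equal, mirroring UGIs that wrap distinct Subject instances.
        static class Ugi {
            final String userName;
            Ugi(String userName) { this.userName = userName; }
        }

        // Simplified cache key: scheme + authority + UGI.
        static class Key {
            final String scheme;
            final String authority;
            final Ugi ugi;

            Key(URI uri, Ugi ugi) {
                this.scheme = String.valueOf(uri.getScheme());
                this.authority = String.valueOf(uri.getAuthority());
                this.ugi = ugi;
            }

            @Override public boolean equals(Object o) {
                if (!(o instanceof Key)) return false;
                Key k = (Key) o;
                return scheme.equals(k.scheme)
                    && authority.equals(k.authority)
                    && ugi == k.ugi;  // identity comparison: the cache miss
            }

            @Override public int hashCode() {
                return scheme.hashCode() ^ authority.hashCode()
                     ^ System.identityHashCode(ugi);
            }
        }

        static final Map<Key, Object> CACHE = new HashMap<Key, Object>();

        public static void main(String[] args) {
            URI uri = URI.create("hdfs://nn:8020");
            for (int i = 0; i < 3; i++) {
                // A fresh UGI for the same user name on every call, as in
                // the reported usage pattern.
                Key key = new Key(uri, new Ugi("user1"));
                if (!CACHE.containsKey(key)) {
                    CACHE.put(key, new Object());  // stands in for a new DFSClient
                }
            }
            System.out.println(CACHE.size());  // prints 3, not 1: the leak
        }
    }

Relaxing the key to compare user names instead of UGI identity would make the lookups
hit, but, as noted above, could then hand back a FileSystem carrying another UGI's
tokens, which is the HADOOP-6564 concern.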
                
> DFSClient leak due to malfunctioning of FileSystem Cache
> --------------------------------------------------------
>
>                 Key: HDFS-3545
>                 URL: https://issues.apache.org/jira/browse/HDFS-3545
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 2.0.0-alpha, 3.0.0
>            Reporter: amith
>            Priority: Critical
>
> For every call to FileSystem.get, a new FileSystem object is created even though the UGI
> object passed in has the same name. This creates a large number of FileSystem objects that
> are cached in the FileSystem cache instead of reusing the same cached object.
> This causes the cache to grow in size, eventually causing an OOME.
> This behaviour can also be seen in the Mapred and Hive components, since they use
> FileSystem.get in the described fashion.
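
For reference, a minimal repro sketch of the usage pattern described above, assuming the
standard Hadoop client API; the NameNode URI, user name, and loop count are illustrative
only:

    // Hedged repro sketch of the leak pattern; assumes the Hadoop client
    // libraries on the classpath.
    import java.net.URI;
    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;

    public class FileSystemLeakRepro {
        public static void main(String[] args) throws Exception {
            final Configuration conf = new Configuration();
            final URI uri = URI.create("hdfs://namenode:8020");
            for (int i = 0; i < 100000; i++) {
                // createRemoteUser returns a fresh UGI each time, even for
                // the same user name, so every FileSystem.get below misses
                // the cache and constructs a new DFSClient.
                UserGroupInformation ugi =
                    UserGroupInformation.createRemoteUser("user1");
                FileSystem fs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
                    public FileSystem run() throws Exception {
                        return FileSystem.get(uri, conf);
                    }
                });
                // fs is never closed; entries accumulate in the FileSystem
                // cache until the client eventually hits an OOME.
            }
        }
    }

Calling close() on each FileSystem removes its cache entry and works around the growth,
but the underlying problem remains that equal-named UGIs never hit the cache.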


        
