hive-dev mailing list archives

From "Rohini Palaniswamy (JIRA)" <>
Subject [jira] [Commented] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE. (Must cache UGIs.)
Date Wed, 11 Jul 2012 18:01:37 GMT


Rohini Palaniswamy commented on HIVE-3098:

bq. to workaround underlying Filesystem issue, lets just disable the fs.cache via config parameter.
Disabling cache will plug the memory leak by not filling FS cache.
   Disabling fs.cache is not going to matter. We are already getting a new FileSystem object
for every request, and the fs.cache is not used at all. The newly created FileSystem objects
will still sit in memory until they are garbage collected, and for some reason they are not
getting garbage collected (we have not analyzed what else is holding references to them): the
experiments Mithun ran with fs.cache disabled did not make any difference.
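To spell out what disabling the cache does and does not buy, here is a minimal sketch (not code from any patch; it assumes the per-scheme fs.<scheme>.impl.disable.cache property is available in the Hadoop version in use, and the class name is made up):

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCacheDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Bypass FileSystem.CACHE for the file:// scheme (assumes the
    // fs.<scheme>.impl.disable.cache knob exists in this Hadoop version).
    conf.setBoolean("fs.file.impl.disable.cache", true);

    FileSystem fs1 = FileSystem.get(URI.create("file:///"), conf);
    FileSystem fs2 = FileSystem.get(URI.create("file:///"), conf);

    // With the cache disabled every get() hands back a distinct instance;
    // nothing tracks it, so it stays on the heap until close()/GC.
    System.out.println(fs1 == fs2);   // false
    fs1.close();
    fs2.close();
  }
}
{code}

The point being: whether or not the objects sit in FileSystem.CACHE, somebody still has to close them.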

bq. Once FSContext apis are declared stable for other projects to consume, we can switch over
to those where the promise is that underlying problem is fixed.
   As Daryn had mentioned in previous comments, this is not going to solve the problem at all.

We need to fix this. It is not good to have to restart the metastore server every week or
two. I see two possible interim fixes until this is solved in core Hadoop itself for all
applications:
 1) Disable fs.cache and call fs.close() after every filesystem call in the code. If there is
a single place to put the fs.close() after every request is executed, this is easy; otherwise
you would have to add fs.close() in too many places. Either way it is not going to be good
for performance.
 2) Add the fs.close()-after-a-timeout logic to the current patch; a rough sketch follows
below. Mithun was already working on this, and I can't understand why you think it adds too
much complexity. Other applications are already doing this. It will be difficult to explain
to the Ops folks why we have software that needs to be restarted this often.
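To be concrete about (2), something along these lines is what I mean. This is only a rough sketch, not Mithun's patch; the class name, idle timeout and scheduling period are all made up:

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.fs.FileSystem;

/** Rough sketch: close FileSystem handles that have been idle longer than a
 *  timeout, instead of keeping them alive for the lifetime of the metastore. */
public class IdleFsReaper {
  private static final long EXPIRY_MS = 10 * 60 * 1000L;   // assumed 10-minute idle timeout

  private final Map<FileSystem, Long> lastUsed = new ConcurrentHashMap<FileSystem, Long>();
  private final ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();

  public IdleFsReaper() {
    reaper.scheduleWithFixedDelay(new Runnable() {
      public void run() {
        long now = System.currentTimeMillis();
        for (Map.Entry<FileSystem, Long> e : lastUsed.entrySet()) {
          if (now - e.getValue() > EXPIRY_MS) {
            lastUsed.remove(e.getKey());
            try {
              e.getKey().close();   // release the idle handle instead of leaking it
            } catch (IOException ignored) {
            }
          }
        }
      }
    }, 1, 1, TimeUnit.MINUTES);
  }

  /** Request handlers touch() the FileSystem they used after each call. */
  public void touch(FileSystem fs) {
    lastUsed.put(fs, System.currentTimeMillis());
  }
}
{code}
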
> Memory leak from large number of FileSystem instances in FileSystem.CACHE. (Must cache UGIs.)
> ---------------------------------------------------------------------------------------------
>                 Key: HIVE-3098
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: Shims
>    Affects Versions: 0.9.0
>         Environment: Running with Hadoop / 1.0.x with security turned on.
>            Reporter: Mithun Radhakrishnan
>            Assignee: Mithun Radhakrishnan
>         Attachments: HIVE-3098.patch
> The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing the Oracle
> The HCatalog server ran out of memory (-Xmx2048m) when pounded by 60 threads, in under
24 hours. The heap-dump indicates that hadoop::FileSystem.CACHE held 1,000,000 instances of
FileSystem, whose combined retained memory consumed the entire heap.
> It boiled down to hadoop::UserGroupInformation::equals() being implemented such that
the "Subject" member is compared by reference ("=="), not by value (".equals()"). This
causes logically equivalent UGI instances to compare as unequal, and causes a new FileSystem
instance to be created and cached for each one.
> UGI.equals() is implemented that way, incidentally, as a fix for yet another problem
(HADOOP-6670), so it is unlikely that its implementation can be changed.
> The solution for this is to check for UGI equivalence in HCatalog (i.e. in the Hive metastore),
using a cache for UGI instances in the shims.
> I have a patch to fix this. I'll upload it shortly. I just ran an overnight test to confirm
that the memory-leak has been arrested.
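To illustrate the equals()/caching behaviour described in the quoted description, here is a minimal, self-contained sketch. It is not the attached patch: the class and method names are made up, and a real shim would reuse the kerberos/proxy UGI it builds for the request rather than createRemoteUser().

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.security.UserGroupInformation;

public class UgiCacheSketch {
  // Hand back the same UGI instance per user, so FileSystem.CACHE (keyed on
  // scheme, authority and UGI) sees one entry per user, not one per request.
  private static final Map<String, UserGroupInformation> UGI_CACHE =
      new ConcurrentHashMap<String, UserGroupInformation>();

  static UserGroupInformation ugiFor(String user) {
    UserGroupInformation ugi = UGI_CACHE.get(user);
    if (ugi == null) {
      // A real shim would build the kerberos/proxy UGI it normally creates;
      // createRemoteUser() merely stands in for that here.
      UGI_CACHE.putIfAbsent(user, UserGroupInformation.createRemoteUser(user));
      ugi = UGI_CACHE.get(user);
    }
    return ugi;
  }

  public static void main(String[] args) {
    // Two fresh UGIs for the same user are unequal, because UGI.equals()
    // compares the wrapped Subject by reference ("==") ...
    System.out.println(UserGroupInformation.createRemoteUser("hcat")
        .equals(UserGroupInformation.createRemoteUser("hcat")));     // false

    // ... whereas a cached UGI is reused, so repeated FileSystem.get() calls
    // under it resolve to the same cache entry instead of a new FileSystem.
    System.out.println(ugiFor("hcat").equals(ugiFor("hcat")));       // true
  }
}
{code}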

This message is automatically generated by JIRA.

