hadoop-common-dev mailing list archives

From "Marc-Olivier Fleury (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4635) Memory leak ?
Date Wed, 12 Nov 2008 16:23:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12646945#action_12646945 ]

Marc-Olivier Fleury commented on HADOOP-4635:
---------------------------------------------

Well, I think the leak you mentioned, the one that happens at each hdfsConnect, should definitely
be fixed, and if this is the right place to report that issue, we can use it to track the fix.
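
In the meantime, a possible mitigation on the fuse_dfs side might be to connect once and reuse
the handle instead of connecting for every operation. This is only a rough sketch of the idea
against the public libhdfs API (hdfs.h), not the actual fuse_dfs code; the names and the locking
are mine:

#include <pthread.h>
#include "hdfs.h"  /* public libhdfs API */

/* Illustrative cache: connect once and reuse the same handle from every
 * FUSE callback, so whatever hdfsConnect allocates is allocated only once. */
static hdfsFS cached_fs = NULL;
static pthread_mutex_t fs_lock = PTHREAD_MUTEX_INITIALIZER;

static hdfsFS get_fs(const char *nn_host, tPort nn_port)
{
    pthread_mutex_lock(&fs_lock);
    if (cached_fs == NULL)
        cached_fs = hdfsConnect(nn_host, nn_port);
    pthread_mutex_unlock(&fs_lock);
    return cached_fs;
}

That would not fix the underlying leak, of course, it would only keep it from growing with the
number of operations.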

I am still not sure exactly where the leak happens... is it the doConnect function that does
not correctly free what it allocates, or is it hdfsConnectAsUser that has the problem?

Looking at the code of doConnect, it seems that everything is freed (calls to freeGroups and
free(user)). Is that the leak you were mentioning, or is there an issue with getGroups/freeGroups?
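
To narrow it down, I was thinking of driving the connect path in isolation, outside of fuse_dfs,
and watching the process size with top/ps (valgrind tends to be very noisy here because of the
embedded JVM). Something along these lines, with the NameNode host and port as placeholders:

#include <stdio.h>
#include <stdlib.h>
#include "hdfs.h"  /* public libhdfs API */

int main(int argc, char **argv)
{
    int i, iterations = (argc > 1) ? atoi(argv[1]) : 1000;

    for (i = 0; i < iterations; i++) {
        /* "localhost"/9000 are placeholders; use the real NameNode here.
         * Swapping in hdfsConnectAsUser would exercise the doConnect path
         * (user/groups allocation) as well. */
        hdfsFS fs = hdfsConnect("localhost", 9000);
        if (fs == NULL) {
            fprintf(stderr, "hdfsConnect failed at iteration %d\n", i);
            return 1;
        }
        hdfsDisconnect(fs);

        if (i % 100 == 0)
            fprintf(stderr, "%d connects done, check RSS in top/ps\n", i);
    }
    return 0;
}

If the resident size keeps growing across iterations, the leak is in the connect path itself; if
it stays flat, it is more likely in the per-request handling in fuse_dfs.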

Thanks for your help, I really need to fix this problem...

> Memory leak ?
> -------------
>
>                 Key: HADOOP-4635
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4635
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/fuse-dfs
>    Affects Versions: 0.20.0
>            Reporter: Marc-Olivier Fleury
>
> I am running a process that needs to crawl a tree structure containing ~10K images, copy
> the images to the local disk, process them, and copy them back to HDFS.
> My problem is the following: after about 10 hours of processing, the processes crash with
> a std::bad_alloc exception (I use Hadoop Pipes to run existing software). When running
> fuse_dfs in debug mode, I get an OutOfMemoryError indicating that there is no more room in
> the heap.
> While the process is running, top and ps show that fuse_dfs uses an increasing amount of
> memory until some limit is reached; at that point the memory usage oscillates, which I
> assume is due to swapping to virtual memory.
> This leads me to conclude that there is a memory leak in fuse_dfs, since the only other
> programs running are Hadoop and the existing software, both thoroughly tested in the past.
> My problem is that my knowledge of memory leak tracking is rather limited, so I will need
> some instructions to get more insight into this issue.
> Thank you

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

