hadoop-hdfs-issues mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-596) Memory leak in libhdfs: hdfsFreeFileInfo() in libhdfs does not free memory for mOwner and mGroup
Date Sun, 15 Nov 2009 04:18:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778049#action_12778049 ]

Eli Collins commented on HDFS-596:

Hey Christian,

Thanks for bumping the priority, and sorry this bit you. I filed HDFS-773 to add memory leak
checking to the existing libhdfs unit tests.

The uploaded patch looks good to me. I confirmed that it applies to 20.1 and that the unit tests
pass. I also tested that it fixes the leaks in hdfsListDirectory by writing a program that calls
it (and hdfsFreeFileInfo) in a tight loop on a directory with 1000 files; after applying the
patch, memory usage no longer grows without bound. I've checked it into the next CDH release.
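
For reference, a minimal sketch of that test program (the host/port arguments and the
directory path are illustrative; it assumes a directory pre-populated with 1000 files,
and you watch the process's memory usage in something like top while it runs):

    /* leak_test.c -- hammer hdfsListDirectory/hdfsFreeFileInfo and watch RSS.
     * Without the patch, memory grows steadily; with it, usage stays flat. */
    #include <stdio.h>
    #include "hdfs.h"

    int main(void) {
        int i;
        /* "default"/0 connects to the default FS from the Hadoop config */
        hdfsFS fs = hdfsConnect("default", 0);
        if (!fs) {
            fprintf(stderr, "hdfsConnect failed\n");
            return 1;
        }
        for (i = 0; i < 100000; i++) {
            int numEntries = 0;
            /* /test/manyfiles is a hypothetical pre-populated directory */
            hdfsFileInfo *entries = hdfsListDirectory(fs, "/test/manyfiles", &numEntries);
            if (entries) {
                hdfsFreeFileInfo(entries, numEntries);
            }
        }
        hdfsDisconnect(fs);
        return 0;
    }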

I also confirmed that the patch applies, builds, and that hdfs_test runs on trunk (hdfs_test
currently fails on trunk, but that's a separate jira).

Let's check this patch into 20.2 and trunk.
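
For context on why all three frees are needed: each entry returned by hdfsListDirectory
carries three heap-allocated strings, and hdfsFreeFileInfo was releasing only the first.
The relevant fields of hdfsFileInfo in hdfs.h (excerpted; other fields omitted):

    typedef struct {
        /* ... */
        char *mName;   /* file name; heap-allocated by libhdfs */
        /* ... */
        char *mOwner;  /* file owner; also heap-allocated      */
        char *mGroup;  /* file group; also heap-allocated      */
        /* ... */
    } hdfsFileInfo;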


> Memory leak in libhdfs: hdfsFreeFileInfo() in libhdfs does not free memory for mOwner and mGroup
> ------------------------------------------------------------------------------------------------
>                 Key: HDFS-596
>                 URL: https://issues.apache.org/jira/browse/HDFS-596
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: contrib/fuse-dfs
>    Affects Versions: 0.20.1
>         Environment: Linux hadoop-001 2.6.28-14-server #47-Ubuntu SMP Sat Jul 25 01:18:34 UTC 2009 i686 GNU/Linux. Namenode with 1GB memory.
>            Reporter: Zhang Bingjun
>            Priority: Blocker
>             Fix For: 0.20.2
>         Attachments: HDFS-596.patch
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
> This bug affects fuse-dfs severely. In my test, about 1GB of memory was exhausted and the
> fuse-dfs mount directory was disconnected after writing 14000 files. This bug is related to
> the memory leak described in HDFS-420: http://issues.apache.org/jira/browse/HDFS-420.
> The bug can be fixed very easily. In the function hdfsFreeFileInfo() in hdfs.c (under
> c++/libhdfs/), change this code block:
>     //Free the mName
>     int i;
>     for (i=0; i < numEntries; ++i) {
>         if (hdfsFileInfo[i].mName) {
>             free(hdfsFileInfo[i].mName);
>         }
>     }
> into:
>     // free mName, mOwner and mGroup
>     int i;
>     for (i=0; i < numEntries; ++i) {
>         if (hdfsFileInfo[i].mName) {
>             free(hdfsFileInfo[i].mName);
>         }
>         if (hdfsFileInfo[i].mOwner) {
>             free(hdfsFileInfo[i].mOwner);
>         }
>         if (hdfsFileInfo[i].mGroup) {
>             free(hdfsFileInfo[i].mGroup);
>         }
>     }
> I am new to Jira and haven't figured out a way to generate a .patch file yet. Could anyone
> help me do that so that others can commit the changes into the code base? Thanks!
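
(For what it's worth: from a Subversion checkout of the Hadoop source, which the project
used at the time, a patch file like the attached one is typically produced by running
"svn diff > HDFS-596.patch" from the top of the source tree.)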

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
