hadoop-hdfs-issues mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-596) Memory leak in libhdfs: hdfsFreeFileInfo() in libhdfs does not free memory for mOwner and mGroup
Date Sun, 15 Nov 2009 17:33:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778128#action_12778128 ]

Eli Collins commented on HDFS-596:
----------------------------------

Here's the program in case you want to re-test.

{code}
#include <errno.h>
#include <stdio.h>

#include "hdfs.h"

/* Lists "/" in an endless loop; run it while watching the process's
 * memory footprint to check whether hdfsFreeFileInfo() leaks. */
int main(void) {
  hdfsFS fs;
  hdfsFileInfo *infos;
  int numInfos;

  fs = hdfsConnect("localhost", 8020);
  if (!fs) {
    perror("hdfsConnect");
    return -1;
  }

  while (1) {
    if ((infos = hdfsListDirectory(fs, "/", &numInfos)) != NULL) {
      hdfsFreeFileInfo(infos, numInfos);
    } else if (errno) {
      perror("hdfsListDirectory");
      return -1;
    }
  }

  hdfsDisconnect(fs);  /* unreachable; the loop above never exits */
  return 0;
}
{code}
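The freeing pattern under test can also be sanity-checked without a running cluster. The sketch below is my own standalone mock, not the real libhdfs code: {{FileInfoSketch}}, {{dupCounted}}, and {{leakedAfterFree}} are illustrative names. It counts live heap strings and verifies that a free routine which releases all three fields (mName, mOwner, mGroup), as the proposed patch does, leaves nothing behind.

{code}
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for hdfsFileInfo's three heap-allocated strings. */
typedef struct {
    char *mName;
    char *mOwner;
    char *mGroup;
} FileInfoSketch;

static int liveStrings = 0;  /* strings allocated but not yet freed */

static char *dupCounted(const char *s) {
    size_t n = strlen(s) + 1;
    char *p = malloc(n);
    if (p) { memcpy(p, s, n); liveStrings++; }
    return p;
}

static void freeCounted(char *p) {
    if (p) { free(p); liveStrings--; }
}

/* Mirrors the patched hdfsFreeFileInfo(): all three fields are freed,
 * not just mName. */
static void freeFileInfoSketch(FileInfoSketch *infos, int numEntries) {
    int i;
    for (i = 0; i < numEntries; ++i) {
        freeCounted(infos[i].mName);
        freeCounted(infos[i].mOwner);
        freeCounted(infos[i].mGroup);
    }
    free(infos);
}

/* Builds n entries, frees them, and returns the number of leaked strings:
 * 0 with the fix, 2*n if only mName were freed (the pre-patch behavior). */
int leakedAfterFree(int n) {
    FileInfoSketch *infos = malloc((size_t)n * sizeof *infos);
    int i;
    if (!infos) return -1;
    for (i = 0; i < n; ++i) {
        infos[i].mName  = dupCounted("/some/path");
        infos[i].mOwner = dupCounted("hadoop");
        infos[i].mGroup = dupCounted("supergroup");
    }
    freeFileInfoSketch(infos, n);
    return liveStrings;
}

int main(void) {
    assert(leakedAfterFree(14000) == 0);
    return 0;
}
{code}

With the pre-patch loop (freeing only mName) the same harness would report two leaked strings per entry, which matches the growth seen in the fuse-dfs test.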


> Memory leak in libhdfs: hdfsFreeFileInfo() in libhdfs does not free memory for mOwner and mGroup
> ------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-596
>                 URL: https://issues.apache.org/jira/browse/HDFS-596
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: contrib/fuse-dfs
>    Affects Versions: 0.20.1
>         Environment: Linux hadoop-001 2.6.28-14-server #47-Ubuntu SMP Sat Jul 25 01:18:34 UTC 2009 i686 GNU/Linux. Namenode with 1GB memory.
>            Reporter: Zhang Bingjun
>            Assignee: Zhang Bingjun
>            Priority: Blocker
>             Fix For: 0.20.2
>
>         Attachments: HDFS-596.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> This bug affects fuse-dfs severely. In my test, about 1GB of memory was exhausted and the fuse-dfs mount directory was disconnected after writing 14000 files. This bug is related to the memory leak problem of this issue: http://issues.apache.org/jira/browse/HDFS-420.
> The bug can be fixed very easily. In function hdfsFreeFileInfo() in file hdfs.c (under c++/libhdfs/), change the code block:
>     //Free the mName
>     int i;
>     for (i=0; i < numEntries; ++i) {
>         if (hdfsFileInfo[i].mName) {
>             free(hdfsFileInfo[i].mName);
>         }
>     }
> into:
>     // free mName, mOwner and mGroup
>     int i;
>     for (i=0; i < numEntries; ++i) {
>         if (hdfsFileInfo[i].mName) {
>             free(hdfsFileInfo[i].mName);
>         }
>         if (hdfsFileInfo[i].mOwner){
>             free(hdfsFileInfo[i].mOwner);
>         }
>         if (hdfsFileInfo[i].mGroup){
>             free(hdfsFileInfo[i].mGroup);
>         }
>     }
> I am new to Jira and haven't figured out a way to generate a .patch file yet. Could anyone help me do that so that others can commit the changes into the code base? Thanks!

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

