hadoop-common-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-9182) the buffer used in hdfsRead seems leaks when the thread exits
Date Mon, 07 Jan 2013 18:24:13 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546119#comment-13546119 ]

Todd Lipcon commented on HADOOP-9182:

We don't do anything thread-local that could cause this. My guess is you're forgetting to
close your files. If you can show a simple standalone reproducing program, we can take a look.
Otherwise I'm inclined to mark this "cannot reproduce".
> the buffer used in hdfsRead seems leaks when the thread exits
> -------------------------------------------------------------
>                 Key: HADOOP-9182
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9182
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: filecache
>         Environment: Linux RHEL x64
>            Reporter: dingyichuan
> I use multiple threads in my C++ program to download 3000 files from HDFS using libhdfs.
> Each thread is created with "pthread_create", downloads one file, and exits. We monitored
> memory usage while the program was running. It appears that every thread allocates a buffer
> whose size is given by the bufferSize parameter of "hdfsOpenFile", but when the thread
> finishes its task and exits, the buffer is not freed. As a result, our program eventually
> aborts with a Java "out of memory" error. I don't know how to free the buffer, or perhaps
> I am using these functions incorrectly. Thanks!

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
