hadoop-common-issues mailing list archives

From "Uma Maheswara Rao G (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-7456) Connection with RemoteException is not removed from cached HashTable and cause memory leak
Date Mon, 11 Jul 2011 14:04:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063347#comment-13063347 ]

Uma Maheswara Rao G commented on HADOOP-7456:
---------------------------------------------

Looks like Todd has already fixed this bug for the ERROR case, removing the call id in the condition below:
{code} 
} else if (state == Status.ERROR.state) {
{code}
https://issues.apache.org/jira/browse/HADOOP-6833

But in the FATAL state
{code}
} else if (state == Status.FATAL.state) { 
{code}

the connection is only marked as closed.
So markClosed() will do shouldCloseConnection.compareAndSet(false, true).

After receiveResponse(), it will call close().

As per my observation, close() should clean up the pending calls (by invoking the API cleanupCalls())... no?
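To make the argument above concrete, here is a minimal sketch of the lifecycle being described. This is a simplified model, not the real org.apache.hadoop.ipc.Client: the class and field names (MiniConnection, addCall) are invented for illustration, and only the three pieces named in the comment are modeled: markClosed() flipping shouldCloseConnection, close() running once the flag is set, and cleanupCalls() draining the cached calls table. If close() reaches cleanupCalls(), the call cached by the FATAL branch is removed; if it did not, the entry would stay cached, which is the leak the issue reports.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified model of the connection lifecycle discussed in the comment.
// NOT the actual Hadoop ipc.Client code -- names and structure are assumptions.
class MiniConnection {
    // Stands in for the "calls" hashtable that grew to 200k entries.
    final ConcurrentHashMap<Integer, String> calls = new ConcurrentHashMap<>();
    final AtomicBoolean shouldCloseConnection = new AtomicBoolean(false);

    void addCall(int id, String description) {
        calls.put(id, description);
    }

    // Mirrors markClosed(): only flips the flag, removes nothing from calls.
    void markClosed() {
        shouldCloseConnection.compareAndSet(false, true);
    }

    // Mirrors close() as the comment reads it: once the connection is marked
    // closed, cleanupCalls() should drain the pending calls table. Skipping
    // this step would leave every FATAL-branch call cached forever.
    void close() {
        if (shouldCloseConnection.get()) {
            cleanupCalls();
        }
    }

    void cleanupCalls() {
        calls.clear();
    }
}

public class FatalStateSketch {
    public static void main(String[] args) {
        MiniConnection conn = new MiniConnection();
        conn.addCall(1, "exists(/bad/path)");

        conn.markClosed(); // FATAL branch: flag set, call still cached
        if (conn.calls.size() != 1) throw new AssertionError("call should still be cached");

        conn.close(); // close() drains the table via cleanupCalls()
        if (!conn.calls.isEmpty()) throw new AssertionError("close() should have cleaned up calls");

        System.out.println("calls drained after close(): " + conn.calls.isEmpty());
    }
}
```

In this model the leak only occurs if markClosed() is reached but close()/cleanupCalls() never runs for that connection, which matches the question the comment is asking.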


> Connection with RemoteException is not removed from cached HashTable and cause memory leak
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-7456
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7456
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.20.2
>            Reporter: Angelo K. Huang
>
> In a long running system like Oozie, we use Hadoop client APIs, such as FileSystem.exists(),
> to check whether files exist on HDFS before kicking off a user job. But in a production
> environment, users sometimes give wrong or invalidly formatted file/directory paths. In that
> case, after the server had been up for a couple of days, we found that around 80% of memory
> was taken up by Hadoop ipc client connections. In one of the connections, a hashtable
> contained 200k entries. We cross-checked the Hadoop code and found that in
> org.apache.hadoop.ipc.Client.receiveResponse(), if the state is FATAL, the call object is
> not removed from the hashtable (calls) and stays in memory until the system throws an
> OutOfMemory error or crashes. The code in question is here:
> * org.apache.hadoop.ipc.Client.receiveResponse()
>  } else if (state == Status.FATAL.state) {
>           // Close the connection
>           markClosed(new RemoteException(WritableUtils.readString(in), 
>                                          WritableUtils.readString(in)));
>  }

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
