hadoop-common-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6460) Namenode runs out of memory due to memory leak in ipc Server
Date Wed, 23 Dec 2009 00:25:29 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12793854#action_12793854
] 

Suresh Srinivas commented on HADOOP-6460:
-----------------------------------------

I think having a separate class is a good idea. The current behavior is: when the buffer has
grown large, we make a copy of its contents to send as the response, then release the large
buffer in the stream and create a new small one. I am thinking of adding an optimization later
where the buffer in the stream is used directly for sending the response (no copy created) and
a smaller buffer is created for the stream to continue with. This also gives me access to the
capacity of the buffer that is being used.

Regarding printing the warning: would we be hitting response sizes of more than 1MB very
often? If so, the logic of resetting the buffer back to 10240 bytes does not seem like a good
idea. I am hoping the log will give a good picture of the buffer sizes for large responses,
and based on that we can make additional tweaks.
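The copy-then-shrink behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual o.a.h.ipc.Server code; the class name, the 10240-byte initial size, and the 1MB reset threshold are assumptions taken from the discussion:

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch of the discussed fix: after sending a response,
// a buffer that grew past a threshold is discarded and replaced with a
// small one, so the large backing array can be garbage-collected
// instead of being retained by the handler thread (the leak).
public class ResponseBufferSketch {
    static final int INITIAL_SIZE = 10240;       // assumed initial capacity
    static final int RESET_THRESHOLD = 1 << 20;  // assumed 1MB threshold

    private ByteArrayOutputStream buf = new ByteArrayOutputStream(INITIAL_SIZE);

    public void write(byte[] data) {
        buf.write(data, 0, data.length);
    }

    /** Returns the response bytes; shrinks the buffer if it grew too large. */
    public byte[] takeResponse() {
        byte[] response = buf.toByteArray();     // the copy mentioned in the comment
        if (response.length > RESET_THRESHOLD) {
            // Drop the oversized buffer and start over with a small one.
            buf = new ByteArrayOutputStream(INITIAL_SIZE);
        } else {
            buf.reset();                         // keep the existing backing array
        }
        return response;
    }
}
```

The later optimization Suresh mentions would instead hand the large buffer itself to the response path and give the stream a fresh small buffer, avoiding the `toByteArray()` copy entirely.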

> Namenode runs out of memory due to memory leak in ipc Server
> ------------------------------------------------------------
>
>                 Key: HADOOP-6460
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6460
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.20.1, 0.21.0, 0.22.0
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>            Priority: Blocker
>             Fix For: 0.20.2, 0.21.0, 0.22.0
>
>         Attachments: hadoop-6460.1.patch, hadoop-6460.patch
>
>
> Namenode heap usage grows disproportionately to the number of objects it supports (files,
> directories and blocks). Based on heap dump analysis, this is due to large growth in the
> ByteArrayOutputStream allocated in o.a.h.ipc.Server.Handler.run().

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

