hadoop-common-issues mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6460) Namenode runs out of memory due to memory leak in ipc Server
Date Wed, 23 Dec 2009 06:27:29 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12793949#action_12793949

Raghu Angadi commented on HADOOP-6460:

bq. As regards printing the warning, would we be hitting response sizes of more than 1MB
very often? If so, the logic of resetting the buffer back to 10240 does not seem like
a good idea. Hoping that the log would give a good idea of the buffer size for large
responses, and based on that we could make additional tweaks.

In the case of NN, the log is fine. Not sure if HBase clients still fetch data over RPC;
if they do, those servers could see a lot of warnings with some tables.
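The logic under discussion can be sketched as follows. This is a minimal illustration, not Hadoop's actual code: the constant names, the 1MB threshold, and the plain stderr warning are all hypothetical stand-ins for what the patch does inside `o.a.h.ipc.Server.Handler`.

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch of the per-handler response buffer management
// discussed above: reuse the buffer while it stays small, but once a
// response pushes it past a threshold, log a warning and replace it
// with a fresh small buffer so the handler does not pin a large byte[]
// for the lifetime of the server. Constant names are illustrative.
public class ResponseBufferSketch {
    static final int INITIAL_RESP_BUF_SIZE = 10240;        // 10 KB
    static final int MAX_RESP_BUF_SIZE = 1024 * 1024;      // 1 MB threshold

    private ByteArrayOutputStream buf =
        new ByteArrayOutputStream(INITIAL_RESP_BUF_SIZE);

    byte[] serializeResponse(byte[] payload) {
        buf.reset();
        buf.write(payload, 0, payload.length);
        byte[] response = buf.toByteArray();
        if (buf.size() > MAX_RESP_BUF_SIZE) {
            // An unusually large response grew the buffer; discard it
            // rather than keep the oversized backing array around.
            System.err.println("WARN: large response of " + buf.size()
                + " bytes; resetting buffer to " + INITIAL_RESP_BUF_SIZE);
            buf = new ByteArrayOutputStream(INITIAL_RESP_BUF_SIZE);
        }
        return response;
    }
}
```

The warning is what the comment above refers to: for the NN it would fire rarely, but a server whose clients routinely move megabytes per RPC (the HBase scenario) could emit it constantly.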

> Namenode runs out of memory due to memory leak in ipc Server
> -------------------------------------------------------------
>                 Key: HADOOP-6460
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6460
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.20.1, 0.21.0, 0.22.0
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>            Priority: Blocker
>             Fix For: 0.20.2, 0.21.0, 0.22.0
>         Attachments: hadoop-6460.1.patch, hadoop-6460.patch
> Namenode heap usage grows disproportionately to the number of objects it supports (files,
> directories and blocks). Based on heap dump analysis, this is due to large growth of the
> ByteArrayOutputStream allocated in o.a.h.ipc.Server.Handler.run().
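The growth the heap dump shows follows from how ByteArrayOutputStream behaves: reset() zeroes the count but keeps the backing array, so one large response permanently enlarges a reused buffer. Since the backing array is a protected field, a small subclass (hypothetical, for illustration only) can observe this directly:

```java
import java.io.ByteArrayOutputStream;

// ByteArrayOutputStream exposes its backing array as the protected
// field 'buf', so a subclass can report the retained capacity.
class InspectableBuffer extends ByteArrayOutputStream {
    InspectableBuffer(int size) { super(size); }
    int capacity() { return buf.length; }
}

public class BufRetentionDemo {
    public static void main(String[] args) {
        InspectableBuffer out = new InspectableBuffer(10240);
        byte[] bigResponse = new byte[16 * 1024 * 1024];
        out.write(bigResponse, 0, bigResponse.length); // buffer grows to fit
        out.reset(); // size drops to 0, but the backing array is retained
        System.out.println("size after reset = " + out.size());
        System.out.println("capacity after reset = " + out.capacity());
    }
}
```

With one such buffer per IPC handler thread, a burst of large responses raises the server's steady-state heap usage even though the streams are reset between calls.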

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
