hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2975) IPC server should not allocate a buffer for each request
Date Mon, 02 Jun 2008 18:47:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12601715#action_12601715
] 

Hairong Kuang commented on HADOOP-2975:
---------------------------------------

I guess the simplest solution is to set the buffer size to something like 512K.

If we want to be more accurate, we could run the gridmix or sort benchmarks on a large cluster
(for example, 200 nodes), logging the size of every RPC request, and then choose the buffer
size after examining the size distribution. In Hadoop, I think the spikes in RPC request size
come from block reports; other RPCs should be of a reasonable size.
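The sizing approach above can be sketched as picking a percentile of the logged request sizes and rounding up to a power of two, so the common case fits without covering rare block-report spikes. This is only an illustration of the idea; the class and method names are hypothetical, not from the Hadoop code base, and the sample sizes are made up.

```java
import java.util.Arrays;

public class BufferSizing {
    // Pick a buffer size that covers the given fraction of logged request
    // sizes, rounded up to the next power of two. Illustrative only.
    static int chooseBufferSize(int[] requestSizes, double coverage) {
        int[] sorted = requestSizes.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(coverage * sorted.length) - 1;
        int target = sorted[Math.max(idx, 0)];
        int size = 1;
        while (size < target) size <<= 1;  // round up to a power of two
        return size;
    }

    public static void main(String[] args) {
        // Mostly small RPCs plus one large block-report-like spike.
        int[] sizes = {200, 350, 500, 800, 1200, 4000, 1500000};
        System.out.println(chooseBufferSize(sizes, 0.8)); // → 4096
    }
}
```

With 80% coverage the spike is ignored and the buffer stays small; covering 100% of requests would force a ~2 MB buffer for every connection.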

> IPC server should not allocate a buffer for each request
> --------------------------------------------------------
>
>                 Key: HADOOP-2975
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2975
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 0.16.0
>            Reporter: Hairong Kuang
>         Attachments: Hadoop-2975-v1.patch
>
>
> Currently the IPC server allocates a buffer for each incoming request. The buffer is
> thrown away after the request is deserialized. This leads to very inefficient heap
> utilization. It would be nicer if all requests from one connection could share a common
> buffer, since the IPC server reads only one request from a socket at a time.
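The buffer reuse the issue proposes can be sketched as a per-connection buffer that is cleared between requests and grows only when a request exceeds its capacity. This is a minimal illustration under the issue's stated assumption (one request per connection in flight at a time); the class and method names are hypothetical and do not reflect Hadoop's actual IPC code.

```java
import java.nio.ByteBuffer;

// Per-connection reusable request buffer. Safe only because the server
// reads at most one request from a given socket at a time.
public class ConnectionBuffer {
    private ByteBuffer data = ByteBuffer.allocate(1024); // initial capacity is a guess

    // Return a buffer large enough for the next request, reusing the
    // existing allocation instead of allocating one per request.
    ByteBuffer forRequest(int requestLength) {
        if (data.capacity() < requestLength) {
            data = ByteBuffer.allocate(requestLength); // grow only on demand
        }
        data.clear();               // reset position for the new request
        data.limit(requestLength);  // expose exactly the request's bytes
        return data;
    }
}
```

Small requests on the same connection then share one allocation, so the heap churn moves from per-request to per-connection (plus occasional growth for large requests such as block reports).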

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

