hadoop-common-dev mailing list archives

From "Ankur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2975) IPC server should not allocate a buffer for each request
Date Wed, 04 Jun 2008 06:22:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12602184#action_12602184 ]

Ankur commented on HADOOP-2975:

Hairong, thanks for the link. 

Yes, Java indeed does a pretty good job of allocating small objects that are created frequently,
but we need to be careful about too many and too frequent allocations, since garbage collection
would then start incurring a noticeable cost and cause a performance penalty. I am assuming
that will be the case on a heavily loaded RPC server. So it does make sense to allocate
a buffer whose size is larger than a typical RPC request.

On the other hand, for bigger RPC requests like block reports we can get away with allocating
a larger buffer and shrinking it when it is no longer required, since block report RPC requests
are relatively infrequent.

Since most RPC requests fall in the 1 KB range, it makes sense to use a default buffer size of 2 KB.

Attached is the new version of the patch that uses a default buffer size of 2 KB.  
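To illustrate the idea (this is only a sketch, not code from the patch; the class and method names are hypothetical): a per-connection buffer starts at the 2 KB default, grows only when an oversized request such as a block report arrives, and shrinks back afterwards so that infrequent large requests do not pin a large buffer on the heap.

```java
// Hypothetical sketch of the grow-and-shrink buffer strategy discussed above.
// Names (ConnectionBuffer, bufferFor, reset) are illustrative, not from the patch.
public class ConnectionBuffer {
    // Default covers typical RPC requests, which are mostly in the 1 KB range.
    private static final int DEFAULT_SIZE = 2 * 1024;
    private byte[] data = new byte[DEFAULT_SIZE];

    /** Returns a buffer of at least requestLen bytes, reusing the default one when possible. */
    public byte[] bufferFor(int requestLen) {
        if (requestLen > data.length) {
            data = new byte[requestLen]; // grow only for oversized requests (e.g. block reports)
        }
        return data;
    }

    /** Shrink back to the default size once an oversized request has been handled. */
    public void reset() {
        if (data.length > DEFAULT_SIZE) {
            data = new byte[DEFAULT_SIZE];
        }
    }

    public int capacity() {
        return data.length;
    }
}
```

Since the server reads only one request per connection at a time, a single such buffer per connection would suffice.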

We still need to evaluate the performance improvement from this patch (if any) to test our
ideas. It would be best if we could run gridmix or the sort benchmark on a large cluster in
pre- and post-patch environments to collect some performance numbers and typical RPC request
sizes. I have a small 10-node cluster with really old machines and not a lot of disk for
HDFS (200 GB for the whole cluster), so I guess this will need external help.

Suggestions ?

> IPC server should not allocate a buffer for each request
> --------------------------------------------------------
>                 Key: HADOOP-2975
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2975
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 0.16.0
>            Reporter: Hairong Kuang
>         Attachments: Hadoop-2975-v1.patch, Hadoop-2975-v2.patch, Hadoop-2975-v3.patch
> Currently the IPC server allocates a buffer for each incoming request. The buffer is
thrown away after the request is serialized. This leads to very inefficient heap utilization.
It would be nicer if all requests from one connection could share the same common buffer,
since the IPC server reads only one request from a socket at a time.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
