hadoop-common-dev mailing list archives

From "Ankur (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-2975) IPC server should not allocate a buffer for each request
Date Tue, 03 Jun 2008 07:26:45 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

Ankur updated HADOOP-2975:

    Attachment: Hadoop-2975-v2.patch

Here's the new version of the patch. It implements the suggested changes, with one slight difference.

- The buffer size is set to 1024 KB instead of the 512 KB originally suggested.

If that seems to be on the higher side, it can be changed by simply setting the newly defined DATA_BUFFER_SIZE
in the nested Connection class.

I still feel it would be valuable to collect typical RPC request sizes by running sort or gridmix
on a large cluster and to document the results somewhere (wiki?).

For now I think we should be fine with this, since tuning is only a matter of adjusting DATA_BUFFER_SIZE.
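To illustrate the idea, here is a minimal sketch of per-connection buffer reuse. The class and constant names (Connection, DATA_BUFFER_SIZE) mirror the names mentioned above, but the code itself is illustrative, not the actual patch: it assumes a simple policy of reusing the shared buffer whenever the incoming request fits, and falling back to a one-off allocation for oversized requests.

```java
// Hypothetical sketch of per-connection buffer reuse; not the actual
// Hadoop-2975-v2.patch code.
public class Connection {
    // Reusable per-connection buffer (1024 KB, as proposed above).
    static final int DATA_BUFFER_SIZE = 1024 * 1024;

    private final byte[] data = new byte[DATA_BUFFER_SIZE];

    // Returns a buffer large enough for the request: reuses the shared
    // buffer when the request fits (no per-request allocation), and
    // allocates a temporary buffer only for rare oversized requests.
    byte[] bufferFor(int requestLength) {
        if (requestLength <= data.length) {
            return data;
        }
        return new byte[requestLength];
    }

    public static void main(String[] args) {
        Connection conn = new Connection();
        // Two requests on the same connection share the same buffer.
        System.out.println(conn.bufferFor(512) == conn.bufferFor(4096));
        // An oversized request gets its own temporary allocation.
        System.out.println(conn.bufferFor(2 * DATA_BUFFER_SIZE) == conn.bufferFor(512));
    }
}
```

Since the IPC server reads only one request per socket at a time, a single buffer per connection is safe without additional synchronization.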

> IPC server should not allocate a buffer for each request
> --------------------------------------------------------
>                 Key: HADOOP-2975
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2975
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 0.16.0
>            Reporter: Hairong Kuang
>         Attachments: Hadoop-2975-v1.patch, Hadoop-2975-v2.patch
> Currently the IPC server allocates a buffer for each incoming request. The buffer is
> thrown away after the request is deserialized. This leads to very inefficient heap utilization.
> It would be nicer if all requests from one connection could share the same buffer, since
> the IPC server reads only one request from a socket at a time.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
