hadoop-hdfs-dev mailing list archives

From Dhruba Borthakur <dhr...@gmail.com>
Subject Re: socket buffer sizes hardcoded
Date Fri, 04 Sep 2009 17:55:58 GMT
Making it configurable seems like a good thing. There is a JIRA (owned by
Sanjay) describing that some of these client-side configuration variables
might become "undocumented"; this means that they might change semantics
from one release to another.


On Wed, Sep 2, 2009 at 7:45 PM, Jay Kreps <jay.kreps@gmail.com> wrote:

> Hey Guys,
> I am interested in increasing the throughput of an HDFS read while
> transferring data between datacenters that are geographically far
> apart and hence have a network latency of around 60ms. I see in the
> HDFS code that the DFSClient and DataNode seem to hardcode their
> socket buffer sizes to 128KB (DFSClient.createBlockOutputStream and
> DataNode.startDataNode). Is there a reason for this?
> I want to expose this value as a configurable property so that when i
> read over the high-latency link I can set the ideal buffer size for
> this particular application (around 800KB for our desired bandwidth).
> Is there a reason this is not done currently? Would you take a patch
> that added this property? Am I looking at totally the wrong code?
> -Jay
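For context, the ~800 KB figure above is what the bandwidth-delay product (BDP) rule gives: to keep a link busy, the socket buffer must hold at least one round-trip's worth of data. A minimal sketch of that arithmetic (the class and method names are illustrative, not taken from the HDFS code base):

```java
// Sketch: sizing a TCP socket buffer by bandwidth-delay product (BDP).
// Names here are illustrative; this is not HDFS code.
public class BdpSizing {

    // Minimum socket buffer (bytes) needed to sustain the given
    // bandwidth (bytes/sec) over a link with the given round-trip time (sec).
    static long bufferSizeFor(double bandwidthBytesPerSec, double rttSec) {
        return (long) Math.ceil(bandwidthBytesPerSec * rttSec);
    }

    public static void main(String[] args) {
        // 60 ms RTT; a target of roughly 13.6 MB/s (~109 Mbit/s)
        // requires about 800 KiB of buffer -- matching the figure above.
        long buf = bufferSizeFor(13_653_334, 0.060);
        System.out.println(buf + " bytes"); // ~800 KiB
    }
}
```

By the same rule, the hardcoded 128 KB buffer caps a 60 ms link at roughly 128 KB / 0.060 s ≈ 2.1 MB/s per stream, which is why the buffer size matters so much for cross-datacenter reads.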
