hadoop-common-user mailing list archives

From Allen Wittenauer <awittena...@linkedin.com>
Subject Re: Why is the default packet size in HDFS 64k?
Date Tue, 08 Jun 2010 01:18:50 GMT

You may find https://issues.apache.org/jira/browse/HADOOP-1702 enlightening.
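
The knob in question is the client-side dfs.write.packet.size property, which the 0.20-era DFSClient reads with a 65536-byte (64k) default. Below is a minimal sketch of overriding it, assuming that Java client API; the 128k value and the /tmp path are only illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PacketSizeDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // DFSClient buffers this many bytes before shipping one packet
            // down the datanode write pipeline; 65536 is the shipped default.
            conf.setInt("dfs.write.packet.size", 128 * 1024); // illustrative 128k

            FileSystem fs = FileSystem.get(conf);
            FSDataOutputStream out = fs.create(new Path("/tmp/packet-size-demo"));
            out.write(new byte[1024 * 1024]); // 1 MB write, split into packets
            out.close();
            fs.close();
        }
    }

Whether a larger value actually helps is workload-dependent, which is part of why the discussion in that JIRA is worth reading.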


On Jun 6, 2010, at 6:25 PM, ChingShen wrote:

> No, I meant the default packet size in HDFS, not the block size.
> 
> On Mon, Jun 7, 2010 at 9:19 AM, Kevin Tse <kevintse.onjee@gmail.com> wrote:
> 
>> Do you mean data blocks in HDFS? Take a look at the "Data Block" section
>> of http://hadoop.apache.org/common/docs/r0.19.1/hdfs_design.html
>> 
>> On Mon, Jun 7, 2010 at 8:59 AM, ChingShen <chingshenchen@gmail.com> wrote:
>> 
>>> Hi all,
>>> 
>>> Why is the default packet size in HDFS 64k? How do we know 64k is the
>>> best choice for all platforms?
>>> 
>>> Thanks a lot.
>>> 
>>> Shen
>>> 
>> 

