hadoop-common-user mailing list archives

From "Naama Kraus" <naamakr...@gmail.com>
Subject Underlying file system Block size
Date Mon, 30 Jun 2008 19:10:08 GMT
Hi All,

To my knowledge, the HDFS block size is 64 MB, which is fairly large. Is a large
block size a requirement on a file system if one wishes to implement Hadoop on
top of it? Or is there a way to get along with a file system that supports a
smaller block size, such as 1 MB or even less? What is the case for the existing,
non-HDFS file system implementations used by Hadoop (such as S3 and KFS)?
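
For context, the 64 MB figure is the HDFS default and is just a configuration
property, not something inherited from the underlying disk's file system. A
minimal sketch of how it is usually set, assuming a standard hadoop-site.xml
on the cluster, looks like this:

  <!-- hadoop-site.xml: default block size for newly created files, in bytes -->
  <property>
    <name>dfs.block.size</name>
    <!-- 67108864 = 64 MB; a smaller value such as 1048576 (1 MB) is also accepted -->
    <value>67108864</value>
  </property>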

Thanks for any input,

oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo
00 oo 00 oo
"If you want your children to be intelligent, read them fairy tales. If you
want them to be more intelligent, read them more fairy tales." (Albert
