hadoop-common-dev mailing list archives

From "Jinsong Hu" <jinsong...@hotmail.com>
Subject making file system block size bigger to improve hdfs performance ?
Date Mon, 03 Oct 2011 05:05:52 GMT
Hi, There:
  I just had an idea. When we format a disk, the filesystem block size is 
usually 1K to 4K, while for HDFS the block size is usually 64M. I wonder: 
if we changed the raw file system's block size to something significantly 
bigger, say 1M or 8M, would that improve disk IO performance for Hadoop's 
HDFS?
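  To make the distinction concrete, here is a minimal sketch (assuming a 
stock Hadoop client on the classpath; the path and the 128M figure are 
made-up examples) showing that the HDFS block size is just a per-file 
parameter the client chooses at create time, independent of whatever block 
size the underlying disks were formatted with:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Cluster-wide default HDFS block size (dfs.block.size), e.g. 64M.
    System.out.println("hdfs default block size: "
        + fs.getDefaultBlockSize() + " bytes");

    // The HDFS block size can also be set per file at create time, no
    // matter what allocation unit the DataNodes' local disks use.
    long hdfsBlockSize = 128L * 1024 * 1024;           // hypothetical 128M
    FSDataOutputStream out = fs.create(new Path("/tmp/big-block-file"),
        true,                                          // overwrite
        conf.getInt("io.file.buffer.size", 4096),      // client buffer size
        fs.getDefaultReplication(),                    // replication factor
        hdfsBlockSize);
    out.close();
  }
}

The raw disk's block size, by contrast, is fixed when the filesystem is 
created with mkfs, and can be checked with something like "stat -f" on the 
DataNode's data directory.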
  I noticed that the MapR distribution uses MFS, its own file system, which 
resulted in about a 4x performance gain in terms of disk IO. I wonder 
whether, by tuning the host OS parameters, we could achieve better disk IO 
performance with just the regular Apache Hadoop distribution.
  I understand that making the block size bigger can waste some disk space 
on small files. However, for disks dedicated to HDFS, where most of the 
files are very big, I wonder whether it would be a good idea. Does anybody 
have any comments?
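  Just to put rough numbers on the waste I mean, here is some illustrative 
arithmetic comparing a 4K and an 8M allocation unit (the sample file sizes 
are made up):

public class BlockWaste {
  // Slack left unused in the last allocated block of a file.
  static long waste(long fileSize, long blockSize) {
    long blocksAllocated = (fileSize + blockSize - 1) / blockSize;
    return blocksAllocated * blockSize - fileSize;
  }

  public static void main(String[] args) {
    long[] fileSizes  = {1024, 100 * 1024, 64L * 1024 * 1024}; // 1K, 100K, 64M
    long[] blockSizes = {4096, 8L * 1024 * 1024};              // 4K vs 8M
    for (long bs : blockSizes) {
      for (long f : fileSizes) {
        System.out.printf("file=%d block=%d wasted=%d bytes%n",
            f, bs, waste(f, bs));
      }
    }
  }
}

On average the slack is half a block per file, so an 8M allocation unit only 
really hurts when there are lots of small files on the same disks.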

Jimmy 

