hadoop-common-dev mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: making file system block size bigger to improve hdfs performance ?
Date Mon, 10 Oct 2011 10:48:38 GMT
On 09/10/11 07:01, M. C. Srivas wrote:

> If you insist on HDFS, try using XFS underneath, it does a much better job
> than ext3 or ext4 for Hadoop in terms of how data is laid out on disk. But
> its memory footprint is at least twice that of ext3, so it will gobble up
> a lot more memory on your box.

How stable have you found XFS? I know people have worked a lot on ext4 
and I am using it locally, even if something (VirtualBox) tells me off 
for doing so. I know the Lustre people are using it underneath their DFS, 
and with wide use it does tend to get debugged by others before you use 
it with your data.
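[A quick way to check the suggestion above on a running node is to query which filesystem actually backs a DataNode data directory. This is a hedged sketch, not from the original message; `DATA_DIR` is a placeholder for whatever `dfs.datanode.data.dir` points to on your box.]

```shell
#!/bin/sh
# Print the filesystem type (e.g. xfs, ext4) backing a DataNode data dir.
# DATA_DIR is a placeholder; substitute your dfs.datanode.data.dir value.
DATA_DIR="${DATA_DIR:-/}"
stat -f -c %T "$DATA_DIR"
```

On an XFS-backed mount this prints `xfs`; on ext4, `stat` reports `ext2/ext3` for the whole ext family, so cross-check with `mount` if the answer matters.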
