hadoop-common-dev mailing list archives

From "M. C. Srivas" <mcsri...@gmail.com>
Subject Re: making file system block size bigger to improve hdfs performance ?
Date Mon, 10 Oct 2011 13:51:45 GMT
XFS was created in 1993 by Silicon Graphics. It was designed for streaming
workloads. The Linux port was in 2002 or so.

I've used it extensively for the past 8 years. It is very stable, and many
NAS companies have embedded it in their products. In particular, it works
well even when the disk starts getting full. ext4 tends to have problems
with multiple streams (it seeks too much), and ext3 has a fragmentation problem.

(MapR's disk layout is even better than XFS ... couldn't resist)
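
Since the thread subject asks about raising the block size, here is a minimal
sketch of how the HDFS block size can be set from a client, both as a client
default via dfs.block.size and per file through the FileSystem API. The
namenode URI, paths, and sizes below are made up for illustration.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BigBlockWrite {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default block size for files this client creates: 256 MB instead of 64 MB.
        conf.setLong("dfs.block.size", 256L * 1024 * 1024);

        // hdfs://namenode:8020/ is a placeholder -- point it at your own cluster.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);

        // Per-file override: create(path, overwrite, bufferSize, replication, blockSize).
        FSDataOutputStream out = fs.create(
            new Path("/benchmarks/big-block-file"),
            true,                  // overwrite if it exists
            64 * 1024,             // write buffer size in bytes
            (short) 3,             // replication factor
            512L * 1024 * 1024);   // 512 MB block size for this one file
        out.write("hello".getBytes());
        out.close();
        fs.close();
      }
    }

Note that none of this changes what the local file system (XFS, ext3, ext4)
does underneath; it only controls how HDFS chunks the file across datanodes.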

On Mon, Oct 10, 2011 at 3:48 AM, Steve Loughran <stevel@apache.org> wrote:

> On 09/10/11 07:01, M. C. Srivas wrote:
>> If you insist on HDFS, try using XFS underneath; it does a much better job
>> than ext3 or ext4 for Hadoop in terms of how data is laid out on disk. But
>> its memory footprint is at least twice that of ext3, so it will gobble up
>> a lot more memory on your box.
> How stable have you found XFS? I know people have worked a lot on ext4 and
> I am using it locally, even if something (VirtualBox) tells me off for doing
> so. I know the Lustre people are using it underneath their DFS, and with
> wide use it does tend to get debugged by others before you trust it with
> your data.
