hadoop-common-dev mailing list archives

From "Andrzej Bialecki (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-50) dfs datanode should store blocks in multiple directories
Date Sat, 25 Mar 2006 20:04:21 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-50?page=comments#action_12371869 ] 

Andrzej Bialecki commented on HADOOP-50:

I think this is a valid concern. Most filesystems perform poorly with thousands of files in a
single directory. My recent tests on ext3 show that listing a data directory containing 50,000
blocks takes several seconds.

FSDataset:80 contains a commented-out section that seems to address this issue. Does anyone
know why it is not used?

> dfs datanode should store blocks in multiple directories
> --------------------------------------------------------
>          Key: HADOOP-50
>          URL: http://issues.apache.org/jira/browse/HADOOP-50
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.2
>     Reporter: Doug Cutting
>     Assignee: Mike Cafarella
>      Fix For: 0.2

> The datanode currently stores all file blocks in a single directory.  With 32MB blocks
> and terabyte filesystems, this will create too many files in a single directory for many
> filesystems.  Thus blocks should be stored in multiple directories, perhaps even a shallow
> hierarchy.
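One way to picture the shallow hierarchy suggested above is to derive a two-level subdirectory path from the block ID, so blocks spread evenly across a bounded number of directories. The sketch below is purely illustrative (the class, method names, and fan-out constant are assumptions, not the actual FSDataset code):

```java
import java.io.File;

// Hypothetical sketch: map a block ID to a two-level directory tree
// so no single directory accumulates an unbounded number of block files.
public class BlockDirs {
    // Assumed fan-out: 64 subdirectories per level, i.e. 64*64 = 4096 leaves.
    static final int WIDTH = 64;

    // Map a block ID to <root>/dir<d1>/dir<d2>/blk_<id>.
    static File pathFor(File root, long blockId) {
        long h = blockId >= 0 ? blockId : -blockId; // avoid negative remainders
        int d1 = (int) (h % WIDTH);
        int d2 = (int) ((h / WIDTH) % WIDTH);
        return new File(root, "dir" + d1 + File.separator
                              + "dir" + d2 + File.separator
                              + "blk_" + blockId);
    }

    public static void main(String[] args) {
        // With terabyte filesystems and 32MB blocks (~32,000 blocks per TB),
        // each leaf directory would hold only a handful of files.
        System.out.println(pathFor(new File("data"), 123456789L).getPath());
    }
}
```

Because the path is a pure function of the block ID, a datanode can locate any block without scanning directories, and a full scan at startup only needs to walk a fixed, shallow tree.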

This message is automatically generated by JIRA.
If you think it was sent incorrectly, contact one of the administrators:
For more information on JIRA, see:
