hadoop-common-dev mailing list archives

From "Mike Cafarella (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-50) dfs datanode should store blocks in multiple directories
Date Tue, 28 Mar 2006 17:05:41 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-50?page=comments#action_12372119 ] 

Mike Cafarella commented on HADOOP-50:

Hi Andrzej,

I wrote this code and got it 90% working some time ago, but then had to set it aside
for a more important bug.  It is not ready to go in its current state, but it shouldn't
be too hard to finish.  I can bring this code back to life.


> dfs datanode should store blocks in multiple directories
> --------------------------------------------------------
>          Key: HADOOP-50
>          URL: http://issues.apache.org/jira/browse/HADOOP-50
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.2
>     Reporter: Doug Cutting
>     Assignee: Mike Cafarella
>      Fix For: 0.2

> The datanode currently stores all file blocks in a single directory.  With 32MB blocks
> and terabyte filesystems, this will create too many files in a single directory for
> many filesystems.  Thus blocks should be stored in multiple directories, perhaps even
> a shallow hierarchy.
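The shallow hierarchy described above could be sketched as follows. This is a minimal illustration, not the actual HADOOP-50 patch: the class name, fan-out constant, and directory naming scheme are all hypothetical assumptions. The idea is simply to derive a small fixed number of subdirectories from bits of the block id, so block files spread evenly instead of piling into one directory.

```java
import java.io.File;

public class BlockDirSketch {
    // Hypothetical fan-out: 16 x 16 = 256 leaf directories in a
    // two-level hierarchy (not the scheme any Hadoop release uses).
    static final int FANOUT = 16;

    // Map a block id to a shallow two-level subdirectory under the
    // datanode's data root, using two 4-bit slices of the id. Every
    // block with the same low bits lands in the same leaf directory,
    // so no single directory holds more than ~1/256 of all blocks.
    static File dirForBlock(File root, long blockId) {
        int level1 = (int) ((blockId >>> 4) & (FANOUT - 1));
        int level2 = (int) (blockId & (FANOUT - 1));
        return new File(new File(root, "dir" + level1), "dir" + level2);
    }

    public static void main(String[] args) {
        File root = new File("/tmp/dfs/data");
        // Prints the leaf directory chosen for an example block id.
        System.out.println(dirForBlock(root, 0x1234L));
    }
}
```

Depth and fan-out trade off against each other: two levels of 16 already keep directory sizes manageable for terabyte-scale stores with 32MB blocks, while staying cheap to traverse on startup when the datanode rescans its blocks.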

This message is automatically generated by JIRA.