hadoop-common-dev mailing list archives

From "Johan Oskarson (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-296) Do not assign blocks to a datanode with < x mb free
Date Wed, 14 Jun 2006 09:42:30 GMT
     [ http://issues.apache.org/jira/browse/HADOOP-296?page=all ]

Johan Oskarson updated HADOOP-296:

    Attachment: minspacev2.patch

Thanks for the feedback.
Quick patch that reads dfs.datanode.du.reserved and dfs.datanode.du.pct from the config.

Hope this is what you meant.
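
For anyone skimming the patch, below is a minimal standalone sketch of the kind of check it describes, using the two property names above. The class and method names here are hypothetical, the defaults and the interpretation of dfs.datanode.du.pct are assumptions, and the real patch wires this into the namenode's block placement differently:

    // Hedged sketch only: illustrates the check described above, not the actual
    // HADOOP-296 patch code. Class/method names and defaults are hypothetical.
    import org.apache.hadoop.conf.Configuration;

    public class DatanodeSpaceCheck {

      private final long reservedBytes;  // absolute floor, from dfs.datanode.du.reserved
      private final float reservedPct;   // fraction of capacity, from dfs.datanode.du.pct

      public DatanodeSpaceCheck(Configuration conf) {
        // Defaults of 0 / 0.0f assumed to mean "no reservation".
        this.reservedBytes = conf.getLong("dfs.datanode.du.reserved", 0L);
        this.reservedPct = conf.getFloat("dfs.datanode.du.pct", 0.0f);
      }

      /**
       * Returns true if the datanode still has enough free space to accept a
       * new block. capacity and remaining are in bytes, as reported by the
       * datanode.
       */
      public boolean canAcceptBlock(long capacity, long remaining) {
        long reservedByPct = (long) (capacity * reservedPct);
        long reserved = Math.max(reservedBytes, reservedByPct);
        return remaining > reserved;
      }
    }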


> Do not assign blocks to a datanode with < x mb free
> ---------------------------------------------------
>          Key: HADOOP-296
>          URL: http://issues.apache.org/jira/browse/HADOOP-296
>      Project: Hadoop
>         Type: New Feature
>   Components: dfs
>     Versions: 0.3.2
>     Reporter: Johan Oskarson
>  Attachments: minspace.patch, minspacev2.patch
> We're running a smallish cluster with very different machines, some with only 60 GB hard drives.
> This creates a problem when inserting files into the DFS: these machines run out of space
> quickly and then cannot run any map/reduce operations.
> A solution would be to not assign any new blocks once the free space is below a certain
> user-configurable threshold.
> This free space could then be used by the map/reduce operations instead (if they're on
> the same disk).
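
As a usage sketch of the user-configurable threshold described above, here is one way a site might set the two properties the patch reads. In practice these would normally go in hadoop-site.xml; the programmatic form is shown only for illustration and the chosen values are examples, not recommendations:

    // Hedged usage sketch: example values for the two properties the patch reads.
    import org.apache.hadoop.conf.Configuration;

    public class ReservedSpaceConfigExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Keep at least 1 GB free on every datanode (absolute reservation, in bytes).
        conf.set("dfs.datanode.du.reserved", Long.toString(1024L * 1024 * 1024));
        // Also keep 5% of each disk free (relative reservation).
        conf.set("dfs.datanode.du.pct", "0.05");
        System.out.println("dfs.datanode.du.reserved = "
            + conf.get("dfs.datanode.du.reserved"));
        System.out.println("dfs.datanode.du.pct = "
            + conf.get("dfs.datanode.du.pct"));
      }
    }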
