hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-296) Do not assign blocks to a datanode with < x mb free
Date Wed, 14 Jun 2006 17:42:32 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-296?page=comments#action_12416233 ] 

Konstantin Shvachko commented on HADOOP-296:

Yoram, as far as I know you use a large cluster of mostly identical machines.
For such clusters you need one configuration, distributed uniformly to all nodes.
The case Johan describes is different: you can still have a uniform config, but you should
be able to override it for the one or two nodes that differ from everything else.

Johan, I think this is good.
Please replace 0.98f with USABLE_DISK_PCT_DEFAULT.
+1 after that.
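
For context, the review comment is only about naming: the patch's free-space check should use a named constant instead of the literal 0.98f. A minimal sketch of that idea follows; apart from the constant name USABLE_DISK_PCT_DEFAULT taken from the comment, the class and method names are assumptions for illustration, not the actual patch.

    // Sketch only: illustrates replacing the magic number 0.98f with a named
    // constant, as the review asks. Everything except USABLE_DISK_PCT_DEFAULT
    // is an assumption, not code from the attached patch.
    public class UsableDiskCheck {

      // Fraction of a datanode's disk that may be filled before the namenode
      // stops assigning new blocks to it.
      public static final float USABLE_DISK_PCT_DEFAULT = 0.98f;

      /** True if the datanode can take another block without crossing the threshold. */
      public static boolean hasRoomForBlock(long capacityBytes, long usedBytes,
                                            long blockSizeBytes) {
        return usedBytes + blockSizeBytes
            <= (long) (capacityBytes * USABLE_DISK_PCT_DEFAULT);
      }
    }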

> Do not assign blocks to a datanode with < x mb free
> ---------------------------------------------------
>          Key: HADOOP-296
>          URL: http://issues.apache.org/jira/browse/HADOOP-296
>      Project: Hadoop
>         Type: New Feature
>   Components: dfs
>     Versions: 0.3.2
>     Reporter: Johan Oskarson
>  Attachments: minspace.patch, minspacev2.patch
> We're running a smallish cluster with very different machines, some with only 60 GB hard drives.
> This creates a problem when inserting files into the DFS: these machines run out of space
> quickly and then they cannot run any map/reduce operations.
> A solution would be to not assign any new blocks once the free space is below a certain
> user-configurable threshold.
> This free space could then be used by the map/reduce operations instead (if they run on
> the same disk).
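
To make the proposal concrete, here is a rough sketch of the kind of check being suggested; the configuration key and class names are hypothetical illustrations, not the contents of minspace.patch or minspacev2.patch.

    // Sketch of the proposed behavior, not the attached patch. The property
    // name dfs.datanode.min.free.mb and the surrounding API are hypothetical.
    public class MinFreeSpaceCheck {

      private static final long MB = 1024L * 1024L;

      // Would come from the cluster configuration, e.g. a hypothetical
      // dfs.datanode.min.free.mb setting.
      private final long minFreeMb;

      public MinFreeSpaceCheck(long minFreeMb) {
        this.minFreeMb = minFreeMb;
      }

      /** Namenode-side check: skip this datanode as a block target if it is too full. */
      public boolean canAcceptBlock(long remainingBytes) {
        return remainingBytes >= minFreeMb * MB;
      }
    }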

This message is automatically generated by JIRA.
If you think it was sent incorrectly, contact one of the administrators:
For more information on JIRA, see:
