hadoop-hdfs-issues mailing list archives

From "Steve Hoffman (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode
Date Tue, 25 Sep 2012 16:28:09 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13462916#comment-13462916 ]

Steve Hoffman commented on HDFS-1312:
-------------------------------------

Wow, I can't believe this is still lingering out there as a new feature request.  I'd argue
this is a bug -- and a big one.  Here's why:

* You have N machines in your cluster, each with 12x3TB disks.
* 1 disk fails on a 12-drive machine.  Let's say each disk was 80% full.
* You install the replacement drive (0% full), but by the time you do this the under-replicated
blocks have already been re-replicated (on this and other nodes).
* The 0% full drive will fill at the same rate as the other disks.  That machine's other 11
disks will fill to 100% because block placement is decided at the node level, and within the
node volumes appear to be chosen round-robin regardless of how much free space each one has
(see the sketch after this list).
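
To make the round-robin point concrete, here is a minimal sketch of that placement behavior
in Java -- an illustrative stand-in, not the actual DataNode volume-choosing code:

    import java.util.List;

    // Each new replica goes to the next volume in rotation; relative free
    // space is never considered, only whether the block fits at all.
    class RoundRobinVolumeChooser {
        private int next = 0;

        /** Returns the index of the volume that receives the next block. */
        synchronized int chooseVolume(List<Long> freeBytesPerVolume, long blockSize) {
            for (int attempts = 0; attempts < freeBytesPerVolume.size(); attempts++) {
                int candidate = next;
                next = (next + 1) % freeBytesPerVolume.size();
                if (freeBytesPerVolume.get(candidate) >= blockSize) {
                    return candidate;  // first volume with room, in rotation order
                }
            }
            throw new IllegalStateException("no volume can hold the block");
        }
    }

Because every volume gets an equal share of new blocks, the empty replacement disk fills at
the same bytes-per-day rate as its 80%-full neighbors, so the old disks hit 100% long before
the new one catches up.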

The only way we have found to move blocks internally (without taking the cluster down completely)
is to decommission the node, wait for it to empty, and then re-add it to the cluster so the
balancer can take over and move blocks back onto it.
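
The with-downtime alternative (the "manually rebalancing blocks" workaround mentioned in the
issue description below) amounts to moving block files between volume directories by hand while
the DataNode is stopped.  A rough sketch, assuming the blk_* on-disk layout -- the paths and
block IDs here are made up:

    import java.io.IOException;
    import java.nio.file.*;

    // With the DataNode STOPPED, relocate one block file plus its .meta
    // companion from a full volume to the new empty one.  The DataNode
    // rescans its volumes on restart and picks the block up from its new
    // location.  Directory layout and names are version-dependent.
    class ManualBlockMove {
        public static void main(String[] args) throws IOException {
            Path block = Paths.get("/data/disk03/current/blk_1073741825");           // hypothetical
            Path meta  = Paths.get("/data/disk03/current/blk_1073741825_1001.meta"); // hypothetical
            Path dest  = Paths.get("/data/disk12/current");                          // the new drive

            Files.createDirectories(dest);
            // A plain move copies then deletes when crossing filesystems.
            // Move both files together, or the block will look corrupt.
            Files.move(block, dest.resolve(block.getFileName()));
            Files.move(meta, dest.resolve(meta.getFileName()));
        }
    }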

Hard drives fail.  This isn't news to anybody.  The larger (12 disk) nodes only make the problem
worse because of the time it takes to empty and refill them.  Even a 1U 4-disk machine is still
bad, because you lose 25% of that node's capacity on 1 disk failure, whereas the impact on the
12-disk machine is less than 9% (1/12 = ~8.3%).

The remove/add of a complete node seems like a pretty poor option.
Or am I alone in this?  Can we please revive this JIRA?
                
> Re-balance disks within a Datanode
> ----------------------------------
>
>                 Key: HDFS-1312
>                 URL: https://issues.apache.org/jira/browse/HDFS-1312
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: data-node
>            Reporter: Travis Crawford
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations where certain
> disks are full while others are significantly less used. Users at many different sites have
> experienced this issue, and HDFS administrators are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles, and filling disks at
> roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. In write-heavy
> environments this will still make use of all spindles, equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are added/replaced
> in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is not needed.
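
As a concrete illustration of the first proposal above (weighting less-used disks heavier),
here is a minimal sketch of free-space-weighted volume selection -- illustrative only, not
actual Hadoop code:

    import java.util.List;
    import java.util.Random;

    // Picks a volume with probability proportional to its free space, so the
    // emptiest disks receive the most new blocks and usage converges over time.
    class AvailableSpaceWeightedChooser {
        private final Random rng = new Random();

        /** Returns a volume index, weighted by free bytes per volume. */
        int chooseVolume(List<Long> freeBytes, long blockSize) {
            long total = 0;
            for (long f : freeBytes) {
                if (f >= blockSize) total += f;  // skip volumes with no room
            }
            if (total == 0) throw new IllegalStateException("no volume can hold the block");
            long pick = (long) (rng.nextDouble() * total);
            for (int i = 0; i < freeBytes.size(); i++) {
                long f = freeBytes.get(i);
                if (f < blockSize) continue;
                if (pick < f) return i;
                pick -= f;
            }
            return freeBytes.size() - 1;  // defensive fallback
        }
    }

Under this weighting, a freshly replaced empty disk takes the largest share of new writes until
it catches up with its neighbors -- exactly the failure-recovery scenario described in the
comment above.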

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
