hadoop-hdfs-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1312) Re-balance disks within a Datanode
Date Thu, 03 Mar 2011 00:09:36 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13001765#comment-13001765 ]

Allen Wittenauer commented on HDFS-1312:

> IMHO, you can monitor the file distribution among disks with external tools
> such as Ganglia; is it required to integrate it into the Web interface of HDFS?

Yes.  First off, Ganglia sucks at large scales for too many reasons to go into here.  Secondly,
only Hadoop really knows what file systems are in play for HDFS.

> From my point, there are mainly two cases of re-balance:

There's a third, and one I suspect is more common than people realize:  mass deletes.  

> Re-balance should only run while the node is not under heavy load
> (should this be guaranteed by the administrator?)


> Lock origin disks: stop writing to them and wait for finalization on them.

I don't think it is realistic to expect the system to be idle during a rebalance.  Ultimately,
it shouldn't matter from the rebalancer's perspective anyway; the only performance hit that
should be noticeable would be for blocks in the middle of being moved.  Even then, the DN
process knows what blocks are where and can read from the 'old' location.

If idle DNs are a requirement, then one is better off just shutting down the DN (and
TT?) processes and doing the rebalance offline.
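The copy-then-switch idea above — where readers can keep hitting the old location until the new copy is published — can be sketched roughly as follows. This is a hypothetical illustration (class and method names are mine, not the actual DN code), assuming the DN's block map is updated only after the copy lands on the target volume:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class BlockMove {
    // Sketch: copy the block file to the target volume first, so the old
    // copy stays readable for in-flight clients; then publish atomically
    // on the target and retire the old location last.
    static Path moveBlock(Path block, Path targetDir) throws IOException {
        Path tmp = targetDir.resolve(block.getFileName() + ".tmp");
        // Old copy is still fully readable while this copy runs.
        Files.copy(block, tmp, StandardCopyOption.COPY_ATTRIBUTES);
        Path dest = targetDir.resolve(block.getFileName().toString());
        // Rename within the target volume: atomic publish of the new copy.
        Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE);
        // Only now drop the old location; the DN would repoint its
        // block map to 'dest' before doing this.
        Files.delete(block);
        return dest;
    }
}
```

The only contention a client would see is I/O on the disk hosting a block mid-move, which matches the "only blocks in the middle of being moved" point above.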

> Find the deepest dirs in every selected disk and move blocks from those dirs.
> If a dir becomes empty, then it should also be removed.

Why does depth matter?

> If two or more dirs are located on the same disk, they might confuse the
> space calculation. This is exactly the case in a MiniDFSCluster deployment.

This is also the case with pooled storage systems (such as ZFS) on real clusters already.
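The double-counting problem is easy to demonstrate: two storage dirs backed by one pool resolve to the same underlying filesystem, so summing free space per-dir counts that pool twice. A minimal sketch (my own helper, assuming `FileStore` identity is a good enough proxy for "same disk/pool"):

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SpaceCheck {
    // Sum usable space across storage dirs, counting each underlying
    // FileStore (mount point / pool) once instead of once per directory.
    static long usableSpace(List<Path> dirs) throws IOException {
        Set<String> seen = new HashSet<>();
        long total = 0;
        for (Path d : dirs) {
            FileStore fs = Files.getFileStore(d);
            // Dedupe dirs that live on the same store, e.g. two
            // storage dirs inside one ZFS pool (or one MiniDFSCluster dir).
            if (seen.add(fs.name() + "|" + fs.type())) {
                total += fs.getUsableSpace();
            }
        }
        return total;
    }
}
```

Naively adding `getUsableSpace()` per directory would roughly double the real figure whenever two dirs share a store, which is exactly what a per-dir rebalancer must avoid.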

> Re-balance disks within a Datanode
> ----------------------------------
>                 Key: HDFS-1312
>                 URL: https://issues.apache.org/jira/browse/HDFS-1312
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: data-node
>            Reporter: Travis Crawford
> Filing this issue in response to "full disk woes" on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations where certain
> disks are full while others are significantly less used. Users at many different sites have
> experienced this issue, and HDFS administrators are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling disks at
> a similar rate. Possible solutions include:
> - Weighting less-used disks more heavily when placing new blocks on the datanode. In write-heavy
> environments this will still make use of all spindles, equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are added or replaced
> in older cluster nodes.
> Datanodes should actively manage their local disks so operator intervention is not needed.
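The first proposal in the quoted description — weighting less-used disks more heavily at block placement — amounts to choosing a volume with probability proportional to its free space. A hypothetical sketch (names are mine; this is not an actual Hadoop volume-choosing policy):

```java
import java.util.Random;

public class WeightedVolumePicker {
    // Pick a volume index with probability proportional to its free space,
    // so emptier disks receive more new blocks and usage evens out over time.
    static int pick(long[] freeBytes, Random rng) {
        long total = 0;
        for (long f : freeBytes) total += f;
        // Draw a point in [0, total) and walk the cumulative ranges.
        long r = (long) (rng.nextDouble() * total);
        for (int i = 0; i < freeBytes.length; i++) {
            r -= freeBytes[i];
            if (r < 0) return i;
        }
        return freeBytes.length - 1; // guard against rounding at the edge
    }
}
```

With 900 GB free on one disk and 100 GB on another, roughly nine in ten new blocks land on the emptier disk, which captures the "write-heavy environments equalize over time" behavior the reporter describes.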

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

