hadoop-hdfs-issues mailing list archives

From "Aaron Kimball (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-738) Improve the disk utilization of HDFS
Date Tue, 27 Oct 2009 16:22:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12770553#action_12770553 ]

Aaron Kimball commented on HDFS-738:
------------------------------------

This might be better placed in hadoop-common; I think it would be a good idea to consider
the drives under mapred.local.dir at the same time.

> Improve the disk utilization of HDFS
> ------------------------------------
>
>                 Key: HDFS-738
>                 URL: https://issues.apache.org/jira/browse/HDFS-738
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>            Reporter: Zheng Shao
>
> HDFS data node currently assigns writers to disks randomly. This is good if there are
a large number of readers/writers on a single data node, but it might create a lot of contention
if there are only 4 readers/writers on a 4-disk node.
> A better way is to introduce a base class DiskHandler, for registering all disk operations
(read/write), as well as getting the best disk for writing new blocks. A good strategy for
the DiskHandler would be to distribute the write load to the disks with more free space
and less recent activity. There can be many strategies.
> This could help improve HDFS multi-threaded write throughput a lot - we are seeing
<25MB/s/disk on a 4-disk/node 4-node cluster (replication is already considered) given
8 concurrent writers (24 writers considering replication). I believe we can improve that by
2x.
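
The description stops at the design idea, so a minimal sketch of what such a DiskHandler
strategy could look like follows. The class name DiskHandler comes from the description; the
method names, the scoring formula, and the use of java.io.File.getUsableSpace() are assumptions
made for this sketch, not anything from an actual patch.

import java.io.File;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Tracks per-volume write activity and picks the volume with the best
// combination of free space and low recent load.
public class DiskHandler {
  private final List<File> volumes;   // e.g. the configured data directories
  private final Map<File, AtomicLong> recentWrites =
      new ConcurrentHashMap<File, AtomicLong>();

  public DiskHandler(List<File> volumes) {
    this.volumes = volumes;
    for (File v : volumes) {
      recentWrites.put(v, new AtomicLong(0));
    }
  }

  // Record a completed write so future placement decisions see the load.
  public void registerWrite(File volume) {
    recentWrites.get(volume).incrementAndGet();
  }

  // Pick the volume with the highest score: more free space, fewer recent writes.
  public File chooseVolumeForWrite() {
    File best = null;
    double bestScore = Double.NEGATIVE_INFINITY;
    for (File v : volumes) {
      long free = v.getUsableSpace();            // bytes currently free on this volume
      long writes = recentWrites.get(v).get();
      // Illustrative score only; a real strategy would decay old activity over time.
      double score = free / (1.0 + writes);
      if (score > bestScore) {
        bestScore = score;
        best = v;
      }
    }
    return best;
  }
}

A strategy along these lines behaves much like the current random assignment when all disks are
equally loaded, but steers new blocks away from a disk that is both fuller and busier than its
siblings, which is exactly the 4-writers-on-4-disks case the description is targeting.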


