hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2699) Store data and checksums together in block file
Date Sat, 17 Dec 2011 19:45:30 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13171653#comment-13171653
] 

dhruba borthakur commented on HDFS-2699:
----------------------------------------

Hi Andrew, all of the points you mentioned are valid and could decrease the number
of iops needed for a particular workload. But my point is that if we keep the other pieces
constant (amount of RAM, amount of flash, etc.), then what can we do to reduce iops for the
same workload? If the machine has more RAM, I would rather give all of it to the HBase
block cache, because accessing the HBase block cache is more efficient than accessing the
file system cache. The HBase block cache can implement better caching policies (because it is closer
to the application) than the OS file cache; this is the same argument for why databases typically
do unbuffered io against the filesystem.

Disks keep getting larger (4TB disks are coming next year), but the iops
per spindle have not changed much. Given that, an efficient storage system should strive to
optimize for iops, should it not?
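To make that concrete, here is a small sketch of the iops-per-capacity trend. The spindle figure is an assumed order-of-magnitude number for illustration, not a measurement:

```java
public class IopsDensity {
    // Random-iops capability per TB of capacity. The inputs are
    // illustrative assumptions, not measured numbers.
    static double iopsPerTb(double spindleIops, double capacityTb) {
        return spindleIops / capacityTb;
    }

    public static void main(String[] args) {
        // A single spindle does on the order of ~100 random iops
        // regardless of its capacity, so as disks grow from 1TB to
        // 4TB the iops available per TB of stored data shrinks 4x.
        System.out.println(iopsPerTb(100, 1)); // 100.0
        System.out.println(iopsPerTb(100, 4)); // 25.0
    }
}
```

This is why the comment argues for optimizing iops rather than capacity: every extra seek per read is paid out of a budget that is not growing with disk size.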

                
> Store data and checksums together in block file
> -----------------------------------------------
>
>                 Key: HDFS-2699
>                 URL: https://issues.apache.org/jira/browse/HDFS-2699
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> The current implementation of HDFS stores the data in one block file and the metadata
> (checksums) in another block file. This means that every read from HDFS actually consumes
> two disk iops, one to the data file and one to the checksum file. This is a major problem
> for scaling HBase, because HBase is usually bottlenecked on the number of random disk iops
> that the storage hardware offers.
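The proposal is to interleave each data chunk with its checksum in a single file, so one seek serves both. A minimal sketch of how such a layout could map a logical data offset to a single on-disk position; this is not HDFS code, and the chunk/checksum sizes and names are illustrative assumptions (512-byte chunks, 4-byte CRC per chunk):

```java
public class InlineChecksumLayout {
    // Assumed layout: each 512-byte data chunk is immediately
    // followed by its 4-byte checksum in the same block file.
    static final int CHUNK = 512;
    static final int CSUM = 4;

    // On-disk offset of a logical data byte: every full chunk that
    // precedes it occupies CHUNK + CSUM bytes on disk, so the data
    // and its checksum land in the same seek/read.
    static long diskOffset(long logicalOffset) {
        long fullChunks = logicalOffset / CHUNK;
        return fullChunks * (CHUNK + CSUM) + (logicalOffset % CHUNK);
    }

    public static void main(String[] args) {
        System.out.println(diskOffset(0));    // 0
        System.out.println(diskOffset(512));  // 516
        System.out.println(diskOffset(1024)); // 1032
    }
}
```

With the current two-file scheme the same read needs a second iop against the separate checksum file; with an inlined layout a single sequential read of CHUNK + CSUM bytes returns both.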

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
