hadoop-hdfs-issues mailing list archives

From "Andrew Purtell (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2699) Store data and checksums together in block file
Date Sat, 17 Dec 2011 19:17:31 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13171643#comment-13171643 ]

Andrew Purtell commented on HDFS-2699:
--------------------------------------

IMHO, this is a design evolution question for HDFS. Is pread a first-class use case? How many
clients use it beyond HBase?
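
By pread I mean HDFS's positioned read API, which HBase uses for random gets. A minimal sketch
against the public FileSystem client API (the path is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PreadExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/hbase/example/hfile");   // hypothetical path
        byte[] buf = new byte[64 * 1024];
        FSDataInputStream in = fs.open(file);
        try {
          // Positioned read: does not move the stream's own seek pointer and can be
          // issued concurrently from multiple threads. Serving it on the DataNode
          // today touches both the block data file and its checksum (.meta) file.
          int n = in.read(123456L, buf, 0, buf.length);
          System.out.println("pread returned " + n + " bytes");
        } finally {
          in.close();
        }
      }
    }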

If so, I think it makes sense to consider changes to DN storage that reduce IOPS.

If not, and/or if changes to DN storage are judged too radical by consensus, then a means to
optionally fadvise away data file pages seems worth trying. There are other considerations that
suggest deployments should have a reasonable amount of RAM; part of that will be available to
the OS block cache.
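
A rough sketch of the fadvise idea, assuming a JNI wrapper along the lines of Hadoop's NativeIO
fadvise helper (the exact class layout and method names have moved around between versions, so
treat the call shown as illustrative):

    import java.io.FileDescriptor;
    import java.io.FileInputStream;

    import org.apache.hadoop.io.nativeio.NativeIO;

    public class DropDataFilePages {
      /**
       * After serving a random read, hint the kernel to drop the cached pages of the
       * block data file so the (much smaller) checksum pages are more likely to stay
       * resident. Sketch only: assumes NativeIO exposes posixFadviseIfPossible and a
       * POSIX_FADV_DONTNEED constant (in some versions these live under NativeIO.POSIX),
       * and that the platform actually supports posix_fadvise.
       */
      static void dropPages(FileInputStream blockFile, long offset, long len) throws Exception {
        FileDescriptor fd = blockFile.getFD();
        NativeIO.posixFadviseIfPossible(fd, offset, len, NativeIO.POSIX_FADV_DONTNEED);
      }
    }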

There are various other alternatives: application-level checksums, mixed device deployment
(flash + disk), etc. Given the two options above, it may be a distraction to consider more
unless there is a compelling reason. (For example, optimizing IOPS for disk provides the
same benefit for flash devices.)
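
To illustrate the application-level checksum alternative only: the application stores and
verifies its own CRC next to the data it wrote, so the filesystem-level checksum read (and its
extra iop) could be skipped. A minimal sketch:

    import java.util.zip.CRC32;

    public class AppLevelChecksum {
      // Verify a chunk against a CRC32 that the application itself stored alongside
      // the data (e.g. in a trailer of the same HDFS file), instead of relying on
      // the DataNode's separate .meta file for integrity.
      static boolean verifyChunk(byte[] data, int off, int len, long expectedCrc) {
        CRC32 crc = new CRC32();
        crc.update(data, off, len);
        return crc.getValue() == expectedCrc;
      }
    }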
                
> Store data and checksums together in block file
> -----------------------------------------------
>
>                 Key: HDFS-2699
>                 URL: https://issues.apache.org/jira/browse/HDFS-2699
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> The current implementation of HDFS stores the data in one block file and the metadata (checksum)
> in another block file. This means that every read from HDFS actually consumes two disk iops,
> one to the data file and one to the checksum file. This is a major problem for scaling HBase,
> because HBase is usually bottlenecked on the number of random disk iops that the storage
> hardware offers.
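
To make the two-iop cost above concrete, a small sketch of where the checksum for a given data
offset lives in the separate .meta file, assuming the usual defaults of 512 data bytes per
checksum and 4-byte CRC32 values, and treating the .meta header as 7 bytes for illustration:

    public class MetaFileOffset {
      static final int BYTES_PER_CHECKSUM = 512;  // dfs.bytes-per-checksum default
      static final int CHECKSUM_SIZE = 4;         // one CRC32 value per chunk
      static final int META_HEADER_SIZE = 7;      // illustrative header size

      // Offset in the .meta file of the checksum covering the chunk containing dataOffset.
      static long checksumOffset(long dataOffset) {
        return META_HEADER_SIZE + (dataOffset / BYTES_PER_CHECKSUM) * CHECKSUM_SIZE;
      }

      public static void main(String[] args) {
        // A random read at data offset 1000000 also needs the checksum near offset
        // 7 + (1000000 / 512) * 4 = 7819 in the .meta file: two files, two seeks,
        // two disk iops.
        System.out.println(checksumOffset(1000000L));
      }
    }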

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
