hadoop-hdfs-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4053) Increase the default block size
Date Wed, 17 Oct 2012 14:58:04 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13477935#comment-13477935 ]

Daryn Sharp commented on HDFS-4053:

+1, assuming the test failure is unrelated.  I'd suggest writing the default as "128*1024*1024"
(it used to be 64*1024*1024) since it's easier to grok.  What would be even better is to allow
units such as "M", "G", etc.  I'm sure we have a conversion method lurking somewhere in Hadoop.
Just suggestions.
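
A suffix-aware parser of the kind suggested above is small to write. The sketch
below is illustrative only: the class and method names are hypothetical, not an
actual Hadoop API, though Hadoop's own StringUtils.TraditionalBinaryPrefix.string2long
is one plausible candidate for the conversion method alluded to.

    // Hypothetical, self-contained sketch of a size parser that accepts
    // plain byte counts or k/m/g/t binary suffixes. Not Hadoop code.
    public final class SizeParser {
        private SizeParser() {}

        /** Parses strings like "134217728", "128m", or "1g" into a byte count. */
        public static long parseBytes(String value) {
            String s = value.trim().toLowerCase();
            char last = s.charAt(s.length() - 1);
            long multiplier;
            switch (last) {
                case 'k': multiplier = 1L << 10; break;
                case 'm': multiplier = 1L << 20; break;
                case 'g': multiplier = 1L << 30; break;
                case 't': multiplier = 1L << 40; break;
                default:  return Long.parseLong(s); // no suffix: plain bytes
            }
            return Long.parseLong(s.substring(0, s.length() - 1)) * multiplier;
        }

        public static void main(String[] args) {
            System.out.println(parseBytes("128m"));      // 134217728
            System.out.println(parseBytes("134217728")); // 134217728
        }
    }
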
> Increase the default block size
> -------------------------------
>                 Key: HDFS-4053
>                 URL: https://issues.apache.org/jira/browse/HDFS-4053
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Eli Collins
>            Assignee: Eli Collins
>         Attachments: hdfs-4053.txt, hdfs-4053.txt
> The default HDFS block size ({{dfs.blocksize}}) has been 64mb forever. 128mb works well
> in practice on today's hardware configurations; most clusters I work with use it or
> higher (e.g. 256mb). Let's bump it to 128mb in trunk for v3.
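
For context, a minimal sketch of setting this key programmatically through
Hadoop's Configuration API (hadoop-common on the classpath is assumed; the key
name dfs.blocksize comes from the issue above, and the arithmetic form mirrors
the readability suggestion in the comment):

    import org.apache.hadoop.conf.Configuration;

    public class BlockSizeExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // 128 * 1024 * 1024 reads more clearly than 134217728.
            conf.setLong("dfs.blocksize", 128L * 1024 * 1024);
            // Fall back to the old 64mb default if the key were unset.
            System.out.println(conf.getLong("dfs.blocksize", 64L * 1024 * 1024));
        }
    }
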

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
