hadoop-hdfs-dev mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-583) HDFS should enforce a max block size
Date Wed, 23 Jul 2014 22:46:41 GMT

     [ https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HDFS-583.

    Resolution: Won't Fix

I'm going to close this as Won't Fix.  At some point in time, block sizes were inadvertently
limited to 2GB.  That limit was later raised (to some other value which escapes me at the
moment, but it might be 4GB).

In practice, users tend not to mess with large block sizes unless they have a very specific
reason... especially when one considers that disk quotas are also in play.  

> HDFS should enforce a max block size
> ------------------------------------
>                 Key: HDFS-583
>                 URL: https://issues.apache.org/jira/browse/HDFS-583
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Hairong Kuang
> When the DataNode creates a replica, it should enforce a max block size, so clients can't
go crazy. One way of enforcing this is to make BlockWritesStreams filter streams that
check the block size.
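
The filter-stream approach described in the quoted issue could be sketched roughly as below. This is a hypothetical illustration, not the actual HDFS implementation: `BoundedBlockOutputStream`, its field names, and the 8-byte limit in `main` are all invented for the example; it only shows the general idea of a `FilterOutputStream` that rejects writes once a configured block size would be exceeded.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of the idea in HDFS-583: wrap the block write
// stream in a filter that tracks bytes written and rejects any write
// that would push the block past a configured maximum size.
public class BoundedBlockOutputStream extends FilterOutputStream {
    private final long maxBlockSize;
    private long written;

    public BoundedBlockOutputStream(OutputStream out, long maxBlockSize) {
        super(out);
        this.maxBlockSize = maxBlockSize;
    }

    @Override
    public void write(int b) throws IOException {
        ensureCapacity(1);
        out.write(b);
        written++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        ensureCapacity(len);
        out.write(b, off, len);
        written += len;
    }

    // Fail before touching the underlying stream if the limit would be exceeded.
    private void ensureCapacity(long len) throws IOException {
        if (written + len > maxBlockSize) {
            throw new IOException("Block size limit exceeded: "
                + (written + len) + " > " + maxBlockSize);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BoundedBlockOutputStream s = new BoundedBlockOutputStream(sink, 8);
        s.write(new byte[6], 0, 6);       // fine: 6 <= 8
        try {
            s.write(new byte[4], 0, 4);   // 6 + 4 > 8: rejected, nothing written
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Because the check runs before delegating to the wrapped stream, an over-limit write leaves the block untouched rather than partially written.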

This message was sent by Atlassian JIRA
