hadoop-hdfs-issues mailing list archives

From "Harsh J (Updated) (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-583) HDFS should enforce a max block size
Date Sat, 07 Jan 2012 17:07:39 GMT

     [ https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

Harsh J updated HDFS-583:

    Component/s:     (was: data-node)
        Summary: HDFS should enforce a max block size  (was: DataNode should enforce a max
block size)
> HDFS should enforce a max block size
> ------------------------------------
>                 Key: HDFS-583
>                 URL: https://issues.apache.org/jira/browse/HDFS-583
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Hairong Kuang
> When a DataNode creates a replica, it should enforce a max block size, so clients can't
go crazy. One way of enforcing this is to make the BlockWritesStreams filter streams that
check the block size.
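
A minimal sketch of the filter-stream idea described above (hypothetical names throughout; this is not the actual HDFS implementation, and `MaxBlockSizeOutputStream` and its limit parameter are illustration only): a `FilterOutputStream` that counts bytes written and rejects any write that would push the replica past a configured maximum block size.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: wraps the block's output stream and enforces a
// maximum block size by failing writes that would exceed the limit.
class MaxBlockSizeOutputStream extends FilterOutputStream {
    private final long maxBlockSize; // assumed configured limit, in bytes
    private long written = 0;        // bytes accepted so far

    MaxBlockSizeOutputStream(OutputStream out, long maxBlockSize) {
        super(out);
        this.maxBlockSize = maxBlockSize;
    }

    @Override
    public void write(int b) throws IOException {
        checkCapacity(1);
        out.write(b);
        written += 1;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        checkCapacity(len);
        out.write(b, off, len);
        written += len;
    }

    // Reject the write before any bytes reach the underlying stream.
    private void checkCapacity(long extra) throws IOException {
        if (written + extra > maxBlockSize) {
            throw new IOException("Block size limit exceeded: "
                    + (written + extra) + " > " + maxBlockSize);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        MaxBlockSizeOutputStream s = new MaxBlockSizeOutputStream(sink, 4);
        s.write(new byte[] {1, 2, 3}, 0, 3); // within limit: accepted
        try {
            s.write(new byte[] {4, 5}, 0, 2); // would reach 5 bytes: rejected
        } catch (IOException e) {
            System.out.println("rejected, kept " + sink.size() + " bytes");
        }
    }
}
```

Because the check runs before delegating to the wrapped stream, a rejected write leaves the underlying block file untouched, which keeps the replica consistent.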

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

