hadoop-hdfs-issues mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-583) DataNode should enforce a max block size
Date Wed, 02 Sep 2009 00:40:32 GMT

    [ https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750205#action_12750205 ]

Tsz Wo (Nicholas), SZE commented on HDFS-583:

Should this also be enforced in the NameNode?  Otherwise, we might be able to create a file
with some huge block size but not be able to write anything to it (since the DataNodes would forbid it).
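
A minimal sketch of the NameNode-side check being suggested, assuming a configurable ceiling on the requested block size; the class BlockSizeValidator, the constant MAX_BLOCK_SIZE, and the method verifyBlockSize are illustrative names, not actual HDFS code:

    import java.io.IOException;

    // Hypothetical validator: reject create() requests whose block size
    // exceeds a configured maximum, so a file that could never be written
    // to is never created in the first place.
    public class BlockSizeValidator {

        // Assumed configurable ceiling; 8 GB is an arbitrary example value.
        private static final long MAX_BLOCK_SIZE = 8L * 1024 * 1024 * 1024;

        // Throws if a client asks for an oversized block at file creation.
        public static void verifyBlockSize(String src, long blockSize)
                throws IOException {
            if (blockSize > MAX_BLOCK_SIZE) {
                throw new IOException("Requested block size " + blockSize
                    + " for " + src + " exceeds the maximum "
                    + MAX_BLOCK_SIZE);
            }
        }
    }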

> DataNode should enforce a max block size
> ----------------------------------------
>                 Key: HDFS-583
>                 URL: https://issues.apache.org/jira/browse/HDFS-583
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>            Reporter: Hairong Kuang
> When a DataNode creates a replica, it should enforce a max block size, so clients can't
> go crazy. One way of enforcing this is to make the BlockWritesStreams filter streams that
> check the block size.
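
Below is a minimal sketch of the filter-stream approach the description proposes, assuming it wraps whatever OutputStream the replica is written through; the class BoundedBlockOutputStream and its wiring are illustrative, not the actual BlockWritesStreams plumbing in the DataNode:

    import java.io.FilterOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical filter stream: fails the write once the replica would
    // grow past maxBlockSize, instead of letting a client stream unbounded
    // data to the DataNode.
    public class BoundedBlockOutputStream extends FilterOutputStream {
        private final long maxBlockSize;
        private long written;

        public BoundedBlockOutputStream(OutputStream out, long maxBlockSize) {
            super(out);
            this.maxBlockSize = maxBlockSize;
        }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            if (written + len > maxBlockSize) {
                throw new IOException("Block would grow to " + (written + len)
                    + " bytes, exceeding the maximum of " + maxBlockSize);
            }
            out.write(b, off, len);
            written += len;
        }

        @Override
        public void write(int b) throws IOException {
            write(new byte[] { (byte) b }, 0, 1);
        }
    }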

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
