hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-583) DataNode should enforce a max block size
Date Wed, 02 Sep 2009 00:36:32 GMT

    [ https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750204#action_12750204 ]
dhruba borthakur commented on HDFS-583:

You are not suggesting another configurable parameter, are you?

> DataNode should enforce a max block size
> ----------------------------------------
>                 Key: HDFS-583
>                 URL: https://issues.apache.org/jira/browse/HDFS-583
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>            Reporter: Hairong Kuang
> When the DataNode creates a replica, it should enforce a max block size, so clients can't
go crazy. One way of enforcing this is to make BlockWritesStreams be filter streams that
check the block size.
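
The filter-stream approach described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual HDFS implementation: the class name `BoundedBlockOutputStream` and its constructor are assumptions, and the max size would presumably come from a DataNode-side limit rather than a new client-visible configuration parameter.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: a filter stream that rejects any write that would
// push the block past a maximum size enforced by the DataNode.
class BoundedBlockOutputStream extends FilterOutputStream {
    private final long maxBlockSize;
    private long written;

    BoundedBlockOutputStream(OutputStream out, long maxBlockSize) {
        super(out);
        this.maxBlockSize = maxBlockSize;
    }

    // Fail fast before the bytes ever reach the block file on disk.
    private void check(long extra) throws IOException {
        if (written + extra > maxBlockSize) {
            throw new IOException(
                "Write of " + extra + " bytes would exceed max block size "
                + maxBlockSize + " (already written: " + written + ")");
        }
    }

    @Override
    public void write(int b) throws IOException {
        check(1);
        out.write(b);
        written++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        check(len);
        out.write(b, off, len);
        written += len;
    }
}
```

Because every block-write path would go through this stream, the check is enforced uniformly without touching the individual write call sites.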

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
