From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size
Date Wed, 15 May 2013 14:05:18 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658373#comment-13658373 ]

Suresh Srinivas commented on HDFS-4305:
---------------------------------------

See - https://builds.apache.org/job/Hadoop-Hdfs-trunk/1399/

The following test failures seem to be related to this jira.

{noformat}
Running org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.436 sec

Results :

Tests in error:
  testOperation[4](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576(..)
  testOperationDoAs[4](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576(..)
  testOperation[4](org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576(..)
  testOperationDoAs[4](org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576(..)
{noformat}
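
The check itself appears to be working as intended: these HttpFS tests request a 1024-byte block size, which falls below the new 1048576-byte default for dfs.namenode.fs-limits.min-block-size. A minimal sketch of the likely fix (assuming a MiniDFSCluster-based test setup; the class and path names below are illustrative, only the configuration key comes from the log above) is to relax the minimum in the test configuration:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class SmallBlockSizeSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Relax the new NameNode limit so a 1024-byte block size is accepted again.
    conf.setLong("dfs.namenode.fs-limits.min-block-size", 1024L);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      // Request a 1024-byte block size, as the failing tests do.
      fs.create(new Path("/tiny-blocks"), true, 4096, (short) 1, 1024L).close();
    } finally {
      cluster.shutdown();
    }
  }
}
{noformat}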
                
> Add a configurable limit on number of blocks per file, and min block size
> -------------------------------------------------------------------------
>
>                 Key: HDFS-4305
>                 URL: https://issues.apache.org/jira/browse/HDFS-4305
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 1.0.4, 2.0.4-alpha
>            Reporter: Todd Lipcon
>            Assignee: Andrew Wang
>            Priority: Minor
>             Fix For: 2.0.5-beta
>
>         Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch
>
>
> We recently had an issue where a user set the block size very low and managed to create a single file with hundreds of thousands of blocks. This caused problems with the edit log, since the OP_ADD op was so large (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To prevent users from making such mistakes, we should:
> - introduce a configurable minimum block size, below which requests are rejected
> - introduce a configurable maximum number of blocks per file, above which requests to add another block are rejected (with a suitably high default so as not to prevent legitimate large files)
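
For reference, both limits surface as NameNode-side settings. A sketch of how an operator might pin them in hdfs-site.xml (the min-block-size key and its 1048576 default come from the failure log above; the max-blocks-per-file key name and value are assumed counterparts and should be checked against the committed patch):

{noformat}
<!-- Reject create() calls that request a block size below 1 MB. -->
<property>
  <name>dfs.namenode.fs-limits.min-block-size</name>
  <value>1048576</value>
</property>
<!-- Reject adding another block once a file already has this many
     (key name assumed; verify against the committed patch). -->
<property>
  <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
  <value>1048576</value>
</property>
{noformat}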

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
