hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5517) Lower the default maximum number of blocks per file
Date Wed, 30 Nov 2016 00:58:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15707100#comment-15707100 ]

Andrew Wang commented on HDFS-5517:
-----------------------------------

The checkstyle warnings are ones I'm apt to ignore (line length in DFSConfigKeys, method length
in an existing test). The test failure is due to a port conflict, and it is not one of the tests
that failed last time.

> Lower the default maximum number of blocks per file
> ---------------------------------------------------
>
>                 Key: HDFS-5517
>                 URL: https://issues.apache.org/jira/browse/HDFS-5517
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-5517.002.patch, HDFS-5517.003.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set the default
> to 1 million. In practice this limit is so high as to never be hit, whereas we know that an
> individual file with tens of thousands of blocks can cause problems. We should lower the
> default value, in my opinion to 10k.
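
For reference, a minimal sketch of how an operator could override this limit in hdfs-site.xml,
assuming the dfs.namenode.fs-limits.max-blocks-per-file property introduced by HDFS-4305 (the
value shown is illustrative, matching the 10k default proposed here):

    <!-- hdfs-site.xml: cap the number of blocks a single file may have.
         Attempts to allocate blocks beyond the cap are rejected by the NameNode. -->
    <property>
      <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
      <value>10000</value>
    </property>

Clusters that legitimately need very large files (for example, with a small block size) could
raise this value instead of relying on the old effectively-unbounded default.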



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

