hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file
Date Thu, 01 Dec 2016 00:01:03 GMT

     [ https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HDFS-5517:
------------------------------
       Resolution: Fixed
    Fix Version/s: 3.0.0-alpha2
           Status: Resolved  (was: Patch Available)

Committed. Thanks for the review, Akira, and sorry again for missing the broken tests earlier.

> Lower the default maximum number of blocks per file
> ---------------------------------------------------
>
>                 Key: HDFS-5517
>                 URL: https://issues.apache.org/jira/browse/HDFS-5517
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>              Labels: BB2015-05-TBR
>             Fix For: 3.0.0-alpha2
>
>         Attachments: HDFS-5517.002.patch, HDFS-5517.003.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set the default
> to 1MM (one million). In practice this limit is so high that it is never hit, whereas we
> know that an individual file with tens of thousands of blocks can cause problems. We
> should lower the default value, in my opinion to 10k.
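
For context, the limit in question is the dfs.namenode.fs-limits.max-blocks-per-file
property introduced by HDFS-4305. A minimal sketch of overriding it in hdfs-site.xml,
assuming the 10k value adopted here (clusters that legitimately need larger single
files can raise it):

    <!-- hdfs-site.xml: cap on the number of blocks a single file may have.
         The NameNode rejects block allocations past this limit.
         10000 is the lowered default from this change; tune as needed. -->
    <property>
      <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
      <value>10000</value>
    </property>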




