hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file
Date Mon, 28 Nov 2016 21:20:59 GMT

     [ https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HDFS-5517:
------------------------------
    Attachment: HDFS-5517.002.patch

I did the trivial rebase for this change; patch attached.

> Lower the default maximum number of blocks per file
> ---------------------------------------------------
>
>                 Key: HDFS-5517
>                 URL: https://issues.apache.org/jira/browse/HDFS-5517
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-5517.002.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set the default
> to 1 million. In practice this limit is so high that it is never hit, whereas we know that an
> individual file with tens of thousands of blocks can cause problems. We should lower the
> default value, in my opinion to 10k.
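
A minimal hdfs-site.xml sketch for operators who want the lower cap before a release ships
with the new default, assuming the dfs.namenode.fs-limits.max-blocks-per-file property
introduced by HDFS-4305 (the 10000 value mirrors the proposal above):

  <property>
    <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
    <value>10000</value>
    <!-- Checked by the NameNode when allocating a new block; a write that would
         push a file past this limit fails rather than growing NameNode memory. -->
  </property>

This is a NameNode-side setting, so it belongs in the NameNode's hdfs-site.xml and takes
effect on restart; setting it in a client configuration has no effect.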



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

