hadoop-hdfs-issues mailing list archives

From "Akira Ajisaka (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5517) Lower the default maximum number of blocks per file
Date Tue, 29 Nov 2016 11:56:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15705094#comment-15705094 ]

Akira Ajisaka commented on HDFS-5517:
-------------------------------------

bq. -1	unit	63m 3s	hadoop-hdfs in the patch failed.
The failure looks related to this patch. Hi [~andrew.wang] and [~atm], would you mind fixing it?

> Lower the default maximum number of blocks per file
> ---------------------------------------------------
>
>                 Key: HDFS-5517
>                 URL: https://issues.apache.org/jira/browse/HDFS-5517
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>              Labels: BB2015-05-TBR
>             Fix For: 3.0.0-alpha2
>
>         Attachments: HDFS-5517.002.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set the default
> to 1MM (one million). In practice this limit is so high that it is never hit, whereas we know
> that an individual file with tens of thousands of blocks can cause problems. We should lower
> the default value, in my opinion to 10k.
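
For reference, the limit discussed above is controlled by a NameNode configuration property
(assumed here to be dfs.namenode.fs-limits.max-blocks-per-file, the key introduced by HDFS-4305).
A minimal hdfs-site.xml sketch using the 10k value proposed in this issue:

    <!-- Sketch only: property key assumed from HDFS-4305; 10000 is the value
         proposed in this issue, not necessarily the shipped default. -->
    <property>
      <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
      <value>10000</value>
      <description>Maximum number of blocks a single file may have; the
        NameNode refuses to allocate further blocks for the file beyond
        this limit.</description>
    </property>

Because the limit is enforced by the NameNode (this issue's component), the setting belongs in
the NameNode's hdfs-site.xml rather than in client-side configuration.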



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


