hadoop-hdfs-user mailing list archives

From Vinayakumar B <vinayakumar...@huawei.com>
Subject [Important] What is the practical maximum HDFS blocksize used in clusters?
Date Tue, 16 Feb 2016 08:04:13 GMT
Hi All,

Just wanted to know: what is the largest dfs.block.size used in practice in production/test clusters?

  The current default value is 128 MB, and the setting can go up to 128 TB (yup, right. It's just a configuration value, though).
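  For reference, a minimal sketch of how that value is usually set, assuming Hadoop 2.x property names (where dfs.blocksize replaced the deprecated dfs.block.size):

    <!-- hdfs-site.xml: cluster-wide default block size for newly written files -->
    <property>
      <name>dfs.blocksize</name>
      <!-- plain bytes or k/m/g suffixes, e.g. 134217728 or 128m -->
      <value>128m</value>
    </property>

  It can also be overridden per write without touching the cluster config, e.g. hdfs dfs -D dfs.blocksize=1g -put bigfile /data/bigfile.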

   I have seen clusters using up to a 1 GB block size for big files.

   Is anyone using a block size larger than 2 GB?
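
   For concreteness: the block size is a long throughout the client API, so values above 2 GB are at least representable; whether they behave well in practice is exactly what I'm asking. A minimal sketch of a per-file override (hypothetical class name and path, assuming a reachable cluster from the classpath config):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BigBlockWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // 4 GB per-file block size, deliberately above the 2 GB mark
            long blockSize = 4L * 1024 * 1024 * 1024;
            short replication = fs.getDefaultReplication(new Path("/"));
            // create(path, overwrite, bufferSize, replication, blockSize)
            try (FSDataOutputStream out = fs.create(new Path("/tmp/bigblock.dat"),
                    true, 4096, replication, blockSize)) {
                out.writeBytes("test");
            }
            fs.close();
        }
    }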

  This is just to check whether any compatibility issues would arise if we reduced the maximum supported block size to 32 GB (to be on the safer side).
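
  If I remember the hdfs-default.xml naming right, the NameNode already has a configurable cap on requested block sizes, so a hard 32 GB limit would effectively just bound this setting:

    <!-- hdfs-site.xml on the NameNode: reject creates requesting a larger block -->
    <property>
      <name>dfs.namenode.fs-limits.max-block-size</name>
      <!-- 32 GB = 34359738368 bytes, matching the proposed cap -->
      <value>34359738368</value>
    </property>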

-vinay
