Just wanted to know: what is the maximum practical dfs.block.size used in production/test clusters?
The current default value is 128MB, and the configuration can accept values up to 128TB (yes, really; that is just the configuration limit, though, not a practical one).
I have seen clusters using up to a 1GB block size for big files.
Is anyone using a block size >2GB?
This is just to check whether any compatibility issues would arise if we reduce the maximum supported block size to 32GB (to be on the safer side).
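For context, a sketch of the setting I mean, in hdfs-site.xml (dfs.block.size is the deprecated alias; current releases use dfs.blocksize, and the 1GB value here is only an illustration):

```xml
<!-- hdfs-site.xml: example of a non-default block size -->
<property>
  <name>dfs.blocksize</name>
  <!-- 1GB in bytes; size suffixes such as 1g are also accepted -->
  <value>1073741824</value>
</property>
```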