hadoop-user mailing list archives

From Mirko Kämpf <mirko.kae...@gmail.com>
Subject Re: can block size for namenode be different from datanode block size?
Date Wed, 25 Mar 2015 15:20:03 GMT
Hi Mich,

please see the comments in your text.



2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <mich@peridale.co.uk>:

>
> Hi,
>
> The block size for HDFS is currently set to 128MB by default. This is
> configurable.
>
Correct, an HDFS client can override the configuration property and use a
different block size when it writes files to HDFS.
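To make this concrete: the cluster-wide default lives in hdfs-site.xml (the property name below, dfs.blocksize, is the one used in recent Hadoop 2.x releases; older releases used dfs.block.size), and a client can override it per command. A sketch, not taken from this thread:

```xml
<!-- hdfs-site.xml: cluster-wide default block size, 128 MB in bytes -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
</property>
```

A client can then override it for a single write via the generic -D option, e.g.
hdfs dfs -D dfs.blocksize=67108864 -put bigfile /data/bigfile
which writes that one file with 64 MB blocks regardless of the cluster default.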

>
> My point is that I assume this parameter in hadoop-core.xml sets the
> block size for both namenode and datanode.

Correct, the block size is an HDFS-wide setting, but in general it is the
HDFS client that splits a file into blocks.
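The splitting is simple arithmetic on the client side. A minimal sketch (the helper below is illustrative, not part of any Hadoop API):

```python
import math

def num_blocks(file_size_bytes, block_size_bytes=128 * 1024 * 1024):
    """How many HDFS blocks a client writes for a file of the given size.

    The last block may be smaller than block_size_bytes; HDFS does not
    pad it to the full block size.
    """
    if file_size_bytes == 0:
        return 0
    return math.ceil(file_size_bytes / block_size_bytes)

print(num_blocks(1024 ** 3))         # 1 GiB file -> 8 blocks of 128 MiB
print(num_blocks(200 * 1024 ** 2))   # 200 MiB file -> 2 blocks (128 + 72 MiB)
```

Note that a file smaller than the block size occupies only its actual size on the datanodes; the block size is an upper bound per block, not an allocation unit.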


> However, the storage and
> random access patterns for metadata in the namenode are different and
> suit smaller block sizes.
>
The HDFS block size has no impact here. NameNode metadata is held in
memory. For reliability it is persisted to the local disks of the server.
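Because the metadata lives in RAM, the relevant sizing question is heap, not block size. A common community rule of thumb (an assumption here, not stated in this thread) is roughly 150 bytes of NameNode heap per namespace object, where every file, directory, and block counts as one object. A back-of-the-envelope sketch:

```python
# Rough rule of thumb (assumption, not from this thread): ~150 bytes of
# NameNode heap per namespace object (file, directory, or block).
BYTES_PER_OBJECT = 150

def namenode_heap_estimate(num_files, blocks_per_file=1):
    """Very rough NameNode heap estimate in bytes, ignoring directories."""
    objects = num_files + num_files * blocks_per_file
    return objects * BYTES_PER_OBJECT

# 10 million single-block files -> 20 million objects -> ~3 GB of heap
print(namenode_heap_estimate(10_000_000))  # 3000000000
```

This is also why many small files strain the NameNode more than a few large ones: object count, not total data volume, drives its memory footprint.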


>
> For example in Linux the OS block size is 4k, which means one HDFS block
> of 128MB can hold 32K OS blocks. For metadata this may not be
> useful, and a smaller block size would be more suitable, hence my question.
>
Remember, the metadata is in memory. The fsimage file, which contains the
metadata, is loaded on startup of the NameNode.

Please don't confuse the two types of block sizes (OS filesystem blocks
vs. HDFS blocks).

Hope this helps a bit.
Cheers,
Mirko


>
> Thanks,
>
> Mich
>
