From: Mirko Kämpf <mirko.kaempf@gmail.com>
Date: Wed, 25 Mar 2015 15:20:03 +0000
Subject: Re: can block size for namenode be different from datanode block size?
To: user@hadoop.apache.org

Hi Mich,

please see the comments in your text.

2015-03-25 15:11 GMT+00:00 Dr Mich Talebzadeh <mich@peridale.co.uk>:

> Hi,
>
> The block size for HDFS is currently set to 128 MB by default. This is
> configurable.

Correct, and an HDFS client can override the config property and define a different block size for the HDFS blocks it writes.

> My point is that I assume this parameter in hadoop-core.xml sets the
> block size for both namenode and datanode.

Correct, the block size is an HDFS-wide setting, but in general it is the HDFS client that creates the blocks.

> However, the storage and
> random access for metadata in namenode is different and suits smaller
> block sizes.

The HDFS block size has no impact here. NameNode metadata is held in memory; for reliability it is dumped to the local disks of the server.

> For example in Linux the OS block size is 4k which means one HDFS block
> size of 128MB can hold 32K OS blocks. For metadata this may not be
> useful and smaller block size will be suitable and hence my question.

Remember, the metadata is in memory. The fsimage file, which contains the metadata, is loaded on startup of the NameNode. Don't confuse the two types of block sizes.

Hope this helps a bit.

Cheers,
Mirko

> Thanks,
>
> Mich
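For reference, the setting under discussion lives in hdfs-site.xml in Hadoop 2.x (the property name is dfs.blocksize; the older dfs.block.size from Hadoop 1.x is deprecated). A minimal sketch of the cluster-wide default:

```xml
<!-- hdfs-site.xml: cluster-wide default HDFS block size (128 MB) -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
</property>
```

Because it is ultimately a client-side setting, it can also be overridden per command without touching the cluster config, e.g. `hdfs dfs -D dfs.blocksize=67108864 -put file.dat /data/`, which writes that one file with 64 MB blocks.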
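The arithmetic in the quoted question can be checked quickly (a trivial sketch; the 4 KiB figure is the typical Linux filesystem block size Mich assumes):

```python
# How many 4 KiB OS blocks fit into one 128 MiB HDFS block?
hdfs_block = 128 * 1024 * 1024  # 128 MiB, the HDFS default discussed above
os_block = 4 * 1024             # 4 KiB, typical Linux filesystem block size

print(hdfs_block // os_block)   # the "32K" figure from the email
```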