From: jagaran das
To: "common-user@hadoop.apache.org", jagaran das
Date: Wed, 10 Aug 2011 22:37:17 +0530 (IST)
Subject: Re: Namenode Scalability

To be precise, the projected data is around 1 PB.
But the publishing rate is also around 1 GBps.

Please suggest.
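[Editor's note: a rough back-of-envelope check on the two figures above, assuming "GBps" means gigabytes per second (if it means gigabits, divide the results by 8). This is only a sketch of the arithmetic, not part of the original mail.]

    // Back-of-envelope check of the stated rates.
    public class RateCheck {
        public static void main(String[] args) {
            double publishGBperSec  = 1.0;                                   // stated publishing rate
            double secondsPerDay    = 24 * 60 * 60;                          // 86,400 s
            double ingestedTBperDay = publishGBperSec * secondsPerDay / 1000.0; // ~86 TB/day at 1 GB/s
            double neededGBperSec   = 1000.0 * 1000.0 / secondsPerDay;          // 1 PB/day ~= 11.6 GB/s sustained
            System.out.printf("1 GB/s sustained  = ~%.0f TB/day%n", ingestedTBperDay);
            System.out.printf("1 PB/day requires = ~%.1f GB/s sustained%n", neededGBperSec);
        }
    }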
=0ATo: "common-user@= hadoop.apache.org" =0ASent: Wednesday, 10 Au= gust 2011 12:58 AM=0ASubject: Namenode Scalability=0A=0AIn my current proje= ct we =A0are planning to streams of data to Namenode (20 Node Cluster).=0AD= ata Volume would be around 1 PB per day.=0ABut there are application which = can publish data at 1GBPS.=0A=0AFew queries:=0A=0A1. Can a single Namenode = handle such high speed writes? Or it becomes unresponsive when GC cycle kic= ks in.=0A2. Can we have multiple=A0federated=A0Name nodes=A0=A0sharing the = same slaves and then we can distribute the writes accordingly.=0A3. Can mul= tiple region servers of HBase help us ??=0A=0APlease suggest how we can des= ign the streaming part to handle such scale of data.=A0=0A=0ARegards,=0AJag= aran Das=A0 --0-2096955307-1312996037=:34735--