From: Lixiang Ao <aolixiang@gmail.com>
To: user@hadoop.apache.org, chris@embree.us
Date: Fri, 19 Apr 2013 00:32:21 +0800
Subject: Re: Run multiple HDFS instances

Actually I'm trying to do something like combining multiple namenodes so that they present themselves to clients as a single namespace, implementing basic namenode functionalities.

On Thursday, April 18, 2013, Chris Embree wrote:
Glad you got this working... can you explain your use case a little? I'm trying to understand why you might want to do that.


On Thu, Apr 18, 2013 at 11:29 AM, Lixiang Ao <aolixiang@gmail.com> wrote:
I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works! Everything looks fine now.

The direct command "hdfs namenode" seems to give a better sense of control :)

Thanks a lot.

On Thursday, April 18, 2013, Harsh J wrote:

Yes you can, but if you want the scripts to work, you should have them
use a different PID directory (I think it's called HADOOP_PID_DIR)
every time you invoke them.
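
For reference, one way that per-invocation override can look; the paths here are only examples, and on a setup where the start scripts go through ssh the same exports may instead belong in that instance's etc/hadoop/hadoop-env.sh:

# Give the second HDFS instance its own config and PID directories so its
# pid files don't collide with the ones written by the first instance.
export HADOOP_CONF_DIR=/srv/hdfs2/conf   # example path, not from this thread
export HADOOP_PID_DIR=/srv/hdfs2/pids    # read by sbin/hadoop-daemon.sh
mkdir -p "$HADOOP_PID_DIR"

# Start the second instance with the usual scripts.
sbin/start-dfs.sh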

I instead prefer to start the daemons up via their direct commands, such
as "hdfs namenode" and so on, and move them to the background, with a
redirect for logging.
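
A rough sketch of that style, with placeholder paths for the second instance's config and logs:

# Start each daemon of the second instance directly, push it to the
# background, and redirect its stdout/stderr to a log file.
export HADOOP_CONF_DIR=/srv/hdfs2/conf          # placeholder path
mkdir -p /srv/hdfs2/logs
nohup hdfs namenode > /srv/hdfs2/logs/namenode.log 2>&1 &
nohup hdfs datanode > /srv/hdfs2/logs/datanode.log 2>&1 &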

On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao <aolixiang@gmail.com> wrote:
> Hi all,
>
> Can I run multiple HDFS instances, that is, n separate namenodes and n
> datanodes, on a single machine?
>
> I've modified core-site.xml and hdfs-site.xml to avoid port and file
> conflicts between the HDFS instances (example overrides appear at the
> end of this mail), but when I started the second HDFS, I got these
> errors:
>
> Starting namenodes on [localhost]
> localhost: namenode running as process 20544. Stop it first.
> localhost: datanode running as process 20786. Stop it first.
> Starting secondary namenodes [0.0.0.0]
> 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
>
> Is there a way to solve this?
> Thank you in advance,
>
> Lixiang Ao



--
Harsh J
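
For completeness, the per-instance overrides mentioned in the original question (separate ports and storage directories for the second instance) could be written roughly as below; the port numbers and paths are illustrative only and assume the first instance runs on the defaults:

# Sketch of a second instance's config dir; all values are examples.
CONF2=/srv/hdfs2/conf
mkdir -p "$CONF2" /srv/hdfs2/name /srv/hdfs2/data

cat > "$CONF2/core-site.xml" <<'EOF'
<configuration>
  <!-- RPC port different from the first instance's fs.defaultFS -->
  <property><name>fs.defaultFS</name><value>hdfs://localhost:9001</value></property>
</configuration>
EOF

cat > "$CONF2/hdfs-site.xml" <<'EOF'
<configuration>
  <!-- Separate on-disk storage so the instances don't share metadata or blocks -->
  <property><name>dfs.namenode.name.dir</name><value>file:///srv/hdfs2/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:///srv/hdfs2/data</value></property>
  <!-- Non-default ports so the daemons don't collide with the first instance -->
  <property><name>dfs.namenode.http-address</name><value>0.0.0.0:50170</value></property>
  <property><name>dfs.datanode.address</name><value>0.0.0.0:50110</value></property>
  <property><name>dfs.datanode.http.address</name><value>0.0.0.0:50175</value></property>
  <property><name>dfs.datanode.ipc.address</name><value>0.0.0.0:50120</value></property>
  <property><name>dfs.namenode.secondary.http-address</name><value>0.0.0.0:50190</value></property>
</configuration>
EOF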
