Subject: Run multiple HDFS instances
From: Lixiang Ao
To: user@hadoop.apache.org
Date: Thu, 18 Apr 2013 23:29:00 +0800

I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works! Everything looks fine now.

The direct command "hdfs namenode" seems to give a better sense of control :)

Thanks a lot.

On Thursday, April 18, 2013, Harsh J wrote:

> Yes, you can, but if you want the scripts to work, you should have them
> use a different PID directory (I think it's called HADOOP_PID_DIR)
> every time you invoke them.
>
> I instead prefer to start the daemons via their direct commands, such
> as "hdfs namenode", and move them to the background, with a
> redirect for logging.
>
> On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao wrote:
> > Hi all,
> >
> > Can I run multiple HDFS instances, that is, n separate namenodes and n
> > datanodes, on a single machine?
> >
> > I've modified core-site.xml and hdfs-site.xml to avoid port and file
> > conflicts between the HDFS instances, but when I started the second
> > HDFS, I got the errors:
> >
> > Starting namenodes on [localhost]
> > localhost: namenode running as process 20544. Stop it first.
> > localhost: datanode running as process 20786. Stop it first.
> > Starting secondary namenodes [0.0.0.0]
> > 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
> >
> > Is there a way to solve this?
> > Thank you in advance,
> >
> > Lixiang Ao
>
> --
> Harsh J
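[Editor's note] The two workarounds Harsh describes could look roughly like this. This is only a sketch: HADOOP_PID_DIR and the "hdfs namenode" command come from the thread, but the instance directories, HADOOP_CONF_DIR values, and log paths are assumptions, not from the thread.

```shell
# Approach 1: give each instance its own PID directory so the start
# scripts don't find the first instance's PID files and refuse to start.
# (Hypothetical directories; adjust to your layout.)
HADOOP_PID_DIR=/tmp/hdfs-instance2/pids \
HADOOP_CONF_DIR=/opt/hdfs-instance2/conf \
    sbin/start-dfs.sh

# Approach 2: skip the scripts and run the daemons directly,
# backgrounded, with stdout/stderr redirected to per-daemon logs.
HADOOP_CONF_DIR=/opt/hdfs-instance2/conf \
    hdfs namenode > /tmp/hdfs-instance2/namenode.log 2>&1 &
HADOOP_CONF_DIR=/opt/hdfs-instance2/conf \
    hdfs datanode > /tmp/hdfs-instance2/datanode.log 2>&1 &
```

Approach 2 sidesteps the PID-file check entirely, which is why it gives the "better sense of control" mentioned above: each daemon is an ordinary background process you can inspect, log, and kill yourself.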
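[Editor's note] The original question mentions editing core-site.xml and hdfs-site.xml so the instances don't collide on ports and storage directories. A second instance's hdfs-site.xml might override the defaults along these lines; the property names are standard Hadoop 2.x keys, but the specific ports and paths are assumptions for illustration, not taken from the thread.

```xml
<!-- hdfs-site.xml for a hypothetical second instance: move every
     listener and storage directory off the first instance's defaults. -->
<configuration>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>localhost:50071</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>localhost:50011</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/tmp/hdfs-instance2/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/tmp/hdfs-instance2/data</value>
  </property>
</configuration>
```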