From: Vikas Jadhav <vikascjadhav87@gmail.com>
To: user@hadoop.apache.org
Date: Wed, 6 Mar 2013 15:04:42 +0530
Subject: Re: Hadoop cluster setup - could not see second datanode

1) Check whether you can ssh to the other node from the namenode.

Set your configuration carefully:

    <property>
      <name>fs.default.name</name>
      <value>localhost:9000</value>
    </property>

Replace "localhost" with the hostname of the node running the namenode; that
hostname should be resolvable (try pinging that node from the other node).

2) Go to the conf/slaves file and check that it has the following lines:

    1. namenodeip
    2. datanodeip

On Wed, Mar 6, 2013 at 9:29 AM, Brahma Reddy Battula <
brahmareddy.battula@huawei.com> wrote:

> Although Hadoop is designed and developed for distributed computing, it
> can be run on a single node in pseudo-distributed mode, and with multiple
> data nodes on a single machine. Developers often run multiple data nodes on
> a single node to develop and test distributed features, data node behavior,
> name node interaction with data nodes, and for other reasons.
>
> Please go through the following blog for the same:
> http://www.blogger.com/blogger.g?blogID=2277703965936900657#editor/target=post;postID=8231904039775612388
> ------------------------------
> *From:* Robert Evans [evans@yahoo-inc.com]
> *Sent:* Tuesday, March 05, 2013 11:57 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Hadoop cluster setup - could not see second datanode
>
> Why would you need several data nodes? It is simple to have one data
> node and one name node on the same machine. I believe that you can make
> multiple data nodes run on the same machine, but it would take quite a bit
> of configuration work to do it, and it would only really be helpful for you
> to do some very specific testing involving multiple data nodes.
>
> --Bobby
>
> From: 卖报的小行家 <85469843@qq.com>
> Reply-To: "user@hadoop.apache.org"
> Date: Tuesday, March 5, 2013 8:41 AM
> To: user
> Subject: Re: RE: Hadoop cluster setup - could not see second datanode
>
> Hello,
> Can a Namenode and several datanodes exist on one machine?
> I only have one PC. I want to configure it this way.
>
> BRs//Julian
>
> ------------------ Original ------------------
> *From:* "AMARNATH, Balachandar"
> *Date:* Tue, Mar 5, 2013 07:55 PM
> *To:* "user@hadoop.apache.org"
> *Subject:* RE: Hadoop cluster setup - could not see second datanode
>
> I fixed the below issue :)
>
> Regards
> Bala
>
> *From:* AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
> *Sent:* 05 March 2013 17:05
> *To:* user@hadoop.apache.org
> *Subject:* Hadoop cluster setup - could not see second datanode
>
> Thanks for the information.
>
> Now I am trying to install Hadoop DFS using 2 nodes: a namenode cum
> datanode, and a separate datanode.
> I use the following configuration for my hdfs-site.xml:
>
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>localhost:9000</value>
>   </property>
>
>   <property>
>     <name>dfs.data.dir</name>
>     <value>/home/bala/data</value>
>   </property>
>
>   <property>
>     <name>dfs.name.dir</name>
>     <value>/home/bala/name</value>
>   </property>
> </configuration>
>
> In the namenode, I have added the datanode hostnames (machine1 and machine2).
>
> When I do 'start-all.sh', I see in the log that the data node is starting on
> both machines, but when I went to the browser on the namenode, I saw only one
> live node (that is, the namenode which is configured as a datanode).
>
> Any hint here will help me.
>
> With regards
> Bala
>
> *From:* Mahesh Balija [mailto:balijamahesh.mca@gmail.com]
> *Sent:* 05 March 2013 14:15
> *To:* user@hadoop.apache.org
> *Subject:* Re: Hadoop file system
>
> You can use HDFS alone in distributed mode to fulfill your requirement.
> HDFS has the FileSystem Java API, through which you can interact with
> HDFS from your client.
> HDFS is good if you have a small number of files with huge sizes, rather
> than many files with small sizes.
>
> Best,
> Mahesh Balija,
> Calsoft Labs.
>
> On Tue, Mar 5, 2013 at 10:43 AM, AMARNATH, Balachandar <
> BALACHANDAR.AMARNATH@airbus.com> wrote:
>
> Hi,
>
> I am new to HDFS. In my Java application, I need to perform a 'similar
> operation' over a large number of files. I would like to store those files
> on distributed machines. I don't think I will need the map-reduce paradigm,
> but I would like to use HDFS for file storage and access. Is it possible
> (or a nice idea) to use HDFS as a standalone system? And are Java APIs
> available to work with HDFS so that I can read/write in a distributed
> environment? Any thoughts here will be helpful.
>
> With thanks and regards
> Balachandar
>
> The information in this e-mail is confidential. The contents may not be
> disclosed or used by anyone other than the addressee.
> Access to this e-mail by anyone else is unauthorised.
>
> If you are not the intended recipient, please notify Airbus immediately
> and delete this e-mail.
>
> Airbus cannot accept any responsibility for the accuracy or completeness
> of this e-mail as it has been sent over public networks. If you have any
> concerns over the content of this message or its accuracy or integrity,
> please contact Airbus immediately.
>
> All outgoing e-mails from Airbus are checked using regularly updated virus
> scanning software but you should take whatever measures you deem to be
> appropriate to ensure that this message and any attachments are virus free.

--
*Thanx and Regards*
*Vikas Jadhav*
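To make the fs.default.name advice concrete: a sketch of the entry with the namenode's hostname substituted for localhost. The hostname `master` and port 9000 are illustrative, not from the thread; in Hadoop 1.x this property conventionally lives in core-site.xml.

```xml
<!-- core-site.xml (every node in the cluster should carry the same value) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
```

With this in place on every node, and `master` resolvable from each of them, the second datanode should be able to register with the namenode instead of looking for one on its own localhost.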
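The pre-flight checks in steps 1) and 2) above can be sketched as a small script. This is a minimal sketch, not from the original thread: the conf/slaves path follows the Hadoop 1.x convention, and the script generates a sample file containing only `localhost` for illustration; a real file would list the actual datanode hostnames.

```shell
# Sanity-check a Hadoop 1.x slaves file: every hostname listed in it
# should resolve from this node, per the advice above.
# For illustration we create a sample conf/slaves containing only localhost.
mkdir -p conf
printf 'localhost\n' > conf/slaves

while read -r host; do
  # getent consults the same resolver that ssh and ping would use
  if getent hosts "$host" > /dev/null; then
    echo "$host: resolvable"
  else
    echo "$host: NOT resolvable"
  fi
done < conf/slaves
```

A clean run prints one `resolvable` line per host; any host reported `NOT resolvable` is a candidate cause of the missing-datanode symptom discussed in the thread.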