From: "shefali pawar" <shefali_p@rediffmail.com>
Reply-To: core-user@hadoop.apache.org
To: core-user@hadoop.apache.org
Date: 4 Mar 2009 06:06:31 -0000
Subject: Re: Regarding "Hadoop multi cluster" set-up
Message-ID: <20090304060631.20655.qmail@f5mail-237-204.rediffmail.com>

We set up a dedicated LAN connecting the two computers through a switch. I think that made the difference, and the two-node cluster is working fine now. Also, we are now working on Ubuntu rather than Fedora.

Thanks for all the help.

Shefali


On Thu, 12 Feb 2009 shefali pawar wrote:
> I changed the value... It is still not working!
>
> Shefali
>
> On Tue, 10 Feb 2009 22:23:10 +0530, Nitesh Bhatia wrote:
> > In hadoop-site.xml, change
> >
> >     master:54311
> >
> > to
> >
> >     hdfs://master:54311
> >
> > --nitesh
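A quick way to see which values the configuration actually contains before and after such an edit; a sketch, assuming GNU grep and the stock conf/ layout:

    # Print each matching <name> line plus the <value> line that follows it
    grep -A1 -e 'fs.default.name' -e 'mapred.job.tracker' conf/hadoop-site.xml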
> > On Tue, Feb 10, 2009 at 9:50 PM, shefali pawar wrote:
> > > I tried that, but it is not working either!
> > >
> > > Shefali
> > >
> > > On Sun, 08 Feb 2009 05:27:54 +0530, Amandeep Khurana wrote:
> > > > I ran into this trouble again. This time, formatting the namenode didn't help. So, I changed the directories where the metadata and the data were being stored. That made it work.
> > > >
> > > > You might want to check this at your end too.
> > > >
> > > > Amandeep
> > > >
> > > > PS: I don't have an explanation for how or why this made it work.
> > > >
> > > > Amandeep Khurana
> > > > Computer Science Graduate Student
> > > > University of California, Santa Cruz
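A sketch of the kind of change Amandeep describes: the property names below are the stock 0.19 ones, the paths are invented for illustration, and re-formatting discards all existing HDFS data.

    # In conf/hadoop-site.xml, point dfs.name.dir (namenode metadata) and
    # dfs.data.dir (datanode blocks) at fresh, empty directories, e.g.
    # /home/hadoop/dfs/name and /home/hadoop/dfs/data, then:
    mkdir -p /home/hadoop/dfs/name /home/hadoop/dfs/data
    bin/stop-dfs.sh
    bin/hadoop namenode -format   # wipes all filesystem metadata
    bin/start-dfs.sh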
> > > > On Sat, Feb 7, 2009 at 9:06 AM, jason hadoop wrote:
> > > > > On your master machine, use the netstat command to determine which ports and addresses the namenode process is listening on.
> > > > >
> > > > > On the datanode machines, examine the log files to verify that the datanode has attempted to connect to the namenode IP address on one of those ports, and was successful.
> > > > >
> > > > > The common ports used for the datanode -> namenode rendezvous are 50010, 54320 and 8020, depending on your Hadoop version.
> > > > >
> > > > > If the datanodes have been started and the connection to the namenode failed, there will be a log message with a socket error, indicating which host and port the datanode used to attempt to communicate with the namenode. Verify that that IP address is correct for your namenode and reachable from the datanode host (for multi-homed machines this can be an issue), and that the port listed is one of the TCP ports that the namenode process is listening on.
> > > > >
> > > > > For Linux, you can use the command
> > > > >
> > > > >     netstat -a -t -n -p | grep java | grep LISTEN
> > > > >
> > > > > to determine the IP addresses, ports and pids of the java processes that are listening for TCP socket connections, and the jps command from the bin directory of your Java installation to determine the pid of the namenode.
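A concrete run of that check, as a sketch; the sample output line is invented, not taken from the original logs:

    # Run as root on the master so -p can report process ids
    netstat -a -t -n -p | grep java | grep LISTEN
    # A healthy namenode shows something like (pid made up):
    #   tcp  0  0 172.16.0.46:54310  0.0.0.0:*  LISTEN  4231/java
    # If it is bound to 127.0.0.1:54310 instead, remote datanodes can
    # never reach it, whatever the firewall does.
    jps   # from the JDK's bin directory; names each java pid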
> > > > > On Sat, Feb 7, 2009 at 6:27 AM, shefali pawar wrote:
> > > > > > Hi,
> > > > > >
> > > > > > No, not yet. We are still struggling! If you find the solution please let me know.
> > > > > >
> > > > > > Shefali
> > > > > >
> > > > > > On Sat, 07 Feb 2009 02:56:15 +0530, Amandeep Khurana wrote:
> > > > > > > I had to change the master on my running cluster and ended up with the same problem. Were you able to fix it at your end?
> > > > > > >
> > > > > > > Amandeep
> > > > > > >
> > > > > > > Amandeep Khurana
> > > > > > > Computer Science Graduate Student
> > > > > > > University of California, Santa Cruz
> > > > > > >
> > > > > > > On Thu, Feb 5, 2009 at 8:46 AM, shefali pawar wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I do not think that the firewall is blocking the port, because it has been turned off on both the computers! And since it is a random port number, I do not think it should create a problem anyway.
> > > > > > > >
> > > > > > > > I do not understand what is going wrong!
> > > > > > > >
> > > > > > > > Shefali
> > > > > > > >
> > > > > > > > On Wed, 04 Feb 2009 23:23:04 +0530, John wrote:
> > > > > > > > > I'm not certain that the firewall is your problem, but if that port is blocked on your master you should open it to let communication through. Here is one website that might be relevant:
> > > > > > > > >
> > > > > > > > > http://stackoverflow.com/questions/255077/open-ports-under-fedora-core-8-for-vmware-server
> > > > > > > > >
> > > > > > > > > But again, this may not be your problem.
> > > > > > > > >
> > > > > > > > > John
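On Fedora the firewall in question is iptables, and its default REJECT rule answers with icmp-host-prohibited, which a connecting client reports as exactly the "No route to host" error quoted further down. A sketch of how to check, as root on the master:

    /sbin/service iptables status   # is the firewall actually running?
    /sbin/iptables -L -n            # list rules; look for REJECT lines
    /sbin/service iptables stop     # rule the firewall out temporarily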
> > > > > > > > > On Wed, Feb 4, 2009 at 12:46 PM, shefali pawar wrote:
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > I will have to check. I can do that tomorrow in college. But if that is the case, what should I do? Should I change the port number and try again?
> > > > > > > > > >
> > > > > > > > > > Shefali
> > > > > > > > > >
> > > > > > > > > > On Wed, 04 Feb 2009, S D wrote:
> > > > > > > > > > > Shefali,
> > > > > > > > > > >
> > > > > > > > > > > Is your firewall blocking port 54310 on the master?
> > > > > > > > > > >
> > > > > > > > > > > John
> > > > > > > > > > >
> > > > > > > > > > > On Wed, Feb 4, 2009 at 12:34 PM, shefali pawar wrote:
> > > > > > > > > > > > Hi,
> > > > > > > > > > > >
> > > > > > > > > > > > I am trying to set up a two-node cluster using Hadoop 0.19.0, with 1 master (which should also work as a slave) and 1 slave node.
> > > > > > > > > > > >
> > > > > > > > > > > > But while running bin/start-dfs.sh the datanode is not starting on the slave. I had read the previous mails on the list, but nothing seems to be working in this case. I am getting the following error in the hadoop-root-datanode-slave log file while running the command bin/start-dfs.sh:
> > > > > > > > > > > >
> > > > > > > > > > > > 2009-02-03 13:00:27,516 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> > > > > > > > > > > > /************************************************************
> > > > > > > > > > > > STARTUP_MSG: Starting DataNode
> > > > > > > > > > > > STARTUP_MSG:   host = slave/172.16.0.32
> > > > > > > > > > > > STARTUP_MSG:   args = []
> > > > > > > > > > > > STARTUP_MSG:   version = 0.19.0
> > > > > > > > > > > > STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.19 -r 713890; compiled by 'ndaley' on Fri Nov 14 03:12:29 UTC 2008
> > > > > > > > > > > > ************************************************************/
> > > > > > > > > > > > 2009-02-03 13:00:28,725 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.16.0.46:54310. Already tried 0 time(s).
> > > > > > > > > > > > [the same retry message repeats for attempts 1 through 8]
> > > > > > > > > > > > 2009-02-03 13:00:37,734 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.16.0.46:54310. Already tried 9 time(s).
> > > > > > > > > > > > 2009-02-03 13:00:37,738 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to master/172.16.0.46:54310 failed on local exception: No route to host
> > > > > > > > > > > >         at org.apache.hadoop.ipc.Client.call(Client.java:699)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
> > > > > > > > > > > >         at $Proxy4.getProtocolVersion(Unknown Source)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:306)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:343)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:288)
> > > > > > > > > > > >         at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:258)
> > > > > > > > > > > >         at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:205)
> > > > > > > > > > > >         at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1199)
> > > > > > > > > > > >         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1154)
> > > > > > > > > > > >         at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1162)
> > > > > > > > > > > >         at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1284)
> > > > > > > > > > > > Caused by: java.net.NoRouteToHostException: No route to host
> > > > > > > > > > > >         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> > > > > > > > > > > >         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
> > > > > > > > > > > >         at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:299)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.Client.getConnection(Client.java:772)
> > > > > > > > > > > >         at org.apache.hadoop.ipc.Client.call(Client.java:685)
> > > > > > > > > > > >         ... 12 more
> > > > > > > > > > > >
> > > > > > > > > > > > 2009-02-03 13:00:37,739 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> > > > > > > > > > > > /************************************************************
> > > > > > > > > > > > SHUTDOWN_MSG: Shutting down DataNode at slave/172.16.0.32
> > > > > > > > > > > > ************************************************************/
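Ping alone proves little here (Shefali notes below that cross-pinging works): ICMP can pass while TCP is filtered. A direct test of the failing port from the slave, assuming a telnet client is installed:

    telnet master 54310
    # "Connected to master." means the port is reachable; a "No route to
    # host" here reproduces the datanode's failure outside of Hadoop.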
> > > > > > > > > > > > Also, the pseudo-distributed operation is working on both the machines, and I am able to ssh from 'master to master' and 'master to slave' via a password-less ssh login. I do not think there is any problem with the network, because cross-pinging is working fine.
> > > > > > > > > > > >
> > > > > > > > > > > > I am working on Linux (Fedora 8).
> > > > > > > > > > > >
> > > > > > > > > > > > The following is the configuration which I am using.
> > > > > > > > > > > >
> > > > > > > > > > > > On master and slave, conf/masters looks like this:
> > > > > > > > > > > >
> > > > > > > > > > > >     master
> > > > > > > > > > > >
> > > > > > > > > > > > On master and slave, conf/slaves looks like this:
> > > > > > > > > > > >
> > > > > > > > > > > >     master
> > > > > > > > > > > >     slave
> > > > > > > > > > > >
> > > > > > > > > > > > On both the machines, conf/hadoop-site.xml looks like this:
> > > > > > > > > > > >
> > > > > > > > > > > >     <configuration>
> > > > > > > > > > > >       <property>
> > > > > > > > > > > >         <name>fs.default.name</name>
> > > > > > > > > > > >         <value>hdfs://master:54310</value>
> > > > > > > > > > > >         <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
> > > > > > > > > > > >       </property>
> > > > > > > > > > > >       <property>
> > > > > > > > > > > >         <name>mapred.job.tracker</name>
> > > > > > > > > > > >         <value>master:54311</value>
> > > > > > > > > > > >         <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
> > > > > > > > > > > >       </property>
> > > > > > > > > > > >       <property>
> > > > > > > > > > > >         <name>dfs.replication</name>
> > > > > > > > > > > >         <value>2</value>
> > > > > > > > > > > >         <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
> > > > > > > > > > > >       </property>
> > > > > > > > > > > >     </configuration>
> > > > > > > > > > > >
> > > > > > > > > > > > The namenode is formatted successfully by running "bin/hadoop namenode -format" on the master node.
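After formatting and bin/start-dfs.sh, a quick way to confirm what actually came up; a sketch, with the log file name taken from the message above:

    jps   # on the master: expect NameNode plus a DataNode (the master
          # doubles as a slave); on the slave: expect a DataNode
    tail -n 50 logs/hadoop-root-datanode-slave.log   # on the slave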
> > > > > > > > > > > > I am new to Hadoop and I do not know what is going wrong. Any help will be appreciated.
> > > > > > > > > > > >
> > > > > > > > > > > > Thanking you in advance,
> > > > > > > > > > > >
> > > > > > > > > > > > Shefali Pawar
> > > > > > > > > > > > Pune, India
> >
> > --
> > Nitesh Bhatia
> > Dhirubhai Ambani Institute of Information & Communication Technology
> > Gandhinagar
> > Gujarat
> >
> > "Life is never perfect. It just depends where you draw the line."
> >
> > visit:
> > http://www.awaaaz.com - connecting through music
> > http://www.volstreet.com - lets volunteer for better tomorrow
> > http://www.instibuzz.com - Voice opinions, Transact easily, Have fun