Subject: Problem in viewing WEB UI
From: ashish pareek <pareekash@gmail.com>
To: core-user@hadoop.apache.org
Date: Wed, 17 Jun 2009 11:12:38 +0530

Hi,

When I run the command *bin/hadoop dfsadmin -report* it shows that 2
datanodes are alive, but when I try http://hadoopmaster:50070/ it does
not open the http://hadoopmaster:50070/dfshealth.jsp page and instead
throws *HTTP error 404*. Why is this happening?

Regards,
Ashish Pareek

On Wed, Jun 17, 2009 at 10:06 AM, Sugandha Naolekar <
sugandha.n87@gmail.com> wrote:

> Well, you just have to specify the address in the URL bar as
> http://hadoopmaster:50070 and you'll be able to see the web UI!
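>
> If dfshealth.jsp still returns a 404 after that, it is worth checking
> which address the namenode's web UI is actually bound to. A minimal
> sketch for the namenode's hadoop-site.xml (untested, assuming the
> 0.18-era property name dfs.http.address and the hostnames used in
> this thread):
>
>   <property>
>     <!-- address:port the namenode's embedded web server listens on -->
>     <name>dfs.http.address</name>
>     <value>hadoopmaster:50070</value>
>   </property>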
> On Tue, Jun 16, 2009 at 7:17 PM, ashish pareek <pareekash@gmail.com> wrote:
>
>> Hi Sugandha,
>>           Hmmm... your suggestion helped, and now I am able to run
>> two datanodes: one on the same machine as the namenode and the other
>> on a different machine. Thanks a lot :)
>>
>> But the problem is that now I am not able to see the web UI for
>> either the datanodes or the namenode. Should I be setting some more
>> things in the site.xml? If so, please help.
>>
>> Thanking you again,
>> regards,
>> Ashish Pareek
>>
>> On Tue, Jun 16, 2009 at 3:10 PM, Sugandha Naolekar <
>> sugandha.n87@gmail.com> wrote:
>>
>>> Hi!
>>>
>>> First of all, get your concepts of Hadoop clear. You can refer to
>>> the following site:
>>> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)
>>>
>>>> I have a small doubt: in the master's and the slave's
>>>> hadoop-site.xml, can we have the same port numbers for both of
>>>> them? Like:
>>>>
>>>> for the slave:
>>>>
>>>>   <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>hdfs://hadoopslave:9000</value>
>>>>   </property>
>>>>
>>>> and for the master:
>>>>
>>>>   <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>hdfs://hadoopmaster:9000</value>
>>>>   </property>
>>>
>>> Well, any two daemons or services can run on the same port as long
>>> as they are not run on the same machine. If you wish to run the DN
>>> and the NN on the same machine, their port numbers have to be
>>> different.
>>>
>>>> And one more thing: can we have in the slave:
>>>>
>>>>   <property>
>>>>     <name>dfs.datanode.address</name>
>>>>     <value>hadoopmaster:9000</value>
>>>>     <value>hadoopslave:9001</value>
>>>>   </property>
>>>
>>> Also, fs.default.name is the tag which specifies the default
>>> filesystem, and generally it is run on the namenode. So its value
>>> has to be the namenode's address only, not a slave's.
>>>
>>>> Else, if you have a complete procedure for installing and running
>>>> Hadoop in a cluster, can you please send it to me? I need to set
>>>> up hadoop within two days and show it to my guide. Currently I am
>>>> doing my masters.
>>>>
>>>> Thanks for spending your time.
>>>
>>> Try the above, and this should work!
>>>
>>>> Regards,
>>>> Ashish Pareek
>>>>
>>>> On Tue, Jun 16, 2009 at 2:33 PM, Sugandha Naolekar <
>>>> sugandha.n87@gmail.com> wrote:
>>>>
>>>>> The following changes are to be done:
>>>>>
>>>>> Under the master folder:
>>>>>
>>>>> -> Put the slaves' addresses as well under the values of the
>>>>> dfs.datanode.address tag.
>>>>> -> You want to make the namenode act as a datanode as well. As
>>>>> per your config file, you have specified hadoopmaster in your
>>>>> slaves file. If you don't want that, remove it from the slaves
>>>>> file.
>>>>>
>>>>> Under the slave folder:
>>>>>
>>>>> -> Put only the slave's (the m/c where you intend to run your
>>>>> datanode) address under the dfs.datanode.address tag. It should
>>>>> go as such:
>>>>>
>>>>>   <property>
>>>>>     <name>dfs.datanode.address</name>
>>>>>     <value>hadoopmaster:9000</value>
>>>>>     <value>hadoopslave:9001</value>
>>>>>   </property>
>>>>>
>>>>> Also, your port numbers should be different. The daemons NN, DN,
>>>>> JT and TT should run independently on different ports.
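>>>>>
>>>>> As a concrete sketch of that separation (untested, assuming
>>>>> 0.18-era property names and this thread's hostnames), the
>>>>> master's hadoop-site.xml could carry:
>>>>>
>>>>>   <property>
>>>>>     <!-- every node points fs.default.name at the namenode -->
>>>>>     <name>fs.default.name</name>
>>>>>     <value>hdfs://hadoopmaster:9000</value>
>>>>>   </property>
>>>>>   <property>
>>>>>     <!-- data-transfer port of the datanode running on the master -->
>>>>>     <name>dfs.datanode.address</name>
>>>>>     <value>hadoopmaster:50010</value>
>>>>>   </property>
>>>>>
>>>>> while the slave keeps the same fs.default.name value (still the
>>>>> namenode's address) and its own dfs.datanode.address, e.g.
>>>>> hadoopslave:50010, so that it does not collide with the
>>>>> namenode's port 9000.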
>>>>>
>>>>> On Tue, Jun 16, 2009 at 2:05 PM, Sugandha Naolekar <
>>>>> sugandha.n87@gmail.com> wrote:
>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: ashish pareek <pareekash@gmail.com>
>>>>>> Date: Tue, Jun 16, 2009 at 2:00 PM
>>>>>> Subject: Re: org.apache.hadoop.ipc.Client: trying to connect to server failed
>>>>>> To: Sugandha Naolekar <sugandha.n87@gmail.com>
>>>>>>
>>>>>> On Tue, Jun 16, 2009 at 1:58 PM, ashish pareek <pareekash@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>       I am sending a .tar.gz archive containing both the master
>>>>>>> and datanode config files.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Ashish Pareek
>>>>>>>
>>>>>>> On Tue, Jun 16, 2009 at 1:47 PM, Sugandha Naolekar <
>>>>>>> sugandha.n87@gmail.com> wrote:
>>>>>>>
>>>>>>>> Can you please send me a zip or a tar file? I don't have
>>>>>>>> Windows systems, only Linux.
>>>>>>>>
>>>>>>>> On Tue, Jun 16, 2009 at 1:19 PM, ashish pareek <
>>>>>>>> pareekash@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Sugandha,
>>>>>>>>>       Thanks for your reply. I am sending you the master and
>>>>>>>>> slave configuration files; if you can go through them and
>>>>>>>>> tell me where I am going wrong, that would be helpful.
>>>>>>>>>
>>>>>>>>> Hope to get a reply soon. Thanks again!
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Ashish Pareek
>>>>>>>>>
>>>>>>>>> On Tue, Jun 16, 2009 at 11:12 AM, Sugandha Naolekar <
>>>>>>>>> sugandha.n87@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Ashish!
>>>>>>>>>>
>>>>>>>>>> Try the following things:
>>>>>>>>>>
>>>>>>>>>> -> Check the config file (hadoop-site.xml) of the namenode.
>>>>>>>>>> -> Make sure the value you have given for the
>>>>>>>>>> dfs.datanode.address tag is correct: the machine's IP and
>>>>>>>>>> its name.
>>>>>>>>>> -> Also check the names added in the /etc/hosts file.
>>>>>>>>>> -> Check that the ssh keys of the datanodes are present in
>>>>>>>>>> the namenode's known_hosts file.
>>>>>>>>>> -> Check the value of dfs.datanode.address in the datanode's
>>>>>>>>>> config file.
>>>>>>>>>>
>>>>>>>>>> On Tue, Jun 16, 2009 at 10:58 AM, ashish pareek <
>>>>>>>>>> pareekash@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> > Hi,
>>>>>>>>>> >    I am trying to set up a hadoop cluster on 3GB virtual
>>>>>>>>>> > machines using hadoop 0.18.3, and I have followed the
>>>>>>>>>> > procedure given on the Apache Hadoop site for a hadoop
>>>>>>>>>> > cluster.
>>>>>>>>>> >    In conf/slaves I have added two datanodes, i.e. the
>>>>>>>>>> > namenode's virtual machine and another virtual machine
>>>>>>>>>> > (the datanode), and I have set up passwordless ssh between
>>>>>>>>>> > both virtual machines. But now the problem is, when I run
>>>>>>>>>> > the command:
>>>>>>>>>> >
>>>>>>>>>> > bin/start-all.sh
>>>>>>>>>> >
>>>>>>>>>> > it starts only one datanode, on the same virtual machine
>>>>>>>>>> > as the namenode, but it doesn't start the datanode on the
>>>>>>>>>> > other machine.
>>>>>>>>>> >
>>>>>>>>>> > In logs/hadoop-datanode.log I get the message:
>>>>>>>>>> >
>>>>>>>>>> > INFO org.apache.hadoop.ipc.Client: Retrying connect to
>>>>>>>>>> > server: hadoop1/192.168.1.28:9000. Already tried 1 time(s).
>>>>>>>>>> > 2009-05-09 18:35:14,266 INFO org.apache.hadoop.ipc.Client:
>>>>>>>>>> > Retrying connect to server: hadoop1/192.168.1.28:9000.
>>>>>>>>>> > Already tried 2 time(s).
>>>>>>>>>> > 2009-05-09 18:35:14,266 INFO org.apache.hadoop.ipc.Client:
>>>>>>>>>> > Retrying connect to server: hadoop1/192.168.1.28:9000.
>>>>>>>>>> > Already tried 3 time(s).
>>>>>>>>>> > ...
>>>>>>>>>> >
>>>>>>>>>> > I have tried formatting and starting the cluster again,
>>>>>>>>>> > but I still get the same error.
>>>>>>>>>> >
>>>>>>>>>> > So can anyone help in solving this problem? :)
>>>>>>>>>> >
>>>>>>>>>> > Thanks,
>>>>>>>>>> > Regards,
>>>>>>>>>> > Ashish Pareek
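>>>>>>>>>>
>>>>>>>>>> One quick way to run through those checks from the namenode
>>>>>>>>>> is something like this (just a sketch; the hostnames are the
>>>>>>>>>> ones used in this thread):
>>>>>>>>>>
>>>>>>>>>>   # does the slave's hostname resolve to the IP you expect?
>>>>>>>>>>   grep hadoopslave /etc/hosts
>>>>>>>>>>   # is passwordless ssh to the slave really working?
>>>>>>>>>>   ssh hadoopslave hostname
>>>>>>>>>>   # is the namenode actually listening on its IPC port?
>>>>>>>>>>   netstat -tln | grep 9000
>>>>>>>>>>   # after fixing the config, re-format only on the namenode
>>>>>>>>>>   # (this wipes HDFS!) and restart the daemons:
>>>>>>>>>>   bin/hadoop namenode -format
>>>>>>>>>>   bin/start-all.sh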
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Regards!
>>>>>>>>>> Sugandha
>
> --
> Regards!
> Sugandha