From: "Albert Chern" <albert.chern@gmail.com>
To: hadoop-user@lucene.apache.org
Subject: Re: Detailed steps to run Hadoop in distributed system...
Date: Fri, 2 Mar 2007 14:09:57 -0800

Are you sure? Look at the errors you are getting when you run
bin/start-all.sh. The master node is unable to ssh to the slave. Try
ssh-ing into the slave from the master node...it shouldn't work.
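For example, from the master you could check basic reachability and
the login by hand first (assuming sshd on the slave listens on the
default port 22):

    # from the master, as jaya
    ping -c 3 10.229.62.56
    ssh 146736@10.229.62.56 hostname

If the ssh command asks for a password or fails with "No route to
host", then passwordless login is not actually working yet, and
bin/start-all.sh will keep failing in exactly the same way.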
On 3/2/07, jaylac wrote:
>
> The "<" is only missing in the last line because I dropped it while
> copying and pasting; in the hadoop-site.xml file itself it is written
> correctly.
>
> I can ping the machines from each other, so there is no problem there.
>
> Thanks for your reply. Please try to find any other mistakes.
>
>
> Albert Chern wrote:
> >
> > Your hadoop-site.xml is missing a "<" in the last line, but it looks
> > like you're having a network problem. Can you ping the machines from
> > each other?
> >
> > On 3/1/07, jaylac wrote:
> >>
> >> Hi Hadoop-Users,
> >>
> >> Has anyone successfully tried running Hadoop on two systems?
> >>
> >> I've tried running the wordcount example on one system and it works
> >> fine, but when I try to add nodes to the cluster and run the
> >> wordcount example, I get errors.
> >>
> >> So please let me know the detailed steps to be followed. The steps
> >> are given on the Hadoop website, but I need some help from you
> >> people; some steps may have been considered obvious and left
> >> unstated there.
> >>
> >> I'm a new user, so I simply followed the instructions given. I
> >> might have overlooked some step which is necessary to run it.
> >>
> >> Another important doubt: on the master node I have a user name
> >> called "jaya". Is it necessary to create a user name called "jaya"
> >> on the slave system also, or can we simply use the user name that
> >> already exists on the slave machine?
> >>
> >> I'm using two Red Hat Linux machines, one master (10.229.62.6) and
> >> the other slave (10.229.62.56).
> >> On the master node, the user name is jaya.
> >> On the slave node, the user name is 146736.
> >>
> >> The steps which I follow are:
> >>
> >> Edit the /home/jaya/.bashrc file.
> >> Here I set the HADOOP_CONF_DIR environment variable.
> >>
> >> MASTER NODE
> >>
> >> 1. Edit the conf/slaves file.
> >> Contents
> >> ====================
> >> localhost
> >> 146736@10.229.62.56
> >> ====================
> >>
> >> 2. Edit the conf/hadoop-env.sh file.
> >> Here I set the JAVA_HOME environment variable. That's it, no other
> >> changes in this file.
> >> PLEASE LET ME KNOW IF I SHOULD ADD ANYTHING HERE
> >>
> >> 3. Edit the conf/hadoop-site.xml file.
> >> Contents
> >> ===========================================
> >> <?xml version="1.0"?>
> >> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> >>
> >> <configuration>
> >>
> >> <property>
> >>   <name>fs.default.name</name>
> >>   <value>10.229.62.6:50010</value>
> >> </property>
> >>
> >> <property>
> >>   <name>mapred.job.tracker</name>
> >>   <value>10.229.62.6:50011</value>
> >> </property>
> >>
> >> <property>
> >>   <name>dfs.replication</name>
> >>   <value>2</value>
> >> </property>
> >>
> >> /configuration>
> >> ====================================
> >>
> >> LET ME KNOW IF I NEED TO ADD ANYTHING HERE....
> >>
> >> SLAVE NODE
> >>
> >> 1. Edit the conf/masters file.
> >> Contents
> >> ====================
> >> localhost
> >> jaya@10.229.62.56
> >> ====================
> >>
> >> 2. Edit the conf/hadoop-env.sh file.
> >> Here too I only set the JAVA_HOME environment variable.
> >> PLEASE LET ME KNOW IF I SHOULD ADD ANYTHING HERE
> >>
> >> 3. Edit the conf/hadoop-site.xml file.
> >> Contents: the same as on the master node, shown above.
> >>
> >> LET ME KNOW IF I NEED TO ADD ANYTHING HERE....
> >>
> >> I've already done the steps for passwordless login.
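That passwordless-login step is the one I would re-test, because the
errors below say it is not working. For reference, here is the generic
OpenSSH recipe from the master (just a sketch, assuming the default
key file names):

    # on the master, as jaya: create a key pair with an empty passphrase
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
    # append the public key to the slave account's authorized_keys
    cat ~/.ssh/id_rsa.pub | ssh 146736@10.229.62.56 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys'
    # this should now log in without asking for a password
    ssh 146736@10.229.62.56 hostname

And on your user-name question: no, the accounts do not have to match.
The start-up scripts just ssh to each line of the slaves file, which
is exactly why your entry carries the user name (146736@10.229.62.56).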
> >> That is all. Then I perform the following operations.
> >>
> >> In the HADOOP_HOME directory:
> >>
> >> [jaya@localhost hadoop-0.11.0]$ bin/hadoop namenode -format
> >> Re-format filesystem in /tmp/hadoop-146736/dfs/name ? (Y or N) Y
> >> Formatted /tmp/hadoop-146736/dfs/name
> >> [jaya@localhost hadoop-0.11.0]$
> >>
> >> Then:
> >>
> >> [jaya@localhost hadoop-0.11.0]$ bin/start-all.sh
> >> starting namenode, logging to
> >> /opt/hadoop-0.11.0/bin/../logs/hadoop-jaya-namenode-localhost.localdomain.out
> >> localhost: starting datanode, logging to
> >> /opt/hadoop-0.11.0/bin/../logs/hadoop-jaya-datanode-localhost.localdomain.out
> >> 146736@10.229.62.56: ssh: connect to host 10.229.62.56 port 22: No route to host
> >> localhost: starting secondarynamenode, logging to
> >> /opt/hadoop-0.11.0/bin/../logs/hadoop-jaya-secondarynamenode-localhost.localdomain.out
> >> starting jobtracker, logging to
> >> /opt/hadoop-0.11.0/bin/../logs/hadoop-jaya-jobtracker-localhost.localdomain.out
> >> localhost: starting tasktracker, logging to
> >> /opt/hadoop-0.11.0/bin/../logs/hadoop-jaya-tasktracker-localhost.localdomain.out
> >> 146736@10.229.62.56: ssh: connect to host 10.229.62.56 port 22: No route to host
> >> [jaya@localhost hadoop-0.11.0]$
> >>
> >> [jaya@localhost hadoop-0.11.0]$ mkdir input
> >> [jaya@localhost hadoop-0.11.0]$ cp conf/*.xml input
> >> [jaya@localhost hadoop-0.11.0]$
> >>
> >> [jaya@localhost hadoop-0.11.0]$ bin/hadoop dfs -put input input
> >> [jaya@localhost hadoop-0.11.0]$ bin/hadoop dfs -lsr /
> >> /tmp
> >> /tmp/hadoop-jaya
> >> /tmp/hadoop-jaya/mapred
> >> /tmp/hadoop-jaya/mapred/system
> >> /user
> >> /user/jaya
> >> /user/jaya/input
> >> /user/jaya/input/hadoop-default.xml    21708
> >> /user/jaya/input/hadoop-site.xml       1333
> >> /user/jaya/input/mapred-default.xml    180
> >> [jaya@localhost hadoop-0.11.0]$
> >>
> >> [jaya@localhost hadoop-0.11.0]$ bin/hadoop dfs -ls input
> >> Found 3 items
> >> /user/jaya/input/hadoop-default.xml    21708
> >> /user/jaya/input/hadoop-site.xml       1333
> >> /user/jaya/input/mapred-default.xml    180
> >> [jaya@localhost hadoop-0.11.0]$ bin/hadoop dfs -ls output
> >> Found 0 items
> >> [jaya@localhost hadoop-0.11.0]$ bin/hadoop jar hadoop-0.11.0-examples.jar wordcount input output
> >> java.net.SocketTimeoutException: timed out waiting for rpc response
> >>         at org.apache.hadoop.ipc.Client.call(Client.java:469)
> >>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:164)
> >>         at $Proxy1.getProtocolVersion(Unknown Source)
> >>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:248)
> >>         at org.apache.hadoop.mapred.JobClient.init(JobClient.java:200)
> >>         at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:192)
> >>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:381)
> >>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:143)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>         at java.lang.reflect.Method.invoke(Method.java:597)
> >>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> >>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:143)
> >>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:40)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>         at java.lang.reflect.Method.invoke(Method.java:597)
> >>         at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
> >> [jaya@localhost hadoop-0.11.0]$
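Both failures above point the same way. "No route to host" happens at
the network layer, before ssh keys are even consulted; on Red Hat the
usual culprit is the firewall on the slave, since the default iptables
rules reject connections with exactly that error. And the
SocketTimeoutException just means the JobClient never got an answer
from the jobtracker address in your hadoop-site.xml. Two quick checks
(assuming the standard system tools; the ports come from your config):

    # on each machine: is iptables filtering traffic?
    /sbin/service iptables status
    # from the master: are the namenode and jobtracker actually listening?
    telnet 10.229.62.6 50010
    telnet 10.229.62.6 50011

If telnet cannot connect to those ports, Hadoop's RPC cannot either.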
> >>
> >> I don't know where the problem is.
> >>
> >> I've not created any directory called output. If we need to create
> >> one, where should we create it? Should I configure some more
> >> settings? Please explain in detail.
> >>
> >> Please do help me.
> >>
> >> Thanks in advance,
> >> Jaya
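On the output directory question: you should not create it at all. The
wordcount example creates "output" itself and refuses to run if it
already exists, so if an earlier attempt left one behind, remove it
before resubmitting (if I remember the 0.11 shell correctly, the
recursive delete is -rmr):

    bin/hadoop dfs -rmr output
    bin/hadoop jar hadoop-0.11.0-examples.jar wordcount input output
    bin/hadoop dfs -cat output/part-00000

But none of that will matter until the RPC timeout above is fixed.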