Subject: Re: HELP - Problem in setting up Hadoop - Multi-Node Cluster
From: alo alt <wget.null@googlemail.com>
Date: Thu, 9 Feb 2012 11:40:46 +0100
To: common-user@hadoop.apache.org
Cc: Guruprasad B

Please use the latest JDK 6.

best,
Alex

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 9, 2012, at 11:11 AM, hadoop hive wrote:

> Did you check the SSH setup between the hosts? It should be
> passwordless SSH, i.e. the public key appended to authorized_keys.
>
> On Thu, Feb 9, 2012 at 1:06 AM, Robin Mueller-Bady wrote:
> Dear Guruprasad,
>
> It would be very helpful if you provided details from your
> configuration files as well as more details on your setup.
> It seems that the connection from slave to master cannot be
> established ("Connection reset by peer").
> Do you use a virtual environment, physical master/slaves, or all on
> one machine?
> Please also paste the output of the "kinigul2" namenode logs.
>
> Regards,
>
> Robin
>
> On 02/08/12 13:06, Guruprasad B wrote:
>> Hi,
>>
>> I am Guruprasad from Bangalore (India). I need help in setting up the
>> Hadoop platform; I am very new to Hadoop.
>>
>> I am following the articles below. I was able to set up a
>> "Single-Node Cluster" using
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#what-we-want-to-do
>>
>> Now I am trying to set up a "Multi-Node Cluster" by following
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
>>
>> Below is my setup:
>> Hadoop: hadoop-0.20.2
>> Linux: Ubuntu Linux 10.10
>> Java: java-7-oracle
>>
>> I have successfully reached the section "Starting the multi-node
>> cluster" in the article above.
>> When I start the HDFS/MapReduce daemons, they start and then go down
>> immediately on both master and slave.
>> Please have a look at the logs below:
>>
>> hduser@kinigul2:/usr/local/hadoop$ bin/start-dfs.sh
>> starting namenode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-kinigul2.out
>> master: starting datanode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-kinigul2.out
>> slave: starting datanode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-guruL.out
>> master: starting secondarynamenode, logging to
>> /usr/local/hadoop/bin/../logs/hadoop-hduser-secondarynamenode-kinigul2.out
>>
>> hduser@kinigul2:/usr/local/hadoop$ jps
>> 6098 DataNode
>> 6328 Jps
>> 5914 NameNode
>> 6276 SecondaryNameNode
>>
>> hduser@kinigul2:/usr/local/hadoop$ jps
>> 6350 Jps
>>
>> I am getting the error below in the slave logs:
>>
>> 2012-02-08 21:04:01,641 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
>> java.io.IOException: Call to master/16.150.98.62:54310 failed on local
>> exception: java.io.IOException: Connection reset by peer
>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>     at $Proxy4.getProtocolVersion(Unknown Source)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
>>     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
>>     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
>> Caused by: java.io.IOException: Connection reset by peer
>>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>     at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>     at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>     at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
>>     at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>     at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>     at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>>
>> Can you please tell me what the reason behind this could be, or point
>> me to some resources?
>>
>> Regards,
>> Guruprasad
>
> --
> Robin Müller-Bady | Sales Consultant
> Phone: +49 211 74839 701 | Mobile: +49 172 8438346
> Oracle STCC Fusion Middleware
>
> ORACLE Deutschland B.V. & Co. KG | Hamborner Strasse 51 | 40472 Düsseldorf
>
> ORACLE Deutschland B.V. & Co. KG
> Head office: Riesstr. 25, D-80992 München
> Commercial register: Amtsgericht München, HRA 95603
> Managing Director: Jürgen Kunz
>
> General partner: ORACLE Deutschland Verwaltung B.V.
> Hertogswetering 163/167, 3543 AS Utrecht, Netherlands
> Commercial register of the Chamber of Commerce Midden-Nederland, No. 30143697
> Managing Directors: Alexander van der Ven, Astrid Kepper, Val Maher
>
> Oracle is committed to developing practices and products that help protect the environment
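[Archive note] A "Connection reset by peer" right at datanode startup, as in the slave log above, often comes down to the address the namenode binds to rather than to the daemons themselves. A sketch of the relevant Hadoop 0.20.x configuration, assuming the master:54310 endpoint visible in the error message; the diagnosis about Ubuntu's /etc/hosts entry is a common cause, not something confirmed in this thread:

```xml
<!-- conf/core-site.xml, identical on master and all slaves.
     "master" must resolve (e.g. via /etc/hosts) to the master's real
     network address (16.150.98.62 in the log above), NOT to the
     127.0.1.1 alias Ubuntu adds by default; otherwise the namenode
     listens only locally and remote datanodes are reset. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```

On the master, `netstat -tln | grep 54310` shows which address the namenode actually listens on; a `127.0.0.1:54310` line there means the slave can never connect.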
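[Archive note] The passwordless-SSH setup that hadoop hive describes above (public key appended to authorized_keys) can be sketched as follows. This is a minimal sketch, not the thread's verified procedure: the `hduser` account and the hostname `slave` are taken from the logs as placeholders, and `ssh-copy-id` is one common way to do the copy step.

```shell
# Run as the user that starts the Hadoop daemons (hduser in this thread).
# Create a key pair once, with an empty passphrase so start-dfs.sh can
# reach the other nodes unattended.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Authorize the key for logins to this machine itself...
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# ...and copy it to every slave ("slave" is a placeholder hostname):
#   ssh-copy-id hduser@slave
# Afterwards both of these must log in WITHOUT a password prompt,
# or start-dfs.sh cannot launch the remote datanodes:
#   ssh localhost exit
#   ssh slave exit
```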