hadoop-common-user mailing list archives

From Anil Gupta <anilgupt...@gmail.com>
Subject Re: HELP - Problem in setting up Hadoop - Multi-Node Cluster
Date Thu, 09 Feb 2012 19:42:13 GMT
Hi, 
I have dealt with this kind of problem before. 
Check the logs of the datanode as well as the namenode.

In order to test the connectivity:
SSH into the slave from the master, and into the master from that same slave. Leave the
SSH sessions open for as long as you can.

In my case, when I ran the above experiment, the SSH session kept dropping, so I knew it
was a network-related problem that had nothing to do with Hadoop.
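
The session test above can be made more sensitive with SSH keepalives. A minimal sketch, assuming "slave" is the hostname used in conf/slaves and resolves via /etc/hosts:

```shell
# Keep a session open from the master to the slave and make ssh notice
# a dead link within ~30 seconds instead of hanging silently.
ssh -o ServerAliveInterval=10 -o ServerAliveCountMax=3 slave \
    'while true; do date; sleep 30; done'
# If this loop dies with "Connection reset" or "Broken pipe", the
# network is dropping the link and Hadoop is not at fault.
```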

Best Regards,
Anil

On Feb 9, 2012, at 5:51 AM, "15club.cn" <15club.cn@gmail.com> wrote:

> Maybe check your iptables first. For Hadoop across multiple machines, do shut
> down iptables, since it can block the connections between the nodes:
> # /etc/init.d/iptables stop
> 
> 2012/2/9 alo alt <wget.null@googlemail.com>
> 
>> Please use the latest JDK 6.
>> 
>> best,
>> Alex
>> 
>> --
>> Alexander Lorenz
>> http://mapredit.blogspot.com
>> 
>> On Feb 9, 2012, at 11:11 AM, hadoop hive wrote:
>> 
>>> Did you check SSH to localhost? SSH should be passwordless to
>>> localhost as well
>>> 
>>> (i.e., the public key appended to authorized_keys)
>>> 
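
For reference, the passwordless setup usually looks like the sketch below; the hduser account and the slave hostname are taken from the logs in this thread, so adapt them to your cluster:

```shell
# Generate a key pair with an empty passphrase (OpenSSH default path),
# then install the public key on every node Hadoop must reach,
# including localhost itself.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id hduser@localhost
ssh-copy-id hduser@slave    # repeat for each host in conf/slaves
ssh localhost true          # must succeed without a password prompt
```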
>>> On Thu, Feb 9, 2012 at 1:06 AM, Robin Mueller-Bady <
>> robin.mueller-bady@oracle.com> wrote:
>>> Dear Guruprasad,
>>> 
>>> it would be very helpful to provide details from your configuration
>> files as well as more details on your setup.
>>> It seems to be that the connection from slave to master cannot be
>> established ("Connection reset by peer").
>>> Do you use a virtual environment, physical master/slaves, or all on one
>> machine?
>>> Please also paste the output of the "kinigul2" namenode logs.
>>> 
>>> Regards,
>>> 
>>> Robin
>>> 
>>> 
>>> On 02/08/12 13:06, Guruprasad B wrote:
>>>> Hi,
>>>> 
>>>> I am Guruprasad from Bangalore (India). I need help setting up the Hadoop
>>>> platform; I am very new to it.
>>>> 
>>>> I am following the articles given below. I was able to set up a
>>>> "Single-Node Cluster":
>>>> 
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#what-we-want-to-do
>>>> 
>>>> Now I am trying to set up a "Multi-Node Cluster" by following the
>>>> article given below.
>>>> 
>>>> 
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
>>>> 
>>>> 
>>>> Below given is my setup:
>>>> Hadoop : hadoop_0.20.2
>>>> Linux: Ubuntu Linux 10.10
>>>> Java: java-7-oracle
>>>> 
>>>> 
>>>> I have successfully reached the section "Starting the multi-node
>>>> cluster" in the above article.
>>>> When I start the HDFS/MapReduce daemons, they start and then immediately go
>>>> down on both the master and the slave;
>>>> please have a look at the logs below:
>>>> 
>>>> hduser@kinigul2:/usr/local/hadoop$ bin/start-dfs.sh
>>>> starting namenode, logging to
>>>> /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-kinigul2.out
>>>> master: starting datanode, logging to
>>>> /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-kinigul2.out
>>>> slave: starting datanode, logging to
>>>> /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-guruL.out
>>>> master: starting secondarynamenode, logging to
>>>> /usr/local/hadoop/bin/../logs/hadoop-hduser-secondarynamenode-kinigul2.out
>>>> 
>>>> hduser@kinigul2:/usr/local/hadoop$ jps
>>>> 6098 DataNode
>>>> 6328 Jps
>>>> 5914 NameNode
>>>> 6276 SecondaryNameNode
>>>> 
>>>> hduser@kinigul2:/usr/local/hadoop$ jps
>>>> 6350 Jps
>>>> 
>>>> 
>>>> I am getting the error given below in the slave logs:
>>>> 
>>>> 2012-02-08 21:04:01,641 ERROR
>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>>>> Call to master/16.150.98.62:54310 failed on local exception:
>>>> java.io.IOException: Connection reset by peer
>>>>    at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>>>>    at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>>    at $Proxy4.getProtocolVersion(Unknown Source)
>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
>>>>    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
>>>>    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
>>>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
>>>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>>>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>>>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>>>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
>>>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
>>>> Caused by: java.io.IOException: Connection reset by peer
>>>>    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>>    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>>    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>>    at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>    at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>>    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
>>>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>>    at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>>>    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>>>>    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>>>> 
>>>> 
>>>> Can you please tell me what could be the reason behind this, or give me
>>>> some pointers?
>>>> 
>>>> Regards,
>>>> Guruprasad
>>>> 
>>>> 
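
A "Connection reset by peer" against port 54310 usually means the NameNode's RPC port is not reachable from the slave. One way to narrow it down, sketched with the hostnames from the logs above and assuming netstat/nc are installed:

```shell
# On the master: is the NameNode listening on 54310, and on which
# interface? A 127.0.0.1:54310 binding is invisible to slaves.
netstat -tln | grep 54310

# On the slave: can we reach the port across the network at all?
nc -z -w 5 master 54310 && echo reachable || echo blocked
```

If the port turns out to be bound to 127.0.0.1, check /etc/hosts on the master: the hostname used in fs.default.name must resolve to the machine's LAN address, not to loopback.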
>>> 
>>> --
>>> 
>>> Robin Müller-Bady | Sales Consultant
>>> Phone: +49 211 74839 701 | Mobile: +49 172 8438346
>>> Oracle STCC Fusion Middleware
>>> 
>>> ORACLE Deutschland B.V. & Co. KG | Hamborner Strasse 51 | 40472
>> Düsseldorf
>>> 
>>> ORACLE Deutschland B.V. & Co. KG
>>> Hauptverwaltung: Riesstr. 25, D-80992 München
>>> Registergericht: Amtsgericht München, HRA 95603
>>> Geschäftsführer: Jürgen Kunz
>>> 
>>> Komplementärin: ORACLE Deutschland Verwaltung B.V.
>>> Hertogswetering 163/167, 3543 AS Utrecht, Niederlande
>>> Handelsregister der Handelskammer Midden-Niederlande, Nr. 30143697
>>> Geschäftsführer: Alexander van der Ven, Astrid Kepper, Val Maher
>>> 
>>>      Oracle is committed to developing practices and products that help
>> protect the environment
>>> 
>> 
>> 
