hadoop-general mailing list archives

From gmac...@cs.ucf.edu
Subject Re: Connection problems
Date Thu, 30 Oct 2008 18:03:47 GMT
Alberto,

Drop the 'hdfs://' prefix; Hadoop will complain, but it won't keep the cluster 
from working (you can add it back later). Also make sure that the full address 
of democrito is written out. For instance, my cluster is scerola, so the head 
node is <value>scerola.cs.ucf.edu:9000</value> (you can also just use the IP 
address). Finally, hadoop-site.xml needs to be identical on all nodes, whether 
master or slave; a sketch of what that shared file might look like is below.
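
For concreteness, here is a minimal sketch of a shared hadoop-site.xml along 
those lines. The fully qualified name below is only a placeholder; substitute 
democrito's real domain name (or its IP address), and make sure it resolves to 
the same address from both machines:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Same file on the master and on every slave. -->
<configuration>
  <property>
    <!-- NameNode address: fully qualified hostname (or IP), no hdfs:// prefix for now -->
    <name>fs.default.name</name>
    <value>democrito.example.org:24000</value>
  </property>
  <property>
    <!-- JobTracker address: same host in this two-node setup -->
    <name>mapred.job.tracker</name>
    <value>democrito.example.org:24001</value>
  </property>
  <property>
    <!-- Keep two copies of each block (one on the master, one on the slave) -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

Copy the same file onto both democrito and pascal, then restart the daemons 
(bin/stop-all.sh followed by bin/start-all.sh) so the new addresses are picked 
up.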

regards

 - Grant

On Oct 28, 2008, Messina Alberto wrote:

>Hello,
>
>I've just installed a minimal fully distributed Hadoop cluster with two
>servers: a master (democrito) and a slave (pascal). The master runs on
>port 24000. All firewalls are down, passphraseless ssh works, and there is
>nothing in hosts.allow/hosts.deny. The configuration files are the
>following (on the master and on the slave, respectively):
>
><?xml version="1.0"?>
><?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
><!-- Put site-specific property overrides in this file. -->
>
><configuration>
>  <property>
>    <name>fs.default.name</name>
>    <value>democrito:24000</value>
>  </property>
>  <property>
>    <name>mapred.job.tracker</name>
>    <value>democrito:24001</value>
>  </property>
>  <property>
>    <name>dfs.replication</name>
>    <value>2</value>
>  </property>
></configuration>
>
><?xml version="1.0"?>
><?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
><!-- Put site-specific property overrides in this file. -->
>
><configuration>
>  <property>
>    <name>fs.default.name</name>
>    <value>hdfs://democrito:24000</value>
>  </property>
>  <property>
>    <name>mapred.job.tracker</name>
>    <value>democrito:24001</value>
>  </property>
>  <property>
>    <name>dfs.replication</name>
>    <value>2</value>
>  </property>
>
></configuration>
>
>
>I get "Connection refused" errors when the slave tries to connect
>to the master during the bootstrap phase:
>
>2008-10-28 16:18:35,228 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 0 time(s).
>2008-10-28 16:18:36,231 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 1 time(s).
>2008-10-28 16:18:37,233 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 2 time(s).
>2008-10-28 16:18:38,235 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 3 time(s).
>2008-10-28 16:18:39,238 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 4 time(s).
>2008-10-28 16:18:40,241 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 5 time(s).
>2008-10-28 16:18:41,244 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 6 time(s).
>2008-10-28 16:18:42,246 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 7 time(s).
>2008-10-28 16:18:43,248 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 8 time(s).
>2008-10-28 16:18:44,250 INFO org.apache.hadoop.ipc.Client: Retrying
>connect to server: democrito/10.2.6.187:24000. Already tried 9 time(s).
>2008-10-28 16:18:44,254 ERROR org.apache.hadoop.dfs.DataNode: java.io.IOException: Call failed on local exception
>        at org.apache.hadoop.ipc.Client.call(Client.java:718)
>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>        at org.apache.hadoop.dfs.$Proxy4.getProtocolVersion(Unknown Source)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:306)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:343)
>        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:288)
>        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:244)
>        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
>        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
>        at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
>        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
>        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)
>Caused by: java.net.ConnectException: Connection refused
>        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>        at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
>        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:300)
>        at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:177)
>        at org.apache.hadoop.ipc.Client.getConnection(Client.java:789)
>        at org.apache.hadoop.ipc.Client.call(Client.java:704)
>        ... 12 more
>
>Can anyone help please?
>
>
>Thank you
>
>Alberto 

-- 
Grant Mackey
UCF Researcher
Eng. III Rm238

