hadoop-common-user mailing list archives

From Oliver Haggarty <oj...@doc.ic.ac.uk>
Subject Re: hadoop-site.xml help
Date Thu, 28 Jun 2007 15:01:47 GMT
Hi,

Apologies for what is going to be a fairly vague and non-technical reply.
I had this error too, and I believe it is something to do with the ports
and what is available on your machine. I solved it by trying different
port numbers; the problem went away and everything worked fine.

I found that 50010 for fs.default.name and 40000 for mapred.job.tracker
worked OK, so maybe give them a try. From what I've read in the mailing
list archives, the 50010 seems to be a fairly commonly used port, and
40000 was used by someone else too.
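Translated into hadoop-site.xml, that suggestion would look roughly like this (the hostname is a placeholder, and I can't promise these ports are special rather than just free on my machine):

```xml
<!-- Sketch only: the host is a placeholder, and 50010/40000 are simply
     the ports that happened to work for me, not documented requirements. -->
<property>
  <name>fs.default.name</name>
  <value>my.machine.com:50010</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>my.machine.com:40000</value>
</property>
```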

I still don't know whether Hadoop places any restrictions on the ports
it uses, or whether any free port is fine and such problems come down
to machine settings.
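One way to at least rule out a clash is to check whether anything is already listening on a candidate port before putting it in the config. A rough sketch (assumes netstat is on the path; 9000 is just an example port):

```shell
#!/bin/sh
# Check whether anything is already listening on a candidate port.
# Assumption: netstat is available (on newer systems `ss -ltn` works too).
PORT=9000
if netstat -an 2>/dev/null | grep -q "[:.]${PORT} .*LISTEN"; then
    echo "port ${PORT} is already in use - pick another"
else
    echo "port ${PORT} looks free"
fi
```

If the port shows up as LISTEN before you've started any Hadoop daemons, something else on the machine owns it.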

Oh, it also might be worth looking in your log files. Sometimes you get
more useful exceptions listed there that can point to exactly what's
going on.
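By default the daemon logs end up under the logs/ directory of the Hadoop install, so something like the following should surface the underlying exception (the paths assume a default layout; adjust HADOOP_HOME to taste):

```shell
#!/bin/sh
# Look for the real exception in the namenode's daemon log.
# Assumption: default install layout, with logs under $HADOOP_HOME/logs.
LOG_DIR="${HADOOP_HOME:-/opt/hadoop}/logs"
if ls "$LOG_DIR"/hadoop-*-namenode-*.log >/dev/null 2>&1; then
    # Show the last few exception lines from the namenode log
    grep -h "Exception" "$LOG_DIR"/hadoop-*-namenode-*.log | tail -n 5
else
    echo "no namenode log found under $LOG_DIR"
fi
```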

Hope this is of some use and good luck!

Ollie

Khalil Honsali wrote:
> Maybe it's a network connection problem. Can you ping, ssh, or netstat
> your machines? Do you have a firewall?
> 
> What command did you use to get the above error?
> What is your configuration (a single machine?)?
> 
> I think you need to provide more info.
> 
> On 28/06/07, DANIEL CLARK <daniel.a.clark@verizon.net> wrote:
>>
>> I entered the following in hadoop-site.xml and am getting 'connection
>> refused' stacktrace at Linux command line.  What could cause this?
>>
>> <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>> <!-- Put site-specific property overrides in this file. -->
>> <configuration>
>>   <property>
>>     <name>fs.default.name</name>
>>     <value>MY.MACHINE.com:9000</value>
>>     <description>
>>       The name of the default file system. Either the literal string
>>       "local" or a host:port for NDFS.
>>     </description>
>>   </property>
>>   <property>
>>     <name>mapred.job.tracker</name>
>>     <value>my.machine.com:9001</value>
>>     <description>
>>       The host and port that the MapReduce job tracker runs at. If
>>       "local", then jobs are run in-process as a single map and
>>       reduce task.
>>     </description>
>>   </property>
>>   <property>
>>     <name>mapred.map.tasks</name>
>>     <value>1</value>
>>     <description>
>>       define mapred.map tasks to be number of slave hosts
>>     </description>
>>   </property>
>>   <property>
>>     <name>mapred.reduce.tasks</name>
>>     <value>1</value>
>>     <description>
>>       define mapred.reduce tasks to be number of slave hosts
>>     </description>
>>   </property>
>>   <property>
>>     <name>dfs.name.dir</name>
>>     <value>/opt/nutch/filesystem/name</value>
>>   </property>
>>   <property>
>>     <name>dfs.data.dir</name>
>>     <value>/opt/nutch/filesystem/data</value>
>>   </property>
>>   <property>
>>     <name>mapred.system.dir</name>
>>     <value>/opt/nutch/filesystem/mapreduce/system</value>
>>   </property>
>>   <property>
>>     <name>mapred.local.dir</name>
>>     <value>/opt/nutch/filesystem/mapreduce/local</value>
>>   </property>
>>   <property>
>>     <name>dfs.replication</name>
>>     <value>1</value>
>>   </property>
>> </configuration>
>>
>>
>> Exception in thread "main" java.net.ConnectException: Connection refused
>>         at java.net.PlainSocketImpl.socketConnect(Native Method)
>>         at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
>>         at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
>>         at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
>>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>>         at java.net.Socket.connect(Socket.java:519)
>>         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:149)
>>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:531)
>>         at org.apache.hadoop.ipc.Client.call(Client.java:458)
>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:163)
>>         at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:247)
>>         at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:105)
>>         at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.initialize(DistributedFileSystem.java:67)
>>         at org.apache.hadoop.fs.FilterFileSystem.initialize(FilterFileSystem.java:57)
>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:160)
>>         at org.apache.hadoop.fs.FileSystem.getNamed(FileSystem.java:119)
>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:91)
>>         at org.apache.nutch.crawl.Crawl.main(Crawl.java:83)
>>
>>
>>
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> Daniel Clark, President
>> DAC Systems, Inc.
>> 5209 Nanticoke Court
>> Centreville, VA  20120
>> Cell - (703) 403-0340
>> Email - daniel.a.clark@verizon.net
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

