hadoop-common-user mailing list archives

From "Gerardo Velez" <jgerardo.ve...@gmail.com>
Subject Re: Hadoop 0.17.2 configuration problems!
Date Thu, 21 Aug 2008 19:03:45 GMT
Hi all!!


I just looked in the secondary namenode log file, and it contains this exception:

java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:193)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.Socket.connect(Socket.java:520)
        at java.net.Socket.connect(Socket.java:470)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:388)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:523)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:231)
        at sun.net.www.http.HttpClient.New(HttpClient.java:304)
        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:813)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:765)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:690)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:934)
        at org.apache.hadoop.dfs.TransferFsImage.getFileClient(TransferFsImage.java:149)
        at org.apache.hadoop.dfs.TransferFsImage.getFileClient(TransferFsImage.java:188)
        at org.apache.hadoop.dfs.SecondaryNameNode.getFSImage(SecondaryNameNode.java:245)
        at org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:310)
        at org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
        at java.lang.Thread.run(Thread.java:595)


Any idea how to solve this? I have already disabled IPv6; could it be a problem
with the firewall configuration?
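For what it's worth, the stack trace shows the secondary namenode failing to open an HTTP connection to the namenode's web port (50070 by default in this Hadoop line), and a "No route to host" at that level is usually a firewall or routing issue rather than a Hadoop one. A quick way to check reachability from the secondary namenode host is a plain TCP probe; this is only a sketch, and `namenode.example.com` is a placeholder for the actual namenode host:

```shell
#!/bin/bash
# Probe a TCP host:port pair. "No route to host" at connect time usually
# means a firewall (e.g. iptables) or routing problem, not a Hadoop bug.
probe() {
    host=$1
    port=$2
    # /dev/tcp is a bash feature; `nc -z host port` works on most systems too.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port unreachable"
    fi
}

# Placeholder host: substitute the real namenode and its HTTP port.
probe namenode.example.com 50070
```

If the probe fails from the secondary namenode but succeeds on the namenode itself, look at iptables rules and /etc/hosts entries on both machines.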

Thanks in advance


On Thu, Aug 21, 2008 at 11:59 AM, Gerardo Velez <jgerardo.velez@gmail.com> wrote:

> Thanks for the answer!
>
>
> I guess safe mode turns off after a while, but I was wondering whether a
> safe-mode problem is what's causing my issue....
>
>
> Basically, the Hadoop server starts just fine, but when I run the example
> it never finishes. Here is some log output:
>
>  bin/hadoop jar hadoop-0.17.2-examples.jar wordcount input/* output
> 08/08/21 12:22:53 INFO mapred.FileInputFormat: Total input paths to process : 1
> 08/08/21 12:22:54 INFO mapred.JobClient: Running job: job_200808211218_0001
> 08/08/21 12:22:55 INFO mapred.JobClient:  map 0% reduce 0%
> 08/08/21 12:23:03 INFO mapred.JobClient:  map 100% reduce 0%
>
> It never finishes......
>
>
> any idea?
>
> Thanks again
>
>
>
>
> On Thu, Aug 21, 2008 at 11:18 AM, Arun C Murthy <acm@yahoo-inc.com> wrote:
>
>>
>> On Aug 21, 2008, at 11:04 AM, Gerardo Velez wrote:
>>
>>  I'm trying to install Hadoop 0.17.2 on a Linux box (Xen OS)
>>>
>>> So, bin/start-all.sh works fine, but
>>> hadoop-hadoop-jobtracker-softtek-helio-dev.log shows the error below.
>>> Do you know how to fix it?
>>>
>>>
>> The NameNode should come out of safe mode in a short while... did it
>> never?
>>
>> Arun
>>
>>
>>  Thanks in advance
>>>
>>> 2008-08-21 11:20:28,020 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
>>> 2008-08-21 11:20:28,021 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
>>> 2008-08-21 11:20:28,217 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: /opt/hadoop-datastore/hadoop-hadoop/mapred/system
>>> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.dfs.SafeModeException: Cannot delete /opt/hadoop-datastore/hadoop-hadoop/mapred/system. Name node is in safe mode.
>>> The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
>>>       at org.apache.hadoop.dfs.FSNamesystem.deleteInternal(FSNamesystem.java:1523)
>>>       at org.apache.hadoop.dfs.FSNamesystem.delete(FSNamesystem.java:1502)
>>>       at org.apache.hadoop.dfs.NameNode.delete(NameNode.java:383)
>>>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>       at java.lang.reflect.Method.invoke(Method.java:585)
>>>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
>>>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)
>>>
>>
>>
>
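A note on the SafeModeException quoted above: the ratio 0.0000 means no datanode has reported any blocks yet, so the namenode will never leave safe mode on its own. That usually points at datanodes that cannot reach the namenode, which is consistent with the NoRouteToHostException in the secondary namenode log. You can query the current state with `bin/hadoop dfsadmin -safemode get`. The 0.9990 threshold itself comes from a configurable property; this hadoop-site.xml fragment is a sketch, assuming the stock property name from this Hadoop line:

```xml
<!-- hadoop-site.xml: fraction of blocks that must be reported by
     datanodes before the namenode leaves safe mode (default 0.999).
     Lowering this only hides the symptom; the real fix here is
     restoring datanode-to-namenode connectivity. -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999</value>
</property>
```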
