hadoop-general mailing list archives

From Xine Jar <xineja...@googlemail.com>
Subject Re: problem with starting the Jobtracker and the namenode
Date Wed, 08 Jul 2009 09:12:38 GMT
Hello,
I am still struggling with the same problem. To be sure that IPv6 is
not creating the problem and that the JVM is not suffering from a bug,
I wrote a TCP client/server program in Java and ran the server on the
same machine that was producing the binding problem, using the same
port number. Luckily or unfortunately, the Java server bound to the
port normally, the communication between the client and the server was
successful, and the netstat command showed that the port was in use.
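
For reference, the server side of my test was essentially along these
lines (a reconstructed sketch, not the literal program; the class name
is illustrative and the port is passed in as the one the jobtracker
fails to bind; the client simply connects and sends one line):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class BindTest {
        public static void main(String[] args) throws IOException {
            int port = Integer.parseInt(args[0]); // same port the jobtracker uses
            // This throws java.net.BindException if the port is really taken.
            ServerSocket server = new ServerSocket(port);
            System.out.println("Bound to port " + port);
            Socket client = server.accept(); // wait for the test client
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            System.out.println("Received: " + in.readLine());
            client.close();
            server.close();
        }
    }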

To be sure that the problem is not a Hadoop bug, I installed version
0.20.0, copied over the same configuration, and still have the same
problem.

*I have a few questions:*
As I have previously mentioned, the namenode cannot be started: Java
complains that the node has not been formatted. The other problem is
with the jobtracker, which gives a binding error on its address:port.

*Q1:* Is it possible that the second problem (the jobtracker one) is
appearing because of the namenode problem?

If it is a yes, how can I solve this? I have actually deleted the /tmp
folder and reformatted the node, but the formatting error persists!!
Do I have to do something else (see the sketch below)?

If it is a no, am I doing something stupid in one of the configuration
files?!
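
For context, as far as I understand, the directory being formatted is
whatever dfs.name.dir points to, and by default that lives under
hadoop.tmp.dir, i.e. under /tmp. One thing I am considering is pointing
it at a persistent path, roughly like this (the path is only an
example):

    <!-- conf/hdfs-site.xml on 0.20 (conf/hadoop-site.xml on older releases) -->
    <property>
      <name>dfs.name.dir</name>
      <value>/home/hadoop/dfs/name</value>
    </property>

and then running bin/hadoop namenode -format again, answering the
re-format prompt with a capital Y (I have read that a lowercase y is
treated as a no).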

*Q2:  *In order to kill all the instances of the dfs and the mapred, is it
enough to execute the bin/stop-dfs.sh and bin/stop-mapred.sh on the
namenode? Is it possible that the command line is showing that there is no
jobtracker, no namenode, ...... but in fact the port is occupied?
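
Concretely, on the namenode I am running something like this (9001 is
only an example port; jps ships with the JDK):

    bin/stop-mapred.sh         # stop the jobtracker and tasktrackers
    bin/stop-dfs.sh            # stop the namenode and datanodes
    jps                        # should list no JobTracker/NameNode afterwards
    netstat -an | grep 9001    # does anything still hold the port?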

P.S.: - Whether Hadoop is running or not, the netstat command always
        shows that the port is free!!
      - My OS is openSuse 9.x

I am trying to debug everything because I am rather sick of it. It is
certainly something stupid and most probably my fault (my configuration
or my way of running things), since others have managed to run Hadoop
on a cluster. I would appreciate any idea that helps uncover the
problem.

Thank you,
CJ


On Tue, Jul 7, 2009 at 4:47 AM, Bogdan M. Maryniuk <bogdan.maryniuk@gmail.com> wrote:

> On Mon, Jul 6, 2009 at 9:24 PM, Xine Jar <xinejar22@googlemail.com> wrote:
> > 2. How can I check that IPv6 is really disabled on my JVM?
>
> You've mentioned it is Linux, but how do I know which distribution
> from the Linux zoo you use?
>
> On Ubuntu it is sort of like this:
> In the file "/etc/modprobe.d/aliases" find "alias net-pf-10 ipv6",
> remove it and add the following:
>
> alias net-pf-10 off
> alias ipv6 off
>
> Then reboot (welcome to Linux).
>
> On RedHat and derivatives it is like this:
> echo "alias net-pf-10 off" >>  /etc/modprobe.conf <ENTER>
>
> Then reboot (insert your ironic joke here). :-)
>
>
> > And in which file can I insert the flag -Djava.net.preferIPv4Stack=true?
> It is just a JVM parameter for networking:
> 1. http://java.sun.com/j2se/1.4.2/docs/guide/net/properties.html
> 2. http://java.sun.com/j2se/1.4.2/docs/guide/net/ipv6_guide/#ipv6-networking
>
> See $HADOOP/conf/hadoop-env.sh and look for HADOOP_OPTS, which is
> commented out by default.
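>
> For example, uncommenting it to something like:
>
>     export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true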
>
> --
> Kind regards, BM
>
> Things that are stupid at the beginning rarely end up wisely.
>
