hadoop-general mailing list archives

From Xine Jar <xineja...@googlemail.com>
Subject Re: problem with starting the Jobtracker and the namenode
Date Wed, 08 Jul 2009 15:11:17 GMT
Great, thank you.
I can see now that all the nodes have started, and I shall try to run the
WordCount v1.0 example. Let's see.

I would also like to know in which cases you advise reformatting the node.
Do I ever need to use this command again?
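
For reference, I assume the commands involved are roughly the following; the
jar name and the input/output paths are only placeholders on my side:

  bin/hadoop namenode -format                                   # format the HDFS namenode
  bin/hadoop jar hadoop-*-examples.jar wordcount input output   # run the WordCount example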

Concerning the books: is there a possibility to see, roughly, the table of
contents for each of the following: Pro Hadoop and Hadoop: The Definitive
Guide? As you see, I am starting basically from zero with Hadoop and would
like to reach the point where I can write my own Java program/my own
search query. Which of the books would you advise?

Thank you very much for your help


On Wed, Jul 8, 2009 at 3:25 PM, jason hadoop <jason.hadoop@gmail.com> wrote:

> You may need to delete the directory that you have configured for dfs
> storage on all of the machines in your cluster.
> The other issue that may arise is that the user who owns the
> Namenode/Datanode processes does not have write permission on the
> filesystem locations used for dfs storage.
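>
> Something along these lines is usually enough. The paths below are only
> placeholders; use whatever your hadoop.tmp.dir / dfs.name.dir / dfs.data.dir
> settings actually point to, and your own hadoop user and group:
>
>   bin/stop-all.sh                              # stop any running daemons first
>   rm -rf /path/to/dfs/name /path/to/dfs/data   # repeat on every node in the cluster
>   chown -R hadoop:hadoop /path/to/dfs          # the daemon user must be able to write here
>   bin/hadoop namenode -format                  # then bin/start-all.sh again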
>
> Does the namenode format command complete successfully?
>
> On Wed, Jul 8, 2009 at 6:14 AM, Xine Jar <xinejar22@googlemail.com> wrote:
>
> > Thank you all for your great help,
> >
> > I finally discovered a typing error in one of my configuration files;
> > because of it, my jobtracker was not able to bind to its address and port.
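> >
> > Just to illustrate what I mean, the entry in question looks roughly like
> > this in hadoop-site.xml (mapred-site.xml on 0.20.0); the host and port here
> > are only example values:
> >
> >   <property>
> >     <name>mapred.job.tracker</name>
> >     <value>master:54311</value>
> >   </property>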
> >
> > On the other hand, the namenode still cannot be started. The log file
> > shows that the namenode is not formatted, although I did format it. I would
> > appreciate it if someone could point me to the folders I can delete in
> > order to be able to reformat the node without losing any critical
> > information. Please note that I have already deleted the /tmp folder once
> > and tried to reformat, but this did not help!
> >
> > Many thanks,
> > CJ
> >
> > On Wed, Jul 8, 2009 at 11:12 AM, Xine Jar <xinejar22@googlemail.com>
> > wrote:
> >
> > > Hello,
> > > I am still struggling with the same problem. In order to be sure that
> > > IPv6 is not creating the problem and that the JVM is not suffering from
> > > a bug, I wrote a TCP client/server program in Java and let the server
> > > run on the same machine that was showing the binding problem, using the
> > > same port number. Luckily or unfortunately, the Java server could bind
> > > to the address and port normally, the communication between the client
> > > and the server was successful, and the netstat command showed that the
> > > port was in use.
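> > >
> > > Just to show what I mean, my test server was roughly along these lines;
> > > the port number 54311 is only an example:
> > >
> > >   import java.net.ServerSocket;
> > >   import java.net.Socket;
> > >
> > >   public class BindTest {
> > >       public static void main(String[] args) throws Exception {
> > >           // try to bind to the same port the jobtracker would use
> > >           ServerSocket server = new ServerSocket(54311);
> > >           System.out.println("Bound to " + server.getLocalSocketAddress());
> > >           Socket client = server.accept();   // wait for one client connection
> > >           System.out.println("Client: " + client.getInetAddress());
> > >           client.close();
> > >           server.close();
> > >       }
> > >   }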
> > >
> > > In order to be sure that the problem is not a bug in Hadoop, I have
> > > installed version 0.20.0, copied over the same configuration, and I still
> > > have the same problem.
> > >
> > > *I have a few questions:*
> > > As I have previously mentioned, the namenode cannot be started, and Java
> > > complains that the node has not been formatted. The other problem is with
> > > the jobtracker, which is giving a binding error on the address:port.
> > >
> > > *Q1:* Is it possible that the second problem (jobtracker problem) is
> > > appearing because of the namenode problem?
> > >
> > > If yes, how can I solve this? I have actually deleted the /tmp folder
> > > and reformatted the node, but the formatting error persists! Do I have
> > > to do something else?
> > >
> > > If no, am I doing something stupid in any of the configuration files?
> > >
> > > *Q2:* In order to kill all the instances of the dfs and the mapred
> > > daemons, is it enough to execute bin/stop-dfs.sh and bin/stop-mapred.sh
> > > on the namenode? Is it possible that the command line shows that there is
> > > no jobtracker, no namenode, ... but in fact the port is still occupied?
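> > >
> > > One check I am planning, assuming the standard tools are available, is to
> > > run jps on each node to see which Hadoop daemons are really still alive:
> > >
> > >   jps                        # lists JVMs: NameNode, DataNode, JobTracker, TaskTracker, ...
> > >   netstat -an | grep 54311   # 54311 is only my example jobtracker port
> > >   kill <pid>                 # stop any leftover daemon by the pid jps reports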
> > >
> > > P.S.: - Whether Hadoop is running or not, the netstat command always
> > >          shows that the port is free!
> > >        - My OS is openSuse 9*
> > >
> > > I am trying to debug everything because I am somewhat sick of it. It is
> > > certainly something stupid and most probably my fault (my configuration
> > > or my way of running things), since others have managed to run Hadoop on
> > > a cluster. I would appreciate any idea that helps to uncover the problem.
> > >
> > > Thank you,
> > > CJ
> > >
> > >
> > >
> > > On Tue, Jul 7, 2009 at 4:47 AM, Bogdan M. Maryniuk <
> > > bogdan.maryniuk@gmail.com> wrote:
> > >
> > >> On Mon, Jul 6, 2009 at 9:24 PM, Xine Jar <xinejar22@googlemail.com>
> > >> wrote:
> > >> > 2. How can I check that IPv6 is really disabled on my JVM?
> > >>
> > >> You've mentioned it is Linux, but how do I know which distribution from
> > >> the Linux zoo you use?
> > >>
> > >> On Ubuntu it is sort of like this:
> > >> In the file "/etc/modprobe.d/aliases" find "alias net-pf-10 ipv6",
> > >> remove it and add the following:
> > >>
> > >> alias net-pf-10 off
> > >> alias ipv6 off
> > >>
> > >> Then reboot (welcome to Linux).
> > >>
> > >> On RedHat and derivatives it is like this:
> > >> echo "alias net-pf-10 off" >>  /etc/modprobe.conf <ENTER>
> > >>
> > >> Then reboot (insert your ironic joke here). :-)
> > >>
> > >>
> > >> > and in which file can I insert the flag -Djava.net.preferIPv4Stack=true?
> > >> It is just a JVM parameter for networking:
> > >> 1. http://java.sun.com/j2se/1.4.2/docs/guide/net/properties.html
> > >> 2. http://java.sun.com/j2se/1.4.2/docs/guide/net/ipv6_guide/#ipv6-networking
> > >>
> > >> See $HADOOP/conf/hadoop-env.sh and look for HADOOP_OPTS, which is
> > >> commented out by default.
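> > >>
> > >> Something like this, once uncommented (the exact wording of the line may
> > >> differ slightly between Hadoop versions):
> > >>
> > >>   export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true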
> > >>
> > >> --
> > >> Kind regards, BM
> > >>
> > >> Things that are stupid at the beginning rarely end up wisely.
> > >>
> > >
> > >
> >
>
>
>
> --
> Pro Hadoop, a book to guide you from beginner to hadoop mastery,
> http://www.amazon.com/dp/1430219424?tag=jewlerymall
> www.prohadoopbook.com a community for Hadoop Professionals
>
