hadoop-common-user mailing list archives

From "Alex Loddengaard" <a...@cloudera.com>
Subject Re: Hadoop Installation
Date Fri, 21 Nov 2008 18:22:45 GMT
Download the 1.1.1 binary tarball.  It will contain a bunch of JAR
files; drop those JAR files into $HADOOP_HOME/lib and see what happens.
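
For example, a rough sketch of that (the exact tarball and unpacked directory
names here are assumptions and may differ from what the download page serves):

  tar xzf commons-logging-1.1.1-bin.tar.gz          # assumed filename; adjust to what you downloaded
  cp commons-logging-1.1.1/*.jar $HADOOP_HOME/lib/  # adjust if the jars unpack into a subdirectory
  bin/stop-all.sh && bin/start-all.sh               # restart so the daemons pick up the new jars
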
Alex

On Fri, Nov 21, 2008 at 9:19 AM, Mithila Nagendra <mnagendr@asu.edu> wrote:

> Hey Alex
> Which file do I download from the Apache Commons website?
>
> Thanks
> Mithila
> On Fri, Nov 21, 2008 at 8:15 PM, Mithila Nagendra <mnagendr@asu.edu>
> wrote:
>
> > I tried 0.18.2 as well.. it gave me the same exception.. so I tried the
> > lower version.. I should check if this works.. Thanks!
> >
> >
> > On Fri, Nov 21, 2008 at 5:06 AM, Alex Loddengaard <alex@cloudera.com> wrote:
> >
> >> Maybe try downloading the Apache Commons Logging jars
> >> (<http://commons.apache.org/downloads/download_logging.cgi>) and drop them
> >> into $HADOOP_HOME/lib.
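> >>
> >> (A quick check of what logging jars are already on that path can help too;
> >> this is only a sketch, and the exact jar names will vary:)
> >>
> >> ls $HADOOP_HOME/lib | grep -iE 'commons-logging|log4j'   # list any logging jars already present
> >>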
> >> Just curious, if you're starting a new cluster, why have you chosen to use
> >> 0.17.* and not 0.18.2?  It would be a good idea to use 0.18.2 if possible.
> >>
> >> Alex
> >>
> >> On Thu, Nov 20, 2008 at 4:36 PM, Mithila Nagendra <mnagendr@asu.edu>
> >> wrote:
> >>
> >> > Hey
> >> > The version is: Linux enpc3740.eas.asu.edu 2.6.9-67.0.20.EL #1 Wed Jun 18
> >> > 12:23:46 EDT 2008 i686 i686 i386 GNU/Linux, this is what I got when I used
> >> > the command uname -a (thanks Tom!)
> >> >
> >> > Yeah, it is bin/start-all.. Following is the exception that I got when I
> >> > tried to start the daemons..
> >> >
> >> >
> >> > [mithila@node01 mithila]$ ls
> >> > hadoop-0.17.2.1  hadoop-0.18.2  hadoop-0.18.2.tar.gz
> >> > [mithila@node01 mithila]$ cd hadoop-0.17*
> >> > [mithila@node01 hadoop-0.17.2.1]$ ls
> >> > bin        c++          conf     docs     hadoop-0.17.2.1-examples.jar  lib      LICENSE.txt  NOTICE.txt  src
> >> > build.xml  CHANGES.txt  contrib  hadoop-0.17.2.1-core.jar  hadoop-0.17.2.1-test.jar  libhdfs  logs  README.txt  webapps
> >> > [mithila@node01 hadoop-0.17.2.1]$ bin/start-all
> >> > bash: bin/start-all: No such file or directory
> >> > [mithila@node01 hadoop-0.17.2.1]$ bin/start-all.sh
> >> > starting namenode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-namenode-node01.out
> >> > mithila@localhost's password:
> >> > localhost: starting datanode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-datanode-node01.out
> >> > mithila@localhost's password:
> >> > localhost: starting secondarynamenode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-secondarynamenode-node01.out
> >> > starting jobtracker, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-jobtracker-node01.out
> >> > mithila@localhost's password:
> >> > localhost: starting tasktracker, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-tasktracker-node01.out
> >> > localhost: Exception in thread "main" java.lang.ExceptionInInitializerError
> >> > localhost: Caused by: org.apache.commons.logging.LogConfigurationException:
> >> > User-specified log class 'org.apache.commons.logging.impl.Log4JLogger'
> >> > cannot be found or is not useable.
> >> > localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
> >> > localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
> >> > localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
> >> > localhost:      at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:704)
> >> > localhost:      at org.apache.hadoop.mapred.TaskTracker.<clinit>(TaskTracker.java:95)
> >> > localhost: Could not find the main class: org.apache.hadoop.mapred.TaskTracker.  Program will exit.
> >> >
> >> > AND when I tried formatting the file system I got the following exception..
> >> > I followed Michael Noll's steps to install Hadoop.. I'm currently working on
> >> > a single node and if this works I will move on to multiple nodes in a
> >> > cluster.
> >> >
> >> > [mithila@node01 hadoop-0.17.2.1]$ bin/hadoop namenode -format
> >> > Exception in thread "main" java.lang.ExceptionInInitializerError
> >> > Caused by: org.apache.commons.logging.LogConfigurationException:
> >> > User-specified log class 'org.apache.commons.logging.impl.Log4JLogger'
> >> > cannot be found or is not useable.
> >> >        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
> >> >        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
> >> >        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
> >> >        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:704)
> >> >        at org.apache.hadoop.dfs.NameNode.<clinit>(NameNode.java:88)
> >> > Could not find the main class: org.apache.hadoop.dfs.NameNode.  Program will exit.
> >> >
> >> >
> >> > I have no idea what's wrong... my hadoop-site.xml file looks as follows:
> >> >
> >> > <?xml version="1.0"?>
> >> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> >> >
> >> > <!-- Put site-specific property overrides in this file. -->
> >> >
> >> > <configuration>
> >> >
> >> > <property>
> >> > <name>hadoop.tmp.dir</name>
> >> > <value>/tmp/hadoop-${user.name}</value>
> >> > <description>A base for other temporary directories</description>
> >> > </property>
> >> >
> >> >
> >> > <property>
> >> > <name>fs.default.name</name>
> >> > <value>hdfs://localhost:54310</value>
> >> > <description>The name of the default file system. A URI whose
> >> > scheme and authority determine the FileSystem implementation. The
> >> > URI's scheme determines the config property (fs.scheme.impl) naming
> >> > the FileSystem implementation class. The URI's authority is used to
> >> > determine the host, port, etc for a filesystem.</description>
> >> > </property>
> >> >
> >> >
> >> > <property>
> >> > <name>mapred.job.tracker</name>
> >> > <value>localhost:54311</value>
> >> > <description>The host and port that the MapReduce job tracker runs at.
> >> > If "local", then jobs are run in-process as a single map and
> >> > reduce task.</description>
> >> > </property>
> >> >
> >> >
> >> > <property>
> >> > <name>dfs.replication</name>
> >> > <value>1</value>
> >> > <description>Default block replication.
> >> > The actual number of replications can be specified when the file is
> >> > created.
> >> > The default is used if replication is not specified in create
> >> > time.</description>
> >> > </property>
> >> > "conf/hadoop-site.xml" 42L, 1271C
> >> >
> >> >
> >> > My hadoop-env.sh looks as follows:
> >> >
> >> > # Set Hadoop-specific environment variables here.
> >> >
> >> > # The only required environment variable is JAVA_HOME.  All others are
> >> > # optional.  When running a distributed configuration it is best to
> >> > # set JAVA_HOME in this file, so that it is correctly defined on
> >> > # remote nodes.
> >> >
> >> > # The java implementation to use.  Required.
> >> >  export JAVA_HOME=/usr/java/jdk1.6.0_10
> >> >
> >> > # Extra Java CLASSPATH elements.  Optional.
> >> > # export HADOOP_CLASSPATH=
> >> >
> >> > # The maximum amount of heap to use, in MB. Default is 1000.
> >> > # export HADOOP_HEAPSIZE=2000
> >> >
> >> > # Extra Java runtime options.  Empty by default.
> >> > # export HADOOP_OPTS=-server
> >> >
> >> > # Command specific options appended to HADOOP_OPTS when specified
> >> > export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote
> >> > $HADOOP_NAMENODE_OPTS"
> >> > export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote
> >> > $HADOOP_SECONDARYNAMENODE_OPTS"
> >> > export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote
> >> > $HADOOP_DATANODE_OPTS"
> >> > export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote
> >> > $HADOOP_BALANCER_OPTS"
> >> > export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote
> >> > $HADOOP_JOBTRACKER_OPTS"
> >> > # export HADOOP_TASKTRACKER_OPTS=
> >> > # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
> >> > # export HADOOP_CLIENT_OPTS
> >> >
> >> > # Extra ssh options.  Empty by default.
> >> > # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
> >> >
> >> > # Where log files are stored.  $HADOOP_HOME/logs by default.
> >> > # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
> >> >
> >> > # File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
> >> > # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
> >> >
> >> > # host:path where hadoop code should be rsync'd from.  Unset by default.
> >> > # export HADOOP_MASTER=master:/home/$USER/src/hadoop
> >> >
> >> > "conf/hadoop-env.sh" 54L, 2236C
> >> >
> >> > Don't know what the exceptions mean.. Does anyone have an idea?
> >> >
> >> > Thanks
> >> > Mithila
> >> >
> >> >
> >> > On Thu, Nov 20, 2008 at 6:42 AM, some speed <speed.some@gmail.com> wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > > I am working on the same for my master's project and I know how
> >> > > frustrating it can be to get Hadoop installed.
> >> > > If time is not a factor, I suggest you first try implementing it in a
> >> > > pseudo-distributed environment. Once you understand how things work by
> >> > > implementing a simple MapReduce program, you can easily move on to a
> >> > > cluster.
> >> > >
> >> > > From what little I know, let me tell you a few things:
> >> > >
> >> > > I tried using the university network to install Hadoop.. it was a real
> >> > > pain. Maybe it was because I didn't have admin privileges (to install HDFS
> >> > > and its files). So make sure you have admin rights, or you keep getting an
> >> > > error about port 22 (for ssh) not being open, or the daemons not starting.
> >> > > And by the way, is it conf/start-all.sh?? I think it's bin/start-all.sh or
> >> > > something of that sort.
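> >> > >
> >> > > (If the daemons keep prompting for a password, passwordless ssh to localhost
> >> > > usually fixes that; a rough sketch, assuming an RSA key and a default ~/.ssh
> >> > > layout:)
> >> > >
> >> > > ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa          # generate a key with an empty passphrase
> >> > > cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize it for this account
> >> > > chmod 600 ~/.ssh/authorized_keys
> >> > > ssh localhost                                     # should now log in without a password prompt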
> >> > >
> >> > > hadoop-site.xml -- I had the links bookmarked somewhere, can't find it
> >> > > now, but I think you are supposed to have a few more details in there for a
> >> > > cluster installation. I'm sure we can find those online quite easily.
> >> > >
> >> > > Also, I suppose you are using Java? If you are good with Eclipse, then you
> >> > > can implement MapReduce/Hadoop through that on a single node (just to get a
> >> > > hang of it).
> >> > >
> >> > > All the best!
> >> > >
> >> > > On Wed, Nov 19, 2008 at 6:38 PM, Tom Wheeler <tomwheel@gmail.com> wrote:
> >> > >
> >> > >> On Wed, Nov 19, 2008 at 5:31 PM, Mithila Nagendra <mnagendr@asu.edu> wrote:
> >> > >> > Oh is that so. I'm not sure which UNIX it is since I'm working with a
> >> > >> > cluster that is remotely accessed.
> >> > >>
> >> > >> If you can get a shell on the machine, try typing "uname -a" to see
> >> > >> what type of UNIX it is.
> >> > >>
> >> > >> Alternatively, the os.name, os.version and os.arch Java system
> >> > >> properties could also help you to identify the operating system.
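> >> > >>
> >> > >> (For example, a quick throwaway check along these lines would print them; this
> >> > >> is only a sketch, the class name is made up, and it assumes javac and java are
> >> > >> on your PATH:)
> >> > >>
> >> > >> cat > OsProps.java <<'EOF'
> >> > >> // throwaway helper to print the system properties mentioned above
> >> > >> public class OsProps {
> >> > >>     public static void main(String[] args) {
> >> > >>         System.out.println(System.getProperty("os.name"));
> >> > >>         System.out.println(System.getProperty("os.version"));
> >> > >>         System.out.println(System.getProperty("os.arch"));
> >> > >>     }
> >> > >> }
> >> > >> EOF
> >> > >> javac OsProps.java && java OsProps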
> >> > >>
> >> > >> --
> >> > >> Tom Wheeler
> >> > >> http://www.tomwheeler.com/
> >> > >>
> >> > >
> >> > >
> >> >
> >>
> >
> >
>
