hadoop-common-user mailing list archives

From "Mithila Nagendra" <mnage...@asu.edu>
Subject Re: Hadoop Installation
Date Fri, 21 Nov 2008 00:36:33 GMT
The version is: Linux enpc3740.eas.asu.edu 2.6.9-67.0.20.EL #1 Wed Jun 18
12:23:46 EDT 2008 i686 i686 i386 GNU/Linux. This is what I got when I used
the command uname -a (thanks Tom!).

Yes, it is bin/start-all.sh. The following is the exception I got when I tried
to start the daemons:

[mithila@node01 mithila]$ ls
hadoop-  hadoop-0.18.2  hadoop-0.18.2.tar.gz
[mithila@node01 mithila]$ cd hadoop-0.17*
[mithila@node01 hadoop-]$ ls
bin        c++          conf     docs
 hadoop-  lib      LICENSE.txt  NOTICE.txt  src
build.xml  CHANGES.txt  contrib  hadoop-
 hadoop-      libhdfs  logs         README.txt  webapps
[mithila@node01 hadoop-]$ bin/start-all
bash: bin/start-all: No such file or directory
[mithila@node01 hadoop-]$ bin/start-all.sh
starting namenode, logging to
mithila@localhost's password:
localhost: starting datanode, logging to
mithila@localhost's password:
localhost: starting secondarynamenode, logging to
starting jobtracker, logging to
mithila@localhost's password:
localhost: starting tasktracker, logging to
localhost: Exception in thread "main" java.lang.ExceptionInInitializerError
localhost: Caused by: org.apache.commons.logging.LogConfigurationException:
User-specified log class 'org.apache.commons.logging.impl.Log4JLogger'
cannot be found or is not useable.
localhost:      at
localhost:      at
localhost:      at
localhost:      at
localhost:      at
localhost: Could not find the main class:
org.apache.hadoop.mapred.TaskTracker.  Program will exit.
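A side note on the repeated password prompts above: they are separate from the
exception, and mean passwordless ssh to localhost is not set up yet. A minimal
sketch, along the lines of the single-node guide mentioned below (assuming an
RSA key with an empty passphrase is acceptable on this cluster):

ssh-keygen -t rsa -P ""                          # accept the default key location
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize your own key
ssh localhost                                    # should now log in without a password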

And when I tried formatting the file system, I got the following exception.
I followed Michael Noll's steps to install Hadoop. I'm currently working on
a single node and, if this works, will move on to multiple nodes in a cluster.

[mithila@node01 hadoop-]$ bin/hadoop namenode -format
Exception in thread "main" java.lang.ExceptionInInitializerError
Caused by: org.apache.commons.logging.LogConfigurationException:
User-specified log class 'org.apache.commons.logging.impl.Log4JLogger'
cannot be found or is not useable.
        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:704)
        at org.apache.hadoop.dfs.NameNode.<clinit>(NameNode.java:88)
Could not find the main class: org.apache.hadoop.dfs.NameNode.  Program will exit.
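For what it's worth, both exceptions point at the same root cause: commons-logging
cannot load Log4JLogger, which usually means Hadoop's bundled jars in lib/ are not
ending up on the classpath. That is plausible here, since the listing above shows
two unpacked versions side by side, and the shell prompt is sitting in the older
directory while the tarball is 0.18.2. A quick sanity check, assuming a stock
0.18.2 layout (the jar and examples-jar names below are the stock ones, not taken
from this thread):

cd ~/hadoop-0.18.2                          # work from one clean unpacked release
ls lib/ | grep -E 'commons-logging|log4j'   # both jars ship in lib/
unset CLASSPATH                             # a stale CLASSPATH can shadow the bundled jars
bin/hadoop version                          # should report 0.18.2
bin/hadoop namenode -format                 # formatting should then succeed
bin/start-all.sh                            # and the daemons should come up
bin/hadoop jar hadoop-0.18.2-examples.jar pi 2 100   # optional smoke test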

I have no idea what's wrong. My hadoop-site.xml file looks as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->


<description>A base for other temporary directories</description>

<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
URI's scheme determines the config property (fs.scheme.impl) naming
the FileSystem implementation class. The URI's authority is used to
determine the host, port, etc for a filesystem.</description>

<description>The host and port that the MapReduce job tracker runs at.
If "local", then jobs are run in-process as a single map and
reduce task.</description>

<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create
"conf/hadoop-site.xml" 42L, 1271C

My hadoop-env.sh looks as follows:

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
 export JAVA_HOME=/usr/java/jdk1.6.0_10

# Extra Java CLASSPATH elements.  Optional.

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored.  $HADOOP_HOME/logs by default.

# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

"conf/hadoop-env.sh" 54L, 2236C

I don't know what the exceptions mean. Does anyone have an idea?


On Thu, Nov 20, 2008 at 6:42 AM, some speed <speed.some@gmail.com> wrote:

> Hi,
> I am working on the same for my master's project, and I know how frustrating
> it can be to get Hadoop installed.
> If time is not a factor, I suggest you first try implementing it in a
> pseudo-distributed environment. Once you understand how things work by
> implementing a simple MapReduce program, you can easily move on to a cluster.
> From what little I know, let me tell you a few things.
> I tried using the university network to install Hadoop. It was a real
> pain. Maybe it was because I didn't have admin privileges (to install HDFS and
> its files). So make sure you have admin rights, or you will keep getting errors
> about port 22 (for ssh) not being open or the daemons not being started.
> And by the way, is it conf/start-all.sh? I think it's bin/start-all.sh or
> something of that sort.
> hadoop-site.xml -- I had the links bookmarked somewhere; I can't find them now,
> but I think you are supposed to have a few more details in there for a cluster
> installation. I'm sure we can find those online quite easily.
> Also, I suppose you are using Java? If you are good with Eclipse, you can
> implement MapReduce/Hadoop through that on a single node (just to get the hang
> of it).
> All the best!
> On Wed, Nov 19, 2008 at 6:38 PM, Tom Wheeler <tomwheel@gmail.com> wrote:
>> On Wed, Nov 19, 2008 at 5:31 PM, Mithila Nagendra <mnagendr@asu.edu>
>> wrote:
>> > Oh, is that so? I'm not sure which UNIX it is, since I'm working with a
>> > cluster that is remotely accessed.
>> If you can get a shell on the machine, try typing "uname -a" to see
>> what type of UNIX it is.
>> Alternatively, the os.name, os.version and os.arch Java system
>> properties could also help you to identify the operating system.
>> --
>> Tom Wheeler
>> http://www.tomwheeler.com/
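
To try the second route Tom describes, a throwaway probe works with any JDK
(OsProbe is just a hypothetical name):

cat > OsProbe.java <<'EOF'
public class OsProbe {
    public static void main(String[] args) {
        // The three system properties that identify the platform.
        System.out.println(System.getProperty("os.name") + " "
                + System.getProperty("os.version") + " "
                + System.getProperty("os.arch"));
    }
}
EOF
javac OsProbe.java && java OsProbe   # e.g. "Linux 2.6.9-67.0.20.EL i386"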
