Subject: Re: starting hadoop fails
From: Jeff Hammerbacher
To: mapreduce-user@hadoop.apache.org
Date: Tue, 28 Sep 2010 03:10:45 -0700
In-Reply-To: <4CA0A2E8.7070609@uni-konstanz.de>

Hey Johannes,

For questions about CDH, please use the mailing list at
https://groups.google.com/a/cloudera.org/group/cdh-user.

Regards,
Jeff

On Mon, Sep 27, 2010 at 6:58 AM, Johannes.Lichtenberger
<Johannes.Lichtenberger@uni-konstanz.de> wrote:
> Hi,
>
> I'm trying to run the Cloudera Hadoop distribution, but it always
> seems to fail. The DataNode log:
>
> 2010-09-27 15:49:07,081 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = luna/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2+320
> STARTUP_MSG:   build = -r 9b72d268a0b590b4fd7d13aca17c1c453f8bc957;
> compiled by 'root' on Mon Jun 28 23:17:49 UTC 2010
> ************************************************************/
> 2010-09-27 15:49:08,256 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
> 2010-09-27 15:49:09,256 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s).
> 2010-09-27 15:49:10,257 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s).
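
The retry lines above mean the DataNode itself came up fine but found
nothing listening on localhost:8020, the NameNode RPC address it takes
from fs.default.name in core-site.xml; it keeps retrying until a
NameNode answers. A quick way to confirm that from the shell, as a
sketch assuming a stock Linux box with the JDK's jps and netstat on
the PATH (neither command appears in the original report):

johannes@luna:~$ sudo jps | grep -i namenode   # is a NameNode JVM running at all?
johannes@luna:~$ netstat -tln | grep 8020      # is anything bound to the RPC port?

If the second command prints nothing, the NameNode is not up (or is
bound to a different address), and the DataNode retries are a symptom
rather than the fault.
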
>
> I'm trying to start Hadoop as described in
> https://docs.cloudera.com/display/DOC/Hadoop+%28CDH3%29+Quick+Start+Guide
>
> johannes@luna:~$ for service in /etc/init.d/hadoop-0.20-*; do sudo
> $service start; done
> Starting Hadoop datanode daemon: starting datanode, logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-datanode-luna.out
> ERROR.
> Starting Hadoop jobtracker daemon: starting jobtracker, logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-jobtracker-luna.out
> ERROR.
> Starting Hadoop namenode daemon: starting namenode, logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-namenode-luna.out
> ERROR.
> Starting Hadoop secondarynamenode daemon: starting secondarynamenode,
> logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-secondarynamenode-luna.out
> ERROR.
> Starting Hadoop tasktracker daemon: starting tasktracker, logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-tasktracker-luna.out
> ERROR.
>
> Starting the namenode seems to have worked nevertheless:
>
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = luna/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2+320
> STARTUP_MSG:   build = -r 9b72d268a0b590b4fd7d13aca17c1c453f8bc957;
> compiled by 'root' on Mon Jun 28 23:17:49 UTC 2010
> ************************************************************/
> 2010-09-27 15:56:07,567 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=8020
> 2010-09-27 15:56:07,570 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
> localhost/127.0.0.1:8020
> 2010-09-27 15:56:07,572 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2010-09-27 15:56:07,573 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 2010-09-27 15:56:07,611 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
> 2010-09-27 15:56:07,611 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2010-09-27 15:56:07,611 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=false
> 2010-09-27 15:56:07,617 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 2010-09-27 15:56:07,618 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2010-09-27 15:56:07,643 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Number of files = 9
> 2010-09-27 15:56:07,649 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Number of files under
> construction = 0
> 2010-09-27 15:56:07,649 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Image file of size 889
> loaded in 0 seconds.
> 2010-09-27 15:56:07,657 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode,
> reached end of edit log Number of transactions found 22
> 2010-09-27 15:56:07,658 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Edits file
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current/edits of size 1049092
> edits # 22 loaded in 0 seconds.
> 2010-09-27 15:56:07,722 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Image file of size 889
> saved in 0 seconds.
> 2010-09-27 15:56:07,999 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 401 msecs
> 2010-09-27 15:56:08,004 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
> blocks = 1
> 2010-09-27 15:56:08,004 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2010-09-27 15:56:08,004 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 1
> 2010-09-27 15:56:08,005 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> over-replicated blocks = 0
> 2010-09-27 15:56:08,005 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2010-09-27 15:56:08,005 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2010-09-27 15:56:08,006 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 1 blocks
> 2010-09-27 15:56:13,136 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2010-09-27 15:56:13,183 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2010-09-27 15:56:13,184 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2010-09-27 15:56:13,184 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2010-09-27 15:56:13,184 INFO org.mortbay.log: jetty-6.1.14
> 2010-09-27 15:56:13,555 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2010-09-27 15:56:13,555 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2010-09-27 15:56:13,565 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 8020: starting
> 2010-09-27 15:56:13,565 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 8020: starting
> 2010-09-27 15:56:13,565 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 8020: starting
> 2010-09-27 15:56:13,565 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 8020: starting
> 2010-09-27 15:56:13,570 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 8020: starting
> 2010-09-27 15:56:13,570 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2010-09-27 15:56:13,580 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 8020: starting
> 2010-09-27 15:56:13,591 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from 127.0.0.1:50010
> storage DS-1170768146-127.0.1.1-50010-1285540015684
> 2010-09-27 15:56:13,594 INFO org.apache.hadoop.net.NetworkTopology:
> Adding a new node: /default-rack/127.0.0.1:50010
> 2010-09-27 15:56:13,601 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to
> blk_-3265306986591026360_1034 size 4
>
> Regards,
> Johannes
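
Taken together, the two logs are less alarming than the init scripts
suggest: the NameNode comes up cleanly (RPC on 8020, Jetty on 50070),
and at 15:56:13 the DataNode that was retrying earlier registers and
reports its block, so HDFS is actually running. The ERROR printed by
each init script is therefore worth chasing on its own; a common cause
is a daemon or stale pid file left over from a previous start attempt.
Two checks worth running, as a sketch (the status action is the usual
init-script convention and an assumption here; it does not appear in
the original output):

johannes@luna:~$ for service in /etc/init.d/hadoop-0.20-*; do sudo $service status; done
johannes@luna:~$ tail -n 20 /usr/lib/hadoop-0.20/logs/hadoop-root-*.out

The .out files named in the start output capture each daemon's stderr
at launch, so they usually show the real failure (for instance a
java.net.BindException if the port is already taken) rather than the
bare ERROR.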