hadoop-mapreduce-user mailing list archives

From: Jeff Hammerbacher <ham...@cloudera.com>
Subject: Re: starting hadoop fails
Date: Tue, 28 Sep 2010 10:10:45 GMT
Hey Johannes,

For questions about CDH, please use the mailing list at
https://groups.google.com/a/cloudera.org/group/cdh-user.

Regards,
Jeff

On Mon, Sep 27, 2010 at 6:58 AM, Johannes.Lichtenberger
<Johannes.Lichtenberger@uni-konstanz.de> wrote:

> Hi,
>
> I'm trying to run the Cloudera Hadoop distribution, but starting it
> always fails. The DataNode log:
>
> 2010-09-27 15:49:07,081 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = luna/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2+320
> STARTUP_MSG:   build =  -r 9b72d268a0b590b4fd7d13aca17c1c453f8bc957;
> compiled by 'root' on Mon Jun 28 23:17:49 UTC 2010
> ************************************************************/
> 2010-09-27 15:49:08,256 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
> 2010-09-27 15:49:09,256 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s).
> 2010-09-27 15:49:10,257 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s).
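>
> (A quick sanity check for the retries above, assuming the stock CDH3
> layout with the active configuration in /etc/hadoop-0.20/conf, would be
> to confirm that something is listening on the NameNode RPC port and
> that fs.default.name points where the DataNode expects:)
>
> # Is anything bound to the NameNode RPC port?
> sudo netstat -tlnp | grep 8020
> # Which address does fs.default.name point at?
> grep -A 1 fs.default.name /etc/hadoop-0.20/conf/core-site.xml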
>
> I'm trying to start Hadoop as described in
> https://docs.cloudera.com/display/DOC/Hadoop+%28CDH3%29+Quick+Start+Guide
>
> johannes@luna:~$ for service in /etc/init.d/hadoop-0.20-*; do sudo
> $service start; done
> Starting Hadoop datanode daemon: starting datanode, logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-datanode-luna.out
> ERROR.
> Starting Hadoop jobtracker daemon: starting jobtracker, logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-jobtracker-luna.out
> ERROR.
> Starting Hadoop namenode daemon: starting namenode, logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-namenode-luna.out
> ERROR.
> Starting Hadoop secondarynamenode daemon: starting secondarynamenode,
> logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-secondarynamenode-luna.out
> ERROR.
> Starting Hadoop tasktracker daemon: starting tasktracker, logging to
> /usr/lib/hadoop-0.20/bin/../logs/hadoop-root-tasktracker-luna.out
> ERROR.
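>
> (The init scripts only print ERROR; the underlying exception should be
> in the daemon logs they name. A sketch to dump the tail of each log
> file, using the paths from the output above, assuming they exist:)
>
> for log in /usr/lib/hadoop-0.20/logs/hadoop-root-*-luna.out; do
>     echo "== $log =="
>     tail -n 20 "$log"
> done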
>
> Starting the namenode itself actually looks OK, even though the init
> script reported ERROR:
>
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = luna/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2+320
> STARTUP_MSG:   build =  -r 9b72d268a0b590b4fd7d13aca17c1c453f8bc957;
> compiled by 'root' on Mon Jun 28 23:17:49 UTC 2010
> ************************************************************/
> 2010-09-27 15:56:07,567 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=8020
> 2010-09-27 15:56:07,570 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
> localhost/127.0.0.1:8020
> 2010-09-27 15:56:07,572 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2010-09-27 15:56:07,573 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 2010-09-27 15:56:07,611 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
> 2010-09-27 15:56:07,611 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2010-09-27 15:56:07,611 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=false
> 2010-09-27 15:56:07,617 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 2010-09-27 15:56:07,618 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2010-09-27 15:56:07,643 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Number of files = 9
> 2010-09-27 15:56:07,649 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Number of files under
> construction = 0
> 2010-09-27 15:56:07,649 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Image file of size 889
> loaded in 0 seconds.
> 2010-09-27 15:56:07,657 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode,
> reached end of edit log Number of transactions found 22
> 2010-09-27 15:56:07,658 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Edits file
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current/edits of size 1049092
> edits # 22 loaded in 0 seconds.
> 2010-09-27 15:56:07,722 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Image file of size 889
> saved in 0 seconds.
> 2010-09-27 15:56:07,999 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 401 msecs
> 2010-09-27 15:56:08,004 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
> blocks = 1
> 2010-09-27 15:56:08,004 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2010-09-27 15:56:08,004 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 1
> 2010-09-27 15:56:08,005 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> over-replicated blocks = 0
> 2010-09-27 15:56:08,005 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2010-09-27 15:56:08,005 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2010-09-27 15:56:08,006 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 1 blocks
> 2010-09-27 15:56:13,136 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2010-09-27 15:56:13,183 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2010-09-27 15:56:13,184 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2010-09-27 15:56:13,184 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2010-09-27 15:56:13,184 INFO org.mortbay.log: jetty-6.1.14
> 2010-09-27 15:56:13,555 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2010-09-27 15:56:13,555 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2010-09-27 15:56:13,565 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 8020: starting
> 2010-09-27 15:56:13,565 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 8020: starting
> 2010-09-27 15:56:13,565 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 8020: starting
> 2010-09-27 15:56:13,565 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 8020: starting
> 2010-09-27 15:56:13,566 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 8020: starting
> 2010-09-27 15:56:13,570 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 8020: starting
> 2010-09-27 15:56:13,570 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2010-09-27 15:56:13,580 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 8020: starting
> 2010-09-27 15:56:13,591 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from 127.0.0.1:50010
> storage DS-1170768146-127.0.1.1-50010-1285540015684
> 2010-09-27 15:56:13,594 INFO org.apache.hadoop.net.NetworkTopology:
> Adding a new node: /default-rack/127.0.0.1:50010
> 2010-09-27 15:56:13,601 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to
> blk_-3265306986591026360_1034 size 4
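>
> (Given that HDFS runs as the hadoop user here, per fsOwner=hadoop
> above, one way to confirm that the DataNode really registered with the
> NameNode would be:)
>
> sudo -u hadoop hadoop dfsadmin -report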
>
> Regards,
> Johannes
>
