hadoop-common-user mailing list archives

From 纯 郭 <jingl...@hotmail.com>
Subject A problem when running Hadoop
Date Fri, 01 Aug 2008 09:48:27 GMT

 
Hi, I have a problem when using Hadoop.
 
Copy the input files into the distributed filesystem:

$ bin/hadoop dfs -put conf input
08/08/01 17:42:05 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/yicha-a-183/yicha/input/configuration.xsl retries left 2
08/08/01 17:42:06 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/yicha-a-183/yicha/input/configuration.xsl could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

        at org.apache.hadoop.ipc.Client.call(Client.java:512)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:199)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$9(DFSClient.java:1953)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)

08/08/01 17:42:06 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/yicha-a-183/yicha/input/configuration.xsl retries left 1
08/08/01 17:42:09 WARN dfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/yicha-a-183/yicha/input/configuration.xsl could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

08/08/01 17:42:09 WARN dfs.DFSClient: Error Recovery for block null bad datanode[0]
put: Could not get block locations. Aborting...
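If I understand it, "could only be replicated to 0 nodes" means the namenode does not know of any live datanode. A minimal sketch of how I would read the live-datanode count out of a `bin/hadoop dfsadmin -report` style dump (the report text is inlined here as sample data, since I cannot get a working cluster; the exact wording may differ by version):

```shell
# Sample report text (hypothetical; a live cluster would print its real counts)
report='Total raw bytes: 0 (0 KB)
Datanodes available: 0'

# Extract the number after "Datanodes available:" with sed
live=$(printf '%s\n' "$report" | sed -n 's/^Datanodes available: \([0-9][0-9]*\).*/\1/p')
echo "$live"   # prints: 0
```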
When I stop the daemons I get:

$ bin/stop-all.sh
stopping jobtracker
localhost: no tasktracker to stop
stopping namenode
localhost: no datanode to stop
localhost: no secondarynamenode to stop

So the tasktracker, datanode, and secondarynamenode were never started. Is the earlier error related to these three daemons not starting? Why do they not start, and why do they leave no logs?
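Since the .out files are empty, the only other place I know to look is the .log files. For reference, a small sketch of how I scan a log directory for ERROR/FATAL lines (sample files are created in a temp directory here, because the datanode and tasktracker never produced .log files on my machine; the FATAL line is invented):

```shell
# Create a throwaway log directory with two sample daemon logs
logdir=$(mktemp -d)
printf '2008-08-01 17:28:06 INFO dfs.NameNode: STARTUP_MSG\n' > "$logdir/hadoop-namenode.log"
printf '2008-08-01 17:28:09 FATAL dfs.DataNode: sample failure line\n' > "$logdir/hadoop-datanode.log"

# List only the logs that contain ERROR or FATAL lines
bad=$(grep -lE 'ERROR|FATAL' "$logdir"/*.log)
echo "$bad"   # prints the path of hadoop-datanode.log only
rm -rf "$logdir"
```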
 
The environment and the steps I followed are below. Thank you!
 
                                                                                jinglu
                                                                            2008-8-1
 
 
 
 
 
 
The environment: Cygwin on Windows 2000
 
$ ssh localhost 
Last login: Fri Aug  1 16:46:33 2008 from 127.0.0.1
So ssh is configured correctly; I can ssh to localhost without a passphrase.
 
I use the following conf/hadoop-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/zxf/hadoop/tmp/</value>
  </property>
</configuration>
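For reference, a sed one-liner I use to double-check a single property value in a hadoop-site.xml laid out like the one above (this assumes each <name> and <value> sits on its own line; the Configuration API would be the proper way to read it):

```shell
# Write a minimal hadoop-site.xml fragment to a temp file
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

# Print the <value> on the line that follows the dfs.replication <name> line
repl=$(sed -n '/<name>dfs.replication<\/name>/{n;s:.*<value>\(.*\)</value>.*:\1:p;}' "$conf")
echo "$repl"   # prints: 1
rm -f "$conf"
```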
 
Format a new distributed filesystem:

$ bin/hadoop namenode -format
08/08/01 17:16:09 INFO dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = yicha-a-183/192.168.1.139
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.16.4
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.16 -r 652614; compiled by 'hadoopqa' on Fri May  2 00:18:12 UTC 2008
************************************************************/
08/08/01 17:16:09 INFO fs.FSNamesystem: fsOwner=yicha-a-183\sshd_server,None,root,Administrators,Users
08/08/01 17:16:09 INFO fs.FSNamesystem: supergroup=supergroup
08/08/01 17:16:09 INFO fs.FSNamesystem: isPermissionEnabled=true
08/08/01 17:16:09 INFO dfs.Storage: Storage directory \home\zxf\hadoop\tmp\dfs\name has been successfully formatted.
08/08/01 17:16:09 INFO dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yicha-a-183/192.168.1.139
************************************************************/
 
Start the Hadoop daemons:

$ bin/start-all.sh
starting namenode, logging to /cygdrive/e/nutch/hadoop_source/hadoop-0.16.4/bin/../logs/hadoop-yicha-namenode-yicha-a-183.out
localhost: starting datanode, logging to /cygdrive/e/nutch/hadoop_source/hadoop-0.16.4/bin/../logs/hadoop-yicha-datanode-yicha-a-183.out
localhost: starting secondarynamenode, logging to /cygdrive/e/nutch/hadoop_source/hadoop-0.16.4/bin/../logs/hadoop-yicha-secondarynamenode-yicha-a-183.out
starting jobtracker, logging to /cygdrive/e/nutch/hadoop_source/hadoop-0.16.4/bin/../logs/hadoop-yicha-jobtracker-yicha-a-183.out
localhost: starting tasktracker, logging to /cygdrive/e/nutch/hadoop_source/hadoop-0.16.4/bin/../logs/hadoop-yicha-tasktracker-yicha-a-183.out
 
 
The daemon log output in ${HADOOP_HOME}/logs contains only two .log files; the five .out files are all 0 bytes:

hadoop-yicha-namenode-yicha-a-183.out              0 KB
hadoop-yicha-datanode-yicha-a-183.out              0 KB
hadoop-yicha-secondarynamenode-yicha-a-183.out     0 KB
hadoop-yicha-jobtracker-yicha-a-183.out            0 KB
hadoop-yicha-tasktracker-yicha-a-183.out           0 KB

hadoop-yicha-namenode-yicha-a-183.log              4 KB, contents below:
2008-08-01 17:28:06,890 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = yicha-a-183/192.168.1.139
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.16.4
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.16 -r 652614; compiled by 'hadoopqa' on Fri May  2 00:18:12 UTC 2008
************************************************************/
2008-08-01 17:28:07,125 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
2008-08-01 17:28:07,171 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: 127.0.0.1/127.0.0.1:9000
2008-08-01 17:28:07,203 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2008-08-01 17:28:07,234 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2008-08-01 17:28:07,671 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=yicha-a-183\yicha,None,root,Administrators,Users
2008-08-01 17:28:07,671 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup
2008-08-01 17:28:07,671 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true
2008-08-01 17:28:07,937 INFO org.apache.hadoop.fs.FSNamesystem: Finished loading FSImage in 656 msecs
2008-08-01 17:28:07,968 INFO org.apache.hadoop.fs.FSNamesystem: Leaving safemode after 687 msecs
2008-08-01 17:28:07,968 INFO org.apache.hadoop.dfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2008-08-01 17:28:07,968 INFO org.apache.hadoop.dfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2008-08-01 17:28:08,015 INFO org.apache.hadoop.fs.FSNamesystem: Registered FSNamesystemStatusMBean
2008-08-01 17:28:08,218 INFO org.mortbay.util.Credential: Checking Resource aliases
2008-08-01 17:28:08,359 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
2008-08-01 17:28:08,750 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@52c6b4
2008-08-01 17:28:08,812 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
2008-08-01 17:28:08,812 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
2008-08-01 17:28:08,812 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
2008-08-01 17:28:08,812 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50070
2008-08-01 17:28:08,812 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@1cd107f
2008-08-01 17:28:08,812 INFO org.apache.hadoop.fs.FSNamesystem: Web-server up at: 0.0.0.0:50070
2008-08-01 17:28:08,812 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2008-08-01 17:28:08,812 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
2008-08-01 17:28:08,843 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
2008-08-01 17:28:18,875 WARN org.apache.hadoop.dfs.StateChange: DIR* FSDirectory.unprotectedDelete: failed to remove /home/zxf/hadoop/tmp/mapred/system because it does not exist
 
hadoop-yicha-jobtracker-yicha-a-183.log            3 KB, contents below:
2008-08-01 17:28:17,484 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = yicha-a-183/192.168.1.139
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.16.4
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.16 -r 652614; compiled by 'hadoopqa' on Fri May  2 00:18:12 UTC 2008
************************************************************/
2008-08-01 17:28:17,656 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=9001
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting
2008-08-01 17:28:17,671 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting
2008-08-01 17:28:17,750 INFO org.mortbay.util.Credential: Checking Resource aliases
2008-08-01 17:28:17,796 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
2008-08-01 17:28:18,218 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@1815338
2008-08-01 17:28:18,328 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
2008-08-01 17:28:18,328 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
2008-08-01 17:28:18,328 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
2008-08-01 17:28:18,343 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50030
2008-08-01 17:28:18,343 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@13c6a22
2008-08-01 17:28:18,375 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2008-08-01 17:28:18,375 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
2008-08-01 17:28:18,375 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2008-08-01 17:28:19,140 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING

Browse the web interfaces for the NameNode and the JobTracker; by default they are available at:

NameNode - http://localhost:50070/

NameNode '127.0.0.1:9000'

Started:  Fri Aug 01 17:28:07 CST 2008
Version:  0.16.4, r652614
Compiled: Fri May 2 00:18:12 UTC 2008 by hadoopqa
Upgrades: There are no upgrades in progress.

Cluster Summary
7 files and directories, 0 blocks = 7 total. Heap Size is 4.22 MB / 992.31 MB (0%)

Capacity      : 0 KB
DFS Remaining : 0 KB
DFS Used      : 0 KB
DFS Used%     : 0 %
Live Nodes    : 0
Dead Nodes    : 0

There are no datanodes in the cluster

JobTracker - http://localhost:50030/

localhost Hadoop Map/Reduce Administration

State:      RUNNING
Started:    Fri Aug 01 17:28:18 CST 2008
Version:    0.16.4, r652614
Compiled:   Fri May 2 00:18:12 UTC 2008 by hadoopqa
Identifier: 200808011728

Cluster Summary
Maps  Reduces  Total Submissions  Nodes  Map Task Capacity  Reduce Task Capacity  Avg. Tasks/Node
0     0        0                  0      0                  0                     -

Running Jobs:   none
Completed Jobs: none
Failed Jobs:    none
 
 
 