hbase-user mailing list archives

From Muhammad Mudassar <mudassa...@gmail.com>
Subject problem regarding hadoop
Date Wed, 13 Jan 2010 11:11:59 GMT
Hi, I am running Hadoop 0.20.1 on a single node and I am running into a problem.
My hdfs-site.xml configuration is:
<configuration>
<property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
  <description>A base for other temporary directories.</description>
</property>
</configuration>


and my core-site.xml configuration is:
<configuration>
 <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop</value>
  <description>A base for other temporary directories.</description>
</property>
</configuration>
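One thing I am unsure about in the value above: Hadoop expands `${...}` references to Java system properties in configuration values, so the stock default uses `${user.name}`, whereas my literal `$hadoop` is presumably not expanded and is taken as part of the directory name. The stock form, for comparison:

```xml
<!-- Default from core-default.xml in Hadoop 0.20; ${user.name} is
     expanded from the Java system property, unlike a bare $hadoop. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
```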


The problem is that the JobTracker log file says:

2010-01-13 16:00:33,015 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2010-01-13 16:00:33,043 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=JobTracker, port=54311
2010-01-13 16:00:38,309 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2010-01-13 16:00:38,407 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
Opening the listener on 50030
2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50030
webServer.getConnectors()[0].getLocalPort() returned 50030
2010-01-13 16:00:38,408 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50030
2010-01-13 16:00:38,408 INFO org.mortbay.log: jetty-6.1.14
2010-01-13 16:00:51,429 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50030
2010-01-13 16:00:51,430 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=JobTracker, sessionId=
2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
up at: 54311
2010-01-13 16:00:51,431 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
webserver: 50030
2010-01-13 16:00:51,574 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
up the system directory
2010-01-13 16:00:51,643 INFO
org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is
inactive
2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
jobtracker.info could only be replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:739)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy4.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy4.addBlock(Unknown Source)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Error
Recovery for block null bad datanode[0] nodes == null
2010-01-13 16:00:51,674 WARN org.apache.hadoop.hdfs.DFSClient: Could not get
block locations. Source file
"/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
jobtracker.info" - Aborting...
2010-01-13 16:00:51,674 WARN org.apache.hadoop.mapred.JobTracker: Writing to
file
hdfs://localhost:54310/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
jobtracker.info failed!
2010-01-13 16:00:51,674 WARN org.apache.hadoop.mapred.JobTracker: FileSystem
is not ready yet!
2010-01-13 16:00:51,679 WARN org.apache.hadoop.mapred.JobTracker: Failed to
initialize recovery manager.
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/home/hadoop/Desktop/hadoop-store/hadoop-$hadoop/mapred/system/
jobtracker.info could only be replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:739)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy4.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy4.addBlock(Unknown Source)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)



I checked with jps, and it shows that the processes are running:

15030 SecondaryNameNode
14904 DataNode
15129 JobTracker
15231 TaskTracker
14787 NameNode

but the log file has errors. Can anyone tell me what the problem is?
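In case it helps with diagnosis, this is roughly how I have been checking whether the DataNode is actually registered with the NameNode (commands from the Hadoop 0.20 distribution, run from $HADOOP_HOME; the test file path is just an example):

```shell
# Show live/dead datanodes and configured vs. remaining capacity;
# "could only be replicated to 0 nodes" usually means 0 live datanodes here.
bin/hadoop dfsadmin -report

# Sanity-check that HDFS itself is writable outside of MapReduce.
bin/hadoop fs -put conf/core-site.xml /tmp-test-file
bin/hadoop fs -ls /

# The DataNode log is the usual place for the real cause
# (e.g. a namespaceID mismatch after reformatting the NameNode).
tail -n 50 logs/hadoop-*-datanode-*.log
```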
