hadoop-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Can not generate a result
Date Mon, 13 Aug 2012 16:25:03 GMT
Astie,

Since you've overridden these, do:

$ rm -rf /home/astie/hdfs/data

And then re-run your start-all command. After this works, please never
re-issue a "namenode -format" unless you really want to wipe
everything away and start over.
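
To verify the DataNode came back up afterwards (a minimal sketch, assuming
$HADOOP_HOME/bin is on your PATH):

$ jps                      # a "DataNode" process should be listed
$ hadoop dfsadmin -report  # "Datanodes available" should now read 1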

On Mon, Aug 13, 2012 at 9:48 PM, Astie Darmayantie
<astie.darmayantie@yahoo.com> wrote:
> Someone else helped me solve this problem (since I also sent it to a
> forum). He suggested that I add a little more configuration to my config
> files. He told me to add this (the bold lines):
> hdfs-site.xml :
>
>   <property>
>     <name>dfs.name.dir</name>
>     <value>/home/astie/hdfs/name</value>
>   </property>
>   <property>
>     <name>dfs.data.dir</name>
>     <value>/home/astie/hdfs/data</value>
>   </property>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>
> core-site.xml :
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:9000</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/astie/hdfs/temp</value>
>   </property>
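>
> (A small aside: if these directories don't exist yet, here is a minimal
> sketch of creating them up front with the paths above. Hadoop normally
> creates them on format/startup, so this is just a precaution:)
>
> $ mkdir -p /home/astie/hdfs/name /home/astie/hdfs/data /home/astie/hdfs/temp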
>
> In this case I have already overridden hadoop.tmp.dir, dfs.name.dir, and
> dfs.data.dir. Is it still possible to use this command:
> $ rm -rf /tmp/hadoop-$(whoami)/dfs/data ?
>
> Thank you
>
> ________________________________
> From: Harsh J <harsh@cloudera.com>
> To: Astie Darmayantie <astie.darmayantie@yahoo.com>; user@hadoop.apache.org
> Sent: Monday, August 13, 2012 10:39 PM
>
> Subject: Re: Can not generate a result
>
> Hi Astie,
>
>> Live Nodes:0
>
> The fact that Live Nodes = 0 is the real issue here.
>
> If you're running off of default configs (i.e. you haven't overridden
> hadoop.tmp.dir, dfs.name.dir, or dfs.data.dir), do this:
>
> $ rm -rf /tmp/hadoop-$(whoami)/dfs/data
>
> And then:
>
> $ $HADOOP_HOME/bin/start-all.sh
>
> And you should see:
>
>> Live Nodes : 1
>
> And then your HDFS should work alright.
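>
> As a quick smoke test (just a sketch; the HDFS path here is only an
> example), you could then try:
>
> $ hadoop fs -put /etc/hosts /tmp/smoke-test.txt
> $ hadoop fs -cat /tmp/smoke-test.txt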
>
> W.r.t. tutorials, you may also follow
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> On Mon, Aug 13, 2012 at 8:36 PM, Astie Darmayantie
> <astie.darmayantie@yahoo.com> wrote:
>>
>>
>> Dear Mr. Harsh,
>>
>> First of all, thank you for your reply. I already checked the link that
>> you gave me. There's a point that says "Your DataNode instances have run
>> out of space". I checked my NameNode web UI and got this:
>>
>> Cluster Summary
>>
>> 4 files and directories, 0 blocks = 4 total. Heap Size is 58.88 MB / 888.94 MB (6%)
>> Configured Capacity : 0 KB
>> DFS Used : 0 KB
>> Non DFS Used : 0 KB
>> DFS Remaining : 0 KB
>> DFS Used% : 100 %
>> DFS Remaining% : 0 %
>> Live Nodes : 0
>> Dead Nodes : 0
>> Decommissioning Nodes : 0
>> Number of Under-Replicated Blocks : 0
>>
>> Is there any missing configuration? The Configured Capacity is 0; is
>> that normal? I am really confused at this point, since everywhere I
>> search on the web, the tutorials are just as simple as this and you can
>> generate the result.
>>
>> And anyway, how can I wipe the DataNode block directory? All the
>> tutorials just say to format the NameNode.
>> Thank you
>>
>> ________________________________
>> From: Harsh J <harsh@cloudera.com>
>> To: user@hadoop.apache.org; Astie Darmayantie
>> <astie.darmayantie@yahoo.com>
>> Sent: Monday, August 13, 2012 9:50 PM
>> Subject: Re: Can not generate a result
>>
>> Hi Astie,
>>
>> You can look at http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo
>> for a solution to this one. For first-timers, this frequently happens
>> when you format the NameNode but forget to wipe the DataNode block
>> directories at the same time.
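>>
>> (One way to confirm that this is what happened, as a sketch assuming the
>> default storage locations under /tmp: the namespaceID recorded on the
>> NameNode and DataNode sides must match for the DataNode to register.)
>>
>> $ grep namespaceID /tmp/hadoop-$(whoami)/dfs/name/current/VERSION
>> $ grep namespaceID /tmp/hadoop-$(whoami)/dfs/data/current/VERSION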
>>
>> On Mon, Aug 13, 2012 at 10:06 AM, Astie Darmayantie
>> <astie.darmayantie@yahoo.com> wrote:
>> > Hi, I am new to Hadoop.
>> > I have already done the preparatory steps, like configuring Hadoop for
>> > pseudo-distributed operation, namenode -format, etc., before
>> > running start-all.sh.
>> >
>> > And I tried to execute the sample WordCount program using:
>> > ./bin/hadoop jar /home/astie/thesis/project_eclipse/WordCount.jar \
>> >     WordCount \
>> >     /home/astie/thesis/project_eclipse/input/ \
>> >     /home/astie/thesis/project_eclipse/output/
>> >
>> > It doesn't generate a result, and I got this in the log file:
>> >
>> > 2012-08-13 11:28:27,053 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
>> > 2012-08-13 11:28:27,053 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/mapred/system/jobtracker.info" - Aborting...
>> > 2012-08-13 11:28:27,053 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://localhost:9000/tmp/mapred/system/jobtracker.info failed!
>> > 2012-08-13 11:28:27,054 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
>> > 2012-08-13 11:28:27,059 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
>> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
>> >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>> >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>> >        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >        at java.lang.reflect.Method.invoke(Method.java:616)
>> >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>> >        at java.security.AccessController.doPrivileged(Native Method)
>> >        at javax.security.auth.Subject.doAs(Subject.java:416)
>> >        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>> >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>> >
>> >        at org.apache.hadoop.ipc.Client.call(Client.java:1070)
>> >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>> >        at $Proxy5.addBlock(Unknown Source)
>> >        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >        at java.lang.reflect.Method.invoke(Method.java:616)
>> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>> >        at $Proxy5.addBlock(Unknown Source)
>> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
>> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
>> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
>> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
>> >
>> > I am using openSUSE and hadoop-1.0.3, and I used Eclipse to write the
>> > program.
>> > The log says that the node was null. Yes, I am still running it on my
>> > computer only. Is that the problem?
>> > Can you tell me how to fix this? Thank you
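>> >
>> > (A side note on the jar command above: since fs.default.name points at
>> > hdfs://localhost:9000, the input and output paths are resolved against
>> > HDFS, not the local filesystem. A minimal sketch of staging the input
>> > into HDFS first, reusing the same paths, once the cluster is healthy:)
>> >
>> > $ hadoop fs -mkdir /home/astie/thesis/project_eclipse
>> > $ hadoop fs -put /home/astie/thesis/project_eclipse/input /home/astie/thesis/project_eclipse/input
>> > $ hadoop fs -cat /home/astie/thesis/project_eclipse/output/part-*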
>>
>>
>>
>> --
>> Harsh J
>>
>>
>
>
>
> --
> Harsh J
>
>



-- 
Harsh J
