hadoop-common-user mailing list archives

From "Richard Yang" <richardy...@richardyang.net>
Subject [No Subject]
Date Sun, 04 Mar 2007 05:55:53 GMT
 

Hello everyone,

 

I have encountered the following error when trying to use randomwriter
(http://wiki.apache.org/lucene-hadoop/RandomWriter).

I also changed the configuration to

test.randomwriter.maps_per_host=5
test.randomwriter.bytes_per_map=1024*1024

In short, I am trying to generate only one megabyte of data.

After running

bin/hadoop jar hadoop-0.11.2-examples.jar randomwriter 030307kuku RW.conf,
here is part of the error message:

 

07/03/01 09:21:31 INFO mapred.JobClient:  map 100% reduce 100%
07/03/01 09:21:31 INFO mapred.JobClient: Task Id : task_0003_m_000000_3, Status : FAILED
07/03/01 09:21:31 INFO mapred.JobClient: 07/03/01 09:21:27 WARN mapred.TaskTracker: Error running child
org.apache.hadoop.ipc.RemoteException: java.io.IOException: failed to create file /user/root/030307kuku/part000000 on client localhost.localdomain because target-length is 0, below MIN_REPLICATION (1)
        at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:695)
        at org.apache.hadoop.dfs.NameNode.create(NameNode.java:248)
        at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:337)
        at org.apache.hadoop.ipc.Server

 

It seems that the namenode has a problem creating new files to store the results.
What troubles (or maybe just confuses) me is that I had just finished running the
sample grep programs without any errors.
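
For what it's worth, a quick way to check whether DFS itself can create files
outside of MapReduce would be something like the sketch below (the paths are only
examples, and the last command assumes this 0.11.2 build includes dfsadmin):

# List the DFS root and the job's output directory:
bin/hadoop dfs -ls /
bin/hadoop dfs -ls /user/root

# Try creating a file by hand; if this fails with the same
# MIN_REPLICATION error, the namenode most likely has no live
# datanodes registered with it:
bin/hadoop dfs -put /etc/hosts /user/root/put-test

# Summary of the datanodes the namenode currently knows about:
bin/hadoop dfsadmin -report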

Currently there are 3 datanodes, and one of them also serves as the namenode. I
have run 5 sample grep programs concurrently, and most of the time they all finish
fine. Sometimes, though, all 5 grep jobs stop making progress and I have to reboot
every node and start over again. Does anybody know how to debug or fix this kind of
problem? Thank you.

 

Best Regards

 

Richard Yang

richardyang@richardyang.net
kusanagiyang@gmail.com
