hadoop-hdfs-user mailing list archives

From reduno1985 <reduno1...@googlemail.com>
Subject Re: About running a simple wordcount mapreduce
Date Fri, 22 Mar 2013 19:11:24 GMT
Thanks.
Each host has 8 GB, but Hadoop is estimating far too much space; the estimated number is too big
for any host in the world ;). My input data are simple text files that do not exceed 20
MB. I do not know why Hadoop is estimating that much.



Sent from Samsung Mobile

Abdelrahman Shettia <ashettia@hortonworks.com> wrote:

Hi Redwane,

It is possible that the hosts which are running tasks do not have enough space. Those
dirs are configured in mapred-site.xml.
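
For reference, the directories in question are the tasks' local scratch dirs, which in Hadoop 1.x are set via the `mapred.local.dir` property in mapred-site.xml. A minimal sketch (the paths below are placeholders, not from the original thread):

```xml
<!-- mapred-site.xml: local scratch space for map/reduce intermediate data.
     Point these at directories on disks that actually have free space;
     multiple comma-separated paths spread the load across disks. -->
<property>
  <name>mapred.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local</value>
</property>
```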



On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <reduno1985@googlemail.com>
wrote:


---------- Forwarded message ----------
From: Redwane belmaati cherkaoui <reduno1985@googlemail.com>
Date: Fri, Mar 22, 2013 at 4:39 PM
Subject: About running a simple wordcount mapreduce
To: mapreduce-issues@hadoop.apache.org


Hi 
I am trying to run a wordcount MapReduce job on several files (<20 MB) using two machines.
I get stuck on 0% map, 0% reduce.
The jobtracker log file shows the following warning:
 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node hadoop0.novalocal
has 8791384064 bytes free; but we expect map to take 1317624576693539401
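
To put the two byte counts in that warning in perspective, a quick sketch (the values are copied from the log line above; the interpretation is mine):

```python
# Sizes reported in the JobTracker warning.
free_bytes = 8_791_384_064                   # space free on hadoop0.novalocal
expected_bytes = 1_317_624_576_693_539_401   # space the JobTracker expects the map to need

# The free space is ~8 GiB, but the estimate is on the exabyte scale --
# hence "No room for map task" on any real machine.
print(f"free:     {free_bytes / 2**30:.1f} GiB")
print(f"expected: {expected_bytes / 2**60:.1f} EiB")
print(f"ratio:    {expected_bytes / free_bytes:.1e}x")
```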

Please help me.
Best Regards,

