From: "Lanati, Matteo"
To: user@hadoop.apache.org
Subject: Re:
Date: Thu, 6 Jun 2013 13:36:46 +0000

Hi all,

I finally solved the problem. It was due to the cloud middleware I used to run the Hadoop VMs.
The domain type in the libvirt XML file was incorrectly set to 'qemu'. Once I fixed this and changed it to 'kvm', everything started to work properly.
Thanks for the support.

Matteo

On Jun 4, 2013, at 5:43 PM, Alexander Alten-Lorenz wrote:

> Hi Matteo,
>
> Are you able to add more space to your test machines? Also, what does the pi example say (hadoop jar hadoop-examples pi 10 10)?
>
> - Alex
> On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" wrote:
>
>> Hi again,
>>
>> unfortunately my problem is not solved.
>> I downloaded Hadoop v. 1.1.2 and made a basic configuration as suggested in [1].
>> No security, no ACLs, default scheduler ... The files are attached.
>> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
>> How can I increase the debug level to have a deeper look?
>> Thanks,
>>
>> Matteo
>>
>> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
>>
>> On Jun 4, 2013, at 3:52 AM, Azuryy Yu wrote:
>>
>>> Hi Harsh,
>>>
>>> I need to take care of my eyes; I recently misread 1.2.0 as 1.0.2, so I said upgrade. Sorry.
>>>
>>> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J wrote:
>>> Azuryy,
>>>
>>> 1.1.2 < 1.2.0. It's not an upgrade you're suggesting there. If you feel
>>> there's been a regression, can you comment on the JIRA?
>>>
>>> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu wrote:
>>>> Yes, hadoop-1.1.2 was released on Jan. 31st. Just download it.
>>>>
>>>> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo wrote:
>>>>>
>>>>> Hi Azuryy,
>>>>>
>>>>> thanks for the update. Sorry for the silly question, but where can I
>>>>> download the patched version?
>>>>> If I look into the closest mirror (i.e.
>>>>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>>>>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>>>>> Thanks in advance,
>>>>>
>>>>> Matteo
>>>>>
>>>>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>>>>> any security, and the problem is there.
>>>>>
>>>>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu wrote:
>>>>>
>>>>>> Can you upgrade to 1.1.2? It is also a stable release, and it fixed the
>>>>>> bug you are facing now.
>>>>>>
>>>>>> --Sent from my Sony mobile.
>>>>>>
>>>>>> On Jun 2, 2013 3:23 AM, "Shahab Yunus" wrote:
>>>>>> Thanks Harsh for the reply. I was confused too about why security is
>>>>>> causing this.
>>>>>>
>>>>>> Regards,
>>>>>> Shahab
>>>>>>
>>>>>> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J wrote:
>>>>>> Shahab - I see he has mentioned generally that security is enabled
>>>>>> (but not that it happens iff security is enabled), and the issue here
>>>>>> doesn't have anything to do with security really.
>>>>>>
>>>>>> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
>>>>>> on the mapreduce-dev lists.
>>>>>>
>>>>>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus wrote:
>>>>>>> Hi Harsh,
>>>>>>>
>>>>>>> Quick question though: why do you think it only happens if the OP
>>>>>>> 'uses security', as he mentioned?
>>>>>>>
>>>>>>> Regards,
>>>>>>> Shahab
>>>>>>>
>>>>>>> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J wrote:
>>>>>>>>
>>>>>>>> Does smell like a bug, as that number you get is simply Long.MAX_VALUE,
>>>>>>>> or 8 exbibytes.
>>>>>>>>
>>>>>>>> Looking at the sources, this turns out to be a rather funny Java issue
>>>>>>>> (there's a divide by zero happening, and [1] suggests a Long.MAX_VALUE
>>>>>>>> return in such a case). I've logged a bug report for this at
>>>>>>>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>>>>>>>> reproducible case.
>>>>>>>>
>>>>>>>> Does this happen consistently for you?
>>>>>>>>
>>>>>>>> [1] http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
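The Java behaviour Harsh points to is easy to reproduce in isolation: dividing a double by zero yields Infinity rather than throwing an exception, and Math.round() clamps Infinity to Long.MAX_VALUE. A minimal sketch of just that arithmetic (plain Java, not Hadoop code; the 6950001 figure is the job input size from the log further down):

    public class RoundDemo {
        public static void main(String[] args) {
            double ratio = 6950001.0 / 0.0;        // double division by zero yields Infinity, no exception
            System.out.println(ratio);             // Infinity
            System.out.println(Math.round(ratio)); // 9223372036854775807, i.e. Long.MAX_VALUE
        }
    }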
>>>>>>>>
>>>>>>>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo wrote:
>>>>>>>>> Hi all,
>>>>>>>>>
>>>>>>>>> I stumbled upon this problem as well while trying to run the default
>>>>>>>>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
>>>>>>>>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
>>>>>>>>> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
>>>>>>>>> about 600 kB and the error is
>>>>>>>>>
>>>>>>>>> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
>>>>>>>>> expect map to take 9223372036854775807
>>>>>>>>>
>>>>>>>>> The logfile is attached, together with the configuration files. The
>>>>>>>>> version I'm using is
>>>>>>>>>
>>>>>>>>> Hadoop 1.2.0
>>>>>>>>> Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
>>>>>>>>> Compiled by hortonfo on Mon May 6 06:59:37 UTC 2013
>>>>>>>>> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>>>>>>>>> This command was run using /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>>>>>>>>>
>>>>>>>>> If I run the default configuration (i.e. no security), then the job succeeds.
>>>>>>>>>
>>>>>>>>> Is there something missing in how I set up my nodes? How is it possible
>>>>>>>>> that the envisaged value for the needed space is so big?
>>>>>>>>>
>>>>>>>>> Thanks in advance.
>>>>>>>>>
>>>>>>>>> Matteo
>>>>>>>>>
>>>>>>>>>> Which version of Hadoop are you using? A quick search shows me a bug,
>>>>>>>>>> https://issues.apache.org/jira/browse/HADOOP-5241, that seems to show
>>>>>>>>>> similar symptoms. However, that was fixed a long while ago.
>>>>>>>>>>
>>>>>>>>>> On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> This is the content of the jobtracker log file:
>>>>>>>>>>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>>>>>>>>>>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>>>>>>>>>>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000000 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000001 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000002 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000003 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000004 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000005 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000006 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>>>>>>>>>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
>>>>>>>>>>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>>>>>>>>>>> reduce tasks.
>>>>>>>>>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
>>>>>>>>>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>>>>>>>>>> task_201303231139_0001_m_000008, for tracker
>>>>>>>>>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>>>>>>>>>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task
>>>>>>>>>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>>>>>>>>>> task_201303231139_0001_m_000008 successfully.
>>>>>>>>>>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free; but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>>
>>>>>>>>>>> The value in "we expect map to take" is far too huge: 1317624576693539401 bytes!
>>>>>>>>>>>
>>>>>>>>>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> The estimated value that Hadoop computes is far too large for the simple
>>>>>>>>>>>> example that I am running.
>>>>>>>>>>>>
>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>> From: Redwane belmaati cherkaoui
>>>>>>>>>>>> Date: Sat, Mar 23, 2013 at 11:32 AM
>>>>>>>>>>>> Subject: Re: About running a simple wordcount mapreduce
>>>>>>>>>>>> To: Abdelrahman Shettia
>>>>>>>>>>>> Cc: user@hadoop.apache.org, reduno1985
>>>>>>>>>>>>
>>>>>>>>>>>> This is the output that I get. I am running two machines, as you can see.
>>>>>>>>>>>> Do you see anything suspicious?
>>>>>>>>>>>>
>>>>>>>>>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>>>>>>>>>> Present Capacity: 17615499264 (16.41 GB)
>>>>>>>>>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>>>>>>>>>> DFS Used: 57344 (56 KB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>>
>>>>>>>>>>>> -------------------------------------------------
>>>>>>>>>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>>>>>>>>>
>>>>>>>>>>>> Name: 11.1.0.6:50010
>>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>>>>>>>>>> DFS Remaining: 8807800832 (8.2 GB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> DFS Remaining%: 83.31%
>>>>>>>>>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>>>>>>>>>
>>>>>>>>>>>> Name: 11.1.0.3:50010
>>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>>>>>>>>>> DFS Remaining: 8807641088 (8.2 GB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> DFS Remaining%: 83.3%
>>>>>>>>>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Please run the following command as the hdfs user on any datanode. The
>>>>>>>>>>>>> output will be something like this. Hope this helps.
>>>>>>>>>>>>>
>>>>>>>>>>>>> hadoop dfsadmin -report
>>>>>>>>>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>>>>>>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>>>>>>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>>>>>>>>>> DFS Used: 480129024 (457.89 MB)
>>>>>>>>>>>>> DFS Used%: 0.68%
>>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> -Abdelrahman
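For reference, the same numbers can also be read programmatically rather than by shelling out to dfsadmin. A minimal sketch, assuming a client where FileSystem#getStatus() is available (it is in later Hadoop lines; on a 1.x cluster the dfsadmin command above is the dependable route):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;

    public class HdfsSpace {
        public static void main(String[] args) throws Exception {
            // Picks up fs.default.name / fs.defaultFS from the core-site.xml on the classpath.
            FileSystem fs = FileSystem.get(new Configuration());
            FsStatus status = fs.getStatus();
            System.out.println("Capacity:  " + status.getCapacity());  // bytes
            System.out.println("Used:      " + status.getUsed());      // bytes
            System.out.println("Remaining: " + status.getRemaining()); // bytes
        }
    }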
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I have my hosts running on OpenStack virtual machine instances; each
>>>>>>>>>>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
>>>>>>>>>>>>>> is in HDFS without the web UI?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Sent from Samsung Mobile
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Serge Blazhievsky wrote:
>>>>>>>>>>>>>> Check the web UI to see how much space you have on HDFS.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> It is possible that the hosts which are running tasks do not have
>>>>>>>>>>>>>> enough space. Those dirs are configured in mapred-site.xml.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>> From: Redwane belmaati cherkaoui
>>>>>>>>>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>>>>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>>>>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>> I am trying to run a wordcount mapreduce job on several files
>>>>>>>>>>>>>>> (< 20 MB) using two machines. I get stuck on 0% map, 0% reduce.
>>>>>>>>>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>>>>>>>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>>>>>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map
>>>>>>>>>>>>>>> to take 1317624576693539401
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Please help me.
>>>>>>>>>>>>>>> Best Regards,
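Schematically, warnings like the one above come from the JobTracker extrapolating each map's output size from the output-to-input byte ratio of the maps completed so far; if the only completed attempt (for example, the JOB_SETUP task in the log above) read little or no input, the ratio explodes. A rough, illustrative sketch of that failure mode, with hypothetical inputs; this is not the actual JobInProgress/ResourceEstimator source, and MAPREDUCE-5288 tracks the real fix:

    public class EstimatorSketch {
        // Schematic ratio-based estimate of a map task's output size.
        static long estimate(long inputSize, long doneOutputBytes, long doneInputBytes) {
            double blowup = (double) doneOutputBytes / doneInputBytes; // Infinity when doneInputBytes == 0
            return Math.round(inputSize * blowup);                     // Math.round(Infinity) == Long.MAX_VALUE
        }

        public static void main(String[] args) {
            long inputSize = 6950001L; // job input size from the log above

            // Healthy case: completed maps wrote roughly as much as they read.
            System.out.println(estimate(inputSize, 600000, 650000));   // about 6.4 MB, a sane figure

            // Degenerate case: the only finished attempt reported zero input bytes.
            System.out.println(estimate(inputSize, 1, 0));             // 9223372036854775807
        }
    }

A node is then skipped whenever its free disk space is below the estimate, which is why the job sits at 0% map while the log fills with "No room for map task" warnings.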

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724