From: Alexander Alten-Lorenz <wget.null@gmail.com>
Subject: Re:
Date: Tue, 4 Jun 2013 16:42:32 +0100
To: user@hadoop.apache.org

Hi Matteo,

Are you able to add more space to your test machines? Also, what does the pi example say (hadoop jar hadoop-examples pi 10 10)?

- Alex

On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" wrote:

> Hi again,
>
> unfortunately my problem is not solved.
> I downloaded Hadoop v. 1.1.2 and made a basic configuration as suggested in [1].
> No security, no ACLs, default scheduler ... The files are attached.
> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
> How can I increase the debug level to have a deeper look?
> Thanks,
>
> Matteo
>
> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
>
> On Jun 4, 2013, at 3:52 AM, Azuryy Yu wrote:
>
>> Hi Harsh,
>>
>> I need to take care of my eyes; I misread 1.2.0 as 1.0.2, so I said upgrade. Sorry.
>>
>> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J wrote:
>> Azuryy,
>>
>> 1.1.2 < 1.2.0.
>> It's not an upgrade you're suggesting there. If you feel
>> there's been a regression, can you comment on that on the JIRA?
>>
>> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu wrote:
>>> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>>>
>>> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo wrote:
>>>>
>>>> Hi Azuryy,
>>>>
>>>> thanks for the update. Sorry for the silly question, but where can I
>>>> download the patched version?
>>>> If I look into the closest mirror (i.e.
>>>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>>>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>>>> Thanks in advance,
>>>>
>>>> Matteo
>>>>
>>>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>>>> any security, and the problem is there.
>>>>
>>>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu wrote:
>>>>
>>>>> can you upgrade to 1.1.2, which is also a stable release, and fixes the
>>>>> bug you are facing now.
>>>>>
>>>>> --Sent from my Sony mobile.
>>>>>
>>>>> On Jun 2, 2013 3:23 AM, "Shahab Yunus" wrote:
>>>>> Thanks Harsh for the reply. I was confused too about why security is
>>>>> causing this.
>>>>>
>>>>> Regards,
>>>>> Shahab
>>>>>
>>>>> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J wrote:
>>>>> Shahab - I see he has mentioned generally that security is enabled
>>>>> (but not that it happens iff security is enabled), and the issue here
>>>>> doesn't have anything to do with security really.
>>>>>
>>>>> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
>>>>> on the mapreduce-dev lists.
>>>>>
>>>>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus wrote:
>>>>>> Hi Harsh,
>>>>>>
>>>>>> Quick question though: why do you think it only happens if the OP
>>>>>> 'uses security', as he mentioned?
>>>>>>
>>>>>> Regards,
>>>>>> Shahab
>>>>>>
>>>>>> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J wrote:
>>>>>>>
>>>>>>> Does smell like a bug, as that number you get is simply Long.MAX_VALUE,
>>>>>>> or 8 exbibytes.
>>>>>>>
>>>>>>> Looking at the sources, this turns out to be a rather funny Java issue
>>>>>>> (there's a divide by zero happening, and [1] suggests a Long.MAX_VALUE
>>>>>>> return in such a case). I've logged a bug report for this at
>>>>>>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>>>>>>> reproducible case.
>>>>>>>
>>>>>>> Does this happen consistently for you?
>>>>>>>
>>>>>>> [1] http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>>>>>>
>>>>>>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo wrote:
>>>>>>>> Hi all,
>>>>>>>>
>>>>>>>> I stumbled upon this problem as well while trying to run the default
>>>>>>>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
>>>>>>>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
>>>>>>>> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
>>>>>>>> about 600 kB and the error is
>>>>>>>>
>>>>>>>> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
>>>>>>>> expect map to take 9223372036854775807
>>>>>>>>
>>>>>>>> The logfile is attached, together with the configuration files.
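[Editor's note: Harsh's diagnosis above can be reproduced in isolation. The sketch below is illustrative only, not the actual Hadoop JobInProgress/ResourceEstimator code; the variable names and the 600 kB figure are hypothetical. The point is that a double division by zero in Java yields Infinity rather than throwing, and Math.round(double) clamps positive infinity to Long.MAX_VALUE, which is exactly the 9223372036854775807 bytes in the log.]

```java
public class RoundOverflow {
    public static void main(String[] args) {
        // Hypothetical space estimate: scale output by a ratio derived from
        // completed tasks. If nothing has completed, the denominator is 0.
        double completedOutputBytes = 600 * 1024; // illustrative only
        double completedInputBytes = 0;           // the divide-by-zero case

        // Floating-point division by zero produces Infinity, not an exception.
        double ratio = completedOutputBytes / completedInputBytes;

        // Math.round(double) returns Long.MAX_VALUE for +Infinity (see [1] above).
        long expected = Math.round(ratio);
        System.out.println(expected); // prints 9223372036854775807
    }
}
```

Note that Redwane's earlier figure of 1317624576693539401 is below the clamp value, so his estimate appears to come from the estimator producing a huge but finite number; only the 9223372036854775807 case is the rounding edge tracked in MAPREDUCE-5288.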
>>>>>>>> The version I'm using is
>>>>>>>>
>>>>>>>> Hadoop 1.2.0
>>>>>>>> Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
>>>>>>>> Compiled by hortonfo on Mon May 6 06:59:37 UTC 2013
>>>>>>>> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>>>>>>>> This command was run using /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>>>>>>>>
>>>>>>>> If I run the default configuration (i.e. no security), then the job succeeds.
>>>>>>>>
>>>>>>>> Is there something missing in how I set up my nodes? How is it possible
>>>>>>>> that the envisaged value for the needed space is so big?
>>>>>>>>
>>>>>>>> Thanks in advance.
>>>>>>>>
>>>>>>>> Matteo
>>>>>>>>
>>>>>>>>
>>>>>>>>> Which version of Hadoop are you using? A quick search shows me a bug,
>>>>>>>>> https://issues.apache.org/jira/browse/HADOOP-5241, that seems to show
>>>>>>>>> similar symptoms. However, that was fixed a long while ago.
>>>>>>>>>
>>>>>>>>> On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>
>>>>>>>>>> This is the content of the jobtracker log file:
>>>>>>>>>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>>>>>>>>>> size for job job_201303231139_0001 = 6950001.
>>>>>>>>>> Number of splits = 7
>>>>>>>>>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201303231139_0001_m_000000 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201303231139_0001_m_000001 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201303231139_0001_m_000002 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201303231139_0001_m_000003 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201303231139_0001_m_000004 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201303231139_0001_m_000005 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201303231139_0001_m_000006 has split on node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress: job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>>>>>>>>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201303231139_0001 initialized successfully with 7 map tasks and 1 reduce tasks.
>>>>>>>>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip task_201303231139_0001_m_000008, for tracker 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>>>>>>>>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201303231139_0001_m_000008_0' has completed task_201303231139_0001_m_000008 successfully.
>>>>>>>>>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>>>>>> Node hadoop1.novalocal has 8807518208 bytes free; but we expect map to take 1317624576693539401
>>>>>>>>>>
>>>>>>>>>> The value in "we expect map to take" is far too huge: 1317624576693539401 bytes!
>>>>>>>>>>
>>>>>>>>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> The estimated value that Hadoop computes is too huge for the simple
>>>>>>>>>>> example that I am running.
>>>>>>>>>>>
>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>> From: Redwane belmaati cherkaoui
>>>>>>>>>>> Date: Sat, Mar 23, 2013 at 11:32 AM
>>>>>>>>>>> Subject: Re: About running a simple wordcount mapreduce
>>>>>>>>>>> To: Abdelrahman Shettia
>>>>>>>>>>> Cc: user@hadoop.apache.org, reduno1985
>>>>>>>>>>>
>>>>>>>>>>> This is the output that I get. I am running two machines, as you can see.
>>>>>>>>>>> Do you see anything suspicious?
>>>>>>>>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>>>>>>>>> Present Capacity: 17615499264 (16.41 GB)
>>>>>>>>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>>>>>>>>> DFS Used: 57344 (56 KB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>
>>>>>>>>>>> -------------------------------------------------
>>>>>>>>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>>>>>>>>
>>>>>>>>>>> Name: 11.1.0.6:50010
>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>>>>>>>>> DFS Remaining: 8807800832 (8.2 GB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> DFS Remaining%: 83.31%
>>>>>>>>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>>>>>>>>
>>>>>>>>>>> Name: 11.1.0.3:50010
>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>>>>>>>>> DFS Remaining: 8807641088 (8.2 GB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> DFS Remaining%: 83.3%
>>>>>>>>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>>
>>>>>>>>>>>> Please run the following command as the hdfs user on any datanode. The
>>>>>>>>>>>> output will be something like this.
>>>>>>>>>>>> Hope this helps.
>>>>>>>>>>>>
>>>>>>>>>>>> hadoop dfsadmin -report
>>>>>>>>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>>>>>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>>>>>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>>>>>>>>> DFS Used: 480129024 (457.89 MB)
>>>>>>>>>>>> DFS Used%: 0.68%
>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>> -Abdelrahman
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I have my hosts running on OpenStack virtual machine instances; each
>>>>>>>>>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
>>>>>>>>>>>>> is in the HDFS without the web UI?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Sent from Samsung Mobile
>>>>>>>>>>>>>
>>>>>>>>>>>>> Serge Blazhievsky wrote:
>>>>>>>>>>>>> Check in the web UI how much space you have on HDFS.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>>>
>>>>>>>>>>>>> It is possible that the hosts which are running tasks do not have
>>>>>>>>>>>>> enough space.
Those dirs are confiugred in mapred-site.xml >>>>>>>>>>>>>=20 >>>>>>>>>>>>>=20 >>>>>>>>>>>>>=20 >>>>>>>>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati = cherkaoui < >>>>>>>>>>>>> reduno1985@googlemail.com> wrote: >>>>>>>>>>>>>=20 >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>> ---------- Forwarded message ---------- >>>>>>>>>>>>>> From: Redwane belmaati cherkaoui = >>>>>>>>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM >>>>>>>>>>>>>> Subject: About running a simple wordcount mapreduce >>>>>>>>>>>>>> To: mapreduce-issues@hadoop.apache.org >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>> Hi >>>>>>>>>>>>>> I am trying to run a wordcount mapreduce job on several >>>>>>>>>>>>>> files >>>>>>>>>>>>>> (<20 >>>>>>>>>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce. >>>>>>>>>>>>>> The jobtracker log file shows the following warning: >>>>>>>>>>>>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for = map >>>>>>>>>>>>>> task. >>>>>>>>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we >>>>>>>>>>>>>> expect >>>>>>>>>>>>>> map to >>>>>>>>> take >>>>>>>>>>>>>> 1317624576693539401 >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>> Please help me , >>>>>>>>>>>>>> Best Regards, >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>>=20 >>>>>>>>>>>>>=20 >>>>>>>>>>>>=20 >>>>>>>>>>>=20 >>>>>>>>>>>=20 >>>>>>>>>>=20 >>>>>>>>=20 >>>>>>>>=20 >>>>>>>> Matteo Lanati >>>>>>>> Distributed Resources Group >>>>>>>> Leibniz-Rechenzentrum (LRZ) >>>>>>>> Boltzmannstrasse 1 >>>>>>>> 85748 Garching b. M=FCnchen (Germany) >>>>>>>> Phone: +49 89 35831 8724 >>>>>>>=20 >>>>>>>=20 >>>>>>>=20 >>>>>>> -- >>>>>>> Harsh J >>>>>>=20 >>>>>>=20 >>>>>=20 >>>>>=20 >>>>>=20 >>>>> -- >>>>> Harsh J >>>>>=20 >>>>=20 >>>> Matteo Lanati >>>> Distributed Resources Group >>>> Leibniz-Rechenzentrum (LRZ) >>>> Boltzmannstrasse 1 >>>> 85748 Garching b. 
M=FCnchen (Germany) >>>> Phone: +49 89 35831 8724 >>>>=20 >>>=20 >>=20 >>=20 >>=20 >> -- >> Harsh J >>=20 >=20 > Matteo Lanati > Distributed Resources Group > Leibniz-Rechenzentrum (LRZ) > Boltzmannstrasse 1 > 85748 Garching b. M=FCnchen (Germany) > Phone: +49 89 35831 8724 >
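[Editor's note: on the question of checking HDFS space without the web UI, `hadoop dfsadmin -report` (shown earlier in the thread) prints everything needed, and a small awk filter can reduce it to the headline numbers. The sketch below replays the sample report text from this thread so it runs without a cluster; on a live 1.x cluster you would pipe the real command output instead.]

```shell
# Extract the headline capacity figures from a dfsadmin report.
# With a live cluster:  hadoop dfsadmin -report | awk -F': ' '...'
awk -F': ' '/^(Configured Capacity|Present Capacity|DFS Remaining|DFS Used%)/ {
    print $1 " -> " $2
}' <<'EOF'
Configured Capacity: 81075068925 (75.51 GB)
Present Capacity: 70375292928 (65.54 GB)
DFS Remaining: 69895163904 (65.09 GB)
DFS Used: 480129024 (457.89 MB)
DFS Used%: 0.68%
Under replicated blocks: 0
EOF
# Configured Capacity -> 81075068925 (75.51 GB)
# Present Capacity -> 70375292928 (65.54 GB)
# DFS Remaining -> 69895163904 (65.09 GB)
# DFS Used% -> 0.68%
```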