Subject: Fwd: About running a simple wordcount mapreduce
From: Redwane belmaati cherkaoui <reduno1985@googlemail.com>
To: user@hadoop.apache.org
Date: Sat, 23 Mar 2013 11:37:39 +0100

The estimated map-task size that Hadoop computes is far too large for the simple example I am running.

---------- Forwarded message ----------
From: Redwane belmaati cherkaoui
Date: Sat, Mar 23, 2013 at 11:32 AM
Subject: Re: About running a simple wordcount mapreduce
To: Abdelrahman Shettia
Cc: user@hadoop.apache.org, reduno1985

This is the output that I get. I am running two machines, as you can see. Do you see anything suspicious?
Configured Capacity: 21145698304 (19.69 GB)
Present Capacity: 17615499264 (16.41 GB)
DFS Remaining: 17615441920 (16.41 GB)
DFS Used: 57344 (56 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Name: 11.1.0.6:50010
Decommission Status : Normal
Configured Capacity: 10572849152 (9.85 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 1765019648 (1.64 GB)
DFS Remaining: 8807800832 (8.2 GB)
DFS Used%: 0%
DFS Remaining%: 83.31%
Last contact: Sat Mar 23 11:30:10 CET 2013

Name: 11.1.0.3:50010
Decommission Status : Normal
Configured Capacity: 10572849152 (9.85 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 1765179392 (1.64 GB)
DFS Remaining: 8807641088 (8.2 GB)
DFS Used%: 0%
DFS Remaining%: 83.3%
Last contact: Sat Mar 23 11:30:08 CET 2013

On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <ashettia@hortonworks.com> wrote:

> Hi Redwane,
>
> Please run the following command as the hdfs user on any datanode. The
> output will look something like this. Hope this helps.
>
> hadoop dfsadmin -report
> Configured Capacity: 81075068925 (75.51 GB)
> Present Capacity: 70375292928 (65.54 GB)
> DFS Remaining: 69895163904 (65.09 GB)
> DFS Used: 480129024 (457.89 MB)
> DFS Used%: 0.68%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> Thanks
> -Abdelrahman
>
> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 wrote:
>
>> My hosts are running on OpenStack virtual machine instances; each
>> instance has a 10 GB hard disk. Is there a way to see how much space is
>> left in HDFS without the web UI?
>>
>> Sent from Samsung Mobile
>>
>> Serge Blazhievsky <hadoop.ca@gmail.com> wrote:
>> Check in the web UI how much space you have on HDFS.
>>
>> Sent from my iPhone
>>
>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <ashettia@hortonworks.com> wrote:
>>
>> Hi Redwane,
>>
>> It is possible that the hosts which are running tasks do not have
>> enough space.
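As a minimal sketch of answering the "without the web UI" question: the raw byte counts can be pulled out of the `hadoop dfsadmin -report` text with awk. The awk parsing below is my own illustration, not part of the thread; the sample report text is copied from this message, and on a live cluster you would pipe the real command's output instead of the here-string.

```shell
# Sample lines copied from the report above; on a real cluster, replace
# this variable with: report=$(hadoop dfsadmin -report)
report='Configured Capacity: 21145698304 (19.69 GB)
Present Capacity: 17615499264 (16.41 GB)
DFS Remaining: 17615441920 (16.41 GB)
DFS Used: 57344 (56 KB)'

# Grab the raw byte count from the first "DFS Remaining:" line.
# Fields: $1="DFS" $2="Remaining:" $3=<bytes>
remaining=$(printf '%s\n' "$report" | awk '/^DFS Remaining:/ {print $3; exit}')

echo "DFS Remaining: $remaining bytes"
```

The same pattern works for `Configured Capacity`, `DFS Used`, and the per-datanode sections further down the report.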
>> Those directories are configured in mapred-site.xml.
>>
>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <reduno1985@googlemail.com> wrote:
>>
>>> ---------- Forwarded message ----------
>>> From: Redwane belmaati cherkaoui
>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>> Subject: About running a simple wordcount mapreduce
>>> To: mapreduce-issues@hadoop.apache.org
>>>
>>> Hi,
>>> I am trying to run a wordcount MapReduce job on several files (<20 MB)
>>> using two machines. It gets stuck at 0% map, 0% reduce.
>>> The jobtracker log file shows the following warning:
>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node
>>> hadoop0.novalocal has 8791384064 bytes free; but we expect map to take
>>> 1317624576693539401
>>>
>>> Please help me.
>>> Best Regards,
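For reference, the local task directories Abdelrahman mentions are set via the `mapred.local.dir` property of Hadoop 1.x; a minimal mapred-site.xml sketch follows. The paths are placeholders of my own choosing, not values from this thread.

```xml
<configuration>
  <!-- Where the TaskTracker writes intermediate map output. The
       "No room for map task" check compares a map's estimated size
       against free space reported for these directories (Hadoop 1.x
       behavior). The /data/... paths below are hypothetical. -->
  <property>
    <name>mapred.local.dir</name>
    <value>/data/1/mapred/local,/data/2/mapred/local</value>
  </property>
</configuration>
```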
