hadoop-mapreduce-user mailing list archives

From Redwane belmaati cherkaoui <reduno1...@googlemail.com>
Subject Fwd: About running a simple wordcount mapreduce
Date Sat, 23 Mar 2013 10:37:39 GMT
The estimated value that Hadoop computes is far too large for the simple
example that I am running.
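For scale, a quick arithmetic check (a sketch; the two byte counts are taken directly from the JobInProgress warning quoted later in this thread) shows just how implausible the estimate is:

```python
# Numbers from the JobInProgress warning quoted below:
# "Node hadoop0.novalocal has 8791384064 bytes free; but we expect map
#  to take 1317624576693539401"
estimated = 1317624576693539401   # bytes the scheduler expects one map to need
free = 8791384064                 # bytes actually free on the node

print(f"estimate: {estimated / 2**60:.2f} EiB")           # roughly 1.14 EiB
print(f"free:     {free / 2**30:.2f} GiB")                # roughly 8.19 GiB
print(f"estimate is about {estimated // free:,}x the free space")
```

An estimate over a hundred million times the node's free space means no node can ever satisfy it, which is consistent with the job sitting at 0% map, 0% reduce.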

---------- Forwarded message ----------
From: Redwane belmaati cherkaoui <reduno1985@googlemail.com>
Date: Sat, Mar 23, 2013 at 11:32 AM
Subject: Re: About running a simple wordcount mapreduce
To: Abdelrahman Shettia <ashettia@hortonworks.com>
Cc: user@hadoop.apache.org, reduno1985 <reduno1985@gmail.com>


This is the output that I get. I am running two machines, as you can see. Do
you see anything suspicious?
Configured Capacity: 21145698304 (19.69 GB)
Present Capacity: 17615499264 (16.41 GB)
DFS Remaining: 17615441920 (16.41 GB)
DFS Used: 57344 (56 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Name: 11.1.0.6:50010
Decommission Status : Normal
Configured Capacity: 10572849152 (9.85 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 1765019648 (1.64 GB)
DFS Remaining: 8807800832(8.2 GB)
DFS Used%: 0%
DFS Remaining%: 83.31%
Last contact: Sat Mar 23 11:30:10 CET 2013


Name: 11.1.0.3:50010
Decommission Status : Normal
Configured Capacity: 10572849152 (9.85 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 1765179392 (1.64 GB)
DFS Remaining: 8807641088(8.2 GB)
DFS Used%: 0%
DFS Remaining%: 83.3%
Last contact: Sat Mar 23 11:30:08 CET 2013
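The per-node numbers in the report above can be pulled out with a small throwaway parser (a sketch only, not part of any Hadoop API; the sample text is an abridged copy of the report lines pasted above):

```python
import re

# Abridged lines copied from the dfsadmin -report output above.
report = """\
Name: 11.1.0.6:50010
DFS Remaining: 8807800832(8.2 GB)
Name: 11.1.0.3:50010
DFS Remaining: 8807641088(8.2 GB)
"""

remaining = {}
node = None
for line in report.splitlines():
    if line.startswith("Name:"):
        node = line.split()[1]           # e.g. "11.1.0.6:50010"
    m = re.match(r"DFS Remaining: (\d+)", line)
    if m and node:
        remaining[node] = int(m.group(1))

for node, free in remaining.items():
    print(f"{node}: {free / 2**30:.2f} GiB free")
```

Both datanodes report roughly 8.2 GiB of DFS space free, so HDFS capacity itself does not look like the problem here.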


On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
ashettia@hortonworks.com> wrote:

> Hi Redwane,
>
> Please run the following command as the hdfs user on any datanode. The output
> will look something like this. Hope this helps:
>
> hadoop dfsadmin -report
> Configured Capacity: 81075068925 (75.51 GB)
> Present Capacity: 70375292928 (65.54 GB)
> DFS Remaining: 69895163904 (65.09 GB)
> DFS Used: 480129024 (457.89 MB)
> DFS Used%: 0.68%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> Thanks
> -Abdelrahman
>
>
> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <reduno1985@googlemail.com> wrote:
>
>>
>> I have my hosts running on OpenStack virtual machine instances; each
>> instance has a 10 GB hard disk. Is there a way to see how much space is in
>> HDFS without the web UI?
>>
>>
>> Sent from Samsung Mobile
>>
>> Serge Blazhievsky <hadoop.ca@gmail.com> wrote:
>> Check the web UI to see how much space you have on HDFS.
>>
>> Sent from my iPhone
>>
>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> ashettia@hortonworks.com> wrote:
>>
>> Hi Redwane,
>>
>> It is possible that the hosts which are running tasks do not have
>> enough space. Those dirs are configured in mapred-site.xml.
>>
>>
>>
>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> reduno1985@googlemail.com> wrote:
>>
>>>
>>>
>>> ---------- Forwarded message ----------
>>> From: Redwane belmaati cherkaoui <reduno1985@googlemail.com>
>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>> Subject: About running a simple wordcount mapreduce
>>> To: mapreduce-issues@hadoop.apache.org
>>>
>>>
>>> Hi
>>> I am trying to run a wordcount mapreduce job on several files (<20 MB)
>>> using two machines. I get stuck at 0% map, 0% reduce.
>>> The jobtracker log file shows the following warning:
>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node
>>> hadoop0.novalocal has 8791384064 bytes free; but we expect map to take
>>> 1317624576693539401
>>>
>>> Please help me.
>>> Best Regards,
>>>
>>>
>>
>
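Abdelrahman's note above says the task dirs are configured in mapred-site.xml; on Hadoop 1.x the intermediate map-output directories are set via the `mapred.local.dir` property. A minimal sketch, where the paths are placeholders to be replaced with local volumes that have enough free space:

```xml
<configuration>
  <property>
    <name>mapred.local.dir</name>
    <!-- Placeholder paths: comma-separated list of local directories
         used for intermediate map output; each should sit on a volume
         with sufficient free space. -->
    <value>/data1/mapred/local,/data2/mapred/local</value>
  </property>
</configuration>
```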
