hadoop-common-user mailing list archives

From Nitin Pawar <nitinpawar...@gmail.com>
Subject Re: hdfs disk usage
Date Fri, 10 Apr 2015 14:10:35 GMT
I just restarted the cluster, and it seems that resolved the problem.

I will repost if this issue comes up again.

On Fri, Apr 10, 2015 at 7:35 PM, Nitin Pawar <nitinpawar432@gmail.com>
wrote:

> Thanks Peyman
>
> I think it is not related to replication.
>
> hdfs dfsadmin is reporting the following stats:
> Disk Usage (DFS Used): 108.4 GB / 782.9 GB (13.85%)
> Disk Usage (Non DFS Used): 583.9 GB / 782.9 GB (74.58%)
>
> Despite that Non DFS Used figure, at least 150 GB is actually free on each
> of the disks (when I do df -h).
>
> This has marked my cluster at 90% usage, and I want to understand why Non
> DFS Used is reported so high when it's not.
>
> On Fri, Apr 10, 2015 at 7:28 PM, Peyman Mohajerian <mohajeri@gmail.com>
> wrote:
>
>> Take the default 3x replication into account too.
>>
>> On Fri, Apr 10, 2015 at 6:50 AM, Nitin Pawar <nitinpawar432@gmail.com>
>> wrote:
>>
>>> Hi Guys,
>>>
>>> I have set up a 6-node cluster using hadoop 2.6, out of which 4 are data
>>> nodes.
>>>
>>> Each datanode's disk is 200 GB (so 800 GB of total storage).
>>>
>>> But when started, the configured DFS storage was only 200 GB.
>>>
>>> There are no extra mounted disks or additional directories configured
>>> for each mount.
>>>
>>> Can someone help me with how to use all of the available 800 GB from the
>>> 4 data nodes as HDFS?
>>>
>>> --
>>> Nitin Pawar
>>>
>>
>>
>
>
> --
> Nitin Pawar
>
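[Editor's note: the Non DFS Used figure quoted above is not measured directly; the NameNode derives it, roughly, as capacity minus DFS Used minus DFS Remaining (the exact accounting, e.g. reserved space, varies by Hadoop version). A quick sanity check of the reported numbers shows why the cluster was flagged near 90%:]

```python
# Figures reported by "hdfs dfsadmin" above, all in GB.
capacity = 782.9
dfs_used = 108.4
non_dfs_used = 583.9

# Non DFS Used is derived (capacity - DFS Used - DFS Remaining),
# so we can back out how much space HDFS still considers free.
dfs_remaining = capacity - dfs_used - non_dfs_used
print(f"DFS Remaining: {dfs_remaining:.1f} GB")  # -> 90.6 GB

# Fraction of capacity counted as consumed (DFS + non-DFS), which is
# what pushes the cluster toward the ~90% usage warning.
consumed = (dfs_used + non_dfs_used) / capacity
print(f"Consumed: {consumed:.1%}")  # -> 88.4%
```

Anything the datanode cannot account for as HDFS blocks or free space lands in the Non DFS Used bucket, which is why a stale view of the disks (cleared here by the restart) can inflate it.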



-- 
Nitin Pawar
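[Editor's note: on the original question of getting all four 200 GB disks into HDFS: per-datanode capacity is governed by dfs.datanode.data.dir in hdfs-site.xml, which takes a comma-separated list of directories, typically one per mounted disk. A sketch with a hypothetical mount path, not the poster's actual layout:]

```xml
<!-- hdfs-site.xml on each datanode; list one directory per
     physical disk, comma-separated (paths here are hypothetical). -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/mnt/disk1/hdfs/data</value>
</property>
<!-- Optionally reserve some space per volume for the OS and logs
     (value is in bytes; 10737418240 = 10 GB). -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```

If the configured capacity shows only 200 GB with four datanodes, it is also worth confirming via `hdfs dfsadmin -report` that all four datanodes actually registered as live; a single live node would report exactly one disk's worth of capacity.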
