hadoop-common-user mailing list archives

From: Nitin Pawar <nitinpawar...@gmail.com>
Subject: Re: hdfs disk usage
Date: Fri, 10 Apr 2015 14:31:41 GMT
Hi Nataraj,

Thanks for the reply, but as I mentioned, each datanode has only a single data
directory configured, so I still need to find out why the non-DFS usage was
abruptly calculated at 90% and then went down after the restart.
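
In case it is useful, a rough sketch of how I am checking this on a datanode
(the mount path below is only a placeholder):

  # which data directories the datanode is configured with
  hdfs getconf -confKey dfs.datanode.data.dir
  # -> file:///data/hadoop/hdfs/data  (a single entry)

  # what the OS reports for the mount holding that directory
  df -h /data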

On Fri, Apr 10, 2015 at 7:54 PM, nataraj jonnalagadda <
nataraj.jonnalagadda@gmail.com> wrote:

> Nitin,
>
> You need to mount each disk at its own location, e.g. /data/disk01,
> /data/disk02, /data/disk03, and list each of these locations (comma
> separated) in the dfs.datanode.data.dir parameter of hdfs-site.xml, e.g.
> dfs.datanode.data.dir=/data/data01/hadoop/hdfs/data,
> /data/data02/hadoop/hdfs/data,/data/data03/hadoop/hdfs/data
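>
> A rough sketch of the corresponding hdfs-site.xml entry, using the same
> example paths (adjust them to your own layout):
>
>   <property>
>     <name>dfs.datanode.data.dir</name>
>     <value>/data/data01/hadoop/hdfs/data,/data/data02/hadoop/hdfs/data,/data/data03/hadoop/hdfs/data</value>
>   </property>
>
> Restarting the datanodes after the change is the simplest way to make them
> pick up the new directories.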
>
> Thanks,
> Nataraj.
>
>
>
> On Fri, Apr 10, 2015 at 7:10 AM, Nitin Pawar <nitinpawar432@gmail.com>
> wrote:
>
>> I just restarted the cluster, and it seems that resolved the problem.
>>
>> I will repost if this issue comes up again.
>>
>> On Fri, Apr 10, 2015 at 7:35 PM, Nitin Pawar <nitinpawar432@gmail.com>
>> wrote:
>>
>>> Thanks Peyman
>>>
>>> I think it is not related to replication.
>>>
>>> hdfs dfsadmin is reporting the following stats:
>>> Disk Usage (DFS Used):     108.4 GB / 782.9 GB (13.85%)
>>> Disk Usage (Non DFS Used): 583.9 GB / 782.9 GB (74.58%)
>>>
>>> Of the space counted as non-DFS used, at least 150 GB on each disk is
>>> actually free (when I do df -h).
>>>
>>> This has marked my cluster as 90% full, and I want to understand why the
>>> non-DFS usage is reported so high when it actually isn't.
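>>>
>>> As far as I understand, the report derives non-DFS used instead of
>>> measuring it, roughly as:
>>>
>>>   Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
>>>
>>> Plugging in the numbers above, DFS Remaining would have to be about
>>> 782.9 - 108.4 - 583.9 = 90.6 GB, which is far less than the free space
>>> that df -h shows on the disks.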
>>>
>>> On Fri, Apr 10, 2015 at 7:28 PM, Peyman Mohajerian <mohajeri@gmail.com>
>>> wrote:
>>>
>>>> Take the default 3x replication into account too.
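>>>>
>>>> For example, with 3x replication, 36 GB of actual data occupies roughly
>>>> 3 x 36 = 108 GB of raw DFS capacity across the cluster.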
>>>>
>>>> On Fri, Apr 10, 2015 at 6:50 AM, Nitin Pawar <nitinpawar432@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Guys,
>>>>>
>>>>> I have set up a 6-node cluster using Hadoop 2.6, of which 4 nodes are
>>>>> data nodes.
>>>>>
>>>>> Each datanode disk is 200 GB (so the total storage size is 800 GB).
>>>>>
>>>>> But when the cluster started, the configured DFS storage was only 200 GB.
>>>>>
>>>>> There are no extra mounted disks, and no additional directories are
>>>>> configured for each mount.
>>>>>
>>>>> Can someone help me figure out how to use all of the available 800 GB
>>>>> from the 4 data nodes as HDFS storage?
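>>>>>
>>>>> For context, hdfs dfsadmin -report prints a per-node breakdown along
>>>>> these lines (values elided in this sketch):
>>>>>
>>>>>   $ hdfs dfsadmin -report
>>>>>   Configured Capacity: ...
>>>>>   ...
>>>>>   Live datanodes (4):
>>>>>   Name: <datanode-ip>:50010
>>>>>   Configured Capacity: ...
>>>>>   DFS Used: ...
>>>>>   Non DFS Used: ...
>>>>>   DFS Remaining: ...
>>>>>
>>>>> Each node's Configured Capacity reflects the partition that
>>>>> dfs.datanode.data.dir sits on, and right now the four nodes together
>>>>> only add up to 200 GB.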
>>>>>
>>>>> --
>>>>> Nitin Pawar
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Nitin Pawar
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>


-- 
Nitin Pawar
