hadoop-hdfs-user mailing list archives

From Rajiv Chittajallu <raj...@yahoo-inc.com>
Subject Re: HDFS Datanode Capacity
Date Sun, 01 Jan 2012 11:29:28 GMT
dfsadmin -setSpaceQuota applies to paths within the HDFS filesystem. It doesn't apply to datanode volumes.
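For reference, here is a sketch of how space quotas work on HDFS paths (the path /user/hamed is a hypothetical example, and the exact size syntax may vary by Hadoop version):

```shell
# Space quotas constrain directories *inside* HDFS, not datanode disks.
# /user/hamed is a hypothetical example path.
hadoop dfsadmin -setSpaceQuota 10g /user/hamed   # cap raw space consumed under this path
hadoop fs -count -q /user/hamed                  # show quota and remaining quota
hadoop dfsadmin -clrSpaceQuota /user/hamed       # remove the quota again
```

Note that a space quota counts raw bytes, so with the default replication factor of 3, a 10g quota allows roughly 3.3 GB of file data.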



To add a volume, update dfs.data.dir (in hdfs-site.xml on the datanode) and restart the datanode.
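As a sketch, the property in hdfs-site.xml might look like the following (the mount paths are hypothetical examples; dfs.data.dir is the key used in this Hadoop 1.x era, and the value is a comma-separated list of directories):

```xml
<property>
  <name>dfs.data.dir</name>
  <!-- existing volume, plus the newly mounted disk -->
  <value>/data/disk1/dfs/data,/media/newhard/dfs/data</value>
</property>
```

After editing, restart the datanode, e.g. with bin/hadoop-daemon.sh stop datanode followed by bin/hadoop-daemon.sh start datanode on that node.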



Check the datanode log to see if the new volume was activated. You should see additional space
in
namenode:50070/dfsnodelist.jsp?whatNodes=LIVE
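A quick way to verify from the command line (the log path below is a hypothetical example; adjust it to your HADOOP_LOG_DIR and log file naming):

```shell
# Look for volume-related messages in the datanode log after the restart.
# The path is an assumed example; your log directory and file name may differ.
grep -i "volume" /var/log/hadoop/hadoop-hdfs-datanode-$(hostname).log

# Show configured capacity per live datanode, the same numbers
# that dfsnodelist.jsp displays.
hadoop dfsadmin -report
```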


>________________________________
> From: Hamed Ghavamnia <ghavamnia.h@gmail.com>
>To: hdfs-user@hadoop.apache.org; Rajiv Chittajallu <rajive@yahoo-inc.com> 
>Sent: Sunday, January 1, 2012 4:06 PM
>Subject: Re: HDFS Datanode Capacity
> 
>
>Thanks for the help.
>I checked the quotas; it seems they're used for setting the maximum size of directories
inside HDFS, not of the datanode itself. For example, if I set my dfs.data.dir to /media/newhard
(where I've mounted my new hard disk), I can't use dfsadmin -setSpaceQuota n /media/newhard
to set the size of that directory. I can change the sizes of the directories inside HDFS (tmp,
user, ...), but that has no effect on the capacity of the datanode.
>I can set my new mounted volume as the datanode directory and it runs without a problem,
but the capacity stays at the default 5 GB.
>
>
>On Sun, Jan 1, 2012 at 10:41 AM, Rajiv Chittajallu <rajive@yahoo-inc.com> wrote:
>
>Once you updated the configuration, was the datanode restarted? Check whether the datanode log
indicates that it was able to set up the new volume.
>>
>>
>>
>>>________________________________
>>> From: Hamed Ghavamnia <ghavamnia.h@gmail.com>
>>>To: hdfs-user@hadoop.apache.org
>>>Sent: Sunday, January 1, 2012 11:33 AM
>>>Subject: HDFS Datanode Capacity
>>
>>>
>>>
>>>Hi,
>>>I've been searching for how to configure the maximum capacity of a datanode. I've
added big volumes to one of my datanodes, but the configured capacity doesn't get bigger than
the default 5 GB. If I want a datanode with 100 GB of capacity, I have to add 20 directories,
each with 5 GB, so the maximum capacity reaches 100 GB. Is there anywhere this can be set? Can
different datanodes have different capacities?
>>>
>>>Also, it seems like dfs.datanode.du.reserved doesn't work either: I've
set it to zero, but it still leaves 50% of the free space for non-DFS usage.
>>>
>>>Thanks,
>>>Hamed
>>>
>>>P.S. This is my first message in the mailing list, so if I have to follow any
rules for sending emails, I'll be thankful if you let me know. :)
>>>
>>>
>>>
>>
>
>
>
