hadoop-hdfs-user mailing list archives

From Rajiv Chittajallu <raj...@yahoo-inc.com>
Subject Re: HDFS Datanode Capacity
Date Sun, 01 Jan 2012 07:11:19 GMT
Once you updated the configuration, was the datanode restarted? Check whether the datanode log
indicates that it was able to set up the new volume.
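For reference, a minimal sketch of the relevant hdfs-site.xml entry on the datanode (the paths are
illustrative only; in releases of that era the property is dfs.data.dir, later renamed
dfs.datanode.data.dir):

  <!-- hdfs-site.xml on the datanode: one entry per mounted disk.
       Each directory contributes the capacity of the filesystem it
       lives on to the datanode's configured capacity. -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/disk1/hdfs/data,/data/disk2/hdfs/data</value>
  </property>

After editing, restart the datanode (for example with bin/hadoop-daemon.sh stop datanode followed by
bin/hadoop-daemon.sh start datanode) and watch its log for lines showing each directory being added as
a volume; hadoop dfsadmin -report should then reflect the new configured capacity.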

> From: Hamed Ghavamnia <ghavamnia.h@gmail.com>
>To: hdfs-user@hadoop.apache.org 
>Sent: Sunday, January 1, 2012 11:33 AM
>Subject: HDFS Datanode Capacity
>I've been searching for how to configure the maximum capacity of a datanode. I've added
>big volumes to one of my datanodes, but the configured capacity doesn't grow beyond the
>default 5GB. If I want a datanode with 100GB of capacity, I have to add 20 directories, each
>having 5GB, so the maximum capacity reaches 100GB. Is there anywhere this can be set? Can
>different datanodes have different capacities?
>Also, it seems like dfs.datanode.du.reserved doesn't work either, because I've set it to
>zero, but it still leaves 50% of the free space for non-DFS usage.
>P.S. This is my first message on the mailing list, so if I have to follow any rules for
>sending emails, I'll be thankful if you let me know. :)
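On the second question: dfs.datanode.du.reserved is specified in bytes per volume and only controls how
much of each data directory's filesystem the datanode leaves free for non-DFS use; the default is 0. A
minimal sketch of how it is typically set (the value here, roughly 10 GB, is illustrative only):

  <!-- hdfs-site.xml: reserve space, in bytes per volume, for non-DFS use -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>10737418240</value>
  </property>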
