hadoop-common-user mailing list archives

From 姚吉龙 <geelong...@gmail.com>
Subject Fwd: Need your help with Hadoop
Date Wed, 20 Mar 2013 01:56:57 GMT
---------- Forwarded message ----------
From: 姚吉龙 <geelongyao@gmail.com>
Date: 2013/3/19
Subject: Re: Need your help with Hadoop
To: Harsh J <harsh@cloudera.com>


Thanks for your reply.
I am wondering which parameter defines the capacity of a datanode, or how
that capacity is calculated. I had considered your answer before, but I do
not know how to modify the settings.
Besides, my understanding is that the capacity is related to the disk
volume, which would mean it is determined by the disk mounted under the
Hadoop user's temp directory on the file system. However, I can't find
detailed instructions about this (a config sketch follows below).
Why is the capacity of the other nodes about 50G?
This has been bothering me a lot.
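
For reference, the per-node storage roots are configured by dfs.data.dir in
hdfs-site.xml (renamed dfs.datanode.data.dir in Hadoop 2.x); a datanode's
Configured Capacity is the sum of the space on the disks backing those
directories. A minimal sketch, with illustrative paths only:

    <!-- hdfs-site.xml: directories where the datanode stores blocks.
         The paths below are examples; adjust to your own mount points. -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
    </property>

If the property is left unset, it defaults to a directory under
hadoop.tmp.dir, which is why capacity can end up tied to whatever disk
holds the Hadoop user's temp directory.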

BRs
Geelong

2013/3/19 Harsh J <harsh@cloudera.com>

> You'd probably want to recheck your configuration of dfs.data.dir on
> node16 (perhaps it's overriding the usual default), to see if it is
> including more dirs than normal (and they may all be on the same disks as
> well; the DN counts space via du/df on each directory, so the number can
> grow that way; see the df/du sketch after this message).
>
> Also, please direct usage questions to the user@hadoop.apache.org list,
> which I've included in my response :)
>
>
> On Tue, Mar 19, 2013 at 5:40 PM, 姚吉龙 <geelongyao@gmail.com> wrote:
>
>> Hi
>>
>> I am new to the Hadoop platform, and I really need your help.
>> We now have 32 datanodes available, but we find that the Configured
>> Capacity differs among these datanodes even though the hardware is the same.
>> I wonder why node16's capacity is so much bigger than the others', and
>> also which factor or directory determines the capacity of each datanode.
>>
>>
>> I will appreciate your kind help; this problem has puzzled me for a
>> long time.
>>
>> BRs
>> Geelong
>>
>> --
>> From Good To Great
>>
>
>
>
> --
> Harsh J
>
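
As a quick check on what Harsh describes, you can look at the space the DN
would count for each configured directory. A minimal sketch, assuming the
illustrative paths from the config example above:

    # Free/total space on the filesystems backing each data dir (df view).
    df -h /data/1/dfs/dn /data/2/dfs/dn

    # Space already consumed by block data in each dir (du view).
    du -sh /data/1/dfs/dn /data/2/dfs/dn

If two configured directories sit on the same disk, that disk's space is
effectively counted twice, which can make one node's Configured Capacity
look much larger than its peers'.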



-- 
From Good To Great



