hadoop-hdfs-user mailing list archives

From Dora <dora0...@gmail.com>
Subject Re: How Hadoop decide the capacity of each node
Date Wed, 09 Jan 2013 14:07:35 GMT
Hi JM,

Thanks for your quick answer!

But I'm still wondering why I've only used 5.71 MB, yet the "Configured
Capacity" is 98.43 GB, as shown in the following figure.

BTW, what does "Non DFS Used" mean?

[image: embedded image 1]
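For reference, "Non DFS Used" is the space on the datanode's data directories consumed by anything other than HDFS block files (OS files, logs, other applications). The report derives it from the other three figures; a minimal sketch of that relationship, with illustrative numbers rather than values from the screenshot:

```python
def non_dfs_used(configured_capacity, dfs_used, dfs_remaining):
    """Space on the data disks used by non-HDFS files.

    The datanode report computes it as whatever part of the
    configured capacity is neither HDFS blocks nor free space.
    """
    return configured_capacity - dfs_used - dfs_remaining

# Illustrative numbers in GB (not taken from the figure):
# 98.43 GB configured, ~0.006 GB (5.71 MB) of blocks, 90 GB remaining
print(non_dfs_used(98.43, 0.006, 90.0))
```

So a large "Non DFS Used" usually just means the same disks also hold the OS and other data.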

Best Regards,

Dora

---------------------------------

Hi Dora,

Hadoop is not deciding. It "simply" pushes the same amount of data
to each node. If a node runs out of space, it's removed from the "write"
list and is used only for reads.

Hadoop only uses the space it needs. So if it uses only 50G, that's
because it doesn't need the extra 50G yet.
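As for the capacity figure itself: "Configured Capacity" is essentially the size of the filesystem(s) backing the datanode's data directories, minus any reserved space. If you want HDFS to leave room on each disk for the OS and other applications, you can set the reserved-space property in hdfs-site.xml (the value below is an illustrative assumption, not a recommendation):

```xml
<!-- hdfs-site.xml: reserve 10 GB per disk for non-HDFS use (illustrative value) -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```

With this set, the datanode subtracts the reserved bytes from each data disk when reporting its configured capacity.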

JM

2013/1/9, Dora <dora0009@gmail.com>:

Hi all,

Could you tell me how Hadoop decides the capacity of each datanode?
I've installed CDH3 on 2 VM machines, each VM has 100G of space,
and I found that Hadoop occupied 50G of the 100G. Why?
Thanks.

Best Regards,
Dora
