hadoop-mapreduce-user mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Re: How Hadoop decide the capacity of each node
Date Wed, 09 Jan 2013 13:45:36 GMT
Hi Dora,

Hadoop is not deciding. It's "simply" pushing roughly the same amount of data
to each node. If a node runs out of space, it's removed from the "write"
list and is used only for reads.

Hadoop only uses the space it needs. So if it uses only 50G, it's
because it doesn't need the extra 50G yet.
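(For reference: you can see the capacity, DFS-used, and non-DFS-used figures
per datanode with `hadoop dfsadmin -report` on CDH3-era releases. If you want
HDFS to leave headroom on each volume, `dfs.datanode.du.reserved` in
hdfs-site.xml controls the space reserved for non-DFS use. The snippet below
is only a sketch; the 10 GB value is an illustrative example, not a
recommendation.)

```xml
<!-- hdfs-site.xml: reserve space per volume for non-DFS use.
     The value is in bytes; 10 GB here is an example figure. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```

Restart the datanodes after changing this so the new reservation takes effect.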

JM

2013/1/9, Dora <dora0009@gmail.com>:
> Hi all,
>
> Could you tell me how Hadoop decides the capacity of each datanode?
> I've installed CDH3 on 2 VM machines, each VM with 100G of space,
> and I found that Hadoop occupied 50G/100G. Why?
> Thanks.
>
> Best Regards,
> Dora
>
