hadoop-hdfs-user mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Diskspace usage
Date Thu, 22 Nov 2012 20:11:00 GMT

Quick question on how Hadoop uses disk space.

Let's say I have 8 nodes: 7 of them with a 2 TB disk, and one with a 256 GB disk.

Will Hadoop use the 256 GB node until it's full, then continue with the
other nodes only while keeping the 256 GB node live? Or will it take the
256 GB node down when it fills up (as it does for failures) and continue
with the 7 remaining nodes?

To summarize: does Hadoop take drive size into account?
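(Not part of the original message, but for context.) One related, standard knob when a small DataNode risks filling up is `dfs.datanode.du.reserved` in `hdfs-site.xml`, which reserves space per volume for non-DFS use so the node keeps some headroom. A minimal sketch; the 50 GB figure is only an illustrative assumption:

```xml
<!-- hdfs-site.xml (per DataNode): reserve space on each volume for
     non-DFS use so the disk is never filled to the brim.
     The 50 GB value below is an illustrative assumption. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>53687091200</value> <!-- 50 GB, expressed in bytes -->
</property>
```

Per-node capacity and remaining space can be inspected with `hdfs dfsadmin -report`.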


