hadoop-common-user mailing list archives

From Otis Gospodnetic <otis_gospodne...@yahoo.com>
Subject dfs.block.size vs avg block size
Date Sat, 17 May 2008 00:42:27 GMT
Hello,

I checked the ML archives and the Wiki, as well as the HDFS user guide, but could not find
information about how to change the block size of an existing HDFS.
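For context, that setting lives in the Hadoop configuration; a sketch of the relevant property (the file name and description text are illustrative for Hadoop 0.x-era setups):

```xml
<!-- hadoop-site.xml (illustrative fragment) -->
<property>
  <name>dfs.block.size</name>
  <value>67108864</value>
  <description>Block size for new files, in bytes (64 MB here).</description>
</property>
```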

After running fsck I can see that my avg. block size is 12706144 B (about 12 MB), which is
a lot smaller than what I have configured: dfs.block.size=67108864 B (64 MB).
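For reference, the arithmetic behind these two figures, as a small Python sketch (the constants are the numbers quoted above; the helper name is illustrative):

```python
# Compare the configured HDFS block size with the average block size
# reported by fsck. Both byte counts are taken from the post.

CONFIGURED_BLOCK_SIZE = 67_108_864   # dfs.block.size from the config
AVG_BLOCK_SIZE = 12_706_144          # "avg. block size" from fsck output

def to_mib(n_bytes: int) -> float:
    """Convert a byte count to mebibytes."""
    return n_bytes / 2**20

print(f"configured: {to_mib(CONFIGURED_BLOCK_SIZE):.1f} MiB")  # 64.0 MiB
print(f"average:    {to_mib(AVG_BLOCK_SIZE):.1f} MiB")         # 12.1 MiB
print(f"ratio:      {AVG_BLOCK_SIZE / CONFIGURED_BLOCK_SIZE:.1%}")
```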

Does the difference between the configured block size and the actual (avg.) block size effectively
result in wasted space?
If so, is there a way to change the DFS block size and have Hadoop shrink all the existing
blocks?
I am OK with not running any jobs on the cluster for a day or two if I can do something to
free up the wasted disk space.


Thanks,
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch

