hadoop-hdfs-user mailing list archives

From Allen Wittenauer <...@apache.org>
Subject Re: Changing dfs.block.size
Date Mon, 06 Jun 2011 22:05:18 GMT

On Jun 6, 2011, at 12:09 PM, J. Ryan Earl wrote:

> Hello,
> So I have a question about changing dfs.block.size in
> $HADOOP_HOME/conf/hdfs-site.xml.  I understand that when files are created,
> blocksizes can be modified from default.  What happens if you modify the
> blocksize of an existing HDFS site?  Do newly created files get the default
> blocksize and old files remain the same?
>
>  Is there a way to change the
> blocksize of existing files? I'm assuming you could write a MapReduce job to
> do it, but are there any built-in facilities?
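For reference, a sketch of where the cluster-wide default lives: the 0.20-era property name is dfs.block.size in hdfs-site.xml, and the value is in bytes. The default applies only when a file is created; existing files keep the block size they were written with.

```xml
<!-- hdfs-site.xml: default block size for newly created files, in bytes. -->
<!-- Existing files are unaffected; block size is recorded per file. -->
<property>
  <name>dfs.block.size</name>
  <value>134217728</value> <!-- 128 MB -->
</property>
```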

	You can use distcp to copy the files back onto the same fs in a new location.  The new files
should be written with the new block size.  Then you can move the new files to where the old
files used to live.
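A minimal sketch of that copy-and-swap, assuming a hypothetical source path /user/data; verify the copy before removing anything.

```shell
# Copy in place on the same fs; new files pick up the current default
# block size (or force one explicitly with -D dfs.block.size=...).
hadoop distcp /user/data /user/data_newblocks

# Optionally force a specific block size for the copy (bytes):
hadoop distcp -D dfs.block.size=134217728 /user/data /user/data_newblocks

# After verifying the copy, swap it into the old location.
hadoop fs -rmr /user/data        # '-rm -r' in newer releases
hadoop fs -mv /user/data_newblocks /user/data
```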