hadoop-hdfs-user mailing list archives

From Marcos Ortiz <mlor...@uci.cu>
Subject Re: Changing dfs.block.size
Date Mon, 06 Jun 2011 19:53:38 GMT
I think you should run several maintenance tasks after making these changes (example settings and commands below):
* Start the balancer tool to redistribute blocks by moving them from
over-utilized datanodes to under-utilized ones.
    Remember to adjust the dfs.balance.bandwidthPerSec property in the
hdfs-site.xml file so the balancer doesn't saturate your network.

* Run the fsck tool (every day if possible) to check the health of the
files in HDFS.
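
For reference, here is a minimal sketch of both steps. The bandwidth and
threshold values are only examples; tune them for your cluster.

In hdfs-site.xml (the cap is in bytes per second; the default is 1048576,
i.e. 1 MB/s):

    <property>
      <name>dfs.balance.bandwidthPerSec</name>
      <value>10485760</value>
    </property>

Then, from the command line:

    # redistribute blocks until every datanode is within 10% of
    # the cluster's average utilization
    hadoop balancer -threshold 10

    # check file health; -files -blocks also prints per-file block details
    hadoop fsck / -files -blocks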

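On the question below about existing files: as Jeff said, there is no
built-in way to change the block size of a file in place. Old files keep
the block size they were written with; only new files pick up the new
default. The usual approach is to re-copy the data with the new block size
set on the copy job. A rough sketch (the paths here are hypothetical, and
the block size must be a multiple of io.bytes.per.checksum):

    # rewrite a directory with 128 MB blocks, then swap it into place
    hadoop distcp -D dfs.block.size=134217728 /user/jr/data /user/jr/data.tmp
    hadoop fs -rmr /user/jr/data
    hadoop fs -mv /user/jr/data.tmp /user/jr/data
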
Regards

On 6/6/2011 3:29 PM, Jeff Bean wrote:
> Sorry, that's rep factor and not blocksize. I think you need to copy the files.
>
> Sent from my iPhone
>
> On Jun 6, 2011, at 12:09 PM, "J. Ryan Earl"<oss@jryanearl.us>  wrote:
>
>
>> Hello,
>>
>> So I have a question about changing dfs.block.size in $HADOOP_HOME/conf/hdfs-site.xml.
>> I understand that when files are created, the block size can be set to something other
>> than the default. What happens if you modify the block size of an existing HDFS site?
>> Do newly created files get the new default block size while old files remain the same?
>> Is there a way to change the block size of existing files? I'm assuming you could write
>> a MapReduce job to do it, but are there any built-in facilities?
>>
>> Thanks,
>> -JR
>>
>>
>>

-- 
Marcos Luís Ortíz Valmaseda
  Software Engineer (UCI)
  http://marcosluis2186.posterous.com
  http://twitter.com/marcosluis2186
   

