hadoop-hdfs-user mailing list archives

From Ayon Sinha <ayonsi...@yahoo.com>
Subject Re: changing the block size
Date Thu, 03 Feb 2011 16:45:59 GMT
Restart DFS. I believe restarting the namenode alone should be sufficient, but
others can confirm.
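For reference, a sketch of the configuration change itself (assuming a 0.20.x-era setup, where the property is named dfs.block.size; in later releases it was renamed dfs.blocksize). The block size is a client-side setting read at file-creation time, so it belongs in conf/hdfs-site.xml on whichever machines write files; it affects only files written after the change, and existing files keep their original block size:

```xml
<!-- conf/hdfs-site.xml on the client machines that write files -->
<property>
  <name>dfs.block.size</name>
  <!-- 256 MB, specified in bytes; applies only to newly written files -->
  <value>268435456</value>
</property>
```

It should also be possible to override the block size for a single write without touching the config files, e.g. with something like `hadoop fs -D dfs.block.size=268435456 -put localfile /dst` (the -D generic option), though I'd test that on your version first.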

From: Rita <rmorgan466@gmail.com>
To: hdfs-user@hadoop.apache.org
Sent: Thu, February 3, 2011 4:35:09 AM
Subject: changing the block size

Currently I am using the default block size of 64MB. I would like to change it
for my cluster to 256MB, since I deal with large files (over 2GB). What
is the best way to do this?

What file do I have to make the change in? Does it have to be applied on the
namenode or on each individual datanode? What has to get restarted: the namenode,
the datanodes, or both?

--
Get your facts first, then you can distort them as you please.
