hadoop-hdfs-user mailing list archives

From Bharath Mundlapudi <bharathw...@yahoo.com>
Subject Re: changing the block size
Date Sun, 06 Feb 2011 19:25:54 GMT
Can you tell us how you are verifying that it's not working?
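One way to verify is to inspect a file written *after* the change, since dfs.block.size only affects newly written files; existing files keep the block size they were created with. A sketch of such a check, assuming a running cluster and a hypothetical file path:

```shell
# fsck lists the blocks that make up a file, so a file larger than the
# configured block size should show blocks of the new length.
hadoop fsck /path/to/some/large/file -files -blocks

# Alternatively, print the block size HDFS recorded for the file
# (%o = block size in bytes, %n = file name):
hadoop fs -stat "%o %n" /path/to/some/large/file
```

If the stat output still shows 67108864 (64 MB) for a freshly written file, the new setting was not picked up by the client that wrote it.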


Set dfs.block.size in conf/hdfs-site.xml, then restart the cluster.
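The reference above presumably means a property entry like the following sketch; the file path and property name are from the thread, while the byte value is my own arithmetic (268435456 bytes = 256 MB):

```xml
<!-- conf/hdfs-site.xml: block size for newly written files, in bytes.
     Hadoop of this era uses the key dfs.block.size; later releases
     renamed it dfs.blocksize. -->
<property>
  <name>dfs.block.size</name>
  <value>268435456</value>
</property>
```

Note that this is a client-side setting as well: the block size is fixed at file-creation time by whichever client writes the file, so the config needs to be visible to the writing clients, not just the namenode.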


From: Rita <rmorgan466@gmail.com>
To: hdfs-user@hadoop.apache.org
Sent: Sunday, February 6, 2011 8:50 AM
Subject: Re: changing the block size

Neither one worked.

Is there anything I can do? I always have problems like this in hdfs. It seems even experts
are guessing at the answers :-/

On Thu, Feb 3, 2011 at 11:45 AM, Ayon Sinha <ayonsinha@yahoo.com> wrote:

>Restart DFS. I believe it should be sufficient to restart the namenode only, but others can confirm.
>From: Rita <rmorgan466@gmail.com>
>To: hdfs-user@hadoop.apache.org
>Sent: Thu, February 3, 2011 4:35:09 AM
>Subject: changing the block size
>Currently I am using the default block size of 64MB. I would like to change it for my cluster to 256MB, since I deal with large files (over 2GB). What is the best way to do this?
>What file do I have to make the change in? Does it have to be applied on the namenode or on each individual datanode? What has to be restarted: the namenode, the datanodes, or both?
>-- Get your facts first, then you can distort them as you please.

