hadoop-common-user mailing list archives

From Aaron Kimball <aa...@cloudera.com>
Subject Re: Changing block size of hadoop
Date Sun, 12 Apr 2009 22:07:48 GMT
Blocks already written to HDFS will remain their current size. Blocks are
immutable objects. That procedure would set the size used for all
subsequently-written blocks. I don't think you can change the block size
while the cluster is running, because that would require the NameNode and
DataNodes to re-read their configurations, which they only do at startup.
- Aaron
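
For reference, the block size is controlled by the `dfs.block.size` property in the site configuration file (`hadoop-site.xml` in releases of this era; later versions renamed it `dfs.blocksize`). A minimal sketch of the change discussed above — the value shown is just the era's 64 MB default, in bytes, not a recommendation:

```xml
<!-- hadoop-site.xml: block size applied to files written after restart.
     Value is in bytes; 67108864 bytes = 64 MB (the default of the time).
     Existing blocks are immutable and keep their current size. -->
<property>
  <name>dfs.block.size</name>
  <value>67108864</value>
</property>
```

The property can also be overridden per job or per client write (e.g. via `-D dfs.block.size=...` on a tool that accepts generic options), which avoids a cluster-wide restart when only some files need a different size.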

On Sun, Apr 12, 2009 at 6:08 AM, Rakhi Khatwani <rakhi.khatwani@gmail.com> wrote:

> Hi,
> I would like to know if it is feasible to change the block size of Hadoop
> while map-reduce jobs are executing, and if not, would the following work?
>  1. stop map-reduce
>  2. stop HBase
>  3. stop Hadoop
>  4. change hadoop-sites.xml to reduce the block size
>  5. restart all
> Will the data in the HBase tables be safe and automatically split
> after changing the block size of Hadoop?
> Thanks,
> Raakhi
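
The shutdown/restart sequence in the question corresponds to the standard control scripts shipped with Hadoop and HBase of this vintage; the sketch below assumes a 0.19-era tarball layout under `$HADOOP_HOME` and `$HBASE_HOME` (adjust script paths to your install):

```shell
# Sketch of the restart procedure from the question above.
$HADOOP_HOME/bin/stop-mapred.sh     # 1. stop map-reduce
$HBASE_HOME/bin/stop-hbase.sh       # 2. stop HBase
$HADOOP_HOME/bin/stop-dfs.sh        # 3. stop HDFS
# 4. edit conf/hadoop-site.xml to reduce dfs.block.size, then:
$HADOOP_HOME/bin/start-dfs.sh       # 5. restart HDFS ...
$HBASE_HOME/bin/start-hbase.sh      #    ... then HBase ...
$HADOOP_HOME/bin/start-mapred.sh    #    ... then map-reduce
```

Note that, per the answer above, restarting with a smaller block size does not rewrite or split existing blocks; only data written after the restart uses the new size.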
