hadoop-common-user mailing list archives

From Gang Luo <lgpub...@yahoo.com.cn>
Subject Re: change HDFS block size
Date Wed, 08 Sep 2010 18:40:22 GMT
That makes sense. Thanks Alex and Jeff.

-Gang




----- Original Message ----
From: Alex Kozlov <alexvk@cloudera.com>
To: common-user@hadoop.apache.org
Sent: Wednesday, September 8, 2010, 1:31:14 PM
Subject: Re: change HDFS block size

The block size is a per-file property, so changing it affects only newly
created files.  If you want to change the block size of the 'legacy' files,
you'll need to recreate them, for example with the distcp command (shown
here for a new block size of 512M):
hadoop distcp -D dfs.block.size=536870912 <path-to-old-file> <path-to-new-file>

and then rm the old file.
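
A minimal end-to-end sketch, assuming the hypothetical paths
/user/gang/old.dat and /user/gang/new.dat stand in for your own files;
hadoop fs -stat %o prints a file's block size in bytes, so you can check
the result before deleting anything:

  # recreate the file with 512M blocks (dfs.block.size is given in bytes)
  hadoop distcp -D dfs.block.size=536870912 /user/gang/old.dat /user/gang/new.dat

  # verify the new copy picked up the 512M block size
  hadoop fs -stat %o /user/gang/new.dat

  # remove the old copy once the new one checks out
  hadoop fs -rm /user/gang/old.dat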

-- 
Alex Kozlov
Solutions Architect
Cloudera, Inc
twitter: alexvk2009

Hadoop World 2010, October 12, New York City - Register now:
http://www.cloudera.com/company/press-center/hadoop-world-nyc/

On Tue, Sep 7, 2010 at 8:03 PM, Jeff Zhang <zjffdu@gmail.com> wrote:

> Those legacy files won't change block size (the NameNode keeps the
> mapping between blocks and files);
> only newly added files will get the new block size of 64m.
>
>
> On Tue, Sep 7, 2010 at 7:27 PM, Gang Luo <lgpublic@yahoo.com.cn> wrote:
> > Hi all,
> > I need to change the block size (from 128m to 64m) and have to shut down
> > the cluster first. I was wondering what will happen to the current files
> > on HDFS (with 128M block size). Are they still there and usable? If so,
> > what is the block size of those legacy files?
> >
> > Thanks,
> > -Gang
>
> --
> Best Regards
>
> Jeff Zhang
>



      
