hadoop-common-user mailing list archives

From: Alex Kozlov <ale...@cloudera.com>
Subject: Re: change HDFS block size
Date: Wed, 08 Sep 2010 17:31:14 GMT
The block size is a per-file property, so it will change only for newly
created files.  If you want to change the block size of the 'legacy' files,
you'll need to recreate them, for example with the distcp command (here with
a new block size of 512M):

  hadoop distcp -D dfs.block.size=536870912 <path-to-old-file> <path-to-new-file>

and then rm the old file.
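
To double-check the result, hadoop fs -stat can print a file's block size
(%o, in bytes), and since the block size is chosen by the client at write
time, the same -D trick also works for a plain put (the paths below are
placeholders):

  # write a single file with a 512M block size
  hadoop fs -D dfs.block.size=536870912 -put <local-file> <path-to-new-file>

  # print the block size the file actually got; should show 536870912
  hadoop fs -stat %o <path-to-new-file>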

-- 
Alex Kozlov
Solutions Architect
Cloudera, Inc
twitter: alexvk2009

Hadoop World 2010, October 12, New York City - Register now:
http://www.cloudera.com/company/press-center/hadoop-world-nyc/

On Tue, Sep 7, 2010 at 8:03 PM, Jeff Zhang <zjffdu@gmail.com> wrote:

> Those legacy files won't change block size (the NameNode keeps the mapping
> between blocks and files);
> only the newly added files will get the new 64m block size.
>
>
> On Tue, Sep 7, 2010 at 7:27 PM, Gang Luo <lgpublic@yahoo.com.cn> wrote:
> > Hi all,
> > I need to change the block size (from 128m to 64m) and have to shut down
> > the cluster first. I was wondering what will happen to the current files
> > on HDFS (with 128M block size). Are they still there and usable? If so,
> > what is the block size of those legacy files?
> >
> > Thanks,
> > -Gang
> >
>
> --
> Best Regards
>
> Jeff Zhang
>
