hadoop-common-user mailing list archives

From Brahma Reddy Battula <brahmareddy.batt...@huawei.com>
Subject RE: modify hdfs block size
Date Tue, 10 Sep 2013 06:08:29 GMT
You can change the block size of existing files by copying them with a command like

hadoop distcp -Ddfs.block.size=$((256*1024*1024)) /path/to/inputdata /path/to/inputdata-with-largeblocks

After this command completes, you can remove the original data.
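A minimal sketch of that workflow, with a verification step before deleting anything (the paths are hypothetical, and the fsck step assumes you have permission to run it on the target path):

```shell
# Target block size: 256 MB, expressed in bytes.
BLOCKSIZE=$((256 * 1024 * 1024))
echo "$BLOCKSIZE"   # 268435456

# Re-copy the data with the new block size.
# On Hadoop 2.x the property is dfs.blocksize; dfs.block.size
# is the older (1.x) name, kept as a deprecated alias.
hadoop distcp -Ddfs.block.size=$BLOCKSIZE \
    /path/to/inputdata /path/to/inputdata-with-largeblocks

# Verify the new copy really uses the larger blocks.
hadoop fsck /path/to/inputdata-with-largeblocks -files -blocks

# Only then remove the original (use "hadoop fs -rmr" on 1.x).
hadoop fs -rm -r /path/to/inputdata
```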



________________________________
From: kun yan [yankunhadoop@gmail.com]
Sent: Tuesday, September 10, 2013 12:27 PM
To: user@hadoop.apache.org
Subject: Re: modify hdfs block size

Thank you very much.


2013/9/10 Harsh J <harsh@cloudera.com<mailto:harsh@cloudera.com>>
You cannot change the block size (i.e. merge or split blocks) of an
existing file. You can, however, set a different block size for new
files, or download and re-upload existing files with a new block size.
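For the re-upload route, the block size can also be set per command with a generic -D option, overriding the cluster default for just that copy. A sketch, using the poster's 32 MB target; the local and HDFS paths are made up for illustration (the property is dfs.blocksize on Hadoop 2.x, dfs.block.size on 1.x):

```shell
# 32 MB block size, in bytes.
BLOCKSIZE=$((32 * 1024 * 1024))
echo "$BLOCKSIZE"   # 33554432

# Upload a single file with the overridden block size.
hadoop fs -Ddfs.block.size=$BLOCKSIZE \
    -put /local/path/data.txt /hdfs/path/data.txt
```

To change the default for all newly written files instead, set the same property to 33554432 in hdfs-site.xml on the clients; files already in HDFS keep the block size they were written with.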

On Tue, Sep 10, 2013 at 9:01 AM, kun yan <yankunhadoop@gmail.com<mailto:yankunhadoop@gmail.com>>
wrote:
> Hi all
> Can I change the HDFS data block size to 32 MB? I know the default is 64 MB.
> thanks
>
> --
>
> In the Hadoop world I am just a novice exploring the entire Hadoop
> ecosystem; I hope one day I can contribute my own code.
>
> YanBit
> yankunhadoop@gmail.com<mailto:yankunhadoop@gmail.com>
>



--
Harsh J



--

In the Hadoop world I am just a novice exploring the entire Hadoop ecosystem;
I hope one day I can contribute my own code.

YanBit
yankunhadoop@gmail.com<mailto:yankunhadoop@gmail.com>

