hadoop-mapreduce-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Block size
Date Sat, 04 Jan 2014 07:23:39 GMT
XG,

The newer default is 128 MB [HDFS-4053]. The minimum, however, can be
as low as io.bytes.per.checksum (default: 512 bytes) if the user so
wishes. To administratively enforce a lower limit and prevent such
small values from being used, see the config introduced via HDFS-4305.
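For example, a minimal sketch of that limit in hdfs-site.xml on the
NameNode (property name per HDFS-4305; the 1 MB value shown is
illustrative, so check the default for your release):

  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>1048576</value>
    <description>Reject attempts to create files with a block size
    below 1 MB (sketch).</description>
  </property>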

On Sat, Jan 4, 2014 at 11:38 AM, Zhao, Xiaoguang
<XiaoGuang.Zhao@honeywell.com> wrote:
> As I am new to HDFS, I was told that the minimum block size is 64 MB;
> is that correct?
>
> XG
>
> On Jan 4, 2014, at 3:12, "German Florez-Larrahondo" <german.fl@samsung.com>
> wrote:
>
> Also note that the block size in recent releases is actually called
> “dfs.blocksize” as opposed to “dfs.block.size”, and that you can set it per
> job as well. In that scenario, just pass it as an argument to your job
> (e.g. hadoop bla -D dfs.blocksize=134217728); see the sketch below.
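>
> As a concrete sketch (the jar, class name, and paths below are
> placeholders, not from this thread), assuming the job uses
> ToolRunner/GenericOptionsParser so the -D generic option is parsed:
>
>   hadoop jar myjob.jar MyJob -D dfs.blocksize=134217728 /input /output
>
> Note that the -D option must precede the job's own arguments.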
>
>
>
> Regards
>
>
>
> From: David Sinclair [mailto:dsinclair@chariotsolutions.com]
> Sent: Friday, January 03, 2014 10:47 AM
> To: user@hadoop.apache.org
> Subject: Re: Block size
>
>
>
> If you want all new files to have a different block size, change
> dfs.block.size in hdfs-site.xml to the value you would like; a sketch
> follows.
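>
> A minimal sketch of that property (134217728 bytes = 128 MB; adjust to
> the size you want):
>
>   <property>
>     <name>dfs.block.size</name>
>     <value>134217728</value>
>   </property>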
>
>
>
> On Fri, Jan 3, 2014 at 11:37 AM, Kurt Moesky <kurtmoesky@gmail.com> wrote:
>
> I see the default block size for HDFS is 64 MB; is this a value that can be
> changed easily?
>
>



-- 
Harsh J
