hadoop-common-user mailing list archives

From jason hadoop <jason.had...@gmail.com>
Subject Re: specific block size for a file
Date Tue, 05 May 2009 12:51:49 GMT
Please try -D dfs.block.size=4096000
The value must be specified in bytes.

On Tue, May 5, 2009 at 4:47 AM, Christian Ulrik Søttrup <soettrup@nbi.dk> wrote:

> Hi all,
> I have a job that creates very big local files, so I need to split it
> across as many mappers as possible. The DFS block size I'm using means
> this job is only split across 3 mappers. I don't want to change the
> cluster-wide HDFS block size because it works for my other jobs.
> Is there a way to give a specific file a different block size? The
> documentation says there is, but does not explain how.
> I've tried:
> hadoop dfs -D dfs.block.size=4M -put file  /dest/
> But that does not work.
> Any help would be appreciated.
> Cheers,
> Chrulle
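The arithmetic behind the advice above can be sketched quickly: the number of map tasks for a splittable input is roughly the file size divided by the HDFS block size, so writing the file with a smaller per-file block size (e.g. the 4096000 bytes suggested in the reply) raises the mapper count. A minimal illustration; the 200 MB file size and 64 MB default block size here are assumptions for the example, not figures from the thread:

```python
import math

def estimated_mappers(file_size_bytes: int, block_size_bytes: int) -> int:
    """Rough split count: one map task per HDFS block of the input file."""
    return math.ceil(file_size_bytes / block_size_bytes)

# Hypothetical 200 MB input file.
file_size = 200 * 1024 * 1024

default_block = 64 * 1024 * 1024   # a common HDFS default block size
custom_block = 4096000             # value from the thread, given in bytes

print(estimated_mappers(file_size, default_block))  # 4 mappers
print(estimated_mappers(file_size, custom_block))   # 52 mappers
```

Note that dfs.block.size is read as a plain byte count, which is why `-D dfs.block.size=4M` fails while `hadoop dfs -D dfs.block.size=4096000 -put file /dest/` works: the generic `-D` option takes effect only for the file being written, leaving the cluster-wide default untouched.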

Alpha chapters of my book on Hadoop are available at
www.prohadoopbook.com, a community for Hadoop professionals.
