hadoop-common-user mailing list archives

From: Christian Ulrik Søttrup <soett...@nbi.dk>
Subject: Re: specific block size for a file
Date: Tue, 05 May 2009 14:39:58 GMT
Cheers, that worked.

jason hadoop wrote:
> Please try -D dfs.block.size=4096000
> The value must be specified in bytes.
>
> On Tue, May 5, 2009 at 4:47 AM, Christian Ulrik Søttrup <soettrup@nbi.dk> wrote:
>
>> Hi all,
>>
>> I have a job that creates very large local files, so I need to split it across as
>> many mappers as possible. With the DFS block size I'm
>> using, this job is only split into 3 mappers. I don't want to
>> change the HDFS-wide block size because it works for my other jobs.
>>
>> Is there a way to give a specific file a different block size? The
>> documentation says there is, but does not explain how.
>> I've tried:
>> hadoop dfs -D dfs.block.size=4M -put file /dest/
>>
>> But that does not work.
>>
>> Any help would be appreciated.
>>
>> Cheers,
>> Chrulle
>>
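For reference, the full working invocation from the thread looks like this. Note that dfs.block.size takes a plain byte count, so 4096000 is roughly 4 MB; suffixes such as "4M" are not parsed:

    hadoop dfs -D dfs.block.size=4096000 -put file /dest/

The same per-file block size can also be set programmatically, since FileSystem.create() accepts a block size argument. Below is a minimal sketch, assuming the Hadoop FileSystem API of this era; the class name and the file paths are illustrative, not from the thread:

    import java.io.FileInputStream;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class PutWithBlockSize {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Block size is passed in bytes, matching the -D value above.
            long blockSize = 4096000L;
            FSDataOutputStream out = fs.create(new Path("/dest/file"),
                true,                                      // overwrite if present
                conf.getInt("io.file.buffer.size", 4096),  // write buffer size
                fs.getDefaultReplication(),                // keep cluster default
                blockSize);
            // Stream the local file into HDFS; true closes both streams.
            IOUtils.copyBytes(new FileInputStream("file"), out, 4096, true);
        }
    }

Either way, only the new file gets the smaller block size; the cluster-wide default, and therefore all other jobs, are unaffected.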

