hadoop-common-user mailing list archives

From Mohit Anchlia <mohitanch...@gmail.com>
Subject Re: dfs.block.size
Date Mon, 27 Feb 2012 15:19:55 GMT
Can someone please suggest whether parameters like dfs.block.size and
mapred.tasktracker.map.tasks.maximum are cluster-wide settings only, or
whether they can be set per client job configuration?
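In case it helps to make the question concrete, here is a minimal sketch (assuming
the Hadoop 0.20/1.x API; the output path and key/value classes are just placeholders)
of what I mean by overriding dfs.block.size in a per-job Configuration when writing a
SequenceFile. As far as I understand, dfs.block.size is read by the HDFS client when
a file is created, so something like this should be possible per job, whereas
mapred.tasktracker.map.tasks.maximum is read by the TaskTracker daemon at startup
and so would only take effect cluster-wide.

  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.SequenceFile;
  import org.apache.hadoop.io.Text;

  public class BlockSizeSketch {
    public static void main(String[] args) throws IOException {
      Configuration conf = new Configuration();

      // Ask the HDFS client for 256 MB blocks on files created with this conf.
      // This should only affect files written by this client/job, not the cluster default.
      conf.setLong("dfs.block.size", 256L * 1024 * 1024);

      FileSystem fs = FileSystem.get(conf);
      Path out = new Path("/tmp/example.seq"); // placeholder output path

      SequenceFile.Writer writer = SequenceFile.createWriter(
          fs, conf, out, LongWritable.class, Text.class);
      try {
        writer.append(new LongWritable(1L), new Text("hello"));
      } finally {
        writer.close();
      }
    }
  }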

On Sat, Feb 25, 2012 at 5:43 PM, Mohit Anchlia <mohitanchlia@gmail.com> wrote:

> If I want to change the block size, can I use Configuration in the
> MapReduce job and set it when writing to the sequence file, or does it need
> to be a cluster-wide setting in the .xml files?
>
> Also, is there a way to check the blocks of a given file?
>
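For checking the blocks of a file already in HDFS, fsck should show them, e.g.
(the path here is just the placeholder from the sketch above):

  hadoop fsck /tmp/example.seq -files -blocks -locations

That lists each block of the file along with its size and the datanodes holding
its replicas.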
