Sorry for the typo, the property name is mapred.max.split.size

Also, just for changing the number of map tasks, you don't need to modify the HDFS block size.
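For example, a job-configuration fragment along these lines could be used (the 268435456 value, i.e. 256 MB, is only an illustrative choice larger than the 64 MB block size, not a value from this thread):

```xml
<!-- Hypothetical fragment for the job configuration: cap splits at 256 MB
     instead of the default. The exact value here is just an example. -->
<property>
  <name>mapred.max.split.size</name>
  <value>268435456</value>
</property>
```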

On Tue, Oct 2, 2012 at 10:31 PM, Bejoy Ks <> wrote:

You need to set mapred.max.split.size to a value larger than your block size to get fewer map tasks than the default.

On Tue, Oct 2, 2012 at 10:04 PM, Shing Hing Man <> wrote:

I am running Hadoop 1.0.3 in pseudo-distributed mode.
When I submit a map/reduce job to process a file of about 16 GB, I have the following in job.xml:

mapred.map.tasks = 242
mapred.min.split.size = 0
dfs.block.size = 67108864
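As an aside, to my understanding Hadoop's new-API FileInputFormat computes the split size as max(minSplitSize, min(maxSplitSize, blockSize)), which is roughly why you see ~242 maps. A small sketch of that arithmetic with the numbers above (the 16 GB figure is approximate, so the map count is too):

```python
# Sketch of the split-size formula used by Hadoop's new-API FileInputFormat:
#   splitSize = max(minSize, min(maxSize, blockSize))
# The file size below is the approximate 16 GB from the question, so the
# resulting map count is an approximation of the observed ~242.
import math

def compute_split_size(block_size, min_split, max_split):
    return max(min_split, min(max_split, block_size))

block_size = 67108864            # dfs.block.size (64 MB)
file_size  = 16 * 1024 ** 3      # roughly 16 GB of input

# Defaults: min split 0, max split effectively unbounded -> one split per block
split = compute_split_size(block_size, 0, 2 ** 63 - 1)
print(math.ceil(file_size / split))   # one map task per 64 MB split
```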

I would like to reduce the number of map tasks to see if it improves performance.
I have tried doubling the size of dfs.block.size, but the number of map tasks remains unchanged.
Is there a way to reduce the number of map tasks?

Thanks in advance for any assistance !