hadoop-mapreduce-user mailing list archives

From Marcelo Elias Del Valle <mvall...@gmail.com>
Subject Re: number of mapper tasks
Date Mon, 28 Jan 2013 16:55:19 GMT
Sorry for asking too many questions, but the answers are really helping.

2013/1/28 Harsh J <harsh@cloudera.com>

> This seems CPU-oriented. You probably want the NLineInputFormat? See
> http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/lib/NLineInputFormat.html.
> This should let you spawn more maps as well, based on your N factor.

Indeed, CPU is my bottleneck. That's why I want more things running in
parallel. Actually, I wrote my own InputFormat to be able to process
multiline CSVs. I could change it to read several lines at a time, but would
that alone allow more tasks to run in parallel?
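To see why reading more lines per record does not by itself change parallelism: what matters is how many *splits* the InputFormat produces, since each split becomes one map task. The grouping NLineInputFormat does can be sketched in plain Java (no Hadoop dependencies; class and method names here are illustrative only):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NLineSplitSketch {
    // Group input lines into chunks of n, the way NLineInputFormat
    // assigns n lines per map task; each chunk would become one split,
    // and each split becomes one map task.
    static List<List<String>> splitEveryN(List<String> lines, int n) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += n) {
            splits.add(lines.subList(i, Math.min(i + n, lines.size())));
        }
        return splits;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("r1", "r2", "r3", "r4", "r5");
        // With n = 2, five records yield three splits -> three map tasks.
        System.out.println(splitEveryN(input, 2).size()); // prints 3
    }
}
```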

> Not really - "Slots" are capacities, rather than split factors
> themselves. You can have N slots always available, but your job has to
> supply as many map tasks (based on its input/needs/etc.) to use them
> up.

But how can I do that (supply map tasks) in my job? By changing its code?
Through Hadoop config?
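The short version of the answer above: the number of map tasks equals the number of input splits, so a job "supplies" map tasks by producing more splits. For a FileInputFormat-style input this is roughly file size over split size, rounded up; a small plain-Java illustration (the 64 MB figure is just the classic default block size, used here as an example):

```java
public class SplitCount {
    // In Hadoop, the number of map tasks equals the number of input
    // splits the job's InputFormat produces. For a FileInputFormat-style
    // format, that is roughly fileSize / splitSize, rounded up.
    static long numSplits(long fileSizeBytes, long splitSizeBytes) {
        return (fileSizeBytes + splitSizeBytes - 1) / splitSizeBytes;
    }

    public static void main(String[] args) {
        // A 1 GB file with a 64 MB split size yields 16 splits, so at
        // most 16 map tasks can run, no matter how many slots exist.
        System.out.println(numSplits(1024L * 1024 * 1024, 64L * 1024 * 1024)); // prints 16
    }
}
```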

> Unless your job sets the number of reducers to 0 manually, 1 default
> reducer is always run that waits to see if it has any outputs from
> maps. If it does not receive any outputs after maps have all
> completed, it dies out with behavior equivalent to a NOP.

OK, I did job.setNumReduceTasks(0); I guess this will solve this part.
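Putting the two answers together, a driver using the old mapred API (the one the NLineInputFormat link above documents) might look roughly like this. This is an untested configuration sketch: the property name mapred.line.input.format.linespermap is from the Hadoop 1.x docs, the value 1000 is arbitrary, and MyMapper is a placeholder for your own mapper class:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.NLineInputFormat;

public class Driver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(Driver.class);
        conf.setJobName("cpu-bound-csv");

        // Feed each map task N input lines, so a large input file
        // produces many splits and thus many parallel map tasks.
        conf.setInputFormat(NLineInputFormat.class);
        conf.setInt("mapred.line.input.format.linespermap", 1000);

        conf.setMapperClass(MyMapper.class); // MyMapper: placeholder
        conf.setNumReduceTasks(0);           // map-only job, no reducer

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
```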

Marcelo Elias Del Valle
http://mvalle.com - @mvallebr
