hadoop-mapreduce-user mailing list archives

From Vinod Kumar Vavilapalli <vino...@hortonworks.com>
Subject Re: number of mapper tasks
Date Tue, 29 Jan 2013 20:08:22 GMT
Tried looking at your code; it's a bit involved. Instead of trying to run
the job, try unit-testing your input format. Test getSplits(): whatever
number of splits that method returns is the number of mappers that will
run.
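
For example, a quick JUnit sketch (untested; the test file path, the
linespermap value, and the split-count assertion are placeholders to
adapt to your data):

import static org.junit.Assert.assertTrue;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CSVNLineInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.junit.Test;

public class CSVNLineInputFormatTest {
  @Test
  public void splitsDetermineMapperCount() throws Exception {
    Configuration conf = new Configuration();
    // same knob NLineInputFormat uses
    conf.setInt("mapreduce.input.lineinputformat.linespermap", 100);
    Job job = new Job(conf);
    FileInputFormat.addInputPath(job, new Path("src/test/resources/input.csv"));

    // one map task will run per split returned here
    List<InputSplit> splits = new CSVNLineInputFormat().getSplits(job);
    assertTrue("expected more than one split", splits.size() > 1);
  }
}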

You can also use the LocalJobRunner for this - set mapred.job.tracker to
local and run your job locally on your machine instead of on a cluster.
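
Something like this, for example (a sketch, using the 1.x config keys;
the job name is just a placeholder):

Configuration conf = new Configuration();
conf.set("mapred.job.tracker", "local");  // use LocalJobRunner
conf.set("fs.default.name", "file:///");  // read input from the local FS
Job job = new Job(conf, "csv-import-local");
// configure mapper, input format and paths as usual, then:
job.waitForCompletion(true);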

HTH,
+Vinod



On Tue, Jan 29, 2013 at 4:53 AM, Marcelo Elias Del Valle
<mvallebr@gmail.com> wrote:

> Hello,
>
>     I have been able to make this work. I don't know why, but when the
> input file is zipped (read as an input stream) it creates only 1 mapper.
> However, when it's not zipped, it creates more mappers (running 3
> instances it created 4 mappers, and running 5 instances it created 8
> mappers).
>     I would really like to know why this happens and, even with this
> number of mappers, why more aren't created. I was reading part of the
> book "Hadoop - The Definitive Guide" (
> https://www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-7/input-formats)
> which says:
>
> "The JobClient calls the getSplits() method, passing the desired number
> of map tasks as the numSplits argument. This number is treated as a hint,
> as InputFormat implementations are free to return a different number of
> splits to the number specified in numSplits. Having calculated the
> splits, the client sends them to the jobtracker, which uses their storage
> locations to schedule map tasks to process them on the tasktrackers. ..."
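>
>      If I read that right, the passage describes the old
> org.apache.hadoop.mapred API, where the hint shows up in the signature
> itself (a sketch of that interface, for reference; the new mapreduce
> API I'm using drops the numSplits hint entirely):
>
> public interface InputFormat<K, V> {
>   // numSplits is only a hint; implementations may return more or fewer
>   InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
>   RecordReader<K, V> getRecordReader(InputSplit split, JobConf job,
>       Reporter reporter) throws IOException;
> }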
>
>      I am not sure on how to get more info.
>
>      Would you recommend trying to find the answer in the book? Or
> should I read the Hadoop source code directly?
>
> Best regards,
> Marcelo.
>
>
> 2013/1/29 Marcelo Elias Del Valle <mvallebr@gmail.com>
>
>> I implemented my custom input format. Here is how I used it:
>>
>> https://github.com/mvallebr/CSVInputFormat/blob/master/src/test/java/org/apache/hadoop/mapreduce/lib/input/test/CSVTestRunner.java
>>
>> As you can see, I do:
>> importerJob.setInputFormatClass(CSVNLineInputFormat.class);
>>
>> And here is the Input format and the linereader:
>>
>> https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVNLineInputFormat.java
>>
>> https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVLineRecordReader.java
>>
>> In this input format, I completely ignore these other parameters and get
>> the splits by the number of lines. The number of lines per map can be
>> controlled by the same parameter used in NLineInputFormat:
>>
>> public static final String LINES_PER_MAP =
>> "mapreduce.input.lineinputformat.linespermap";
>> However, it really has no effect on the number of maps.
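>>
>> For reference, setting it looks like this (a sketch; the 1000 is just
>> an example value):
>>
>> Configuration conf = importerJob.getConfiguration();
>> // lines per split; one mapper should run per split
>> conf.setInt(CSVNLineInputFormat.LINES_PER_MAP, 1000);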
>>
>>
>>
>> 2013/1/29 Vinod Kumar Vavilapalli <vinodkv@hortonworks.com>
>>
>>>
>>> Regarding your original question, you can use the min and max split
>>> settings to control the number of maps:
>>> http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.html.
>>> See #setMinInputSplitSize and #setMaxInputSplitSize. Or
>>> use mapred.min.split.size directly.
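>>>
>>> For example (a sketch; the sizes are illustrative, in bytes):
>>>
>>> // a smaller max split size means more splits, hence more map tasks
>>> FileInputFormat.setMinInputSplitSize(job, 1L);
>>> FileInputFormat.setMaxInputSplitSize(job, 16 * 1024 * 1024L); // 16 MB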
>>>
>>> W.r.t your custom InputFormat, are you sure your job is using this
>>> InputFormat and not the default one?
>>>
>>> HTH,
>>> +Vinod Kumar Vavilapalli
>>> Hortonworks Inc.
>>> http://hortonworks.com/
>>>
>>> On Jan 28, 2013, at 12:56 PM, Marcelo Elias Del Valle wrote:
>>>
>>> Just to complement the last question, I have implemented the getSplits
>>> method in my input format:
>>>
>>> https://github.com/mvallebr/CSVInputFormat/blob/master/src/main/java/org/apache/hadoop/mapreduce/lib/input/CSVNLineInputFormat.java
>>>
>>> However, it still doesn't create more than 2 map tasks. Is there
>>> something I could do about it to ensure more map tasks are created?
>>>
>>> Thanks
>>> Marcelo.
>>>
>>>
>>> 2013/1/28 Marcelo Elias Del Valle <mvallebr@gmail.com>
>>>
>>>> Sorry for asking too many questions, but the answers are really
>>>> helping.
>>>>
>>>>
>>>> 2013/1/28 Harsh J <harsh@cloudera.com>
>>>>
>>>>> This seems CPU-oriented. You probably want the NLineInputFormat? See
>>>>>
>>>>> http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/lib/NLineInputFormat.html
>>>>> .
>>>>> This should let you spawn more maps as well, based on your N factor.
>>>>>
>>>>
>>>> Indeed, CPU is my bottleneck. That's why I want to run more things in
>>>> parallel. Actually, I wrote my own InputFormat, to be able to process
>>>> multiline CSVs: https://github.com/mvallebr/CSVInputFormat
>>>> I could change it to read several lines at a time, but would this alone
>>>> allow more tasks running in parallel?
>>>>
>>>>
>>>>> Not really - "Slots" are capacities, rather than split factors
>>>>> themselves. You can have N slots always available, but your job has to
>>>>> supply as many map tasks (based on its input/needs/etc.) to use them
>>>>> up.
>>>>>
>>>>
>>>> But how can I do that (supply map tasks) in my job? By changing its
>>>> code? Through Hadoop config?
>>>>
>>>>
>>>>> Unless your job sets the number of reducers to 0 manually, 1 default
>>>>> reducer is always run that waits to see if it has any outputs from
>>>>> maps. If it does not receive any outputs after maps have all
>>>>> completed, it dies out with behavior equivalent to a NOP.
>>>>>
>>>> Ok, I did job.setNumReduceTasks(0); I guess this will solve this
>>>> part, thanks!
>>>>
>>>>
>>>> --
>>>> Marcelo Elias Del Valle
>>>> http://mvalle.com - @mvallebr
>>>>
>>>
>>>
>>>
>>> --
>>> Marcelo Elias Del Valle
>>> http://mvalle.com - @mvallebr
>>>
>>>
>>>
>>
>>
>> --
>> Marcelo Elias Del Valle
>> http://mvalle.com - @mvallebr
>>
>
>
>
> --
> Marcelo Elias Del Valle
> http://mvalle.com - @mvallebr
>



-- 
+Vinod
Hortonworks Inc.
http://hortonworks.com/
