hadoop-mapreduce-user mailing list archives

From Rahul Bhattacharjee <rahul.rec....@gmail.com>
Subject Re: Uber Job!
Date Mon, 06 May 2013 15:31:31 GMT
Excellent point sir

On Monday, May 6, 2013, yypvsxf19870706 wrote:

> Hi
>
>     Suppose that you have 10 input files with a total size of 64 MB; I
> think you will get 10 maps.
>
>     By the way, uber mode is only for YARN. Suppose you actually have 1
> map: YARN will create at least two containers, one for the app master and
> the other for the map. If uber mode is enabled, YARN will create only 1
> container for both the app master and the map.
>
>
> Sent from my iPhone
>
> On 2013-5-6, at 22:45, Rahul Bhattacharjee <rahul.rec.dgp@gmail.com> wrote:
>
> Hi,
>
> I was going through the definition of Uber Job of Hadoop.
>
> A job is considered uber when it has 10 or fewer maps, one reducer, and
> the complete input data is less than one DFS block in size.
>
> I have some doubts here-
>
> Splits are created as per the DFS block size. Creating 10 mappers from one
> block of data is possible with a settings change (lowering the max split
> size). But I am trying to understand why a job would need to run around 10
> maps for 64 MB of data.
> One reason may be that the job is immensely CPU intensive. Would that be a
> correct assumption, or is there any other reason for this?
>
> Thanks,
> Rahul
>
>
>
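For reference, the uber ("small job") behavior described above is controlled by the mapreduce.job.ubertask.* properties; a sketch of the relevant mapred-site.xml entries, assuming Hadoop 2.x property names (defaults shown may vary by version):

```xml
<!-- mapred-site.xml: uber ("small job") settings -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>   <!-- run small jobs inside the AM's own container -->
</property>
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>9</value>      <!-- at most this many maps (default 9) -->
</property>
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value>      <!-- at most one reducer (default 1) -->
</property>
<property>
  <name>mapreduce.job.ubertask.maxbytes</name>
  <value></value>       <!-- empty: defaults to dfs.blocksize -->
</property>
```

A job within all three thresholds is run in the ApplicationMaster's container, avoiding the extra container allocations discussed above.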
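To illustrate the point about getting around 10 maps from a single 64 MB block: with FileInputFormat, capping the maximum split size forces more, smaller splits. A sketch of the job-level setting, assuming the Hadoop 2.x property name (older releases used mapred.max.split.size):

```xml
<!-- job configuration: cap each split at ~6.4 MB so 64 MB yields ~10 maps -->
<property>
  <name>mapreduce.input.fileinputformat.split.maxsize</name>
  <value>6710886</value> <!-- bytes; roughly 64 MB / 10 -->
</property>
```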

-- 
Sent from Gmail Mobile
