hadoop-common-user mailing list archives

From Guy Doulberg <Guy.Doulb...@conduit.com>
Subject RE: number of maps is lower than the cluster capacity
Date Mon, 02 May 2011 06:07:21 GMT
I know that...
The number of maps allocated to a job is proportional to the size of its input data,
But in my case....
I have a job that runs with 200 map slots....
And a specific job that, when it runs, the job tracker allocates only 60 mappers to (although
I think it should get more according to my calculations), and 30 to the job that was already
running.
What happened to the other 110 map slots? Why aren't they allocated?
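One possible cause worth checking (an assumption, not something confirmed in this thread): the Fair Scheduler's allocations file can cap the number of concurrent map tasks per pool with `maxMaps`, and a cap like the one below would produce exactly this kind of ceiling. Pool name and limit here are purely illustrative:

```xml
<?xml version="1.0"?>
<!-- Hypothetical fair-scheduler allocations file (e.g. fair-scheduler.xml).
     A per-pool maxMaps cap leaves cluster slots idle even when the job
     in that pool still has pending map tasks. -->
<allocations>
  <pool name="etl">
    <!-- illustrative cap: this pool never gets more than 60 map slots -->
    <maxMaps>60</maxMaps>
  </pool>
</allocations>
```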


Thanks, Guy



-----Original Message-----
From: James Seigel [mailto:james@tynt.com] 
Sent: Sunday, May 01, 2011 9:42 PM
To: common-user@hadoop.apache.org
Subject: Re: number of maps is lower than the cluster capacity

There's also an input split size factor in there as well.  But the number of
mappers is definitely proportional to the input data size
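The rule James and ShengChang are describing can be sketched as follows. This is a simplified model of how `FileInputFormat` derives the map-task count, assuming default splitting (one split per block, no small-file combining, and ignoring the slack Hadoop allows on the final split); the function name and defaults are illustrative, not Hadoop API:

```python
def num_splits(input_bytes, block_size=128 * 1024 * 1024,
               min_split_size=1, max_split_size=None):
    """Estimate the number of map tasks for a single input file (sketch)."""
    if max_split_size is None:
        max_split_size = block_size
    # Hadoop's formula: splitSize = max(minSize, min(maxSize, blockSize))
    split_size = max(min_split_size, min(max_split_size, block_size))
    # Ceiling division: a trailing partial block still gets its own mapper.
    return -(-input_bytes // split_size)
```

So a 10-block file yields roughly 10 map tasks by default, and lowering `max_split_size` (via `mapred.max.split.size`) raises the task count without changing the input.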

Sent from my mobile. Please excuse the typos.

On 2011-05-01, at 11:26 AM, ShengChang Gu <gushengchang@gmail.com> wrote:

> If I'm not mistaken, then the number of map tasks = input data size / block size.
>
> 2011/5/2 Guy Doulberg <Guy.Doulberg@conduit.com>
>
>> Hey,
>> Maybe someone can give an idea where to look for the bug...
>>
>> I have a cluster with 270 slots for mappers,
>> And a FairScheduler configured for it....
>>
>> Sometimes only 80 or 50 slots are allocated across the entire cluster.
>> Most of the time, most of the slots are allocated.
>>
>> I noticed that there are several specific jobs that cause the above
>> behavior,
>>
>> Thanks, Guy
>>
>>
>>
>
>
> --
> 阿昌