hadoop-mapreduce-user mailing list archives

From Vitaliy Semochkin <vitaliy...@gmail.com>
Subject Re: specify different number of mapper tasks for different machines
Date Mon, 30 Aug 2010 12:39:32 GMT
To tell the truth, I didn't understand Ted's proposal for solving this via configuration.
If you manage to make such a configuration work, please report back :-)
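
My guess, completely untested, is that the usual per-node override is meant: give each
TaskTracker its own conf/mapred-site.xml and set the slot limits there. On a 2-core box
the relevant part might look something like this (property names are from the 0.20/1.x
MapReduce line; the slot counts below are only examples):

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>  <!-- example: 2 map slots on a 2-core node -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>  <!-- example: 1 reduce slot on a 2-core node -->
  </property>

Each TaskTracker reads these values from its own local configuration, so they can differ
from machine to machine while the 8-core nodes keep their maximum of 8.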

On Mon, Aug 30, 2010 at 3:59 PM, Shaojun Zhao <zhao@cs.rochester.edu> wrote:
> I believe what Allen and Ted said, but so far I have not tried it out.
> -Sam
>
> On Mon, Aug 30, 2010 at 4:42 AM, Vitaliy Semochkin <vitaliy.se@gmail.com> wrote:
>> Hi,
>>
>> Have you found a way to set a different number of mappers/reducers on a
>> particular node?
>>
>> On Wed, Jul 14, 2010 at 10:50 PM, Shaojun Zhao <zhao@cs.rochester.edu> wrote:
>>> Hi,
>>>
>>> I am running MapReduce on 5 machines: 3 of them have 8 cores each, and
>>> the other 2 have only 2 cores. The 8-core machines are also more
>>> powerful (faster, more memory, more disk).
>>>
>>> Currently, I am using only the 3 machines (each with 8 cores), and the
>>> max number of mapper tasks is 8.
>>> I may use one of the 2-core machines as the master, but it turns out I
>>> need a powerful master.
>>>
>>> Is there any way to specify that some machines run, say, 8 mapper
>>> tasks, while some machines run only 2 tasks?
>>>
>>> What I can imagine is extending the slaves file to have
>>> machine1:8
>>> machine2:8
>>> machine3:8
>>> machine4:2
>>> machine5:2
>>> but I have never seen this format.
>>>
>>> Another option could be to list the 8-core machines several times in
>>> the slaves file:
>>> machine1
>>> machine1
>>> machine1
>>> machine1
>>> <same for machines 2 and 3>
>>> machine4
>>> machine5
>>>
>>> But I believe there are ways to do this; I just cannot find the
>>> information on the Hadoop website.
>>>
>>> Thanks in advance.
>>> -Sam
>>>
>>
>
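
P.S. Regarding the idea of listing a machine several times in conf/slaves: as far as I
know that will not add slots. The slaves file is only the list of hosts the start scripts
ssh to; each host runs a single TaskTracker, and its slot count comes from its own local
mapred-site.xml (a second start attempt should just complain that a tasktracker is
already running). After changing a node's local file you can bounce just that
TaskTracker, something like (assuming the 0.20/1.x layout):

  # run these on the node whose mapred-site.xml you changed
  bin/hadoop-daemon.sh stop tasktracker
  bin/hadoop-daemon.sh start tasktracker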
