hadoop-common-user mailing list archives

From Ulul <had...@ulul.org>
Subject Re: Question about map Task and reducer Task
Date Mon, 16 Feb 2015 21:22:22 GMT
Hi

As a general rule you should follow your distro's recommendations, especially 
if you have paid support from Hortonworks.

I have a non-critical, unsupported production cluster on which I run 
small jobs, and I intend to test the feature there, but I haven't tried 
it yet, so I can't give you feedback right now, sorry.
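
In case it is useful while you evaluate it, below is a minimal sketch of 
what enabling uber mode from a job driver could look like, using the 
mapreduce.job.ubertask.* properties mentioned further down in this 
thread. The class name and the threshold values are illustrative 
assumptions only, not recommendations.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class UberJobSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Let the ApplicationMaster run "small" jobs inside its own JVM
        // instead of requesting a separate container per task.
        conf.setBoolean("mapreduce.job.ubertask.enable", true);
        // A job only qualifies as "small" if it stays under these limits
        // (the numbers here are assumptions for illustration).
        conf.setInt("mapreduce.job.ubertask.maxmaps", 5);
        conf.setInt("mapreduce.job.ubertask.maxreduces", 1);
        conf.setLong("mapreduce.job.ubertask.maxbytes", 64L * 1024 * 1024);

        Job job = Job.getInstance(conf, "small-uber-job");
        // ... set mapper, reducer, input and output paths as usual ...
    }
}

With these settings the framework runs a qualifying job's map and reduce 
tasks sequentially in the ApplicationMaster's JVM rather than spawning a 
container per task.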

Ulul

On 16/02/2015 00:47, 杨浩 wrote:
> Hi Ulul,
>  Thank you for the explanation. I have googled the feature, and 
> Hortonworks said:
>
> This feature is a technical preview and considered under development. 
> Do not use this feature in your production systems.
>
>  Can we use it in a production environment?
>
>
> 2015-02-15 20:15 GMT+08:00 Ulul <hadoop@ulul.org>:
>
>     Hi
>
>     Actually it depends: in MR1 each mapper or reducer is executed in
>     its own JVM; in MR2 you can activate uber jobs, which let the
>     framework run a small job's mappers and reducers serially in the
>     ApplicationMaster's JVM.
>
>     Look for the mapreduce.job.ubertask.* properties.
>
>     Ulul
>
>     On 15/02/2015 11:11, bit1129@163.com wrote:
>>     Hi, Hadoopers,
>>
>>     I am pretty new to Hadoop and I have a question: when a job runs,
>>     will each mapper or reducer task take up a JVM process or only a
>>     thread?
>>     I hear that the answer is a process. That is, say, if one job
>>     contains 5 mappers and 2 reducers, will there be 7 JVM
>>     processes?
>>     Thanks.
>>
>>     ------------------------------------------------------------------------
>>     bit1129@163.com
>
>

