hadoop-common-user mailing list archives

From Varun Vasudev <vvasu...@apache.org>
Subject Re: Max Parallel task executors
Date Tue, 03 Nov 2015 13:59:06 GMT
The number of tasks that run in parallel depends on the amount of memory and vcores available on your
machines and the amount of memory and vcores required by your mappers and reducers. The memory offered
to containers on each node can be set via yarn.nodemanager.resource.memory-mb (the default is 8192 MB, i.e. 8 GB).
The vcores can be set via yarn.nodemanager.resource.cpu-vcores (the default is 8 vcores).
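As a sketch, both properties go in yarn-site.xml on each NodeManager. The values below are illustrative assumptions for the 24-CPU machines described in the original question, not tuned recommendations:

```xml
<!-- yarn-site.xml (per NodeManager). Values are illustrative
     assumptions, not tuned recommendations. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>49152</value> <!-- assume 48 GB offered to YARN containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>24</value> <!-- assume one vcore per physical CPU -->
</property>
```

For what it's worth, with the defaults a 4-node cluster offers 4 × 8 = 32 vcores; if each container uses one vcore, one container would go to the MapReduce ApplicationMaster, which could leave exactly 31 slots for tasks.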

-Varun

From:  sandeep das <yarnhadoop@gmail.com>
Reply-To:  <user@hadoop.apache.org>
Date:  Monday, November 2, 2015 at 3:56 PM
To:  <user@hadoop.apache.org>
Subject:  Max Parallel task executors

Hi Team,

I have a Cloudera cluster of 4 nodes. Whenever I submit a job, only 31 tasks are executed in
parallel, even though my machines have more CPU available; YARN/the AM does not create more
tasks.

Is there any configuration I can change to start more map/reduce tasks in parallel?

Each machine in my cluster has 24 CPUs.

Regards,
Sandeep

