hadoop-user mailing list archives

From: sandeep das <yarnhad...@gmail.com>
Subject: Re: Max Parallel task executors
Date: Mon, 09 Nov 2015 06:24:20 GMT
After increasing yarn.nodemanager.resource.memory-mb to 24 GB, more parallel
map tasks are being spawned. It's resolved now.
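
For reference, this is roughly the change in yarn-site.xml on each NodeManager
(the property takes MB, so 24 GB = 24576; adjust to your own nodes):

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>24576</value>
  </property>

With mapreduce.map.memory.mb still at 2 GB, that gives min(24/2, 80/1) = 12
map slots per node instead of 8.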
Thanks a lot for your input.

Regards,
Sandeep

On Mon, Nov 9, 2015 at 9:49 AM, sandeep das <yarnhadoop@gmail.com> wrote:

> BTW Laxman, according to the formula you provided, it turns out that only 8
> tasks per node will be initiated, which matches what I'm seeing on my setup.
>
> min(yarn.nodemanager.resource.memory-mb / mapreduce.[map|reduce].memory.mb,
>     yarn.nodemanager.resource.cpu-vcores / mapreduce.[map|reduce].cpu.vcores)
>
> yarn.nodemanager.resource.memory-mb: 16 GB
> mapreduce.map.memory.mb: 2 GB
> yarn.nodemanager.resource.cpu-vcores: 80
> mapreduce.map.cpu.vcores: 1
> So if I apply the formula: min(16/2, 80/1) -> min(8, 80) -> 8
>
>
> Should I reduce the memory per map task, or increase the memory available to
> YARN on each node (yarn.nodemanager.resource.memory-mb)?
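>
> To sanity-check both options, here's a tiny throwaway script (plain
> arithmetic, nothing Hadoop-specific; the helper name is just for
> illustration, and the numbers are the ones from my setup above):
>
>   # Estimated concurrent tasks per node: min(memory ratio, vcore ratio)
>   def slots_per_node(nm_mem_gb, map_mem_gb, nm_vcores, map_vcores):
>       return min(nm_mem_gb // map_mem_gb, nm_vcores // map_vcores)
>
>   print(slots_per_node(16, 2, 80, 1))  # current setup     -> 8
>   print(slots_per_node(16, 1, 80, 1))  # 1 GB per map task -> 16
>   print(slots_per_node(24, 2, 80, 1))  # 24 GB NodeManager -> 12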
>
> On Mon, Nov 9, 2015 at 9:43 AM, sandeep das <yarnhadoop@gmail.com> wrote:
>
>> Thanks Brahma and Laxman for your valuable input.
>>
>> Following are the statistics available on the YARN RM GUI.
>>
>> Memory Used: 0 GB
>> Memory Total: 64 GB (16*4 = 64 GB)
>> VCores Used: 0
>> VCores Total: 320 (Earlier I had mentioned that I'd configured 40 vcores,
>> but recently I increased it to 80; that's why it's showing 80*4 = 320)
>>
>> Note: These statistics were captured when there was no job running in the
>> background.
>>
>> Let me know whether this is sufficient to nail down the issue. If more
>> information is required, please let me know.
>>
>> Regards,
>> Sandeep
>>
>>
>> On Fri, Nov 6, 2015 at 7:04 PM, Brahma Reddy Battula
>> <brahmareddy.battula@huawei.com> wrote:
>>
>>>
>>> The formula for determining the number of concurrently running tasks per
>>> node is:
>>>
>>> min(yarn.nodemanager.resource.memory-mb / mapreduce.[map|reduce].memory.mb,
>>>     yarn.nodemanager.resource.cpu-vcores / mapreduce.[map|reduce].cpu.vcores)
>>>
>>>
>>> For your scenario:
>>>
>>> As you said, yarn.nodemanager.resource.memory-mb is configured to 16 GB
>>> and yarn.nodemanager.resource.cpu-vcores to 40, and I am assuming
>>> mapreduce.map/reduce.memory.mb and mapreduce.map/reduce.cpu.vcores are at
>>> their default values (1 GB and 1 vcore).
>>>
>>> min(16 GB / 1 GB, 40 cores / 1 core) = 16 tasks per node. The total should
>>> then be 16*4 = 64 (63 tasks + 1 AM).
>>>
>>> I suspect either two NodeManagers are unhealthy, OR you might have
>>> configured mapreduce.map/reduce.memory.mb = 2 GB (or 5 vcores per task).
>>>
>>> As Laxman pointed out, you can post the RM UI metrics, or you can
>>> cross-check as above.
>>>
>>> Hope this helps.
>>>
>>>
>>>
>>> Thanks & Regards
>>>
>>>  Brahma Reddy Battula
>>>
>>>
>>>
>>>
>>> ------------------------------
>>> From: Laxman Ch [laxman.lux@gmail.com]
>>> Sent: Friday, November 06, 2015 6:31 PM
>>> To: user@hadoop.apache.org
>>> Subject: Re: Max Parallel task executors
>>>
>>> Can you please copy-paste the cluster metrics from the RM dashboard?
>>> They're under http://rmhost:port/cluster/cluster
>>>
>>> On this page, check Memory Total vs. Memory Used, and VCores Total vs.
>>> VCores Used.
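>>>
>>> If it's easier to share, the same numbers are also exposed as JSON by the
>>> RM REST API; a minimal sketch (Python 3 stdlib; replace rmhost:port with
>>> your ResourceManager address):
>>>
>>>   import json, urllib.request
>>>
>>>   # Cluster metrics endpoint of the ResourceManager REST API
>>>   url = "http://rmhost:port/ws/v1/cluster/metrics"
>>>   m = json.load(urllib.request.urlopen(url))["clusterMetrics"]
>>>   print("memory MB:", m["allocatedMB"], "/", m["totalMB"])
>>>   print("vcores:", m["allocatedVirtualCores"], "/", m["totalVirtualCores"])
>>>   print("nodes: active", m["activeNodes"], "unhealthy", m["unhealthyNodes"])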
>>>
>>> On 6 November 2015 at 18:21, sandeep das <yarnhadoop@gmail.com> wrote:
>>>
>>>> Hi Laxman,
>>>>
>>>> Thanks for your response. I had already configured a very high value for
>>>> yarn.nodemanager.resource.cpu-vcores (e.g. 40), but that still does not
>>>> increase the number of parallel tasks; if this value is reduced, however,
>>>> fewer parallel tasks run.
>>>>
>>>> As of now yarn.nodemanager.resource.memory-mb is configured to 16 GB
>>>> and yarn.nodemanager.resource.cpu-vcores configured to 40.
>>>>
>>>> Still, it's not spawning more than 31 tasks.
>>>>
>>>> Let me know if more information is required to debug it. I believe there
>>>> is an upper limit after which YARN stops spawning tasks, but I may be
>>>> wrong here.
>>>>
>>>>
>>>> Regards,
>>>> Sandeep
>>>>
>>>> On Fri, Nov 6, 2015 at 6:15 PM, Laxman Ch <laxman.lux@gmail.com> wrote:
>>>>
>>>>> Hi Sandeep,
>>>>>
>>>>> Please configure the following items to the cores and memory per node
>>>>> you want to allocate for YARN containers. Their defaults are 8 cores and
>>>>> 8 GB; that's the reason you were stuck at 31 (4 nodes * 8 cores - 1
>>>>> AppMaster). An example snippet follows the property list below.
>>>>>
>>>>>
>>>>> http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
>>>>> yarn.nodemanager.resource.cpu-vcores
>>>>> yarn.nodemanager.resource.memory-mb
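>>>>>
>>>>> For example, to let YARN use 16 GB and all 24 cores per node, something
>>>>> like this in yarn-site.xml on every NodeManager (illustrative values;
>>>>> memory is in MB):
>>>>>
>>>>>   <property>
>>>>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>>>>     <value>16384</value>
>>>>>   </property>
>>>>>   <property>
>>>>>     <name>yarn.nodemanager.resource.cpu-vcores</name>
>>>>>     <value>24</value>
>>>>>   </property>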
>>>>>
>>>>>
>>>>> On 6 November 2015 at 17:59, sandeep das <yarnhadoop@gmail.com> wrote:
>>>>>
>>>>>> Maybe too naive to ask, but how do I check that?
>>>>>> Sometimes there are almost 200 map tasks pending to run, but only 31
>>>>>> run at a time.
>>>>>>
>>>>>> On Fri, Nov 6, 2015 at 5:57 PM, Chris Mawata <chris.mawata@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Also check that you have more than 31 blocks to process.
>>>>>>> On Nov 6, 2015 6:54 AM, "sandeep das" <yarnhadoop@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Varun,
>>>>>>>>
>>>>>>>> I tried to increase this parameter, but it did not increase the
>>>>>>>> number of parallel tasks; if it is decreased, however, YARN reduces
>>>>>>>> the number of parallel tasks. I'm a bit puzzled why it's not running
>>>>>>>> more than 31 tasks even after the value is increased.
>>>>>>>>
>>>>>>>> Is there any other configuration that controls the maximum number of
>>>>>>>> tasks that can execute in parallel?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Sandeep
>>>>>>>>
>>>>>>>> On Tue, Nov 3, 2015 at 7:29 PM, Varun Vasudev <vvasudev@apache.org>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> The number of parallel tasks that are run depends on the amount of
>>>>>>>>> memory and vcores on your machines and the amount of memory and
>>>>>>>>> vcores required by your mappers and reducers. The amount of memory
>>>>>>>>> can be set via yarn.nodemanager.resource.memory-mb (the default is
>>>>>>>>> 8 GB). The amount of vcores can be set via
>>>>>>>>> yarn.nodemanager.resource.cpu-vcores (the default is 8 vcores).
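>>>>>>>>>
>>>>>>>>> The per-task side is set in mapred-site.xml; for illustration, these
>>>>>>>>> are the defaults (1 GB and 1 vcore per map task; reduce tasks have
>>>>>>>>> the analogous mapreduce.reduce.* properties):
>>>>>>>>>
>>>>>>>>>   <property>
>>>>>>>>>     <name>mapreduce.map.memory.mb</name>
>>>>>>>>>     <value>1024</value>
>>>>>>>>>   </property>
>>>>>>>>>   <property>
>>>>>>>>>     <name>mapreduce.map.cpu.vcores</name>
>>>>>>>>>     <value>1</value>
>>>>>>>>>   </property>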
>>>>>>>>>
>>>>>>>>> -Varun
>>>>>>>>>
>>>>>>>>> From: sandeep das <yarnhadoop@gmail.com>
>>>>>>>>> Reply-To: <user@hadoop.apache.org>
>>>>>>>>> Date: Monday, November 2, 2015 at 3:56 PM
>>>>>>>>> To: <user@hadoop.apache.org>
>>>>>>>>> Subject: Max Parallel task executors
>>>>>>>>>
>>>>>>>>> Hi Team,
>>>>>>>>>
>>>>>>>>> I have a Cloudera cluster of 4 nodes. Whenever I submit a job, only
>>>>>>>>> 31 parallel tasks are executed, even though my machines have more
>>>>>>>>> CPU available; YARN/the AM still does not create more tasks.
>>>>>>>>>
>>>>>>>>> Is there any configuration I can change to start more map/reduce
>>>>>>>>> tasks in parallel?
>>>>>>>>>
>>>>>>>>> Each machine in my cluster has 24 CPUs.
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Sandeep
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Laxman
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Laxman
>>>
>>
>>
>
