hadoop-mapreduce-user mailing list archives

From "Naganarasimha G R (Naga)" <garlanaganarasi...@huawei.com>
Subject RE: Determine HDP Memory Configuration Settings
Date Thu, 14 Jan 2016 03:24:25 GMT
Hi Evan,

In most scenarios the hardware we have is something like 8 dual-core CPUs which are hyper-threaded, so logically that is 8*2*2 = 32 cores. This logical core count is usually obtained by executing the command
    grep -c ^processor /proc/cpuinfo
By "logical", the assumption is that in the ideal case 32 threads/processes can run concurrently. But depending on how heavily the nodes are loaded, you can configure the value accordingly.
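For example, on a Linux node nproc reports the same logical count, and lscpu shows how it breaks down into sockets, cores and hardware threads (just a rough sketch; the exact lscpu labels can vary between distributions):

    # logical processors visible to the OS (same count as the grep above)
    nproc

    # physical layout: Socket(s) x Core(s) per socket x Thread(s) per core
    lscpu | grep -E 'Socket|Core|Thread'

As Namikaze mentioned below, the logical count is the value to pass to the -c option of yarn-utils.py, so for the host you describe that would be 32.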
Hope this helps or may be you can reframe your question so that we can help better.

Regards,
Naga

________________________________
From: yaoxiaohua [yaoxiaohua@outlook.com]
Sent: Thursday, January 14, 2016 06:36
To: 'Namikaze Minato'
Cc: user@hadoop.apache.org
Subject: RE: Determine HDP Memory Configuration Settings

Thanks for your reply,
                What makes you draw this conclusion?

Evan Yao
From: Namikaze Minato [mailto:lloydsensei@gmail.com]
Sent: Thursday, January 14, 2016 8:50
To: yaoxiaohua
Cc: common-user@hadoop.apache.org
Subject: Re: Determine HDP Memory Configuration Settings

Logical ones.

On 14 January 2016 at 01:13, yaoxiaohua <yaoxiaohua@outlook.com> wrote:
Hi,
                When we configure memory in yarn-site.xml, we found a recommendation on the Hortonworks website:
Determine HDP Memory Configuration Settings
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/determine-hdp-memory-config.html

       python yarn-utils.py -c 16 -m 64 -d 4 -k True

                My question is about the first parameter:
-c CORES

The number of cores on each host.

                How do we calculate the number of cores on each host?
                Should we use nproc, or some other tool?
                I know that our host’s physical core count is 8 and its logical core count is 32.
                Which number should I use, 32 or 8?
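                In other words, which of the following should we run for this host (the -m, -d and -k values are just carried over from the example above and would need to match our actual memory, disks and HBase setup)?

       python yarn-utils.py -c 32 -m 64 -d 4 -k True
or
       python yarn-utils.py -c 8 -m 64 -d 4 -k True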

Best Regards,
Evan Yao

