hadoop-mapreduce-user mailing list archives

From yaoxiaohua <yaoxiao...@outlook.com>
Subject hadoop configure issues
Date Thu, 14 Jan 2016 05:53:18 GMT
Hi guys,

We use huge pages on Linux; the total huge-page memory is 16 GB.

Our environment:

- 128 GB memory
- 28 disks
- 32 (logical) CPUs
- IBM JDK 1.7
- CDH 2.3
- Linux: overcommit set to 0

 

For one NodeManager we give 100 GB total and 24 vcores, so I find that one NodeManager can assign 24 containers at the same time.
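For reference, the NodeManager totals above would normally come from yarn-site.xml. A sketch using the figures stated here (the property names are the standard YARN NodeManager settings; the values are this cluster's, with 100 GB expressed in MB):

```xml
<!-- yarn-site.xml: NodeManager resource totals as described above -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>102400</value> <!-- ~100 GB -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>24</value>
</property>
```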

Every container's Java opts are:

-server -Xms1200m -Xmx1200m -Xlp -Xnoclassgc -Xgcpolicy:gencon -Xjit:optLevel=hot

-Xlp on the IBM JDK means "use huge pages".

 

My question is: when the cluster is busy, I see all 24 containers launched at the same time, but we only have 16 GB of huge pages in total. Why does this happen? 24 × 1.2 GB > 16 GB.
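The arithmetic can be sketched as below (the 24 containers and 1200 MB heaps are the figures from this thread; whether the whole heap is actually backed by huge pages depends on the JVM):

```python
# Back-of-the-envelope check: can the huge-page pool back every container heap?
containers = 24
heap_mb_per_container = 1200   # -Xms1200m -Xmx1200m per container
hugepage_pool_gb = 16          # total huge-page memory configured on the node

needed_gb = containers * heap_mb_per_container / 1024
print(f"needed: {needed_gb:.3f} GB, pool: {hugepage_pool_gb} GB")
print("pool exhausted" if needed_gb > hugepage_pool_gb else "pool sufficient")
# → needed: 28.125 GB, pool: 16 GB
# → pool exhausted
```

So the requested heaps (~28 GB) exceed the 16 GB huge-page pool by a wide margin. One plausible explanation (worth verifying against the IBM JDK documentation for your version): when -Xlp cannot obtain large pages, the JVM typically falls back to regular 4 KB pages rather than failing, so the containers still launch, just without huge-page backing.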

                

                Thanks

 

Best Regards,

Evan 

                

