hadoop-user mailing list archives

From: Drake 민영근 <drake....@nexr.com>
Subject: Re: hadoop configure issues
Date: Mon, 18 Jan 2016 04:38:53 GMT
Hi Evan,

I think this is why: 24 * 1.2 GB = 28.8 GB, which is well under the 100 GB
you configured for the NodeManager, so YARN happily launches all 24
containers; the scheduler knows nothing about your 16 GB huge page pool. I
don't know the details of the IBM JDK's huge page support, but you could
configure 16 GB for the NodeManager instead, so the containers YARN
schedules fit within the huge pages.
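
For example, a minimal yarn-site.xml sketch (the two property names are
standard YARN settings; the values below are illustrative, not tested
against your cluster):

<!-- Cap NodeManager memory at the 16 GB huge page pool
     instead of 100 GB (illustrative values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>16384</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>24</value>
</property>

With a 16384 MB cap and 1200 MB heaps (plus per-container overhead), YARN
would run roughly 13 containers at a time instead of 24. You can watch
HugePages_Total and HugePages_Free in /proc/meminfo while the cluster is
busy to confirm the pool is no longer exhausted.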

Thanks.

Drake 민영근 Ph.D
kt NexR

On Thu, Jan 14, 2016 at 2:53 PM, yaoxiaohua <yaoxiaohua@outlook.com> wrote:

> Hi guys,
>
> We use huge pages for Linux; the total huge page memory is 16 GB.
>
> Our environment is:
>
> 128 GB memory,
> 28 disks,
> 32 (logical) CPUs,
> IBM JDK 1.7,
> CDH 2.3,
> Linux: overcommit 0
>
> For one NodeManager we give 100 GB total and 24 vcores,
> so I find that one NodeManager can assign 24 containers at
> the same time.
>
> Every container's java opts are:
>
> -server -Xms1200m -Xmx1200m -Xlp -Xnoclassgc
> -Xgcpolicy:gencon -Xjit:optLevel=hot
>
> -Xlp in the IBM JDK means "use huge pages".
>
> My question is: when the cluster is busy, I found 24 containers
> launched at the same time, but we have only 16 GB of huge pages
> in total. Why does this happen? 24 * 1.2 GB > 16 GB.
>
> Thanks
>
> Best Regards,
>
> Evan
>
