hadoop-common-user mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: NN memory consumption on 0.20/0.21 with compressed pointers/
Date Mon, 24 Aug 2009 10:46:11 GMT
Scott Carey wrote:

> The implementation in JRE 6u14 uses a shift for all heap sizes, the
> optimization to remove that for heaps less than 4GB is not in the hotspot
> version there (but will be later).

OK. I've been using 64-bit JRockit for a while, and it did a check on 
every pointer to see whether it was real or relative, then an add, so I 
suspect its computation was more complex. The Sun approach seems better.
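[Editor's note: a minimal sketch of the arithmetic being compared above. The base address, shift value, and method names are hypothetical illustrations, not actual JVM internals; HotSpot and JRockit implement this in native code.]

```java
// Illustrative sketch only: models the two decoding strategies discussed
// above for 32-bit compressed references on a 64-bit heap.
// HEAP_BASE and SHIFT are assumed values, not real JVM constants.
public class CompressedOopSketch {
    static final long HEAP_BASE = 0x100000000L; // assumed heap base address
    static final int SHIFT = 3;                 // 8-byte object alignment

    // HotSpot 6u14 style (per Scott's description): unconditionally
    // decode as base + (compressed << shift), no branch.
    static long decodeWithShift(int compressed) {
        return HEAP_BASE + ((compressed & 0xFFFFFFFFL) << SHIFT);
    }

    // JRockit style (as Steve describes it): first a check per pointer
    // to see whether it is "real or relative", then an add.
    static long decodeWithCheck(int compressed) {
        if (compressed == 0) {
            return 0L; // null reference stays null
        }
        return HEAP_BASE + (compressed & 0xFFFFFFFFL);
    }

    // Inverse of decodeWithShift, for round-tripping an address.
    static int encode(long addr) {
        return (int) ((addr - HEAP_BASE) >>> SHIFT);
    }

    public static void main(String[] args) {
        long addr = HEAP_BASE + 64; // an 8-byte-aligned heap address
        int c = encode(addr);
        System.out.println(decodeWithShift(c) == addr); // round trip holds
    }
}
```

With an 8-byte alignment shift, a 32-bit compressed reference can span up to 32 GB of heap, which is why the shift is worth its cost; the branch-free shift-and-add also pipelines better than a per-pointer check.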

> The size advantage is there either way.
> I have not tested an app myself that was not faster using
> -XX:+UseCompressedOops on a 64 bit JVM.
> The extra bit shifting is overshadowed by how much faster and less frequent
> GC is with a smaller dataset.


You get better cache efficiency too: fewer cache misses, and you save 
memory bandwidth.
