hadoop-common-user mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: Datanode high memory usage
Date Wed, 02 Sep 2009 12:11:49 GMT
Allen Wittenauer wrote:
> On 9/2/09 3:49 AM, "Stas Oskin" <stas.oskin@gmail.com> wrote:
>>> It's a Sun JVM setting, not something Hadoop will control.  You'd have to
>>> turn it on in hadoop-env.sh.
>>>
>>>
>> The question is whether Hadoop will include this as standard, if it
>> indeed has such benefits.

We can't do this: if you then try to bring up Hadoop on a JVM without
this option (currently, all OS X JVMs), the Java process will not start.
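
For what it's worth, you can probe a given JVM for the option before
wiring it into any scripts. A minimal sketch, assuming the flag in
question is HotSpot's -XX:+UseCompressedOops: a JVM that does not
recognize the flag simply refuses to start, which is the failure mode
described above.

    # ask the JVM to start with the flag and only print its version;
    # an unrecognized -XX option aborts startup with a non-zero exit code
    if java -XX:+UseCompressedOops -version > /dev/null 2>&1; then
        echo "compressed oops supported"
    else
        echo "compressed oops not supported on this JVM"
    fi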

> 
> Hadoop doesn't have a -standard- here, it has a -default-.  JVM settings are
> one of those things that should be expected to be adjusted on a
> per-installation basis.  It is pretty much impossible to get it correct for
> everyone.  [Thanks Java. :( ]
> 

It would be nice if Java 6 had a way of switching compressed pointers on
by default, the way 64-bit JRockit did. Right now you have to edit the
startup shell script of every program, Hadoop included. Maybe when JDK 7
ships it will do this by default.
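
In the meantime, the per-installation knob lives in conf/hadoop-env.sh.
A rough sketch, assuming a 0.20-era hadoop-env.sh and the HotSpot
-XX:+UseCompressedOops flag (the per-daemon variable names may differ in
your version):

    # conf/hadoop-env.sh -- apply the flag to every Hadoop daemon...
    export HADOOP_OPTS="-XX:+UseCompressedOops $HADOOP_OPTS"
    # ...or, instead, scope it to a single daemon such as the datanode:
    # export HADOOP_DATANODE_OPTS="-XX:+UseCompressedOops $HADOOP_DATANODE_OPTS"

Either way it only helps on a 64-bit JVM that actually implements the
option; on anything else the daemon will refuse to launch, per the
earlier point.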
