harmony-dev mailing list archives

From "Alex Blewitt" <alex.blew...@gmail.com>
Subject Re: Re: [rant] Memory options in VM -- why is the default not 'unlimited'
Date Mon, 31 Jul 2006 21:28:29 GMT
On 31/07/06, will pugh <willpugh@sourcelabs.com> wrote:
> What does matter is whether you are using more virtual memory than you
> have physical memory.

I agree completely. But the VM developer (writing in C) does not know
how much memory I have. Why should the VM assume 256m? I have more
memory than that available on my laptop, and a lot more on my desktop.
And the problem with an arbitrary default is that if I want to
override it, I have to supply extra parameters.

Don't get me wrong; being able to specify minimum/maximum is a
reasonable idea for optimising a VM if you know what to put; but by
default, there shouldn't be any arbitrary limitations based on the
value of a #define constant ...
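To illustrate the point (this is standard Java, not Harmony-specific): the cap the VM is actually applying can be queried at runtime, and the built-in default only goes away if you pass something like -Xmx explicitly. A minimal check:

```java
// Prints the heap cap the running VM is actually applying. With no
// -Xmx flag this reports the VM's built-in default; e.g. launching
// with "java -Xmx256m MaxHeap" would cap it at roughly 256 MB.
public class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```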

> My understanding is that one of the important reasons for having a Max
> on the allocation available to a VM from the beginning was to make sure
> the heap never used more than physical memory.

Agreed, but the strategy falls apart when Max << TotalAvailable. When
JVMs first appeared, 256m might have sounded like a reasonable max,
but these days machines have >> 512m available. Thus, there shouldn't
be a Max by default.

You can't even do an analysis based on how much memory is currently
free. If I'm running a JVM and there's (say) 400m available, would you
pick that? What happens if it later runs out of memory, and I close
down some memory hog or an IDE to free up space? Why limit it to 400m?

In fact, can you think of any other system with automatic memory
management (Smalltalk, C++ with GC, Python, Ruby, PHP, Basic, C#) that
imposes an arbitrary upper limit 'to protect the user'? I can't think
of a single instance where the default max has been helpful for me.

> Zones sound like an interesting strategy, but I'm not sure they help you
> much with wanting to make the default memory option "unlimited".
> Generational is good at reducing the number of full GCs you do, but does
> not necessarily eliminate them.

Zones were a completely orthogonal idea; I brought them up merely
because they were something I'd been thinking would be a good idea
w.r.t. VMs, and since a few people here were talking about low-level
VM issues, I thought it would be worth mentioning :-)

> The zones strategy you suggest may work well with apps that have a lot
> of class loaders and allocate somewhat evenly across them, but I think
> it may cause a lot of overhead.  Would your approach be generational?
> Would you need Write Barriers for both references from other generations
> as well as other Class Loaders?

I've no idea if it would cause a lot of overhead. It might. I'd
imagine that each zone would have its own nursery and mature
generation, but possibly share a permanent generation.
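As a purely hypothetical sketch of that layout (every name below is invented for illustration; this is not Harmony code), each class loader might map to its own zone, with the mapping weakly keyed so a zone becomes reclaimable when its loader is unloaded:

```java
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical per-class-loader zones (all names invented). Each zone
// owns its own nursery and mature space; the registry is weakly keyed
// so a zone becomes collectable once its class loader is unloaded.
class Zone {
    long nurseryBytes;  // young-generation space for this loader
    long matureBytes;   // tenured space for this loader
}

class ZoneRegistry {
    private final Map<ClassLoader, Zone> zones = new WeakHashMap<>();

    // One zone per defining loader, created lazily on first use.
    synchronized Zone zoneFor(ClassLoader loader) {
        return zones.computeIfAbsent(loader, l -> new Zone());
    }
}
```

The weak keying is what would let "unload the class loader, reclaim its whole zone" fall out for free, assuming the collector could treat each zone independently.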

> If you were to have a Web Application, would you basically need a write
> barrier for every string you allocate, since the String Class is loaded
> in a parent class loader?  If so, this may cause more overhead than you
> would want for the stated benefit.

I suspect that if a write barrier were needed, it would quickly kill
any of the efficiencies. Perhaps there are other optimisations that
could avoid such things?
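To make the suspected cost concrete, here is a hypothetical model of such a barrier (all names and the zone-id scheme are invented): every reference store whose source and target fall in different zones pays for a remembered-set insert, which is exactly the per-String overhead worried about above.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical cost model for a cross-zone write barrier (all names
// invented). A store is free when both objects share a zone, but a
// cross-zone store pays for a remembered-set insert on every write.
class BarrierSketch {
    static final Set<Long> rememberedSet = new HashSet<>();

    // zoneOf-style mapping is faked here: the caller passes zone ids
    // and a stand-in address for the object holding the reference.
    static void store(long holderZone, long holderAddr, long targetZone) {
        if (holderZone != targetZone) {
            rememberedSet.add(holderAddr);  // barrier cost on cross-zone stores
        }
    }

    public static void main(String[] args) {
        store(1, 0x1000, 1);  // same zone: no barrier work
        store(1, 0x2000, 2);  // cross-zone: remembered-set insert
        System.out.println(rememberedSet.size());  // prints 1
    }
}
```

If every String allocated by a web application triggered the cross-zone path (String being defined by a parent loader), that insert would run on essentially every store, which matches the concern quoted above.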


Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org
