harmony-dev mailing list archives

From "Xiao-Feng Li" <xiaofeng...@gmail.com>
Subject Re: [DRLVM][GC] parallel compaction and wasted virtual space
Date Mon, 30 Oct 2006 02:08:29 GMT
On 10/29/06, Rana Dasgupta <rdasgupt@gmail.com> wrote:
> Xiao Feng,
>   I will read the reference to understand what are the compressor
> advantages, and how the algorithm is implemented, thanks.
> Even when you have 1GB of physical memory, is there not an overhead of page
> faults?

Yes, I agree that page faults will definitely be an overhead. I guess
the page-mapping overhead in Compressor is lower than the benefits it
achieves, but yes, we need to evaluate it independently.

> Is it an option to compact the heap in parts and/or to increase the number
> of passes to reduce the space overhead?

The key idea of Compressor is to keep the object order during parallel
compaction. There are other algorithms, like "mark-copy", that require
less additional copy space but can't maintain the object order. To
enable the parallel compaction of multiple blocks, we have to assume
that in the worst case the to-space is the same size as the
from-space. We can use a to-space 30% the size of the from-space in
most compaction collections without problems, but we need to be
prepared for the worst case. A possible solution is to have a
fall-back algorithm for when the to-space is smaller than required.
This is not a new idea; e.g., GCv4.1 employs something similar and there
are some papers available. [1] in ISMM06 is an example.
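
As a minimal sketch of such a fall-back check (all names here, such as
choose_compactor and the survivor estimate, are hypothetical, not actual
DRLVM code), the per-collection decision could look like:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: decide per collection whether the reserved
 * to-space is large enough for order-preserving parallel compaction,
 * or whether we must fall back to a slower algorithm. */

typedef enum { PARALLEL_COMPRESSOR, INPLACE_FALLBACK } compact_algo_t;

/* estimated_live_bytes is the survivor estimate from the mark phase;
 * to_space_bytes is what we actually managed to reserve
 * (e.g. 30% of the from-space). */
compact_algo_t choose_compactor(size_t estimated_live_bytes,
                                size_t to_space_bytes)
{
    if (estimated_live_bytes <= to_space_bytes)
        return PARALLEL_COMPRESSOR;   /* common case: survivors fit */
    return INPLACE_FALLBACK;          /* worst case: fall back */
}
```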

[1] http://www.cs.purdue.edu/homes/pmcgache/ismm06.pdf

> Is this significantly better than doing semi-space copying at each GC cycle,
> since one major advantage of compaction (other than preserving allocation
> order) over copying was probably less space overhead?

Yes. The major advantage in my opinion is less physical space
overhead, though it introduces a virtual space overhead. If we assume
the same physical space overhead as a semi-space collector, we need to
evaluate the real benefits of object locality to trade off against the
collection pause time.

> Are we looking for a parallel compaction algorithm for all situations, or
> can we think of choosing at JVM startup based on user input, client/server,
> or OS feedback on execution environment?

I think some adaptive choice is better. It means we need to provide
the choices first. :-) I guess it's not a big overhead to have two
parallel compactors.
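
A possible startup-time selection between two such compactors could be
sketched as below. Everything here is hypothetical (the names, the
SLIDING_STYLE alternative, and the 3GB stand-in for a 32-bit process's
usable address space are illustration only):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical startup-time choice between two parallel compactors. */
typedef enum { COMPRESSOR_STYLE, SLIDING_STYLE } compactor_choice_t;

compactor_choice_t pick_compactor_at_startup(uint64_t heap_bytes,
                                             uint64_t addr_space_bytes)
{
    /* Compressor-style needs a virtual reservation as large as the
     * compaction space itself, i.e. roughly 2x heap of address space. */
    if (2 * heap_bytes <= addr_space_bytes)
        return COMPRESSOR_STYLE;
    return SLIDING_STYLE;  /* address space too tight for the reservation */
}
```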


> Sorry for all these questions before reading the book :-)
> Rana
> > On 10/27/06, Xiao-Feng Li <xiaofeng.li@gmail.com> wrote:
> > >
> > > Hi, all, the plan for GCv5 parallel compaction is to apply the idea of
> > > Compressor [1]. But it has an issue I want to discuss with you.
> > > Compressor needs to reserve an unmapped virtual space for compaction.
> > > The size of the reserved part is the same as that of the copy reserve
> > > space in a semi-space collector. This means that part of the virtual
> > > space is unusable for the JVM. In a typical setting, the wasted part
> > > is half the size of the total compaction space. If we have 1GB of
> > > physical memory, the JVM is OK for Compressor because the virtual
> > > space is large enough to waste half; but if the physical memory is >2GB,
> > > Compressor may have a problem on a 32-bit machine: some of the
> > > physically mapped space might be wasted.
> > >
> > > Any opinion on this?
> > >
> > > Thanks,
> > > xiaofeng
> > >
> > > [1] http://www.cs.technion.ac.il/~erez/Papers/compressor-pldi.pdf
> >
> >
