db-derby-user mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Memory Leak or Expected Behavior - BtreePostCommit
Date Mon, 29 Oct 2007 17:50:30 GMT
gregsmit wrote:
> 
> Hi,
> 
> We have an application that is using embedded Derby (10.3), where we do a
> lot of adds and deletes.
> 
> When we run this application under our stress scenarios, we see a memory
> leak in Derby.  When we look at our heap dumps, what we see is an ever
> growing number of these classes:
> 
> org.apache.derby.impl.services.daemon.ServiceRecord
> org.apache.derby.impl.store.access.btree.BtreePostCommit
> 
> I found some other documentation that said that the BtreePostCommit is a job
> that runs after a delete is committed, that frees space in a Btree, and that
> it requires a table lock.  What I think is happening is that because we
> are running in a constant heavily stressed state, our deletes occur and
> then this job is placed on a queue after the commit.  But because we are
> in a heavily stressed state, these jobs never run, so the queue grows
> larger and larger.
> 
> Does this theory sound right, or does anyone have a better explanation?
> 
> OK, assuming it's right -- Is this the correct behavior?  Should these
> Post Commit jobs continue to get queued, and never run?  Is there a way
> we can force them to grab the locks and complete?  Maybe there is
> something wrong with the way that we are committing that doesn't allow
> them to run?
> 
> We do not have a lot of experience with Derby, so we may be doing something
> wrong.
> 
> Thanks for any help,
> Greg
> 
> 
There are a few things you can check in your application.  Make sure to
commit transactions: a single transaction left uncommitted that holds a
lock on the btree will cause this problem.  Does your stress application
happen to keep the table at around 0 rows on average (i.e. add rows and
then delete them all)?  That is likely the worst case for Derby.  Just
for information, do you see the memory go down at the end of your stress
test, after you stop the active deletes?

I have seen this behavior in stress tests where there are lots of user
threads competing with the single post-commit thread.  Basically the
post-commit thread only gets a chance to run 1/(number of threads) of
the time, and if during that time any of the transactions is holding the
lock, then the work can't be done.
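To illustrate the starvation, here is a toy Java simulation (my own sketch, not Derby code -- the class name and all the numbers are invented): several user threads repeatedly take a shared "table lock" and enqueue a post-commit job each time, while a single daemon thread can only drain the queue when it wins that same lock.  Under stress the backlog grows; once the users stop, the daemon catches up and the queue empties:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
import java.util.concurrent.locks.*;

public class PostCommitBacklog {

    /**
     * Runs the toy model for stressMillis of "stress", then lets the
     * daemon catch up.  Returns {backlogDuringStress, backlogAfter, drained}.
     */
    static int[] simulate(long stressMillis) throws InterruptedException {
        final ReentrantLock tableLock = new ReentrantLock();
        final BlockingQueue<Runnable> postCommitQueue = new LinkedBlockingQueue<>();
        final AtomicBoolean stressing = new AtomicBoolean(true);
        final AtomicBoolean producersDone = new AtomicBoolean(false);
        final AtomicInteger drained = new AtomicInteger();
        final int userThreads = 8;

        ExecutorService users = Executors.newFixedThreadPool(userThreads);
        for (int i = 0; i < userThreads; i++) {
            users.submit(() -> {
                while (stressing.get()) {
                    tableLock.lock();      // the "transaction" takes the table lock
                    try {
                        if (postCommitQueue.size() < 100_000) {  // cap so the toy stays small
                            postCommitQueue.add(drained::incrementAndGet);
                        }
                    } finally {
                        tableLock.unlock();
                    }
                }
            });
        }

        // Single daemon: does post-commit work only when it can win the lock.
        Thread daemon = new Thread(() -> {
            while (!(producersDone.get() && postCommitQueue.isEmpty())) {
                if (tableLock.tryLock()) {
                    try {
                        Runnable job = postCommitQueue.poll();
                        if (job != null) job.run();
                    } finally {
                        tableLock.unlock();
                    }
                }
            }
        });
        daemon.start();

        Thread.sleep(stressMillis);
        int backlogDuringStress = postCommitQueue.size();

        stressing.set(false);              // "downtime": user threads stop
        users.shutdown();
        users.awaitTermination(10, TimeUnit.SECONDS);
        producersDone.set(true);
        daemon.join();                     // daemon drains the queue and exits
        return new int[] { backlogDuringStress, postCommitQueue.size(), drained.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = simulate(200);
        System.out.println("backlog during stress: " + r[0]);
        System.out.println("backlog after downtime: " + r[1] + ", jobs drained: " + r[2]);
    }
}
```

With 8 producers and 1 daemon, the daemon wins the lock only a small fraction of the time, so the backlog climbs during the stress phase and only clears once the producers go quiet -- the same shape as the heap growth described above.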

Historically I have not seen this be a problem in a real customer
application, only in stress tests.  Usually most applications have some
downtime during which the post commit gets to "catch up": the queued
items are run and the memory is released.  But of course there could be
a real app that requires this constant throughput, leaving no time for
the post commit.

A couple of improvements could be made to Derby in this area.  The
current post-commit architecture was developed when the code was
targeted at embedded applications likely to be running on small
single-processor machines with a small number of threads.  Now that
cheap, very fast dual cores are the norm for even the smallest
laptop/deskside machine, it might make sense to update the post-commit
thread code to recognize when it is falling behind and somehow increase
its throughput (e.g. add more threads, maybe use async execution, or
perform more chunks of work before giving up its time slice).
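The last idea -- doing bigger chunks of work when the daemon notices it is falling behind -- can be sketched with a toy drain loop (a hypothetical policy of my own, not Derby's actual daemon code).  A fixed batch of 1 needs many extra rounds to clear the backlog after the stress ends, while doubling the batch whenever the queue is deeper than the batch keeps pace with the incoming work:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class AdaptiveBatch {

    /**
     * Each round, incomingPerRound new jobs arrive (for the first
     * stressRounds rounds) and up to 'batch' jobs are drained.  When
     * adaptive, the batch doubles whenever the queue is deeper than the
     * batch.  Returns the number of rounds until the queue is empty.
     */
    static int roundsToDrain(int stressRounds, int incomingPerRound, boolean adaptive) {
        Queue<Integer> queue = new ArrayDeque<>();
        int batch = 1, rounds = 0;
        while (rounds < stressRounds || !queue.isEmpty()) {
            rounds++;
            if (rounds <= stressRounds) {             // stress phase: deletes keep arriving
                for (int i = 0; i < incomingPerRound; i++) queue.add(i);
            }
            for (int i = 0; i < batch && !queue.isEmpty(); i++) {
                queue.poll();                          // do one chunk of post-commit work
            }
            if (adaptive && queue.size() > batch) {
                batch *= 2;                            // falling behind: bigger batch next round
            }
        }
        return rounds;
    }

    public static void main(String[] args) {
        System.out.println("fixed batch:    " + roundsToDrain(10, 4, false) + " rounds");
        System.out.println("adaptive batch: " + roundsToDrain(10, 4, true) + " rounds");
    }
}
```

With 4 jobs arriving per round for 10 rounds, the fixed-batch drainer takes 40 rounds to empty the queue, while the adaptive one is already caught up when the stress ends at round 10.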



