db-derby-user mailing list archives

From gregsmit <gregs...@us.ibm.com>
Subject Re: Memory Leak or Expected Behavior - BtreePostCommit
Date Mon, 29 Oct 2007 18:39:16 GMT


Hi Mike,

OK, it sounds like our stress test fits into one of those "worst case for
Derby" categories:

> Does your stress application happen to make the table average around 0
> rows (ie. add rows and then delete them all, ...)

Yes, this is exactly what our stress application does.  It loops through 1)
Add Entry, 2) List all Entries, 3) Delete the Entry just added.  It does this
on 25 threads with no think time or pauses.
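For concreteness, that loop has roughly the shape sketched below -- a pure-Java stand-in using an in-memory list (the real test issues JDBC statements against Derby; all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicInteger;

public class StressPattern {
    // Stand-in for the Derby table; the real test goes through JDBC.
    static final List<String> table = new ArrayList<>();
    static final AtomicInteger iterations = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[25];   // 25 threads, no think time
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int n = 0; n < 1000; n++) {
                    String key = UUID.randomUUID().toString();
                    synchronized (table) { table.add(key); }      // 1) Add Entry
                    synchronized (table) { table.size(); }        // 2) List all Entries
                    synchronized (table) { table.remove(key); }   // 3) Delete the entry just added
                    iterations.incrementAndGet();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // Every add is immediately deleted, so the table averages ~0 rows --
        // the pattern Mike describes as the worst case for the post commit queue.
        System.out.println("iterations=" + iterations.get() + " rows=" + table.size());
    }
}
```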

> Just for information do you see the memory go down at the end of your
> stress test after you stop the active deletes?

I don't know about this -- unfortunately, we have usually stopped everything
after we start to see the memory increase.  I'll have to run some other tests
to see whether the memory drops back down when the stress stops.

I'll change our test to put some pauses in, so that the post-commit tasks
get a chance to run.  Unfortunately, we need to go through all of these
paths over and over to make sure we don't have a leak somewhere else.  The
more pauses we take, the longer we need to run to consider our code properly
tested.

I do agree with you -- it seems like there should be some way to force
the post commit queue to become active if it gets too large, to prevent these
"it looks like a leak" situations.  I think this is a pretty common stress
testing scenario.  We are running on a 2-way (dual-processor) machine, and
still have hit this, I guess you could call it, "Post Commit Queue
Starvation" problem.
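As a toy illustration of that starvation (the numbers and names below are made up, not Derby internals): model scheduling as rounds where each worker thread defers one btree reclaim per round, while the single post commit thread can only drain a batch in the slices where no worker holds the lock.  The backlog then grows without bound:

```java
public class PostCommitModel {
    /**
     * rounds: scheduling rounds simulated
     * workers: threads, each deferring one btree reclaim per round
     * batch: items the post-commit thread drains in one free slice
     * freeSlicesPerTen: out of every 10 rounds, how many find the lock free
     */
    static long backlogAfter(int rounds, int workers, int batch, int freeSlicesPerTen) {
        long queue = 0;
        for (int r = 0; r < rounds; r++) {
            queue += workers;                        // deletes enqueue reclaim work
            boolean lockFree = (r % 10) < freeSlicesPerTen;
            if (lockFree) queue = Math.max(0, queue - batch);
        }
        return queue;
    }

    public static void main(String[] args) {
        // 25 workers, drain 20 per slice, lock free only 2 slices in 10:
        System.out.println(backlogAfter(1000, 25, 20, 2));   // backlog grows steadily
        // Same workload with the lock always free and a big enough batch keeps up:
        System.out.println(backlogAfter(1000, 25, 25, 10));  // backlog stays at 0
    }
}
```

The point of the model is only that drain capacity per slice times free slices must exceed the enqueue rate, or memory use climbs exactly as our test observes.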

Thanks for the help,
Greg



Mike Matrigali wrote:
> 
> There are a few things you can check in your application.  Make sure
> to commit transactions; a single transaction without a commit that
> holds a lock on the btree will cause this problem. Does your stress
> application happen to make the table average around 0 rows (ie. add rows
> and then delete them all, ...)?  This is likely the worst case for
> Derby.  Just for information,
> do you see the memory go down at the end of your stress test after you
> stop the active deletes?
> 
> I have seen this behavior in stress tests where there are lots of user
> threads vs. the single post commit thread.  Basically the post commit
> thread only gets a chance to run 1/(number of threads) of the time, and
> if during that time any of the transactions are holding the lock then
> the work can't be done.
> 
> Historically I have not seen this be a problem in a real customer
> application, only in stress tests.  Usually in most applications there
> is some downtime where the post commit gets to "catch up", and then
> these items can be run and memory released.  But of course there could
> be a real app that requires this constant throughput, with no idle time
> for the post commit.
> 
> A couple of improvements could be made to Derby in this area.  The
> current post commit architecture was developed when the code was
> targeted at embedded applications likely to be running on small single
> processor machines with a small number of threads.  Now that cheap,
> very fast dual cores are the norm for even the smallest
> laptop/deskside machine, it might make sense to update the post commit
> thread code to recognize when it is falling behind and somehow increase
> its throughput (ie. either add more threads, maybe async execution, or
> perform more chunks of work before giving up its time ...).
> 
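One way to sketch that "notice you're falling behind" idea (purely hypothetical, not Derby code): let the drainer watch its own backlog and do bigger chunks of work per slice whenever a high-water mark is crossed:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class AdaptiveDrainer {
    static final int BASE_BATCH = 8;      // work units per slice when keeping up
    static final int HIGH_WATER = 1000;   // backlog size that signals starvation

    final ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue<>();
    private int batch = BASE_BATCH;

    void enqueue(Runnable reclaimWork) { queue.add(reclaimWork); }

    /** Runs one scheduling slice; doubles the batch while the backlog is high. */
    int drainSlice() {
        batch = (queue.size() > HIGH_WATER) ? batch * 2 : BASE_BATCH;
        int done = 0;
        Runnable r;
        while (done < batch && (r = queue.poll()) != null) {
            r.run();
            done++;
        }
        return done;
    }
}
```

With these made-up numbers, a fixed batch of 8 would need 625 slices to clear a backlog of 5,000 items; the doubling rule clears it in 123 slices and then falls back to the base batch once it has caught up.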

-- 
View this message in context: http://www.nabble.com/Memory-Leak-or-Expected-Behavior---BtreePostCommit-tf4712054.html#a13474021
Sent from the Apache Derby Users mailing list archive at Nabble.com.

