db-derby-user mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Binary Stream Cleanup
Date Thu, 09 Aug 2007 23:09:14 GMT
I can't be sure, but this sounds like the expected memory usage for the
default configuration of the database server.  The default page cache
size is 1000 pages, and with a blob table it is likely you have 32k pages.
Normal operation is for Derby to fill up all 1000 pages as soon
as 1000 different pages are accessed, either by reads or writes.  In your
case your 500MB blob created more than 10,000 pages.  At that
point it maintains the pages in memory, using a page replacement
algorithm to optimize hits.

If you don't expect to use the db again, then you can shut down the
database and the memory should go away.  If you do not want the cache
this big, you can alter the page cache size; probably best not to set it
smaller than ~50 pages.
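For example, the cache size can be lowered in derby.properties (a sketch;
derby.storage.pageCacheSize is Derby's property for this, but the value
of 200 here is just an illustration, not a recommendation):

```properties
# derby.properties -- shrink the page cache from the default 1000 pages
# to 200; with 32 KB pages that caps the cache at roughly 6 MB.
derby.storage.pageCacheSize=200
```

For the shutdown route, connecting with the shutdown=true attribute
(e.g. jdbc:derby:mydb;shutdown=true in embedded mode) closes the
database and lets the cache be garbage collected.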

Raymond Kroeker wrote:
> Hi All,
>   Is there something special/specific I have to do after writing a large 
> binary stream to my database in order to reclaim memory resources?
>   My use-case is:
>   1.  Create a new row containing a name, content and content size.  
> (500MB file)
>   2.  Commit via the connection.  (autocommit is turned off)
>   What I'm noticing is that the memory usage jumps from 1.2 MB (after 
> the driver is loaded) to 41.9 MB after the insertion.  After numerous 
> manual GCs the memory remains at 34 MB.
>   I've made no attempt to tune page/cache sizes; is this what I'm missing?
> I'm using:
>   Derby <>
>   Java 1.6.0-b105
>   Ubuntu 6.06.1 LTS
> -- 
> --------------------------------------------------------------------------------
> Raymond Kroeker
> thinkParity Solutions Inc.
