Thanks, this helps a lot and the answer was fast!
Pertti Ylijukuri <email@example.com> writes:
> I'm using the Derby network server. I have started Derby with the following:
> derby.storage.pageCacheSize = 20000
> derby.storage.pageSize = 4096
> I expect that Derby needs 20000 * 4096 bytes = ~80 MB of memory.
> Anyway, sometimes Derby throws java.lang.OutOfMemoryError: Java heap space.
> How much memory do I have to give Derby, or is there a memory leak in Derby?
It is difficult to estimate how much memory Derby needs because it
depends on the type of load and also on the application that runs on
top of Derby. I usually set the max heap size to between two and three
times (pageCacheSize * pageSize), and that has been enough most of the
time. Of course, the best way to determine how much memory your
application needs is to run it with a very large heap and monitor the
memory usage (for instance, with jconsole).
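To make the rule of thumb concrete, here is a small sketch using the settings from your post (the 3x multiplier is just the upper end of the range, not an exact requirement):

```java
// Rule-of-thumb heap sizing: 2-3x (pageCacheSize * pageSize).
// The two settings below are taken from the original post.
public class HeapEstimate {
    public static void main(String[] args) {
        long pageCacheSize = 20_000; // derby.storage.pageCacheSize
        long pageSize = 4_096;       // derby.storage.pageSize, in bytes
        long cacheBytes = pageCacheSize * pageSize;
        System.out.println("page cache: " + cacheBytes / (1024 * 1024) + " MB");
        System.out.println("suggested -Xmx: " + 3 * cacheBytes / (1024 * 1024) + " MB");
    }
}
```

With those settings you would start the JVM with something like -Xmx256m and adjust from what you observe in practice.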
> This is derby memory dump using command jmap -histo [PID]
> Attaching to process ID 354, please wait...
> Debugger attached successfully.
> Client compiler detected.
> JVM version is 1.5.0_06-b05
> Iterating over heap. This may take a while...
> Object Histogram:
> Size Count Class description
> 86363952 81679 byte
> 68577936 1428707 org.apache.derby.impl.store.raw.data.StoredRecordHeader
> 30174120 1257255 org.apache.derby.impl.store.raw.data.RecordId
The memory dump shows ~82 MB of byte arrays, which is expected since
the page cache should contain about 80 MB of raw data. Each cached page
also contains a slot table, which is why there are so many
StoredRecordHeader and RecordId objects. If the pages contain many
small records, the slot tables will be large and consume more
memory. In your case, the slot tables take up more memory than the
raw data in the page cache.
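As a quick sanity check (just arithmetic on the histogram totals you pasted, nothing Derby-specific), dividing total bytes by object count gives the per-object overhead:

```java
// Per-object overhead computed from the jmap histogram above.
// The constants are the pasted totals from the original message.
public class HistogramOverhead {
    public static void main(String[] args) {
        long headerBytes = 68_577_936L; // total size of StoredRecordHeader
        long headerCount = 1_428_707L;
        long idBytes = 30_174_120L;     // total size of RecordId
        long idCount = 1_257_255L;
        System.out.println("StoredRecordHeader: " + headerBytes / headerCount + " bytes each");
        System.out.println("RecordId: " + idBytes / idCount + " bytes each");
    }
}
```

So each cached record costs roughly 48 + 24 = 72 bytes of slot-table overhead on top of its raw data, which is why pages full of small records inflate the heap so much.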
I have experienced something similar where an application ran out of
memory when performing an index scan on a large table. During normal
operation, the application had more than enough memory, but it always
ran out of memory on the scan ("select count(id) from table"). This
happened because the scan would throw out all the data pages from the
page cache and replace them with index pages. Since the index pages
had very small records, their slot tables were larger and the memory
usage increased dramatically.