db-derby-dev mailing list archives

From "Ashwin Jayaprakash" <ashwin...@rediffmail.com>
Subject High throughput, min durability - tuning
Date Tue, 16 May 2006 16:05:07 GMT
Hello,
I'm posting several questions about performance tuning. I'd be grateful for any answers people can provide.

I'm using Derby as an embedded database in a prototype application, where it runs queries very frequently
on a small set of rows. Rows are inserted into the tables at very high rates, and the queries run on these
new rows while they are still hot in the page cache. Once a query has been executed on those rows, they can
be discarded. In effect, I want Derby to behave like an in-memory DB, such as Oracle TimesTen.

I've tuned the documented parameters such as derby.storage.pageCacheSize, derby.storage.pageSize and
derby.system.durability=test. Are there any other ways to drastically reduce commits to disk and thereby
eliminate disk I/O? Perhaps by rewriting a key class and putting it ahead of Derby on the classpath so that
the default behaviour is overridden?
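
In case it helps, here is a minimal sketch of what I mean by setting these - system properties set before
the embedded driver is loaded. The database name and the values are just examples, not what I claim are
the right numbers:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class BootWithTuning {
        public static void main(String[] args) throws Exception {
            // Set before the embedded driver is loaded, so the engine sees them at boot.
            System.setProperty("derby.system.durability", "test");      // skip log syncs on commit
            System.setProperty("derby.storage.pageCacheSize", "4000");  // in pages, not bytes
            System.setProperty("derby.storage.pageSize", "8192");       // in bytes

            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection("jdbc:derby:myTestDb;create=true");
            // ... run the workload ...
            conn.close();
        }
    }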

Are there any other (undocumented) parameters that increase the scalability of the DB? In embedded mode I
see that Derby creates only two of its own threads - antiGC and one other. Does this mean that if my
application scales well, Derby will scale along with it?

The application works like this: a large number of inserts, using a monotonically increasing id as the
primary key (which is indexed). Updates are practically absent. Once inserted, the rows are used in a
select query immediately. There is one such table for each producer-consumer thread pair, and there will
be several such tables and thread pairs. Contention is only between the two threads on each table. The rows
will never be used again, so durability is NOT a requirement.
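
To make the access pattern concrete, here is a stripped-down sketch of one such table pair. The table and
column names are placeholders, and the producer and consumer sides are shown in a single thread just for
brevity:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ProducerConsumerSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection("jdbc:derby:myTestDb;create=true");

            Statement st = conn.createStatement();
            st.execute("CREATE TABLE events_1 (id BIGINT PRIMARY KEY, payload VARCHAR(256))");
            st.close();

            PreparedStatement insert =
                    conn.prepareStatement("INSERT INTO events_1 (id, payload) VALUES (?, ?)");
            PreparedStatement select =
                    conn.prepareStatement("SELECT id, payload FROM events_1 WHERE id > ?");

            long lastSeenId = 0;
            for (long id = 1; id <= 1000; id++) {
                // Producer side: insert with a monotonically increasing primary key.
                insert.setLong(1, id);
                insert.setString(2, "event " + id);
                insert.executeUpdate();

                // Consumer side: the new row is queried immediately and never read again.
                select.setLong(1, lastSeenId);
                ResultSet rs = select.executeQuery();
                while (rs.next()) {
                    lastSeenId = rs.getLong(1);
                }
                rs.close();
            }

            insert.close();
            select.close();
            conn.close();
        }
    }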

Any hacks or tuning tips for the scenario described above would be appreciated.

Thanks,
Ashwin.










