db-derby-user mailing list archives

From Steen Jansdal <st...@jansdal.dk>
Subject Re: Optimized Inserts
Date Tue, 04 Jan 2005 07:23:58 GMT
Wil Hunt wrote:
> Hey all,
>    I have the feeling the answer to this question is going to be rather 
> obvious, so please forgive me if I seem a bit obtuse. :)
>    I have a situation where I need to essentially replicate a MySQL 
> database over the network and store it in an embedded Derby instance.  
> As a result, I have many insert statements which are taking an 
> inordinate amount of time.  If I use MySQL as the local copy, the times 
> are about 10 times faster for the entire process.  I realize that MySQL 
> is an optimized C database, but I think a 10-fold difference is a bit 
> excessive, so I'm guessing that I just need to tune Derby.
>    The inserts are causing problems because I don't know ahead of 
> time whether the record really is an insert or not.  I need to do a local 
> read to see if the record exists, and if so, perform an update instead 
> of an insert.  This local read is taking about 45% of my processing 
> time, which seems strange -- realizing that a failed search is an 
> exhaustive search, I presume that the primary key, which I'm doing my 
> search on, is indexed, so it should only take log2(n) time or 
> thereabouts.  The other 50% is in the actual insert, which I presume is 
> causing so much time due to the fact that it is committing each record 
> individually.  My thought was to turn autoCommit off and then commit at 
> the end of the transaction.  The problem is that when I turn autoCommit 
> off, my read operation blocks.  Note that I have my Connection set to 
> TRANSACTION_READ_COMMITTED, so I expect this to not block and simply 
> read from the data that was previously committed to the database.  This 
> is not the case, however.
>    Consequently, when my read blocks, my application stalls.  There is 
> another thread that is supposed to read from the database; and it, too, 
> is blocked.  What I want is a fast read that will only return committed 
> records and not block -- or if it must block, have it be a very short 
> period of time (< 50ms).  How can I do this?
>    By the way, if I set autoCommit to true, everything works, no 
> blocking, but the process of writing 110,000 records to the database 
> takes nearly 10 minutes.
>    Is there an easy way around this?  If not, is there a hard way? :)  
> Like I said, I'm guessing that I'm missing something obvious; so please 
> let me know what that is!
> Thanks in advance for your time and consideration,
> Wil

Are you using parameters (PreparedStatement)? They can significantly 
improve performance.
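A minimal sketch of what that might look like, combined with autoCommit off and periodic commits as Wil described. Table and column names (MYTABLE, ID, VAL), the JDBC URL argument, and the 1000-row commit interval are all hypothetical, and trying the UPDATE first (falling back to INSERT when no row was touched) is just one way to avoid the separate SELECT probe -- not necessarily the fastest for Derby:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchUpsert {
    // Hypothetical schema: MYTABLE(ID INT PRIMARY KEY, VAL VARCHAR(64))
    static final String UPDATE_SQL = "UPDATE MYTABLE SET VAL = ? WHERE ID = ?";
    static final String INSERT_SQL = "INSERT INTO MYTABLE (ID, VAL) VALUES (?, ?)";

    // Try the UPDATE first; if it touched no row, the key is new, so INSERT.
    // This replaces the separate existence check with a single statement in
    // the common (update) case. Both PreparedStatements are reused across rows.
    static void upsert(PreparedStatement update, PreparedStatement insert,
                       int id, String val) throws SQLException {
        update.setString(1, val);
        update.setInt(2, id);
        if (update.executeUpdate() == 0) {
            insert.setInt(1, id);
            insert.setString(2, val);
            insert.executeUpdate();
        }
    }

    public static void main(String[] args) throws SQLException {
        // e.g. args[0] = "jdbc:derby:mydb;create=true" (embedded driver on classpath)
        try (Connection conn = DriverManager.getConnection(args[0])) {
            conn.setAutoCommit(false);  // batch many rows into one transaction
            try (PreparedStatement up = conn.prepareStatement(UPDATE_SQL);
                 PreparedStatement ins = conn.prepareStatement(INSERT_SQL)) {
                for (int i = 0; i < 110_000; i++) {
                    upsert(up, ins, i, "row" + i);
                    // Commit in chunks so the lock table stays small and other
                    // readers are not blocked for the whole load.
                    if (i % 1000 == 999) conn.commit();
                }
                conn.commit();
            }
        }
    }
}
```

Committing every N rows rather than once at the very end is a compromise: it keeps most of the batching benefit while releasing row locks often enough that a concurrent READ_COMMITTED reader should only ever wait briefly.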

