db-derby-user mailing list archives

From Wil Hunt <w...@lunarlogic.com>
Subject Re: Optimized Inserts
Date Mon, 10 Jan 2005 19:18:33 GMT

Jeremy Boynes wrote:

> Wil Hunt wrote:
>
>>    I have a situation where I need to essentially replicate a MySQL 
>> database over the network and store it in an embedded Derby instance. 
>
>
> <snip/>
>
>>    Is there an easy way around this?  If not, is there a hard way? 
>> :)  Like I said, I'm guessing that I'm missing something obvious; so 
>> please let me know what that is!
>>
>
> If you have a large number of changes to apply, you might also 
> consider a commit interval somewhere between the "all" and "every" 
> models you get with autocommit off and on respectively - say, 
> programmatically committing every 10000 changes. Your applications 
> would of course need to be able to handle a partial merge.
>
>
Thank you for your ideas.  I used your commit-interval approach, along 
with Oystein's suggestion of inserting all records first and only 
updating on failure.  I've managed to turn the performance from hideous 
to reasonable.  I'll keep your other ideas in mind for future fixes, but 
don't have the time to look into them yet.
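For anyone finding this thread later, the combined approach (insert first, fall back to an update on a duplicate-key failure, and commit every N changes rather than per-row or all-at-once) can be sketched as below. This is a minimal illustration using Python's built-in sqlite3 so it runs standalone; the table name, columns, and batch size are made up for the example, but the same try-insert/catch/update loop maps directly onto JDBC against embedded Derby, catching the duplicate-key SQLException instead of IntegrityError.

```python
import sqlite3

def merge_rows(conn, rows, commit_every=10000):
    """Insert each (id, val) row; on a duplicate key, update instead.
    Commit every `commit_every` changes instead of per-row or all-at-once."""
    cur = conn.cursor()
    for i, (key, val) in enumerate(rows, start=1):
        try:
            # Optimistic path: assume the row is new.
            cur.execute("INSERT INTO t (id, val) VALUES (?, ?)", (key, val))
        except sqlite3.IntegrityError:
            # Duplicate primary key: fall back to an update.
            cur.execute("UPDATE t SET val = ? WHERE id = ?", (val, key))
        if i % commit_every == 0:
            conn.commit()
    conn.commit()  # flush the final partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

merge_rows(conn, [(1, "a"), (2, "b")], commit_every=2)
# Re-merging with one changed row exercises the update fallback.
merge_rows(conn, [(2, "b2"), (3, "c")], commit_every=2)
print(sorted(conn.execute("SELECT id, val FROM t")))
```

As noted above, an application using this has to tolerate a partially applied merge if it fails between commits.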

Thanks again to all who provided ideas.

Wil
