db-derby-user mailing list archives

From Jeremy Boynes <jboy...@apache.org>
Subject Re: Optimized Inserts
Date Fri, 07 Jan 2005 21:04:30 GMT
Wil Hunt wrote:
>    I have a situation where I need to essentially replicate a MySQL 
> database over the network and store it in an embedded Derby instance.  
> 

<snip/>

>    Is there an easy way around this?  If not, is there a hard way? :)  
> Like I said, I'm guessing that I'm missing something obvious; so please 
> let me know what that is!
> 

There are a couple of other things you might consider. One is to bulk 
import the data into a staging table and then use two set-oriented 
operations - an UPDATE followed by an INSERT - to merge the imported 
data into the target table. See

http://incubator.apache.org/derby/manuals/tools/tools89.html
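
For illustration, a rough sketch of what that could look like from 
JDBC - the ITEMS/STAGING tables, the file name, and the column layout 
here are all made up:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class StagingMerge {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:derby:mydb");
            conn.setAutoCommit(false);

            // Bulk-load the dump file into the (pre-created) staging table.
            CallableStatement cs = conn.prepareCall(
                "CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(?, ?, ?, ?, ?, ?, ?)");
            cs.setString(1, "APP");       // schema
            cs.setString(2, "STAGING");   // table
            cs.setString(3, "items.csv"); // file exported from MySQL
            cs.setString(4, ",");         // column delimiter
            cs.setString(5, "\"");        // character delimiter
            cs.setString(6, null);        // default codeset
            cs.setShort(7, (short) 1);    // replace any existing rows
            cs.execute();

            Statement s = conn.createStatement();
            // First set-oriented operation: update rows that already exist.
            s.executeUpdate(
                "UPDATE ITEMS SET NAME = "
                + "(SELECT S.NAME FROM STAGING S WHERE S.ID = ITEMS.ID) "
                + "WHERE ID IN (SELECT ID FROM STAGING)");
            // Second: insert the rows that don't exist yet.
            s.executeUpdate(
                "INSERT INTO ITEMS SELECT * FROM STAGING S "
                + "WHERE S.ID NOT IN (SELECT ID FROM ITEMS)");
            conn.commit();
            conn.close();
        }
    }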

A similar alternative, if you can't move files around or use pipes, 
would be to write to unlogged temporary tables (see

http://incubator.apache.org/derby/manuals/reference/sqlj33.html

) and again merge into the target using two operations.
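
For example (again with made-up table and column names), something 
along these lines should work - note that Derby's temporary tables live 
in the SESSION schema and must be declared NOT LOGGED:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class TempTableMerge {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:derby:mydb");
            conn.setAutoCommit(false);

            Statement s = conn.createStatement();
            // Unlogged temporary table; lives in the SESSION schema.
            s.executeUpdate(
                "DECLARE GLOBAL TEMPORARY TABLE STAGING "
                + "(ID INT, NAME VARCHAR(100)) "
                + "ON COMMIT PRESERVE ROWS NOT LOGGED");

            // Write the replicated rows into it (batched for speed);
            // in practice you would loop over the rows read from MySQL.
            PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO SESSION.STAGING VALUES (?, ?)");
            ps.setInt(1, 1);
            ps.setString(2, "example");
            ps.addBatch();
            ps.executeBatch();

            // ...then merge into ITEMS with the same UPDATE/INSERT pair
            // as above, reading from SESSION.STAGING instead of STAGING.
            conn.commit();
        }
    }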

You might also consider using a Java stored procedure inside your Derby 
instance to pull data from the MySQL instance.
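
Roughly, the procedure is just a static method that opens a client 
connection out to MySQL and writes back into Derby through the nested 
"jdbc:default:connection" - the table names and MySQL URL here are 
invented:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class Replicator {
        // Registered in Derby with something like:
        //   CREATE PROCEDURE PULL_ITEMS() LANGUAGE JAVA PARAMETER STYLE JAVA
        //       MODIFIES SQL DATA EXTERNAL NAME 'Replicator.pullItems'
        public static void pullItems() throws SQLException {
            // Nested connection to the Derby instance running the procedure.
            Connection derby =
                DriverManager.getConnection("jdbc:default:connection");
            // Ordinary client connection out to the MySQL instance.
            Connection mysql = DriverManager.getConnection(
                "jdbc:mysql://host/db", "user", "password");

            PreparedStatement ins =
                derby.prepareStatement("INSERT INTO ITEMS VALUES (?, ?)");
            ResultSet rs = mysql.createStatement()
                .executeQuery("SELECT ID, NAME FROM ITEMS");
            while (rs.next()) {
                ins.setInt(1, rs.getInt(1));
                ins.setString(2, rs.getString(2));
                ins.executeUpdate();
            }
            mysql.close();
        }
    }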

If you have a large number of changes to apply, you might also consider 
a commit interval somewhere between the "all at once" and "every 
change" behavior you get with autocommit off and on respectively - say, 
programmatically committing every 10,000 changes. Your application 
would of course need to be able to handle a partial merge.
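
Sketched out, that looks like this (assuming the changes arrive as a 
ResultSet and go into a made-up ITEMS table):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class IntervalCommit {
        static void applyChanges(Connection conn, ResultSet changes)
                throws SQLException {
            conn.setAutoCommit(false);
            PreparedStatement ins =
                conn.prepareStatement("INSERT INTO ITEMS VALUES (?, ?)");
            int applied = 0;
            while (changes.next()) {
                ins.setInt(1, changes.getInt(1));
                ins.setString(2, changes.getString(2));
                ins.executeUpdate();
                if (++applied % 10000 == 0) {
                    conn.commit(); // release locks accumulated so far
                }
            }
            conn.commit(); // commit the final partial interval
        }
    }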

None of this will get around all of your locking issues; however, it 
may reduce the time you hold locks so that their impact is smaller.

--
Jeremy
