db-derby-user mailing list archives

From Duncan Groenewald <dagroenew...@optusnet.com.au>
Subject Re: Derby Transaction Log Shipping
Date Fri, 08 Feb 2008 10:52:10 GMT
I still don't know if I really understand the Derby model, as it seems
the transaction logs are archived when a database backup is run.  So
here is a scenario:


Day 1: Back up the primary Derby database (enabling log archiving), copy the
backup database to the secondary server and boot the secondary server to
check it is all OK.
Day 2: Back up the primary Derby DB and copy the archived log files to the
secondary server.
Day 3: Back up the primary Derby DB and copy the new archived log files to
the secondary server.
Day 4: Boot the secondary Derby DB to check it's OK...  In theory the boot
process will then replay all the log files and the database should be in the
same state as the primary was on Day 3?


Somehow I don't think this would actually work - but I will give it a  
try...
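
If I do try it, I'm assuming the backup step on the primary boils down to
something like the sketch below.  The database name salesDB and the backup
directory are placeholders of mine; the system procedure is the one the
Derby docs describe for taking a backup while enabling log archiving.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch only: back up the primary database and switch on log archive mode,
// so Derby keeps old transaction log files for later roll-forward recovery.
// "salesDB" and "/backup/day1" are placeholder names.
public class PrimaryBackup {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");   // embedded driver
        Connection conn = DriverManager.getConnection("jdbc:derby:salesDB");
        CallableStatement cs = conn.prepareCall(
            "CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE(?, ?)");
        cs.setString(1, "/backup/day1");   // directory the backup copy is written to
        cs.setInt(2, 0);                   // 0 = keep previously archived log files
        cs.execute();
        cs.close();
        conn.close();
    }
}

Days 2 and 3 would presumably be the same call again (with a new backup
directory), and the newly archived log files are what get copied across to
the secondary.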

Here is the scenario I am trying to cater for:

A 24x7 real-time system needs to be relocated to another site (or needs to
have a warm standby system that can be enabled in 15 minutes or less).

The basic approach is to have two databases running, with logs from the
primary loaded on the secondary within a couple of minutes of being written.

Transaction dumps on the primary database are written to timestamped files,
and each file is renamed, e.g. TRXDUMP20080206091545212_DONE.DAT, once the
dump write process has completed.  A script checks for the presence of
*_DONE.DAT files every 30 seconds and copies them to the remote server's
file system (or this gets done by the dump process as well).  A script on
the remote server checks for the presence of *_DONE.DAT files every 30
seconds and runs a transaction load process on the remote database to load
the dump files.  At any given point in time the remote site is always within
a few minutes of the primary site.
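
For reference, the shipping script I have in mind on the primary side is
nothing more than a polling loop like this sketch; the directory names and
the *_DONE.DAT convention are mine and have nothing to do with Derby.

import java.io.File;
import java.io.FileFilter;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;

// Sketch only: every 30 seconds, copy completed dump files to a directory
// that the standby site polls.  All paths are placeholders.
public class DumpShipper {
    public static void main(String[] args) throws Exception {
        File dumpDir = new File("/primary/dumps");      // where dump files are written
        File shipDir = new File("/mnt/standby/inbox");  // mounted standby file system
        FileFilter doneFiles = new FileFilter() {
            public boolean accept(File f) {
                return f.getName().startsWith("TRXDUMP")
                    && f.getName().endsWith("_DONE.DAT");
            }
        };
        while (true) {
            File[] ready = dumpDir.listFiles(doneFiles);
            for (int i = 0; ready != null && i < ready.length; i++) {
                File target = new File(shipDir, ready[i].getName());
                if (!target.exists()) {
                    copy(ready[i], target);
                }
            }
            Thread.sleep(30000);   // poll every 30 seconds, as described above
        }
    }

    private static void copy(File from, File to) throws Exception {
        FileChannel in = new FileInputStream(from).getChannel();
        FileChannel out = new FileOutputStream(to).getChannel();
        in.transferTo(0, in.size(), out);
        in.close();
        out.close();
    }
}

The script on the remote server would be the mirror image: poll the inbox
directory and hand each new file to the transaction load process.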

It seems unlikely one could do this with Derby because there are no  
commands to periodically dump the transaction logs or to load the  
transaction logs.
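
If I'm reading the roll-forward recovery page correctly, the closest thing
Derby has to a "load" command is booting the standby from the backup copy
with the rollForwardRecoveryFrom connection attribute, along these lines
(again just a sketch with made-up paths, and I haven't verified that it
allows the repeated, incremental loading I'm after):

import java.sql.Connection;
import java.sql.DriverManager;

// Sketch only: boot the standby database from the backup copy, asking Derby
// to replay the archived and active log files over it.  Paths are placeholders,
// and the shipped log files must already be in place on the standby.
public class StandbyRestore {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");   // embedded driver
        Connection conn = DriverManager.getConnection(
            "jdbc:derby:salesDB;rollForwardRecoveryFrom=/backup/day1/salesDB");
        System.out.println("Standby booted as: " + conn.getMetaData().getURL());
        conn.close();
    }
}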

Cheers

On 08/02/2008, at 7:05 PM, Knut Anders Hatlen wrote:

> Duncan Groenewald <dagroenewald@optusnet.com.au> writes:
>
>> Thanks - the specification looks like it's close to what I would like.
>> The model I work from is one used by Sybase (and possibly others),
>> where you can specify a database dump and a separate transaction log
>> dump at defined intervals using a script or some other programmatic
>> method.  From what I can tell it's not possible to do this with Derby,
>> since you can only dump the database and not the logs.  It's also
>> unclear how you would load a log file on its own.
>>
>> What I would like to see is two additional commands: one to dump the
>> transaction logs to a specified directory or file name, and another
>> to load a transaction log file from a specified location/file.
>> Ideally a transaction log file load should function much the same way
>> a normal user does, to allow concurrent user access while loading a
>> transaction log file.
>
> Not exactly what you want (it won't allow concurrent user access while
> loading the transaction log), but you may achieve something similar  
> with
> log archiving and roll-forward recovery, combined with some creative
> scripts. I haven't tried it myself, but you may get some ideas here:
> http://db.apache.org/derby/docs/dev/adminguide/cadminrollforward.html
>
> -- 
> Knut Anders

Duncan Groenewald
mobile: +61406291205
email: dagroenewald@optusnet.com.au




