db-derby-dev mailing list archives

From Suresh Thalamati <suresh.thalam...@gmail.com>
Subject Re: [jira] Commented: (DERBY-298) rollforward will not work correctly if the system happens to crash immediately after rollforward backup.
Date Wed, 25 May 2005 00:30:21 GMT
Øystein Grøvlen (JIRA) wrote:

>     [ http://issues.apache.org/jira/browse/DERBY-298?page=comments#action_66168 ]
>Øystein Grøvlen commented on DERBY-298:
>Looking at the code, I became a bit confused about the definition of an empty log file.
>Scan.getNextRecordForward contains debug output when it detects an empty log file.  It will
>then return without setting knownGoodLogEnd.  Hence, new log records will be written to the
>end of the previous file.  As Suresh says this is probably to be able to handle crashes during
>log switch.
>However, this is not what happens when I run the recovery part of the example in this
>report.  Since currentLogFileLength is a large number, it detects "zapped log end on log
>file", goes on to the next file, which does not exist, and returns.  (Who sets the length
>of a log file?  Is this the maximum size until a log switch is performed?)  The effect is the
>same, but this can not be used to detect an empty log file and apply the solution proposed
>by Suresh.  Instead, one would have to do some hairy file handling at a later stage.
Derby has two types of log files: one that works in RWS mode with a 
preallocated log file, and one that uses file sync without preallocation. 
In the preallocated case, zeros are written to the log file up to the 
length specified by logSwitchInterval (the default is 1 MB); it is also 
configurable by the user.
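
The preallocation step described above can be sketched as follows. This is not Derby's actual code; the class name, method name, and chunk size are my own, and only the idea (fill the file with zeros up to the switch interval) comes from the text:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class PreallocateDemo {
    // Hypothetical constant; Derby's default logSwitchInterval is 1 MB.
    static final int LOG_SWITCH_INTERVAL = 1024 * 1024;

    // Preallocate a log file by filling it with zeros up to the switch
    // interval, in 8 KB chunks.
    static void preallocate(Path logFile) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(logFile.toFile(), "rw")) {
            byte[] zeros = new byte[8192];
            long remaining = LOG_SWITCH_INTERVAL;
            while (remaining > 0) {
                int n = (int) Math.min(zeros.length, remaining);
                raf.write(zeros, 0, n);
                remaining -= n;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("derbylog", ".dat");
        preallocate(tmp);
        System.out.println(Files.size(tmp) == LOG_SWITCH_INTERVAL);
        Files.delete(tmp);
    }
}
```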

An empty log file can not be identified based on the length alone. The only 
way to declare a log file empty/fresh is when no log records are found 
in the file during the recovery scan. As you noticed, because it is a 
preallocated file, when the scan first finds zeros, it 
declares that there are no more log records. Any fix for this problem 
has to handle both the preallocated and the non-preallocated case.
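
A minimal sketch of the "empty means no records found" rule above, assuming the (simplified) convention that each record starts with a 4-byte length word, so an all-zero region after the header reads as length 0. The class name, header constant, and record layout are simplifications, not Derby's real Scan code:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class EmptyLogCheck {
    // Log file header size assumed from the discussion (LOG_FILE_HEADER_SIZE = 24).
    static final int LOG_FILE_HEADER_SIZE = 24;

    // A log file is "empty/fresh" only if the scan finds no log records at
    // all: in a preallocated file the region after the header is all zeros,
    // so the first record-length word read is 0.
    static boolean isEmpty(byte[] fileBytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(fileBytes));
        in.skipBytes(LOG_FILE_HEADER_SIZE);
        if (in.available() < 4) {
            return true;                       // nothing beyond the header
        }
        int firstRecordLength = in.readInt();
        return firstRecordLength == 0;         // zeros => no records written yet
    }
}
```

Note that this is exactly why length alone cannot distinguish an empty preallocated file from one holding records: both are logSwitchInterval bytes long.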

Actually, I don't like the idea of creating a new log file on boot, even 
for special conditions, because spending more time than required does not 
make our users happy! If you can find a fix that does not require creating 
a new log file, that would be great.

>An alternative way to fix this would be to just create a dummy log record in the new log
>file as part of the backup command.  This would make the redo scan end in the new log file.
>However, this will not work for those who do backup with OS-commands (i.e., copy the files
Backup with OS commands should not be a problem, because there is no 
support for performing roll forward recovery with these types of backups. 
If it is just a plain backup, it does not matter how logs are 
written after the backup, because they are never used to do a restore.

This problem can also be fixed by writing a dummy log record. The 
FileLogger.redo() code has to be fixed to understand this; currently it 
updates the log end only when a good log record is read.
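
The redo-scan behavior described here can be illustrated with a toy model. This is not FileLogger.redo() itself; the record type and fields are invented for illustration. The point is that knownGoodLogEnd advances only on good records, so a dummy record written at backup time would pull the log end into the new file:

```java
import java.util.List;

public class RedoScanSketch {
    // Hypothetical record: end offset plus validity and dummy flags.
    record LogRecord(long endOffset, boolean valid, boolean dummy) {}

    // knownGoodLogEnd advances only when a good record is read; a dummy
    // record counts as good, so the scan would end past it rather than at
    // the end of the previous log file.
    static long scan(List<LogRecord> records, long start) {
        long knownGoodLogEnd = start;
        for (LogRecord r : records) {
            if (!r.valid) {
                break;                        // zeros / corrupt tail: stop
            }
            knownGoodLogEnd = r.endOffset;    // dummy or real, both advance
        }
        return knownGoodLogEnd;
    }
}
```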

>I would also think it should be possible to do the log switch in such a way that it is
>possible to detect during recovery whether the log switch had completed or not.  If this was
>the case, one could just set knownGoodLogEnd of the redo scan to the start of the empty file
>if the log switch was completed.  Does anyone know if this is possible?
Yes, it should be, with some changes to the redo/scan code.

Another possible solution I was thinking of, to identify whether a log 
switch is good or not, is to write an INT (4 bytes of zeros) after 
log file initialization, since 512-byte writes are supposed to be atomic. 
If (log file length >= LOG_FILE_HEADER_SIZE(24) + 4), then the log switch 
before the crash can be treated as a good one, and the scan code fixed to 
use the empty log file instead of writing to the previous log file.
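
The length check proposed above is simple enough to state directly. A minimal sketch, using the header size from the message; the class and method names are mine:

```java
public class LogSwitchCheck {
    // Header size as given in the discussion.
    static final int LOG_FILE_HEADER_SIZE = 24;

    // If initialization writes 4 zero bytes right after the header, and
    // 512-byte sector writes are atomic, then a file length of at least
    // header + 4 means the log switch completed before the crash.
    static boolean switchCompleted(long logFileLength) {
        return logFileLength >= LOG_FILE_HEADER_SIZE + 4;
    }
}
```

Under this scheme, recovery could treat a file passing the check as a valid (if empty) new log file and set knownGoodLogEnd there, rather than falling back to the previous file.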

