db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Discussion of how to map the recovery time into Xmb of log --Checkpoint issue
Date Wed, 01 Feb 2006 23:34:24 GMT
Long answer first, some comments inline below.

I think runtime performance would be optimal in this case; runtime
performance is in no way "helped" by having checkpoints - it is only
either unaffected or hindered.  As has been noted, checkpoints can cause
drastic downward spikes in some disk-bound applications; hopefully we
will get some changes into 10.2 to smooth those spikes out.  But the
reality is that the more checkpoints on a system that is disk-I/O bound,
the more the app is going to slow down; if you are not disk-I/O bound
then the checkpoints may have little effect.

There are only 2 reasons for checkpoints:
1) to decrease recovery time after a system crash.
2) to make it possible to delete log file information (if you don't have
   rollforward recovery backups).  Without checkpoints derby must keep
   all log files, so the space needed in the log directory will grow
   without bound.
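How much time (1) buys you scales roughly with the amount of log written
since the last checkpoint.  A back-of-the-envelope sketch of that
arithmetic (the class, parameter names, and the model itself are all
made up here for illustration, not anything Derby actually computes):

```java
/** Back-of-the-envelope recovery-time estimate.  Hypothetical names and
 *  model, just to show how a couple of measured numbers could combine. */
class RecoveryEstimateSketch {
    /**
     * @param outstandingLogRecords log records written since the last checkpoint
     * @param readMillisPerPage     measured cost of one database page read
     * @param pageReadsPerRecord    assumed fraction of redo records needing a read
     * @param commitMillis          measured cost of a commit (log force)
     * @return rough recovery-time estimate in milliseconds
     */
    static double estimateMillis(long outstandingLogRecords,
                                 double readMillisPerPage,
                                 double pageReadsPerRecord,
                                 double commitMillis) {
        // Redo pass: each record may need a page read before it is applied.
        double redo = outstandingLogRecords * pageReadsPerRecord * readMillisPerPage;
        // Plus a final log force at the end of recovery.
        return redo + commitMillis;
    }
}
```

With, say, 100,000 outstanding records, a 5 ms page read, one read per
ten records, and a 20 ms commit, the estimate works out to about 50
seconds - which is the kind of number a user could trade off against
checkpoint frequency.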

The background writer thread should handle this; it should not be
considered an extreme case.  If there were no background writer and no
checkpoints, then the following would happen:

1) The page cache grows to whatever maximum size it has gotten to.
2) Requests for a new page then use the clock to determine which page to
   throw out.
3) If the page picked to throw out is dirty, then it is first written
  to the OS with no sync requested.  It is up to the OS whether this
  is handled asynchronously or not.  Most modern OS's will make this
  an async operation unless the OS cache is full, in which case it
  turns into a wait for some I/O (maybe some other I/O to free an OS
  resource).  The downside is that a user select at this point may end
  up waiting on a synchronous write of some page.
4) If the page to throw out is not dirty, then it can just be thrown
   out without any possible I/O wait.
5) In both cases 3 and 4 the user thread of course has to wait on the
   I/O that reads the new page into the cache.  Depending on the OS
   cache, this may or may not be a "real" I/O.
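The steps above can be sketched in a few lines of Java.  The class and
field names here are hypothetical and the clock is much simplified;
Derby's real logic lives in Clock.java and is considerably more involved:

```java
/** Minimal clock-style page cache sketch (hypothetical names; not
 *  Derby's actual implementation). */
class ClockCacheSketch {
    static class Page {
        final long pageNumber;
        boolean dirty;         // must be written to the OS before reuse
        boolean recentlyUsed;  // "second chance" bit cleared by the clock hand
        Page(long n) { pageNumber = n; }
    }

    private final Page[] slots;
    private int size;          // number of filled slots
    private int hand;          // clock hand position
    int unsyncedWrites;        // count of step-3 writes handed to the OS

    ClockCacheSketch(int maxSize) { slots = new Page[maxSize]; }

    /** Get a slot for a newly read page, evicting one if the cache is full. */
    Page allocate(long pageNumber) {
        Page p = new Page(pageNumber);
        if (size < slots.length) {      // step 1: cache still growing
            slots[size++] = p;
            return p;
        }
        while (true) {                  // step 2: clock picks a victim
            Page victim = slots[hand];
            if (victim.recentlyUsed) {  // give hot pages a second chance
                victim.recentlyUsed = false;
                hand = (hand + 1) % slots.length;
                continue;
            }
            if (victim.dirty) {         // step 3: write to the OS, no sync;
                unsyncedWrites++;       //         the OS may handle it async
            }
            // step 4: a clean victim is simply reused, no I/O wait
            slots[hand] = p;
            hand = (hand + 1) % slots.length;
            return p;
        }
    }
}
```

Note that evicting a clean page issues no write at all; only a dirty
victim costs the user thread a potential wait on the write (step 3).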

The job of the background writer is to make case 3 less likely; that's
it.  Note that if you try to keep the whole cache clean, you may flood
the I/O system unnecessarily: if the app tends to write the same page
over and over again, it is better to leave the page dirty in the cache
until needed.  The clock tends to arrange this by throwing out less-used
pages rather than more-used pages.
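One way to picture that job: clean a few not-recently-used dirty pages
just ahead of the clock hand, and deliberately skip hot pages.  A sketch
(hypothetical names; not Derby's actual background writer):

```java
/** Sketch of a background cleaner: write out a few dirty pages ahead of
 *  the clock hand so user threads rarely hit a dirty victim (case 3).
 *  Hypothetical names; not Derby's actual background writer. */
class BackgroundCleanerSketch {
    static class Page {
        boolean dirty;
        boolean recentlyUsed;
    }

    /** Clean at most batchSize dirty pages starting at the clock hand.
     *  Returns how many pages were handed to the OS (no sync requested). */
    static int cleanAhead(Page[] slots, int hand, int batchSize) {
        int written = 0;
        for (int i = 0; i < slots.length && written < batchSize; i++) {
            Page p = slots[(hand + i) % slots.length];
            // Skip recently used pages: hot pages are likely to be dirtied
            // again soon, so flushing them would just flood the I/O system.
            if (p.dirty && !p.recentlyUsed) {
                p.dirty = false;   // pretend an unsynced write was issued
                written++;
            }
        }
        return written;
    }
}
```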

Kristian Waagan wrote:

> Hi Mike,
> 
> A question totally on the side of this discussion: Do you, or anyone
> else, have any opinion about how the "runtime performance" of Derby
> would be affected by not having checkpoints at all, say for a large
> database (around 20 GB) and 0.5 GB of page cache in a disk-bound
> application load?
> 
> Is the Derby background-writer (and Clock.java) written/designed to
> handle such "extreme cases" without major performance degradation?
> Any information on the goal/function of the background-writer?
> What mechanisms would kick in when the page-cache is full and Derby
> needs slots for new pages?
The mechanism is described above; it is particular to whether the page
to throw out is dirty vs. clean.  There isn't really a dependency on the
cache being full.  In a busy "normal" system the cache is always full,
and I don't think we do anything special about weights of dirty vs.
clean pages.  More work could be done in this area, as has been
discussed.
> 
> I do know this is not a smart way to handle things, I'm just curious
> what people think about this! And I am not seeking answers about long
> recovery times and log disk space usage ;)
Hey, in my benchmark days with other db products it was standard
procedure to configure the test system to have either no checkpoints or,
if required, ONE checkpoint during the run.  Derby is no different here.

I almost always try to separate the checkpoint effect from the
performance throughput I am trying to measure (unless optimizing the
checkpoint is what I am trying to measure).  My guess is that the
default checkpoint interval is producing WAY too many checkpoints for
your throughput.
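If I remember right, the knob for this is derby.storage.checkpointInterval
(the amount of log, in bytes, written between checkpoints) - check the
tuning guide for the exact name and legal range before relying on it.
Something along these lines in derby.properties would space checkpoints
much further apart:

```properties
# Hedged tuning sketch - verify the property name and bounds in the
# Derby tuning guide.  A larger value means fewer checkpoints, at the
# cost of longer recovery after a crash.
derby.storage.checkpointInterval=100000000
```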
> 
> 
> 
> -- 
> Kristian
> 
> 
> Mike Matrigali wrote:
> 
>> I think this is the right path, though would need more details:
>> o does boot mean first time boot for each db?
>> o how to determine "this machine"
>> o and the total time to run such a test.
>>
>> There are some very quick and useful tests that would be fine to
>> add to the default system and run one time per database.  Measuring
>> how long it takes to do a commit and how long it takes to do a single
>> database read from disk would be fine.  Seems like
>> just these 2 numbers could be used to come up with a very good
>> default estimate of log recovery time per log record.  Then, as you
>> propose, the actual estimate can be improved by measuring real
>> recovery time in the future.
>>
>> I am not convinced of the need for the bigger test, but if the default
>> is not to run it automatically and it is your "itch" to have such
>> a configuration option then I would not oppose.  I do see great value
>> in coming up with a very good default estimate of recovery time
>> based on the outstanding number of log records.  And
>> I even envision
>> a framework in the future where derby would schedule other non-essential
>> background tasks that have been discussed in the
>>
>> On a different track I am still unclear on the checkpoint dirty page
>> lru list.  Rather than talk about implementation details, I would
>> like to understand the problem you are trying to solve.  For instance
>> I well understand the goal to configure checkpoints such that they
>> map to user understandable concept of the tradeoff of current runtime
>> performance vs. how long am I willing to wait the next time I boot
>> the database after a crash.
>>
>> What is the other problem you are looking at?
>>
>> Raymond Raymond wrote:
>>
>>  
>>
>>> Mike,
>>> Last time we discussed how to map the recovery time into Xmb of log.
>>> I have been thinking about it recently and have a proposal.
>>> How about this: the very first time derby boots on a certain machine
>>> (not every time), we let the user choose whether he (or she) wants
>>> to do some statistics collection about the system performance.  If
>>> so, derby runs some tests; if not, derby doesn't run the tests.
>>> Later, just as you said, we let derby collect information every time
>>> it does recovery to refine the earlier information.
>>>  Thanks.
>>>
>>>
>>> Raymond
>>>
>>>
>>>
>>>   
> 
> 
> 
