db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: RowLocation lifetime
Date Tue, 15 Nov 2005 18:06:00 GMT

Øystein Grøvlen wrote:

>>>>>>"MM" == Mike Matrigali <mikem_app@sbcglobal.net> writes:
>     MM> It is only stable while some sort of stable table intent lock is held.
>     MM> Rows can move during a compress table operation.
> I understand, when a record is moved to another page, its RecordId
> will change.  Is this the only case where a RecordId will change?  If
> yes, I would think one could solve the problem for insensitive result
> sets by invalidating open result sets when an underlying table is
> compressed.
> Some questions to how compression works:
>    - Will RecordIds ever be reused or will the sequence number continue to
>      increase?
Derby now supports 2 different compression techniques, basically one
offline and one online.

SYSCS_UTIL.SYSCS_COMPRESS_TABLE() basically copies all rows from one
table to another, so RecordId's may be reused.  This requires table
level locking and so is effectively offline.
SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE() compacts rows within the
same table, and will not reuse RecordId's, but a given record can
definitely change RecordId.  This requires only row locks
for purged/moved rows for most of its work.  Giving space back to the
OS requires a short-duration table level lock.
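For reference, the two procedures are invoked like this (schema and
table names are illustrative; argument semantics summarized in the
comments):

```sql
-- Offline: copies all rows to a new conglomerate; needs a table-level lock.
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'MYTABLE', 1);

-- Online/in-place: purge deleted rows, defragment, truncate unused space
-- at the end, using mostly row-level locks (the three flags enable each
-- phase respectively).
CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'MYTABLE', 1, 1, 1);
```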

>    - How do you ensure that the old RecordIds are not encountered
>      during recovery?  Does the compression include a checkpoint?
Neither really does anything special to stop old RecordId's from
being encountered during recovery.  With offline compression, redo
recovery of the old table is basically a drop table, so either the
table is there and we drop it again, or the table is already dropped
and we do nothing.  In the online case it is the normal redo of a
record delete or a record purge; in either case redo will either see
a version of the page where it can redo the delete or purge, or it
will see from the version of the page that there is no work to do.
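The idempotence argument above can be sketched as a toy model (Python,
hypothetical names; Derby's real page format and log records are more
involved): redo compares the page's current version against the version
the log record would produce, and only applies the change if the page
is older.

```python
class Page:
    def __init__(self):
        self.version = 0
        self.records = {}          # slot -> row data (None = deleted stub)

class LogRecord:
    """A logged delete/purge; redo applies it only if the page is older."""
    def __init__(self, page_version, slot, op):
        self.page_version = page_version   # page version after this change
        self.slot = slot
        self.op = op                       # 'delete' or 'purge'

def redo(page, rec):
    # Page already at (or past) this version: the change is on disk,
    # so there is no work to do.
    if page.version >= rec.page_version:
        return False
    if rec.op == 'delete':
        page.records[rec.slot] = None      # mark deleted, keep the slot
    elif rec.op == 'purge':
        page.records.pop(rec.slot, None)   # reclaim the slot entirely
    page.version = rec.page_version
    return True

page = Page()
page.records = {1: 'row-a', 2: 'row-b'}
rec = LogRecord(page_version=1, slot=1, op='delete')
assert redo(page, rec) is True     # first replay does the work
assert redo(page, rec) is False    # replaying again sees no work to do
```

Replaying the same log record twice is harmless, which is exactly why
no special-case recovery code is needed for compression.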

    This activity is basically all done above the store level, store
    does not really know what is going on and there is no special
    casing of recovery.  The path is basically the same recovery path
    as the fast create/load of a new table/indexes.  As the last step
    in the transaction the language code switches the mapping of the
    user table to the underlying store conglomerates.

    For the row movement portion of this, there is no special recovery
    code.  Every row movement is just a logged delete and insert, with
    all the associated index updates in the same transaction. The row
    movement portion, and row purging portion of this compress is
    "online" in that it only needs short term page latches and short
    row level locked transactions.  Actually giving space back to the
    OS still needs a table-level lock, and does require a checkpoint.
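The "logged delete and insert" behavior can be illustrated with a toy
model (Python, hypothetical names): the sequence number only ever
increases, so a moved record always gets a fresh RecordId and old ids
are never reused.

```python
class Table:
    def __init__(self):
        self.next_id = 0       # monotonically increasing sequence number
        self.rows = {}         # RecordId -> row data

    def insert(self, row):
        rid = self.next_id
        self.next_id += 1      # never reused, even after deletes
        self.rows[rid] = row
        return rid

    def delete(self, rid):
        del self.rows[rid]

    def move(self, rid):
        """Row movement during in-place compress: a delete plus an
        insert (both logged, same transaction), yielding a new RecordId."""
        row = self.rows[rid]
        self.delete(rid)
        return self.insert(row)

t = Table()
r1 = t.insert('a')
r2 = t.move(r1)
assert r2 != r1               # the record changed RecordId
assert t.insert('b') > r2     # ids keep increasing; r1 is never reused
```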

>    - It seems like the compression heavily depends on that no log
>      records are generated during compression.  Do you have any ideas
>      of how to make compression on-line?  I guess you would need a
>      mapping between new and old RecordIds (i.e., every move would
>      have to be logged.)

There is no requirement of no log records.  As you say, every move is
logged, which is why it is recommended that users use offline
compression if that option is available to them: it uses far less
system resource and will finish quicker, but it definitely is not as
