db-derby-dev mailing list archives

From "A B (JIRA)" <j...@apache.org>
Subject [jira] Updated: (DERBY-3299) Uniqueness violation error (23505) occurs after dropping a PK constraint if there exists a foreign key on the same columns.
Date Wed, 06 Feb 2008 16:45:08 GMT

     [ https://issues.apache.org/jira/browse/DERBY-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

A B updated DERBY-3299:
-----------------------

    Attachment: d3299_v1.patch

Attaching d3299_v1.patch, which is a first attempt at addressing this issue.

The general approach taken by this patch is to "update" the physical conglomerate by first
dropping the old conglomerate and then creating a new (non-unique) conglomerate.  So with
respect to the example given in the description for this Jira, namely:

  ALTER TABLE NEWORDERS ADD CONSTRAINT
      NEWORDERS_PK PRIMARY KEY(NO_W_ID, NO_D_ID, NO_O_ID)

  ALTER TABLE NEWORDERS ADD CONSTRAINT
      NO_O_FK FOREIGN KEY (NO_W_ID, NO_D_ID, NO_O_ID) REFERENCES ORDERS; 

where NEWORDERS_PK and NO_O_FK share a unique physical conglomerate, the d3299_v1.patch will
make it so that the statement:

  ALTER TABLE NEWORDERS DROP CONSTRAINT NEWORDERS_PK

leads to a) the dropping of the unique physical conglomerate backing NEWORDERS_PK and b) the
creation of a new NON-unique physical conglomerate to back NO_O_FK. This will only happen
in cases where the physical conglomerate has to change. So if, for example, there are two
foreign keys on a single table with the same columns and sorting requirements, and one of
those foreign keys is dropped, we would NOT need to update the physical conglomerate for the
other foreign key, and thus we would skip the "drop and re-create" processing.

This particular change means that dropping a primary key whose backing conglomerate is shared
by one or more foreign keys might take some time (due to the potential creation of an entirely
new (non-unique) conglomerate, which must then be populated).
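The decision described above can be sketched as follows. This is a hypothetical model (the class and method names here are NOT Derby's real API) of the check the patch performs: the shared physical conglomerate is dropped and re-created only when the dropped descriptor required uniqueness and none of the remaining sharers do.

```java
import java.util.List;

// Hypothetical model (not Derby's actual classes) of the d3299_v1.patch
// decision: when a constraint whose backing conglomerate is shared gets
// dropped, re-create the physical conglomerate only if the uniqueness
// requirement of the remaining sharers has changed.
public class SharedConglomerateCheck {

    // A descriptor records whether it requires a unique conglomerate
    // (a primary key does; a plain foreign key does not).
    record Descriptor(String name, boolean requiresUnique) {}

    // True when dropping 'dropped' forces a new (non-unique) physical
    // conglomerate: the dropped descriptor required uniqueness, but
    // none of the remaining sharers do.
    static boolean mustRecreate(Descriptor dropped, List<Descriptor> remaining) {
        if (remaining.isEmpty()) {
            return false; // nothing left to back; just drop the conglomerate
        }
        boolean remainingNeedUnique =
                remaining.stream().anyMatch(Descriptor::requiresUnique);
        return dropped.requiresUnique() && !remainingNeedUnique;
    }
}
```

In terms of the examples above: dropping NEWORDERS_PK while NO_O_FK remains would return true (re-create as non-unique), while dropping one of two equivalent foreign keys would return false (skip the "drop and re-create" processing).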

The other approach would be to see if there is a uniqueness "switch" within the physical conglomerate
that we could turn off, thus allowing us to use the same physical conglomerate but to disable
uniqueness checking.  If possible, that would almost certainly lead to faster DROP CONSTRAINT
processing than the approach taken in d3299_v1.patch (when shared conglomerates are at play).

However, I chose to go with the "drop and re-create" approach for two reasons. First, in the
interest of supporting a "modular" store as much as possible, it seems cleaner to follow an
approach which does not depend on classes or "switches" that are specific to a given disk
storage implementation. The approach of dropping a unique conglomerate and creating a new,
non-unique one should, in theory, be usable regardless of how the storage implementation treats
unique vs non-unique indexes.  It may not be optimal, but it seems more flexible. The second
reason I opted for this approach is that I have been working with an eye toward DERBY-2204,
for which the "drop and re-create" approach seems like more of a necessity.  By resolving
this issue in that manner, I'm hoping that changes for DERBY-2204 will be able to re-use or
benefit from a good amount of the code that I'm proposing to add for this issue.

It is of course possible to improve upon the d3299_v1.patch in the future if performance of
DROP CONSTRAINTs becomes an issue.  But for now I'm leaving that as a potential future enhancement...

More on the d3299_v1.patch:

  - Adds several utility methods to DDLSingleTableConstantAction.java that
    can be called by various constant action subclasses to drop a constraint
    and/or a conglomerate descriptor.  In addition to performing the drop as
    before, these utility methods also check to see if dropping the constraint
    or index necessitates the "update" of a shared physical conglomerate.  If
    so, the new methods will take the necessary steps to 1) drop the old physical
    conglomerate, and 2) create a new physical conglomerate that correctly
    satisfies all remaining conglomerate descriptors--i.e. all of the ones that
    were sharing the dropped physical conglomerate.

  - Changes all existing ConstantAction calls to ConstraintDescriptor.drop(...)
    and ConglomerateDescriptor.drop(...) so that they now use the new utility
    methods defined on DDLSingleTableConstantAction.  Due to current restrictions
    in Derby--see esp. DERBY-3300 and DERBY-2204--there are a few places where
    calling "drop" on a constraint/index won't ever lead to the dropping of a
    shared physical conglomerate, so use of the new utility methods is not
    strictly necessary.  But in the interest of a) completeness, and b) potential
    changes for DERBY-2204 or DERBY-3300 in the future, this patch updates all
    such "drop()" calls, nonetheless.

    Note that in some scenarios--esp. AlterTableConstantAction.java and
    DropTableConstantAction.java--we may want to drop the old physical
    conglomerate and then either skip creation of the new one, or create
    the new one "later".  The utility methods on DDLSingleTableConstantAction
    are written to support such a division where necessary/beneficial.  See
    code comments for details.

  - Uses the existing CreateIndexConstantAction class to create new physical
    conglomerates that "replace" shared ones that have been dropped.  The
    patch introduces a new CreateIndexConstantAction constructor that allows
    differentiation between creation of a "normal" index--for which we will
    create a new conglomerate descriptor, add corresponding data to the
    system catalogs, and (potentially) create a new physical conglomerate--
    versus creation of a "replacement" index, where we just create a new
    physical conglomerate based on an existing conglomerate descriptor.

  - Adds a new JUnit test, lang/ConglomerateSharingTest.java, which includes
    a test fixture for the "convert to non-unique" scenario described by this
    Jira.  It also includes a fixture for testing the various scenarios in
    which a constraint can be dropped, to verify that the logic surrounding
    "drop and re-create conglomerate" processing is correctly executed (or
    skipped) as appropriate.

  - Updates one master file, RowLockIso.out, to reflect the fact that the
    new utility methods in DDLSingleTableConstantAction.java acquire a lock
    on the index in order to read the properties of the physical conglomerate
    that is being dropped.  This is necessary so that those properties can
    be propagated to the new (replacement) conglomerate if necessary.  In
    RowLockIso.out, this difference shows itself by a "lock count" that is
    now one greater than previously, after a "drop index" command has been
    issued.
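The "normal" versus "replacement" distinction drawn by the new CreateIndexConstantAction constructor can be sketched roughly as below. This is a simplified illustration, not the actual Derby code: replacement-mode creation skips the descriptor and catalog work and only rebuilds the physical conglomerate behind an existing conglomerate descriptor.

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch (simplified; not the real Derby implementation) of the
// two creation modes: NORMAL builds a new conglomerate descriptor and
// catalog rows plus the physical conglomerate; REPLACEMENT only rebuilds
// the physical conglomerate for an existing descriptor.
public class IndexCreationSketch {

    enum Mode { NORMAL, REPLACEMENT }

    static List<String> stepsFor(Mode mode) {
        List<String> steps = new ArrayList<>();
        if (mode == Mode.NORMAL) {
            steps.add("create conglomerate descriptor");
            steps.add("add descriptor rows to system catalogs");
        }
        // Both paths end by building and populating the physical conglomerate.
        steps.add("create and populate physical conglomerate");
        return steps;
    }
}
```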

After applying this patch I ran derbyall and suites.All with ibm142 on Linux and saw no failures.
Review comments/feedback would be much appreciated, especially since these changes will likely
affect work for DERBY-2204 (and perhaps DERBY-3300) in the future.

> Uniqueness violation error (23505) occurs after dropping a PK constraint if there exists
> a foreign key on the same columns.
> ---------------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-3299
>                 URL: https://issues.apache.org/jira/browse/DERBY-3299
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.1.3.1, 10.2.1.6, 10.2.2.0, 10.3.1.4, 10.3.2.1, 10.4.0.0
>            Reporter: A B
>            Priority: Minor
>         Attachments: case_2.sql, d3299_v1.patch
>
>
> When there are multiple constraints on a single table and the constraints have the same
> set of columns (in the same order), Derby tries to optimize things by re-using a single
> backing index for all of the relevant constraints.  See the "executeConstantAction()"
> method of CreateIndexConstantAction.java (search for "duplicate").
> But there is a bug in Derby where, if one of the constraints is unique and is dropped,
> the uniqueness "attribute" of the backing index is not updated accordingly.  This means
> that uniqueness may be incorrectly enforced where it is not required.
> Take the following example ("Case 2" from DERBY-2204):
>   ALTER TABLE NEWORDERS ADD CONSTRAINT
>       NEWORDERS_PK PRIMARY KEY(NO_W_ID, NO_D_ID, NO_O_ID);
>   ALTER TABLE NEWORDERS ADD CONSTRAINT
>       NO_O_FK FOREIGN KEY (NO_W_ID, NO_D_ID, NO_O_ID) REFERENCES ORDERS;
> For these statements Derby will use a single backing index for both the primary key
> constraint NEWORDERS_PK and the foreign key constraint NO_O_FK.  That backing index will
> be unique because the primary key must itself be unique.
> If later we drop the primary key:
>   ALTER TABLE NEWORDERS DROP CONSTRAINT NEWORDERS_PK;
> then the backing index needs to be converted from a unique index to a non-unique index
> (because a foreign key is not inherently unique).  But in Derby the uniqueness attribute
> remains unchanged, so attempts to insert a duplicate (NO_W_ID, NO_D_ID, NO_O_ID) row into
> NEWORDERS will fail with error 23505, when it should really succeed.
> I tried this out on 10.1.3.1 and the same behavior occurs there, so marking "Affects"
> versions for everything back to that...

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

