db-derby-dev mailing list archives

From "Mamta A. Satoor (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-1482) Update triggers on tables with blob columns stream blobs into memory even when the blobs are not referenced/accessed.
Date Mon, 19 Apr 2010 20:35:53 GMT

    [ https://issues.apache.org/jira/browse/DERBY-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12858671#action_12858671 ]

Mamta A. Satoor commented on DERBY-1482:
----------------------------------------

I ran the tests for triggers where no trigger columns are specified but columns are referenced in the trigger action through old/new transition variables, which is case 4) above. We should not run into any issues with that scenario, because Derby decides to read all the columns from the trigger table when no trigger columns are specified for it. So, no matter which of the following scenarios is used to create a case 4) trigger of the kind

    create trigger tr1 after update on t1 referencing old as oldt for each row values(oldt.id);

the 10.6 code will work fine, since Derby is going to read all the columns (see the sketch after this list):
a) trigger is created in a newly created 10.6 db
b) trigger was already created in the pre-10.6 db before soft upgrade
c) trigger is created while in soft-upgrade mode with a pre-10.6 db
d) trigger was already created in the pre-10.6 db before hard upgrade
e) trigger is created after the pre-10.6 db is hard upgraded
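
For reference, here are the two trigger shapes side by side. This is a minimal sketch using the t1 table and c1 column from the examples in this issue; the tr3/tr4 names are just for illustration:

    -- Case 4): no trigger column list. Derby reads all columns of the
    -- trigger table, so the compiled trigger behaves the same in every
    -- upgrade scenario above.
    create trigger tr4 after update on t1
        referencing old as oldt for each row
        values(oldt.id);

    -- Case 3): an explicit trigger column list. Which columns get read
    -- depends on the release that compiled the trigger, which is where
    -- the scenarios discussed below can go wrong.
    create trigger tr3 after update of c1 on t1
        referencing old as oldt for each row
        values(oldt.id);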


So, the only issue we need to worry about is case 3), which is

    create trigger tr1 after update of c1 on t1 referencing old as oldt for each row values(oldt.id);

I think we can solve the soft-upgrade problems by just having Derby read all the columns, no matter what trigger columns (or none) are specified. I think that is an acceptable solution because users probably would not run their databases in soft-upgrade mode for a long time; a sketch of what the proposal costs follows. Let me know what your thoughts might be.
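
To make the trade-off concrete, here is a minimal sketch (assumed schema, modeled on the example in the description below): in soft-upgrade mode even a column-specific trigger would cause the blob column to be fetched, because all columns of t1 would be read regardless of the trigger column list.

    create table t1 (id int, c1 int, bl blob(2G));
    create trigger tr1 after update of c1 on t1
        referencing old as oldt for each row
        values(oldt.id);
    -- Fires tr1; under the proposed soft-upgrade behavior, bl is read
    -- even though the trigger never references it.
    update t1 set c1 = 1;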

The only issue left then is 3d): triggers which were created prior to 10.6 in a database that has since been hard upgraded. In such hard-upgraded databases, we will run into not reading enough columns for a case 3) trigger like

    create trigger tr1 after update of c1 on t1 referencing old as oldt for each row values(oldt.id);

One way to resolve such trigger cases would be to invalidate the triggers at upgrade time, so that they get recompiled before they are next used in the hard-upgraded database.
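
Until that is in place, a hypothetical manual workaround (not the proposed fix, just a sketch with the same effect) would be to drop and recreate such a trigger after hard upgrade, which forces it to be compiled by the 10.6 code:

    drop trigger tr1;
    create trigger tr1 after update of c1 on t1
        referencing old as oldt for each row
        values(oldt.id);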

Any thoughts/feedback greatly appreciated.

> Update triggers on tables with blob columns stream blobs into memory even when the blobs are not referenced/accessed.
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-1482
>                 URL: https://issues.apache.org/jira/browse/DERBY-1482
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.2.1.6
>            Reporter: Daniel John Debrunner
>            Assignee: Mamta A. Satoor
>            Priority: Minor
>         Attachments: derby1482_patch1_diff.txt, derby1482_patch1_stat.txt, derby1482_patch2_diff.txt, derby1482_patch2_stat.txt, derby1482DeepCopyAfterTriggerOnLobColumn.java, derby1482Repro.java, derby1482ReproVersion2.java, junitUpgradeTestFailureWithPatch1.out, TriggerTests_ver1_diff.txt, TriggerTests_ver1_stat.txt
>
>
> Suppose I have 1) a table "t1" with blob data in it, and 2) an UPDATE trigger "tr1" defined on that table, where the triggered-SQL-action for "tr1" does NOT reference any of the blob columns in the table. [ Note that this is different from DERBY-438 because DERBY-438 deals with triggers that _do_ reference the blob column(s), whereas this issue deals with triggers that do _not_ reference the blob columns--but I think they're related, so I'm creating this as a subtask of 438. ] In such a case, if the trigger is fired, the blob data will be streamed into memory and thus consume JVM heap, even though it (the blob data) is never actually referenced/accessed by the trigger statement.
> For example, suppose we have the following DDL:
>     create table t1 (id int, status smallint, bl blob(2G));
>     create table t2 (id int, updated int default 0);
>     create trigger tr1 after update of status on t1 referencing new as n_row for each row mode db2sql update t2 set updated = updated + 1 where t2.id = n_row.id;
> Then if t1 and t2 both have data and we make a call to:
>     update t1 set status = 3;
> the trigger tr1 will fire, which will cause the blob column in t1 to be streamed into memory for each row affected by the trigger. The result is that, if the blob data is large, we end up using a lot of JVM memory when we really shouldn't have to (at least, in _theory_ we shouldn't have to...).
> Ideally, Derby could figure out whether or not the blob column is referenced, and avoid streaming the lob into memory whenever possible (hence this is probably more of an "enhancement" request than a bug)...

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

