db-derby-dev mailing list archives

From "Mamta A. Satoor (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (DERBY-1482) Update triggers on tables with blob columns stream blobs into memory even when the blobs are not referenced/accessed.
Date Thu, 12 May 2011 16:06:47 GMT

     [ https://issues.apache.org/jira/browse/DERBY-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mamta A. Satoor updated DERBY-1482:
-----------------------------------

    Attachment: derby1482_patch5_stat.txt
                derby1482_patch5_diff.txt

Attaching a new patch, derby1482_patch5_diff.txt, which takes care of the upgrade problems I
ran into with the earlier draft patch.

During an upgrade (soft or hard), the trigger action SPSes get marked invalid, so the next time
they fire they will be regenerated and recompiled. The issue with the earlier patch was that, in
soft-upgrade mode, it used the new code to do the column read optimization while generating the
internal trigger action sql, but when such a database goes back to its original version, the
generated trigger action sql no longer works, because previous releases of Derby do not
recognize this column read optimization. To fix this, the code has to check whether it is
working with a pre-10.9 database (which, in other words, means that we are in soft-upgrade
mode); if so, it should not use the column read optimization code during trigger action SPS
regeneration, nor during UPDATE execution when we read a limited set of columns from the
trigger table based on what is required by the UPDATE sql and the firing triggers. I have made
that change in the attached patch and now the upgrade tests work fine. I have also run the
complete junit suite and it ran fine. derbyall is still running on my machine.
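
For reference, the shape of that version gate is roughly as follows (a minimal sketch only;
the class, method and placeholder strings are illustrative stand-ins, not the identifiers used
in the actual patch, which bases the decision on the data dictionary's version check):

    // Sketch: pick the regenerated trigger action text based on the dictionary level.
    public class TriggerActionRegenerationSketch {

        // In Derby proper this answer comes from the data dictionary's version
        // check; it is modelled here as a plain boolean for clarity.
        static boolean supportsColumnReadOptimization(boolean dictionaryAtLeast10_9) {
            return dictionaryAtLeast10_9;
        }

        static String regenerateTriggerActionSQL(boolean dictionaryAtLeast10_9) {
            if (supportsColumnReadOptimization(dictionaryAtLeast10_9)) {
                // Hard upgrade / new database: reference only the columns the
                // UPDATE statement and the firing triggers actually need.
                return "<optimized trigger action sql>";
            }
            // Soft upgrade against a pre-10.9 database: keep the old full-row
            // text so the database still works when booted by the old release.
            return "<original, unoptimized trigger action sql>";
        }

        public static void main(String[] args) {
            System.out.println(regenerateTriggerActionSQL(true));   // new behavior
            System.out.println(regenerateTriggerActionSQL(false));  // soft-upgrade safe
        }
    }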

I will go ahead and commit this patch on Monday if there are no comments.


> Update triggers on tables with blob columns stream blobs into memory even when the blobs are not referenced/accessed.
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-1482
>                 URL: https://issues.apache.org/jira/browse/DERBY-1482
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.2.1.6
>            Reporter: Daniel John Debrunner
>            Assignee: Mamta A. Satoor
>            Priority: Minor
>              Labels: LOB
>             Fix For: 10.7.1.1
>
>         Attachments: DERBY_1482_patch4_diff.txt, DERBY_1482_patch4_stat.txt, TriggerTests_ver1_diff.txt,
>                      TriggerTests_ver1_stat.txt, derby1482DeepCopyAfterTriggerOnLobColumn.java, derby1482Repro.java,
>                      derby1482ReproVersion2.java, derby1482_patch1_diff.txt, derby1482_patch1_stat.txt, derby1482_patch2_diff.txt,
>                      derby1482_patch2_stat.txt, derby1482_patch3_diff.txt, derby1482_patch3_stat.txt, derby1482_patch5_diff.txt,
>                      derby1482_patch5_stat.txt, junitUpgradeTestFailureWithPatch1.out
>
>
> Suppose I have 1) a table "t1" with blob data in it, and 2) an UPDATE trigger "tr1" defined
> on that table, where the triggered-SQL-action for "tr1" does NOT reference any of the blob
> columns in the table. [ Note that this is different from DERBY-438 because DERBY-438 deals
> with triggers that _do_ reference the blob column(s), whereas this issue deals with triggers
> that do _not_ reference the blob columns--but I think they're related, so I'm creating this
> as a subtask to 438 ]. In such a case, if the trigger is fired, the blob data will be streamed
> into memory and thus consume JVM heap, even though it (the blob data) is never actually
> referenced/accessed by the trigger statement.
> For example, suppose we have the following DDL:
>     create table t1 (id int, status smallint, bl blob(2G));
>     create table t2 (id int, updated int default 0);
>     create trigger tr1 after update of status on t1 referencing new as n_row for each row mode db2sql update t2 set updated = updated + 1 where t2.id = n_row.id;
> Then if t1 and t2 both have data and we make a call to:
>     update t1 set status = 3;
> the trigger tr1 will fire, which will cause the blob column in t1 to be streamed into
> memory for each row affected by the trigger. The result is that, if the blob data is large,
> we end up using a lot of JVM memory when we really shouldn't have to (at least, in _theory_
> we shouldn't have to...).
> Ideally, Derby could figure out whether or not the blob column is referenced, and avoid
> streaming the lob into memory whenever possible (hence this is probably more of an
> "enhancement" request than a bug)...
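
For convenience, here is a minimal standalone sketch of the scenario quoted above (assumptions:
derby.jar on the classpath, an embedded database named "testdb", and a 50 MB dummy blob; the
attached derby1482Repro.java is the authoritative reproduction):

    import java.io.ByteArrayInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class Derby1482Sketch {
        public static void main(String[] args) throws Exception {
            Connection conn =
                DriverManager.getConnection("jdbc:derby:testdb;create=true");
            Statement s = conn.createStatement();
            s.executeUpdate("create table t1 (id int, status smallint, bl blob(2G))");
            s.executeUpdate("create table t2 (id int, updated int default 0)");
            s.executeUpdate(
                "create trigger tr1 after update of status on t1 " +
                "referencing new as n_row for each row mode db2sql " +
                "update t2 set updated = updated + 1 where t2.id = n_row.id");

            byte[] lob = new byte[50 * 1024 * 1024];  // 50 MB of zero bytes
            PreparedStatement ins =
                conn.prepareStatement("insert into t1 values (?, ?, ?)");
            ins.setInt(1, 1);
            ins.setShort(2, (short) 0);
            ins.setBinaryStream(3, new ByteArrayInputStream(lob), lob.length);
            ins.executeUpdate();
            s.executeUpdate("insert into t2 values (1, 0)");

            // The trigger does not touch bl, yet without the fix the blob is
            // streamed into memory when tr1 fires for the updated row.
            s.executeUpdate("update t1 set status = 3");
            conn.close();
        }
    }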

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
