db-derby-dev mailing list archives

From "Mike Matrigali (JIRA)" <j...@apache.org>
Subject [jira] Updated: (DERBY-4537) Update on tables with blob columns streams blobs into memory even when the blobs are not updated/accessed.
Date Fri, 05 Feb 2010 18:09:32 GMT

     [ https://issues.apache.org/jira/browse/DERBY-4537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Matrigali updated DERBY-4537:
----------------------------------


Looking at the stack, I don't think this is a bug.  You are running out of space while loading
the cache.  I think you are setting a 4mb limit, but the default for the page cache is 1000 pages,
and tables with blob columns default to 32k pages, so the cache alone can grow to at least 32mb.
And I believe your blob is bigger than 32mb.

Looking at the test case, I see there are no indexes.  So the query is going to require a scan
of the whole base table, which unfortunately includes all of the pages of the blob column.  The
scan first finds the first row and does the update, and it does not have to read the blob as
part of the update itself.  The problem comes when it has to read past the first row to the
second row: the scan just reads each page in numerical order into the cache and checks the type
of the page until it either comes to the end of the table or reaches the next non-overflow page.
There are no "pointers" to find the next real row.
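
For illustration, here is roughly the shape of the scenario I am describing.  This is not your
attached derby4537Repro.java; the table and column names are invented and the database URL is a
placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ScanSketch {
    public static void main(String[] args) throws Exception {
        // Embedded Derby; the database name is just a placeholder.
        Connection conn = DriverManager.getConnection("jdbc:derby:testdb;create=true");
        Statement s = conn.createStatement();
        // No primary key and no indexes, so locating rows means a full table scan.
        s.executeUpdate("CREATE TABLE blobtab (id INT NOT NULL, flag INT, data BLOB(64M))");
        // ... insert rows whose blobs are larger than the page cache ...
        // This statement never touches the blob column, but the scan that finds
        // the qualifying row still pulls the blob overflow pages into the cache.
        s.executeUpdate("UPDATE blobtab SET flag = 1 WHERE id = 2");
        s.close();
        conn.close();
    }
}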

So to fix this you could either set the page cache size down to something like 40 pages:
derby.storage.pageCacheSize=40
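
If it helps, one way to apply that setting from the test program itself (a derby.properties file
in the database system directory works too) is to set it as a system property before the embedded
engine boots.  The database URL is again a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;

public class SmallCacheRun {
    public static void main(String[] args) throws Exception {
        // Must be set before the embedded engine boots; 40 of the 32k pages
        // used for blob tables is only about 1.3mb of page cache.
        System.setProperty("derby.storage.pageCacheSize", "40");
        Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
        // ... run the update from the repro here ...
        conn.close();
    }
}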

or you could make id a primary key.  I think either change will avoid the out-of-memory error.
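
A sketch of the second option, assuming the id column is declared NOT NULL (Derby requires that
for a primary key) and using the same placeholder names as above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddPrimaryKey {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
        Statement s = conn.createStatement();
        // The backing index lets the update locate the target row directly
        // instead of scanning every page, blob overflow pages included.
        s.executeUpdate("ALTER TABLE blobtab ADD CONSTRAINT blobtab_pk PRIMARY KEY (id)");
        s.close();
        conn.close();
    }
}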

Could you try one or both and then close the bug if you agree?

> Update on tables with blob columns streams blobs into memory even when the blobs are not updated/accessed.
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-4537
>                 URL: https://issues.apache.org/jira/browse/DERBY-4537
>             Project: Derby
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 10.6.0.0
>            Reporter: Mamta A. Satoor
>            Priority: Minor
>         Attachments: derby4537Repro.java
>
>
> While investigating DERBY-1482, I wrote a simple program to see the behavior of a simple
> update (without any triggers) of a table with blob columns.
> The update is on a non-blob column of a table with blob columns.
> When this update is made with limited heap memory, Derby runs into an OOM error.
> I tried another table, similar to the earlier table but with no blob column. An update on
> that table does not run into OOM when run with the same limited heap memory.
> I would have expected the update to pass for the table with the blob column, since we are not
> referencing/updating the blob column. But it appears that we might be streaming in the blob
> column even though it is untouched by the update sql.
> I wonder if working on this jira first will make the work for DERBY-1482 any easier, or,
> better yet, will it make the problem with DERBY-1482 go away? Will attach a reproducible
> program for this soon.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

