db-derby-dev mailing list archives

From "Mike Matrigali (JIRA)" <derby-...@db.apache.org>
Subject [jira] Updated: (DERBY-1713) Memory do not return to the system after Shuting down derby 10.2.1.0, following an out of memory event
Date Fri, 18 Aug 2006 15:38:14 GMT
     [ http://issues.apache.org/jira/browse/DERBY-1713?page=all ]

Mike Matrigali updated DERBY-1713:
----------------------------------


I admit I don't understand why there is much of a difference at runtime between those 2 queries.
Do you have
any indexes on this data?  From your ddl, derby will pick 32k pages for that table, and a
select * should fill up the 1000 page cache, leading to more than 32,000,000 bytes used no
matter what the order by clause.  The sort should use at most 1 meg of memory in addition;
by default, once a sort is bigger than that an external merge sort is used, where external files
are used rather than memory.  I guess it could be that you are
right at the very edge of running out of memory during processing and that the sort with
one extra field uses up slightly more memory.
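The arithmetic above can be sketched out explicitly. This is only a back-of-the-envelope check, assuming the figures already mentioned in this thread (the default 1000-page cache, the 32k pages derby picks for this table, roughly 1 meg for an in-memory sort, and the reporter's 32 MB max heap); the class and method names are made up for illustration:

```java
// Back-of-the-envelope sketch of the page cache footprint discussed above.
// All numbers are the ones quoted in this thread; nothing here is measured.
public class PageCacheMath {
    static long cacheBytes(int pageCacheSize, int pageSizeBytes) {
        return (long) pageCacheSize * pageSizeBytes;
    }

    public static void main(String[] args) {
        long cache = cacheBytes(1000, 32 * 1024);   // default 1000 pages x 32 KB pages
        long sortBuffer = 1024 * 1024;              // ~1 meg in-memory sort, per above
        long maxHeap = 32L * 1024 * 1024;           // reporter's 32 MB max heap

        System.out.println("page cache alone: " + cache + " bytes");
        System.out.println("cache + sort buffer: " + (cache + sortBuffer) + " bytes");
        System.out.println("fits under 32 MB heap: " + (cache + sortBuffer < maxHeap));
    }
}
```

The cache alone is 32,768,000 bytes, so with even the 1 meg sort buffer on top the total already exceeds a 32 MB heap before any other runtime allocation, which is consistent with running out of memory right at the edge.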

32mb is not a safe minimum for a database of this size and a 1000 page buffer cache.  If memory
is a consideration, I suggest you reduce the page cache size; if not, raise the minimum memory
for the jvm.
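Reducing the page cache size can be done in derby.properties using derby's derby.storage.pageCacheSize property (derby.storage.pageSize controls the page size chosen for newly created tables). The values below are only illustrative, not tuned recommendations:

```properties
# derby.properties -- illustrative values only, not recommendations.
# Shrink the page cache from the default 1000 pages to cap its footprint:
derby.storage.pageCacheSize=500
# Or have new tables created with smaller pages (32768 was picked here):
derby.storage.pageSize=4096
```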

When you say memory use is "negligible", do you mean after you successfully shutdown=true?
 I would assume derby will use at least 32,000,000 bytes during runtime processing of this query
(assuming that the
db size on disk is 40mb as you say).  

> Memory do not return to the system after Shuting down derby 10.2.1.0, following an out of memory event
> ------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-1713
>                 URL: http://issues.apache.org/jira/browse/DERBY-1713
>             Project: Derby
>          Issue Type: Bug
>          Components: Performance
>    Affects Versions: 10.2.1.0
>         Environment: Windows XP SP2
> JRE 1.6 beta2
>            Reporter: Ibrahim
>            Priority: Critical
>
> I face a problem when querying large tables. I run the SQL below, and it gets stuck on this query and throws a java heap OutOfMemory exception:
> SELECT count(*) FROM <table> WHERE .....
> N.B. I'm using a database of more than 90,000 records (40 MB). I set the maxHeap to 32 MB (all other settings have the default value, pageCache ... etc).
> Then, I shut down the database, but the memory is not returned to the system (it remains at 32 MB [max threshold]). I tried increasing the maxHeap to 128 MB, with which it works and releases the memory, so I think the problem is that once it reaches the maxHeap it seems not to respond to anything, such as closing the connection or shutting down the database. How can I get rid of this? (Because I cannot keep increasing the maxHeap as the database grows, I want it to throw an exception and release the memory.)
> I'm using this to shutdown the DB:
> try {
>     DriverManager.getConnection("jdbc:derby:;shutdown=true");
> } catch (SQLException ex) {
>     System.err.println("SQLException: " + ex.getMessage());
> }
> I'm using a memory Profiler for monitoring the memory usage.
> Thanks in advance.
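One note on the quoted shutdown snippet: it treats every SQLException the same, but derby reports a *successful* full-engine shutdown by throwing an SQLException with SQLState XJ015, so that catch block prints an "error" even when shutdown worked. A sketch that separates the expected shutdown signal from real failures (class and method names here are hypothetical):

```java
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch of a shutdown helper. Derby signals a clean full-engine
// shutdown with SQLState XJ015, so that specific SQLException is
// success, not failure. Class and method names are made up.
public class DerbyShutdown {
    static boolean isEngineShutdown(SQLException ex) {
        return "XJ015".equals(ex.getSQLState());
    }

    static boolean shutdown() {
        try {
            DriverManager.getConnection("jdbc:derby:;shutdown=true");
            return false; // shutdown is supposed to throw; reaching here is unexpected
        } catch (SQLException ex) {
            if (isEngineShutdown(ex)) {
                return true; // the "error" derby raises on a clean shutdown
            }
            System.err.println("SQLException: " + ex.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        // With derby loaded, a clean shutdown prints true; without the
        // driver on the classpath, the connect fails and this prints false.
        System.out.println("clean shutdown: " + shutdown());
    }
}
```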

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
