db-derby-dev mailing list archives

From "Thomas Nielsen (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (DERBY-1781) Process handles appear to be leaking in queries using an IN clause during concurrent DB access
Date Thu, 18 Oct 2007 12:10:51 GMT

    [ https://issues.apache.org/jira/browse/DERBY-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12535898 ]

thomanie edited comment on DERBY-1781 at 10/18/07 5:09 AM:
-----------------------------------------------------------------

Running the repro for a longer period of time, i.e. more iterations and multiple times, shows
that the situation eventually stabilizes and is handled fine on any JVM (1.5.0_07 and above)
with all Derby versions I've tested from 10.2.1.6 and above. None of the combinations tested
show the behaviour reported.

To be on the safe side I got hold of JVM 1.5.0_05 and Derby 10.1.3.1 as well.

With this combination I see that the NetworkServer stops accepting incoming connections after
serving about 4000 of them. The debug version of 10.1.3.1 I have writes "Connection number:
 <N>." when I run the test; it gets to 3967 served connections, and then the client
starts throwing exceptions (Caused by: org.apache.derby.client.am.DisconnectException ...).
It's repeatable. The handle count in the OS is well within the limits seen earlier, so that is
not an issue, and neither are the surviving generations for the objects when this happens.

Attaching to 10.1.3.1 running on JVM 1.5.0_05 and looking at the processes does not reveal
any problems, and the exception indicates a client driver issue. The underlying problem for
this issue has been fixed by another fix committed sometime between 10.2.1.1 and 10.2.1.6.
This issue should probably be closed as "duplicate" (but I don't yet know of which fix) or as
"won't fix".

SUMMARY:
- Underlying problem was fixed sometime between 10.2.1.1 and 10.2.1.6.
- Suggest user move to 10.2.1.6 or above

Lesson learned - test on the configuration that's reported as not working :/

Sidenote:
I noticed the following trend while doing all this on my laptop:
10.1.3.1 on JVM 1.5.0_05, 35% CPU usage
10.2.1.6 on JVM 1.5.0_05, 45% CPU usage
10.2.1.6 on JVM 1.5.0_13, 55% CPU usage 
10.3.1.4 on JVM 1.5.0_13, 70% CPU usage
10.2.1.6 on JVM 1.6.0_03, ~100% CPU usage
10.3.1.4 on JVM 1.6.0_03, ~100% CPU usage
All CPU usage percentages according to Sysinternals Process Explorer, and all with identical
repro script.

      was (Author: thomanie):
    Running the repro for a longer period of time, i.e. more iterations and multiple times,
shows that the situation eventually stabilizes and is handled fine on any JVM (1.5.0_07 and
above) with all Derby versions I've tested from 10.2.1.6 and above. None of the combinations
tested show the behaviour reported.

To be on the safe side I got hold of JVM 1.5.0_05 and Derby 10.1.3.1 as well.

With this combination I see that the NetworkServer stops accepting incoming connections after
serving about 4000 of them. The debug version of 10.1.3.1 I have writes "Connection number:
 <N>." when I run the test; it gets to 3967 served connections, and then the client
starts throwing exceptions (Caused by: org.apache.derby.client.am.DisconnectException ...).
It's repeatable. The handle count in the OS is well within the limits seen earlier, so that is
not an issue, and neither are the surviving generations for the objects when this happens.

Attaching to 10.1.3.1 running on JVM 1.5.0_05 and looking at the processes does not reveal
why it stops accepting connections - whether it's a client driver or server issue. The underlying
problem for this issue has been fixed by another fix committed sometime between 10.2.1.1 and
10.2.1.6. This issue should probably be closed as "duplicate" (but I don't yet know of which
fix) or as "won't fix".

SUMMARY:
- Underlying problem was fixed sometime between 10.2.1.1 and 10.2.1.6.
- Suggest user move to 10.2.1.6 or above

Lesson learned - test on the configuration that's reported as not working :/

Sidenote:
I noticed the following trend while doing all this on my laptop:
10.1.3.1 on JVM 1.5.0_05, 35% CPU usage
10.2.1.6 on JVM 1.5.0_05, 45% CPU usage
10.2.1.6 on JVM 1.5.0_13, 55% CPU usage 
10.3.1.4 on JVM 1.5.0_13, 70% CPU usage
10.2.1.6 on JVM 1.6.0_03, ~100% CPU usage
10.3.1.4 on JVM 1.6.0_03, ~100% CPU usage
All CPU usage percentages according to Sysinternals Process Explorer, and all with identical
repro script.
  
> Process handles appear to be leaking in queries using an IN clause during concurrent
DB access
> ----------------------------------------------------------------------------------------------
>
>                 Key: DERBY-1781
>                 URL: https://issues.apache.org/jira/browse/DERBY-1781
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.1.3.1
>         Environment: Windows XP, Java 1.5.0_05
>            Reporter: Mark Hellkamp
>            Assignee: Thomas Nielsen
>         Attachments: SqlStressTest.java
>
>
> We are currently using Derby embedded in our web application running on Windows. When
processing multiple concurrent requests we have noticed that the Java process handle count
continues to increase until the machine becomes unresponsive. I was able to isolate the problem
to Derby by running the database in network mode in another process. Further investigation
showed that the problem could be reproduced using a select statement that has an IN clause
with multiple entries on the primary key column. Spawning multiple threads running the same
query causes the handle count to increase considerably on the Derby process. The problem occurs
in versions 10.1.3.1 and 10.2.1.1 (even worse), in both embedded and network mode. The attached
test program duplicates the problem. Start Derby in network mode (using startNetworkServer.bat)
and run the enclosed test program. The handle count on the Derby process will increase and
never go down short of restarting Derby. Using 10.2.1.1, the handle count for the Derby process
reaches somewhere between 1400 and 1500 with just two threads in my environment.
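
For illustration, below is a minimal sketch of the kind of workload the report describes. It is
not the attached SqlStressTest.java: the JDBC URL, the ITEMS table and its contents, the thread
and iteration counts, and the choice to open a fresh connection per query are all assumptions
made for the example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InClauseStress {

        // Hypothetical network-client URL; host, port and database name are assumptions.
        private static final String URL = "jdbc:derby://localhost:1527/testdb";

        public static void main(String[] args) throws Exception {
            // Register the Derby network client driver (needed on JDBC 3 / Java 5).
            Class.forName("org.apache.derby.jdbc.ClientDriver");
            // Two threads, matching the "just two threads" observation above.
            for (int t = 0; t < 2; t++) {
                new Thread(new Runnable() {
                    public void run() {
                        queryLoop();
                    }
                }, "stress-" + t).start();
            }
        }

        private static void queryLoop() {
            // A SELECT with an IN clause on the primary key column, as described above.
            // The ITEMS table, its columns and the listed key values are assumed to exist.
            String sql = "SELECT id, name FROM items WHERE id IN (1, 2, 3, 4, 5)";
            try {
                for (int i = 0; i < 1000; i++) {
                    // Opening a fresh connection per query is an assumption; it mirrors the
                    // connection churn (roughly 4000 served connections) noted in the comment.
                    Connection conn = DriverManager.getConnection(URL);
                    try {
                        Statement stmt = conn.createStatement();
                        ResultSet rs = stmt.executeQuery(sql);
                        while (rs.next()) {
                            rs.getInt(1); // drain the result set
                        }
                        rs.close();
                        stmt.close();
                    } finally {
                        conn.close();
                    }
                }
            } catch (Exception e) {
                // The comment above reports the 10.1.3.1 client eventually failing here
                // with org.apache.derby.client.am.DisconnectException in the cause chain.
                e.printStackTrace();
            }
        }
    }

While the threads run, the reported behaviour can be checked by watching the handle count of the
Derby server process, e.g. in Sysinternals Process Explorer as used above.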

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

