commons-user mailing list archives

From Christopher Schultz <>
Subject Re: DBCP woes (running out of cursors).
Date Wed, 14 Oct 2009 18:58:33 GMT


On 10/14/2009 2:17 PM, Bill Davidson wrote:
> Redhat 5.2 Server


> I've been trying to convert an old J2EE application to use DBCP connection
> pools from an old custom connection pool class (not a DataSource
> interface).

I've moved a couple of things around here. Sorry if it's disorienting.

>              maxActive="245"
>              maxIdle="16"
>              validationQuery="SELECT 1 FROM dual"
>              poolPreparedStatements="true"
>              accessToUnderlyingConnectionAllowed="true"

This config looks fine, though maxIdle might be a little low. IIRC,
setting up Oracle connections is a time-consuming operation. If you
really need 245 connections, you might want to increase the idle
count to improve the performance of your webapp when demand suddenly
hits a mostly-idle server.

> The old pool worked acceptably, and DBCP seemed to work fine in
> development, but then I got it into a test environment and had a bunch
> of people beating on it and I started running out of cursors.  Looking
> through the old code, it turns out that the old pool manager, when
> freeing a Connection, had kept track of when it created that Connection
> and if it was more than 120 seconds old, it closed it and made a new one.

Similar capability is found in DBCP in the form of the "removeAbandoned"
and "logAbandoned" configuration parameters. See the configuration for
DBCP on this page:

/Important note:/ the "removeAbandoned" configuration parameter will
merely remove the discarded connection from the pool. It will /not/
close the connection after that timeout, so it won't solve your problem:
it will only help you find the problems in your webapp.
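For reference, the abandoned-connection attributes would sit alongside the <Resource> attributes you already have and look something like this (attribute names per DBCP 1.x; the 300-second timeout is just an illustrative value, not a recommendation):

```
             removeAbandoned="true"
             removeAbandonedTimeout="300"
             logAbandoned="true"
```

With logAbandoned enabled, DBCP records the stack trace of the code that borrowed each connection, so the log points you at the call site that never returned it.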

> As a result, a given Connection wouldn't hit the cursor limit because it
> wouldn't get reused for more than 2 minutes.
> So, I think: "We must have code that's not closing ResultSet/Statement
> objects" so I go off looking for them.

I wrote this a while back, which may be worth reading:

> With a little research, I come up with queries like this:
> SELECT a.sid,
>       b.status,
>       b.osuser,
>       b.machine,
>       to_char(b.logon_time, 'dd-mon-yyyy hh24:mi:ss') logon_time,
>       a.user_name,
>       a.sql_id,
>       a.sql_text
>  FROM v$session b,
>       v$open_cursor a
> WHERE a.sid = b.sid
>   AND a.user_name = 'APPUSERNAME'
> ORDER BY upper(a.sql_text),
>         b.status;
> SELECT count(a.sql_text) count,
>       a.sql_text
>  FROM v$session b,
>       v$open_cursor a
> WHERE a.sid = b.sid
>   AND a.user_name = 'APPUSERNAME'
>   AND b.status = 'INACTIVE'
> GROUP BY a.sql_text
> ORDER BY count desc,
>         upper(a.sql_text);
> I run these against production (which is still using the old pools) and
> I find all sorts of queries lying around on inactive sessions, which
> I'm guessing are Connections that were closed.

I don't know a thing about Oracle-specific queries, but what does:

>       v$open_cursor a

mean? Does this explicitly open a new cursor, or use an existing one
called a?

> I then start tracking
> the SQL down and every single one has an iron clad close() call in
> a "finally" clause.  There is absolutely no way the close() is not being
> called on the Statement objects.  Many of them even explicitly close()
> the ResultSet objects even though that should be handled by closing
> the Statement objects.

Technically speaking, the JDBC specification requires that calling
Connection.close() also close any Statement (and therefore ResultSet)
objects that were opened on that connection. The lines blur a bit when
you're talking about pooled connections, because Connection.close() on
the real connection never actually gets called... it's a grey area in
the spec if you ask me. I'd prefer that a pooled connection act like a
non-pooled connection in this case, but there's no "recycle" or "reset"
method in the java.sql.Connection interface, and calling close() on the
underlying connection is not appropriate (since it's pooled), so there
may be no way to actually implement this mimicry.

> These close() calls are happening immediately
> after the data is gathered from the ResultSet's.

Can you post an example of your finally blocks? If each xyz.close()
isn't in a try/catch block of its own, it could potentially fail to
return the Connection to the pool. In this case, logAbandoned will help
you a lot. I recommend enabling that setting even in production,
especially because that's where it's easiest to exercise your webapp's
DB access code.
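For comparison, here is a sketch of a cleanup pattern where each close() is independent, so one failing close() can't prevent the connection from going back to the pool. The closeQuietly helper and the class/method names are hypothetical, not part of JDBC or DBCP:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcCleanup {

    // Swallow exceptions from close() so that a failing ResultSet.close()
    // cannot prevent the Statement and Connection from being closed too.
    static void closeQuietly(AutoCloseable c) {
        if (c != null) {
            try {
                c.close();
            } catch (Exception ignored) {
                // Nothing useful to do here; logging it would be reasonable.
            }
        }
    }

    static void runQuery(Connection conn) throws SQLException {
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            ps = conn.prepareStatement("SELECT 1 FROM dual");
            rs = ps.executeQuery();
            while (rs.next()) {
                // read data from rs
            }
        } finally {
            // Each close is isolated: if rs.close() throws, ps and conn
            // still get closed, and the pooled connection is returned.
            closeQuietly(rs);
            closeQuietly(ps);
            closeQuietly(conn);
        }
    }
}
```

If a finally block instead calls rs.close(), ps.close(), and conn.close() in sequence with no individual try/catch, an exception from the first call skips the rest, and the connection leaks.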

> I then went looking in active sessions in the test environment and it's
> the same thing.  I can't find any lingering cursors for SQL that doesn't
> have a Statement.close() call that's guaranteed to run.  I've got lots
> of open cursors.  I can't explain why.


> Since I've eliminated the possibility that close() is not being called
> on Statement/ResultSet's, I'm wondering a number of things:
> 1. Could this be a bug in the Oracle JDBC driver?

Possible, but I think lots of people are using this driver without
running into this problem.

> 2. Should I change parameters for DBCP?

Other than enabling the 'abandoned' stuff just to convince yourself that
your code /is/ properly cleaning up connections/statements/etc., I don't
think that mucking around with DBCP's configuration will be useful...
there's nothing else DBCP can do except maybe:

>              poolPreparedStatements="true"

...try setting that to "false" and see if the problem clears up.
Different drivers pool statements in different ways, and it's possible
that Oracle's driver either poorly implements the statement pooling, or
that it's being emulated by DBCP in a way that doesn't quite work with
Oracle. It wouldn't be a bad idea to ask on the Apache commons list if
anyone has had problems like this when using DBCP.

> - Should I get rid of poolPreparedStatements?  Note: That noticeably
> hurts performance.


On the other hand, a working webapp is much better than a fast one that
is unreliable.

> - Should I use timeBetweenEvictionRunsMillis - What's an "eviction" run?

It's what happens every so often to flush out all the connections that
have (for instance) been idle too long, etc.
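Concretely, an eviction run is configured with attributes like these (names per DBCP; the values below are only illustrative guesses, not tuned recommendations):

```
             timeBetweenEvictionRunsMillis="30000"
             minEvictableIdleTimeMillis="60000"
             numTestsPerEvictionRun="3"
```

That would wake the evictor thread every 30 seconds, examine up to 3 idle connections per run, and close any that have sat idle for more than 60 seconds.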

> - Should I reduce maxIdle?

I wouldn't do that. You might see your performance suffer even more if
connections need to be created all the time.

> 3.  Is there a newer version of DBCP than the one that comes with
>    Tomcat that might mitigate this problem and is it a good idea to
>    drop that into Tomcat?

Tomcat re-packages DBCP into a different package-space to avoid
confusion with the "real" DBCP if it's being used by a webapp or
whatever, so you have to do some back-flips if you want to "upgrade"
DBCP to a newer version.

Tomcat 6.0.11 updated to use commons-dbcp-1.2.2 (the current version)
and commons-pool-1.3 (I'm not sure what the current version of
commons-pool is, but 1.3 seems reasonable), so you should have basically
the state-of-the-art stable releases of those packages.

There is another option that you could try, though I suspect the problem
might be with your queries themselves: Filip announced a new beta
connection pool in this thread:

The link he posted is broken and I can't find an updated version, but it
might be lurking somewhere else. You might want to re-post with a
question about that.

Hope all that helps,
-chris


