commons-dev mailing list archives

From Cservenak Tamas <>
Subject [transaction] OutOfMemory error on continuous work with lots of new files
Date Tue, 22 Nov 2005 12:38:29 GMT

I am using Commons Tx to handle a lot of small files in a transactional
way over long-running processes (days or weeks at least). New files arrive at
roughly 100,000 per hour, and they leave the app at the end of
processing. We have noticed that, during this work, Commons Tx eats up
all heap memory (not confirmed yet, but I'm 90% sure about this).

Some investigation (profiling) pointed out that in GenericLockManager
the globalLocks Map just grows, and nobody removes lock entries. At
the very least, lock instances multiply like viruses :)
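To illustrate the pattern (this is a toy model, not the actual Commons Tx code; the class and method names are invented for the sketch): a lock table whose release path clears lock state but never removes the map entry grows by one entry per resource ever seen.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the observed behavior: every new resource id adds a
// lock entry, and release() never removes it, so the map grows by one
// entry per file that ever passed through the app.
public class GrowingLockTable {
    private final Map<String, Object> globalLocks = new HashMap<>();

    Object getOrCreateLock(String resourceId) {
        return globalLocks.computeIfAbsent(resourceId, id -> new Object());
    }

    void release(String resourceId) {
        // Lock state would be cleared here, but the map entry stays behind.
    }

    public static void main(String[] args) {
        GrowingLockTable table = new GrowingLockTable();
        for (int i = 0; i < 100_000; i++) {
            String id = "file-" + i;   // each file enters once, then leaves
            table.getOrCreateLock(id);
            table.release(id);
        }
        System.out.println("entries after run: " + table.globalLocks.size());
    }
}
```

With files arriving at ~100,000/hour, a map like this gains millions of stale entries per day of uptime, which matches the heap growth described above.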

I am aware that the supplied patch is probably WRONG, but I am hoping to
reduce the memory leak (is it a leak?) even at the cost of some
lock-handling overhead (the lock manager now always reinserts locks).

Keep in mind that a lot of new files enter and LEAVE my app, and will
never return to it. So, at least in my case, the reuse of locks is not
as important as reducing the memory footprint.
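The idea behind the patch below can be sketched as follows (a minimal illustration, not the Commons Tx API; the class and method names here are invented): remove the map entry as part of releasing the lock, trading lock reuse for a bounded footprint.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the proposed fix: the lock table drops its entry
// on release, so resources that leave and never return do not pile up.
public class ShrinkingLockTable {
    private final Map<String, Object> globalLocks = new HashMap<>();

    public synchronized Object getOrCreateLock(String resourceId) {
        return globalLocks.computeIfAbsent(resourceId, id -> new Object());
    }

    public synchronized void release(String resourceId) {
        // The core of the patch: shrink the map on release. A later
        // request for the same resource pays the cost of re-creating
        // (reinserting) the lock.
        globalLocks.remove(resourceId);
    }

    public synchronized int size() {
        return globalLocks.size();
    }

    public static void main(String[] args) {
        ShrinkingLockTable table = new ShrinkingLockTable();
        for (int i = 0; i < 100_000; i++) {
            String id = "file-" + i;
            table.getOrCreateLock(id);
            table.release(id);
        }
        System.out.println("entries after run: " + table.size());
    }
}
```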


--- GenericLockManager.java	(revision 348140)
+++ GenericLockManager.java	(working copy)
@@ -307,6 +307,8 @@
         if (lock != null) {
             released = lock.release(ownerId);
             removeOwner(ownerId, lock);
+            // XXX to shrink the globalLocks map
+            removeLock(lock);
         }
         return released;
@@ -336,6 +338,8 @@
                 GenericLock lock = (GenericLock);
+                // XXX to shrink the globalLocks map
+                removeLock(lock);
