jackrabbit-users mailing list archives

From Manuel López Blasi <lopezbl...@conicet.gov.ar>
Subject Re: JCA Connector Glassfish PoolingException
Date Wed, 03 Jan 2018 22:19:58 GMT
I have set transaction support to "XATransaction" (in the Glassfish 
pool properties).
I believe what I'm using is CMT, container-managed transactions,
so Glassfish is in charge of handling the connections: killing them, 
invalidating them, or marking them as available
once the transaction has committed.

I discovered something interesting: when saving files everything went 
smoothly.
I saved thousands with no problem, watching in the Glassfish monitoring 
console how hundreds
of connections were created, used, and disposed of within seconds. 
Absolutely no errors.

But when I retrieve content from the repository, that's when connections 
begin to leak.

That makes me think that since no transaction is committed, the 
connection stays open.
I tried retrieving files from Jackrabbit both outside and inside a 
transactional context, with no luck: the same
PoolingException / out-of-connections error.
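
To illustrate what I mean by retrieving outside a transaction, here is a
minimal sketch (the JNDI name, class name and path handling are just
placeholders, not my actual code); the point is that without a committed
transaction the session has to be released explicitly with logout():

import java.io.ByteArrayOutputStream;
import java.io.InputStream;

import javax.annotation.Resource;
import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;

public class FileRetriever {

    // placeholder JNDI name for the pool backed by the Jackrabbit JCA connector
    @Resource(mappedName = "jcr/repository")
    private Repository repository;

    public byte[] readFile(String path) throws Exception {
        Session session = repository.login();
        try {
            Node content = session.getNode(path).getNode("jcr:content");
            InputStream in = content.getProperty("jcr:data").getBinary().getStream();
            try {
                // copy the binary before releasing the session
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                return out.toByteArray();
            } finally {
                in.close();
            }
        } finally {
            // if this is skipped, the managed connection behind the session
            // is never returned to the Glassfish pool
            session.logout();
        }
    }
}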

My service to reach Jackrabbit, the place where the transactional 
context begins, is annotated like this:
@LocalBean
@Stateless
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
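
For context, a trimmed-down sketch of what such a bean looks like (the
class, method and JNDI names here are placeholders, and the node-creation
code is omitted):

import javax.annotation.Resource;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jcr.Repository;
import javax.jcr.Session;

@LocalBean
@Stateless
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public class JackrabbitService {

    // placeholder JNDI name for the XA connection pool of the JCA connector
    @Resource(mappedName = "jcr/repository")
    private Repository repository;

    // data is a placeholder parameter; node creation is omitted in this sketch
    public void storeFile(String path, byte[] data) throws Exception {
        // with bindSessionToTransaction=true this session gets tied to the
        // container-managed transaction started for this method call
        Session session = repository.login();
        try {
            // ... create the nt:file / nt:resource nodes and set jcr:data ...
            session.save();
        } finally {
            session.logout();
        }
    }
}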


My hunch is that this is precisely the problem: since I have 
"bindSessionToTransaction=true",
it doesn't matter whether I reach the repository inside or outside a 
transaction; since
there's really no commit at all, the connections go stale.

I'm going to try setting up a second connection pool with 
"bindSessionToTransaction=false",
and use it only to retrieve content from Jackrabbit.
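
The idea is roughly this (both JNDI names are hypothetical; one bound to
the existing XA pool, the other to the new non-transactional pool):

import javax.annotation.Resource;
import javax.jcr.Repository;
import javax.jcr.Session;

public class RepositoryAccess {

    // existing pool: XATransaction, bindSessionToTransaction=true, writes only
    @Resource(mappedName = "jcr/repositoryXa")
    private Repository writeRepository;

    // second pool: bindSessionToTransaction=false, used only for reads
    @Resource(mappedName = "jcr/repositoryNoTx")
    private Repository readRepository;

    public Session openReadSession() throws Exception {
        // sessions from this pool are not enlisted in any transaction,
        // so every caller must close them explicitly with logout()
        return readRepository.login();
    }
}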

I'll let you know how it goes soon. Thanks a lot for your responses,
cheers,
Manuel.


On 03/01/18 17:22, Pontus Amberg wrote:
> What are you using to handle the transactions when you invoke the
> Jackrabbit JCA connector? The reason I'm asking is that the flag
> "bindSessionToTransaction=true" might be an indication that you have
> transactions that for some reason are never committed.
>
> /Pontus
>
> On Tue, Jan 2, 2018 at 8:18 PM, Manuel López Blasi <
> lopezblasi@conicet.gov.ar> wrote:
>
>> Monitoring Glassfish shows all connections are taken up (and not freed):
>>
>> NumConnUsed: 32 count (Jan 2, 2018 10:48:22 AM to Jan 2, 2018 3:58:20 PM)
>> High Water Mark: 32 count
>> Low Water Mark: 0 count
>> (Provides connection usage statistics: the total number of connections
>> that are currently being used, as well as the maximum number of
>> connections that were used, i.e. the high water mark.)
>>
>>
>>
>> All 32 connections are already taken.
>>
>>
>>
>>
>>
>> On 02/01/18 15:00, Manuel López Blasi wrote:
>>
>>> Hello, thanks for your response Pontus,
>>> I have set a maximum of 32 concurrent connections.
>>>
>>> I understand that I set a maximum number of sessions/connections/transactions,
>>> in my case on Glassfish.
>>> This is handled by the JCA connector in conjunction with the Glassfish
>>> server/container.
>>>
>>> Once this maximum is reached, should I ask for another new connection,
>>> the connector/connection pool would wait
>>> until one of the busy connections is freed. There is a wait timeout for
>>> this; once the time has elapsed the connection pool
>>> would return an error message saying that no connection is available.
>>> It's perfectly logical.
>>>
>>> In my case this is exactly what happens: I get an exception saying
>>> "Connections in use are equal to max-pool-size value and max-wait-time has elapsed":
>>>
>>> Caused by: com.sun.appserv.connectors.internal.api.PoolingException: Las conexiones en uso equivalen al valor de max-pool-size y el tiempo caducado de max-wait-time. No se pueden asignar más conexiones.
>>>      at com.sun.enterprise.resource.pool.ConnectionPool.getResource(ConnectionPool.java:418)
>>>      at com.sun.enterprise.resource.pool.PoolManagerImpl.getResourceFromPool(PoolManagerImpl.java:245)
>>>      at com.sun.enterprise.resource.pool.PoolManagerImpl.getResource(PoolManagerImpl.java:170)
>>>      at com.sun.enterprise.connectors.ConnectionManagerImpl.getResource(ConnectionManagerImpl.java:332)
>>>      at com.sun.enterprise.connectors.ConnectionManagerImpl.internalGetConnection(ConnectionManagerImpl.java:301)
>>>      at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:190)
>>>      at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:165)
>>>      at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:160)
>>>      at org.apache.jackrabbit.jca.JCARepositoryHandle.login(JCARepositoryHandle.java:75)
>>>
>>>
>>> The thing is, once I reach this state it remains the same. I can wait 10
>>> minutes or 5 hours; once it dies, it stays that way no matter how long I
>>> leave it "to recover connections".
>>> The only solution is to shut the server down and start it again; that way
>>> everything works great again.
>>>
>>> Another strange thing is the fact that I can generate files by the
>>> thousands in a very short time, say 2000 files in 3 minutes. That may
>>> indicate that the time settings for the connection pool are okay: I have 1
>>> minute of max wait time before the pool reports there are no more free
>>> connections, and almost a thousand files can be fully
>>> processed and saved within 1 minute.
>>>
>>> That's what leaves me perplexed. The other thing is that my MySQL
>>> connection pools have the very same, carbon-copy settings and they work
>>> fine; they never ran out of connections or died this way.
>>>
>>> I know files are quite different; they require more work than DB records,
>>> and I/O is the most time-consuming and slowest operation of them all.
>>> Maybe within a certain amount of time the file caching gets bottlenecked
>>> and that's what causes the collapse?
>>>
>>>
>>>
>>> On 29/12/17 11:17, Pontus Amberg wrote:
>>>
>>>> Have you verified that it isn't the number of concurrent
>>>> sessions/transactions that is causing the problem? If that is the problem
>>>> you would probably only encounter it when you have approximately 33 or
>>>> more
>>>> file operations executing at the same time.
>>>>
>>>> /Pontus
>>>>
>>>> On Tue, Dec 26, 2017 at 11:28 PM, Manuel López Blasi <
>>>> lopezblasi@conicet.gov.ar> wrote:
>>>>
>>>> Hello,
>>>>> I've been adding a Jackrabbit repository to our project, almost
>>>>> successfully, basically for file storage purposes. Everything works
>>>>> great, with some exceptions, one of which is critical: once in a while,
>>>>> following no apparent pattern, an exception is thrown
>>>>> saying the pool is out of connections, this one:
>>>>>
>>>>> Caused by: com.sun.appserv.connectors.internal.api.PoolingException: Las conexiones en uso equivalen al valor de max-pool-size y el tiempo caducado de max-wait-time. No se pueden asignar más conexiones.
>>>>> (The connections in use equal the defined max-pool-size and max-wait-time has already elapsed. Cannot assign any more connections.)
>>>>>       at com.sun.enterprise.resource.pool.ConnectionPool.getResource(ConnectionPool.java:418)
>>>>>       at com.sun.enterprise.resource.pool.PoolManagerImpl.getResourceFromPool(PoolManagerImpl.java:245)
>>>>>       at com.sun.enterprise.resource.pool.PoolManagerImpl.getResource(PoolManagerImpl.java:170)
>>>>>       at com.sun.enterprise.connectors.ConnectionManagerImpl.getResource(ConnectionManagerImpl.java:332)
>>>>>       at com.sun.enterprise.connectors.ConnectionManagerImpl.internalGetConnection(ConnectionManagerImpl.java:301)
>>>>>       at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:190)
>>>>>       at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:165)
>>>>>       at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:160)
>>>>>       at org.apache.jackrabbit.jca.JCARepositoryHandle.login(JCARepositoryHandle.java:75)
>>>>>       ... 120 more
>>>>>
>>>>> Our setup/context is the following:
>>>>>
>>>>> VM: java 7 (1.7.0_101)
>>>>> container: Glassfish 3.1.2.2
>>>>> main framework for webapp: struts 2
>>>>> DB (mysql) persistence manager: Hibernate 4.2.19.Final
>>>>>
>>>>> Jackrabbit stuff/versions:
>>>>> jackrabbit-core 2.14.4
>>>>> jcr 2.0
>>>>> OCM: jackrabbit-ocm 2.0.0
>>>>> Connector: jackrabbit-jca-2.14.4 (this one is deployed as a connector in
>>>>> glassfish, associated with a connection pool)
>>>>>
>>>>> The configuration for JCA connector is the following:
>>>>>
>>>>> Connection definition: javax.jcr.Repository
>>>>>
>>>>> Initial and minimum pool size: 8 Connections
>>>>> Maximum pool size: 32 Connections
>>>>> Switch Pool size: 2 connections
>>>>> Activity Timeout: 300 seconds
>>>>> Max Wait Timeout: 60000 milliseconds
>>>>> Transaction Support: XATransaction
>>>>>
>>>>> Matching Connections: Yes.
>>>>>
>>>>> bindSessionToTransaction: True
>>>>>
>>>>> It seems to happen randomly, as we're able to produce and store a
>>>>> couple of thousand files within minutes with no crashes
>>>>> (every file is stored within a transaction and with a single Session to
>>>>> the repository). If the pool were out of connections,
>>>>> I think it should fail immediately (?).
>>>>>
>>>>> So, if anyone has any indications/clues, it would be greatly appreciated.
>>>>> Thanks in advance, best regards,
>>>>> Manuel.
>>>>>
>>>>>
>>>>>
>>>>>
>>>

