jackrabbit-dev mailing list archives

From Marcel May <marcel....@consol.de>
Subject Re: Questions about TX in Jackrabbit, JTA and Spec compliance
Date Thu, 09 Aug 2007 17:58:08 GMT
Cris Daniluk wrote:
>> The changelog is filled with the operations BEFORE the transaction is
>> committed, and its contents are part of the logical view, as far as
>> node traversal is concerned. In other words, before the transaction is
>> committed, you will be the only one seeing those changes, and after
>> commit, everyone will. However, if JR crashes before the changelog has
>> been saved to the RDBMS, the changelog will be lost, as it is
>> memory-based.
> This is where our concern comes in. Based on your explanation,
> Jackrabbit is not honoring the JTA spec, nor the general ACID
> transaction principles (durability, notably). The fact that the
> committed transaction rolls into the logical view is great, but the
> fact that there is no flush to permanent storage is not.
> The JTA spec is bound to the X/Open DTP standard, available at
> http://www.opengroup.org/onlinepubs/009680699/toc.pdf
> I think the spec clearly sets the expectation for transaction
> permanence, and I believe that Jackrabbit clearly misses that, so
> while I think that the JTA support offered is valuable, it is not
> truly compliant--probably in a way that would be surprising to most
> JTA users.
I'm no XA expert either, but I agree with Cris:

If Jackrabbit does not store the ChangeLog in phase one (prepare),
the changes can be lost before phase two (commit) succeeds.
A successful phase-one prepare must guarantee that no changes are lost.
So, IMO, the ChangeLog must be persisted in phase one (Jackrabbit would
then resume after whatever failure occurred and finish the XA TX by
executing the recorded ChangeLog).
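To make the idea concrete, here is a minimal sketch of prepare-time journaling. It is purely illustrative and uses hypothetical names (DurableChangeLog, not Jackrabbit's actual ChangeLog API): phase one writes the pending operations to stable storage before the resource may vote XA_OK, and a journal found after a restart signals a prepared-but-unfinished transaction.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;

// Hypothetical sketch (not Jackrabbit's real API): persist the in-memory
// change log during the XA prepare phase, so a crash between prepare and
// commit can be recovered by replaying the journal.
class DurableChangeLog {
    private final Path journal;

    DurableChangeLog(Path journal) { this.journal = journal; }

    // Phase 1: write all pending operations to a journal file and move it
    // into place atomically. Only after this succeeds may we vote XA_OK.
    // (A real implementation would also force the bytes to disk.)
    void prepare(List<String> operations) {
        try {
            Path tmp = journal.resolveSibling(journal.getFileName() + ".tmp");
            Files.write(tmp, operations);
            Files.move(tmp, journal, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException e) {
            throw new UncheckedIOException(e); // prepare failed: vote rollback
        }
    }

    // Phase 2: replay the journaled operations, then discard the journal.
    List<String> commit() {
        try {
            List<String> ops = Files.readAllLines(journal);
            // ... hand ops to the persistence manager here ...
            Files.delete(journal);
            return ops;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // After a restart: a surviving journal means the TX was prepared but
    // never completed; the transaction manager decides commit or rollback.
    boolean needsRecovery() { return Files.exists(journal); }
}
```

The key property is that once prepare() returns, the operations survive a JVM crash, which is exactly what voting XA_OK promises the transaction manager.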

>>> If the XA includes Jackrabbit AND the RDBMS AND any other outside
>>> participants that may be relevant, it could not be rolled back without
>>> Jackrabbit knowing. I'm not sure I understand where Jackrabbit could
>>> be "left out of the loop" on a rollback?
>> I just have some concerns about the flow of control: what JR is
>> supposed to do with its associated JDBC connection when an XA TX is
>> prepared, committed or rolled back. Do I get your point here: instead
>> of using a changelog, continuously write changes made by the client
>> via XA capable JDBC connection to the database, using the fact that
>> uncommitted changes are only visible to that user?
> If the DBMS supports the two-phase transaction (I believe Postgres
> does), then you could just use a JTA-enabled version of the JDBC
> driver and register the DB transaction to the existing XA. Then, while
> you execute the SQL directly to the RDBMS, it would not be visible as
> it is not committed. When the global transaction is committed, the
> DBMS would receive the two-phase commit request(s) and do the right
> thing automatically.
> The only other option is to persist the changelog, effectively
> converting it into a journal. However, I think bringing the DBMS into
> the XA is probably the quickest way to solve this problem.
> - Cris

This would be a nice solution I guess.
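For reference, the coordination the transaction manager performs once both Jackrabbit and the RDBMS are enlisted can be sketched as follows. This is a conceptual illustration only; "Participant" is a stand-in for javax.transaction.xa.XAResource, and a real TM also handles crash recovery, heuristics, and timeouts.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of two-phase commit across enlisted resources
// (e.g. Jackrabbit and an XA-capable JDBC driver). "Participant" is an
// illustrative stand-in for javax.transaction.xa.XAResource.
interface Participant {
    boolean prepare();   // phase 1: durably record state, then vote
    void commit();       // phase 2
    void rollback();
}

class TwoPhaseCoordinator {
    // Returns true if the global transaction committed.
    static boolean execute(List<Participant> participants) {
        List<Participant> prepared = new ArrayList<>();
        for (Participant p : participants) {
            if (p.prepare()) {
                prepared.add(p);
            } else {
                // A single veto rolls back everyone already prepared.
                prepared.forEach(Participant::rollback);
                return false;
            }
        }
        // All voted OK: from here on the outcome must be commit, even
        // across a crash -- which is why prepare must be durable.
        prepared.forEach(Participant::commit);
        return true;
    }
}
```

With the RDBMS enlisted this way, Jackrabbit's SQL runs inside the database's own uncommitted transaction, and the DBMS handles durability of the prepared state itself.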

As a result of this discussion, should we open a JIRA issue for JR/JCA?
I don't think an issue exists for this yet.

Thanks a lot, Cris and Dominique!
This discussion was very helpful for my Jackrabbit internal understanding.

