db-derby-user mailing list archives

From Kurt Huwig <k.hu...@iku-ag.de>
Date Tue, 19 Jun 2007 11:44:21 GMT
On Tuesday, 19 June 2007, Bernt M. Johnsen wrote:
> > Kurt Huwig wrote (2007-06-19 13:46:03):
> > On Tuesday, 19 June 2007, Bernt M. Johnsen wrote:
> > > Ok, I see. I assumed that if HA-jdbc got an SQLException from one of
> > > the nodes, the complete transaction was rolled back on all nodes. I
> > > would say that such lack of transactional behaviour is a serious
> > > weakness in HA-Jdbc.
> >
> > I guess this is intentional, as you generally don't want your
> > transactions to fail if one of the nodes goes bad. I mean this is the
> > whole point of having a cluster, right?
> No, I wouldn't say that. The point of an HA cluster is to have several
> instances of your data in an up-to-date and consistent state in case
> one of the nodes fails, allowing you to continue operations after the
> failure. Failure of the whole transaction when one of the nodes fails
> during the transaction is quite normal for clustered solutions.
> The application should then be written on the assumption that any
> transaction might fail, and a failed transaction has to be retried.
> (This actually applies to all applications using some interface to an
> SQL database.)
> JDBC 4.0 defines subclasses of SQLException that enable an
> application to handle cases like this. If a node fails and causes the
> running transaction(s) to fail, but the cluster is still working after
> the failure (with one node fewer, though), an SQLTransientException (or
> a subclass) should be thrown, and the application may assume that the
> transaction can be retried. If an SQLNonTransientException is thrown,
> there is no use retrying the transaction.
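The retry policy Bernt describes can be sketched in plain Java. This is a hypothetical helper, not part of HA-JDBC or Derby: it retries a unit of work while the driver signals a transient failure via the JDBC 4.0 SQLTransientException class, and lets everything else (including SQLNonTransientException) propagate, since retrying would be pointless there. The names `withRetry` and `RetryExample` are my own.

```java
import java.sql.SQLException;
import java.sql.SQLTransientException;
import java.util.concurrent.Callable;

// Hypothetical sketch: retry a transaction while failures are transient.
public class RetryExample {

    static <T> T withRetry(Callable<T> work, int maxAttempts) throws Exception {
        SQLTransientException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (SQLTransientException e) {
                // Transient: the cluster should recover, so retry.
                last = e;
            }
            // SQLNonTransientException (and anything else) propagates:
            // no use retrying those.
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated workload: fails transiently twice, then succeeds.
        final int[] calls = {0};
        String result = withRetry(() -> {
            if (++calls[0] < 3) {
                throw new SQLTransientException("node failed mid-transaction");
            }
            return "committed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
        // → committed after 3 attempts
    }
}
```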

I must admit that I have not read the JDBC 4.0 spec yet. Still, IMHO a 
cluster solution that requires you to add handling code to every SQL command 
you execute is not a good idea, because I think this is the job of the 
cluster solution. That way, I can make an existing application 
cluster-capable simply by changing the JDBC driver, without adding special 
code to handle a new error condition. As an application developer, I do not 
care whether a cluster node fails or not. That is the responsibility of the 
cluster administration, not of the application itself. The cluster code 
might try to redo the transaction on the failed node, or it might not. Why 
should it affect the main application, and why should it roll back 
transactions that worked fine on all other nodes? Anyway, you cannot roll 
them back if the commit failed on the last node, right?

But back to the real question: I suppose there is currently no solution to 
this problem with Derby? So I guess the only "solution" would be to patch 
Derby not to throw duplicate key exceptions?!
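Rather than patching Derby itself, one option would be to detect the duplicate-key case in the application (or in a wrapping driver) and swallow it deliberately. The sketch below assumes Derby reports duplicate keys with the standard SQLState 23505 and, under JDBC 4.0, as an SQLIntegrityConstraintViolationException; the class and method names are mine, for illustration only.

```java
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

// Hypothetical sketch: classify a duplicate-key failure instead of
// patching Derby to suppress the exception.
public class DuplicateKeyExample {

    // SQLState 23505 is the standard "duplicate key" state Derby uses.
    static final String DUPLICATE_KEY_STATE = "23505";

    static boolean isDuplicateKey(SQLException e) {
        return DUPLICATE_KEY_STATE.equals(e.getSQLState())
                || e instanceof SQLIntegrityConstraintViolationException;
    }

    public static void main(String[] args) {
        SQLException dup = new SQLIntegrityConstraintViolationException(
                "duplicate key value in a unique index", "23505");
        SQLException other = new SQLException("connection refused", "08001");
        System.out.println(isDuplicateKey(dup));   // → true
        System.out.println(isDuplicateKey(other)); // → false
    }
}
```

A cluster-aware driver could apply the same test on the replayed node and treat the duplicate key as "already applied" instead of failing the whole transaction.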
Kind regards,

Kurt Huwig (Member of the Board)
Phone 0681/96751-50, Fax 0681/96751-66

iKu Systemhaus AG, Am Römerkastell 4, 66121 Saarbrücken
District court: Saarbrücken, HRB 13240
Board: Kurt Huwig, Andreas Niederländer
Chairman of the supervisory board: Jan Bankstahl

GnuPG 1024D/99DD9468 64B1 0C5B 82BC E16E 8940  EB6D 4C32 F908 99DD 9468
