db-derby-user mailing list archives

From "Bernt M. Johnsen" <Bernt.John...@Sun.COM>
Subject Re: REPLACE INTO/INSERT IF NOT EXIST
Date Tue, 19 Jun 2007 13:22:50 GMT
>>>>>>>>>>>> Kurt Huwig wrote (2007-06-19 14:44:21):
> I must admit that I have not read the JDBC 4.0 specs yet. Still,
> IMHO having a cluster solution that requires you to add handling
> code to every SQL command you execute is not a good idea, because I
> think this is the job of the cluster solution.

All well-behaved JDBC applications should assume that any transaction
may fail (note "transaction", not "SQL command"). That's a fact, and
that's why the JDBC expert group introduced the notion of transient
vs. non-transient exceptions. It is also a fact that clustered db
solutions may have more transaction failure scenarios than traditional
SQL databases (believe me, I have worked with real clustered HA
databases).
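
To illustrate what I mean, here is a minimal sketch of transaction-level
retry on top of a JDBC 4.0 driver. The TransactionRetry/Work names and
the retry policy are just made up for the example; the point is only
that the retry decision hinges on SQLTransientException vs. a plain
SQLException:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLTransientException;

// Minimal retry helper for the "any transaction may fail" case.
// TransactionRetry and Work are names invented for this example.
public final class TransactionRetry {

    public interface Work {
        void run(Connection conn) throws SQLException;
    }

    public static void runWithRetry(Connection conn, Work work, int maxAttempts)
            throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                conn.setAutoCommit(false);
                work.run(conn);
                conn.commit();
                return;
            } catch (SQLTransientException e) {
                conn.rollback();
                if (attempt >= maxAttempts) {
                    throw e;      // still failing after several attempts
                }
                // transient failure: the same transaction may well
                // succeed if it is simply run again
            } catch (SQLException e) {
                conn.rollback();
                throw e;          // non-transient: retrying will not help
            }
        }
    }
}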

> This way, I can simply transfer an existing application to support a
> cluster just by changing the JDBC driver and do not have to add
> special code for handling a new error condition. As an application
> developer, I do not care whether a cluster node fails or not. This is
> the responsibility of the cluster administration, not of the
> application itself.

That's obvious. Node failure should be handled by the cluster code,
but your application still has to handle transaction failure. If the
"HA-JDBC database cluster can lose a node without failing/corrupting
open transactions" (quote from the docs), that's great, but in your
case you have two conflicting transactions on top of HA-JDBC, and one
of them *should* fail with "duplicate key".
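
To make that concrete: an application can get "INSERT IF NOT EXIST"
behaviour on plain Derby by attempting the INSERT and treating the
duplicate-key failure as the signal to UPDATE instead. A rough sketch
(the CUSTOMERS table and its columns are made up for the example;
Derby reports duplicate keys with SQLState 23505):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InsertIfNotExists {

    // Hypothetical table: CUSTOMERS(ID INT PRIMARY KEY, NAME VARCHAR(64)).
    // Try the INSERT first; if a concurrent transaction already inserted
    // the same key, Derby raises SQLState 23505 (duplicate key) and we
    // fall back to an UPDATE of the existing row.
    static void insertOrUpdate(Connection conn, int id, String name)
            throws SQLException {
        try (PreparedStatement ins = conn.prepareStatement(
                "INSERT INTO CUSTOMERS (ID, NAME) VALUES (?, ?)")) {
            ins.setInt(1, id);
            ins.setString(2, name);
            ins.executeUpdate();
        } catch (SQLException e) {
            if (!"23505".equals(e.getSQLState())) {
                throw e;               // some other failure: rethrow
            }
            // duplicate key: the row is already there, so update it instead
            try (PreparedStatement upd = conn.prepareStatement(
                    "UPDATE CUSTOMERS SET NAME = ? WHERE ID = ?")) {
                upd.setString(1, name);
                upd.setInt(2, id);
                upd.executeUpdate();
            }
        }
    }
}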

> The cluster code might try to redo the transaction on the failed
> node or not. Why should it impact the main application, and why
> should it roll back transactions that worked fine on all nodes?

As seen from your application's point of view, HA-JDBC presents a
virtual database: a transaction should either be committed or rolled
back, and the cluster should be in a consistent state after the
transaction. It seems to me that HA-JDBC tries to maintain that by
dropping a node in your case. I don't think that's a good
implementation, since SQLExceptions are to be expected from any
database (e.g. since your app is multithreaded, you run the risk of
deadlocks, which may time out).

> Anyway, you cannot roll them back if the commit failed on the last
> node, right?

There are standard techniques (e.g. two-phase commit) to deal with that.
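
For the record, here is a very rough sketch of the two-phase commit
idea. Node, prepare, commit and rollback are made-up names, not
HA-JDBC or Derby APIs; the point is just that nothing is made visible
anywhere until every node has durably prepared:

import java.util.List;

// Sketch of the two-phase commit idea only, with invented names.
public class TwoPhaseCommitSketch {

    interface Node {
        boolean prepare();   // durably record the transaction, vote yes/no
        void commit();       // make the prepared transaction visible
        void rollback();     // discard the prepared transaction
    }

    static void commitOnAllNodes(List<Node> nodes) {
        boolean allPrepared = true;
        for (Node n : nodes) {
            if (!n.prepare()) {
                allPrepared = false;
                break;
            }
        }
        if (allPrepared) {
            for (Node n : nodes) {
                n.commit();      // safe: every node has already prepared
            }
        } else {
            for (Node n : nodes) {
                n.rollback();    // at least one node could not prepare
            }
        }
    }
}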

> But back to the real question, I suppose there is currently no solution for 
> this problem with Derby? So I guess the only "solution" would be to patch 
> Derby not to throw duplicate key exceptions?!

With the current behaviour of HA-JDBC I can't see a way around your
problem. 

-- 
Bernt Marius Johnsen, Database Technology Group, 
Staff Engineer, Technical Lead Derby/Java DB
Sun Microsystems, Trondheim, Norway
