archiva-dev mailing list archives

From "Jesse McConnell" <>
Subject Re: Archiva Database future.
Date Fri, 09 Mar 2007 17:07:00 GMT
well, at least from my experience on the matter...

If you're willing to manage the database and the data in and out
manually, you're generally better off using a lower-level solution.
It's almost always clear how it works; you're generally dealing with
raw SQL types and Java objects, so it's easier to know what's going on.
Databases have been making it easier to trace what's happening with a
query, so it's easier to tune if it's that important to eke out
performance.  Sure, most ORM solutions allow you to directly
query things through some SQL-like system, but if you have to deal
with a database, raw SQL is generally the more established mechanism
(again, imo) unless you're digging into funky db-specific stuff.

ORMs are pretty nice in apps like Continuum, where you have full
control over the project model going in and out of the database, or
anytime you just don't want to mess with the underlying database or
deal with CRUD stuff.  Archiva is a bit different in that respect,
since you're actually thinking of using the maven-artifact beans from
Maven for reading in and out of the Archiva database, or some
combination of that.  Also, the Continuum reporting is currently all
object based.

One thought: if you're going to have a model of objects in
Archiva that are mapped to database tables directly, would it make
sense to have a mapping between the unenhanced objects you're talking
about in maven-artifact and a set of enhanced classes that go in and
out of the database?  dunno, just thought of that looking at your
generation-heavy process set there.
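As a rough sketch of that idea (all class names here are hypothetical, not actual maven-artifact or Archiva types), the mapping layer could be as thin as:

```java
// Hypothetical plain bean, standing in for an unenhanced maven-artifact object.
class UnenhancedArtifact {
    String groupId;
    String artifactId;
    String version;

    UnenhancedArtifact(String groupId, String artifactId, String version) {
        this.groupId = groupId;
        this.artifactId = artifactId;
        this.version = version;
    }
}

// Hypothetical persistence-side class: same data plus database bookkeeping,
// so O/RM enhancement never leaks into the shared model.
class PersistedArtifact {
    long id; // surrogate key owned by the database layer
    String groupId;
    String artifactId;
    String version;
}

// The mapping layer: translates between the two worlds at the DB boundary.
class ArtifactMapper {
    static PersistedArtifact toPersisted(UnenhancedArtifact a) {
        PersistedArtifact p = new PersistedArtifact();
        p.groupId = a.groupId;
        p.artifactId = a.artifactId;
        p.version = a.version;
        return p;
    }

    static UnenhancedArtifact toModel(PersistedArtifact p) {
        return new UnenhancedArtifact(p.groupId, p.artifactId, p.version);
    }
}
```

The point is only that the translation is mechanical, so it could be generated alongside the rest of the model code.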

Ultimately, in my opinion, anytime you can get away with serving a web
request without a big object creation or reading layer, you're better
off.  But then I am biased, given my history in the area, where we spurned
the use of ORM and went straight to db queries or stored procedures
wherever we could to maximize performance in large-volume situations,
for things like session manglement, etc.

Speaking in terms of iBatis, I wanted to throw out one bonus I
thought of, having worked with it a little bit recently: it has
the nice aspect that the entire glue layer between the objects and
tables is SQL in XML files.  This makes it easy for someone with db
experience to audit the queries and contribute (and test!) performance
enhancement patches, without needing any Java experience at that point.
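To illustrate that glue layer, here is a rough sketch of an iBatis 2.x sqlMap; the namespace, table, class, and statement names are made up for the example, not Archiva's actual schema:

```xml
<sqlMap namespace="ArtifactDAO">

  <!-- Result mapping: table columns to bean properties. -->
  <resultMap id="artifactResult" class="org.example.PersistedArtifact">
    <result property="groupId"    column="GROUP_ID"/>
    <result property="artifactId" column="ARTIFACT_ID"/>
    <result property="version"    column="VERSION"/>
  </resultMap>

  <!-- Plain SQL, auditable and tunable by a DBA with no Java involved.
       A read-only select like this is also the shape a reporting query
       would take. -->
  <select id="selectByGroup" parameterClass="string" resultMap="artifactResult">
    SELECT GROUP_ID, ARTIFACT_ID, VERSION
    FROM ARTIFACT
    WHERE GROUP_ID = #value#
    ORDER BY ARTIFACT_ID, VERSION
  </select>

  <insert id="insertArtifact" parameterClass="org.example.PersistedArtifact">
    INSERT INTO ARTIFACT (GROUP_ID, ARTIFACT_ID, VERSION)
    VALUES (#groupId#, #artifactId#, #version#)
  </insert>

</sqlMap>
```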

my 2 cents,


On 3/9/07, Joakim Erdfelt <> wrote:
> Databases in Archiva.
> I need *desperately* to create a proper database for Archiva. Relying on
> the Lucene database for all things in Archiva is not cutting it.
> But I'm waffling on the database O/RM technology to use.
> Here are some Archiva requirements for the O/RM db technology.
>  1) Need to be able to handle objects managed outside of Archiva.
>  2) Need to be able to work with objects managed by Archiva.
>  3) Needs to work with objects without enhancement by the O/RM.
>  4) Needs to support a wide variety of JDBC datasources.
>  5) Needs to be ASL license compatible.
>  6) Needs to be Open Source.
>  7) Need ability to upgrade schema of previously installed Archiva.
>  8) Needs to be quick.
>  9) Needs to be an active and supported project.
> 10) Need to support arbitrary lookups across DB tables for reporting
>     reasons.
> So, when I looked at the technologies out there, this is what I see.
> JPOX: Violates #3, #7, #8, #10.
> Hibernate: Violates #3, #5, #7, #10
> OJB: Violates #3, #7
> OpenJPA: Violates #3, #7, #10
> iBatis: --
> The problem I have with most of the O/RM technologies is around #3.
> The long term plans of Archiva are to create supporting technologies
> around the XML-RPC interface to the data that Archiva is tracking.
> Having enhanced objects would force the clients of Archiva to carry
> these enhanced classes as well as the O/RM supporting jars.  An
> unacceptable situation.
> Another big concern is #7, or how to upgrade the database schema between
> Archiva releases.  Most of the O/RM technologies above do not make it
> clear how they do that, so I dinged them.  iBatis, on the other hand, makes
> it extremely clear: it doesn't manage table creation at all.  The developer
> does.
> The last concern is #10, or how well the O/RM technology can deal with
> arbitrary and dynamic lookups into the tables without working with
> the objects, such as the needs of a reporting system.  I would like
> to hook up the database tables to the various reporting libraries
> and presentation widgets without having to worry about those queries
> being invalidated by changes made to the schema by the O/RM technology.
> One note: all of the reporting usage patterns against the database that
> I see are read-only in nature.
>   The process I am proposing is to use Modello and iBatis.
>   * Create our archiva-model using modello.
>   * Generate java files for model definition.
>   * Generate Create Table sqlMap.xml files.
>     - One for each database type (hsqldb, derby, mysql, oracle, etc...)
>     - Only for version 1.0.0 in modello model.
>   * Generate Update Table sqlMap.xml files.
>     - One for each database type (hsqldb, derby, mysql, oracle, etc...)
>     - For each versions above 1.0.0 in modello model.
>   * Generate CRUD sqlMap.xml files.
>     - One for each database type (hsqldb, derby, mysql, oracle, etc...)
>     - One for each object in modello model.
>   * Generate java source for table version. (to aid in upgrade logic)
>   * Generate java source for ibatis DAO layer.
>     - One for each object in modello model.
>   * Generate java source for sqlmap table create / update usage.
> I am going to be working towards this starting Monday, unless anyone
> has suggestions or criticism of this approach.
> ( /me awaits the pearls of knowledge from trygvis )
> - Joakim
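To visualize the generation steps Joakim proposes above, a generated Create/Update Table sqlMap for hsqldb might look something like this; the table names, statement ids, and the 1.1.0 upgrade column are all invented for illustration:

```xml
<!-- Hypothetical generated sqlMap for hsqldb, modello model version 1.0.0 -->
<sqlMap namespace="ArchivaSchema">

  <!-- Create Table statement, generated once for the 1.0.0 model. -->
  <statement id="createArtifactTable">
    CREATE TABLE ARTIFACT (
      GROUP_ID    VARCHAR(255) NOT NULL,
      ARTIFACT_ID VARCHAR(255) NOT NULL,
      VERSION     VARCHAR(64)  NOT NULL
    )
  </statement>

  <!-- Schema version bookkeeping, so the generated upgrade logic
       knows which Update Table statements still need to run. -->
  <statement id="createSchemaVersionTable">
    CREATE TABLE ARCHIVA_SCHEMA_VERSION ( MODEL_VERSION VARCHAR(20) NOT NULL )
  </statement>

  <!-- Update Table statement, generated for a hypothetical 1.1.0 model
       that added a WHEN_GATHERED column. -->
  <statement id="upgradeArtifactTable_1_1_0">
    ALTER TABLE ARTIFACT ADD COLUMN WHEN_GATHERED TIMESTAMP
  </statement>

</sqlMap>
```

One such file per database type would keep the db-specific DDL dialect out of the Java code entirely.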

jesse mcconnell
