jackrabbit-users mailing list archives

From Kisu San <Kishore....@gmail.com>
Subject Jackrabbit suitability for large data models
Date Wed, 14 Nov 2007 15:05:10 GMT

Dear All,

I have a big question; in fact, it is the question I should have asked first.
(I originally posted this as a reply to my other problem, but since it is a
separate issue I thought it deserved a thread of its own.)

I am trying to assess the suitability of Jackrabbit for a very large data
model for an automotive company.

I have a lot of entities such as Models, Variants, Countries of Sale, Fault
Codes, Manuals, Parts, Dealers, and so on. The whole purpose of this new
implementation is to store large documents in different languages. Searches
will be performed over complex relations (like several joins in an RDBMS) to
retrieve the relevant document.
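To illustrate the kind of search I mean: a single-predicate lookup is easy
with the standard javax.jcr query API, but the join-like lookups across
entities are what I would apparently have to code by hand. This is only a
sketch against JCR 1.0 XPath; the node type, property names, and values below
are invented for illustration:

```java
import javax.jcr.NodeIterator;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class SearchSketch {
    // Sketch: find all "manual" documents in German. Property names
    // (docType, language) are hypothetical, not a real schema.
    static NodeIterator findGermanManuals(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
            "//element(*, nt:unstructured)[@docType='manual' and @language='de']",
            Query.XPATH);
        return q.execute().getNodes();
    }
}
```

A query like this covers one entity at a time; anything resembling a
multi-table join would still need application code on top.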

I was trying to implement all of these entities as nodes (including reference
or standard data) and to define the relations between these nodes, which
could be one-to-many, many-to-many, or one-to-one.

Is Jackrabbit suitable for this kind of implementation? What I find
particularly difficult is resolving the references or relations between
nodes: I will end up writing code to resolve these references and going
through a lot of iterations.
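For what it's worth, the reference-resolving code I have in mind looks
roughly like this. It is a minimal sketch using the standard javax.jcr API
(REFERENCE properties and Node.getReferences()); the paths, node names, and
property names are invented for illustration:

```java
import javax.jcr.Node;
import javax.jcr.Property;
import javax.jcr.PropertyIterator;
import javax.jcr.Session;

public class ReferenceSketch {
    // Sketch: link a Variant node to its Model with a REFERENCE property,
    // then resolve the relation in both directions. Names are hypothetical.
    static void linkAndResolve(Session session) throws Exception {
        Node root = session.getRootNode();

        Node model = root.addNode("models").addNode("golf");
        model.addMixin("mix:referenceable"); // required for REFERENCE targets

        Node variant = root.addNode("variants").addNode("golf-gti");
        variant.setProperty("model", model); // stores a REFERENCE property
        session.save();

        // Forward resolution: property -> target node
        Node target = variant.getProperty("model").getNode();

        // Reverse resolution: iterate all properties pointing at this node
        for (PropertyIterator it = model.getReferences(); it.hasNext();) {
            Property p = it.nextProperty();
            Node referrer = p.getParent(); // e.g. the variant node
        }
    }
}
```

Each many-to-many relation means either multi-valued REFERENCE properties or
intermediate link nodes, and every traversal is a loop like the one above.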

Can anyone advise me whether it is a good idea to implement this model
entirely in Jackrabbit, or whether it would be better to use an RDBMS as the
data store and Jackrabbit as the document store?

Thanks in Advance
