db-ojb-dev mailing list archives

From Brian McCallister <bri...@apache.org>
Subject Re: Problem: Copy field values
Date Fri, 11 Mar 2005 14:54:25 GMT
Speaking of the second-level cache, we may want to look into making the 
second-level cache backing store pluggable. If we move to storing 
identity-keyed hash maps with serializables in them (any JDBC type), then 
we can push that out to Coherence, memcached, Ehcache, Whirlycache, etc. 
-- allowing for much more tunable second-level caching, without having to 
implement it ourselves.
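
A minimal sketch of what such a pluggable backing store could look like. All interface and class names here (CacheBackend, InMemoryBackend) are invented for illustration and are not OJB API; the point is only the shape of the contract:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical backing-store interface: the second-level cache stores one
// identity-keyed map of serializable (JDBC-typed) values per cached object.
interface CacheBackend {
    void put(Serializable identity, Map<String, Serializable> sqlValues);
    Map<String, Serializable> get(Serializable identity);
}

// Trivial in-memory implementation; a Coherence, memcached, Ehcache, or
// Whirlycache adapter would implement the same interface.
class InMemoryBackend implements CacheBackend {
    private final Map<Serializable, Map<String, Serializable>> store =
            new HashMap<>();

    public void put(Serializable identity, Map<String, Serializable> sqlValues) {
        store.put(identity, sqlValues);
    }

    public Map<String, Serializable> get(Serializable identity) {
        return store.get(identity);
    }
}

public class PluggableCacheDemo {
    public static void main(String[] args) {
        CacheBackend backend = new InMemoryBackend();
        Map<String, Serializable> row = new HashMap<>();
        row.put("NAME", "Brian");          // VARCHAR -> String
        row.put("ID", Integer.valueOf(1)); // INTEGER -> Integer
        backend.put("example.Person#1", row);
        System.out.println(backend.get("example.Person#1").get("NAME"));
    }
}
```

Because the stored values are all serializable SQL-level types, any distributed or out-of-process cache can hold them without knowing anything about the persistent classes.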


On Mar 11, 2005, at 9:44 AM, Armin Waibel wrote:

> Brian McCallister wrote:
>> On Mar 10, 2005, at 3:57 PM, Armin Waibel wrote:
>>> The basic problem is how we can make an image or a copy of a 
>>> persistent object, i.e. how to copy the object's fields.
>>> On the OJB java-field-type level, a field (of a persistent class) 
>>> could be of any class, because the user can declare a 
>>> field-conversion in the field-descriptor; thus we don't know the 
>>> field type in the persistent object.
>>> So it's not possible to image/copy field values on this level, 
>>> because the fields are not required to implement Serializable or 
>>> Cloneable.
>> Backwards-incompatible option: provide a copy function on field 
>> conversions. Provide an AbstractFieldConversion which keeps a flat 
>> field-wise copy of the custom object, but can be replaced by a more 
>> intelligent version. I like this option less than the next...
> I had the same in mind (could be an option for 1.1). Additionally, we 
> should add an equals(obj1, obj2) method to FieldConversion to compare 
> two fields on the java-field level; in AbstractFieldConversion we can 
> do the field-conversion and use equals(...) of the assigned FieldType.
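
A rough sketch of that proposal. The javaToSql/sqlToJava pair mirrors OJB's FieldConversion contract; the copy(...) and fieldsEqual(...) methods below are the hypothetical additions discussed here, and StringBuilderConversion is an invented example conversion:

```java
// Sketch of the proposed extension; not actual OJB API.
interface FieldConversion {
    Object javaToSql(Object source);
    Object sqlToJava(Object source);
}

abstract class AbstractFieldConversion implements FieldConversion {
    // Default flat copy: round-trip through the SQL representation.
    // Smarter conversions can override this with a cheaper/deeper copy.
    public Object copy(Object javaValue) {
        return sqlToJava(javaToSql(javaValue));
    }

    // Compare two java-level field values on the (known) SQL-type level.
    public boolean fieldsEqual(Object obj1, Object obj2) {
        Object a = javaToSql(obj1);
        Object b = javaToSql(obj2);
        return a == null ? b == null : a.equals(b);
    }
}

// Example: a conversion that stores a StringBuilder as VARCHAR.
class StringBuilderConversion extends AbstractFieldConversion {
    public Object javaToSql(Object source) {
        return source == null ? null : source.toString();
    }
    public Object sqlToJava(Object source) {
        return source == null ? null : new StringBuilder((String) source);
    }
}

public class FieldConversionDemo {
    public static void main(String[] args) {
        StringBuilderConversion conv = new StringBuilderConversion();
        StringBuilder original = new StringBuilder("hello");
        StringBuilder copy = (StringBuilder) conv.copy(original);
        System.out.println(copy != original);                 // independent copy
        System.out.println(conv.fieldsEqual(original, copy)); // equal on SQL level
    }
}
```

This is why comparing on the SQL level works even for field types that implement neither equals() nor Cloneable: the comparison is delegated to the well-known SQL-type representation.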
>>> If we convert the fields to the sql-field-type using the javaToSql 
>>> field-conversion, we know the type of each field (performance issue 
>>> when using complex field-conversions?), because it is declared in 
>>> the field-descriptor and we are using the JDBC type / Java type 
>>> mapping of the JDBC specification:
>>> VARCHAR --> String
>>> VARBINARY --> byte[]
>>> DATE --> java.sql.Date
>> Caching the JDBC-type values makes the most sense to me, and going 
>> ahead and doing the conversion. I don't think the second-level cache 
>> should keep entity instances around, just the SQL values. Running 
>> them through the conversion process is still much cheaper than 
>> hitting the db.
> Great note, Brian! Agreed, this makes sense, and it exposes a bug in 
> the current TLCacheImpl. Currently the second-level cache caches "flat" 
> objects, but indeed it would be better to use a HashMap and cache the 
> SQL-type values by field name.
> This will prevent data corruption if someone uses different metadata 
> mappings (using the "per thread mode" in MetadataManager) for the same 
> class with different field-conversions.
> Armin
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: ojb-dev-unsubscribe@db.apache.org
> For additional commands, e-mail: ojb-dev-help@db.apache.org
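
The fix Armin describes could look roughly like the sketch below. Class and field names are illustrative only; the point is that the cache entry holds SQL-typed values keyed by field name, so each reader re-runs its own sqlToJava conversion on a cache hit instead of receiving an object materialized with someone else's field-conversions:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a cache entry holding SQL-type values by field name
// rather than a "flat" copy of the entity itself.
class CachedRow {
    private final Map<String, Object> sqlValuesByField = new HashMap<>();

    void put(String fieldName, Object sqlValue) {
        sqlValuesByField.put(fieldName, sqlValue);
    }

    Object get(String fieldName) {
        return sqlValuesByField.get(fieldName);
    }
}

public class SqlValueCacheDemo {
    public static void main(String[] args) {
        // Written once, using the writer's javaToSql conversion
        // (the DATE value is simplified to a plain string here).
        CachedRow row = new CachedRow();
        row.put("BIRTH_DATE", "2005-03-11");

        // On a hit, each thread applies its *own* sqlToJava conversion,
        // so per-thread metadata with different field-conversions stays
        // correct: the cache never holds a converted java-level object.
        System.out.println(row.get("BIRTH_DATE"));
    }
}
```
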

