db-derby-dev mailing list archives

From "Satheesh Bandaram" <banda...@gmail.com>
Subject Re: [jira] Updated: (DERBY-1107) For existing databases JDBC metadata queries do not get updated properly between maintenance versions.
Date Sat, 01 Apr 2006 19:53:40 GMT
On 3/30/06, Knut Anders Hatlen <Knut.Hatlen@sun.com> wrote:
> Satheesh Bandaram <satheesh@Sourcery.Org> writes:
> I don't think there has been any metadata changes between maintenance
> releases, and I suppose bug fixes should be the only changes between
> maintenance releases. And I think it is more a downgrade issue than an
> upgrade issue. I mean, for upgrade it's no problem if we don't add
> upgrade code until a metadata change is made. The problem with this
> approach is that we will still use the new query after a downgrade. As
> long as the new query doesn't use new functions, new tables or new
> syntax (not likely that we will add such things in a maintenance
> release, I guess), the only consequence is that the bug is fixed even
> after we downgrade.

This seems like a good problem to have? Like you mentioned, adding new
tables or new syntax is unlikely between maintenance or point releases. Now
we also have a mechanism to add major_minor version-specific queries to
metadata and still have soft upgrade work. As far as I know, Cloudscape or
Derby hasn't yet needed to change metadata queries in an incompatible way
between maintenance versions. Maybe others know more...?
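The version-specific lookup mentioned above could work roughly like this: a base
key in metadata.properties plus an optional per-release override. This is a
hypothetical sketch only; the class, method names, and key-naming scheme are
illustrative and not Derby's actual implementation.

```java
import java.util.Properties;

// Hypothetical sketch of major_minor version-specific metadata queries:
// look for an override key "<name>.<major>.<minor>" first, then fall
// back to the base key, so a soft-upgraded engine can keep serving the
// query text that matches the on-disk database version.
public class VersionedQueries {
    private final Properties props;

    public VersionedQueries(Properties props) {
        this.props = props;
    }

    /** Returns the query for the given metadata call and engine version. */
    public String lookup(String name, int major, int minor) {
        String versioned = props.getProperty(name + "." + major + "." + minor);
        return versioned != null ? versioned : props.getProperty(name);
    }
}
```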

> Since we can't drop and regenerate the statements in a read-only
> database, we have two options
>   1) Leave the current behaviour of Derby. That is, metadata bugs
>      won't be fixed for read-only databases.
>   2) Make EmbedDatabaseMetaData do the same in read-only upgrade mode
>      as it does in soft upgrade mode. That is, read metadata queries
>      from metadata.properties.
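Option 2 above amounts to preparing the metadata query text on the fly from
metadata.properties instead of executing the stored prepared statement. A
minimal self-contained sketch, assuming a plain java.util.Properties file
(the class and method names here are illustrative, not Derby's actual API):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical sketch of option 2: in read-only (or soft) upgrade mode,
// fetch the query text for a metadata call from metadata.properties and
// compile it on demand, rather than running the on-disk SPS plan.
public class MetadataQueryLoader {
    private final Properties queries = new Properties();

    // Derby would load the real file from its classpath; the sketch
    // accepts any stream so it stays self-contained.
    public MetadataQueryLoader(InputStream source) throws IOException {
        queries.load(source);
    }

    /** Returns the SQL text for a metadata call, e.g. "getTables". */
    public String queryFor(String name) {
        String sql = queries.getProperty(name);
        if (sql == null) {
            throw new IllegalArgumentException("No metadata query: " + name);
        }
        return sql;
    }
}
```

The cost Knut mentions below follows from this: every metadata call pays for a
fresh prepare instead of reusing a stored plan, even for unmodified queries.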

I think either option would work, each having its own advantages... 1) is
probably best for performance. 2) has the advantage of fixing the currently
open bug, and it also avoids an issue with recompiling SPS statements: there
is code in Derby that automatically decides when to recompile an SPS, and it
sometimes gets triggered after a large number of executions of a statement
(just to refresh the plan, in case any schema change, like adding an index,
has happened). This check is useless for system SPSs, since changing their
schema is not possible, and it causes the bug in read-only databases (while
trying to write the recompiled plan back to disk!)
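The recompile heuristic described above can be sketched as an execution
counter that periodically marks a plan stale. This is a simplified,
hypothetical model only; the threshold and class names are illustrative,
not Derby's actual constants or code.

```java
// Hypothetical sketch of the execution-count recompile trigger: a plan
// is flagged for recompilation every N executions so it can pick up
// schema changes such as new indexes. For a system SPS the schema never
// changes, so the trigger is pure overhead -- and in a read-only
// database, writing the recompiled plan back to disk is what fails.
public class PlanCache {
    // Illustrative threshold; Derby's real interval may differ.
    private static final int RECOMPILE_INTERVAL = 100;

    private int executionCount = 0;
    private final boolean systemSPS;

    public PlanCache(boolean systemSPS) {
        this.systemSPS = systemSPS;
    }

    /** Returns true when the plan should be recompiled before executing. */
    public boolean execute() {
        executionCount++;
        // Skipping the check for system SPSs would sidestep the
        // read-only write-back bug, since their schema cannot change.
        if (systemSPS) {
            return false;
        }
        return executionCount % RECOMPILE_INTERVAL == 0;
    }
}
```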

> Option 1 keeps the bugs, option 2 gives lower run-time performance
> even for the unmodified queries.

Hah... I just saw this after I typed everything just above... You think of
everything :)

> > 2) Performance considerations. Could we
> > instead make it as need arises? Like if version is going from 10.1.2 to
> > 10.1.3 (just example) Hopefully these are rare cases.
> Is performance a big issue? This will happen once per version
> change. I agree that it could be solved when the need arises. See my
> comments above.

Performance could be a small issue the first time after a version change. I'm
not sure how long recompiling all the system SPSs takes, but it could be
several seconds? If we are proposing recompiling between maintenance
releases only, that may be OK... but this cost could be noticed as odd for
point-release changes.

Looks like we agree this issue could be solved when the real need arises...
I noticed Rick included this issue as one of the reasons for the code-freeze
delay. Rick, do you think this issue is more serious?

While the use of SPSs for metadata has some serious advantages, it does cause
complications too, I think. A necessary evil?


> Knut Anders
