db-derby-dev mailing list archives

From Bryan Pendleton <bpendle...@amberpoint.com>
Subject Re: [jira] Commented: (DERBY-1397) Tuning Guide: Puzzling optimizer documentation
Date Fri, 16 Jun 2006 01:44:47 GMT
>  The maximum value is 2097151, which comes
>    from java.lang.Integer.MAX_VALUE / 1024.

Army, this is a *wonderful* writeup -- thank you very much!

Is it really realistic, though, to allow a setting of 2 million,
which gives the optimizer permission to try to create an
in-memory hash table of 2 gigabytes? I mean, I'd need a 64-bit JVM...
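
Just to spell out the arithmetic behind that 2 gigabyte figure (my own
back-of-the-envelope check, assuming the setting really is expressed in
kilobytes as the writeup says):

    public class MaxSettingCheck {
        public static void main(String[] args) {
            int maxSetting = Integer.MAX_VALUE / 1024;            // 2097151
            long bytes = (long) maxSetting * 1024L;               // 2147482624 bytes
            System.out.println(bytes / (1024L * 1024L) + " MB");  // prints 2047, i.e. just under 2 GB
        }
    }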

Relatedly, is it worth putting a line or two in this doc telling the
user that it would be foolish to set this value higher than the
actual physical memory they have on their machine? If I have a
256 MB machine, telling the optimizer to do a 512 MB hash join
"in memory" is probably performance suicide, no?

Also, perhaps we should have a line suggesting that if the
user decides to increase this value, they should check their JVM
memory settings and make sure they've authorized the JVM to use
that much memory (e.g., on Sun JVMs, -XmxNNNm should be set).
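
For what it's worth, an example along these lines might make that
concrete (just a sketch -- I'm assuming the property under discussion
is derby.language.maxMemoryPerTable, with the value in kilobytes, and
the 204800 / -Xmx512m numbers are purely illustrative):

    public class HashJoinMemoryExample {
        public static void main(String[] args) throws Exception {
            // Allow the optimizer to build in-memory hash tables of up to
            // ~200 MB per table; must be set before the engine boots.
            System.setProperty("derby.language.maxMemoryPerTable", "204800");

            // The JVM heap has to be able to back that, so start with
            // something like:  java -Xmx512m HashJoinMemoryExample
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            java.sql.Connection conn =
                java.sql.DriverManager.getConnection("jdbc:derby:testdb;create=true");
            conn.close();
        }
    }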

Lastly, what is the symptom if the user sets this too high? If
they tell the optimizer that it's allowed to make, for example,
a 200 MB hash table by setting this value to 200,000, then what
happens if the runtime evaluation of the query plan finds that
it can't allocate that much memory? Does the query fail? Does
Derby crash? If we can easily describe the symptom they'd get from
setting this too high, that would be nice.

thanks,

bryan

