commons-dev mailing list archives

From "Inger, Matthew" <In...@Synygy.com>
Subject RE: [Math] - RealMatrix
Date Fri, 09 Apr 2004 14:09:26 GMT
Not only that, but as you mention, order matters.  Performing the same
computation in two different orders produces different outputs:


import java.text.NumberFormat;

NumberFormat fmt = NumberFormat.getInstance();
fmt.setMaximumFractionDigits(48);   // show 48 fraction digits (padded with zeros)
fmt.setMinimumFractionDigits(48);

double res = ((double) 2.0 / (double) 3.0) * (double) 14.0;
System.out.println(fmt.format(res));
// outputs: 9.333333333333332000000000000000000000000000000000

double res2 = ((double) 14.0 * (double) 2.0) / (double) 3.0;
System.out.println(fmt.format(res2));
// outputs: 9.333333333333334000000000000000000000000000000000


Conceptually and mathematically, these two expressions are identical
and should produce exactly the same result.  However, they do not (and
even using BigDecimal, they do not).
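
To make the BigDecimal point concrete, here is a small sketch (the class and
variable names are just illustrative, not commons-math code).  Since 2/3 has
no finite decimal expansion, divide() has to be given a MathContext that
rounds the quotient, and that rounding still leaves the result dependent on
the order of operations:

import java.math.BigDecimal;
import java.math.MathContext;

public class OrderDemo {
    public static void main(String[] args) {
        BigDecimal two = new BigDecimal("2");
        BigDecimal three = new BigDecimal("3");
        BigDecimal fourteen = new BigDecimal("14");

        // (2 / 3) * 14: the quotient is rounded to 16 significant digits
        // (DECIMAL64) before the multiply, so the rounding error is scaled by 14.
        BigDecimal a = two.divide(three, MathContext.DECIMAL64).multiply(fourteen);

        // (14 * 2) / 3: the product 28 is exact; only the final divide rounds.
        BigDecimal b = fourteen.multiply(two).divide(three, MathContext.DECIMAL64);

        System.out.println(a);  // 9.3333333333333338
        System.out.println(b);  // 9.333333333333333
    }
}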

-----Original Message-----
From: Al Chou [mailto:hotfusionman@yahoo.com]
Sent: Friday, April 09, 2004 9:23 AM
To: Jakarta Commons Developers List
Subject: RE: [Math] - RealMatrix


--- "Inger, Matthew" <Inger@Synygy.com> wrote:
> The basic reason I inquired is that by using doubles,
> you're limiting the precision available when doing certain
> operations.  Take the following matrix:
> 
>  [ 4 6 ]
>  [ 6 14 ]
> 
> If you try to take the inverse of that matrix, the correct
> answer is:
> 
>  [  0.7  -0.3 ]
>  [ -0.3   0.2 ]
> 
> However, by using double in Java, we get something like:
> 
>  [  0.7000000000000002 -0.3000000000000001  ]
>  [ -0.3000000000000001  0.20000000000000007 ]
> 
> Using BigDecimal instead, we might get a slightly more accurate
> result (though I admit, in most cases, people won't go to 16 digits).

A valid point, though again, usage will dictate whether such levels of
precision are necessary (also, most usage I've seen just lives with the fact
that most base-10 numbers are not exactly represented in base-2; the inverse
matrix above would probably be considered "close enough" by many if the
difference were explained by representation inaccuracy).  Essentially all
numerical computing to date has been done with, at best(!), double
precision. 
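
A quick way to see that representation point (just a sketch using
java.math.BigDecimal; the class name is made up):

import java.math.BigDecimal;

public class StoredValue {
    public static void main(String[] args) {
        // new BigDecimal(double) captures the exact binary value the double
        // actually holds, while new BigDecimal(String) holds the decimal 0.7.
        System.out.println(new BigDecimal(0.7));    // prints a value slightly below 0.7
        System.out.println(new BigDecimal("0.7"));  // prints 0.7 exactly
    }
}
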
Some techniques that could in fact increase precision in principle are, to my
knowledge, never used in practice (e.g., sorting a list of numbers before
summing, so that they can be summed from smallest to largest -- and if there's
a possibility of having both negative and positive signs, summing the
like-signed elements and then finally the resulting two opposite-signed
partial sums).  I guess performance comes into play, as well as a
mathematician's view (even though many who are not mathematicians do numerical
computing) that _that_ level of nitpickiness is just too much <g>.
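
For what it's worth, here's a rough sketch of that summation trick (the class
and method names are hypothetical, not anything in commons-math):

import java.util.Arrays;

public class OrderedSum {
    // Sum like-signed values from smallest magnitude to largest, then
    // combine the two opposite-signed partial sums once at the end.
    static double orderedSum(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);                  // ascending: negatives first, then non-negatives
        int firstNonNegative = 0;
        while (firstNonNegative < sorted.length && sorted[firstNonNegative] < 0) {
            firstNonNegative++;
        }
        double negative = 0.0;
        for (int i = firstNonNegative - 1; i >= 0; i--) {
            negative += sorted[i];            // negatives, smallest magnitude first
        }
        double positive = 0.0;
        for (int i = firstNonNegative; i < sorted.length; i++) {
            positive += sorted[i];            // non-negatives, smallest first
        }
        return positive + negative;           // single cancellation at the end
    }

    public static void main(String[] args) {
        double[] xs = { 1e16, 1.0, -1e16, 1.0 };
        System.out.println(orderedSum(xs));   // 2.0 (the exact sum)
        double naive = 0.0;
        for (double x : xs) {
            naive += x;                       // the first 1.0 is absorbed by 1e16
        }
        System.out.println(naive);            // 1.0
    }
}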

But if we were to have use cases in which exactness was paramount, very high
precision (or perhaps using a RationalNumber class) would of course be the
right thing to provide.
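
And a minimal sketch of what a RationalNumber class would buy (this Rational
type is made up for illustration, not an existing commons-math class): with
exact numerator/denominator arithmetic, both orderings of the earlier example
reduce to exactly 28/3, so operation order stops mattering.

public class RationalDemo {
    // Hypothetical exact-fraction type; overflow handling omitted for brevity.
    static final class Rational {
        final long num, den;
        Rational(long num, long den) {
            long g = gcd(Math.abs(num), Math.abs(den));
            this.num = num / g;
            this.den = den / g;
        }
        static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
        Rational multiply(Rational o) { return new Rational(num * o.num, den * o.den); }
        Rational divide(Rational o)   { return new Rational(num * o.den, den * o.num); }
        public String toString()      { return num + "/" + den; }
    }

    public static void main(String[] args) {
        Rational two = new Rational(2, 1);
        Rational three = new Rational(3, 1);
        Rational fourteen = new Rational(14, 1);
        System.out.println(two.divide(three).multiply(fourteen));   // 28/3
        System.out.println(fourteen.multiply(two).divide(three));   // 28/3
    }
}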


Al


---------------------------------------------------------------------
To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: commons-dev-help@jakarta.apache.org
