commons-user mailing list archives

From: Ted Dunning <ted.dunn...@gmail.com>
Subject: Re: Why not BigDecimal?
Date: Fri, 12 Feb 2010 20:08:45 GMT
It is not a precision issue.  R and commons-math use different algorithms
with the same underlying numerical implementation.

It is even an open question which result is better.  R has lots of
credibility, but I have found cases where it lacked precision (and I coded
up a patch that was accepted).

Unbounded-precision integers and rationals are very useful, but not usually
for large-scale numerical programming.  Except in a very few cases, if you
need more than 17 digits of precision, you have other very serious problems
that extra precision won't fix.
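
As a concrete sketch of where that ceiling sits (illustrative only; the class
name is made up and nothing here comes from commons-math): a double cannot
resolve an increment much below its ~16th significant digit, while BigDecimal
keeps every decimal digit it is handed.

    import java.math.BigDecimal;

    public class DoubleDigits {
        public static void main(String[] args) {
            // The ULP (unit in the last place) of 1.0 is about 2.2e-16,
            // so adding 1e-18 to 1.0 is simply lost in double arithmetic.
            double d = 1.0 + 1e-18;
            System.out.println(d == 1.0);       // true: the 1e-18 vanished
            System.out.println(Math.ulp(1.0));  // 2.220446049250313E-16

            // BigDecimal keeps the full decimal value, at the cost of speed
            // and of having to manage scale and rounding explicitly.
            BigDecimal b = BigDecimal.ONE.add(new BigDecimal("1e-18"));
            System.out.println(b);              // 1.000000000000000001
        }
    }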

On Fri, Feb 12, 2010 at 1:40 AM, Andy Turner <A.G.D.Turner@leeds.ac.uk> wrote:

> Interesting that this is a precision issue. I'm not surprised; depending on
> what you are doing, double precision may not be enough. It depends a lot on
> how the calculations are broken into smaller parts. BigDecimal is
> fantastically useful...
>
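
As an aside on the point above about how a calculation is broken into parts
(again just an illustrative sketch, not anything from the thread or from
commons-math): in double arithmetic the grouping of a sum changes the result,
whereas the same decimal literals added as BigDecimal values stay exact.

    import java.math.BigDecimal;

    public class GroupingMatters {
        public static void main(String[] args) {
            // Double addition is not associative: how the work is split
            // into intermediate sums changes the final answer.
            System.out.println((0.1 + 0.2) + 0.3);  // 0.6000000000000001
            System.out.println(0.1 + (0.2 + 0.3));  // 0.6

            // The same decimal literals, taken as BigDecimal, add exactly,
            // so the grouping no longer matters.
            BigDecimal a = new BigDecimal("0.1");
            BigDecimal b = new BigDecimal("0.2");
            BigDecimal c = new BigDecimal("0.3");
            System.out.println(a.add(b).add(c));    // 0.6
            System.out.println(a.add(b.add(c)));    // 0.6
        }
    }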



-- 
Ted Dunning, CTO
DeepDyve
