> On Mon, 15 Jul 2002, Bill Stoddard wrote:
>
> > > 1. gettimeofday
> > >    (fast, no loss of accuracy)
> >
> > We cannot avoid this, right?
>
> Right.
>
> > > 2. 64-bit multiplication to build an apr_time_t
> > >    (slow (on lots of current platforms), no loss of accuracy)
> >
> > Do we eliminate this by representing apr_time_t as busec?
>
> Yes.
>
> Rather than having to do (seconds * 1000000 + microseconds), you just do
> ((seconds << 20) | binarymicroseconds).
>
> > > 3. 64-bit shifts to get approximate seconds
> > >    (fast, but loss of accuracy)
> >
> > If you convert from microseconds to integer seconds (which is what httpd
> > requires), you lose -resolution- no matter how you do it. If the accuracy
> > you lose is smaller than the resolution, then what does it matter that
> > you lose some accuracy?
>
> It's not always smaller:
>
> 30 seconds = 30,000,000 microseconds
>
> 30000000 base 10 = 11100 1001110000 1110000000 base 2
>
> 11100 1001110000 1110000000 >> 20 = 11100 base 2
>
> 11100 base 2 = 28 base 10
>
> So your approximation of 30 seconds gets turned into 28 seconds. Oops.
> You'd have to do lots of extra work to make sure you always accounted for
> the "lost" 48576 microseconds per second. I've been thinking about it all
> day and have yet to come up with a non-dividing way to do that.

That's a valid concern. But 32-bit divides (on 32-bit hardware) will
(should?) be substantially less expensive than emulating 64-bit divides on
32-bit hardware. Another potential solution is to add some constant factor
based on the result. Not perfect, but good enough?

Bill