avalon-dev mailing list archives

From: Berin Loritsch <blorit...@apache.org>
Subject: Re: Performance questions about ECM
Date: Fri, 08 Mar 2002 20:34:02 GMT
Leo Sutic wrote:
> 
>>All GCs are blocking, and stop all code from running during the GC
>>time.  Considering that a full GC cycle can take upwards of a full
>>second, that is one request per thread that will fail during that
>>time.
>>
>>Considering that the average partial GC can take up to 10 milliseconds,
>>that means you have to aim for no more than 990 milliseconds per
>>request.  In order to really test whether a full GC happens every so
>>many seconds (as opposed to only when we call System.gc()), we need a
>>test that runs for about an hour.
>>
>>When JDK 1.5 comes, we will finally have a GC that doesn't block all
>>threads.  Until then we have to wait impatiently.
>>
>>Also, if you take 0.7 ms (the minimum time for a partial GC in -server
>>mode) and multiply it by the number of partial garbage collections in
>>a day (86,400,000 / 150 = 576,000), you come up with 403,200
>>milliseconds of downtime.  That is more than 15 times the allowed
>>downtime.
>>
>>Let's also assume we have 100 threads in the system.  That means we can
>>supposedly run 100 simultaneous requests.  Let us also assume that the
>>average request time is 990 milliseconds.  You will have 8,727,272
>>requests per day processed running at full bore (no GC action).  Now,
>>let's take away the minimum time spent in GC.  We can process
>>8,686,545--we lost 40,727 transactions.  When we take the new
>>number and figure out how long the average transaction takes when
>>factoring in GC, we get 994 milliseconds per request--so we are safe.
>>
> 
> You cannot count it that way. Average response time does not matter here.
> 
> If request 1 takes 1 ms and request 2 takes 1001 ms, you lose. Even though
> the average response time is ~500 ms, well below the threshold, you have just
> botched 50% of the requests.
> 
> The fact is that you will probably lose every request that happens to be
> running when a serious GC kicks in, as they will all be delayed by whatever
> time the GC takes.
> 
> The only way out of this, as I see it, is to do incremental processing
> of the request, i.e. get a rough response done in 100 ms, then improve it
> until time runs out.
> 
> So just figure out the percentage of the total number of requests that
> will overlap an intense GC period, and that's your downtime. You gain
> a lot by having many fast, serialized requests and lose a lot by having
> many parallel, slow requests (as they are more likely to straddle a GC
> period). (This paragraph is just a guess. I have not used any formulae,
> and not really analyzed the problem.)


The sizing algorithms I gave work well when trying to size hardware/JVM
strategies for existing software.  Now, considering the example above,
where 8,727,272 requests are processed at an average response time of
990 ms, we can afford only 2,618 responses that exceed 1 second.
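
For reference, here is roughly how those numbers fall out of the figures
above (100 threads, 990 ms average request, a 0.7 ms partial GC every
150 ms).  This is just a back-of-envelope sketch, not anything from ECM
itself:

// Back-of-envelope estimate reproducing the figures quoted above.
public class GcSizingEstimate {
    public static void main(String[] args) {
        double msPerDay       = 86400000.0;
        int    threads        = 100;     // simultaneous requests
        double avgRequestMs   = 990.0;   // target average response time
        double partialGcMs    = 0.7;     // minimum partial GC pause (-server)
        double partialGcEvery = 150.0;   // ms between partial collections

        // Partial GC pauses accumulated over one day.
        double gcCount   = msPerDay / partialGcEvery;   // ~576,000
        double gcMsTotal = gcCount * partialGcMs;       // ~403,200 ms

        // Throughput with and without GC pauses.
        double ideal  = threads * msPerDay / avgRequestMs;    // ~8,727,272
        double lost   = threads * gcMsTotal / avgRequestMs;   // ~40,727
        double actual = ideal - lost;                         // ~8,686,545

        // Effective average response time once GC is factored in.
        double effectiveMs = threads * msPerDay / actual;     // ~994 ms

        System.out.println("GC pause total (ms/day): " + gcMsTotal);
        System.out.println("Requests/day ideal:  " + ideal);
        System.out.println("Requests/day actual: " + actual);
        System.out.println("Effective avg response (ms): " + effectiveMs);
    }
}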

While I agree that incremental processing works best when the primary
concern is scalability, it does not necessarily work as well when the
concern is absolute processing time.  So, before we can really delve into
the specifics of this system, we need to know what the average response
time is--and then we need to understand exactly how many responses we are
expected to process in any given period.
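
One rough way to put Leo's straddling point into numbers (this is a
guessed model, not a measurement, and the full-GC period below is made
up): if full collections of length P occur every T milliseconds, a
request of length d that starts at a random moment overlaps a pause with
probability of roughly (d + P) / T, so slow requests are hit far more
often than fast ones.  Something like:

// Guessed model: probability that a request overlaps a full-GC pause.
public class GcStraddleEstimate {
    // requestMs: request length; pauseMs: full GC pause length;
    // periodMs: time between full GCs.
    static double fractionAffected(double requestMs, double pauseMs, double periodMs) {
        return (requestMs + pauseMs) / periodMs;
    }

    public static void main(String[] args) {
        double pauseMs  = 1000.0;    // assume a ~1 second full collection
        double periodMs = 3600000.0; // assume one full GC per hour (made-up figure)

        System.out.println("10 ms requests affected:  "
                + (100 * fractionAffected(10, pauseMs, periodMs)) + "%");
        System.out.println("990 ms requests affected: "
                + (100 * fractionAffected(990, pauseMs, periodMs)) + "%");
    }
}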

It can be done, but it isn't easy.
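
For what it's worth, the incremental approach Leo describes would look
something like the sketch below--compute a rough answer quickly, then
refine it until the response deadline arrives.  The class and method
names here are hypothetical and have nothing to do with ECM's actual
interfaces:

// Hypothetical sketch of the "incremental processing" idea: produce a
// rough result fast, then keep refining it until the time budget is spent.
public abstract class IncrementalRequest {
    /** Produce an initial, rough result as quickly as possible. */
    protected abstract Object roughResult();

    /** Refine the current result by one step and return the improvement. */
    protected abstract Object refine(Object current);

    public Object process(long budgetMs) {
        long deadline = System.currentTimeMillis() + budgetMs;
        Object result = roughResult();
        // If a GC pause eats the remaining budget, we still return the
        // rough (or partially refined) result instead of missing entirely.
        while (System.currentTimeMillis() < deadline) {
            result = refine(result);
        }
        return result;
    }
}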



-- 

"They that give up essential liberty to obtain a little temporary safety
  deserve neither liberty nor safety."
                 - Benjamin Franklin



