tomcat-users mailing list archives

From Supun Abeysinghe <>
Subject Re: [OT] Tomcat Online Parameter Setting/Tuning
Date Wed, 13 Mar 2019 04:30:20 GMT

Thanks for the suggestion. I agree that tuning application-level thread
pools would have a higher impact. However, tuning those pools would make the
tool application-dependent, whereas, at the moment, I am aiming for a more
general tool. Nonetheless, I'm still at the proof-of-concept (POC) stage
and, if things work out, I can look into that aspect as well.


First of all, I'm still at the POC stage. That being said, setting the
thread pool size to a higher number does not always guarantee better
performance. In fact, I have run experiments with different thread pool
sizes under different workloads and found that, for a given workload type,
there is a different optimal thread pool size that yields the best
performance (not only in memory usage but also in metrics like latency).
This is likely caused by the additional context switches and other overheads
associated with a larger number of threads. Thus, simply running with X
threads throughout does not give optimal performance. In this project, my
plan is to tweak the thread pool size (and some other parameters) at runtime
to find that optimal point dynamically.
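As a toy illustration of why no single static pool size is optimal across
workloads, here is a small sketch using an assumed (not measured) latency
model: a queueing term that shrinks as threads are added, plus a per-thread
scheduling overhead that grows with pool size. The class name, the model,
and all constants are hypothetical, purely for illustration:

```java
// Toy model: latency(n) = work / n      (parallelism benefit)
//                       + overhead * n  (context-switch / scheduling cost)
// The minimizing n depends on the workload's "work" term, so different
// workloads have different best pool sizes.
public class PoolSizeModel {
    static double latency(double work, double overheadPerThread, int threads) {
        return work / threads + overheadPerThread * threads;
    }

    // Scan a range of pool sizes and return the one with the lowest
    // modeled latency.
    static int bestPoolSize(double work, double overheadPerThread, int maxThreads) {
        int best = 1;
        double bestLatency = Double.MAX_VALUE;
        for (int n = 1; n <= maxThreads; n++) {
            double l = latency(work, overheadPerThread, n);
            if (l < bestLatency) { bestLatency = l; best = n; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Two hypothetical workloads: light (work = 100) and heavy (work = 900).
        // Analytically the optimum of work/n + c*n is at n = sqrt(work/c).
        System.out.println("light workload optimum: " + bestPoolSize(100, 1.0, 400)); // 10
        System.out.println("heavy workload optimum: " + bestPoolSize(900, 1.0, 400)); // 30
    }
}
```

Under this model the optimum moves from 10 threads to 30 threads as the
per-request work grows, which matches the experimental observation above
that the best pool size is workload-dependent.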

This area has been studied extensively in the literature (though I am not
aware of any real-world implementation) and has shown promising results. The
figure below is taken from [1], where the authors propose a reinforcement
learning based algorithm (a subclass of machine learning algorithms) to tune
the parameters of a web application dynamically. It shows a significant
reduction in response time compared to the static default configuration. I'm
trying to extend this work using different machine learning algorithms.
[Figure from [1]: response time under RL-based dynamic tuning vs. the static
default configuration]
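As a minimal sketch of the reinforcement-learning idea, here is a standard
epsilon-greedy bandit over candidate pool sizes. This is not the algorithm
from [1]; the reward is negative latency drawn from an assumed synthetic
model (a stand-in for real measurements), and all names and constants are
illustrative:

```java
import java.util.Random;

// Epsilon-greedy bandit: each "arm" is a candidate thread-pool size.
// The tuner explores with probability epsilon and otherwise exploits the
// arm with the best running-mean reward (negative observed latency).
public class EpsilonGreedyTuner {
    final int[] arms;          // candidate pool sizes
    final double[] avgReward;  // running mean reward per arm
    final int[] pulls;         // pull count per arm
    final double epsilon;
    final Random rng;

    EpsilonGreedyTuner(int[] arms, double epsilon, long seed) {
        this.arms = arms;
        this.avgReward = new double[arms.length];
        this.pulls = new int[arms.length];
        this.epsilon = epsilon;
        this.rng = new Random(seed);
    }

    int selectArm() {
        if (rng.nextDouble() < epsilon) return rng.nextInt(arms.length); // explore
        int best = 0;
        for (int i = 1; i < arms.length; i++)
            if (avgReward[i] > avgReward[best]) best = i;
        return best; // exploit
    }

    void update(int arm, double reward) {
        pulls[arm]++;
        avgReward[arm] += (reward - avgReward[arm]) / pulls[arm]; // incremental mean
    }

    // Synthetic latency model: work/n + n, plus Gaussian measurement noise.
    double observeLatency(int threads, double work) {
        return work / threads + threads + rng.nextGaussian() * 2.0;
    }

    public static void main(String[] args) {
        int[] candidates = {5, 10, 20, 40, 80};
        EpsilonGreedyTuner tuner = new EpsilonGreedyTuner(candidates, 0.1, 42);
        double work = 400; // model optimum near sqrt(400) = 20 threads
        for (int step = 0; step < 5000; step++) {
            int arm = tuner.selectArm();
            tuner.update(arm, -tuner.observeLatency(candidates[arm], work));
        }
        int best = 0;
        for (int i = 1; i < candidates.length; i++)
            if (tuner.avgReward[i] > tuner.avgReward[best]) best = i;
        System.out.println("learned pool size: " + candidates[best]);
    }
}
```

A real tuner would replace the synthetic latency with measurements from the
running server and would need to handle non-stationary workloads (e.g. by
discounting old observations), which is where the RL formulation in [1] goes
beyond a plain bandit.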

As you have suggested, it is still an open question how things will work out
in a real-world system with bursty workloads. I will definitely run
experiments with bursty workloads once the implementation is done.


Thank you.

Best regards,

On Wed, 13 Mar 2019 at 06:55, Christopher Schultz <> wrote:

> Supun,
> On 3/12/19 12:05, Supun Abeysinghe wrote:
> > I am working on a project where the parameters of the tomcat server
> > (e.g. MaxThreads, MinSpareThreads, MaxSpareThreads etc.) for a
> > given web application is auto-tuned using a machine learning
> > technique by looking at the runtime characteristics (e.g. workload
> > characteristics, current performance etc.).
> I'm kind of curious about this. Is this Nest[1] for Tomcat?
> My experience is that traffic is bursty. That means that it's hard to
> predict when you'll need more resources -- like threads, one of the
> only things you can really tune at runtime. When you need them, you
> want them NOW, not when some algorithm finally gets around to
> detecting the burst of traffic you know has already begun.
> Since the server must be planned for X number of threads at peak
> volume, why not simply always run with X threads? There is really no
> penalty for leaving threads unused, other than "wasted" memory... that
> you would need if you needed the threads, anyway. It's not like you
> can use that memory for anything else, because you're going to need it
> at peak-load time.
> If you are interested in doing ML for capacity-tuning, I'd look more
> toward auto-scaling of *instances* rather than the resources being
> used by one single Tomcat instance. If I can scale-down my VMs
> deployed into, say, AWS, then I can save *real money* by scaling-back
> during low-load times and learning about trends to pre-scale things
> before e.g. the Friday-night rush or whatever happens to be "peak
> volume" for my particular use-case.
> - -chris
> [1]

*Supun Abeysinghe*
Undergrad, Department of Computer Science and Engineering,
University of Moratuwa, Faculty of Engineering.
