Embedded Jetty uses unbounded QueuedThreadPool by default. #8662
I believe that it's the default behaviour for Jetty with or without Spring Boot.
The problem with specifying an upper bound for the queue is that we have no way of knowing what it should be. From the same documentation that you linked to above:
That equation has two variables and both are unknown to us. If we resort to a guess, we artificially constrain performance for some while not solving the problem for others. How do you propose that we constrain the size of the queue in a way that's generally useful?
I propose that we take the same opinionated view that Spring Boot typically takes. Assume that most applications are RESTful microservices. A fast REST API call takes ~10ms (database round trip, JSON ser/des, etc.), so your upper bound will be roughly a 100Hz request rate. Yes, this is a simplification that doesn't account for thread pools, async calls, etc., but it's probably not atypical of what people will see; Thymeleaf rendering and other MVC work might take longer, so it's a reasonable starting point. One thing that's missing from the Spring Boot docs as they exist today is good performance profiling information for a bare-bones microservice, so even napkin calculations are hard in this case. @dsyer did a very good writeup on memory use, but that doesn't cover expected request rates and such under load. Allow 30 seconds of backlog and size the queue accordingly (100 requests/second × 30 seconds ≈ 3,000 entries), and expose a configuration property so that value can be tuned.
My big hangup here is that if you are using Jetty, this behavior is not obvious at all if you don't go looking for it, until something breaks under load. It's just a poor DX for a framework that's expected to be production ready out of the box.
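As a tiny sketch of that back-of-the-envelope sizing (the QueueSizing class and the numbers are just the assumptions above, not measurements or anything from Spring Boot):

```java
public class QueueSizing {

    // Naive sizing: how many requests pile up if the server is saturated for
    // backlogSeconds while traffic keeps arriving at requestsPerSecond.
    static int queueCapacity(int requestsPerSecond, int backlogSeconds) {
        return requestsPerSecond * backlogSeconds;
    }

    public static void main(String[] args) {
        // ~10 ms per call -> ~100 requests/second; allow 30 seconds of backlog.
        System.out.println(queueCapacity(100, 30)); // 3000
    }
}
```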
Also, specifically, it is the default behavior for Jetty with or without Spring Boot. However, the QueuedThreadPool constructor parameters that accept a bounded BlockingQueue aren't exposed by JettyEmbeddedServletContainerFactory, so you have to do some kind of workaround like this in a subclass:
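Something along these lines (a sketch only, assuming Spring Boot 1.5.x's JettyEmbeddedServletContainerFactory and Jetty 9.x; the class name and the sizing values are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;

import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory;

public class BoundedQueueJettyFactory extends JettyEmbeddedServletContainerFactory {

    public BoundedQueueJettyFactory(int maxThreads, int minThreads, int queueCapacity) {
        // QueuedThreadPool(maxThreads, minThreads, idleTimeoutMs, queue) is the
        // constructor that accepts the otherwise-unreachable bounded queue.
        setThreadPool(new QueuedThreadPool(maxThreads, minThreads, 60000,
                new ArrayBlockingQueue<Runnable>(queueCapacity)));
    }
}
```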
Sorry, but that sounds like a guess to me.
It's Jetty's default behaviour. We like to align with each container's defaults so that people's existing experience with a container also applies to Spring Boot. If you disagree with that default then I would take it up with the Jetty team. I think it's also worth noting that Tomcat has a similar default to Jetty and so does Undertow. In the absence of a compelling argument to the contrary, I am strongly inclined to defer to the expertise of the Jetty, Tomcat, and Undertow development teams.
I disagree that it's hidden. There's a public setter method on the factory precisely so that the thread pool can be customised and tuned to meet an application's needs.
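For example, something along these lines is enough (a sketch, assuming Spring Boot 1.5.x and Jetty 9.x; the configuration class name and the sizing values are illustrative only):

```java
import java.util.concurrent.ArrayBlockingQueue;

import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JettyThreadPoolConfig {

    @Bean
    public JettyEmbeddedServletContainerFactory jettyFactory() {
        JettyEmbeddedServletContainerFactory factory = new JettyEmbeddedServletContainerFactory();
        // 200 max / 8 min threads, 60s idle timeout, and a queue capped at
        // 3000 entries instead of Jetty's unbounded default.
        factory.setThreadPool(new QueuedThreadPool(200, 8, 60000,
                new ArrayBlockingQueue<Runnable>(3000)));
        return factory;
    }
}
```

Note that declaring this bean replaces the auto-configured factory, so any other container settings should be applied to it as well.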
This is more like something that we could consider doing as it would be much more generally useful. It's still somewhat complicated by the fact that some applications will want to replace the thread pool implementation entirely with one where configuring the max queue length may not make sense. A pull request that adds this configuration option for Jetty, Tomcat, and Undertow would be welcome.
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days, this issue will be closed.
Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
The default behavior for Jetty when used in Spring Boot is for the JettyEmbeddedServletContainerFactory to create a Jetty Server with an unbounded QueuedThreadPool.
This means that under any kind of load where the incoming request rate exceeds the server's ability to process it, the JVM will eventually OOM no matter what.
Essentially, this is a guaranteed breakage by default.
Instead, it would be better to replace the default BlockingQueue inside the QueuedThreadPool with a bounded queue, as is advised in the Jetty documentation.
This changes the default behavior to load shedding (fail fast) and is far more production ready out of the box.
(See https://wiki.eclipse.org/Jetty/Howto/High_Load#Jetty_Tuning - dated but still reflected in the code.)
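A minimal sketch of that bounded-queue setup in plain Jetty, assuming Jetty 9.x (the thread counts and the 3000-entry capacity are illustrative only):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class BoundedJettyServer {

    public static void main(String[] args) throws Exception {
        // Bounded queue: once 3000 requests are waiting, new work is rejected
        // (load shedding) instead of queueing until the JVM runs out of memory.
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(3000);
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8, 60000, queue);

        // The thread pool must be passed to the Server constructor; it cannot
        // be swapped in after the Server has been created.
        Server server = new Server(threadPool);
        // ... add connectors and handlers here ...
        server.start();
        server.join();
    }
}
```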