
Embedded Jetty uses unbounded QueuedThreadPool by default. #8662


Closed
jzampieron opened this issue Mar 18, 2017 · 6 comments

@jzampieron

The default behavior for Jetty when used in Spring Boot is for the
JettyEmbeddedServletContainerFactory to create a Jetty Server with an unbounded
QueuedThreadPool.

This means that under any kind of load where the incoming request rate exceeds the server's ability to process requests, the JVM will eventually OOM no matter what.

Essentially, this is a guaranteed breakage by default.

Instead it would be better to replace the default BlockingQueue inside the QueuedThreadPool with a bounded queue, as is advised in the Jetty documentation.

This changes the default behavior to load shed (fail fast) and is far more production ready out of the box.

(See https://wiki.eclipse.org/Jetty/Howto/High_Load#Jetty_Tuning - dated but still reflected in the code.)
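The load-shedding behavior of a bounded queue can be sketched in plain Java (the class name and the tiny capacity here are illustrative, not Spring Boot or Jetty defaults):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch: a bounded queue rejects work once it is full
// (offer() returns false), so a server can fail fast instead of
// buffering requests until the JVM runs out of memory.
public class BoundedQueueDemo {
    public static void main(String[] args) {
        // Tiny capacity so the demo fills up immediately.
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(2);
        System.out.println(queue.offer(() -> {})); // true  - accepted
        System.out.println(queue.offer(() -> {})); // true  - accepted
        System.out.println(queue.offer(() -> {})); // false - queue full, load is shed
    }
}
```

With an unbounded queue, that third `offer` would succeed and the backlog would simply keep growing.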

@spring-projects-issues spring-projects-issues added the status: waiting-for-triage An issue we've not yet triaged label Mar 18, 2017
@wilkinsona
Member

The default behavior for jetty when used in spring boot

I believe that it's the default behaviour of Jetty, with or without Spring Boot.

Instead it would be better to replace the default BlockingQueue inside the QueuedThreadPool with a bounded queue, as is advised in the Jetty documentation.

The problem with specifying an upper-bound for the queue is that we have no way of knowing what it should be. From the same documentation that you linked to above:

The capability (maximum queue length) should be calculated according to the "no-response" time tolerable. For example, if the webapp can handle 100 requests per second, and if you can allow it one minute to recover from excessive high load, you can set the queue capability to 60*100=6000

That equation has two variables and both are unknown to us. If we resort to a guess we artificially constrain performance for some while not solving the problem for others.

How do you propose that we constrain the size of the queue in a way that's generally useful?

@wilkinsona wilkinsona added the status: waiting-for-feedback We need additional information before we can continue label Mar 19, 2017
@jzampieron
Author

jzampieron commented Mar 19, 2017

I propose that we take the same opinionated view that Spring Boot typically takes.

Assume that most applications are RESTful microservices. A fast REST API call takes ~10ms (database round trip, json ser/des, etc). Therefore your upper bound will be a 100Hz request rate.

Yes, this is a simplification; it doesn't account for thread pools, async calls, etc., but it's probably not atypical of what people will see.

One thing that's missing from the Spring Boot docs as they exist today is good performance-profiling information for a bare-bones microservice, so even napkin calculations are hard in this case.

@dsyer did a very good writeup on memory use, but that's not expected request rates and such under load.

Thymeleaf rendering and other MVC work might take longer, so 10ms is a reasonable starting point.

Allow 30 seconds for back log and go with 30*100 = 3000 for the default.
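The sizing arithmetic above, together with the 60-second example quoted earlier from the Jetty docs, can be sketched as a one-line formula (the method name and inputs are mine; the rates are the assumptions under discussion, not measured numbers):

```java
// Sketch of the queue-sizing formula from the Jetty tuning docs:
// capacity = tolerable recovery window (s) x sustainable request rate (req/s).
// Both inputs are application-specific guesses, which is the crux of
// the maintainers' objection.
public class QueueSizing {
    static int queueCapacity(int recoverySeconds, int requestsPerSecond) {
        return recoverySeconds * requestsPerSecond;
    }

    public static void main(String[] args) {
        System.out.println(queueCapacity(60, 100)); // Jetty docs example: 6000
        System.out.println(queueCapacity(30, 100)); // proposed default: 3000
    }
}
```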

Expose a server.jetty.maxQueueLength property and document in the spring boot property appendix with a link to the Jetty eclipse documentation.
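As a sketch, the proposed property might look like this in `application.properties` (hypothetical: this property did not exist in Spring Boot at the time of writing; the value is the 3000 default suggested above):

```properties
# Hypothetical property proposed in this issue, not an existing
# Spring Boot setting. Bounds Jetty's QueuedThreadPool request queue.
server.jetty.maxQueueLength=3000
```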

My big hangup here is that if you are using Jetty, this behavior is not obvious at all if you don't go looking for it, until something breaks under load. It's just a poor DX for a framework that's expected to be production ready out of the box.

@spring-projects-issues spring-projects-issues added status: feedback-provided Feedback has been provided and removed status: waiting-for-feedback We need additional information before we can continue labels Mar 19, 2017
@jzampieron
Author

Also, specifically, it is the default behavior for Jetty with or without Spring Boot.

However, those constructor parameters to QueuedThreadPool, specifically as passed via the threadPool parameter to the Jetty Server class, are entirely hidden within JettyEmbeddedServletContainerFactory.

You have to do a workaround like this in a subclass:

   public MyJettyEmbeddedServletContainerFactory()
   {
      super();
      // The default queue for QueuedThreadPool is *unbounded*, which is
      // unacceptable for production deployments under high load, so we
      // create a _bounded_ queue instead.
      final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>( scJettyMaxQueueSize );
      final QueuedThreadPool threadPool = new QueuedThreadPool( scJettyMaxThreads,
                                                                scJettyMinThreads,
                                                                scJettyIdleTimeout,
                                                                queue );
      this.setThreadPool( threadPool );
      this.setUseForwardHeaders( true );
   }

@wilkinsona
Member

A fast REST API call takes ~10ms (database round trip, json ser/des, etc).

Sorry, but that sounds like a guess to me.

My big hangup here is that if you are using Jetty, this behavior is not obvious at all

It's Jetty's default behaviour. We like to align with each container's defaults so that people's existing experience with a container also applies to Spring Boot. If you disagree with that default then I would take it up with the Jetty team. I think it's also worth noting that Tomcat has a similar default to Jetty and so does Undertow. In the absence of a compelling argument to the contrary, I am strongly inclined to defer to the expertise of the Jetty, Tomcat, and Undertow development teams.

are entirely hidden within JettyEmbeddedServletContainerFactory

I disagree that it's hidden. There's a public setter method on the factory precisely so that the thread pool can be customised and tuned to meet an application's needs.
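For illustration, the public setter mentioned here can be used without subclassing at all, e.g. from a bean definition (a sketch against the Spring Boot 1.x and Jetty 9 APIs; the thread-pool sizes and queue capacity are placeholder values, not recommendations):

```
   // Sketch: customising the thread pool via the factory's public setter
   // instead of subclassing. Values are illustrative placeholders.
   @Bean
   public JettyEmbeddedServletContainerFactory jettyFactory()
   {
      JettyEmbeddedServletContainerFactory factory = new JettyEmbeddedServletContainerFactory();
      factory.setThreadPool( new QueuedThreadPool( 200, 8, 60000,
                                                   new ArrayBlockingQueue<>( 3000 ) ) );
      return factory;
   }
```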

Expose a server.jetty.maxQueueLength property

This is more like something that we could consider doing as it would be much more generally useful. It's still somewhat complicated by the fact that some applications will want to replace the thread pool implementation entirely with one where configuring the max queue length may not make sense.

A pull request that adds this configuration option for Jetty, Tomcat, and Undertow would be welcome.

@wilkinsona wilkinsona added status: waiting-for-feedback We need additional information before we can continue and removed status: feedback-provided Feedback has been provided labels Mar 20, 2017
@spring-projects-issues
Collaborator

If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.

@spring-projects-issues spring-projects-issues added the status: feedback-reminder We've sent a reminder that we need additional information before we can continue label Mar 27, 2017
@spring-projects-issues
Collaborator

Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.

@spring-projects-issues spring-projects-issues removed status: waiting-for-feedback We need additional information before we can continue status: feedback-reminder We've sent a reminder that we need additional information before we can continue labels Apr 3, 2017
@snicoll snicoll removed the status: waiting-for-triage An issue we've not yet triaged label Apr 27, 2017