Thread Pool and I/O Flow - geronimo-iia/restexpress GitHub Wiki
Tuning a RestExpress Service
As the diagram above shows, there are two thread pools in RestExpress. The front-end thread pool, which handles input and output on the HTTP connection stream, should be roughly 2x the number of cores, assuming a hyper-threading architecture. The back-end, or executor, thread pool is designed to handle blocking operations in the controllers. Operations such as reading from and writing to a database (like MongoDB) are often blocking. The executor thread pool allocates new threads as needed, up to the limit you specify, and should be roughly the size of your database connection pool. That means if you have 3 MongoDB servers (one master and two slaves) and your driver is configured for 500 connections per server, you should have an executor thread-pool size of approximately 1500 or more.
How many MongoDB connections do you have configured for your app? That's the connection number you're looking for. If you're using the MongoDB defaults, you probably want to bump that number up for production, since the default is either 10 or 100 depending on which MongoDB driver you're using.
My own opinion is that the MongoDB connection pools should be large-ish, depending on the size of the boxes hosting the service suite (say, 1000 per back-end MongoDB server). That would put the executor thread pool for the service suite at 3000 or so (assuming three MongoDB servers, as above).
```java
RestExpressService server = RestExpressService.newBuilder();
server.settings.serverSettings().setExecutorThreadPoolSize(3000); // Set a large back-end thread pool.
```
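The sizing arithmetic above can be captured in a small sketch. The class and method names here are hypothetical helpers, not part of the RestExpress API; the figures are the example numbers from the text (3 servers at 1000 or 500 connections each).

```java
// Sketch of the sizing rule described above: make the executor pool
// roughly match the total database connection count across all servers.
public class ExecutorPoolSizing {

    // One executor thread per potential blocking database call.
    static int executorPoolSize(int mongoServers, int connectionsPerServer) {
        return mongoServers * connectionsPerServer;
    }

    public static void main(String[] args) {
        // 3 servers x 1000 connections -> 3000 executor threads
        System.out.println(executorPoolSize(3, 1000));
        // 3 servers x 500 connections -> 1500 executor threads
        System.out.println(executorPoolSize(3, 500));
    }
}
```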
### What should be the executor thread count for a service which does not use Mongo?
It depends... :-) Does it do blocking operations? Is it its own service suite (in its own JVM)?
However many executor threads you allow in the pool is how many simultaneous blocking operations the services can support. Assuming the service suite is in its own JVM, memory size is more or less the limiting factor on how big the pool can be. I would make it arbitrarily large...
Blocking operations, such as external HTTP calls, are long-running and will consume a thread until a response is received. The number of threads in the executor pool therefore dictates exactly how many simultaneous blocking requests can be in flight.
We just need to make the executor thread pools big until we hit diminishing returns on throughput, the point where adding more threads doesn't yield more throughput. I don't know what that number is yet. But depending on memory size, a JVM can handle a "bunch" of threads. I wouldn't hesitate to give it 3000 threads, as the pool doesn't allocate them unless it needs them anyway. If the service falls over, we can lower that number; at present, the server processors are barely being utilized, limited instead by I/O.
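The "doesn't allocate them unless it needs them" behavior can be illustrated with a plain `java.util.concurrent.ThreadPoolExecutor`. This is a generic JDK sketch of that allocation pattern, not RestExpress internals:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A pool with a high maximum that creates threads only on demand and
// reclaims idle ones after 60 seconds, so a 3000-thread cap costs
// nothing until the load actually arrives.
public class OnDemandPool {

    static ThreadPoolExecutor newExecutor(int maxThreads) {
        return new ThreadPoolExecutor(
                0,                          // no pre-allocated core threads
                maxThreads,                 // hard cap, e.g. 3000
                60, TimeUnit.SECONDS,       // reclaim threads idle this long
                new SynchronousQueue<>());  // hand off directly; grow the pool instead of queueing
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newExecutor(3000);
        System.out.println(pool.getPoolSize()); // no threads exist yet
        pool.submit(() -> {});                  // first task triggers thread creation
        System.out.println(pool.getPoolSize()); // at least one thread now
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The `SynchronousQueue` is what makes the pool grow under load rather than queue work behind a fixed set of threads.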
### What if my controller methods don't block?
Here you have some options. Netty, which RestExpress uses as its under-the-covers server, uses non-blocking I/O to read from and write to the network connections (clients). Therefore, when your controllers perform fast-running, non-blocking operations, you can choose to configure the service suite without an executor thread pool at all. In other words, run everything in the front-end I/O worker threads without incurring the overhead of handing off data from the front-end pool to the back-end pool. Remember, however, that logic running in a controller consumes that I/O thread for however long it takes.
For the non-blocking use case, tuning is a bit more abstract, as there is little correlation between increasing thread count and increasing throughput. Your mileage may vary, but two threads per processor core is a good starting point. Setting the executor thread count to zero (0) deconfigures the back-end thread pool altogether.
```java
RestExpressService server = RestExpressService.newBuilder();
server.settings.serverSettings().setExecutorThreadPoolSize(0); // Turn off the back-end thread pool.
```
### Effects of Memory on Performance
Java-based sockets use quite a bit of memory per connection. It is therefore imperative that you give the Java Virtual Machine enough memory to allocate resources such as threads and connections as needed. Make sure the JVM can utilize most of the memory on the machine (e.g., via the `-Xmx` flag) or you'll see performance issues.
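One way to verify the heap ceiling the JVM was actually given (e.g., after starting with `-Xmx4g`) is to check `Runtime.maxMemory()`. This is a generic JDK sketch, not a RestExpress API; if the reported maximum is far below the machine's RAM, threads and connections will be starved long before the hardware is.

```java
// Report the maximum heap the running JVM is allowed to use, so it can
// be compared against the physical memory of the box.
public class HeapCheck {

    static long maxHeapMegabytes() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Max heap (MB): " + maxHeapMegabytes());
    }
}
```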