C10k problem

In short: handle 10,000 concurrent users. Fun fact: the original C10k problem was coined back in 1999, and Wikipedia notes that nginx was created to solve it.

  • Traditional servlet containers (Tomcat, Jetty, Undertow in blocking mode) use a thread per request. With 10,000 concurrent connections you’d need 10,000 threads → massive context switching and high memory use (each thread carries its own stack, typically around 1 MB).
  • MySQL connections are also expensive: each open JDBC connection uses memory and OS resources.

Disclaimer

"Avoid adopting technologies just to solve problems that haven’t happened yet." – This documentation is not the ultimate guide to follow. Test, leverage, decide - don't follow blindly.

Testing Tools

  • Apache JMeter – classic, heavy but feature-rich.
  • k6 – modern, simple to script (JavaScript DSL).
  • wrk or ab (Apache Bench) – very fast, lower-level.
  • Gatling – if you like Scala.
  • Example flow:
    • Use wrk or k6 with ~10,000 connections (wrk's -c flag; k6 uses virtual users), targeting Nginx, not Spring Boot directly; see the sketch after this list.
    • Monitor:
      • Nginx: stub_status → check active/idle connections.
      • Spring Boot: Micrometer → thread pool usage, response times.
      • MySQL: slow query log + SHOW PROCESSLIST.
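
For example, a single high-concurrency run against the Nginx front end could look like the shell sketch below. Host and path are placeholders, not taken from the project; the wrk flags (-t threads, -c connections, -d duration) and the k6 flags (--vus, --duration) are standard.

```bash
# ~10,000 open connections for 60 s against Nginx (not the Spring Boot app directly).
# The load-generating box usually needs a raised open-file limit (ulimit -n) for this many sockets.
wrk -t8 -c10000 -d60s --latency http://your-nginx-host/api/products

# Rough k6 equivalent, using virtual users instead of raw connections
# (script.js is your own test script, not shown here).
k6 run --vus 10000 --duration 60s script.js
```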

How to solve it?

  • non-blocking IO (multiplexing) + efficient DB connection pooling,
  • NOT by making MySQL accept 10k connections

Use connection pooling, a reasonable thread pool, and a reverse proxy (Nginx).

1. Connection pool tuning

  • Use HikariCP (default in Spring Boot).
  • Set a reasonable pool size (e.g. 20–50 connections).
  • Don’t try to match pool size to concurrent users; that’s a common beginner mistake. Let excess requests wait briefly for a free connection instead. Example: spring.datasource.hikari.maximum-pool-size: 30 (see the sketch below).
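
A sketch of what that could look like in application.yml; the pool-size value comes from this page, the surrounding values are illustrative rather than tuned numbers:

```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 30      # hard cap on open MySQL connections
      minimum-idle: 10           # keep a few warm connections ready
      connection-timeout: 30000  # ms a request waits for a free connection before failing
      max-lifetime: 1800000      # ms; keep this below MySQL's wait_timeout
```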

2. Thread model tuning

  • Stick with Tomcat (blocking) for now — it’s simpler.
  • Increase Tomcat thread pool if you expect bursts, but don’t set it to 10,000. Typical: 200–500.
  • Example config: server.tomcat.threads.max: 500, server.tomcat.threads.min-spare: 50 (see the application.yml snippet below).
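
As an application.yml snippet (same values as above):

```yaml
server:
  tomcat:
    threads:
      max: 500        # upper bound on request-processing threads
      min-spare: 50   # threads kept warm for bursts
```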

3. Put Nginx (or HAProxy) in front of your app

  • Let Nginx handle thousands of idle keep-alive connections efficiently.
  • Nginx passes only the active requests to your app.
  • For small apps this alone removes most of the "C10k" pain: the app only ever sees the relatively small number of requests that are actually in flight (see the config sketch below).
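
A minimal nginx sketch of this pattern; ports, names and numbers are placeholders, the directives themselves are standard nginx:

```nginx
events {
    worker_connections 10240;        # each worker can hold thousands of client connections
}

http {
    upstream app {
        server 127.0.0.1:8080;       # the Spring Boot app
        keepalive 32;                # reuse a small pool of upstream connections
    }

    server {
        listen 80;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # allow keep-alive to the upstream
            proxy_pass http://app;
        }
    }
}
```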

4. Cache hot data

  • If you have any endpoint that always hits MySQL (e.g. product catalog, address lookup), consider caching in Redis or even in-memory (@Cacheable); see the sketch after this list.
  • This reduces DB load massively.
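
A minimal Java sketch of the in-memory variant; the service, repository and cache name are hypothetical, only @Cacheable is the actual Spring mechanism:

```java
import java.util.List;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical names: ProductRepository / Product stand in for whatever data is "hot".
@Service
public class ProductCatalogService {

    private final ProductRepository repository;

    public ProductCatalogService(ProductRepository repository) {
        this.repository = repository;
    }

    // First call hits MySQL; subsequent calls are answered from the cache.
    // The default cache is in-process; switch to Redis (spring-boot-starter-data-redis)
    // if several app instances must share it.
    @Cacheable("productCatalog")
    public List<Product> findAllProducts() {
        return repository.findAll();
    }
}
```

Remember to put @EnableCaching on a configuration class, otherwise @Cacheable is silently ignored.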

Steps to follow from this article:

To handle more requests per second than your setup can currently digest, spawn more servers behind a load balancer. From there you can use the recommended 3-node setup (https://www.elastic.co/blog/found-elasticsearch-in-production#split-brains).

Steps to measure & solve the C10k problem:

  • Build the minimal setup
  • Using 1 request per user (1 iteration) over different time spans, find your system's digestive rate, i.e. the request rate it can absorb without degrading; see the sketch after this list.
  • Is it enough? (Can you spread the requests over time?)
  • If not, multiply as needed and repeat those steps until satisfied.
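
One way to probe that rate, sticking to ab from the tool list above, is to step the concurrency up and watch where latency and error counts start to climb. Host, path and payload.json are placeholders; the ab flags (-n total requests, -c concurrency, -p POST body file, -T content type) are standard.

```bash
# Hypothetical measurement loop; stop increasing once response times or failures spike.
for C in 100 500 1000 2000; do
  ab -n $((C * 10)) -c "$C" -p payload.json -T 'application/json' http://your-nginx-host/ingest
done
```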

Minimal setup:

  • nginx (port forwarding), in Docker
  • Logstash (to handle the JSON), in Docker
  • Elasticsearch (heap raised to 8 GB), in Docker
  • 4 vCPU
  • 16 GB RAM
  • Ubuntu 19
  • Take the biggest JSON document Logstash is expected to digest as the designated POST payload (a docker-compose sketch follows below)
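
A hypothetical docker-compose sketch of that setup; image tags, ports and the mounted nginx/Logstash configs are assumptions, not taken from the project:

```yaml
version: "3"
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro        # forwards incoming POSTs to Logstash
    depends_on:
      - logstash

  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0
    volumes:
      - ./pipeline/:/usr/share/logstash/pipeline/:ro # http input -> elasticsearch output
    depends_on:
      - elasticsearch

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms8g -Xmx8g"                 # the 8 GB heap mentioned above
```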