max_connections - GradedJestRisk/db-training GitHub Wiki

Default value is 100.

Depends on :

  • concurrency
    • CPU, CPU count
    • I/O: storage technology (SSD / HDD)
  • caching

One heuristic is `connections = 2 * cpu_core_count + spindle_count`, where the spindle count is the number of hard drives.

You can replace spindle_count with effective_io_concurrency.
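The heuristic above can be sketched as a tiny helper (a hypothetical function, not part of any tool; the example numbers are illustrative):

```python
def suggest_max_connections(cpu_core_count: int, effective_spindle_count: int) -> int:
    """Rule-of-thumb pool size: while one task runs on a core,
    another can wait on I/O, hence 2 connections per core plus
    one per spindle (or per effective_io_concurrency unit)."""
    return 2 * cpu_core_count + effective_spindle_count

# 8 cores, 4 drives -> 20 connections
print(suggest_max_connections(8, 4))
```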

https://dba.stackexchange.com/questions/228663/what-is-effective-spindle-count

https://wiki.postgresql.org/wiki/Number_Of_Database_Connections

https://github.com/le0pard/pgtune/blob/92976061ccea9b886482bbb00243c5c495bc8813/src/features/configuration/configurationSlice.js#L107

pgtune values, by application type:

  • web: 200
  • data warehouse: 40
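A minimal sketch of that per-workload mapping, keeping only the two values quoted above (see the linked configurationSlice.js for pgtune's full list; the dictionary and function names here are made up for illustration):

```python
# Hypothetical lookup mirroring pgtune's per-workload defaults
# for max_connections (only the values quoted in this page).
MAX_CONNECTIONS_BY_DB_TYPE = {
    "web": 200,
    "dw": 40,   # data warehouse
}

def pgtune_max_connections(db_type: str) -> int:
    """Return the suggested max_connections for a workload type."""
    return MAX_CONNECTIONS_BY_DB_TYPE[db_type]

print(pgtune_max_connections("web"))  # 200
```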

Dalibo DBA2

(Translated:) The value 100 for max_connections is generally sufficient. It can be worthwhile to lower it in order to raise work_mem and allow more sort memory. It is possible to raise max_connections so that a larger number of clients can connect at the same time.
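The trade-off in that quote can be made concrete with a back-of-the-envelope calculation (a hedged sketch with illustrative numbers, assuming a fixed memory budget shared across connections, one work_mem allocation per connection):

```python
def work_mem_budget_mb(query_memory_mb: int, max_connections: int) -> int:
    """Rough per-connection work_mem if a fixed query-memory budget
    is split evenly across all allowed connections."""
    return query_memory_mb // max_connections

# With a 4 GB budget: lowering max_connections from 100 to 40
# lets each connection sort/hash with more memory.
print(work_mem_budget_mb(4096, 100))  # 40 MB per connection
print(work_mem_budget_mb(4096, 40))   # 102 MB per connection
```

In practice a single query can allocate work_mem several times (once per sort or hash node), so real budgets need a larger safety margin.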

If the data is cached, I/O is no longer the bottleneck. But we cannot expect data to always be cached; it depends on the access pattern.

It will be cached if :

  • you access the same data over and over;
  • it's small enough to fit in the cache;
  • nothing evicts it from the cache.

Data warehouse queries may reuse the same data, but it is usually too big to fit in the cache. Remember that, to avoid memory depletion, you should limit max_connections. Also, fewer connections means fewer concurrent queries, and therefore more memory per query for client-specific computation (aggregation, max, ...).