A practical guide to debugging MySQL connection pool issues, from local testing to RDS Proxy implementation
Lately, I've spent quite a lot of time debugging and understanding how database connections actually work. Then the topic of scaling those connections also came up. I wrote this article to share the conclusions and observations I arrived at. It also includes a simple local test that helps explain how DB connections work and what the Too many connections error really means.
This is not an article written from the perspective of textbook theory. It's more of a practical take after spending time analyzing the problem, running tests, and observing how the application behaves.
If you spot a mistake here or think something could be explained better — feel free to let me know.
What MySQL Too many connections really means
The message itself is pretty simple. In practice, this error means that the total demand for database connections is greater than the number of connections available on the DB side — in other words, greater than max_connections.
In MySQL, you set the max_connections system variable according to your database resources and your infrastructure setup. Depending on what your application and environment look like, that number will vary.
Let's take a simple example:
- a MySQL server with 2 GB RAM
- one database instance
- max_connections = 200
Now let's assume the following setup: the database runs on a server, a web application hits that database, and a developer connects to the same database from their local machine. At this point alone, we already have at least two sources generating DB connections: the web application and the developer's machine.
Now add the fact that the developer also connects through a client like DBeaver to inspect the data. That's another connection (or another use of an existing pool). Now add code like await Promise.all(), where we fire off multiple database queries in parallel. And this is exactly where we start moving toward the risk of eventually seeing: Too many connections.
A simple max_connections = 10 example
To illustrate it better, let's take an example where max_connections equals 10. Let's assume the web application uses between 2 and 8 connections at peak. If it reaches 8, that leaves 2 free.
Now:
- one developer connects to the database through DBeaver, the total becomes 9
- another developer does the same, the total becomes 10
And at that point, we are already at the limit. Any additional client trying to connect may get Too many connections, because it would be trying to become the eleventh connection to the database.
This is exactly the kind of message you will also see on the DB client side when you try to connect to a database that no longer has any free connections.
This is obviously a simplified model, but it explains the mechanics of the problem well. The same can be applied to production. If the total demand for connections exceeds max_connections, sooner or later you will get an error.
Where connection spikes and overages come from
Most often, they come from a few things:
- poorly written code can generate too much parallel traffic to the DB
- connections are not being released properly
- there is no sensible pool limit configured on the application side
- the application scales, but the database configuration does not keep up with the number of instances
- additional consumers of the same database appear: DBeaver, local tests, jobs, cron, migrations, integrations
One thing is worth clarifying here. When I talk about Promise.all(), I do not mean that Promise.all() magically creates new connections by itself. I mean situations where you run many database queries in parallel inside Promise.all(), which increases simultaneous demand for DB resources. That is an important difference.
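One way to keep Promise.all() from generating unbounded simultaneous demand is to cap concurrency in code. The helper below is illustrative, not a library API (it is similar in spirit to packages like p-limit): it runs many tasks but never more than `limit` of them at once.

```typescript
// Run `task` over all items, but keep at most `limit` tasks in flight.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Each worker repeatedly grabs the next unprocessed index.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const index = next++;
      results[index] = await task(items[index]);
    }
  }

  // Start `limit` workers; a worker only picks up a new item
  // after finishing its current one.
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

So instead of Promise.all(ids.map((id) => db.query(...))) firing every query at once, mapWithLimit(ids, 5, (id) => db.query(...)) never has more than 5 queries competing for connections at the same time.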
Why setting an application-side connection pool limit matters
In my opinion, this should be standard. If your DB has max_connections = 10, and in your backend code you set a limit so the application can use at most 5 connections, then a single backend instance should not be able to consume the entire pool by itself.
That does not mean the global problem disappears. Other connection sources still exist. But it does mean the backend stops behaving aggressively toward the database.
And this leads to an interesting side effect. If the application cannot use more than 5 connections and more traffic comes in, some requests will simply wait longer. So instead of an error, you get increased response time.
Of course, this is not a guarantee that the error will never appear, because the same database may also be used by other processes, other application instances, or additional clients. By setting a limit on the application side, you reduce the risk of Too many connections, but during peak hours, once the available connection pool is exhausted, some things will simply run slower.
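With mysql2, for example, this kind of limit is just a pool option. The values below are illustrative, not a recommendation; connectionLimit is the cap discussed above, and waitForConnections is what turns "error" into "wait longer":

```typescript
import mysql from "mysql2/promise";

// One backend instance, capped at 5 physical connections to the DB.
const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  connectionLimit: 5,       // this instance never opens more than 5 connections
  waitForConnections: true, // extra requests queue up instead of failing
  queueLimit: 0,            // 0 = unbounded queue; set a cap if you prefer fast failure
});
```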
My local MySQL Too many connections test
To understand this better, I ran a simple test. The setup looked like this:
- a local database running in Docker
- max_connections = 15
- an endpoint that executed more than 15 parallel DB queries
To start with, I set a low local max_connections value so I could trigger the problem easily and observe it in controlled conditions.
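What such an endpoint boils down to can be sketched like this. Here `runQuery` stands in for whatever your driver exposes (for example pool.query); the SQL and the counts are illustrative:

```typescript
// Fire more parallel queries than max_connections allows and count
// how many succeed vs. fail. SELECT SLEEP(2) keeps each connection
// busy long enough for the queries to actually overlap.
async function hammer(
  runQuery: (sql: string) => Promise<unknown>,
  parallel: number,
): Promise<{ ok: number; tooMany: number }> {
  const settled = await Promise.allSettled(
    Array.from({ length: parallel }, () => runQuery("SELECT SLEEP(2)")),
  );
  const ok = settled.filter((r) => r.status === "fulfilled").length;
  return { ok, tooMany: parallel - ok };
}
```

With max_connections = 15 and, say, 20 parallel queries and no application-side limit, some of those queries have nowhere to go.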
Variant 1 - without an application-side limit
I called the endpoint. Result: the Too many connections error appeared. On the NestJS application side, the effect was immediate: once the connection limit was exceeded, the backend started returning Too many connections.
Variant 2 — with an application-side limit
I added a parameter so the backend could not use more than 10 connections. Result: the error disappeared, but the request took longer to complete.
And this is one of the most important conclusions from the whole article for me:
A code-side limit is a simple protective mechanism that helps reduce the risk of exhausting DB connections.
If you want to verify this yourself, I encourage you to test it on the mysql-too-many-connections-repro repository.
What is the right number of connections?
In my opinion, that is the wrong question. A better question is: what number of connections is appropriate for my infrastructure? And this is where things start getting interesting. Because the answer depends on things like:
- whether your backend scales vertically or horizontally
- how many application instances you have
- how many different clients use the same database
- how intensively you use parallel queries
- what the traffic looks like
If you scale the backend vertically, the topic is simpler. You have one instance, more resources, and you can tune the pool and max_connections.
If you scale the backend horizontally, it gets harder. Because then you need not only to increase max_connections, but also to work out how many connections a single application instance should be allowed to consume at most.
Example:
- max_connections = 15
- 3 backend instances
- you configure 5 connections per instance
At first glance, it looks fine. But 3 x 5 already equals the entire max_connections budget, so once you add DBeaver, a developer's local connection, another process, or a job runner, you quickly realize it is no longer such an ideal setup.
And that is exactly why this topic is context-dependent. There is no single answer for everyone.
Scaling the backend is not the same as scaling MySQL
These are two separate things. You can scale the backend by adding more instances, while the database itself remains the same. And that is exactly when connection management stops being "a local problem of one application."
If the backend scales horizontally and the application starts gaining traffic, sooner or later you end up asking: how do I manage connections so that several instances do not kill one database? And that is where a proxy comes in.
How RDS Proxy helps with MySQL connection management
I'll show this with AWS RDS Proxy as an example. My simplified mental model looked like this:
- without a proxy, if the total number of connections generated by the application and all instances together exceeds max_connections, errors will start appearing
- with a proxy, part of that traffic will be handled by a layer managing the connection pool, and instead of an immediate error, some requests will simply wait longer
In other words: without a proxy, you will see Too many connections sooner; with a proxy, the system has a better chance of spreading the problem over time.
That was exactly what interested me the most. Put simply: the application no longer connects directly to the database, but to an intermediate layer that manages the DB connection pool. Because of that, a large number of clients on the application side does not have to mean exactly the same number of physical connections to the database itself.
It is also worth clarifying that a proxy does not magically make the database able to handle everything. If the real problem is heavy queries or long-running transactions, the proxy will not fix that. Its job is mainly to manage connections better.
My working hypothesis about RDS Proxy and Too many connections
My hypothesis was this: if, without RDS Proxy, I trigger a scenario where the sum of connections used by the application exceeds max_connections on the database side, I will get a Too many connections error. If I connect the application through RDS Proxy, then in that same scenario the request will not fail immediately — it will wait for a free connection and complete later.
That was exactly the mechanism I wanted to confirm with a test.
Test 1: MySQL Too many connections without RDS Proxy
On an EC2 server, I deployed an endpoint that, when called, exceeded max_connections in the DB. I also disabled the application-side limit so that the application would not block me from exceeding the limit and I could trigger the problem directly.
After calling the endpoint: Too many connections errors appeared.
Test 2: MySQL connections behavior with RDS Proxy
The setup was:
- the same endpoint exceeding the number of connections
- no application-side limit
- in .env, a connection through RDS Proxy instead of directly to RDS
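For context, the only thing that changed on the application side was the connection target. The endpoint name in the comment below is made up for illustration; in practice, DB_HOST in .env now holds the proxy endpoint instead of the RDS instance endpoint:

```typescript
import mysql from "mysql2/promise";

// Same code as in Test 1; only the host differs. DB_HOST is now
// something like (illustrative):
// my-proxy.proxy-abc123xyz.eu-central-1.rds.amazonaws.com
const pool = mysql.createPool({
  host: process.env.DB_HOST, // RDS Proxy endpoint instead of the RDS endpoint
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});
```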
After calling the endpoint: the error did not appear, the endpoint took longer to execute, but in the end the request completed.
And that was exactly the effect I expected.
Is setting a connection pool limit enough?
Not always. In my opinion, a code-side limit is mainly enough when:
- you have one backend instance
- you scale mostly vertically
- your traffic is predictable
- and the whole setup is fairly simple
In that case, a proxy may be unnecessary.
But if you have several backend instances, it becomes a different story. Because the code-side limit works locally for one instance, while RDS Proxy works at a higher level — as a connection-management layer for a larger number of clients.
In simplified terms: the code-side limit controls one instance, the proxy helps control the problem more broadly.
From the user's perspective, it is obviously better if a request waits a little longer instead of failing immediately. In many cases that is exactly what will happen, although under heavier overload you can still get timeouts or other errors.
And that is exactly the practical value of the proxy for me.
What I concluded from the MySQL connection pool tests
Based on these tests, I confirmed one practical observation for myself: when the total number of DB connections starts exceeding what the database can safely handle directly, RDS Proxy can turn part of those immediate failures into waiting for a resource.
And yes, someone could say: "but you did not actually test this with several additional backend instances running at once."
That is true. I did not run a full 1:1 autoscaling simulation with several instances. But the behavior itself was clear to me:
If, without the proxy, exceeding the limit causes errors, and with the proxy, in a similar scenario, the request simply starts waiting longer, then you can see what class of problem this solution addresses.
That was enough of a signal for me to better understand the purpose of the proxy.
What is the best connection pool size?
There is no single answer. Context is king. It depends on the system, traffic, number of instances, the intensity of parallel queries, and how many other processes use the same database. You cannot do this properly once and for all in isolation from the context.
When RDS Proxy actually makes sense
In my opinion, not blindly. At the very beginning of a project, RDS Proxy will very often be overkill and an unnecessary extra cost. The hard truth is that most projects never even reach the point where scaling DB connections becomes a real problem.
So first: observe the traffic, observe how the application evolves, watch whether you are actually reaching the stage where instances are being added and connections start competing.
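One concrete thing to observe is how close Threads_connected gets to max_connections; both values are readable in MySQL via SHOW STATUS LIKE 'Threads_connected' and SHOW VARIABLES LIKE 'max_connections'. Below is a small helper that turns those two numbers into something watchable; the 80% threshold is an arbitrary example, not a rule:

```typescript
// Given the current Threads_connected value and the max_connections
// setting, report how much connection headroom is left.
function connectionHeadroom(threadsConnected: number, maxConnections: number) {
  const usedRatio = threadsConnected / maxConnections;
  return {
    free: maxConnections - threadsConnected,
    usedPercent: Math.round(usedRatio * 100),
    // Arbitrary example threshold: start investigating before you hit the wall.
    warning: usedRatio >= 0.8,
  };
}
```

Feeding this from a periodic health check (or just watching the same numbers in your monitoring tool) tells you whether connections are actually starting to compete.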
In many cases, instead of implementing a proxy, companies simply:
- increase DB RAM
- increase DB resources
- increase max_connections
And honestly? That is okay too. It is faster, simpler, and often enough.
Only later, when costs start rising or the setup becomes harder to manage, does real interest in a proxy begin.
From what I have seen, companies are more likely to first increase DB resources and max_connections than to jump straight into RDS Proxy. And honestly, that does not surprise me at all.
RDS Proxy is more advanced. It gives you more possibilities, but it also requires more work, more understanding, and changes how the application connects to the database — which by itself can create new complexity in an existing setup.
So I would not treat it as the default first move. But it is definitely worth understanding how it works.
Final thoughts on MySQL Too many connections
Too many connections is rarely the problem of one function or one endpoint. It is a sum problem — all the clients, processes, and instances that use the same database at the same time.
An application-side limit is the first and simplest protective mechanism. It is worth setting. But with many backend instances, that alone often stops being enough — and that is exactly where a proxy starts doing useful work.
That does not mean you need to implement it right away. Often, it is enough to first understand your setup and observe how the system behaves under load.
If this article helps someone understand the topic faster than it took me the first time around, that is great. Thanks for your time.

