Too many open files on Ubuntu

I had a report that an application running on an Ubuntu 20.04 server was extremely slow to respond to queries. Checking its service log, I could see the following:

2022/08/04 11:26:35 app: too many open files; retrying in 1s
2022/08/04 11:26:36 app: too many open files; retrying in 5ms
2022/08/04 11:26:36 app: too many open files; retrying in 10ms
2022/08/04 11:26:36 app: too many open files; retrying in 20ms
2022/08/04 11:26:36 app: too many open files; retrying in 40ms
2022/08/04 11:26:36 app: too many open files; retrying in 80ms
2022/08/04 11:26:36 app: too many open files; retrying in 160ms

That’s not good! I knew there wasn’t a limit in the application itself, so this had to be a limitation somewhere within Linux.

It transpires that Linux imposes a soft and a hard limit on the number of files a process can have open at once. You can check the soft limit for the current shell with the following command:

root@localhost:~# ulimit -n
1024
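
Note that ulimit -n on its own reports the soft limit for the current shell; if you want to see both values, the -S and -H flags should do it:

# Soft limit, the one a process actually hits first
ulimit -Sn
# Hard limit, the ceiling the soft limit can be raised to
ulimit -Hn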

To double-check that this limit was actually being applied to the service, I retrieved its PID and then queried its limits:

root@localhost:~# sudo systemctl status app.service
● app.service - app name
     Loaded: loaded (/etc/systemd/system/app.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-08-04 12:10:31 UTC; 31min ago
   Main PID: 1391 (app.sh)
...

root@localhost:~# cat /proc/1391/limits | grep "Max open files"
Max open files            1024                 524288               files
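
If you want to confirm how close the process actually is to that limit, counting its open file descriptors is a quick sanity check (1391 being the PID from above):

# Count the file descriptors currently open by PID 1391
ls /proc/1391/fd | wc -l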

There seem to be a number of ways to resolve this, but as it’s specifically this service that hits the limit, I didn’t want to spray-and-pray edits across system-wide configuration files in the hope one of them stuck. Instead, I opted to set the LimitNOFILE directive on the service itself, so the change is scoped to just this unit.

Open the service file:

root@localhost:~# nano /etc/systemd/system/app.service

Find the [Service] block, and add the following:

LimitNOFILE=32000

You should change this value to match your requirements. A single value sets both the soft and hard limits; a soft:hard pair (for example LimitNOFILE=32000:65536) sets them separately.
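
As an alternative, if the unit file is shipped by a package and you’d rather not edit it in place, a drop-in override should achieve the same result without touching the original file:

root@localhost:~# sudo systemctl edit app.service

Then, in the editor that opens, add:

[Service]
LimitNOFILE=32000

systemctl edit writes this to an override file under /etc/systemd/system/app.service.d/ and reloads systemd for you when the editor closes.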

Reload systemd so it picks up the edited unit file, restart the service, retrieve its PID, then query the limit again:

root@localhost:~# sudo systemctl daemon-reload
root@localhost:~# sudo systemctl restart app.service
root@localhost:~# sudo systemctl status app.service
● app.service - app name
     Loaded: loaded (/etc/systemd/system/app.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-08-04 14:00:37 CEST; 16s ago
   Main PID: 1875 (app.sh)
...

root@localhost:~# cat /proc/1875/limits | grep "Max open files"
Max open files            32000                32000                files
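
To be doubly sure the errors had stopped, you can also check the service log again since the restart (assuming the service logs to the journal):

# Show any occurrences of the error logged since the restart
journalctl -u app.service --since "2022-08-04 14:00" | grep "too many open files"

No output means no new occurrences of the error.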

That looks better to me, and the issue was resolved.
