Cloudron v9: huge disk I/O - is this normal/safe/needed?
-
It’s a production server; isn’t it ridiculous to stop these apps just to watch resource behavior? There must be tools or ways to find the root cause, don’t you think?
Besides that, it’s the host MySQL; does it have anything to do with the apps?
-
Hello @imc67
You can use the PID of the process to figure out which MySQL service it is. E.g. your `iotop` shows PID `1994756` for `mysqld`.
You can run `systemctl status mysql.service` and the PID is displayed there:

```
● mysql.service - MySQL Community Server
     Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-12-01 09:17:59 UTC; 1 week 5 days ago
   Main PID: 1994756 (mysqld)
     Status: "Server is operational"
      Tasks: 48 (limit: 4603)
     Memory: 178.7M (peak: 298.0M swap: 95.4M swap peak: 108.7M)
        CPU: 1h 41min 31.520s
     CGroup: /system.slice/mysql.service
             └─1994756 /usr/sbin/mysqld

Notice: journal has been rotated since unit was started, output may be incomplete.
```

So from `iotop` I can confirm that the system mysqld service is PID `1994756`, so I'd know to inspect the system mysqld service and not the Docker MySQL service.
You can also get the PID of the `mysqld` inside the Docker container with `docker top mysql`:

```
docker top mysql
UID      PID    PPID   C   STIME   TTY   TIME       CMD
root     1889   1512   0   Nov07   ?     00:06:17   /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
usbmux   3079   1889   0   Nov07   ?     03:49:38   /usr/sbin/mysqld
usbmux   3099   1889   0   Nov07   ?     00:00:11   node /app/code/service.js
```

Then I know the `mysqld` PID of the Docker service is `3079`, which I can check again with the system:

```
ps uax | grep -i 3079
usbmux   3079  0.4  1.0  1587720  43692  ?  Sl  Nov07  229:38  /usr/sbin/mysqld
```

Now we can differentiate between the two.
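If you want both PIDs in one step, here is a minimal shell sketch (assuming the addon container is named `mysql`, as in the output above):

```bash
# Print both mysqld PIDs side by side so they can be matched against iotop:
# the host service PID from systemd, the container one from docker top
echo "host mysqld PID:   $(systemctl show --property MainPID --value mysql.service)"
echo "docker mysqld PID: $(docker top mysql | awk '$NF == "/usr/sbin/mysqld" {print $2}')"
```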
Okay.
Now that we can differentiate between the two, you can observe `iotop` and see which one has high I/O.
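For the observation itself, these `iotop` flags keep the output focused; run it for a while and compare the accumulated I/O of the two PIDs:

```bash
# -o: only show processes actually doing I/O
# -P: show processes instead of individual threads
# -a: accumulate I/O since start instead of showing current bandwidth
sudo iotop -oPa
```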
After you narrow it down to one of them, we can do some analysis of which database / table gets accessed the most, to narrow it down even further; see the sketch below.
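For the Docker MySQL, one way to see which tables take the most writes is `performance_schema` (a minimal sketch, assuming you can authenticate as root inside the addon container; adjust the credentials to your setup):

```bash
# Top 10 tables by write volume; performance_schema is on by default in MySQL 8
docker exec mysql mysql -uroot -e "
  SELECT object_schema, object_name, count_write, sum_timer_write
  FROM performance_schema.table_io_waits_summary_by_table
  ORDER BY sum_timer_write DESC
  LIMIT 10;"
```

-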
Ok, thanks for your hints!!
The result was PID `19974`. However:

```
● mysql.service - MySQL Community Server
     Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2025-12-13 05:57:30 UTC; 1 day 5h ago
    Process: 874 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
   Main PID: 910 (mysqld)
     Status: "Server is operational"
      Tasks: 47 (limit: 77023)
     Memory: 601.7M
        CPU: 59min 14.538s
     CGroup: /system.slice/mysql.service
             └─910 /usr/sbin/mysqld
```

And `docker top mysql`:

```
UID        PID     PPID   C   STIME   TTY   TIME       CMD
root       9842    8908   0   Dec13   ?     00:00:17   /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
message+   19974   9842   6   Dec13   ?     01:56:43   /usr/sbin/mysqld
message+   19976   9842   0   Dec13   ?     00:01:31   node /app/code/service.js
```

So `ps uax | grep -i 19974` gives:

```
message+   19974  6.6  1.8  4249604  1229136  ?  Sl  Dec13  116:48  /usr/sbin/mysqld
```

So at least we now know that it's the Docker MySQL.
-
Hello @imc67
Now we can start analysing.
Edit the file `/home/yellowtent/platformdata/mysql/custom.cnf` and add the following lines:

```
[mysqld]
general_log = 1
slow_query_log = 1
```

Restart the MySQL service in the Cloudron Dashboard.
The log files are stored at `/home/yellowtent/platformdata/mysql/mysql.log` and `/home/yellowtent/platformdata/mysql/mysql-slow.log`. Let it run for a day or more.
Then you can download the log files and see which queries run very often, causing the disk I/O; a sketch for summarizing the slow log follows.
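Once the slow log has data, `mysqldumpslow` (shipped with MySQL) can aggregate it; a minimal sketch, using the path from above:

```bash
# Top 10 slow-log entries sorted by count (-s c), i.e. the queries that
# show up most often rather than the single slowest ones
mysqldumpslow -s c -t 10 /home/yellowtent/platformdata/mysql/mysql-slow.log
```

-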
I enabled this and within seconds the log file was enormous. I asked ChatGPT to analyse it and here are its observations (too technical for me):
Some observations after briefly enabling the MySQL general log (Cloudron v9)
I enabled the MySQL general log only for a short time because of disk I/O concerns, but even within a few minutes a clear pattern showed up.
What I’m seeing:
- A very high number of `INSERT INTO session (...)` and `INSERT ... ON DUPLICATE KEY UPDATE`
- These happen continuously and come from `172.18.0.1`
- As far as I understand, this IP is the Docker bridge gateway in Cloudron, so it likely represents multiple apps (a quick check is sketched below)
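To verify that guess, the gateway address can be read from Docker directly (a sketch, assuming the Cloudron bridge network is named `cloudron`; check yours with `docker network ls` first):

```bash
# Prints the gateway of the bridge network; if it shows 172.18.0.1, the
# session writes are indeed arriving through the Docker bridge gateway
docker network inspect cloudron --format '{{(index .IPAM.Config 0).Gateway}}'
```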
I temporarily disabled Matomo to rule that out, but disk I/O and session-related writes did not noticeably decrease, so it does not seem to be the main contributor.
From the log it looks like:
- Multiple applications are storing sessions in MySQL
- Session rows are updated on almost every request
- This can generate a lot of InnoDB redo log and disk I/O, even with low traffic (a rough count is sketched below)
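A crude way to quantify this from the general log (a sketch using the log path from earlier; the normalization is only approximate and multi-line statements are not handled):

```bash
# How many session INSERTs the general log has recorded so far
grep -c 'INSERT INTO session' /home/yellowtent/platformdata/mysql/mysql.log

# The 10 most frequent query shapes: pull the statement text ($3) for
# "Query" events, collapse numbers and string literals, then count
awk -F'\t' '$2 ~ /Query$/ {print $3}' /home/yellowtent/platformdata/mysql/mysql.log \
  | sed -E "s/[0-9]+/N/g; s/'[^']*'/'S'/g" \
  | sort | uniq -c | sort -rn | head -10
```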
Nothing looks obviously broken, but I’m trying to understand whether this level of session write activity is:
- expected behavior in Cloudron v9
- something that can be tuned or configured
- or if there are recommended best practices (e.g. Redis for sessions)
Any guidance on how Cloudron expects apps to handle sessions, or how to reduce unnecessary MySQL write I/O, would be much appreciated.
Thanks for looking into this.
-
Do you happen to use Nextcloud on the server? I think Nextcloud+LDAP keeps doing a login request for each file when syncing (which might trigger a login event log entry in MySQL).
@joseph said in Cloudron v9: huge disk I/O - is this normal/safe/needed?:
Do you happen to use Nextcloud on the server? I think Nextcloud+LDAP keeps doing a login request for each file when syncing (which might trigger a login event log entry in MySQL).
No, there is no Nextcloud on this server.