Cloudron v9: huge disk I/O is this normal/safe/needed?
-
Hello @imc67
You can use the PID from the process to figure out which mysql service it is. E.g. your `iotop` shows for `mysqld` the pid `1994756`.

You can run `systemctl status mysql.service` and the pid is displayed there:

```
● mysql.service - MySQL Community Server
     Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-12-01 09:17:59 UTC; 1 week 5 days ago
   Main PID: 1994756 (mysqld)
     Status: "Server is operational"
      Tasks: 48 (limit: 4603)
     Memory: 178.7M (peak: 298.0M swap: 95.4M swap peak: 108.7M)
        CPU: 1h 41min 31.520s
     CGroup: /system.slice/mysql.service
             └─1994756 /usr/sbin/mysqld

Notice: journal has been rotated since unit was started, output may be incomplete.
```

So from `iotop` I can confirm that the system mysqld service is pid `1994756`, so I'd know to inspect the system mysqld service and not the docker mysql service.

You can also get the pid of the `mysqld` inside the docker container with `docker top mysql`:

```
docker top mysql
UID      PID    PPID  C  STIME  TTY  TIME      CMD
root     1889   1512  0  Nov07  ?    00:06:17  /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
usbmux   3079   1889  0  Nov07  ?    03:49:38  /usr/sbin/mysqld
usbmux   3099   1889  0  Nov07  ?    00:00:11  node /app/code/service.js
```

Then I know the `mysqld` pid of the docker service is `3079`, which I can check again against the system:

```
ps uax | grep -i 3079
usbmux   3079  0.4  1.0 1587720 43692 ?  Sl  Nov07  229:38  /usr/sbin/mysqld
```

Now we can differentiate between the two.
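If you want a quicker way to tell which side a given PID belongs to, its cgroup usually gives it away. A minimal sketch, using the example pid from above and assuming the usual systemd/docker cgroup layout:

```sh
#!/bin/sh
# Sketch: tell the host mysqld apart from the containerised one via its cgroup.
# 1994756 is just the example pid from above; substitute the pid iotop shows you.
PID=1994756

# Host service              -> a path ending in system.slice/mysql.service
# Docker (addon) container  -> a docker-<container id>.scope or /docker/<id> path
cat /proc/$PID/cgroup
```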
Okay.
Now that we can differentiate between the two, you can observe `iotop` and see which one has the high I/O.
After you narrow it down to one of them, we can do some analysis of which database / table gets accessed the most to narrow it down even further.
-
Ok, thanks for your hints!!
The result was PID `19974`. However:

```
● mysql.service - MySQL Community Server
     Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2025-12-13 05:57:30 UTC; 1 day 5h ago
    Process: 874 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
   Main PID: 910 (mysqld)
     Status: "Server is operational"
      Tasks: 47 (limit: 77023)
     Memory: 601.7M
        CPU: 59min 14.538s
     CGroup: /system.slice/mysql.service
             └─910 /usr/sbin/mysqld
```

And `docker top mysql`:

```
UID       PID    PPID  C  STIME  TTY  TIME      CMD
root      9842   8908  0  Dec13  ?    00:00:17  /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
message+  19974  9842  6  Dec13  ?    01:56:43  /usr/sbin/mysqld
message+  19976  9842  0  Dec13  ?    00:01:31  node /app/code/service.js
```

So `ps uax | grep -i 19974` gives:

```
message+  19974  6.6  1.8 4249604 1229136 ?  Sl  Dec13  116:48  /usr/sbin/mysqld
```

So at least we now know that it's the Docker MySQL.
-
Hello @imc67
Now we can start analysing.
Edit the file `/home/yellowtent/platformdata/mysql/custom.cnf` and add the following lines:

```
[mysqld]
general_log = 1
slow_query_log = 1
```

Restart the MySQL service in the Cloudron Dashboard.

The log files are stored at `/home/yellowtent/platformdata/mysql/mysql.log` and `/home/yellowtent/platformdata/mysql/mysql-slow.log`. Let it run for a day or more.
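Once the general log has collected some data, a rough first pass can also be done directly on the server. This is only a sketch, assuming the log path above; the grep pattern is an approximation of how query lines appear in the general log, so adjust it as needed:

```sh
#!/bin/sh
# Sketch: count the most frequent write statements and their target tables in
# the MySQL general log. The pattern is an approximation; tweak it for your log.
LOG=/home/yellowtent/platformdata/mysql/mysql.log

grep -oE '(INSERT INTO|UPDATE|DELETE FROM) [^ (]+' "$LOG" \
  | sort | uniq -c | sort -rn | head -20
```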
Then you can download the log files and see which queries run very often and cause the disk I/O.
-
I enabled this and within seconds the log file was enormous. I asked ChatGPT to analyse it and here are its observations (too technical for me):
Some observations after briefly enabling the MySQL general log (Cloudron v9)
I enabled the MySQL general log only for a short time because of disk I/O concerns, but even within a few minutes a clear pattern showed up.
What I’m seeing:
- A very high number of `INSERT INTO session (...)` and `INSERT ... ON DUPLICATE KEY UPDATE`
- These happen continuously and come from `172.18.0.1`
- As far as I understand, this IP is the Docker bridge gateway in Cloudron, so it likely represents multiple apps
I temporarily disabled Matomo to rule that out, but disk I/O and session-related writes did not noticeably decrease, so it does not seem to be the main contributor.
From the log it looks like:
- Multiple applications are storing sessions in MySQL
- Session rows are updated on almost every request
- This can generate a lot of InnoDB redo log and disk I/O, even with low traffic (a rough way to attribute these writes is sketched below)
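One rough way to attribute those session writes to specific apps is to correlate connection ids in the general log: each `Connect` line records which database a connection opened, and the `Query` lines carry the same connection id. A sketch, assuming the log path from earlier in the thread; the awk field position is an assumption about the general log layout, so verify it against a few lines of your own log first:

```sh
#!/bin/sh
# Sketch: attribute "INSERT INTO session" writes to the database each
# connection opened. $2 (connection id) is an assumption about the general
# log's column layout; check it against your own log before trusting the counts.
LOG=/home/yellowtent/platformdata/mysql/mysql.log

# connection ids issuing the most session INSERTs
awk '/Query.*INSERT INTO session/ {print $2}' "$LOG" | sort | uniq -c | sort -rn | head

# what those connections opened: "Connect  user@host on <database>"
grep -w Connect "$LOG" | head
```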
Nothing looks obviously broken, but I’m trying to understand whether this level of session write activity is:
- expected behavior in Cloudron v9
- something that can be tuned or configured
- or if there are recommended best practices (e.g. Redis for sessions)
Any guidance on how Cloudron expects apps to handle sessions, or how to reduce unnecessary MySQL write I/O, would be much appreciated.
Thanks for looking into this.
-
Do you happen to use Nextcloud on the server? I think Nextcloud+LDAP keeps doing a login request for each file when syncing (which might trigger a login event log entry in MySQL).
@joseph said in Cloudron v9: huge disk I/O is this normal/safe/needed?:
Do you happen to use Nextcloud on the server? I think Nextcloud+LDAP keeps doing a login request for each file when syncing (which might trigger a login event log entry in MySQL).
No, there is no Nextcloud on this server.
-
joseph has marked this topic as solved
-
@Joseph isn't it strange that you set this topic to solved without checking?
@girish & @nebulon today I spent an awful lot of time analysing this issue together with Claude.ai, and this is the result:
Cloudron health checker triggers excessive MySQL disk I/O via Matomo LoginOIDC plugin
I want to report a bug that causes massive MySQL disk write I/O on servers running Matomo with the LoginOIDC plugin (which is the default Cloudron OIDC integration).
The problem
The Cloudron health checker calls the root URL `/` of the Matomo app every 10 seconds. When Matomo's LoginOIDC plugin is active, every single health check request causes PHP to create a new session in MySQL containing a `Login.login` nonce and a `LoginOIDC.nonce`, even though no user is logging in.

This results in exactly 360 new MySQL session rows per hour, 24 hours a day, on every server running Matomo with OIDC enabled.
Evidence
Session count per hour over a full day (consistent across 3 separate servers):
```
00:00 → 360 sessions
01:00 → 360 sessions
02:00 → 360 sessions
...   (identical every hour, including 3am)
23:00 → 360 sessions
```

360 sessions/hour = exactly 1 per 10 seconds = the Cloudron health check interval.
Decoding a session row from the MySQL session table confirms the content:
```
a:3:{
  s:11:"Login.login";
  a:1:{s:5:"nonce"; s:32:"44e6599e05b0e829ec469459a413fc11";}
  s:4:"__ZF";
  a:2:{
    s:11:"Login.login";     a:1:{s:4:"ENVT"; a:1:{s:5:"nonce"; i:1772890030;}}
    s:15:"LoginOIDC.nonce"; a:1:{s:4:"ENVT"; a:1:{s:5:"nonce"; i:1772890030;}}
  }
  s:15:"LoginOIDC.nonce";
  a:1:{s:5:"nonce"; s:32:"7456603093600c7a3686d560bc61acd1";}
}
```

These are unauthenticated OIDC handshake sessions, not real users.
Sessions have a lifetime of 1,209,600 seconds (14 days), so they accumulate without being cleaned up. On my 3 servers this resulted in 113,000–121,000 session rows per Matomo instance, causing continuous MySQL InnoDB redo log writes and buffer pool flushes of 2.5–4 MB/s disk I/O.
Today's actual visitor count in Matomo: 22 visits across 10 sites. Today's sessions created in MySQL: 4,320+.
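For anyone who wants to check whether their own instance shows the same build-up, a quick look at the size and age of the Matomo session table is enough. A minimal sketch; `<your_matomo_db>` is a placeholder (as in the per-hour query later in this thread), and the connection flags depend entirely on how you reach the addon's MySQL, so treat them as placeholders too:

```sh
#!/bin/sh
# Sketch: size and age of the Matomo session table. The schema name and the
# connection flags (-h/-u/-p) are placeholders; fill in your own values.
mysql -h 127.0.0.1 -u root -p -e "
  SELECT COUNT(*)                     AS session_rows,
         FROM_UNIXTIME(MIN(modified)) AS oldest,
         FROM_UNIXTIME(MAX(modified)) AS newest
  FROM \`<your_matomo_db>\`.session;"
```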
Root cause
The Cloudron health checker calls `GET /` on the Matomo app. This URL triggers the LoginOIDC plugin to initialize an OIDC authentication flow and write a session to MySQL, even for a non-browser health check request with no user interaction.

Suggested fix
The Cloudron health checker should call a static or session-free endpoint instead of `/`, for example:

- `matomo.js` or `piwik.js` (static JavaScript file, no PHP session)
- A dedicated `/health` or `/ping` endpoint
This would eliminate the session creation entirely without requiring any changes to Matomo or its plugins.
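A rough way to confirm the difference is to imitate the health check by hand and watch the session table before and after each request. A sketch, assuming a placeholder domain (`matomo.example.com` is not a real instance); the user agent just mimics the `Mozilla (CloudronHealth)` string visible in the access log later in this thread:

```sh
#!/bin/sh
# Sketch: request the app root (which should add a session row while LoginOIDC
# is active) and a static asset (which should not touch PHP or MySQL at all).
# matomo.example.com is a placeholder; replace it with your Matomo domain.
UA='Mozilla (CloudronHealth)'

curl -s -o /dev/null -w 'GET /          -> %{http_code}\n' -A "$UA" 'https://matomo.example.com/'
curl -s -o /dev/null -w 'GET /matomo.js -> %{http_code}\n' -A "$UA" 'https://matomo.example.com/matomo.js'
```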
Environment
- Cloudron v9.1.3 (9.0.17)
- Ubuntu 22.04.5 LTS
- Matomo 5.8.0 with LoginOIDC plugin
- Reproduced on 3 separate Cloudron Pro instances
-
imc67 marked this topic as a regular topic
-
imc67 marked this topic as a question
-
@imc67 not sure I remember why
Does this mean that if you disable matomo temporarily, the disk usage goes down a lot?

Seems easy to fix now that we know the root cause.
-
My two cents: if #28 is correct, this should happen on every Cloudron instance that has Matomo (and OIDC enabled). I looked at one of my instances that meets the criteria. One of the Matomo instances had about 300 sessions stored in MySQL; the oldest entry is from Feb 26.
So maybe #28 isn't correct, or it's something that only happens on this instance.
-
Maybe because the three installs are 5-6 years old and had many many updates/upgrades etc?
can you check how many sessions per hour are being created? Run this query:
```sql
SELECT HOUR(FROM_UNIXTIME(modified)) AS hour,
       COUNT(*) AS sessions
FROM `<your_matomo_db>`.session
WHERE DATE(FROM_UNIXTIME(modified)) = CURDATE() - INTERVAL 1 DAY
GROUP BY hour
ORDER BY hour;
```

On my instances this shows exactly 360 per hour = 1 per 10 seconds = health check interval. If yours shows much less, the health checker behaves differently on your setup.
-
@imc67 one app instance (4y old)
```
+------+----------+
| hour | sessions |
+------+----------+
|    0 |        2 |
|    2 |        1 |
|    7 |        2 |
|    8 |        1 |
|    9 |        1 |
|   13 |        3 |
|   15 |        1 |
|   17 |        3 |
|   19 |        1 |
|   20 |        3 |
|   21 |        4 |
|   22 |        1 |
+------+----------+
```

different app instance (7y old)
```
+------+----------+
| hour | sessions |
+------+----------+
|    3 |        1 |
|    5 |        2 |
|   15 |        4 |
|   18 |        2 |
|   19 |        2 |
|   20 |        2 |
|   21 |        4 |
|   22 |        2 |
+------+----------+
```

health check is every 10 sec.
```
Mar 07 18:00:50  - - -          [07/Mar/2026:17:00:50 +0000] "GET / HTTP/1.1" 302 -   "-" "Mozilla (CloudronHealth)"
Mar 07 18:00:50  172.18.0.1 - - [07/Mar/2026:17:00:50 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
Mar 07 18:01:00  - - -          [07/Mar/2026:17:01:00 +0000] "GET / HTTP/1.1" 302 -   "-" "Mozilla (CloudronHealth)"
Mar 07 18:01:00  172.18.0.1 - - [07/Mar/2026:17:01:00 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
Mar 07 18:01:10  - - -          [07/Mar/2026:17:01:10 +0000] "GET / HTTP/1.1" 302 -   "-" "Mozilla (CloudronHealth)"
Mar 07 18:01:10  172.18.0.1 - - [07/Mar/2026:17:01:10 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
Mar 07 18:01:20  - - -          [07/Mar/2026:17:01:20 +0000] "GET / HTTP/1.1" 302 -   "-" "Mozilla (CloudronHealth)"
Mar 07 18:01:20  172.18.0.1 - - [07/Mar/2026:17:01:20 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
Mar 07 18:01:30  - - -          [07/Mar/2026:17:01:30 +0000] "GET / HTTP/1.1" 302 -   "-" "Mozilla (CloudronHealth)"
Mar 07 18:01:30  172.18.0.1 - - [07/Mar/2026:17:01:30 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
```