Tasks table accumulates tasks indefinitely
-
Second issue found: box.tasks table accumulates completed tasks indefinitely
While investigating the disk I/O issue further, I found a second contributor to the high host MySQL write activity.
The box.tasks table contains tens of thousands of completed tasks (completed=1, pending=0) that are never cleaned up. These go back years:

- Server 1 (running since ~2021): 17,752 completed tasks, oldest from 2021
- Server 2 (running since ~2019): 22,509 completed tasks, oldest from 2019
- Server 3 (running since ~2019): 26,972 completed tasks, oldest from 2019

Breakdown for server 3 as an example:

| type | count | oldest |
| --- | --- | --- |
| cleanBackups | 9,628 | 2019-04-19 |
| backup_xxx | 7,054 | 2019-04-19 |
| app | 4,765 | 2019-10-06 |
| checkCerts | 2,239 | 2021-07-01 |
| renewCerts | 1,611 | 2019-04-19 |
| updateDiskUsage | 1,107 | 2022-12-03 |

This large table causes continuous InnoDB buffer pool activity and redo log writes on the host MySQL, contributing to the baseline disk I/O of ~2-3 MB/s, independently of any app issues.
Query to check on your own server:

```sql
SELECT type,
       COUNT(*)          AS total,
       MIN(creationTime) AS oldest,
       MAX(creationTime) AS newest
FROM box.tasks
WHERE completed = 1 AND pending = 0
GROUP BY type
ORDER BY total DESC
LIMIT 10;
```

Questions:
- Is there a safe way to manually clean up old completed tasks? (A sketch of what I mean follows below.)
- Should Cloudron implement automatic cleanup of completed tasks older than X days?
- Is this a known issue or intentional behavior?
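To make the first question concrete, this is the kind of manual cleanup I have in mind. It is an untested sketch: it assumes creationTime is a datetime column (as in the query above) and that nothing else references old task rows. Back up the box database before trying anything like it. An automatic variant would simply run the same statement from a periodic job.

```sql
-- Hypothetical manual cleanup: drop completed tasks older than 90 days.
-- Untested against Cloudron's schema; back up the box database first.
-- Batched with LIMIT so each transaction stays small; rerun until it
-- reports 0 rows affected.
DELETE FROM box.tasks
WHERE completed = 1
  AND pending = 0
  AND creationTime < DATE_SUB(NOW(), INTERVAL 90 DAY)
LIMIT 1000;
```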
-
That is intentional behavior. Given that task info amounts to very little data, cleaning it up would be an unnecessary micro-optimization, with the downside of essentially losing the audit log of past tasks.
On a modern system, MySQL tables with tens of thousands of rows should really not be an issue, and I don't see how such a table would cause constant disk I/O without actually being worked on; that seems unrelated to me. If this is causing such high disk I/O, something may be off in the MySQL server settings instead. Do you have more analysis or info on that buffer pool activity as it relates to the tasks table?
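For example, a query along these lines should show whether tasks pages are even resident or dirty in the buffer pool. This is a sketch: INFORMATION_SCHEMA.INNODB_BUFFER_PAGE exists in MySQL 5.7+, and querying it scans the entire buffer pool, so it should not be run repeatedly on a busy server.

```sql
-- Count buffer pool pages (and dirty pages) that belong to box.tasks.
-- OLDEST_MODIFICATION is 0 for clean pages and non-zero for dirty ones.
SELECT table_name,
       COUNT(*) AS pages_in_pool,
       SUM(oldest_modification > 0) AS dirty_pages
FROM information_schema.innodb_buffer_page
WHERE table_name LIKE '%tasks%'
GROUP BY table_name;
```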
-
Thanks for the clarification. You are right that the tasks table itself is not the primary cause.
Here is the buffer pool analysis from the host MySQL:
```
BUFFER POOL AND MEMORY
Buffer pool size:   8192 pages
Free buffers:       1030
Database pages:     7123
Modified db pages:  0
Pages written:      1,918,869
Write rate:         9.76 writes/s
Young-making rate:  63 / 1000
```

And the box database table sizes:

| table | size | rows |
| --- | --- | --- |
| eventlog | 79.58 MB | 16,275 |
| tasks | 29.55 MB | 17,719 |
| backups | 19.47 MB | 762 |

The host MySQL write rate (9.76 writes/s) is indeed modest. The main disk I/O culprit is the Docker MySQL (messageb user), which writes significantly more, and that is where the Matomo sessions live.
So I agree the tasks table is not directly causing the disk I/O. The real issue remains the Matomo health checker session accumulation as discussed in the main topic.
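For reference, the figures above can be reproduced with standard MySQL introspection, roughly along these lines. The buffer pool numbers come from the BUFFER POOL AND MEMORY section of the InnoDB status output, and table_rows is only an estimate for InnoDB tables.

```sql
-- Buffer pool statistics (see the BUFFER POOL AND MEMORY section):
SHOW ENGINE INNODB STATUS\G

-- Approximate per-table sizes in the box database
-- (table_rows is an InnoDB estimate, not an exact count):
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb,
       table_rows
FROM information_schema.tables
WHERE table_schema = 'box'
ORDER BY (data_length + index_length) DESC;
```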
-
joseph has marked this topic as solved