Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb
-
Hi Cloudron Team,
I hope you're doing well. I noticed unusually large log files in the directory below:
/home/yellowtent/platformdata/logs/mongodb
app.log → 24GB
app.log.1 → 12GB
- Is there a known issue or misconfiguration that could cause these files to grow to this size?
- If these are app-specific logs, what controls their retention and rotation?
I'd appreciate guidance on how to clean these up safely and prevent them from ballooning like this again.
Thank you for your time and support.
Best,
Alex -
Hello @alex-a-soto
@alex-a-soto said in Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb:
Is there a known issue or misconfiguration that could cause these files to grow to this size?
No.
There is/was this issue, which might also be the case here: https://forum.cloudron.io/topic/13361/after-ubuntu-22-24-upgrade-syslog-getting-spammed-and-grows-way-to-much-clogging-up-the-diskspace
Needs to be validated.
@alex-a-soto said in Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb:
If these are app-specific logs, what controls their retention and rotation?
Since this is about the
/home/yellowtent/platformdata/logs/mongodb
log file, it could be that one of your apps that uses MongoDB is running an absurd number of queries.
What apps are you using? -
Hi @james, thank you for your support.
There is/was this issue, which might also be the case here: https://forum.cloudron.io/topic/13361/after-ubuntu-22-24-upgrade-syslog-getting-spammed-and-grows-way-to-much-clogging-up-the-diskspace
Needs to be validated.
I'll check to see if it's related to the Ubuntu upgrade.
I ran head -n 1 app.log and got a MongoDB log entry noting a find query by _id, labeled as a "slow query" even though it completed instantly (0ms).
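In case it helps, here is a quick way to see which message types dominate the file; this is just a sketch and assumes jq is installed on the host (the sed strips anything before the JSON object on each line, in case a timestamp prefix is added):
cd /home/yellowtent/platformdata/logs/mongodb
# tally the most frequent "msg" values in the current log
sudo sed 's/^[^{]*//' app.log | jq -r '.msg' | sort | uniq -c | sort -rn | head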
What apps are you using?
n8n, Cal.com, Nextcloud, Baserow, HedgeDoc, SOGo, Wekan
I started noticing this after installing Wekan.
-
Hello @alex-a-soto
It might be a good idea to clear the MongoDB logs and disable Wekan to confirm whether this is the issue.
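If you do clear them, truncating keeps the files (and MongoDB's open file handles) in place instead of deleting them; a minimal sketch, assuming the standard coreutils truncate is available on the host:
cd /home/yellowtent/platformdata/logs/mongodb
sudo truncate -s 0 app.log app.log.1   # empties the files without removing them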
@james said in Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb:
Hello @alex-a-soto
It might be a good idea to clear the MongoDB logs and disable Wekan to confirm whether this is the issue.
Hi @james, I cleared the MongoDB logs, disabled Wekan, waited for about 10 minutes, and checked app.log and app.log.1; the file sizes stayed the same during that time.
I restarted Wekan and noticed that both log files began growing again, by about 1.0 MB within minutes.
The logs show repeated hello commands from Wekan to MongoDB that time out or wait for responses.
2025-06-10T19:04:51.245+00:00 - "Error while waiting for hello response"
2025-06-10T19:04:51.245+00:00 - "Slow query"
2025-06-10T19:04:51.245+00:00 - "Waiting for a hello response from a topology change or until deadline"
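For what it's worth, a rough way to check how much of the file these repeated entries account for (assuming plain grep and wc on the host):
cd /home/yellowtent/platformdata/logs/mongodb
sudo grep -c 'Waiting for a hello response' app.log   # occurrences of the repeated message
sudo wc -l app.log                                    # total number of log lines, for comparison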
-
Since the warning is from mongodb, I would try giving mongodb more memory - https://docs.cloudron.io/services/#configure . You can safely delete the log files. They are supposed to be logrotated, but of course if an app is spamming its logs faster than logrotate kicks in, then it will end up filling the disk this way.
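If you want rotation to kick in on size rather than on the usual schedule, a logrotate drop-in along these lines could work; treat it as a sketch (the filename, size limit and rotate count are only examples, and it should be checked against the rotation config Cloudron already ships so they don't conflict):
# /etc/logrotate.d/cloudron-mongodb-extra (hypothetical filename)
/home/yellowtent/platformdata/logs/mongodb/app.log {
    size 500M
    rotate 2
    compress
    missingok
    notifempty
    copytruncate   # truncate in place so mongod keeps writing to the same file
}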
-
Hello @alex-a-soto
Please share an excerpt of that log file.
Maybe I can see something.
@james said in Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb:
Hello @alex-a-soto
Please share an excerpt of that log file.
Maybe I can see something.
Hi @james, I've shared an excerpt of the log file below, with some parts redacted.
2025-06-12T10:49:13-04:00 {"t":{"$date":"2025-06-12T14:49:13.303+00:00"},"s":"I","c":"COMMAND","id":[REDACTED_ID],"ctx":"[REDACTED_CTX]","msg":"Slow query","attr":{"type":"command","ns":"[REDACTED_DB].[REDACTED_COLLECTION]","command":{"find":"[REDACTED_COLLECTION]","filter":{"cardId":{"$in":[ /* …redacted list of IDs… */ ]}}},"lsid":{"id":{"$uuid":"[REDACTED_UUID]"}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":[REDACTED_TS_T],"i":1}},"signature":{"hash":"[REDACTED_SIG_HASH]","keyId":[REDACTED_KEY_ID]}}},"$db":"[REDACTED_DB]","planSummary":"COLLSCAN","planningTimeMicros":89,"keysExamined":0,"docsExamined":0,"nBatches":1,"cursorExhausted":true,"numYields":0,"nreturned":0,"queryHash":"[REDACTED_QUERY_HASH]","planCacheKey":"[REDACTED_PLAN_CACHE_KEY]","queryFramework":"classic","reslen":253,"locks":{ /* …intact… */ },"readConcern":{"level":"local"},"storage":{},"cpuNanos":139903,"remote":"[REDACTED_IP]:[REDACTED_PORT]","protocol":"op_msg","durationMillis":0}
2025-06-12T10:49:21-04:00 {"t":{"$date":"2025-06-12T14:49:21.853+00:00"},"s":"D1","c":"STORAGE","id":[REDACTED_ID],"ctx":"TimestampMonitor","msg":"No drop-pending idents have expired","attr":{"timestamp":{"$timestamp":{"t":[REDACTED_TS_T],"i":1}},"pendingIdentsCount":0}}
2025-06-12T10:49:23-04:00 {"t":{"$date":"2025-06-12T14:49:23.311+00:00"},"s":"D1","c":"REPL","id":[REDACTED_ID],"ctx":"[REDACTED_CONN]","msg":"Waiting for a hello response from a topology change or until deadline","attr":{"deadline":{"$date":"2025-06-12T14:49:33.311Z"},"currentTopologyVersionCounter":6}}
2025-06-12T10:49:25-04:00 {"t":{"$date":"2025-06-12T14:49:25.463+00:00"},"s":"D1","c":"REPL","id":[REDACTED_ID],"ctx":"NoopWriter","msg":"Set last known op time","attr":{"lastKnownOpTime":{"ts":{"$timestamp":{"t":[REDACTED_TS_T],"i":1}},"t":[REDACTED_TERM]}}}
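One thing I notice in the excerpt: the D1 entries are debug-level messages, and the "Slow query" line finished in 0ms, so it may be that operations are being logged regardless of whether they are actually slow. A way to check the current thresholds, purely as a sketch (the connection string is a placeholder for however the Cloudron mongodb service is reached):
mongosh "mongodb://localhost:27017" --eval 'printjson(db.getProfilingStatus()); printjson(db.adminCommand({ getParameter: 1, logLevel: 1, logComponentVerbosity: 1 }))'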
-
Unfortunately, this did not help much.
Did you ever download the whole log file and inspect it?
Maybe there is something visibly repeating that might indicate the cause.
If you have the option, maybe upload the big log file or the last 500MB of it somewhere so I can also take a look at a bigger chunk.
-
Since those are just log lines for the commands the app sends to mongodb, this indicates that the app is simply very busy using the database. To be honest, I am not sure why mongodb logs every single query like this. We have to check how to reduce that log level, since these entries don't really add much.
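For reference, the two knobs that usually control this are the slow-op threshold (slowms) and the per-component log verbosity; a sketch of turning both down at runtime (placeholder connection string, and note these runtime changes revert when mongod restarts unless they are persisted in the server config, which Cloudron manages):
mongosh "mongodb://localhost:27017" --eval 'db.setProfilingLevel(0, { slowms: 100 }); db.adminCommand({ setParameter: 1, logComponentVerbosity: { verbosity: 0 } })'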
-
I haven't downloaded the log file. I've inspected it using cat and tail, and it's the same repeating pattern as in the redacted excerpt I shared earlier.
I came across this Wekan issue; not sure if it's related:
Wekan's mongodb.log grows pretty large (unlimited?)