Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb

Support · Unsolved
Tags: mongodb, bloat, disk space, disk usage, disk-usage
14 Posts 4 Posters 110 Views 4 Watching
alex-a-soto #1

    Hi Cloudron Team,

    I hope you're doing well. I noticed unusually large log files in the directory below:

    /home/yellowtent/platformdata/logs/mongodb
     
    app.log       → 24GB
    app.log.1     → 12GB
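
    For reference, a quick way to reproduce this size check (a sketch; the directory is the one above, the du flags are standard):

    # Human-readable sizes of the MongoDB service logs
    du -h /home/yellowtent/platformdata/logs/mongodb/app.log*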
    
    1. Is there a known issue or misconfiguration that could cause these files to grow to this size?
    2. If these are app-specific logs, what controls their retention and rotation?

    Would appreciate guidance on how to clean these up and prevent future bloating safely.

    Thank you for your time and support.

    Best,
    Alex

james (Staff) #2

      Hello @alex-a-soto

      @alex-a-soto said in Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb:

      Is there a known issue or misconfiguration that could cause these files to grow to this size?

      No.
      There is/was this issue, which might also be the case here: https://forum.cloudron.io/topic/13361/after-ubuntu-22-24-upgrade-syslog-getting-spammed-and-grows-way-to-much-clogging-up-the-diskspace
      Needs to be validated.

      @alex-a-soto said in Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb:

      If these are app-specific logs, what controls their retention and rotation?

      Since this is about the /home/yellowtent/platformdata/logs/mongodb log file, it could be that one of your apps that uses MongoDB is running an absurd number of queries.
      What apps are you using?

alex-a-soto #3

        Hi @james, thank you for your support.

        There is/was this issue, which might also be the case here: https://forum.cloudron.io/topic/13361/after-ubuntu-22-24-upgrade-syslog-getting-spammed-and-grows-way-to-much-clogging-up-the-diskspace

        Needs to be validated.

        I'll check to see if it's related to the Ubuntu upgrade.

        I ran head -n 1 app.log and got a MongoDB log entry noting a find query by _id, labeled as a "slow query" even though it completed instantly (0ms).
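
        To see which messages dominate, a quick frequency count over the structured log's msg fields can help (a sketch; assumes the JSON log format excerpted later in this thread):

    # Count the most common message types in the MongoDB log
    grep -o '"msg":"[^"]*"' app.log | sort | uniq -c | sort -rn | head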

        What apps are you using?

        n8n, Cal.com, Nextcloud, Baserow, HedgeDoc, SOGo, Wekan

        I started noticing this after installing Wekan.

james (Staff) #4

          Hello @alex-a-soto

          It might be a good idea to clear the MongoDB logs and disable Wekan to confirm whether this is the issue.

alex-a-soto #5

            @james said in Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb:

            Hello @alex-a-soto

            It might be a good idea to clear the MongoDB logs and disable Wekan to confirm whether this is the issue.

            Hi @james, I cleared the MongoDB logs, disabled Wekan, waited for about 10 minutes, and checked app.log and app.log.1; the file sizes stayed the same during that time.

            I restarted Wekan and noticed that both log files began growing again, by about 1.0 MB within minutes.

            The logs show repeated hello commands from Wekan to MongoDB that time out or wait for responses.

            2025-06-10T19:04:51.245+00:00 - "Error while waiting for hello response"
            2025-06-10T19:04:51.245+00:00 - "Slow query"
            2025-06-10T19:04:51.245+00:00 - "Waiting for a hello response from a topology change or until deadline"
            
james (Staff) #6

              @alex-a-soto so maybe you also need to increase the RAM of Wekan. Worth a try.

joseph (Staff) #7

                Since the warning is from mongodb, I would try giving mongodb more memory: https://docs.cloudron.io/services/#configure . You can safely delete the log files. They are supposed to be logrotated, but of course if an app is spamming its logs faster than logrotate kicks in, then it will end up filling the disk this way.
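
                If the live log file is held open by the logging process, truncating it in place reclaims the space without breaking the open file handle, while the rotated copy can be removed outright (a sketch; paths are the ones from this thread):

    # Empty the live log without deleting the file being written to
    truncate -s 0 /home/yellowtent/platformdata/logs/mongodb/app.log
    # The rotated copy is no longer open, so it can be deleted
    rm /home/yellowtent/platformdata/logs/mongodb/app.log.1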

alex-a-soto #8

                  @james @joseph Thank you. I've increased the memory of Wekan and MongoDB. I appreciate your support in troubleshooting. I'll monitor Wekan today and tomorrow to see if this makes a difference.

alex-a-soto #9

                    @james @joseph After increasing the memory of Wekan and MongoDB, it hasn't made a difference.

                    As of now, the sizes are:

                    6.4G    ./app.log
                    6.0G    ./app.log.1
                    
james (Staff) #10

                      Hello @alex-a-soto
                      Please share an excerpt of that log file.
                      Maybe I can see something.

alex-a-soto #11

                        @james said in Unexpected Large Log Files in /home/yellowtent/platformdata/logs/mongodb:

                        Hello @alex-a-soto
                        Please share an excerpt of that log file.
                        Maybe I can see something.

                        Hi @james, here is an excerpt of the log file, with some parts redacted.

                        2025-06-12T10:49:13-04:00 {"t":{"$date":"2025-06-12T14:49:13.303+00:00"},"s":"I","c":"COMMAND","id":[REDACTED_ID],"ctx":"[REDACTED_CTX]","msg":"Slow query","attr":{"type":"command","ns":"[REDACTED_DB].[REDACTED_COLLECTION]","command":{"find":"[REDACTED_COLLECTION]","filter":{"cardId":{"$in":[ /* …redacted list of IDs… */ ]}}},"lsid":{"id":{"$uuid":"[REDACTED_UUID]"}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":[REDACTED_TS_T],"i":1}},"signature":{"hash":"[REDACTED_SIG_HASH]","keyId":[REDACTED_KEY_ID]}}},"$db":"[REDACTED_DB]","planSummary":"COLLSCAN","planningTimeMicros":89,"keysExamined":0,"docsExamined":0,"nBatches":1,"cursorExhausted":true,"numYields":0,"nreturned":0,"queryHash":"[REDACTED_QUERY_HASH]","planCacheKey":"[REDACTED_PLAN_CACHE_KEY]","queryFramework":"classic","reslen":253,"locks":{ /* …intact… */ },"readConcern":{"level":"local"},"storage":{},"cpuNanos":139903,"remote":"[REDACTED_IP]:[REDACTED_PORT]","protocol":"op_msg","durationMillis":0}  
                        
                        2025-06-12T10:49:21-04:00 {"t":{"$date":"2025-06-12T14:49:21.853+00:00"},"s":"D1","c":"STORAGE","id":[REDACTED_ID],"ctx":"TimestampMonitor","msg":"No drop-pending idents have expired","attr":{"timestamp":{"$timestamp":{"t":[REDACTED_TS_T],"i":1}},"pendingIdentsCount":0}}  
                        
                        2025-06-12T10:49:23-04:00 {"t":{"$date":"2025-06-12T14:49:23.311+00:00"},"s":"D1","c":"REPL","id":[REDACTED_ID],"ctx":"[REDACTED_CONN]","msg":"Waiting for a hello response from a topology change or until deadline","attr":{"deadline":{"$date":"2025-06-12T14:49:33.311Z"},"currentTopologyVersionCounter":6}}  
                        
                        2025-06-12T10:49:25-04:00 {"t":{"$date":"2025-06-12T14:49:25.463+00:00"},"s":"D1","c":"REPL","id":[REDACTED_ID],"ctx":"NoopWriter","msg":"Set last known op time","attr":{"lastKnownOpTime":{"ts":{"$timestamp":{"t":[REDACTED_TS_T],"i":1}},"t":[REDACTED_TERM]}}}  
                        
                        
james (Staff) #12

                          Unfortunately, this did not help much.

                          Did you ever download the whole log file and inspect it?
                          Maybe there is a visually repeating pattern that points at something.

                          If you have the option, maybe upload the big log file or the last 500MB of that log file somewhere so I can also take a look at a bigger chunk.

nebulon (Staff) #13

                            Since those are just log lines for the commands the app sends to mongodb, this indicates that the app is simply very busy using the database. To be honest, I am not sure why mongodb logs every single query like this. We have to check how to reduce that log level, since these entries don't really add much.
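
                            For reference, the D1-severity lines in the excerpt above are debug-level messages: MongoDB's default verbosity is 0, and at verbosity 1 it logs every operation as a "Slow query" regardless of duration, which would explain the 0ms entries. A sketch for inspecting and resetting the verbosity (assumes mongosh can reach the mongodb service; these are stock MongoDB shell helpers, not Cloudron-specific):

    # Show the current per-component verbosity levels
    mongosh --eval 'db.getLogComponents()'
    # Reset the global verbosity to the default of 0
    mongosh --eval 'db.setLogLevel(0)'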

alex-a-soto #14

                              @james @nebulon

                              I haven't downloaded the log file. I've inspected it using cat and tail, and it's the same repeating pattern I provided earlier from the redacted log file.

                              I came across this Wekan issue; not sure if it's related:
                              Wekan's mongodb.log grows pretty large (unlimited?)
