Cloudron Forum
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace

Solved | Support
syslog
38 Posts 12 Posters 1.9k Views 12 Watching
  • nebulon (Staff) #29

    So the docker daemon itself using journald via --log-driver=journald is correct. It is also correct that the containers which are managed and started by Cloudron have syslog in the LogConfig of the HostConfig, with the syslog-address option pointing at unix:///home/yellowtent/platformdata/logs/syslog.sock.

    From what I can see in your post this all looks correct and as intended.

    Thus, none of the docker containers should log to journald or rsyslogd. At least not the ones created by Cloudron itself, since Cloudron sets those log options.
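For any given app container, the effective log settings can be read with `docker inspect --format '{{json .HostConfig.LogConfig}}' <container>`. A minimal sketch of validating that JSON in Python; the sample values below are illustrative, mirroring the settings described above, not taken from a real container:

```python
import json

# Illustrative output of:
#   docker inspect --format '{{json .HostConfig.LogConfig}}' <app-container>
# for a Cloudron-managed app container (values mirror the description above).
sample = '''
{
  "Type": "syslog",
  "Config": {
    "syslog-address": "unix:///home/yellowtent/platformdata/logs/syslog.sock"
  }
}
'''

log_config = json.loads(sample)

# Cloudron-managed containers should use the syslog driver, not journald,
# so their stdout/stderr goes to Cloudron's own socket instead of rsyslog.
assert log_config["Type"] == "syslog"
print(log_config["Config"]["syslog-address"])
```

A container showing `"Type": "journald"` (the daemon default) instead would be one whose output ends up in the journal and, from there, in /var/log/syslog.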

    Given that this is Uptime Kuma, which in turn just uses sqlite, this led me to https://git.cloudron.io/platform/box/-/blob/master/src/services.js?ref_type=heads#L933 which indeed starts a container without specifying the Cloudron log driver config. So that is probably one thing we should fix.

    This would still mean the GBs of SQL dump logs just end up in another place, though. So the main issue to fix is that sqlite3 app.db .dump, which is run to create the sqldump, somehow also logs to stdout/err despite stdout being redirected to the dump file... and that ends up in the logs. I haven't found a fix yet, but just to share the investigation here.

  • SansGuidon #30

      In the meantime, it seems the problem still persists:

      root@ubuntu-cloudron-16gb-nbg1-3:~# du -sh /var/log/syslog*
      15G	/var/log/syslog
      26G	/var/log/syslog.1
      0	/var/log/syslog.1.gz-2025083120.backup
      52K	/var/log/syslog.2.gz
      4.0K	/var/log/syslog.3.gz
      4.0K	/var/log/syslog.4.gz
      

      Disk graph shows

        docker 25.9 GB
        docker-volumes 7.79 GB
        /apps.swap 4.29 GB
        platformdata 3.77 GB
        boxdata 58.34 MB
        maildata 233.47 kB
        Everything else (Ubuntu, etc) 48.67 GB
      
      root@ubuntu-cloudron-16gb-nbg1-3:~# truncate -s 0 /var/log/syslog
      root@ubuntu-cloudron-16gb-nbg1-3:~# truncate -s 0 /var/log/syslog.1
      

      After truncating the logs (see above), I reclaimed the disk space, but I really need to work on a more effective patch / housekeeping job to prevent 🔥
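As a stopgap, size-based rotation could cap the growth between fixes. A sketch of a logrotate stanza, assuming Ubuntu's stock rsyslog setup; the size and rotate limits are arbitrary picks, not recommendations:

```
# Illustrative logrotate stanza for /var/log/syslog
/var/log/syslog
{
        size 500M
        rotate 3
        missingok
        notifempty
        compress
        delaycompress
        postrotate
                /usr/lib/rsyslog/rsyslog-rotate
        endscript
}
```

Note that logrotate rejects duplicate entries for the same file, so this would have to replace the /var/log/syslog stanza in the stock /etc/logrotate.d/rsyslog rather than sit alongside it, and size checks only fire as often as the logrotate timer runs.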

      This disk contains:
      
        docker 25.9 GB
        docker-volumes 8.02 GB
        /apps.swap 4.29 GB
        platformdata 3.8 GB
        boxdata 57.93 MB
        maildata 233.47 kB
        Everything else (Ubuntu, etc) 7.62 GB
      

      I would also love it if the Cloudron disk usage view were a graph like the ones for CPU and memory. Maybe it's already planned for Cloudron 9; otherwise, should I mention that idea in a new thread, @nebulon?

      About me / Now

  • james (Staff) #31

        Hello @SansGuidon
        You mean the disk usage as a historical statistic and not only a single point in time when checking?
        If this is what you mean: no, that is not part of Cloudron 9 at the moment.
        But in my opinion, it is a very welcome feature request after Cloudron 9 is released!

  • SansGuidon #32

          @james said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace:

          Hello @SansGuidon
          You mean the disk usage as a historical statistic and not only a single point in time when checking?
          If this is what you mean: no, that is not part of Cloudron 9 at the moment.
          But in my opinion, it is a very welcome feature request after Cloudron 9 is released!

          Exactly, the idea is to be able to notice if something weird is happening (like disk usage growing constantly at a rapid rate).
          I'll make a proposal in a separate thread -> follow-up in https://forum.cloudron.io/topic/14292/add-historical-disk-usage-in-system-info-graphs-section


  • joseph (Staff) #33

            @SansGuidon afaik, Cloudron does not log anything to syslog. Did you happen to check what was inside that massive syslog file? In one of our production cloudrons (running for almost a decade):

            $ du -sh /var/log/syslog*
            5.1M	/var/log/syslog
            6.6M	/var/log/syslog.1
            800K	/var/log/syslog.2.gz
            796K	/var/log/syslog.3.gz
            812K	/var/log/syslog.4.gz
            
  • SansGuidon #34

              Hi @joseph

              root@ubuntu-cloudron-16gb-nbg1-3:~# du -sh /var/log/syslog*
              8.2G	/var/log/syslog
              0	/var/log/syslog.1
              0	/var/log/syslog.1.gz-2025083120.backup
              52K	/var/log/syslog.2.gz
              4.0K	/var/log/syslog.3.gz
              4.0K	/var/log/syslog.4.gz
              

              As mentioned earlier in the discussion, it's due to the sqlite backup dumps of Uptime Kuma, which end up in the wrong place.

              root@ubuntu-cloudron-16gb-nbg1-3:~# grep 'INSERT INTO' /var/log/syslog | wc -l
              47237303
              
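To pin down which program is doing the spamming, tallying the syslog tag field is a quick check. A sketch in Python over made-up sample lines (real usage would read /var/log/syslog; the tag is the fifth whitespace-separated field in the traditional syslog format):

```python
from collections import Counter

# Illustrative stand-ins for lines from /var/log/syslog.
sample_lines = [
    "Sep  1 10:00:01 host docker[123]: INSERT INTO t VALUES(1);",
    "Sep  1 10:00:01 host docker[123]: INSERT INTO t VALUES(2);",
    "Sep  1 10:00:02 host cron[456]: (root) CMD (run-parts /etc/cron.hourly)",
]

# Field 5 of the traditional syslog format is the program tag, e.g. "docker[123]:".
tags = Counter(line.split()[4] for line in sample_lines)
for tag, count in tags.most_common():
    print(count, tag)

# Roughly equivalent one-liner on a real system:
#   awk '{print $5}' /var/log/syslog | sort | uniq -c | sort -rn | head
```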

              And I think @nebulon had started investigating this.
              This generates a few GBs worth of waste per day on my Cloudron instance, which causes regular outages (every few weeks).


  • SansGuidon #35

                For now, as a workaround, I'm applying this patch; please advise if you have any concerns with it 🙂

                diff --git a/box/src/services.js b/box/src/services.js
                --- a/box/src/services.js
                +++ b/box/src/services.js
                @@ -308,15 +308,16 @@ async function backupSqlite(app, options) {
                     // we use .dump instead of .backup because it's more portable across sqlite versions
                     for (const p of options.paths) {
                         const outputFile =  path.join(paths.APPS_DATA_DIR, app.id, path.basename(p, path.extname(p)) + '.sqlite');
                 
                         // we could use docker exec but it may not work if app is restarting
                         const cmd = `sqlite3 ${p} ".dump"`;
                         const runCmd = `docker run --rm --name=sqlite-${app.id} \
                             --net cloudron \
                             -v ${volumeDataDir}:/app/data \
                             --label isCloudronManaged=true \
                -            --read-only -v /tmp -v /run ${app.manifest.dockerImage} ${cmd} > ${outputFile}`;
                +            --log-driver=none \
                +            --read-only -v /tmp -v /run ${app.manifest.dockerImage} ${cmd} > ${outputFile} 2>/dev/null`;
                 
                         await shell.bash(runCmd, { encoding: 'utf8' });
                     }
                 }
                
                


  • joseph (Staff) #36

                  @SansGuidon I think @nebulon investigated and could not reproduce. We also run Uptime Kuma and our logs are fine. Have you enabled backups inside Uptime Kuma, or something else, by any chance?

                  root@my:~# docker ps | grep uptime
                  cb00714073cb   cloudron/louislam.uptimekuma.app:202508221422060000    "/app/pkg/start.sh"      2 weeks ago    Up 2 weeks                                            ee6e4628-c370-4713-9cb6-f1888c32f8fb
                  root@my:~# du -sh /var/log/syslog*
                  352K	/var/log/syslog
                  904K	/var/log/syslog.1
                  116K	/var/log/syslog.2.gz
                  112K	/var/log/syslog.3.gz
                  112K	/var/log/syslog.4.gz
                  108K	/var/log/syslog.5.gz
                  112K	/var/log/syslog.6.gz
                  108K	/var/log/syslog.7.gz
                  root@my:~# grep 'INSERT INTO' /var/log/syslog | wc -l
                  0
                  
  • joseph (Staff) #37

                    FWIW, our db is pretty big too.

                    (attachment: image.png)

                    @SansGuidon the command is just sqlite3 ${p} ".dump" and it is redirected to a file. Do you have any idea why this would log SQL statements to syslog? I can't reproduce this by running the command manually.

  • SansGuidon #38

                      @joseph I don't see any special setting applied in my Uptime Kuma instance. Can you try to reproduce with the instructions below? Hope that makes sense.

                      Ensure your default log driver is journald:

                      systemctl show docker -p ExecStart
                      

                      Should show something like

                      ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=overlay2 --experimental --ip6tables --use>
                      

                      Then try to mimic what backupSqlite() does (no log driver; redirect only outside docker run):

                      docker run --rm alpine sh -lc 'for i in $(seq 1 3); do echo "INSERT INTO t VALUES($i);"; done' > /tmp/out.sql
                      

                      Observe duplicates got logged to syslog anyway:

                      grep 'INSERT INTO t VALUES' /var/log/syslog | wc -l   # > 0
                      cat /tmp/out.sql | wc -l                              # same 3 lines
                      

                      Now repeat with logging disabled (what the fix does):

                      docker run --rm --log-driver=none alpine sh -lc 'for i in $(seq 1 3); do echo "INSERT INTO t VALUES($i);"; done' > /tmp/out2.sql
                      grep 'INSERT INTO t VALUES' /var/log/syslog | wc -l   # unchanged
                      
