Cloudron Forum


Cloudron v9: huge disk I/O is this normal/safe/needed?

Unsolved | Support | Tags: graphs | 39 Posts · 8 Posters · 2.2k Views
imc67 (translator) · #28 (last edited by imc67)

    @Joseph, isn't it strange that you set this topic to solved without checking?

    @girish & @nebulon today I spent an awful lot of time analysing this issue together with Claude.ai, and this is the result:

    Cloudron health checker triggers excessive MySQL disk I/O via Matomo LoginOIDC plugin

    I want to report a bug that causes massive MySQL disk write I/O on servers running Matomo with the LoginOIDC plugin (which is the default Cloudron OIDC integration).

    The problem

    The Cloudron health checker calls the root URL / of the Matomo app every 10 seconds. When Matomo's LoginOIDC plugin is active, every single health check request causes PHP to create a new session in MySQL containing a Login.login nonce and a LoginOIDC.nonce — even though no user is logging in.

    This results in exactly 360 new MySQL session rows per hour, 24 hours a day, on every server running Matomo with OIDC enabled.

    Evidence

    Session count per hour over a full day (consistent across 3 separate servers):

    00:00 → 360 sessions
    01:00 → 360 sessions
    02:00 → 360 sessions
    ... (identical every hour, including 3am)
    23:00 → 360 sessions
    

    360 sessions/hour = exactly 1 per 10 seconds = the Cloudron health check interval.

    Decoding a session row from the MySQL session table confirms the content:

    a:3:{
      s:11:"Login.login"; a:1:{s:5:"nonce"; s:32:"44e6599e05b0e829ec469459a413fc11";}
      s:4:"__ZF"; a:2:{
        s:11:"Login.login"; a:1:{s:4:"ENVT"; a:1:{s:5:"nonce"; i:1772890030;}}
        s:15:"LoginOIDC.nonce"; a:1:{s:4:"ENVT"; a:1:{s:5:"nonce"; i:1772890030;}}
      }
      s:15:"LoginOIDC.nonce"; a:1:{s:5:"nonce"; s:32:"7456603093600c7a3686d560bc61acd1";}
    }
    

    These are unauthenticated OIDC handshake sessions — not real users.
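    For anyone who wants to sanity-check their own session rows without a full PHP unserializer, a rough sketch in Python (the regex approach and the helper name are mine, not part of Matomo):

    ```python
    import re

    # Rough sketch: pull every serialized string and any 32-char hex nonce out
    # of a PHP-serialized session blob. Not a real unserializer, just a smoke
    # test for "is this an OIDC handshake session or a real user?".
    def session_strings(blob):
        strings = re.findall(r's:\d+:"([^"]*)"', blob)
        nonces = [s for s in strings if re.fullmatch(r"[0-9a-f]{32}", s)]
        return strings, nonces

    # Trimmed example blob in the same shape as the row decoded above
    blob = ('a:2:{s:11:"Login.login";a:1:{s:5:"nonce";'
            's:32:"44e6599e05b0e829ec469459a413fc11";}'
            's:15:"LoginOIDC.nonce";a:1:{s:5:"nonce";'
            's:32:"7456603093600c7a3686d560bc61acd1";}}')
    strings, nonces = session_strings(blob)
    # A row containing nothing but Login.login / LoginOIDC.nonce keys and hex
    # nonces is one of these unauthenticated handshake sessions, not a user.
    ```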

    Sessions have a lifetime of 1,209,600 seconds (14 days), so they accumulate without being cleaned up. On my 3 servers this resulted in 113,000–121,000 session rows per Matomo instance, causing continuous MySQL InnoDB redo log writes and buffer pool flushes of 2.5–4 MB/s disk I/O.

    Today's actual visitor count in Matomo: 22 visits across 10 sites. Today's sessions created in MySQL: 4,320+.
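    The numbers above are internally consistent; a quick back-of-the-envelope check (plain arithmetic, nothing Matomo-specific):

    ```python
    # One session per health check, one health check every 10 seconds
    sessions_per_hour = 3600 // 10             # 360, as observed
    sessions_per_day = sessions_per_hour * 24  # 8,640
    # With the 14-day session lifetime and no cleanup, steady state is about
    steady_state = sessions_per_day * 14       # 120,960 rows
    # ...which matches the observed 113,000-121,000 rows per instance.
    ```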

    Root cause

    The Cloudron health checker calls GET / on the Matomo app. This URL triggers the LoginOIDC plugin to initialize an OIDC authentication flow and write a session to MySQL — even for a non-browser health check request with no user interaction.

    Suggested fix

    The Cloudron health checker should call a static or session-free endpoint instead of /, for example:

    • matomo.js or piwik.js (static JavaScript file, no PHP session)
    • A dedicated /health or /ping endpoint

    This would eliminate the session creation entirely without requiring any changes to Matomo or its plugins.
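    Cloudron app packages declare the probe URL via healthCheckPath in CloudronManifest.json, so assuming the Matomo package follows the standard manifest, the fix could be a one-line packaging change along these lines (a sketch, not the actual package source):

    ```json
    {
      "healthCheckPath": "/matomo.js"
    }
    ```

    Since matomo.js is served as a static file, the probe should then no longer touch PHP's session handling at all.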

    Environment

    • Cloudron v9.1.3 (9.0.17)
    • Ubuntu 22.04.5 LTS
    • Matomo 5.8.0 with LoginOIDC plugin
    • Reproduced on 3 separate Cloudron Pro instances
joseph (Staff) · #29

      @imc67 not sure I remember why 😄 Does this mean that if you disable matomo temporarily, the disk usage goes down a lot?

      Seems easy to fix now that we know the root cause


imc67 (translator) · #30

        @joseph I'm pretty sure more apps have suffered from this issue since the introduction of OIDC. I see that EspoCRM and FreeScout also run a health check against the root / (where the OIDC login is), but I didn't check their session tables.

joseph (Staff) · #31

          I have to test, but this looks like a Matomo bug (if this is all true). There is no need to create an OIDC session when visiting /; an OIDC session should only be created when the OIDC login button is clicked.

luckow (translator) · #32 (last edited by luckow)

            My two cents: if #28 is correct, this should happen on every Cloudron instance that runs Matomo with OIDC enabled. I looked at one of my instances that meets the criteria: one Matomo instance had about 300 sessions stored in MySQL, with the oldest entry from Feb 26.
            So maybe #28 isn't correct, or it's something that only happens on that instance.

            Pronouns: he/him | Primary language: German

imc67 (translator) · #33

              Maybe because the three installs are 5-6 years old and have gone through many updates/upgrades?

              Can you check how many sessions per hour are being created? Run this query:

              SELECT HOUR(FROM_UNIXTIME(modified)) AS hour, COUNT(*) AS sessions
              FROM `<your_matomo_db>`.session
              WHERE DATE(FROM_UNIXTIME(modified)) = CURDATE() - INTERVAL 1 DAY
              GROUP BY hour ORDER BY hour;
              

              On my instances this shows exactly 360 per hour = 1 per 10 seconds = health check interval. If yours shows much less, the health checker behaves differently on your setup.


luckow (translator) · #34 (last edited by luckow)

                @imc67 one app instance (4y old)

                +------+----------+
                | hour | sessions |
                +------+----------+
                |    0 |        2 |
                |    2 |        1 |
                |    7 |        2 |
                |    8 |        1 |
                |    9 |        1 |
                |   13 |        3 |
                |   15 |        1 |
                |   17 |        3 |
                |   19 |        1 |
                |   20 |        3 |
                |   21 |        4 |
                |   22 |        1 |
                +------+----------+
                

                different app instance (7y old)

                +------+----------+
                | hour | sessions |
                +------+----------+
                |    3 |        1 |
                |    5 |        2 |
                |   15 |        4 |
                |   18 |        2 |
                |   19 |        2 |
                |   20 |        2 |
                |   21 |        4 |
                |   22 |        2 |
                +------+----------+
                

                The health check runs every 10 seconds:

                Mar 07 18:00:50 - - - [07/Mar/2026:17:00:50 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                Mar 07 18:00:50 172.18.0.1 - - [07/Mar/2026:17:00:50 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                Mar 07 18:01:00 - - - [07/Mar/2026:17:01:00 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                Mar 07 18:01:00 172.18.0.1 - - [07/Mar/2026:17:01:00 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                Mar 07 18:01:10 - - - [07/Mar/2026:17:01:10 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                Mar 07 18:01:10 172.18.0.1 - - [07/Mar/2026:17:01:10 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                Mar 07 18:01:20 - - - [07/Mar/2026:17:01:20 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                Mar 07 18:01:20 172.18.0.1 - - [07/Mar/2026:17:01:20 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                Mar 07 18:01:30 - - - [07/Mar/2026:17:01:30 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                Mar 07 18:01:30 172.18.0.1 - - [07/Mar/2026:17:01:30 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                

imc67 (translator) · #35

                  We also found some huge MySQL tables from a WordPress app running a dedicated MainWP, caused by incorrect retention settings. After correcting the settings and deleting the stale data, a one-minute iotop -aoP -d 5 still shows:

                  • Docker MySQL: 70 MB
                  • Host MySQL: 33 MB
                  • go-carbon: 6.7 MB
                  • jbd2: 9.9 MB
                  • Total: ~103 MB per minute

                  To put this in perspective:

                  • 103 MB/min = 6.2 GB/hour
                  • 6.2 GB/hour = 148 GB/day
                  • 148 GB/day = 4.4 TB/month

                  This is on a server with relatively low visitors across 10 sites. The vast majority of this write activity is caused by the issues identified above (Matomo health checker sessions, box.tasks accumulation, and app-level retention misconfigurations) — not by actual user traffic.

                  Note: these are cumulative iotop counters, not sustained rates. The actual average write speed shown by Cloudron's dashboard is ~2.5-4 MB/s, which still translates to 216-345 GB/day of unnecessary disk writes on a lightly loaded server.
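                  For reference, the unit conversions above check out (pure arithmetic; the 30-day month is an assumption):

                  ```python
                  mb_per_min = 103                       # cumulative iotop total over 1 minute
                  gb_per_hour = mb_per_min * 60 / 1000   # ~6.2 GB/hour
                  gb_per_day = gb_per_hour * 24          # ~148 GB/day
                  tb_per_month = gb_per_day * 30 / 1000  # ~4.4 TB/month

                  # The sustained dashboard rate of 2.5-4 MB/s works out to:
                  low_gb_day = 2.5 * 86400 / 1000        # 216 GB/day
                  high_gb_day = 4 * 86400 / 1000         # ~346 GB/day
                  ```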

imc67 (translator) · #36

                    Update on the Matomo issue https://forum.cloudron.io/post/121389

                    I disabled the OIDC plugin in Matomo, but the sessions are still created every 10 seconds, so it's not that plugin. Next, looking into Matomo's Settings → System Check, it says:

                    Errors below may be due to a partial or failed upload of Matomo files.
                    --> Try to reupload all the Matomo files in BINARY mode. <--
                    
                    File size mismatch: /app/code/misc/user/index.html (expected length: 172, found: 170)
                    

                    The file contains:

                    /app/code# cat /app/code/misc/user/index.html
                    This directory stores the custom logo for this Piwik server. Learn more: <a href="https://matomo.org/faq/new-to-piwik/faq_129/">How do I customise the logo in Piwik?</a>
                    

                    This means nothing serious.

                    Then I installed a Matomo app on the Cloudron demo server and was able to reproduce the session table growing every 10 seconds: 8 extra sessions in the 80 seconds between the two queries below.

                    mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified)) 
                        -> FROM session;
                    +----------+------------------------------+------------------------------+
                    | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
                    +----------+------------------------------+------------------------------+
                    |       47 | 2026-03-08 10:58:00          | 2026-03-08 11:05:20          |
                    +----------+------------------------------+------------------------------+
                    1 row in set (0.00 sec)
                    
                    mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified))  FROM session;
                    +----------+------------------------------+------------------------------+
                    | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
                    +----------+------------------------------+------------------------------+
                    |       55 | 2026-03-08 10:58:00          | 2026-03-08 11:06:40          |
                    +----------+------------------------------+------------------------------+
                    1 row in set (0.00 sec)
                    
nebulon (Staff) · #37

                      There is a lot of information here, but I think it all got a bit too mixed together, making it unclear what actually causes the disk I/O. For a start, upserting sessions in MySQL does not mean it syncs to disk all the time, so this may or may not be related. It is also unclear to me how much disk I/O is expected, and at what point it becomes an issue. That makes it harder to respond properly here.

                      Maybe we can try to separate the issues, first focusing on the potentially unnecessary session creation by the health check, and ideally one application at a time. Maybe you can file those issues against the individual app packages to track them better; otherwise they easily get lost until we have the resources to look into them.

imc67 (translator) · #38

                        Thanks @nebulon for splitting the main issue (high disk I/O) and my three possible root causes into three separate topics.

                        Here we can focus on Matomo, current situation on 3 different servers, each with one Matomo app:

                        mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified))  FROM session;
                        +----------+------------------------------+------------------------------+
                        | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
                        +----------+------------------------------+------------------------------+
                        |   121230 | 2026-02-24 21:02:50          | 2026-03-10 21:43:20          |
                        +----------+------------------------------+------------------------------+
                        1 row in set (0.13 sec)
                        
                        mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified))  FROM session;
                        +----------+------------------------------+------------------------------+
                        | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
                        +----------+------------------------------+------------------------------+
                        |   120811 | 2026-02-24 21:41:30          | 2026-03-10 21:43:10          |
                        +----------+------------------------------+------------------------------+
                        1 row in set (0.13 sec)
                        
                        mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified))  FROM session;
                        +----------+------------------------------+------------------------------+
                        | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
                        +----------+------------------------------+------------------------------+
                        |    22494 | 2026-03-08 07:31:01          | 2026-03-10 21:40:00          |
                        +----------+------------------------------+------------------------------+
                        1 row in set (0.02 sec)
                        
                        

                        This looks like a serious number of sessions in a short time. To be exact:
                        120,811 sessions / 20,161.67 minutes ≈ 5.99 sessions per minute, i.e. one every 10 seconds (the health check interval).
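                        The per-minute figure follows directly from the MIN/MAX timestamps in the second result set above:

                        ```python
                        from datetime import datetime

                        first = datetime(2026, 2, 24, 21, 41, 30)  # MIN(modified), server 2
                        last = datetime(2026, 3, 10, 21, 43, 10)   # MAX(modified), server 2
                        minutes = (last - first).total_seconds() / 60  # ~20,161.67 minutes
                        rate = 120811 / minutes                        # ~5.99 sessions/minute
                        # ~6 sessions/minute is one every 10 seconds: the health check interval.
                        ```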

                        The only thing I can find in config.ini.php regarding sessions is session_save_handler = "", and I don't remember changing that.

imc67 (translator) · #39

                          Here is a more complete analysis of the disk I/O across all 3 servers.

                          1. Cloudron Disk I/O graph (server 1, last 6 hours)

                          [Screenshot: Cloudron Disk I/O graph, server 1, last 6 hours]

                          The graph shows a constant write baseline of ~2.5 MB/s, 24/7. The spike around 20:00 is the scheduled daily backup — completely normal. The total write of 646 GB over 2 days (~323 GB/day) is almost entirely this constant baseline, not user traffic or backups.

                          2. iotop breakdown (server 1, 1 minute measurement)

                          Docker MySQL (messageb):  48.62 MB/min  (~0.81 MB/s)
                          Host MySQL:               23.26 MB/min  (~0.39 MB/s)
                          go-carbon:                 9.34 MB/min  (~0.16 MB/s)
                          jbd2 (fs journal):         8.44 MB/min  (~0.14 MB/s)
                          systemd-journald:          4.37 MB/min  (~0.07 MB/s)
                          containerd:                2.02 MB/min  (~0.03 MB/s)
                          dockerd:                   1.13 MB/min  (~0.02 MB/s)
                          Total:                   ~97 MB/min    (~1.6 MB/s average)
                          

                          Note: the average of ~1.6 MB/s is consistent with the graph baseline of ~2.5 MB/s when accounting for peaks and the fact that iotop measures a 1-minute window.

                          3. InnoDB write activity since last MySQL restart (all 3 servers)

                                                  Server 1 (uptime 59 min)   Server 2 (uptime ~40h)   Server 3 (uptime ~40h)
                          Data written            2.13 GB                    55.3 GB                  63.5 GB
                          Effective write rate    ~0.58 MB/s                 ~0.38 MB/s               ~0.43 MB/s
                          Rows inserted/s         6.5                        8.8                      8.6
                          Rows updated/s          7.0                        4.5                      4.0
                          Log writes/s            28.7                       23.6                     18.0

                          All three servers show a consistent insert rate of ~6-9 rows/second in the Docker MySQL, with the steady Matomo session creation (one new session every 10 seconds) as a constant contributor on top of the other app inserts.

                          Conclusion

                          The Docker MySQL (~0.4-0.8 MB/s) is the largest single contributor, driven primarily by Matomo session inserts. The total observed disk I/O of 2-4 MB/s is the sum of multiple processes, with the constant Matomo session accumulation as the most significant and most easily fixable component.
