Cloudron Forum - Support

Cloudron v9: huge disk I/O is this normal/safe/needed?

Tags: graphs
34 Posts, 8 Posters, 2.0k Views
    imc67 (translator) wrote, last edited by imc67
    #20

    It's a production server; isn't it ridiculous to stop these apps just to watch resource behavior? There must be tools or ways to find the root cause, don't you think?

    Besides that, it's the host MySQL; does it have anything to do with the apps?


      robi wrote
      #21

      @imc67 Holding that limiting belief is keeping your problem unresolved, no?

      Sure, then trace it from the MySQL side: find which user, which container, and so on.

      Yes, it has everything to do with the Apps that are using that DB instance.

      Conscious tech

        james (Staff) wrote
        #22

        Hello @imc67
        You can use the PID from the process to figure out which mysql service it is.

        e.g. your iotop shows pid 1994756 for mysqld.
        You can run systemctl status mysql.service and the PID is displayed there:

        ● mysql.service - MySQL Community Server
             Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; preset: enabled)
             Active: active (running) since Mon 2025-12-01 09:17:59 UTC; 1 week 5 days ago
           Main PID: 1994756 (mysqld)
             Status: "Server is operational"
              Tasks: 48 (limit: 4603)
             Memory: 178.7M (peak: 298.0M swap: 95.4M swap peak: 108.7M)
                CPU: 1h 41min 31.520s
             CGroup: /system.slice/mysql.service
                     └─1994756 /usr/sbin/mysqld
        
        Notice: journal has been rotated since unit was started, output may be incomplete.
        

        So from iotop I can confirm that pid 1994756 is the system mysqld service, so I'd know to inspect the system mysqld service and not the Docker mysql service.

        You can also get the pid from the mysqld inside the docker container with docker top mysql:

        docker top mysql
        UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
        root                1889                1512                0                   Nov07               ?                   00:06:17            /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
        usbmux              3079                1889                0                   Nov07               ?                   03:49:38            /usr/sbin/mysqld
        usbmux              3099                1889                0                   Nov07               ?                   00:00:11            node /app/code/service.js
        

        Then I know the mysqld pid of the docker service is 3079 which I can check again with the system:

        ps uax | grep -i 3079
        usbmux      3079  0.4  1.0 1587720 43692 ?       Sl   Nov07 229:38 /usr/sbin/mysqld
        

        Now we can differentiate between the two.

        With that settled, you can observe iotop and see which one has the high I/O.
        Once you've narrowed it down to one of them, we can analyse which database/table gets accessed the most to narrow it down even further.
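The PID matching above can also be scripted. A minimal sketch of the parsing step: it extracts the mysqld PID from `docker top mysql` output, here fed from the sample output shown in this post rather than a live `docker top` call, so the logic is easy to verify offline.

```python
# Parse `docker top mysql`-style output and return the PID of the
# mysqld process, to compare against the PID reported by iotop.
# The sample text below is the captured output from this post.
sample = """\
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                1889                1512                0                   Nov07               ?                   00:06:17            /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
usbmux              3079                1889                0                   Nov07               ?                   03:49:38            /usr/sbin/mysqld
usbmux              3099                1889                0                   Nov07               ?                   00:00:11            node /app/code/service.js
"""

def docker_mysqld_pid(top_output):
    """Return the PID column of the row whose command is mysqld."""
    for line in top_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if fields and fields[-1].endswith("/mysqld"):
            return int(fields[1])  # PID is the second column
    return None

print(docker_mysqld_pid(sample))
```

On a live server you would pipe the real `docker top mysql` output into this instead of the inlined sample.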

          imc67 (translator) wrote
          #23

          Ok, thanks for your hints!!

          The result was PID 19974

          However:

          ● mysql.service - MySQL Community Server
               Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
               Active: active (running) since Sat 2025-12-13 05:57:30 UTC; 1 day 5h ago
              Process: 874 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
             Main PID: 910 (mysqld)
               Status: "Server is operational"
                Tasks: 47 (limit: 77023)
               Memory: 601.7M
                  CPU: 59min 14.538s
               CGroup: /system.slice/mysql.service
                       └─910 /usr/sbin/mysqld
          

          And docker top mysql

          UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
          root                9842                8908                0                   Dec13               ?                   00:00:17            /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
          message+            19974               9842                6                   Dec13               ?                   01:56:43            /usr/sbin/mysqld
          message+            19976               9842                0                   Dec13               ?                   00:01:31            node /app/code/service.js
          

          So ps uax | grep -i 19974 gives:

          message+   19974  6.6  1.8 4249604 1229136 ?     Sl   Dec13 116:48 /usr/sbin/mysqld
          

          So at least we now know that it's the Docker MySQL.

            james (Staff) wrote
            #24

            Hello @imc67
            Now we can start analysing.
            Edit the file /home/yellowtent/platformdata/mysql/custom.cnf and add the following lines:

            [mysqld]
            general_log = 1
            slow_query_log = 1
            

            Restart the MySQL service in the Cloudron Dashboard.
            The log files are stored at /home/yellowtent/platformdata/mysql/mysql.log and /home/yellowtent/platformdata/mysql/mysql-slow.log.

            Let it run for a day or more.
            Then you can download the log files and see which queries run very often, causing the disk I/O.
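Once the general log has collected data, one quick way to spot dominant statements is to tally normalized query prefixes. A rough sketch, not a Cloudron tool; the sample lines below are shaped like MySQL general-log entries, but the exact format may differ per MySQL version:

```python
from collections import Counter
import re

def top_queries(log_text, n=5):
    """Count general-log Query statements by their first four words,
    so repetitive patterns (e.g. session INSERTs) stand out."""
    counts = Counter()
    for line in log_text.splitlines():
        # General-log lines look like: "<time> <thread_id> Query <sql>"
        m = re.search(r'\sQuery\s+(.*)', line)
        if m:
            prefix = ' '.join(m.group(1).split()[:4]).upper()
            counts[prefix] += 1
    return counts.most_common(n)

# Illustrative sample, not captured from a real server:
sample = """\
2025-12-14T10:00:01.000001Z 12 Query INSERT INTO session (id, modified, lifetime, data) VALUES ('a', 1, 2, 'x')
2025-12-14T10:00:02.000001Z 13 Query INSERT INTO session (id, modified, lifetime, data) VALUES ('b', 1, 2, 'y')
2025-12-14T10:00:03.000001Z 14 Query SELECT option_value FROM option WHERE option_name = 'x'
"""
print(top_queries(sample))
```

For the slow log specifically, `mysqldumpslow` (shipped with MySQL) does a similar aggregation out of the box.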

              imc67 (translator) wrote
              #25

              I enabled this and within seconds the log file was enormous. I asked ChatGPT to analyse it, and here are its observations (too technical for me):


              Some observations after briefly enabling the MySQL general log (Cloudron v9)

              I enabled the MySQL general log only for a short time because of disk I/O concerns, but even within a few minutes a clear pattern showed up.

              What I’m seeing:

              • A very high number of
                INSERT INTO session (...) and
                INSERT ... ON DUPLICATE KEY UPDATE
              • These happen continuously and come from 172.18.0.1
              • As far as I understand, this IP is the Docker bridge gateway in Cloudron, so it likely represents multiple apps

              I temporarily disabled Matomo to rule that out, but disk I/O and session-related writes did not noticeably decrease, so it does not seem to be the main contributor.

              From the log it looks like:

              • Multiple applications are storing sessions in MySQL
              • Session rows are updated on almost every request
              • This can generate a lot of InnoDB redo log and disk I/O, even with low traffic

              Nothing looks obviously broken, but I’m trying to understand whether this level of session write activity is:

              • expected behavior in Cloudron v9
              • something that can be tuned or configured
              • or if there are recommended best practices (e.g. Redis for sessions)

              Any guidance on how Cloudron expects apps to handle sessions, or how to reduce unnecessary MySQL write I/O, would be much appreciated.

              Thanks for looking into this.

                joseph (Staff) wrote
                #26

                Do you happen to use Nextcloud on the server? I think Nextcloud+LDAP keeps doing a login request for each file when syncing (which might trigger a login event log entry in MySQL)


                  imc67 (translator) wrote
                  #27

                  @joseph said in Cloudron v9: huge disk I/O is this normal/safe/needed?:

                  Do you happen to use Nextcloud on the server? I think Nextcloud+LDAP keeps doing a login request for each file when syncing (which might trigger a login event log entry in MySQL)

                  No, there is no Nextcloud on this server.

                  joseph has marked this topic as solved
                    imc67 (translator) wrote, last edited by imc67
                    #28

                    @joseph isn't it strange that you set this topic to solved without checking?

                    @girish & @nebulon today I spent an awful lot of time analysing this issue together with Claude.ai, and this is the result:

                    Cloudron health checker triggers excessive MySQL disk I/O via Matomo LoginOIDC plugin

                    I want to report a bug that causes massive MySQL disk write I/O on servers running Matomo with the LoginOIDC plugin (which is the default Cloudron OIDC integration).

                    The problem

                    The Cloudron health checker calls the root URL / of the Matomo app every 10 seconds. When Matomo's LoginOIDC plugin is active, every single health check request causes PHP to create a new session in MySQL containing a Login.login nonce and a LoginOIDC.nonce — even though no user is logging in.

                    This results in exactly 360 new MySQL session rows per hour, 24 hours a day, on every server running Matomo with OIDC enabled.

                    Evidence

                    Session count per hour over a full day (consistent across 3 separate servers):

                    00:00 → 360 sessions
                    01:00 → 360 sessions
                    02:00 → 360 sessions
                    ... (identical every hour, including 3am)
                    23:00 → 360 sessions
                    

                    360 sessions/hour = exactly 1 per 10 seconds = the Cloudron health check interval.

                    Decoding a session row from the MySQL session table confirms the content:

                    a:3:{
                      s:11:"Login.login"; a:1:{s:5:"nonce"; s:32:"44e6599e05b0e829ec469459a413fc11";}
                      s:4:"__ZF"; a:2:{
                        s:11:"Login.login"; a:1:{s:4:"ENVT"; a:1:{s:5:"nonce"; i:1772890030;}}
                        s:15:"LoginOIDC.nonce"; a:1:{s:4:"ENVT"; a:1:{s:5:"nonce"; i:1772890030;}}
                      }
                      s:15:"LoginOIDC.nonce"; a:1:{s:5:"nonce"; s:32:"7456603093600c7a3686d560bc61acd1";}
                    }
                    

                    These are unauthenticated OIDC handshake sessions — not real users.

                    Sessions have a lifetime of 1,209,600 seconds (14 days), so they accumulate without being cleaned up. On my 3 servers this resulted in 113,000–121,000 session rows per Matomo instance, causing continuous MySQL InnoDB redo log writes and buffer pool flushes of 2.5–4 MB/s disk I/O.

                    Today's actual visitor count in Matomo: 22 visits across 10 sites. Today's sessions created in MySQL: 4,320+.
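The accumulation figures above are internally consistent; a back-of-the-envelope check, assuming the steady 360 sessions/hour and the 14-day lifetime quoted in the report:

```python
# One health check every 10 seconds => sessions created per hour.
checks_per_hour = 3600 // 10            # 360
# Session lifetime of 1,209,600 s expressed in days.
lifetime_days = 1_209_600 // 86_400     # 14
# Rows accumulate for the full lifetime before expiring, so the
# steady-state row count per Matomo instance is:
steady_state_rows = checks_per_hour * 24 * lifetime_days
print(checks_per_hour, lifetime_days, steady_state_rows)  # 360 14 120960
```

120,960 steady-state rows lines up with the 113,000 to 121,000 session rows observed per instance.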

                    Root cause

                    The Cloudron health checker calls GET / on the Matomo app. This URL triggers the LoginOIDC plugin to initialize an OIDC authentication flow and write a session to MySQL — even for a non-browser health check request with no user interaction.

                    Suggested fix

                    The Cloudron health checker should call a static or session-free endpoint instead of /, for example:

                    • matomo.js or piwik.js (static JavaScript file, no PHP session)
                    • A dedicated /health or /ping endpoint

                    This would eliminate the session creation entirely without requiring any changes to Matomo or its plugins.
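One way to verify that a candidate endpoint is actually session-free is to check that its response carries no Set-Cookie header. A small sketch; the helper and the header sets are hypothetical, not captured from a real server:

```python
def is_session_free(headers):
    """Return True if the response headers suggest no server-side
    session was created (i.e. no Set-Cookie header present)."""
    return not any(name.lower() == 'set-cookie' for name in headers)

# Illustrative header sets for a PHP entry point vs. a static file:
root_headers = {'Content-Type': 'text/html',
                'Set-Cookie': 'MATOMO_SESSID=abc; path=/'}
static_headers = {'Content-Type': 'application/javascript',
                  'Cache-Control': 'public'}

print(is_session_free(root_headers), is_session_free(static_headers))
```

In practice you could feed this the headers from `curl -sI` against `/` and against `matomo.js` and compare.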

                    Environment

                    • Cloudron v9.1.3 (9.0.17)
                    • Ubuntu 22.04.5 LTS
                    • Matomo 5.8.0 with LoginOIDC plugin
                    • Reproduced on 3 separate Cloudron Pro instances
                    imc67 marked this topic as a regular topic
                    imc67 marked this topic as a question
                      joseph (Staff) wrote
                      #29

                      @imc67 not sure I remember why 😄 Does this mean that if you disable matomo temporarily, the disk usage goes down a lot?

                      Seems easy to fix now that we know the root cause


                        imc67 (translator) wrote
                        #30

                        @joseph I'm pretty sure more apps suffer from this issue since the introduction of OIDC: I see EspoCRM and FreeScout also have a health check on the root / (where the OIDC login is), though I didn't check their sessions.

                          joseph (Staff) wrote
                          #31

                          I have to test, but it seems like a Matomo bug here (if this is all true). There is no need to create an OIDC session when visiting /. An OIDC session should only be created when the OIDC login button is clicked.

                            luckow (translator) wrote
                            #32

                            My two cents: if #28 is correct, this should happen on every Cloudron instance that runs Matomo (with OIDC enabled). I looked at one of my instances that meets the criteria. One of the Matomo instances had about 300 sessions stored in MySQL; the oldest entry is from Feb 26.
                            So maybe #28 isn't correct, or it's something that only happens on that instance.

                            Pronouns: he/him | Primary language: German

                              imc67 (translator) wrote
                              #33

                              Maybe because the three installs are 5-6 years old and have had many updates/upgrades?

                              Can you check how many sessions per hour are being created? Run this query:

                              SELECT HOUR(FROM_UNIXTIME(modified)) AS hour, COUNT(*) AS sessions
                              FROM `<your_matomo_db>`.session
                              WHERE DATE(FROM_UNIXTIME(modified)) = CURDATE() - INTERVAL 1 DAY
                              GROUP BY hour ORDER BY hour;
                              

                              On my instances this shows exactly 360 per hour = 1 per 10 seconds = health check interval. If yours shows much less, the health checker behaves differently on your setup.


                                luckow (translator) wrote, last edited by luckow
                                #34

                                @imc67 one app instance (4y old)

                                +------+----------+
                                | hour | sessions |
                                +------+----------+
                                |    0 |        2 |
                                |    2 |        1 |
                                |    7 |        2 |
                                |    8 |        1 |
                                |    9 |        1 |
                                |   13 |        3 |
                                |   15 |        1 |
                                |   17 |        3 |
                                |   19 |        1 |
                                |   20 |        3 |
                                |   21 |        4 |
                                |   22 |        1 |
                                +------+----------+
                                

                                different app instance (7y old)

                                +------+----------+
                                | hour | sessions |
                                +------+----------+
                                |    3 |        1 |
                                |    5 |        2 |
                                |   15 |        4 |
                                |   18 |        2 |
                                |   19 |        2 |
                                |   20 |        2 |
                                |   21 |        4 |
                                |   22 |        2 |
                                +------+----------+
                                

                                 The health check is every 10 sec:

                                Mar 07 18:00:50 - - - [07/Mar/2026:17:00:50 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:00:50 172.18.0.1 - - [07/Mar/2026:17:00:50 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:01:00 - - - [07/Mar/2026:17:01:00 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:01:00 172.18.0.1 - - [07/Mar/2026:17:01:00 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:01:10 - - - [07/Mar/2026:17:01:10 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:01:10 172.18.0.1 - - [07/Mar/2026:17:01:10 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:01:20 - - - [07/Mar/2026:17:01:20 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:01:20 172.18.0.1 - - [07/Mar/2026:17:01:20 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:01:30 - - - [07/Mar/2026:17:01:30 +0000] "GET / HTTP/1.1" 302 - "-" "Mozilla (CloudronHealth)"
                                Mar 07 18:01:30 172.18.0.1 - - [07/Mar/2026:17:01:30 +0000] "GET / HTTP/1.1" 302 299 "-" "Mozilla (CloudronHealth)"
                                

