Cloudron v9: huge disk I/O is this normal/safe/needed?

  • imc67 (translator) wrote last edited by
    #15

    OK thanks. Below is the result after just a few minutes. I'm not a technician, but as far as I can see it's mainly mysql that is writing (I sorted by Write):

    (screenshot: iotop output from 2025-12-02, sorted by disk write)

    • nebulon (Staff) wrote last edited by
      #16

      While debugging another issue on that server, I also took a look at the disk I/O. Basically, the mysql service is doing a lot of disk I/O (as also seen in the screenshot).

      It does seem the mysql addon is simply queried and written to a lot, so likely one of the many installed apps using it commits a lot to the database. I didn't want to stop apps myself, but maybe you can stop the individual apps which use mysql one by one, to hopefully find the one causing the constant writes.
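
      For example, one way to watch per-process write rates while stopping apps one by one (a sketch, assuming the sysstat package is installed; the interval and count are arbitrary):

      # Sample per-process disk I/O every 10 seconds, 30 times (~5 minutes).
      # The kB_wr/s column shows who keeps writing: stop one mysql-using app,
      # wait a few samples, and check whether the mysqld write rate drops.
      sudo pidstat -d 10 30 | grep -E 'Command|mysqld'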

      • imc67 (translator) wrote last edited by imc67
        #17

        Thanks @nebulon for your time. Together with ChatGPT I did a deeper analysis, but I also read this: https://docs.cloudron.io/troubleshooting/#mysql

        Two instances of MySQL
        There are two instances of MySQL on Cloudron. One instance runs on the host and is used by the platform. Another instance is the MySQL addon which runs in a container named mysql and is shared by apps. This test is related to the host MySQL.
        

        Doesn't this mean that the mysql service in iotop is the "host version" that has nothing to do with the apps?

        For now "we" (I) have seen this:

        Summary of Disk Write I/O Observation on Cloudron Host

        • Using iotop, the host shows consistently high disk write I/O (4–5 MB/s).
        • Analysis of MySQL processes (mysqld) indicates these are responsible for the majority of the write load.
        • The high write I/O is primarily due to InnoDB internal activity: buffer pool flushes, redo log writes, and metadata updates, mostly from the box database (eventlog, tasks, backups).

        This is the disk write I/O over about 10 minutes (so 1.5 GB in 10 minutes):

        Total DISK READ:         0.00 B/s | Total DISK WRITE:         2.73 M/s
        Current DISK READ:       0.00 B/s | Current DISK WRITE:       4.25 M/s
            TID  PRIO  USER     DISK READ DISK WRITE>  SWAPIN      IO    COMMAND                                                                                                                  
          21250 be/4 messageb      0.00 B   1038.50 M  ?unavailable?  mysqld
            936 be/4 mysql         0.00 B    465.28 M  ?unavailable?  mysqld
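
        To check whether these writes really are InnoDB internals (buffer pool flushes and redo log), the server's own counters can be compared over an interval. A sketch, assuming you can connect as root to the MySQL instance in question:

        # Snapshot the InnoDB write counters, wait a minute, snapshot again;
        # the deltas show bytes written to data files vs. the redo log.
        mysql -u root -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('Innodb_data_written','Innodb_os_log_written','Innodb_buffer_pool_pages_flushed');"
        sleep 60
        mysql -u root -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('Innodb_data_written','Innodb_os_log_written','Innodb_buffer_pool_pages_flushed');"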
        

        I stopped about 25% of the apps at a certain moment with no significant result. This is the current situation (IMHO not really intensive applications, and they have low traffic):

        App 	Status 
        Yourls	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Stopped 
        WordPress (Developer)	Running 
        WordPress (Developer)	Stopped 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Running 
        WordPress (Developer)	Stopped 
        WordPress (Developer)	Running 
        Taiga	Stopped 
        Surfer	Running 
        Surfer	Stopped 
        Roundcube	Running 
        Roundcube	Running 
        Omeka S	Stopped 
        Moodle	Stopped 
        LAMP	Running 
        Roundcube	Running 
        Roundcube	Running 
        Roundcube	Running 
        Pretix	Stopped 
        MiroTalk SFU	Running 
        Matomo	Running 
        FreeScout	Running 
        FreeScout	Running 
        Espo CRM	Running 
        

        What to do next to find the root cause?

          • avatar1024 wrote last edited by
            #18

          @imc67 said in Cloudron v9: huge disk I/O is this normal/safe/needed?:

          I stopped about 25% of the apps at a certain moment with no significant result

          I think @nebulon was suggesting to stop apps one by one to see if one particular app is causing the problem.

            • robi wrote last edited by
              #19

            Generally such system behavior is accompanied by higher CPU and memory usage, so you can start by stopping those apps, and see which one causes a dip in MySQL usage.
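
            For example, a quick snapshot of per-container CPU and memory (a sketch; container names depend on the installation):

            # One-shot view of each running container's CPU and memory usage.
            docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"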

            Conscious tech

              • imc67 (translator) wrote last edited by imc67
                #20

              It's a production server; isn't it ridiculous to stop these apps just to watch resource behavior? There must be tools or ways to find the root cause, don't you think?

              Besides that, if it's the host MySQL, does it have anything to do with the apps?

                • robi wrote last edited by
                  #21

                @imc67 Holding that limiting belief is keeping your problem unresolved, no?

                Sure, then trace it from the MySQL side: find which user, which container, and so on.

                Yes, it has everything to do with the Apps that are using that DB instance.
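
                One way to do that tracing from the MySQL side is the sys schema (MySQL 5.7+). A sketch, assuming you can connect as root to the addon instance:

                # Which database files receive the most write bytes
                # (points at the busiest schema).
                mysql -u root -p -e "SELECT file, total_written FROM sys.io_global_by_file_by_bytes ORDER BY total_written DESC LIMIT 10;"

                # Which MySQL users issue the most statements; on Cloudron each
                # app typically gets its own database user, so this maps back to an app.
                mysql -u root -p -e "SELECT user, statements, statement_latency FROM sys.user_summary ORDER BY statements DESC LIMIT 10;"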

                Conscious tech

                • james (Staff) wrote last edited by
                  #22

                  Hello @imc67
                  You can use the PID of the process to figure out which mysql service it is.

                  e.g. your iotop shows pid 1994756 for mysqld.
                  You can run systemctl status mysql.service and the pid is displayed there:

                  ● mysql.service - MySQL Community Server
                       Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; preset: enabled)
                       Active: active (running) since Mon 2025-12-01 09:17:59 UTC; 1 week 5 days ago
                     Main PID: 1994756 (mysqld)
                       Status: "Server is operational"
                        Tasks: 48 (limit: 4603)
                       Memory: 178.7M (peak: 298.0M swap: 95.4M swap peak: 108.7M)
                          CPU: 1h 41min 31.520s
                       CGroup: /system.slice/mysql.service
                               └─1994756 /usr/sbin/mysqld
                  
                  Notice: journal has been rotated since unit was started, output may be incomplete.
                  

                  So from iotop I can confirm that the system mysqld service is pid 1994756, and I'd know to inspect the system mysqld service and not the docker mysql service.

                  You can also get the pid from the mysqld inside the docker container with docker top mysql:

                  docker top mysql
                  UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
                  root                1889                1512                0                   Nov07               ?                   00:06:17            /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
                  usbmux              3079                1889                0                   Nov07               ?                   03:49:38            /usr/sbin/mysqld
                  usbmux              3099                1889                0                   Nov07               ?                   00:00:11            node /app/code/service.js
                  

                  Then I know the mysqld pid of the docker service is 3079, which I can check again against the system:

                  ps uax | grep -i 3079
                  usbmux      3079  0.4  1.0 1587720 43692 ?       Sl   Nov07 229:38 /usr/sbin/mysqld
                  

                  Now that we can differentiate between the two, you can observe iotop and see which one has the high I/O.
                  After you narrow it down to either one, we can then analyse which database / table gets accessed the most, to narrow it down even further.
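
                  For example, to watch just those two PIDs side by side (a sketch using the example PIDs from above):

                  # Accumulated I/O for only the host mysqld (1994756) and the
                  # addon mysqld (3079), sampled every 10 seconds; the DISK WRITE
                  # column shows which one is doing the writing.
                  sudo iotop -b -a -d 10 -p 1994756 -p 3079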

                  • imc67 (translator) wrote last edited by
                    #23

                    Ok, thanks for your hints!!

                    The result was PID 19974.

                    However:

                    ● mysql.service - MySQL Community Server
                         Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
                         Active: active (running) since Sat 2025-12-13 05:57:30 UTC; 1 day 5h ago
                        Process: 874 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
                       Main PID: 910 (mysqld)
                         Status: "Server is operational"
                          Tasks: 47 (limit: 77023)
                         Memory: 601.7M
                            CPU: 59min 14.538s
                         CGroup: /system.slice/mysql.service
                                 └─910 /usr/sbin/mysqld
                    

                    And docker top mysql:

                    UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
                    root                9842                8908                0                   Dec13               ?                   00:00:17            /usr/bin/python3 /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i Mysql
                    message+            19974               9842                6                   Dec13               ?                   01:56:43            /usr/sbin/mysqld
                    message+            19976               9842                0                   Dec13               ?                   00:01:31            node /app/code/service.js
                    

                    So ps uax | grep -i 19974 gives:

                    message+   19974  6.6  1.8 4249604 1229136 ?     Sl   Dec13 116:48 /usr/sbin/mysqld
                    

                    So at least we now know that it's the Docker MySQL.

                    • james (Staff) wrote last edited by
                      #24

                      Hello @imc67
                      Now we can start analysing.
                      Edit the file /home/yellowtent/platformdata/mysql/custom.cnf and add the following lines:

                      [mysqld]
                      general_log = 1
                      slow_query_log = 1
                      

                      Restart the MySQL service in the Cloudron Dashboard.
                      The log files are stored at /home/yellowtent/platformdata/mysql/mysql.log and /home/yellowtent/platformdata/mysql/mysql-slow.log.

                      Let it run for a day or more.
                      Then you can download the log files and see which queries run very often, causing the disk I/O.
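
                      Once the logs have collected enough data, they can be summarised. A sketch (mysqldumpslow ships with MySQL; the grep over the general log is crude but quick):

                      # Top 10 slow-log query patterns, sorted by occurrence count.
                      mysqldumpslow -s c -t 10 /home/yellowtent/platformdata/mysql/mysql-slow.log

                      # Most frequent statement prefixes in the general log.
                      grep -oE '(SELECT|INSERT|UPDATE|DELETE)[^;]{0,60}' /home/yellowtent/platformdata/mysql/mysql.log | sort | uniq -c | sort -rn | head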
