Cloudron Forum

Marcel C (@imc67)
translator

Posts: 948 | Topics: 171 | Shares: 0 | Groups: 1 | Followers: 3 | Following: 0

Posts


  • MiroTalk SFU — Node.js heap out of memory crash (OOM) + analysis

    Wow, that is fast, thanks!

    MiroTalk

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    Oh, what a pity 😉 I was hoping this could be a solution.

    Support graphs

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    @girish I don't know if this is related, but it's the first time I tried cloudron-support --troubleshoot and this is the result: a [FAIL] that can't be resolved, and it's exactly the same on my 2 other servers:

    root@xxx:~# cloudron-support --troubleshoot
    Vendor: netcup Product: KVM Server
    Linux: 5.15.0-171-generic
    Ubuntu: jammy 22.04
    Execution environment: kvm
    Processor: AMD EPYC 7702P 64-Core Processor x 10
    RAM: 65842976KB
    Disk: /dev/sda3       1.9T
    [OK]    node version is correct
    [OK]    IPv6 is enabled and public IPv6 address is working
    [OK]    docker is running
    [OK]    docker version is correct
    [OK]    MySQL is running
    [OK]    netplan is good
    [OK]    DNS is resolving via systemd-resolved
    [OK]    unbound is running
    [OK]    nginx is running
    [OK]    dashboard cert is valid
    [OK]    dashboard is reachable via loopback
    [FAIL]  Database migrations are pending. Last migration in DB: /20260217120000-mailPasswords-create-table.js. Last migration file: /package.json.
            Please run 'cloudron-support --apply-db-migrations' to apply the migrations.
    [OK]    Service 'mysql' is running and healthy
    [OK]    Service 'postgresql' is running and healthy
    [OK]    Service 'mongodb' is running and healthy
    [OK]    Service 'mail' is running and healthy
    [OK]    Service 'graphite' is running and healthy
    [OK]    Service 'sftp' is running and healthy
    [OK]    box v9.1.3 is running
    [OK]    Dashboard is reachable via domain name
    [WARN]  Domain xxx.nl expiry check skipped because whois does not have this information
    root@xxx:~# cloudron-support --apply-db-migrations
    Applying pending database migrations
    2026-03-12T11:27:14 ==> start: Cloudron Start
    media:x:500:
    2026-03-12T11:27:14 ==> start: Configuring docker
    Synchronizing state of apparmor.service with SysV service script with /lib/systemd/systemd-sysv-install.
    Executing: /lib/systemd/systemd-sysv-install enable apparmor
    2026-03-12T11:27:15 ==> start: Ensuring directories
    2026-03-12T11:27:15 ==> start: Configuring journald
    2026-03-12T11:27:15 ==> start: Setting up unbound
    2026-03-12T11:27:15 ==> start: Adding systemd services
    Synchronizing state of unbound.service with SysV service script with /lib/systemd/systemd-sysv-install.
    Executing: /lib/systemd/systemd-sysv-install enable unbound
    Synchronizing state of cron.service with SysV service script with /lib/systemd/systemd-sysv-install.
    Executing: /lib/systemd/systemd-sysv-install enable cron
    Synchronizing state of rpcbind.service with SysV service script with /lib/systemd/systemd-sysv-install.
    Executing: /lib/systemd/systemd-sysv-install disable rpcbind
    2026-03-12T11:28:39 ==> start: Configuring sudoers
    2026-03-12T11:28:39 ==> start: Unconfiguring collectd
    Synchronizing state of collectd.service with SysV service script with /lib/systemd/systemd-sysv-install.
    Executing: /lib/systemd/systemd-sysv-install disable collectd
    2026-03-12T11:28:40 ==> start: Configuring logrotate
    2026-03-12T11:28:40 ==> start: Adding motd message for admins
    2026-03-12T11:28:40 ==> start: Configuring nginx
    2026-03-12T11:28:41 ==> start: Starting mysql
    mysqladmin: [Warning] Using a password on the command line interface can be insecure.
    Warning: Since password will be sent to server in plain text, use ssl connection to ensure password safety.
    mysql: [Warning] Using a password on the command line interface can be insecure.
    mysql: [Warning] Using a password on the command line interface can be insecure.
    2026-03-12T11:28:41 ==> start: Migrating data
    [INFO] No migrations to run
    [INFO] Done
    2026-03-12T11:28:41 ==> start: Changing ownership
    2026-03-12T11:28:41 ==> start: Starting cloudron-syslog
    2026-03-12T11:28:41 ==> start: Starting Cloudron
    2026-03-12T11:28:43 ==> start: Almost done
    [OK]    Database migrations applied successfully
    root@xxx:~# cloudron-support --troubleshoot
    Vendor: netcup Product: KVM Server
    Linux: 5.15.0-171-generic
    Ubuntu: jammy 22.04
    Execution environment: kvm
    Processor: AMD EPYC 7702P 64-Core Processor x 10
    RAM: 65842976KB
    Disk: /dev/sda3       1.9T
    [OK]    node version is correct
    [OK]    IPv6 is enabled and public IPv6 address is working
    [OK]    docker is running
    [OK]    docker version is correct
    [OK]    MySQL is running
    [OK]    netplan is good
    [OK]    DNS is resolving via systemd-resolved
    [OK]    unbound is running
    [OK]    nginx is running
    [OK]    dashboard cert is valid
    [OK]    dashboard is reachable via loopback
    [FAIL]  Database migrations are pending. Last migration in DB: /20260217120000-mailPasswords-create-table.js. Last migration file: /package.json.
            Please run 'cloudron-support --apply-db-migrations' to apply the migrations.
    [OK]    Service 'mysql' is running and healthy
    [OK]    Service 'postgresql' is running and healthy
    [OK]    Service 'mongodb' is running and healthy
    [OK]    Service 'mail' is running and healthy
    [OK]    Service 'graphite' is running and healthy
    [OK]    Service 'sftp' is running and healthy
    [OK]    box v9.1.3 is running
    [OK]    Dashboard is reachable via domain name
    [WARN]  Domain xxx.nl expiry check skipped because whois does not have this information
    
    Support graphs

  • MiroTalk SFU — Node.js heap out of memory crash (OOM) + analysis

    Hi everyone,

    Last night our MiroTalk SFU instance (v2.1.26, running on Cloudron) crashed due to a Node.js
    out-of-memory error. I wanted to share my analysis in case others run into the same issue.

    BTW: there was NOTHING in the Cloudron GUI log:

    Time                 Source   Details
    11 Mar 2026, 02:02   cron     App was updated to v2.6.10
    11 Mar 2026, 02:00   cron     Update started from v2.6.9 to v2.6.10
    

    What happened

    (screenshot: Schermafbeelding 2026-03-12 om 12.05.42.png)
    Around 04:40 UTC the container crashed hard with the following fatal error:

    FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
    Aborted (core dumped)
    

    The Node.js process had been running for approximately 27.5 hours (~99,000 seconds according
    to the GC log). At the time of the crash, the heap was sitting at ~251 MB with no room left
    to grow. Cloudron's healthcheck detected the crash via ECONNREFUSED and automatically
    restarted the container. After the restart at 04:41 UTC the app came back up normally.


    Disk I/O spike

    The Disk I/O graph showed a sharp spike around the time of the crash and restart:

    • MiroTalk Read: 6.26 MB/s
    • System Read: 5.85 MB/s
    • System Write: 4.91 MB/s
    • MiroTalk Write: 2.2 MB/s

    This is consistent with Docker rebuilding the container, writing the core dump, and reloading
    all Node modules on startup. The background system writes (~2 MB/s) throughout the night
    appear normal (likely backups/log rotation).


    Probable causes

    1. Memory leak — the process grew steadily over ~27.5 hours until the heap was exhausted.
      This pattern is typical of a slow memory leak in the application itself.

    2. Insufficient heap size — with 10 mediasoup workers configured, the default Node.js
      heap limit can be too low under sustained load.

    3. OIDC discovery errors as a contributing factor — just before the crash, the logs show
      repeated Issuer.discover() failed errors because the OIDC provider (issuerBaseURL)
      was temporarily unreachable. Repeated failed discovery attempts can accumulate error
      objects and contribute to heap pressure.

    AggregateError: Issuer.discover() failed.
    OPError: expected 200 OK, got: 404 Not Found
    RequestError: Timeout awaiting 'request' for 5000ms
    

    Recommendations

    Short term: Increase the Node.js heap limit by setting the following environment variable
    in your MiroTalk configuration:

    NODE_OPTIONS=--max-old-space-size=2048
    

    Monitoring: Keep an eye on RAM usage over time. If memory grows steadily without ever
    dropping, that confirms a leak in the app that should be reported upstream to the MiroTalk
    developer.

    OIDC stability: Make sure the OIDC provider endpoint is reliably reachable. On Cloudron
    this appears to be the built-in auth (/openid). If discovery requests fail repeatedly and
    are not properly cleaned up, they may contribute to memory growth.


    Environment

    Property       Value
    MiroTalk SFU   v2.1.26
    Node.js        22.14.0
    Workers        10
    Server RAM     62.79 GB
    OS             Linux 5.15.0 x64
    Platform       Cloudron (Docker container)

    Has anyone else seen this OOM pattern with MiroTalk SFU? Curious whether it's related to a
    specific feature (OIDC, recording, etc.) or just general heap growth over time.

    MiroTalk

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    @girish said:

    value since the server last rebooted

    I knew that from an earlier post; we took that into account.

    Here are the Netcup graph and the Cloudron GUI graph of the last 6 hours, exactly the same (the server was rebooted 3 days ago; there is no timestamp under Server - System - Uptime).
    (screenshots: Schermafbeelding 2026-03-12 om 11.13.19.png, Schermafbeelding 2026-03-12 om 11.12.51.png)

    Support graphs

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    Summary of extensive disk I/O investigation — findings and conclusions

    After spending considerable time investigating the high disk I/O on my servers (with help from a Claude Pro AI assistant; I subscribed to Pro especially for this issue!), I want to share my findings for anyone else experiencing this issue.

    Setup: 3 servers running Cloudron v9.1.3, Ubuntu 22.04. Server 1 (just to focus on one): 12 WordPress sites, Matomo, EspoCRM, FreeScout (2x), Roundcube, MiroTalk, Taiga, MainWP, Yourls, Surfer (2x). Constant write I/O of ~2.5-4 MB/s, i.e. roughly 215-345 GB/day.

    Reference: Cloudron demo server (20 apps including Nextcloud, Matrix, Discourse) shows ~80 GB/day. My servers run 4-5x higher with lighter apps.


    What we investigated and measured

    • iotop analysis: Docker MySQL (messageb) and host MySQL are by far the largest writers
    • MySQL general log analysis: mapped write distribution per table
    • Tested innodb_flush_log_at_trx_commit = 2: changes the pattern (bursts instead of constant pressure) but total write volume unchanged (see the snippet after this list)
    • Analyzed nginx access logs for suspicious traffic patterns
    • Compared against Cloudron demo server
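
    For anyone who wants to repeat the innodb_flush_log_at_trx_commit test: it can be toggled at runtime with plain MySQL statements (a sketch; on Cloudron the MySQL service is managed, so the change does not survive a restart):

    -- check the current value (MySQL's default is 1: flush the redo log to disk on every commit)
    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

    -- relax to "flush roughly once per second" for the duration of the test
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;

    -- watch the redo log write counter to see whether total volume actually changes
    SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';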

    What was cleaned up (almost no impact)

    • EspoCRM: deleted 244K jobs + 244K scheduled_job_log_records; set cleanupJobPeriod to 7 days
    • WordPress actionscheduler_claims: deleted 130K rows
    • Roundcube: reduced from 5 to 1 installation
    • Matomo: adjusted session_gc_probability and login_cookie_expire; cleared accumulated sessions
    • Wordfence: reduced live traffic table to 200 rows / 1 day, disabled audit logging
    • MainWP: disabled uptime monitor addon and SSL monitor addon
    • MainWP wp_mainwp_wp_logs: deleted 46,903 rows older than 30 days
    • MainWP wp_mainwp_wp_logs_meta: deleted 141,682 orphaned records
    • MainWP: disabled Network Activity logging

    What was ruled out as significant I/O cause

    • Matomo: stopped the app entirely → no measurable difference in I/O
    • MainWP: one of the three servers has no MainWP but shows identical I/O pattern
    • FreeScout: job tables are empty
    • External scan traffic: all returning 404/301 from nginx, no database impact

    What is proven but not fixable without Cloudron

    • Matomo healthcheck bug: GET / triggers the LoginOIDC plugin on every health check (every 10 seconds), creating a new MySQL session each time → 8,640 new sessions per day per Matomo instance. Fix requires changing the health check endpoint from GET / to /matomo.js in the app package. This is a Cloudron-side fix. Reported separately in topic 15211.
    • InnoDB configuration: innodb_log_file_size is only 48MB (causes very frequent checkpoints), innodb_flush_method is fsync. These settings are suboptimal for a write-heavy workload but are managed by Cloudron. (A query to check these values on your own server follows after this list.)
    • go-carbon/Graphite: writes ~0.13 MB/s continuously for 814 whisper metric files — inherent to Cloudron's monitoring stack.
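
    To check the InnoDB settings mentioned above on your own server (standard MySQL, nothing Cloudron-specific):

    SHOW VARIABLES WHERE Variable_name IN
      ('innodb_log_file_size', 'innodb_flush_method', 'innodb_flush_log_at_trx_commit');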

    Conclusion

    There is no single large cause. The high I/O is the sum of multiple Cloudron-internal mechanisms. Everything works correctly — no performance issues, no user impact. But for a server with relatively low user traffic, 347 GB/day of writes feels disproportionate, especially compared to the Cloudron demo server at ~80 GB/day.

    Sharing this in case it helps others investigating the same issue.

    Support graphs

  • Matomo creates a new MySQL session on every health check

    before I started:

    mysql> select count(*) from session;
    +----------+
    | count(*) |
    +----------+
    |    26920 |
    +----------+
    1 row in set (0.01 sec)
    

    I did your settings in #4 and #5, restarted the app

    After that (with some intervals):

    mysql> select count(*) from session;
    +----------+
    | count(*) |
    +----------+
    |    17398 |
    +----------+
    1 row in set (0.01 sec)
    
    mysql> select count(*) from session;
    +----------+
    | count(*) |
    +----------+
    |    17399 |
    +----------+
    1 row in set (0.00 sec)
    
    mysql> select count(*) from session;
    +----------+
    | count(*) |
    +----------+
    |    17395 |
    +----------+
    1 row in set (0.01 sec)
    

    I see they keep "hanging" around that amount

    I decided to clean it once manually to give it a clean start ..

    mysql> select count(*) from session;
    +----------+
    | count(*) |
    +----------+
    |        1 |
    +----------+
    1 row in set (0.00 sec)
    
    mysql> select count(*) from session;
    +----------+
    | count(*) |
    +----------+
    |        4 |
    +----------+
    1 row in set (0.00 sec)
    
    Matomo

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    Here is a more complete analysis of the disk I/O across all 3 servers.

    1. Cloudron Disk I/O graph (server 1, last 6 hours)

    (screenshot: Schermafbeelding 2026-03-10 om 23.40.17.png)

    The graph shows a constant write baseline of ~2.5 MB/s, 24/7. The spike around 20:00 is the scheduled daily backup — completely normal. The total write of 646 GB over 2 days (~323 GB/day) is almost entirely this constant baseline, not user traffic or backups.

    2. iotop breakdown (server 1, 1 minute measurement)

    Docker MySQL (messageb):  48.62 MB/min  (~0.81 MB/s)
    Host MySQL:               23.26 MB/min  (~0.39 MB/s)
    go-carbon:                 9.34 MB/min  (~0.16 MB/s)
    jbd2 (fs journal):         8.44 MB/min  (~0.14 MB/s)
    systemd-journald:          4.37 MB/min  (~0.07 MB/s)
    containerd:                2.02 MB/min  (~0.03 MB/s)
    dockerd:                   1.13 MB/min  (~0.02 MB/s)
    Total:                   ~97 MB/min    (~1.6 MB/s average)
    

    Note: the average of ~1.6 MB/s is consistent with the graph baseline of ~2.5 MB/s when accounting for peaks and the fact that iotop measures a 1-minute window.

    3. InnoDB write activity since last MySQL restart (all 3 servers)

                          Server 1 (uptime 59 min)   Server 2 (uptime ~40h)   Server 3 (uptime ~40h)
    Data written          2.13 GB                    55.3 GB                  63.5 GB
    Effective write rate  ~0.58 MB/s                 ~0.38 MB/s               ~0.43 MB/s
    Rows inserted/s       6.5                        8.8                      8.6
    Rows updated/s        7.0                        4.5                      4.0
    Log writes/s          28.7                       23.6                     18.0

    All three servers show a consistent insert rate of ~6-9 rows/second in the Docker MySQL, matching exactly 1 new Matomo session every 10 seconds (= health check interval).
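
    For reference, the per-server numbers above can be read from MySQL's global status counters (a sketch; divide each cumulative counter by Uptime to get the per-second rates):

    -- cumulative counters since the last MySQL restart
    SHOW GLOBAL STATUS WHERE Variable_name IN
      ('Uptime', 'Innodb_data_written', 'Innodb_rows_inserted',
       'Innodb_rows_updated', 'Innodb_log_writes');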

    Conclusion

    The Docker MySQL (~0.4-0.8 MB/s) is the largest single contributor, driven primarily by Matomo session inserts. The total observed disk I/O of 2-4 MB/s is the sum of multiple processes, with the constant Matomo session accumulation as the most significant and most easily fixable component.

    Support graphs

  • Tasks table accumulates tasks indefinitely

    Thanks for the clarification. You are right that the tasks table itself is not the primary cause.

    Here is the buffer pool analysis from the host MySQL:

    BUFFER POOL AND MEMORY
    Buffer pool size:    8192 pages
    Free buffers:        1030
    Database pages:      7123
    Modified db pages:   0
    Pages written:       1,918,869
    Write rate:          9.76 writes/s
    young-making rate:   63 / 1000
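
    For anyone wanting the same numbers on their own server: these lines appear in the BUFFER POOL AND MEMORY section of the standard InnoDB monitor output:

    SHOW ENGINE INNODB STATUS\G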
    

    And the box database table sizes:

    eventlog    79.58 MB   16,275 rows
    tasks       29.55 MB   17,719 rows
    backups     19.47 MB     762 rows
    

    The host MySQL writes/s (9.76) are indeed modest. The main disk I/O culprit is the Docker MySQL (messageb user) which writes significantly more — and that is where the Matomo sessions live.

    So I agree the tasks table is not directly causing the disk I/O. The real issue remains the Matomo health checker session accumulation as discussed in the main topic.

    Support tasks

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    Thanks @nebulon for splitting the main issue ("high disk I/O") into 3 separate topics, one per possible root cause.

    Here we can focus on Matomo, current situation on 3 different servers, each with one Matomo app:

    mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified))  FROM session;
    +----------+------------------------------+------------------------------+
    | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
    +----------+------------------------------+------------------------------+
    |   121230 | 2026-02-24 21:02:50          | 2026-03-10 21:43:20          |
    +----------+------------------------------+------------------------------+
    1 row in set (0.13 sec)
    
    mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified))  FROM session;
    +----------+------------------------------+------------------------------+
    | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
    +----------+------------------------------+------------------------------+
    |   120811 | 2026-02-24 21:41:30          | 2026-03-10 21:43:10          |
    +----------+------------------------------+------------------------------+
    1 row in set (0.13 sec)
    
    mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified))  FROM session;
    +----------+------------------------------+------------------------------+
    | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
    +----------+------------------------------+------------------------------+
    |    22494 | 2026-03-08 07:31:01          | 2026-03-10 21:40:00          |
    +----------+------------------------------+------------------------------+
    1 row in set (0.02 sec)
    
    

    This looks like a serious number of sessions in a short time. To be exact:
    120,811 sessions / 20,161.67 minutes ≈ 5.99 sessions per minute, i.e. one every 10 seconds, which matches the health check interval.

    The only thing I can find in config.ini.php regarding sessions is session_save_handler = "", and I don't remember changing that.

    Support graphs

  • Roundcube creates a new MySQL session on every health check

    @nebulon said:

    Do you have any custom configs on that app? Like maybe increased session lifetime or so?
    We are no experts on roundcube internals to know what may or may not work here, especially since we cannot reproduce it. So any extra info helps.

    My apologies, I was on the road and answered too quickly after checking via the file manager on my phone ... it was the wrong Roundcube instance ...

    On this server I have 5 Roundcube apps (because aliases are not available in the app package) and 3 of them have this issue. Indeed, all three have this in common in their custom config:
    $config['session_lifetime'] = 600;

    Once upon a time in 2022 😉 it was suggested here https://forum.cloudron.io/post/46033 and I have used it ever since.

    I deleted the line, restarted the apps, waited a few minutes and the session increase is now 'normal'.

    I also restarted the MySQL server: it was using 2.6 GB of the assigned 8 GB, and after the restart it starts at 416 MB.

    That makes part 3 of the possible causes discovered for the high disk I/O solved.

    Let's continue with the possible Matomo issue in https://forum.cloudron.io/topic/14639/cloudron-v9-huge-disk-i-o-is-this-normal-safe-needed/

    Roundcube

  • Roundcube creates a new MySQL session on every health check

    Customconfig.php is “empty” and so is php.ini.

    Roundcube

  • Roundcube creates a new MySQL session on every health check

    (screenshot: IMG_0030.jpeg)

    Roundcube

  • Roundcube creates a new MySQL session on every health check

    Third Bug report: Roundcube also creates a new MySQL session on every health check

    The same issue we found with Matomo also affects Roundcube. The Cloudron health checker calls GET / every 10 seconds, which causes Roundcube to create a new unauthenticated session in MySQL each time.

    Decoded session data from the latest entries:

    temp|b:1;
    language|s:5:"en_US";
    task|s:5:"login";
    skin_config|a:7:{...}
    

    This is a pure unauthenticated login page session — no user involved, just the health checker hitting the front page.

    Measured growth rate: exactly 6 new sessions per minute per Roundcube instance (= 1 per 10 seconds = health check interval). With 5 Roundcube instances on this server that is 30 new sessions per minute, 43,200 per day.
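
    If you want to verify this on your own instance, the newest entries can be inspected directly (a sketch, assuming the stock Roundcube session table with sess_id, changed and vars columns, where vars is base64-encoded PHP session data):

    -- newest sessions; unauthenticated health-check sessions show up with task "login"
    SELECT sess_id, changed, LEFT(FROM_BASE64(vars), 80) AS decoded_start
    FROM session
    ORDER BY changed DESC
    LIMIT 5;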

    Suggested fix: Change the health check endpoint from GET / to a static asset that does not trigger PHP session creation, for example:

    • A static file like /favicon.ico or /robots.txt
    • Or Roundcube's own /index.php/api if available

    For reference, WordPress handles this cleanly: GET /wp-includes/version.php returns HTTP 200 with empty output (Wordfence hides the version) without touching the database or creating any session.

    It would be great if Cloudron could define a session-free health check endpoint per app type, similar to how it is done for WordPress.

    Roundcube

  • Matomo creates a new MySQL session on every health check

    Update on the Matomo issue https://forum.cloudron.io/post/121389

    I disabled the OIDC plugin in Matomo and the sessions are still created every 10 seconds, so it's not that plugin. Next, looking at Matomo's Settings - System Check, it says:

    Errors below may be due to a partial or failed upload of Matomo files.
    --> Try to reupload all the Matomo files in BINARY mode. <--
    
    File size mismatch: /app/code/misc/user/index.html (expected length: 172, found: 170)
    

    The file contains:

    /app/code# cat /app/code/misc/user/index.html
    This directory stores the custom logo for this Piwik server. Learn more: <a href="https://matomo.org/faq/new-to-piwik/faq_129/">How do I customise the logo in Piwik?</a>
    

    This means nothing serious.

    Then I installed a Matomo app on the Cloudron demo server and was able to replicate the session table growing every 10 seconds: 8 extra sessions within about a minute:

    mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified)) 
        -> FROM session;
    +----------+------------------------------+------------------------------+
    | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
    +----------+------------------------------+------------------------------+
    |       47 | 2026-03-08 10:58:00          | 2026-03-08 11:05:20          |
    +----------+------------------------------+------------------------------+
    1 row in set (0.00 sec)
    
    mysql> SELECT COUNT(*), MIN(FROM_UNIXTIME(modified)), MAX(FROM_UNIXTIME(modified))  FROM session;
    +----------+------------------------------+------------------------------+
    | COUNT(*) | MIN(FROM_UNIXTIME(modified)) | MAX(FROM_UNIXTIME(modified)) |
    +----------+------------------------------+------------------------------+
    |       55 | 2026-03-08 10:58:00          | 2026-03-08 11:06:40          |
    +----------+------------------------------+------------------------------+
    1 row in set (0.00 sec)
    
    Matomo

  • Relay error: Port 25 outbound is blocked

    This morning in the GUI of all 3 Cloudron Pro servers: Relay error: Port 25 outbound is blocked. IPv4 connect to port25check.cloudron.io failed: connect ECONNREFUSED 165.227.67.76:25

    In the logs:

    Mar 08 09:47:20 box:mail relay (domain.tld): Port 25 outbound is blocked. IPv4 connect to port25check.cloudron.io failed: connect ECONNREFUSED 165.227.67.76:25
    

    A manual check against my own server shows port 25 is open; a manual check of port25check.cloudron.io via https://dnschecker.org/port-scanner.php?query=165.227.67.76 shows TIMED-OUT.

    Support appstore

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    We also found some huge MySQL tables from a WordPress app with a dedicated MainWP, caused by incorrect retention settings. After correcting and deleting them, a one-minute iotop -aoP -d 5 run still shows:

    • Docker MySQL: 70 MB
    • Host MySQL: 33 MB
    • go-carbon: 6.7 MB
    • jbd2: 9.9 MB
    • Total: ~103 MB per minute

    To put this in perspective:

    • 103 MB/min = 6.2 GB/hour
    • 6.2 GB/hour = 148 GB/day
    • 148 GB/day = 4.4 TB/month

    This is on a server with relatively low visitors across 10 sites. The vast majority of this write activity is caused by the issues identified above (Matomo health checker sessions, box.tasks accumulation, and app-level retention misconfigurations) — not by actual user traffic.

    Note: these are cumulative iotop counters, not sustained rates. The actual average write speed shown by Cloudron's dashboard is ~2.5-4 MB/s, which still translates to 216-345 GB/day of unnecessary disk writes on a lightly loaded server.

    Support graphs

  • Tasks table accumulates tasks indefinitely

    Second issue found: box.tasks table accumulates completed tasks indefinitely

    While investigating the disk I/O issue further, I found a second contributor to the high host MySQL write activity.

    The box.tasks table contains tens of thousands of completed tasks (completed=1, pending=0) that are never cleaned up. These go back years:

    Server 1 (running since ~2021): 17,752 completed tasks, oldest from 2021
    Server 2 (running since ~2019): 22,509 completed tasks, oldest from 2019
    Server 3 (running since ~2019): 26,972 completed tasks, oldest from 2019

    Breakdown for server 3 as an example:

    type                    count   oldest
    cleanBackups            9,628   2019-04-19
    backup_xxx              7,054   2019-04-19
    app                     4,765   2019-10-06
    checkCerts              2,239   2021-07-01
    renewCerts              1,611   2019-04-19
    updateDiskUsage         1,107   2022-12-03
    

    This large table causes continuous InnoDB buffer pool activity and redo log writes on the host MySQL, contributing to the baseline disk I/O of ~2-3 MB/s — independently of any app issues.

    Query to check on your own server:

    SELECT type, COUNT(*) as aantal,
    MIN(creationTime) as oudste,
    MAX(creationTime) as nieuwste
    FROM box.tasks
    WHERE completed=1 AND pending=0
    GROUP BY type
    ORDER BY aantal DESC
    LIMIT 10;
    

    Questions:

    • Is there a safe way to manually clean up old completed tasks? (a sketch of what that might look like follows below)
    • Should Cloudron implement automatic cleanup of completed tasks older than X days?
    • Is this a known issue or intentional behavior?
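
    Purely to illustrate the shape of the cleanup asked about above (untested, assuming creationTime is a datetime column as the query above suggests; since box is Cloudron's internal database, please do not run anything destructive without confirmation from the team):

    -- preview: how many completed, non-pending tasks are older than 90 days (arbitrary example threshold)?
    SELECT COUNT(*) FROM box.tasks
    WHERE completed = 1 AND pending = 0
      AND creationTime < NOW() - INTERVAL 90 DAY;

    -- the matching DELETE is left commented out on purpose; do not run without confirmation:
    -- DELETE FROM box.tasks
    -- WHERE completed = 1 AND pending = 0
    --   AND creationTime < NOW() - INTERVAL 90 DAY;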
    Support tasks

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    Maybe because the three installs are 5-6 years old and have had many, many updates/upgrades?

    Can you check how many sessions per hour are being created? Run this query:

    SELECT HOUR(FROM_UNIXTIME(modified)) AS hour, COUNT(*) AS sessions
    FROM `<your_matomo_db>`.session
    WHERE DATE(FROM_UNIXTIME(modified)) = CURDATE() - INTERVAL 1 DAY
    GROUP BY hour ORDER BY hour;
    

    On my instances this shows exactly 360 per hour = 1 per 10 seconds = health check interval. If yours shows much less, the health checker behaves differently on your setup.
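
    A quicker spot check, using the same session table and the unix-timestamp modified column as in the query above, is to count only the most recent hour (it should come out near 360 if the 10-second health check is the driver):

    SELECT COUNT(*) AS sessions_last_hour
    FROM `<your_matomo_db>`.session
    WHERE modified > UNIX_TIMESTAMP(NOW() - INTERVAL 1 HOUR);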

    Support graphs

  • Cloudron v9: huge disk I/O is this normal/safe/needed?

    @joseph I’m pretty sure more apps have suffered from this issue since the introduction of OIDC. EspoCRM and FreeScout, for example, also have a health check against / (where the OIDC login is); I didn't check their sessions.

    Support graphs