Cloudron Forum

mendoksai (@mendoksai)

Posts: 11 · Topics: 2 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts

  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

    @girish @nebulon Server crashed again last night. But this time the pattern is different — no containers in restart loop, no runner issues. The cron cleanup job is working. All containers were stable (Up 11 hours) before the crash.

    The Docker journal shows the DNS resolver dying on its own:

    23:38 - External DNS timeouts begin (185.12.64.2)
    23:57 - Internal Docker DNS fails (172.18.0.1:53 i/o timeout)
    23:59 - [resolver] connect failed: dial tcp 172.18.0.1:53: i/o timeout
    00:xx - Server becomes unresponsive
    

    There's also a container (different ID each time) producing "ignoring event" / "cleaning up dead shim" messages every minute — not sure if related.

    This happens roughly at the same time every night (~23:00-00:00 UTC). All previous fixes applied (no restart loops, domain renewed, hardware clean). I'm running out of ideas on my end.
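In case it helps pin down the exact moment the embedded resolver dies, a timestamped probe run from cron around the crash window could narrow this down. A minimal sketch, assuming the bridge gateway / embedded resolver is at 172.18.0.1 as in the journal above (the hostname queried is arbitrary):

```shell
# DNS probe sketch: prints one timestamped ok/FAIL line per run.
# Assumption: the Docker bridge resolver listens on 172.18.0.1.
probe_docker_dns() {
  local gw="${1:-172.18.0.1}"
  if timeout 3 dig +short "@${gw}" cloudron.io >/dev/null 2>&1; then
    echo "$(date -u '+%FT%TZ') dns-ok ${gw}"
  else
    echo "$(date -u '+%FT%TZ') dns-FAIL ${gw}"
  fi
}
probe_docker_dns "$@"
```

Run every minute from cron around 23:00 UTC and append to a log file, so the first `dns-FAIL` can be correlated with the Docker journal.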

    Would it be possible to get SSH-level support to debug this? I can provide access anytime. This is really urgent as it's been impacting my mail service daily for weeks now.

    Thank you.

Tags: Support, domains, cron

  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

    Update: I renewed the expired domain and the app (Lychee) is now running properly. No containers in restart loop currently. The earlier crashes today were likely caused by the runner container still being in a stale state from before the domain renewal.

    I have a cron job cleaning up zombie runners every 5 minutes, which seems to be working (log shows it removed 5 runners since setup).
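The cleanup job is roughly along these lines (a simplified sketch, not the exact script; the assumption is that runner containers are named `<appid>-runner` and get stuck in the "created" state):

```shell
# Sketch of a zombie-runner cleanup: force-remove any container whose
# name ends in -runner and that is stuck in the "created" state.
cleanup_zombie_runners() {
  docker ps -a --filter "status=created" --format '{{.Names}}' \
    | grep -- '-runner$' \
    | while read -r name; do
        docker rm -f "$name"
      done
}
cleanup_zombie_runners 2>/dev/null || true
```

Scheduled via cron, e.g. `*/5 * * * * /usr/local/bin/cleanup-zombie-runners.sh`.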

    Will monitor for the next few days and report back. If it stays stable, I'll mark this as resolved.

    Thank you @girish @nebulon @joseph for your help!


  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

    @nebulon Yes, here's the full timeline of changes:

    1. Server was stable on Ubuntu 20.04 + kernel 5.4 for months
    2. Upgraded to Ubuntu 22.04 + kernel 5.15 (following Cloudron upgrade docs) — instability started
    3. Upgraded to Ubuntu 24.04 + kernel 6.8 (following Cloudron upgrade docs) — issue persists
    4. Installed fail2ban and smartmontools via apt
    5. No other custom modifications

    All upgrades were done following the official Cloudron documentation. The crashes happen on both kernel 5.15 and 6.8, so it doesn't seem kernel-specific.

    One thing that may be relevant: Docker is using cgroupfs driver with cgroup v2. The Cloudron systemd unit explicitly sets --exec-opt native.cgroupdriver=cgroupfs. Could there be a compatibility issue with Ubuntu 24.04's default cgroup v2?
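For reference, both settings can be read directly from the daemon, along with the filesystem actually mounted at the cgroup root (a small sketch; `docker info` exposes both fields, and `cgroup2fs` indicates a pure v2 host):

```shell
# Print the cgroup driver/version Docker is actually using, plus the
# filesystem type at /sys/fs/cgroup ("cgroup2fs" on a cgroup v2 host).
show_cgroup_setup() {
  docker info --format 'driver={{.CgroupDriver}} version={{.CgroupVersion}}'
  stat -fc %T /sys/fs/cgroup
}
show_cgroup_setup 2>/dev/null || true
```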

    The server just crashed again twice in one hour. Happy to provide SSH access if that would help debug this. This is urgent as my mail server runs on this machine.


  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

Yes, I followed your upgrade docs, as you suggested, since support for the old Ubuntu version was discontinued. This problem started after that upgrade. And it just happened again, right now. Twice today.


  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

Happened again. It keeps happening every few days. 😕


  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

    Quick update — I just noticed cloudron-support --troubleshoot was reporting:

    [FAIL] Database migrations are pending. Last migration in DB: /20260217120000-mailPasswords-create-table.js
    

    This migration has been pending since Feb 17 — which is exactly when the instability started. I missed this earlier. Just applied it:

    cloudron-support --apply-db-migrations
    [OK] Database migrations applied successfully
    

    I've also stopped the Mattermost container that was in a restart loop (it was failing to connect to MySQL on boot and never recovering).

    Will monitor for the next few days and report back. Fingers crossed this was the missing piece.


  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

    Thanks @girish for looking into this.

    You're right — this isn't just about the stopped app. After collecting detailed logs, I found multiple containers in incorrect states on every boot:

    <appid-1>-runner         Created          (stopped app - Lychee)
    <appid-2>                Restarting (1)   (Mattermost)
    <appid-3>                Restarting (1)   (Kimai)
    

    The Mattermost container is the main culprit. On boot, it tries to connect to MySQL before it's ready, fails, and enters an infinite restart loop:

    error: Failed to ping DB  error="dial tcp 172.18.30.1:3306: connect: connection refused"
    Error: failed to initialize platform: cannot create store: error setting up connections
    

    This restart loop seems to degrade Docker networking over time. The Docker journal shows a clear cascade:

    1. Boot → Mattermost enters restart loop (MySQL not ready yet)
    2. Docker resolver starts failing — first external DNS timeouts, then internal (172.18.0.1:53)
    3. Error: listen EADDRNOTAVAIL: address not available 172.18.0.1:3003 on every boot
    4. Eventually host MySQL becomes unreachable → full server lockup
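To catch step 1 early after a boot, something like this can list containers stuck in a restart loop (a sketch using the plain Docker CLI):

```shell
# List containers currently in "restarting" state, with their status.
restarting_containers() {
  docker ps --filter "status=restarting" --format '{{.Names}}\t{{.Status}}'
}
restarting_containers 2>/dev/null || true
```

An empty result right after boot would mean the MySQL race didn't trigger that time.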

In the journalctl -u docker output, there are no explicit error-level entries from the Docker daemon itself — only info-level "ignoring event" / "cleaning up dead shim" messages repeating every 5 minutes for the same container, plus error-level DNS timeout entries from the resolver.
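Entries like these can be isolated with roughly the following (a sketch; the time window is an example):

```shell
# Pull only error-and-above entries from the Docker unit within a window.
docker_journal_errors() {
  journalctl -u docker --since "$1" --until "$2" -p err --no-pager
}
docker_journal_errors "23:00" "00:30" 2>/dev/null || true

# Count the recurring shim-cleanup messages for a rough frequency check.
journalctl -u docker --no-pager 2>/dev/null | grep -c 'cleaning up dead shim' || true
```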

    I've stopped both Mattermost and the Lychee runner for now. Will monitor.

    Environment details:

    • Cloudron 9.1.3
    • Ubuntu 24.04.4 LTS, Kernel 6.8.0-106-generic
    • Dedicated Server: 8 CPUs, 32GB RAM
    • ~35 containers on the cloudron network
    • Docker: Cgroup Driver: cgroupfs, Cgroup Version: 2
    • Hardware check by Hetzner: all clean (CPU, disks, NIC)
    • Issue started ~3 weeks ago, persisted through kernel 5.15 → 6.8 upgrade

    Happy to provide SSH access or full logs if needed.


  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

    Update: Confirmed that Cloudron recreates the runner container on every boot, even though the app is stopped via the dashboard.

    After each reboot:

    • Main container: Exited (0) ✓
    • Runner container: Created ← this is the problem
    • Redis addon: Up ← also still running

    The runner in "Created" state triggers the scheduler loop → "cannot join network namespace" errors every 15-60 min → eventually cascading into Docker DNS failure → MySQL unreachable → full server lockup.

    I've been manually removing the runner with docker rm -f <appid>-runner after each reboot, but this is not sustainable.

    Is there a way to prevent the scheduler from recreating the runner for a stopped app? Or should I uninstall the app entirely to stop this cycle? The app's domain has expired but I'd like to keep the data for when I renew it.

    Thanks for any guidance.


  • Server crashes caused by stopped app's runner container stuck in restart loop
    M mendoksai

    A domain expired for one of my apps. I stopped the app via the Cloudron dashboard. However, the runner container remained in "Created" state and kept trying to join the network namespace of the stopped app container, causing cascading failures:

    1. Runner repeatedly fails with: Cannot restart container <appid>-runner: cannot join network namespace of container: Container <id> is restarting, wait until the container is running
    2. This eventually causes Docker DNS resolution failures (internal Docker DNS timeouts)
    3. Host MySQL becomes unreachable (ECONNREFUSED 127.0.0.1:3306)
    4. SSH stops accepting connections
    5. Server becomes completely unresponsive, requiring hard reboot

    This has been happening daily for the past week.

    What I did

    • Stopped the app via Cloudron dashboard → runner remained in "Created" state
    • docker rm -f <appid>-runner removed the stuck runner
    • Main container shows "Exited (0)" and redis addon is still running — both untouched

    Questions

    1. Will Cloudron's scheduler recreate the runner container for a stopped app? If so, how do I prevent this?
    2. Is there a proper way to fully stop an app including its runner when the domain has expired?
    3. Should I also stop the redis addon container for this app?

    Relevant box.log pattern (repeating every 15-60 min):

    box:scheduler could not run task runner: (HTTP code 500) server error - Cannot restart container <appid>-runner: cannot join network namespace of container
    

    Also seeing on every boot:

    Error: listen EADDRNOTAVAIL: address not available 172.18.0.1:3003
    
    cloudron-support --troubleshoot
    Vendor: System manufacturer Product: System Product Name
    Linux: 6.8.0-106-generic
    Ubuntu: noble 24.04
    Execution environment: none
    none
    Processor: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz
    BIOS Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz       To Be Filled By O.E.M. CPU @ 3.4GHz x 8
    RAM: 32796076KB
    
    Disk: /dev/sda3       909G
    [OK]    node version is correct
    [OK]    IPv6 is enabled and public IPv6 address is working
    [OK]    docker is running
    [OK]    docker version is correct
    [OK]    MySQL is running
    [OK]    netplan is good
    [OK]    DNS is resolving via systemd-resolved
    [OK]    unbound is running
    [OK]    nginx is running
    [OK]    dashboard cert is valid
    [OK]    dashboard is reachable via loopback
    [FAIL]  Database migrations are pending. Last migration in DB: /20260217120000-mailPasswords-create-table.js. Last migration file: /package.json.
            Please run 'cloudron-support --apply-db-migrations' to apply the migrations.
    [OK]    Service 'mysql' is running and healthy
    [OK]    Service 'postgresql' is running and healthy
    [OK]    Service 'mongodb' is running and healthy
    [OK]    Service 'mail' is running and healthy
    [OK]    Service 'graphite' is running and healthy
    [OK]    Service 'sftp' is running and healthy
    [OK]    box v9.1.3 is running
    [OK]    Dashboard is reachable via domain name
    [OK]    Domain  is valid and has not expired
    

  • Daily GeoIP Database Download Limit Reached
    M mendoksai

Yes, you are right. After deeper investigation I found it's another Docker host that uses the license key. Sorry, my bad: I got the IP wrong at first, which is why I assumed it was Cloudron. My apologies.

Tags: IP2Location

  • Daily GeoIP Database Download Limit Reached
    M mendoksai

    Hello,

    Recently, I've been receiving this email by MaxMind:

    Dear MaxMind Customer,
    
    Your account has reached the daily limit for database downloads. Any additional download attempts today from your account will fail. Learn more about database download limits.
    
    To avoid download errors we recommend that you limit your downloads of each database to no more than once per day per server. You can sign in to your account and check your GeoIP download history for information on IP addresses you are downloading databases from.
    
    To ensure that you are downloading your databases efficiently, you may consult the update schedule for GeoIP databases. If you have any questions, please contact us at support@maxmind.com.
    
    Sincerely,
    The Team at MaxMind
    

When I check the download history, I can see it really is downloading the same file every few minutes, e.g. GeoLite2-City_20240301.tar.gz. I wonder if this is happening because a failed download keeps being retried?

    Can this process be improved?

    Thank you.
