Server crashes caused by stopped app's runner container stuck in restart loop
-
Quick update — I just noticed `cloudron-support --troubleshoot` was reporting:

```
[FAIL] Database migrations are pending. Last migration in DB: /20260217120000-mailPasswords-create-table.js
```

This migration has been pending since Feb 17 — which is exactly when the instability started. I missed this earlier. Just applied it:

```
cloudron-support --apply-db-migrations
[OK] Database migrations applied successfully
```

I've also stopped the Mattermost container that was in a restart loop (it was failing to connect to MySQL on boot and never recovering).
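For reference, finding and stopping it was just standard Docker commands (the container ID below is a placeholder; yours will differ):

```bash
# List containers stuck in a restart loop
docker ps --filter "status=restarting"

# Check why it keeps dying (MySQL connection errors on boot, in my case)
docker logs --tail 50 <container-id>

# Stop it so it quits hammering the database
docker stop <container-id>
```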
Will monitor for the next few days and report back. Fingers crossed this was the missing piece.
-
> Quick update — I just noticed `cloudron-support --troubleshoot` was reporting:
> `[FAIL] Database migrations are pending. Last migration in DB: /20260217120000-mailPasswords-create-table.js`
This is a bug in the tool and not a real problem. It's fixed in 9.1.5.
-
@nebulon Yes, here's the full timeline of changes:
- Server was stable on Ubuntu 20.04 + kernel 5.4 for months
- Upgraded to Ubuntu 22.04 + kernel 5.15 (following Cloudron upgrade docs) — instability started
- Upgraded to Ubuntu 24.04 + kernel 6.8 (following Cloudron upgrade docs) — issue persists
- Installed `fail2ban` and `smartmontools` via apt
- No other custom modifications
All upgrades were done following the official Cloudron documentation. The crashes happen on both kernel 5.15 and 6.8, so it doesn't seem kernel-specific.
One thing that may be relevant: Docker is using the `cgroupfs` driver with cgroup v2. The Cloudron systemd unit explicitly sets `--exec-opt native.cgroupdriver=cgroupfs`. Could there be a compatibility issue with Ubuntu 24.04's default cgroup v2?

The server just crashed again twice in one hour. Happy to provide SSH access if that would help debug this. This is urgent as my mail server runs on this machine.
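To make the cgroup question above concrete, this is how I verified the driver and version (standard `docker info` and `stat` invocations, nothing Cloudron-specific):

```bash
# Which cgroup driver dockerd is using, and which cgroup version it sees
docker info --format 'driver={{.CgroupDriver}}, cgroup v{{.CgroupVersion}}'

# "cgroup2fs" here confirms the host is on the unified cgroup v2 hierarchy
stat -fc %T /sys/fs/cgroup
```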
-
Update: I renewed the expired domain and the app (Lychee) is now running properly. No containers in restart loop currently. The earlier crashes today were likely caused by the runner container still being in a stale state from before the domain renewal.
I have a cron job cleaning up zombie runners every 5 minutes, which seems to be working (log shows it removed 5 runners since setup).
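For reference, the cleanup is essentially this (a minimal sketch; the script name, log path, and the assumption that zombies show up as `restarting` containers are specific to my setup):

```bash
#!/bin/bash
# /usr/local/bin/clean-zombie-runners.sh
# Remove containers stuck in a restart loop and log each removal.
for id in $(docker ps --filter "status=restarting" --format '{{.ID}}'); do
    echo "$(date -Is) removing stuck runner $id" >> /var/log/runner-cleanup.log
    docker rm -f "$id"
done
```

It runs from root's crontab every 5 minutes: `*/5 * * * * /usr/local/bin/clean-zombie-runners.sh`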
Will monitor for the next few days and report back. If it stays stable, I'll mark this as resolved.
-
@girish @nebulon Server crashed again last night. But this time the pattern is different — no containers in restart loop, no runner issues. The cron cleanup job is working. All containers were stable (Up 11 hours) before the crash.
The Docker journal shows the DNS resolver dying on its own:
```
23:38 - External DNS timeouts begin (185.12.64.2)
23:57 - Internal Docker DNS fails (172.18.0.1:53 i/o timeout)
23:59 - [resolver] connect failed: dial tcp 172.18.0.1:53: i/o timeout
00:xx - Server becomes unresponsive
```

There's also a container (different ID each time) producing "ignoring event" / "cleaning up dead shim" messages every minute — not sure if related.
This happens roughly at the same time every night (~23:00-00:00 UTC). All previous fixes applied (no restart loops, domain renewed, hardware clean). I'm running out of ideas on my end.
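One thing I'm considering to catch the exact failure moment tonight: a minute-by-minute resolver probe from cron (a sketch; `172.18.0.1` and `185.12.64.2` are the resolvers from the logs above, and the log path is arbitrary):

```bash
#!/bin/bash
# /usr/local/bin/dns-probe.sh: log whether each resolver still answers,
# so the log shows exactly when DNS dies relative to the crash.
TS=$(date -Is)
for ns in 172.18.0.1 185.12.64.2; do
    if dig @"$ns" cloudron.io +time=2 +tries=1 +short > /dev/null 2>&1; then
        echo "$TS $ns OK" >> /var/log/dns-probe.log
    else
        echo "$TS $ns FAIL" >> /var/log/dns-probe.log
    fi
done
```

Scheduled every minute via cron: `* * * * * /usr/local/bin/dns-probe.sh`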
Would it be possible to get SSH-level support to debug this? I can provide access anytime. This is really urgent as it's been impacting my mail service daily for weeks now.
Thank you.
-
@mendoksai yes, write to me at support@cloudron.io. I can investigate.
-
Server was stable for 14 days after I fixed the DNS configuration myself. The original daily crash issue was resolved.
This morning I received Cloudron's security reboot email. Rebooted via dashboard. Server never came back. Ping responds, SSH returns `kex_exchange_identification: Connection reset by peer`. Hard reset via Hetzner Robot didn't help either.

So now I'm locked out of my own server because of an automatic security update that I didn't ask for and had no control over. My mail server is down, again.
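My rough plan once I'm in Hetzner rescue mode (a sketch; `/dev/sda1` is an assumption about the disk layout, adjust to the actual partitioning):

```bash
# From the Hetzner rescue system: mount the server's root filesystem
mount /dev/sda1 /mnt

# Did sshd even start on the failed boot?
tail -n 50 /mnt/var/log/auth.log

# Inspect the failed boot's journal offline
journalctl --directory=/mnt/var/log/journal --list-boots
journalctl --directory=/mnt/var/log/journal -b -1 -u ssh.service
```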
I have to ask: is anyone actually testing these updates before pushing them? Every major issue I've had in the past two months has been triggered by an automatic update or upgrade. The previous instability started after a Cloudron update in February. Now this.
I need:
- Help getting my server back online — I'll likely need to use Hetzner rescue mode
- A way to permanently disable automatic security updates so I can apply them manually at a time that works for me (rough idea sketched after this list)
- Some assurance that updates are being properly tested before being pushed to production servers
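On the second point: if these reboots are driven by Ubuntu's `unattended-upgrades` (an assumption on my part; Cloudron may manage this through its own settings), turning the automatic run off would look like:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "0";
```

But I'd much rather have an officially supported toggle than fight the platform's defaults.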
This is a production server running critical mail services. I can't keep being the QA tester for untested updates.
Are you guys vibe coding?
