Shouldn't we get an alert when a service container fails / is unhealthy?
-
Is it always the same redis instance that is affected? That would help narrow down the issue.
Also, if this happens again, can you restart the redis process within its docker container, maybe via SSH:
docker exec -ti <rediscontainerid> /bin/bash
supervisorctl restart redis-service   # this would be the service process facilitating backups and all
# check if things work now
# otherwise try also
supervisorctl restart redis           # the actual redis process inside the container
-
Is it always the same redis instance that is affected? That would help narrow down the issue.
Also, if this happens again, can you restart the redis process within its docker container, maybe via SSH:
docker exec -ti <rediscontainerid> /bin/bash
supervisorctl restart redis-service   # this would be the service process facilitating backups and all
# check if things work now
# otherwise try also
supervisorctl restart redis           # the actual redis process inside the container
@nebulon I'm not sure how to tell which redis container is the culprit. The Cloudron Services page shows which one, but I can't find a container ID associated with it in the UI. And in the docker ps output everything shows "up 22 hours" for example; nothing looks wrong from that list. Is there a way to know which one to exec into?

Edit: Sorry, I figured it out. Looking at the logs of the affected container shows the container name.
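For anyone else looking for it later: since the affected container's logs show a name like redis-<UUID>, you can also narrow it down straight from the docker CLI, for example:

docker ps --filter "name=redis"                        # list only containers whose name contains "redis"
docker ps --format "{{.ID}}  {{.Names}}  {{.Status}}"  # print container IDs next to names to pick the right one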
-
Is it always the same redis instance that is affected? That would help narrow down the issue.
Also, if this happens again, can you restart the redis process within its docker container, maybe via SSH:
docker exec -ti <rediscontainerid> /bin/bash
supervisorctl restart redis-service   # this would be the service process facilitating backups and all
# check if things work now
# otherwise try also
supervisorctl restart redis           # the actual redis process inside the container
@nebulon - tl;dr = The commands didn't change the behaviour at all; backups were still failing and the Services page still showed it as orange.
--
When I restart the affected service (redis in this case) by exec'ing into the container and running supervisorctl restart redis-service, it doesn't solve the problem. The output still suggests that redis is restarting correctly:

root@redis-<UUID>:/app/code# supervisorctl restart redis-service
redis-service: stopped
redis-service: started
And the container logs show the following:
Apr 03 19:35:53 2025-04-04 02:35:53,254 INFO waiting for redis-service to stop
Apr 03 19:35:53 2025-04-04 02:35:53,256 WARN stopped: redis-service (terminated by SIGTERM)
Apr 03 19:35:53 2025-04-04 02:35:53,258 INFO spawned: 'redis-service' with pid 321
Apr 03 19:35:53 Redis service endpoint listening on http://:::3000
Apr 03 19:35:54 2025-04-04 02:35:54,320 INFO success: redis-service entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
However, the services list still shows yellow and the backups continue to fail.
If I try with the following:
root@redis-<UUID>:/app/code# supervisorctl restart redis
redis: stopped
redis: started
It's the same behaviour; however, I do see a little more in the logs for the container:
Apr 03 19:40:08 2025-04-04 02:40:08,186 INFO waiting for redis to stop
Apr 03 19:40:08 330:signal-handler (1743734408) Received SIGTERM scheduling shutdown...
Apr 03 19:40:08 330:M 04 Apr 2025 02:40:08.208 * User requested shutdown...
Apr 03 19:40:08 330:M 04 Apr 2025 02:40:08.208 * Saving the final RDB snapshot before exiting.
Apr 03 19:40:08 330:M 04 Apr 2025 02:40:08.216 * DB saved on disk
Apr 03 19:40:08 330:M 04 Apr 2025 02:40:08.216 * Removing the pid file.
Apr 03 19:40:08 330:M 04 Apr 2025 02:40:08.216 # Redis is now ready to exit, bye bye...
Apr 03 19:40:08 2025-04-04 02:40:08,218 INFO stopped: redis (exit status 0)
Apr 03 19:40:08 2025-04-04 02:40:08,220 INFO spawned: 'redis' with pid 337
Apr 03 19:40:08 337:C 04 Apr 2025 02:40:08.225 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Apr 03 19:40:08 337:C 04 Apr 2025 02:40:08.225 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Apr 03 19:40:08 337:C 04 Apr 2025 02:40:08.225 * Redis version=7.4.2, bits=64, commit=00000000, modified=0, pid=337, just started
Apr 03 19:40:08 337:C 04 Apr 2025 02:40:08.225 * Configuration loaded
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.225 * monotonic clock: POSIX clock_gettime
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.226 # Failed to write PID file: Permission denied
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.226 * Running mode=standalone, port=6379.
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.226 * Server initialized
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.226 * Loading RDB produced by version 7.4.2
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.226 * RDB age 0 seconds
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.226 * RDB memory usage when created 2.70 Mb
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.230 * Done loading RDB, keys loaded: 2177, keys expired: 0.
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.230 * DB loaded from disk: 0.004 seconds
Apr 03 19:40:08 337:M 04 Apr 2025 02:40:08.230 * Ready to accept connections tcp
Apr 03 19:40:09 2025-04-04 02:40:09,231 INFO success: redis entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
What I was able to do, temporarily, to get this to work is restart the Docker service entirely, but of course this method causes a bit of downtime. This should be stable until the next time I need to restart my server for security updates, I suspect, but this really needs to get figured out. If nobody else is seeing this, perhaps it's something unique to my server? I did upgrade to Ubuntu 24.04 a few weeks ago, but at the same time I also upgraded to the latest Cloudron version, so it's hard to know whether either of those (if at all) is related to the issue, but it's something to keep in mind in case you think they are related.
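One smaller hammer that might be worth trying next time, instead of bouncing the whole Docker daemon, is restarting just the affected container so only that one service briefly goes down; this is just an idea using the container ID from docker ps, not something I've verified fixes the underlying state:

docker restart <rediscontainerid>   # stop and start only the affected redis container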
What else can we try?
-
Not sure if this is related or not, but when I SSH'd into my server I noticed there was mention of a zombie process. I ran this and found it was node... I know Cloudron runs on node... is this possibly related at all?
ubuntu@my:~$ ps aux | grep Z
USER       PID      %CPU %MEM  VSZ  RSS TTY  STAT START  TIME COMMAND
ubuntu     1999826   0.0  0.0    0    0 ?    Zs   03:25   0:00 [node] <defunct>
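In case it helps with the zombie angle: a zombie can't be killed directly, it lingers until its parent process reaps it, so the interesting bit is which process is the parent. Using the PID from the output above:

ps -o ppid= -p 1999826                       # print the parent PID of the zombie
ps -o pid,cmd -p $(ps -o ppid= -p 1999826)   # show what that parent process actually is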
-
Alright, thanks for the debugging. I think the next step to test when you hit this situation is to see if you can curl the healthcheck of the container manually, since all services within it seem fine, so maybe it is a docker network issue then.
You can find the IP using docker ps and docker inspect <rediscontainer>, then:

curl -v http://<rediscontainerIP>:3000/healthcheck
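For example, something along these lines should print just the container IP (the exact output can vary with the network setup):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <rediscontainer>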
-
Alright, thanks for the debugging. I think the next step to test when you hit this situation is to see if you can curl the healthcheck of the container manually, since all services within it seem fine, so maybe it is a docker network issue then.
You can find the IP using docker ps and docker inspect <rediscontainer>, then:

curl -v http://<rediscontainerIP>:3000/healthcheck
@nebulon Okay, I see the following when I run that healthcheck GET against the instance that currently has the yellow light and is blocking the backup process again. It does indeed seem like a possible connection issue:
root@my:~# curl -v http://172.18.0.4:3000/healthcheck
*   Trying 172.18.0.4:3000...
* connect to 172.18.0.4 port 3000 from 172.18.0.1 port 44402 failed: Connection timed out
* Failed to connect to 172.18.0.4 port 3000 after 135822 ms: Couldn't connect to server
* Closing connection
curl: (28) Failed to connect to 172.18.0.4 port 3000 after 135822 ms: Couldn't connect to server
When I use a working redis instance showing the green light, I get the expected response:
root@my:~# curl -v http://172.18.0.7:3000/healthcheck
*   Trying 172.18.0.7:3000...
* Connected to 172.18.0.7 (172.18.0.7) port 3000
> GET /healthcheck HTTP/1.1
> Host: 172.18.0.7:3000
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 2
< ETag: W/"2-vyGp6PvFo4RvsFtPoIWeCReyIC8"
< Date: Thu, 17 Apr 2025 18:43:37 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
<
* Connection #0 to host 172.18.0.7 left intact
Any recommendations? I'd never had these issues this frequently until the past month or two, and I'm not sure why this is happening. It's unclear whether this is something in my environment (it seems like maybe others are not seeing this issue) or a Cloudron thing. I'm certainly open to it being an issue in my environment only; I'm just not sure what to try next.
-
@nebulon / @girish , is there a possibility that this is related to https://forum.cloudron.io/post/103522 at all? I wouldn't think so since this seems to be intermittent and only affecting one container at a time (usually redis it seems)... but wanted to make sure you were aware just in case this is the root cause.
-
This seems like a port binding or Docker network issue somehow, because the healthcheck GET requests succeed for working containers but time out for the non-working ones. However, when inside the non-working container, the healthcheck (using localhost) works fine, so redis is running and everything seems healthy; the issue appears to be that the container can't be reached from the host for some reason.
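For reference, the inside-the-container check I'm referring to is roughly this (curl happens to be available inside the redis container):

docker exec -ti <rediscontainerid> /bin/bash
curl -v http://localhost:3000/healthcheck   # responds from inside the container; a 401 here still means the service is up and reachable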
-
When I exec into the working container and run the healthcheck against the broken container, it succeeds. So this definitely confirms to me that the issue is somewhere between the host network and the container, whereas container-to-container works perfectly fine.
From the working container to the non-working container:
root@redis-2e36b3e8-22d9-477c-bd56-d2c284909932:/app/code# curl -v http://172.18.0.2:3000/healthcheck
*   Trying 172.18.0.2:3000...
* Connected to 172.18.0.2 (172.18.0.2) port 3000
> GET /healthcheck HTTP/1.1
> Host: 172.18.0.2:3000
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 2
< ETag: W/"2-vyGp6PvFo4RvsFtPoIWeCReyIC8"
< Date: Fri, 18 Apr 2025 05:24:05 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
<
* Connection #0 to host 172.18.0.2 left intact
From the non-working container to the working container:
root@redis-00895422-a1ff-4196-8bb8-cb4ff8d6eeaa:/app/code# curl -v http://172.18.0.3:3000/healthcheck
*   Trying 172.18.0.3:3000...
* Connected to 172.18.0.3 (172.18.0.3) port 3000
> GET /healthcheck HTTP/1.1
> Host: 172.18.0.3:3000
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 2
< ETag: W/"2-vyGp6PvFo4RvsFtPoIWeCReyIC8"
< Date: Fri, 18 Apr 2025 05:25:40 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
<
* Connection #0 to host 172.18.0.3 left intact
But from the host, the requests to both the working and the non-working redis container show the results above at https://forum.cloudron.io/post/105899
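Something that might also be worth checking here (just a guess on my part) is whether any host firewall rule or IP set references the broken container's IP, since host-to-container traffic has to pass through the host's own netfilter rules:

iptables-save | grep 172.18.0.2   # any rule referencing the broken container's IP?
ipset list | grep 172.18.0.2      # if rules match against an ipset, search the set members too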
-
OMG I think I figured it out and I'm really kicking myself. I threw a bunch of logs from Cloudron into ChatGPT for review, and it highlighted a line in the logs:
2025-04-13T01:02:17.357Z box:shell network /usr/bin/sudo -S /home/yellowtent/box/src/scripts/setblocklist.sh
I believe this is from the IP blocklist workflow I implemented based on https://docs.cloudron.io/guides/community/blocklist-updates/. I took a look at the concatenated IP addresses and, sure enough, the list includes 172.18.0.2 and 172.18.0.4, which are the two redis containers currently failing. I think this explains why this is happening: these IPs somehow made it onto the blocklist from the FireHOL lists. I'm going to determine which list specifically added them so I can remove it from my IP blocklists, I guess. As soon as I cleared the IP blocklist in Cloudron, everything worked immediately.
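For anyone needing to track down the offending source list: assuming the downloaded FireHOL files (usually .netset/.ipset files) are still on disk where the update script fetched them, something like this should point at the culprit (adjust file names and paths to your own setup):

grep -l '172\.18\.0\.' *.netset   # which downloaded list files mention the Docker bridge IPs
# note: the entries may also appear as a covering CIDR range (e.g. 172.16.0.0/12 from a bogon-style list), so check for that too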
Sorry for the blast of comments, but figured it may help others who run into similar issues.
I'm just glad we sorted it out.
-
Thanks for sharing your journey towards a luckily simple fix in the end! Good to keep the blocklist in mind when hitting such rather random issues.
@nebulon I'm wondering... since Cloudron depends entirely on Docker networking to function, is there maybe room to improve the IP blocklist handling so that it ignores any entries in the current Docker network ranges, such as 172.18.x.x addresses? It feels to me like there would never be a use case to block those, and while we certainly need to use reliable IP lists (lesson learned, haha), I also wonder if this feature could be improved in the future to ignore any private IPs, or at least any Docker IPs.
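In the meantime, a rough interim safeguard for anyone using the blocklist-updates guide might be to strip private (RFC1918) ranges out of the concatenated list before it gets applied; blocklist.txt below is just a placeholder for whatever file your script produces:

# drop private ranges (10.x, 172.16-31.x, 192.168.x), which also covers the Docker bridge IPs
grep -Ev '^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)' blocklist.txt > blocklist.filtered.txt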